AI systems learn from user behaviour. If you click on a post with a radical political view, Facebook will show you more posts with similar topics.
The problem is that many of our actions on our devices are system 1 behaviours - instinctive, emotional responses that reflect our largely obsolete evolutionary instincts. AI systems learn to reinforce this system 1 behaviour, which often prevents us from changing our beliefs.
People don’t intend to create echo chambers for themselves. Many people want to be more balanced in their views, but current AI systems push us to follow our predispositions rather than challenge ourselves.
System2 AI helps users become who they want to be by identifying their system 2 behaviours - slower, more intentional actions - and weighting these actions more heavily when deciding what items to recommend.
When your friend tells you they want to eat healthier, that’s a system 2 behaviour - careful thought went into that decision. When your friend binges on pizza the next day, that’s a system 1 action - impulsive.
You’d be foolish to weigh these two actions equally and conclude that your friend no longer wants to be healthy - but this is exactly what current AI systems do.
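The idea can be sketched in a few lines of code. This is not System2 AI's actual algorithm - the signal names and weights below are illustrative assumptions - but it shows the core move: score topics by weighted evidence, where deliberate system 2 signals count for much more than impulsive system 1 signals.

```python
# Hypothetical weights: deliberate (system 2) actions count far more
# than impulsive (system 1) ones. The specific signals and values are
# assumptions for illustration, not a real system's configuration.
SIGNAL_WEIGHTS = {
    "click": 1.0,           # system 1: quick, impulsive
    "autoplay_watch": 0.5,  # system 1: passive consumption
    "search": 3.0,          # system 2: intentional seeking
    "save_for_later": 4.0,  # system 2: planned future use
    "stated_goal": 8.0,     # system 2: explicit preference
}

def score_topics(interactions):
    """Aggregate weighted evidence per topic.

    interactions: list of (topic, signal) pairs.
    Returns (topic, score) pairs ranked by score, highest first.
    """
    scores = {}
    for topic, signal in interactions:
        scores[topic] = scores.get(topic, 0.0) + SIGNAL_WEIGHTS.get(signal, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The friend from the example: one deliberate goal, one impulsive binge.
history = [
    ("healthy_eating", "stated_goal"),  # system 2: "I want to eat healthier"
    ("pizza", "click"),                 # system 1: next-day binge
]
ranked = score_topics(history)
# healthy_eating scores 8.0 vs pizza's 1.0, so the recommender keeps
# supporting the stated goal instead of reinforcing the impulse.
```

A flat-weighted recommender would need eight pizza clicks to outweigh one stated goal here; an equal-weight one overrides the goal after a single click - which is the failure mode described above.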