What’s happened? After examining nearly 11 million posts across seven social-media platforms, researchers at Cornell University have found a consistent pattern: people are more likely to click and engage with links to lower-quality news than to higher-quality news, even when the same user posts both. That helps explain why some of the news surfacing in your feed may be unreliable despite racking up likes, especially as even AI systems struggle with news accuracy.

- The study covered platforms spanning a range of political leanings: Bluesky, Mastodon, LinkedIn, Twitter/X, Truth Social, Gab, and GETTR.
- On every platform, links from lower-credibility sites received 7% more engagement than links from higher-credibility outlets.
- This trend appeared on both left-leaning and right-leaning platforms, and notably, sensational headlines and emotional framing of the news seem to drive the clicks.
- The study controlled for both the poster and the audience: when the same account shared both kinds of links to the same followers, the lower-quality news still pulled more engagement.
This is important because: If people constantly reward poor journalism or more dramatic content, platforms have no incentive to boost reliable info, and misinformation gets a free algorithmic ride.
- Higher-quality journalism risks losing reach and influence when clicks chase chaos.
- Engagement-based feeds could be amplifying bad content by design.
- This isn’t just a ‘bad algorithm’ problem; it’s a human behavior problem as well.
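The second bullet can be made concrete with a minimal sketch of engagement-only ranking. All numbers and post titles below are hypothetical, chosen only to mirror the study's ~7% engagement gap; real feed-ranking systems are far more complex.

```python
# Hypothetical illustration: a feed that ranks purely by engagement.
# If low-credibility posts earn ~7% more engagement (per the study),
# credibility never enters the sort key, so they rank first by design.

posts = [
    {"title": "careful investigative report", "credibility": "high", "engagement": 100},
    {"title": "sensational viral headline", "credibility": "low", "engagement": 107},
]

# Engagement-only ranking: note that "credibility" plays no role here.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)

for post in feed:
    print(post["credibility"], post["title"])
```

The point of the sketch: nothing in the ranking rule is "broken"; it faithfully optimizes for clicks, and the low-credibility item wins precisely because users click it more.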
Why does it matter? This challenges the idea that misinformation spreads only because of tech. Sometimes, people simply choose the louder link.
- The findings weaken the narrative that misinformation only spreads on platforms with certain political leanings.
- Users reward outrage, not accuracy. As a result, good reporting often loses to viral drama.
- Platforms need to rethink their recommendation systems, not just moderation; some are already testing tools that give users more control over what they see.
Ok, what’s next? Expect more debate on whether social platforms should prioritize credible sources, not just whatever drives attention.
- Platforms are already experimenting with credibility signals, including AI-assisted fact-checking.
- Users may see prompts or labels nudging them toward reliable sources in the future.