According to ExtremeTech, a new study analyzing seven experiments with more than 10,000 total participants reached a startling conclusion: people learn more slowly and retain less information when using chatbots than when using traditional web search. The research, published in PNAS Nexus, found that chatbot learners had significantly lower recall and produced explanations that independent evaluators consistently rated as less informative and helpful. Even when the chatbots provided accurate information, learning outcomes were worse across multiple metrics. This challenges the assumption that AI assistants are superior educational tools, and it suggests the chatbot interface itself may disrupt something basic about how we process information.
Why search works better
Here’s the thing: we’ve spent years being told that web searching creates shallow understanding. But it turns out the very process we criticized might be what makes learning stick. When you search for information manually, you’re actively engaged in evaluating sources, sifting through results, and organizing your thoughts. You’re building mental scaffolding. With chatbots? You’re basically just receiving a curated answer package. It’s the difference between cooking a meal from scratch and getting takeout – one process teaches you skills, the other just feeds you.
The too-easy problem
Chatbots remove the friction that actually helps learning. Think about it: if you need 10 pieces of information, traditional search might make you read 50-100 snippets to find what you need. That extra reading? It’s not wasted time – it’s building context and connections in your brain. When a chatbot gives you exactly those 10 pieces in a neat list, you miss all the surrounding information that helps cement knowledge. The researchers found this led to what they called “shallower learning” – people could repeat facts but couldn’t explain concepts as well.
Could simpler be smarter?
Counterintuitively, making chatbot responses less comprehensive might be the solution. Instead of delivering complete answer packages, maybe they should provide shorter responses with suggested follow-up questions that keep users engaged in the discovery process. ChatGPT’s “Study Mode” attempts this, but it’s opt-in rather than the default. The real issue is that we’ve optimized chatbots for convenience when we should be optimizing them for cognitive engagement. Even in industrial settings where workers need quick access to technical information, the learning process matters for long-term competency.
What this means for AI education
So where does this leave us? The BBC coverage of this research highlights how even truth-telling chatbots can “stultify the mind” by removing the knowledge-surfacing process. This isn’t just about academic learning – it affects workplace training, technical documentation, and any scenario where people need to understand rather than just receive information. The study suggests we need to fundamentally rethink how we design AI assistants if we want them to be true educational partners rather than just fancy answer machines. Because right now? They’re making us worse learners, and that’s a problem nobody saw coming.
