A comprehensive international study finds that AI assistants frequently misrepresent news content, with 45% of responses containing significant errors. Gemini performed worst, with issues in 76% of its answers, raising concerns about public trust in AI-generated news summaries.
Widespread Inaccuracy in AI News Responses
Major AI assistants are routinely providing misleading and inaccurate news information across languages and territories, according to the largest study of its kind to date. The investigation, coordinated by the European Broadcasting Union and led by the BBC, evaluated over 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria including accuracy, sourcing, and contextual understanding.