
    AI Assistants Fail at News Accuracy, Major Study Reveals

A new international study has found that popular AI chatbots, including ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity AI, often struggle to deliver accurate and reliable news.

Journalists from 18 countries reviewed over 3,000 AI-generated responses, assessing accuracy, sourcing, context, and the ability to distinguish fact from opinion. The findings showed that 53% of answers contained major issues, 29% had accuracy problems, and 20% contained factual errors.

    According to Deutsche Welle (DW), over half of the AI-generated answers failed to meet journalistic standards, highlighting serious concerns about news reliability.

    The Reuters Institute’s Digital News Report 2025 found that 7% of online news consumers use AI chatbots for news, rising to 15% among users under 25, showing how deeply these tools are influencing information consumption.

    Jean Philip De Tender, deputy director general of the European Broadcasting Union (EBU), which coordinated the research, said, “These failings are not isolated—they are systemic and multilingual. This threatens public trust, and when people lose trust, democracy suffers.”

    The large-scale study followed a similar BBC-led review from February 2025, which also found that over half of AI responses contained major issues and one-fifth included factual errors when quoting BBC content.

    Among the four AI systems, Google’s Gemini performed the worst, with 72% of its responses showing sourcing problems, followed by Microsoft’s Copilot. Despite minor improvements since February, the BBC reported that all systems still had significant issues.

    OpenAI responded by emphasizing its support for publishers and content integrity, stating that ChatGPT helps millions of users discover quality news through clear links and attributions.

    Call for Action

    In response, the EBU and media organizations worldwide are urging governments and regulators to take action. They are pressing the EU and national authorities to enforce digital information integrity laws and to ensure independent monitoring of AI assistants.

    The EBU and its partners also launched the “Facts In: Facts Out” campaign, calling on AI companies to protect the integrity of news. The campaign stresses that if accurate facts go into AI systems, accurate facts must come out—without distortion, misattribution, or misinformation.
