
News + Trends

Guardian research: GPT-5.2 uses Grokipedia as a source

Kim Muntinga
26/1/2026
Translation: machine translated

An investigation by the Guardian reveals that GPT-5.2 used Grokipedia as a source in several cases. The findings bring a central problem of modern AI into focus: the increasingly hard-to-draw line between verified information and synthetically generated knowledge.

That a large language model such as GPT-5.2 would take content unfiltered from an AI-generated encyclopaedia was long considered a theoretical risk. Now it is at least partially a reality. The Guardian has revealed that the model repeatedly draws on Elon Musk's Grokipedia, a source that has been highly controversial since its launch due to political bias, lack of transparency and factual errors.

The revelation raises key questions about data quality, the neutrality of AI systems and the effectiveness of their safety and control mechanisms.

What the Guardian found out

In its tests, the Guardian found that GPT-5.2 accessed content from Musk's Grokipedia nine times, across more than a dozen queries that varied widely in topic.

This happened particularly frequently for less prominent topics, such as details on state-affiliated organisations in Iran - including the salaries of the Basij militia or the ownership structure of the Mostazafan Foundation. Biographical details about the British historian Sir Richard Evans, who testified as an expert witness in the defamation trial against Holocaust denier David Irving, were also taken from Grokipedia, although some of these details are demonstrably false.

In some cases, the AI prioritised content from Grokipedia over established encyclopaedias, even when those provided contradictory or more precise information. This runs counter to OpenAI's long-standing claim that its answers are based on verified, trustworthy sources, which makes the problem all the more serious.

According to the Guardian's research, GPT-5.2 is not the only model that uses Grokipedia. The paper also found evidence of Anthropic's language model Claude drawing on Grokipedia as a source, for example for queries on specialised topics such as petroleum production or niche subjects such as Scottish ales.

Why Grokipedia is problematic as a source

Elon Musk presented Grokipedia in October 2025 as an AI-generated answer to Wikipedia, with the declared aim of counteracting the classic online encyclopaedia's supposed «left-wing tendencies». Critics, however, see Grokipedia more as an antithesis with clearly recognisable political bias, including a tendency towards right-wing narratives.


The crucial problem lies deeper: Grokipedia is generated almost exclusively by Musk's AI model Grok, without open editing by a community. All content is therefore created under the same algorithmic conditions, without independent control, without collaborative quality assurance, without transparent editorial corrections.

When a large language model such as GPT-5.2 cites content from another AI platform, a feedback effect arises: «garbage in, garbage out». Errors or misrepresentations can reinforce one another, gain credibility and creep almost unnoticed into the responses of other AI systems.

The overriding danger: LLM grooming

Beyond this technical mechanism, however, the Guardian points to an even greater risk: the phenomenon of «LLM grooming». The term refers to the targeted injection of AI-generated content into the public information landscape with the intention of influencing the knowledge base of language models in the long term.

In contrast to the unintentional feedback effect, there is a strategic intention at work here: AI-generated material is systematically placed on the web so that it later surfaces in search results or encyclopaedias and ends up being treated as a legitimate source by models such as GPT-5.2. As a result, the information ecosystem gradually shifts away from human-verified facts towards synthetic content that is generated by one AI and picked up by others. It is precisely this mechanism that the Guardian sees at work in the Grokipedia case.

How OpenAI reacts

OpenAI emphasises that GPT-5.2 is designed to draw on a wide range of publicly available sources, with safety filters intended to prevent links or content with a high potential for harm from appearing. The Guardian's findings, however, show that these filters do not work consistently when AI-generated sources such as Grokipedia are involved.

For OpenAI, the revelation is a critical moment: it raises the question of whether modern language models can reliably distinguish between information that originates from humans and information that comes exclusively from AI-generated feedback loops.

Header image: gguy / Shutterstock


My interests are varied, I just like to enjoy life. Always on the lookout for news about darts, gaming, films and series.



