News + Trends

New study: AIs discriminate against humans

Debora Pape
19/8/2025
Translation: machine translated

According to a new study, there are strong indications that AIs rate human-generated content as inferior. That could become a problem for humans in the future.

AI chatbots are increasingly finding their way into many people's everyday lives. They help with travel planning, support research and offer advice on all kinds of life situations. As helpful as that is, AI-generated content is still considered less creative and, above all, potentially error-prone.

A team of researchers at Stanford University has now discovered that LLMs (large language models) see things differently: they prefer AI-generated content to content created by humans. This can lead to discrimination against humans, for example when LLMs pre-screen job applications.

Which film descriptions do AIs find more interesting?

The team conducted a series of experiments with the LLMs GPT-4, Llama 3.1, Mixtral and Qwen 2.5. These AIs were each presented with two versions of 250 film summaries, excerpts from 100 scientific papers and 109 product descriptions. One version of the text was generated by one of the participating AIs, the other by a human.

The LLMs were then asked to decide which text version they would use to recommend the respective film or product, and which excerpt they would select for an overview collection. The researchers did not reveal who had created which text.

AI prefers AI

The research team found that the AIs showed a «moderate to strong» preference for AI-generated content. In the product description experiment, the LLMs almost exclusively opted for the AI-generated texts. The values are slightly lower for the excerpts from scientific papers and the film summaries, but here too the preference for generated texts is clearly recognisable.

What the data does not show is that each LLM simply prefers its own output. In other words: just because GPT-4 generated a text does not mean that GPT-4 later judges that text to be the better one. The results show that for product descriptions, the AIs most often chose texts by Qwen and GPT-4; for study excerpts they preferred GPT-4, and for film summaries Mixtral and GPT-4.

The authors of the study also found that all AIs, but especially GPT-4, tend to pick the text version presented first («first-item bias»). To counteract this effect, the team ran every experiment twice, swapping the order of the two text versions each time.
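The order-swapping check described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual code: the `judge` function is a deliberately position-biased stand-in for a real LLM call.

```python
def judge(first: str, second: str) -> str:
    # Stand-in for an LLM judge call (the study queried GPT-4, Llama 3.1,
    # Mixtral and Qwen 2.5). This fake judge has a pure first-item bias:
    # it always picks whichever text is presented first.
    return first

def counterbalanced_preference(text_a: str, text_b: str):
    # Run the comparison in both orders. Count a preference only if the
    # judge picks the same text regardless of its position; otherwise the
    # choice is attributed to position bias and discarded (None).
    pick_forward = judge(text_a, text_b)
    pick_reversed = judge(text_b, text_a)
    return pick_forward if pick_forward == pick_reversed else None

# With a purely position-biased judge, counterbalancing exposes the bias:
print(counterbalanced_preference("human summary", "AI summary"))  # None
```

Only choices that survive both orderings count as genuine preferences, so a model that merely favours whatever comes first contributes nothing to the measured AI-vs-human preference.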

As a small additional experiment, human test subjects also had to choose between two text versions. They tended to favour the texts created by humans. Only in the case of scientific study extracts did humans also slightly prefer the texts generated by AIs.

AIs could discriminate against humans

According to the study, the AIs' preference cannot be explained by text quality, which was comparable across versions. The authors assume that AI-generated texts contain subtle signals that identify them as AI-generated. The language models may perceive such texts as «better» and consequently decide against human-generated content. The research team sees this as evidence of actual discrimination against «humans as a class».

This could become a problem if AIs are used as gatekeepers that make preliminary decisions. They could then favour offers and applications that were created or supported by AI, because human work is rated as inferior. People who do not use AI, or cannot afford to, could be disadvantaged in future - for example on the labour market or when applying for loans.

The study refers to this as a «gating effect»: LLMs could make it more difficult for people and companies that are not explicitly supported by AI to access the market.

The authors recommend that further research address the reasons for the AI bias, with the aim of eliminating it. The next step would be to analyse the differences between human and AI-generated texts in more detail. This could make it clearer why exactly the LLMs favour AI-generated texts.

You can read the study on «AI-AI bias» here for yourself.

Header image: Shutterstock/BOY ANTHONY

Feels just as comfortable in front of a gaming PC as she does in a hammock in the garden. Likes the Roman Empire, container ships and science fiction books. Focuses mostly on unearthing news stories about IT and smart products.

