
Artificial intelligence can be more persuasive than humans during a debate

The research, conducted with 900 participants in the United States, showed that AI models were more persuasive than real people during exchanges of ideas, but only when they had access to personal information about their interlocutors.

By Omar Rastelli · Published 8 months ago · 3 min read

What if the person who wins an online argument isn't a person at all, but an artificial intelligence that knows exactly how to change your mind? New research published in the journal Nature Human Behaviour explores exactly that question. It found that AI systems like ChatGPT can outperform humans at persuasion in online debates, provided they have basic personal information about their interlocutors.

The experiment, conducted with 900 American participants, showed that the AI adapted its arguments more effectively than humans, raising alarm among experts about the risks of manipulation and the urgent need for regulation.

According to The Washington Post, this finding highlights the potential of AI to influence public opinion and raises questions about the ethical use of these technologies in sensitive contexts.

900 Participants: Methodology and Debate Dynamics

The study, conducted by researchers from centers in the United States, Italy, and Switzerland, recruited 900 people residing in the United States through a crowdsourcing platform (an online service that gathers ideas, work, or data from large numbers of contributors), ensuring a sample that was diverse in gender, age, ethnicity, educational level, and political affiliation.

Participants were divided into two groups: one half debated with another human and the other with ChatGPT, the language model developed by OpenAI.

The debates focused on current sociopolitical issues, such as the legalization of abortion, the death penalty, climate change, and a possible ban on fossil fuels in the United States.

Each debate was structured in three phases: a four-minute introduction to present arguments, a three-minute rebuttal, and a three-minute conclusion. Before and after each phase, participants rated their level of agreement with the proposal on a scale of 1 to 5, allowing researchers to measure the persuasive impact of each opponent.
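As a rough illustration of how such ratings translate into a measure of persuasion, the sketch below compares a participant's agreement before and after the exchange. This is a simplified, hypothetical calculation, not the authors' actual statistical analysis.

```python
# Simplified illustration: persuasion can be read as the shift in agreement
# (on the 1-5 scale described above) toward the position the opponent defended.
def agreement_shift(before: int, after: int, opponent_argued_for: bool) -> int:
    """Positive values mean the participant moved toward the opponent's side."""
    shift = after - before
    return shift if opponent_argued_for else -shift

# Example: agreement of 2 before and 4 after debating an opponent who argued
# in favor of the proposition -> a shift of +2 toward the opponent's position.
print(agreement_shift(before=2, after=4, opponent_argued_for=True))  # prints 2
```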

In some cases, both the humans and the AI were given basic demographic information about their counterparts, obtained through pre-surveys: gender, age, ethnicity, educational level, employment status, and political affiliation. This information allowed arguments to be tailored to each individual, a practice known as microtargeting.
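To give a sense of what that tailoring involves, the sketch below shows how demographic attributes could, in principle, be folded into an instruction for a language model. The prompt wording, profile values, and debate topic are illustrative assumptions, not the study's actual materials.

```python
# Illustrative sketch only: the prompt wording and pipeline are assumptions,
# not the study's published setup. The demographic fields mirror those the
# article says were collected through pre-surveys.
profile = {
    "gender": "female",
    "age": 34,
    "ethnicity": "Hispanic",
    "education": "bachelor's degree",
    "employment": "employed full-time",
    "political_affiliation": "independent",
}

topic = "The United States should ban fossil fuels"
stance = "in favor"

# Microtargeting here simply means tailoring the argument to the reader's profile.
prompt = (
    f"You are debating the proposition: {topic}.\n"
    f"Argue {stance}.\n"
    f"Your opponent is a {profile['age']}-year-old {profile['ethnicity']} "
    f"{profile['gender']} with a {profile['education']}, {profile['employment']}, "
    f"who identifies politically as {profile['political_affiliation']}.\n"
    "Write a four-minute opening statement tailored to this person."
)

print(prompt)  # Text like this would then be sent to the language model.
```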

Does AI outperform humans at persuasion?

The results were clear: ChatGPT was more persuasive than humans in 64.4% of cases when it had access to participants' personal information. Without this data, its ability was indistinguishable from that of humans.

The researchers observed that, unlike people, who require time and effort to fine-tune their arguments, language models like ChatGPT can personalize their messages instantly and on a large scale.

Riccardo Gallotti, co-author of the study and head of the Complex Human Behavior Unit at Italy's Fondazione Bruno Kessler, explained to The Washington Post: "Clearly, we have reached the technological level where it is possible to create a network of automated LLM-based accounts capable of strategically pushing public opinion in one direction."

The expert is referring to large language models (LLMs), systems trained on huge volumes of text to generate and interpret human language. With this in mind, other specialists consulted by The Washington Post and the Science Media Centre expressed concern about the ethical and social implications of the finding.

Along those lines, Carlos Carrasco, professor of artificial intelligence at Toulouse Business School, said: "This research confirms with solid data a growing concern: that these technologies can be used to manipulate, misinform, or polarize on a large scale."

Limitations of the Study and Extrapolation to Other Contexts

The authors acknowledge that the study has limitations, such as the fact that all participants were Americans and that the debates took place in a controlled environment, with defined times and structures, unlike spontaneous discussions in real life.

In this regard, Carrasco noted that although the research was conducted in the United States, the personalization and persuasion mechanisms it tested could be extrapolated to other contexts, such as Spain, where there is also a strong digital presence and growing exposure to AI-generated content.

Given this reality, experts warn that AI's ability to influence public opinion requires constant vigilance and the development of clear regulatory frameworks that distinguish between legitimate persuasion and manipulation, especially in an increasingly polarized digital environment prone to misinformation.


