
Political chatbots promising too much

Politicians trying to persuade voters sometimes promise a bit too much. A new study shows that political chatbots can be equally untrustworthy. Conversations with AI models can influence people’s political opinions, with information-packed arguments proving the most convincing – but the most persuasive arguments made by AI also tend to be the least accurate, the study has found.

The new paper, The levers of political persuasion with conversational AI, is published in the journal Science. It is written by academics from the UK AI Security Institute, the University of Oxford, the London School of Economics, Stanford University, the Massachusetts Institute of Technology and Cornell University.

Researchers ran three experiments with almost 77,000 people in the UK, using 19 different AI models to discuss over 700 political issues.


They found that ‘information-dense’ arguments made by the chatbots – packed with facts and evidence – were the most persuasive in changing people’s views. However, the more information-heavy the arguments became, the less accurate they tended to be.

Another key factor in making the chatbot arguments more persuasive was post-training – the process of refining models after initial development to fulfil certain goals or preferences.

Tests showed that chatbots that had been post-trained specifically for persuasion were up to 51% more convincing than those that had not been trained in this way. Similarly, chatbots given certain prompts were found to be 27% more persuasive.

The researchers found that post-training and prompting made AI arguments far more persuasive than feeding the models tailored personal information about individual users.


They also found that increasing the scale of the AI models did not have a large impact on their persuasiveness.

The researchers highlight the potentially dangerous consequences of post-training techniques having such an impact on persuasion: 

“Powerful actors with privileged access to such post-training techniques could thus enjoy a substantial advantage from using persuasive AI to shape public opinion – further concentrating these actors’ power.”

“…Even actors with limited computational resources could use these techniques to potentially train and deploy highly persuasive AI systems, bypassing developer safeguards that may constrain the largest proprietary models (now or in the future). This approach could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries.”

“Accurately understanding AI-driven persuasion is important so that policymakers and the public can be clear-eyed about both its potential and its limitations for influencing public opinion and behaviour. We see this work as contributing to that goal”, says Dr Ben Tappin of the London School of Economics.


Moonshot News is an independent European news website for all IT, Media and Advertising professionals, powered by women and with a focus on driving the narrative for diversity, inclusion and gender equality in the industry.
