In 2023, a preview of the coming form of politics appeared in a video. In it, former Democratic presidential candidate and Secretary of State Hillary Clinton says: “You know, people might be surprised to hear me say this, but I actually really like Ron DeSantis. Yeah, I know. I’d say he’s exactly the kind of guy this country needs.”
It seems strange that Clinton would warmly endorse a Republican presidential candidate. And it is. Further investigation revealed that the video had been produced using generative artificial intelligence (AI).
Clinton’s video is just one small example of how generative AI could profoundly reshape politics in the near future. Experts have highlighted the potential consequences for elections. These include the possibility that false information will be created at little or no cost and that highly personalized advertisements will be produced to manipulate voters. The results could include so-called “October surprises” – that is, news that breaks just before the US elections in November, when misinformation circulates and there is not enough time to refute it – and the generation of misleading information about election administration, such as the location of polling stations.
Concerns about the impact of generative AI on elections have become urgent as we enter a year in which billions of people across the planet will vote. In 2024, elections are expected to be held in Taiwan, India, Russia, South Africa, Mexico, Iran, Pakistan, Indonesia, the European Union, the United States and the United Kingdom. Many of these elections will not only determine the future of nation states; they will also shape how we address global challenges such as geopolitical tensions and the climate crisis. It is likely that each of these elections will be influenced by new generative AI technologies, in the same way that the elections of the 2010s were shaped by social media.
While politicians spent millions harnessing the power of social media to shape elections in the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero. This is particularly concerning because over the past decade we have witnessed the role that so-called “bullshit” can play in politics. In a short book on the subject, the late Princeton philosopher Harry Frankfurt specifically defined bullshit as speech intended to persuade without regard to the truth. Throughout the 2010s, this practice became increasingly common among political leaders. With the rise of generative AI and technologies like ChatGPT, we may see the rise of a phenomenon my colleagues and I refer to as “botshit.”
In a recent article, Tim Hannigan, Ian McCarthy and I set out to understand what exactly botshit is and how it works. It is well known that generative AI technologies such as ChatGPT can produce so-called “hallucinations”. This happens because generative AI answers questions by making statistically informed guesses. These guesses are often correct, but sometimes they are completely wrong. The result can be artificially generated “hallucinations” that bear little relation to reality, such as explanations or images that seem superficially plausible but are not actually correct answers to the question asked.
Humans can use the fake material created by generative AI in uncritical and thoughtless ways, and that could make it harder for people to know what is true and false in the world. In some cases, these risks may be relatively small – for example, if generative AI is used for a task that is not very important (such as coming up with ideas for a birthday party speech), or if the veracity of the result can easily be verified using another source (such as when the Battle of Waterloo took place). The real problems arise when the results of generative AI have significant consequences and cannot be easily verified.
If AI-produced hallucinations are used to answer important but hard-to-verify questions, such as the state of the economy or the war in Ukraine, there is a real risk that they will create an environment in which some people start to make important voting decisions based on an entirely illusory universe of information. Voters could end up living in generated online realities built on a toxic mix of AI hallucinations and political opportunism.
Although AI technologies present dangers, some measures could be taken to limit them. Tech companies could continue to use watermarking, which allows users to easily identify AI-generated content. They could also ensure that AIs are trained on authoritative sources of information. Journalists could take extra precautions to avoid covering AI-generated stories during an election cycle. Political parties could develop policies to prevent the use of misleading AI-generated information. More importantly, voters could exercise their critical judgment by reality-checking important information about which they are unsure.
The rise of generative AI has already begun to fundamentally change many professions and industries. Politics will likely be at the forefront of this change. The Brookings Institution points out that there are many positive ways to use generative AI in politics. But at the moment, its negative uses are the most obvious and are most likely to affect us imminently. It is essential that we work to ensure that generative AI is used for beneficial purposes and does not simply lead to more bullshit.
André Spicer is Professor of Organizational Behavior at Bayes Business School, City, University of London. He is the author of the book Business Bullshit.