Voters in New Hampshire received phone calls over the weekend featuring a presumably AI-generated voice. The voice claimed to be that of US President Joe Biden and urged recipients not to vote in the primary.
The voice reportedly told recipients that “your vote makes a difference in November, not this Tuesday.” The call was spoofed to appear as though it had been sent by a member of the Democratic committee.
The Attorney General’s Office sees this as an “unlawful attempt to disrupt the New Hampshire Presidential Primary and suppress New Hampshire voters.” The office has launched an investigation into the allegations.
“Although the voice in the robocall sounds like the voice of President Biden, this message appears to be artificially generated based on initial indications,” the office stated.
AI finds its way into political propaganda
What researchers warned about years ago seems to be coming true. There are now numerous documented cases of AI being used for political propaganda.
Deepfakes pose many risks in politics. They can be used to generate fake videos and images, mass-produce manipulative texts, or, as in the case above, create credible fake voices. Given the increasingly audiovisual, rapid, and superficial nature of communication on social media, this development can be considered a societal risk.
Politicians warned of an onslaught of deepfakes during the last US election. Perhaps they were just one election too early: since then, deepfake technologies have become more accessible, cheaper, and better.
In California, using deepfakes to portray politicians in a reputation-damaging way during an election campaign has been a crime since 2019. Social media platforms have been urged to take action and have introduced guidelines against deepfakes and manipulated media. Companies and researchers are working on tools to reliably detect them.
But the question remains whether the battle against ever-improving deepfakes can be won by technical means. As detection technology improves, so does generation technology. Moreover, deepfakes may eventually become so close to the original that they are indistinguishable, even to the most sophisticated detectors.
As AI becomes more widespread, providers of AI technology are finding it increasingly difficult to control its use. OpenAI, for example, which has outlined detailed measures against the misuse of its technology in the US election, recently blocked a chatbot that an outside vendor had created for a Democratic candidate using its API. At the same time, OpenAI’s own GPT store is flooded with chatbots that imitate Donald Trump and mimic his political style.