By John Lloyd
As we enter an election year in countries of varying geopolitical sensitivities, covering more than a third of the world’s population, especially in Asia (with not only Bangladesh but also Bhutan already out of the blocks and an interesting one coming up this month in Taiwan), the commentariat is all aflutter about the risks to democracy posed by AI, as if the technology itself were the threat. As ever, of course, these oracles are less Pythia and more Cassandra (if you think that reference is simply for classicists, try this), since nobody this century has garnered much attention by being cheerily optimistic.
No doubt the battle will be joined not only by the enemies of the various states whose rule is being contested but also by the parties within those states who are scrambling for power and who now find at their disposal tools which are cheaper and more effective than ever before. You can even ask generative AI to create campaigns for you. Maybe British politicians will realise that with AI you no longer need to afford the side of a bus to create disinformation.
As primary schools are vacated to create polling stations, ballot boxes are dusted down and new rules are concocted to deny citizens the right to vote, politicians are asking their technologically able interns to use AI in all sorts of nefarious ways: targeted disinformation, sophisticated feedback loops to hone attack messages, and large-scale social media sweeps crunching data to escalate whatever culture wars are flavour of the month in your jurisdiction.
It is certainly true that the rise of AI brings with it the potential for ever more sophisticated, individually targeted misinformation and disinformation campaigns; it may also be worth thinking, however, about how these new technologies can be harnessed for the benefit of the electorate.
What can be done about all this? There are certainly plenty of willing players who want to counter the offensive AI offensive by fighting fire with fire. Watermarking represents a typical example of the cops trying (and failing) to keep up with the robbers. Other tools are being developed to detect the naughty neural newsbots and to help the hapless steer a way through the morass of political propaganda, competing slogans and pamphlets and mind-numbing broadcasts, fake or otherwise.
One of the perks of the current generation of AI developments is their accessibility, which can be seen as empowering. Among the first things you or I could do, for example, is to ask a large language model to consume the large volumes of confusing and obfuscatory language generated by politicians so that we do not have to. Some bright spark must be working on a mechanism to test the veracity of claims made in election literature, for example, or to compare past promises with current claims, or to run the rule over the endless quantities of bumf produced by politicians, comparing and contrasting not only rival claims but also, within any given party’s output, its own consistency.
Over at the glamour end of AI, what about those deep fakes? The technology has progressed very rapidly since the wholly implausible days of the scammer posing as a French minister in a latex mask that Tom Cruise would have rejected even for the first Mission: Impossible film. Nowadays it is increasingly likely that we shall see really polished performances from artificial politicians, maybe so polished that this is what gives them away. Come to think of it, how can one tell the difference between an artificial politician and a real one?
While everyone is panicking or mongering doom, our AI eye was caught by some interesting goings-on in Pakistan at the end of last year, courtesy of the enterprising PTI party, whose leader, the superlative cricketer Imran Khan, languishes in prison, having fallen foul of the military. Mr Khan and the PTI took it upon themselves to create a ‘real deep fake’ (a deep real fake? a deep real? I am not sure what the correct term may be): taking Mr Khan's written words from jail, his legal team combined samples of his speech with the text to produce a facsimile of the man himself speaking. This might not be a great leap forward technically, but it does point to an important principle, which is that the technology is neutral.
AI can help us to see, then, as much as cloud our vision. There is always a battle of resources; however, just as the costs of entry to the political arena have come down, so an AI-activated citizenry may have some weapons at its disposal too. I am encouraged by the idea that an imprisoned political leader retains a democratic voice. If the American justice system somehow manages to lock up Donald Trump, I might even be persuaded that he should also be allowed this particular freedom.