Thanks to advances in machine learning in recent years, computer scientists can now build technology that benefits humankind in remarkable ways. We can detect illnesses, rely on AI assistants, and propel ageing industries into the futuristic world of automation.
Unfortunately, the same technology can be used for detrimental purposes – and it’s worryingly easy to do if you have the know-how.
Using big data, experts can program AI to mimic the real world. This has already been done with art and music, and some neural networks have even been taught to write whole chapters of books, but more recent developments in text generation have demonstrated ways in which artificial intelligence could be seriously damaging.
Recently, two researchers – Joseph Bullock and Miguel Luengo-Oroz – found they could teach an AI language model to fake speeches in the style of political leaders. By feeding it vast amounts of data from Wikipedia and training it on 40 years' worth of speeches given at the United Nations General Assembly, they were able to produce convincing imitations.
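The researchers used a deep language model, but the basic idea – learn word patterns from a corpus of speeches, then continue a prompt in the same style – can be illustrated with a toy word-level Markov chain. The snippet below is a simplified sketch for illustration only, not the model from the study; the miniature "corpus" is invented:

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Build a word-level Markov model: map each n-gram to the words that follow it."""
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, prompt, length=20, seed=0):
    """Continue a prompt by repeatedly sampling a next word from the trained model."""
    rng = random.Random(seed)
    out = prompt.split()
    order = len(next(iter(model)))
    for _ in range(length):
        candidates = model.get(tuple(out[-order:]))
        if not candidates:  # the prompt has wandered off the training distribution
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Tiny invented stand-in for the UN speech corpus
corpus = ("the assembly calls on all member states to address climate change "
          "and the assembly calls on all parties to respect human rights")
model = train(corpus)
print(generate(model, "the assembly"))
```

A real system replaces the lookup table with a neural network, which is what lets it generalise to prompts and topics it has never seen verbatim – including inflammatory ones.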
What they were able to get the machine to say, however, was more concerning than impressive. When Bullock and Luengo-Oroz gave the model the inflammatory phrase, "Immigrants are to blame," it produced speech alleging that immigrants are to blame for the spread of HIV/AIDS. In the wrong hands, this capability could easily be turned to malicious ends.
What's more, the researchers found that the machine correctly reproduced the cadence of a speech 90% of the time when prompted on general topics such as climate change. Results generated from the inflammatory prompts were convincing in cadence only 60% of the time, simply because there was less of this material in the training data for the model to learn from.
The whole training process took just 13 hours and cost only $7.80.
“Monitoring and responding to automated hate speech – which can be disseminated at a large scale, and often indistinguishable from human speech – is becoming increasingly challenging and will require new types of countermeasures and strategies at both the technical and regulatory level,” the researchers said in the study.
Already, the UN is taking steps to ensure that technology like this is not used to compromise the integrity or legitimacy of the international body.
"We, at the UN, are working hard to make sure that AI is used ethically to ensure that we leave no one behind," said Luengo-Oroz.