Business decisions are often said to be motivated by either fear or greed. When it comes to the emergence and seemingly unstoppable proliferation of AI deepfakes, fear appears to be the leading emotion.
Earlier this year, The Daily Telegraph shared with its readers the disturbing story of a finance worker in Hong Kong who was bamboozled into transferring £20m to crafty scammers by a deepfake video call featuring AI versions of his senior colleagues, including his UK-based CFO.
Sadly, incidents like this are no longer the exception but, increasingly, the rule, and all of us will have to contend with them during our working lives with rapidly escalating frequency.
It was the Wall Street Journal that broke the story of what is widely believed to be the first instance of fraud facilitated by AI, back in 2019, in which the imposters used AI software to mimic the voice of a German-based CEO, resulting in the fraudulent transfer of $243,000. Just last month, the Sunday Times explained how it has become perfectly possible to steal £250 in just 15 minutes because of the convulsive pace of change in AI voice cloning technology.
Banks, utilities and airlines all now quail at the thought that their customers’ voices can routinely and compellingly be imitated. None has yet been able to provide concrete assurances that every such attempted deception will be successfully repulsed. Insurance against this kind of commercial risk is still at an embryonic stage and is far from universally available. IT security companies will doubtless make bold claims about the robustness of the tools they are developing to mitigate these risks. But, at least for now, the best form of defence is probably good, old-fashioned human vigilance and common sense. Having said that, it would take a fairly courageous junior employee to risk their blossoming career by demanding that their boss prove they are human and not just an AI imitation of themselves.
The sense of fear about AI’s pernicious potential extends well beyond the curtain walls of central business districts. With over 2 billion people across 50 countries expected to be heading to the polls this year in a range of national elections, the risk of AI disrupting and undermining our democracy is clear and present. In February 2024, the Home Secretary warned that deepfake videos, generated through AI, could provide the “perfect storm” for malign state actors to subvert our democratic processes. The Mayor of London, facing voters today, has similarly warned that “in a close election … these sorts of deepfake videos and audios can be the difference”. Sadiq Khan himself fell victim to an AI hoax amid the Armistice Day protests last year, with ‘his’ inflammatory remarks being widely shared at the time.
Even as the subversive potential of AI deepfakes grows alarmingly more apparent with each successfully executed swindle, clear solutions to this problem remain worryingly remote. Bill Gates has suggested that we will simply “get better” at identifying them, through novel technologies such as those being developed by Intel and DARPA, supported by more stringent legislation. Such a view mirrors those expressed at international AI summits, demonstrating a shared desire among public authorities to remain one step ahead of this truly disruptive, and rapidly developing, technology. So for now, it seems, we have to trust in our ability to create more powerful AI models to ensure the ‘good guys’ remain at the forefront of this new technology and to maintain our resolve to win the war on deepfakes. Only time will tell whether electorates and legislators will be able to force the AI genie back into its bottle or whether Arnie’s Terminator might just get us all first.
https://www.thetimes.co.uk/article/james-cleverly-deepfakes-threat-next-general-election-bwmjcdfpm
https://www.telegraph.co.uk/business/2024/02/05/deepfake-video-call-tricked-finance-worker-out-of-20m/
https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
https://www.ukfinance.org.uk/system/files/2023-11/The%20impact%20of%20AI%20in%20financial%20services.pdf
https://www.reuters.com/legal/legalindustry/real-insurance-coverage-increasing-ai-deepfake-risks-2024-04-11/
https://www.cnbc.com/2023/07/12/bill-gates-explains-why-we-shouldnt-be-afraid-of-ai.html