Deepfake technology is a growing threat, particularly in the hands of cybercriminals. With the rise of artificial intelligence (AI), the risks posed by deepfakes are becoming more significant. Researchers predict that as much as 90% of online content may be synthetically generated by 2026. Proposed solutions include policies that promote responsible development and use of the technology, as well as new detection technologies. These are the conclusions of a blog post by two specialists at the World Economic Forum.
“In recent years, we have seen a rise in deepfakes. Between 2019 and 2020, the amount of deepfake content online increased by 900%. Forecasts suggest that this worrisome trend will continue in the years to come – with some researchers predicting that ‘as much as 90% of online content may be synthetically generated by 2026’,” write research specialists Gretchen Bueermann (Future Networks and Technology) and Natasa Perucica (Cybersecurity Industry Solutions) in the WEF blog post.
“Often misused to deceive and conduct social engineering attacks, deepfakes erode trust in digital technology and increasingly pose a threat to businesses.”
They report that last year, 66% of cybersecurity professionals experienced deepfake attacks within their respective organizations.
“An example of deepfake crime includes the creation of fake audio messages from CEOs or other high-ranking company executives, using voice-altering software to impersonate them. These manipulated audio messages often contain urgent requests for the recipient to transfer money or disclose sensitive information.”
“Deepfakes also have the potential to undermine election outcomes, social stability and even national security, particularly in the context of disinformation campaigns. In some instances, deepfakes have been used to manipulate public opinion or spread fake news leading to distrust and confusion among the public.”
The specialists say that the development of artificial intelligence (AI) has significantly increased the risk posed by deepfakes.
“AI algorithms, including generative models, can now create media that are difficult to distinguish from real images, videos or audio recordings. Moreover, these algorithms can be acquired at a low cost and trained on easily accessible datasets, making it easier for cybercriminals to create convincing deepfakes for phishing attacks and scam content.”
Research shows that the banking sector is particularly concerned about deepfake attacks, with 92% of cyber practitioners worried about their fraudulent misuse. Services such as personal banking and payments are of particular concern, and such worries are not baseless: in 2021, a bank manager was tricked into transferring $35 million to a fraudulent account, they write.
“The high cost of deepfakes is also felt across other industries. In the past year, 26% of small and 38% of large companies experienced deepfake fraud, resulting in losses of up to $480,000.”
“To address these emerging threats, we must continue to develop and improve deepfake detection technologies. This can involve the use of more sophisticated algorithms, as well as the development of new methods that can identify deepfakes based on their context, metadata or other factors.”
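To make the metadata idea concrete, here is a minimal, purely illustrative sketch of what a metadata-based check might look like. The field names and the list of generator tags below are assumptions invented for this example, not a real detection standard; production systems combine many such signals with learned models.

```python
# Toy heuristic: flag an image as suspicious when its metadata lacks the
# fields a real camera normally writes, or names a known generative tool.
# All field names and tool names here are illustrative assumptions.

SUSPECT_SOFTWARE = {"stable diffusion", "midjourney", "dall-e"}

def metadata_flags(meta: dict) -> list[str]:
    """Return human-readable warnings for an image's metadata dict."""
    flags = []
    # Real cameras normally record make and model; their absence is a weak hint.
    if not meta.get("camera_make") and not meta.get("camera_model"):
        flags.append("no camera make/model recorded")
    # Some generators write their own name into the software tag.
    software = meta.get("software", "").lower()
    if any(tool in software for tool in SUSPECT_SOFTWARE):
        flags.append(f"generated-by tag present: {meta['software']}")
    # A missing original capture timestamp is another weak hint.
    if not meta.get("datetime_original"):
        flags.append("no original capture timestamp")
    return flags

# Example: metadata as a plain dict (in practice it would be parsed
# from EXIF or another container format).
print(metadata_flags({"software": "Stable Diffusion v1.5"}))
```

None of these signals is conclusive on its own (metadata is trivial to strip or forge), which is why the authors stress combining context, metadata and other factors.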
“Another potential solution is to promote media literacy and critical thinking. By educating the public on the dangers of deepfakes and how to spot them, we can reduce the impact of these malicious campaigns. Incorporating a digital trust framework into everyday use can help reassure individuals that digital technologies and services – and the organizations providing them – will protect all stakeholders’ interests and uphold societal expectations and values.”
“Finally, we must consider the ethical implications of AI and deepfake technology. Governments and regulatory bodies can play a significant role in shaping policies that regulate deepfake technology and promote transparent, accountable and responsible technology development and use. By doing so, we can ensure that AI does not cause harm,” the two researchers write.