Last Updated on January 10, 2023 by Raoul Patel
Facebook, Google, AWS, and Microsoft Join the Fight Against Deepfakes
“Deepfakes,” digitally faked video and audio created with sophisticated artificial intelligence (AI) tools by fraudsters and attackers, are on the rise. But AI is also being deployed in the battle against them: Facebook, Google, AWS, and Microsoft are launching a “Deepfake Detection Challenge.”
The number of deepfakes emerging in the digital landscape is still relatively small, but they can lead to high-value fraud, hefty ransom demands, and much heartache for those involved.
Eileen Donahoe, a member of the Transatlantic Commission on Election Integrity, told CNBC last year that deepfakes could “potentially” be “the next generation of disinformation.”
Californian lawmakers are now so worried about the potential impact of political deepfakes that they passed a law in October banning the distribution of “materially deceptive” audio or visual media in the two months preceding an election.
AI can learn to detect deepfakes
The problem with deepfakes is that they are often good enough to fool a human audience, and conventional software can struggle to flag them. AI, however, is able to learn to detect false audio and video content and, by its nature, can adapt to new techniques used by fraudsters and attackers. Machine learning models need data to begin the learning process and to produce models through which anomalies can be detected.
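To make that idea concrete, here is a minimal, hypothetical sketch of how a deepfake detector could be trained as a supervised classifier on labelled examples. The dataset, feature vectors, and model choice below are placeholders for illustration only, not the approach used by any of the companies or researchers mentioned in this article.

```python
# Minimal sketch: training a binary classifier to separate "real" from "fake"
# clips using labelled examples. In practice the feature vectors would be
# extracted from video frames (e.g. by a convolutional network); here they are
# random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder dataset: 500 clips, 128 features each, labelled 0 = real, 1 = fake.
X = rng.normal(size=(500, 128))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The model learns a decision boundary from labelled data; as new deepfake
# techniques appear, it can be retrained on fresh examples.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

This is why a large, purpose-built dataset matters: the classifier is only as good as the labelled examples it learns from.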
With the goal of creating training data, Facebook, Google, Amazon Web Services, and Microsoft are collaborating to launch their Deepfake Detection Challenge, as reported by Wired. A purpose-built dataset of deepfakes featuring paid actors will be released so that researchers around the world can begin training their models.
It is in the interest of the technology and social media giants to join the fight, as they are the ones who will need to either enforce, or provide the data for the enforcement of, laws like the one recently passed in California.
Numerous companies are working on other platforms and tools to detect deepfakes, and researchers around the globe are focusing on the problem. Siwei Lyu of the University at Albany and his colleague Yuezun Li are working on technology that detects deepfakes by measuring the warping of faces or counting how often the people in a video blink. Both are telltale signs of deepfakes, but fraudsters will eventually adapt their methods. Lyu says:
“The competition between forgery making and detection is an ongoing cat-and-mouse game, each side will learn from the other and improves.”
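Lyu and Li’s published detectors are more sophisticated, but the blink-counting idea can be illustrated with a simple, hypothetical heuristic: track the “eye aspect ratio” of a face across frames and count how often it dips, since early deepfakes tended to show unnaturally few blinks. The landmark coordinates, thresholds, and trace below are placeholders, not their actual method.

```python
# Hypothetical blink-rate screening sketch using the "eye aspect ratio" (EAR):
# the ratio of vertical to horizontal eye-landmark distances drops sharply
# when an eye closes.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame eye-aspect-ratio values."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

# EAR for an open eye, using six placeholder landmark coordinates.
open_eye = np.array([[0, 2], [2, 3], [4, 3], [6, 2], [4, 1], [2, 1]], dtype=float)
print(f"open-eye EAR: {eye_aspect_ratio(open_eye):.2f}")  # ~0.33

# Placeholder EAR trace for a 10-second clip at 30 fps: eyes mostly open (~0.3)
# with two brief dips simulating blinks. A real pipeline would compute EAR from
# facial landmarks detected in every frame.
trace = np.full(300, 0.3)
trace[50:54] = 0.1
trace[200:204] = 0.1

print(f"blinks counted: {count_blinks(trace)}")  # an unusually low count can flag a possible fake
```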
Other researchers suggest that raising awareness of the problem can only help the fight against deepfakes. The more individuals realise that what they are watching or hearing could have been faked using AI tools that replicate voices and alter visuals, the less readily they may trust the content they come across. Aware individuals may be more inclined to fact-check before absorbing, sharing, or acting upon a deepfake that is trying to fool them.
Deepfake detection systems may eventually need to be incorporated into social media platforms, and globally shared databases of malicious, fraudulent, and faked content may need to be developed so that technology companies can compare and remove problem files.
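As a rough illustration of the shared-database idea, the sketch below checks an upload against a set of fingerprints of files already flagged as fake. It assumes exact-match SHA-256 hashes for simplicity; a real system would more likely rely on perceptual hashing or the detection models discussed above, so that re-encoded or lightly edited copies are still caught. The database contents here are placeholders.

```python
# Minimal sketch of checking an upload against a shared database of known fakes.
import hashlib

# Hypothetical shared database: fingerprints of files already flagged as deepfakes.
known_fake_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder entry
}

def fingerprint(data: bytes) -> str:
    """Return a hex SHA-256 digest of the raw file bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """True if the uploaded file matches a known fake in the shared database."""
    return fingerprint(upload) in known_fake_hashes

# Example: b"test" hashes to the placeholder entry above, so it is blocked.
print(should_block(b"test"))           # True
print(should_block(b"original clip"))  # False
```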