The Context
Threat 2: Deep Fakes
What’s the risk?
Deep fakes use AI and machine-learning techniques to create deceptive video content. Typically, footage of famous or trustworthy people is manipulated to deliver misleading or untrue video messages. If you watched the hit BBC drama series The Capture, in which rogue spooks fake video footage in real time to frame innocent people, you’ll have seen how powerfully the technique exploits our near-universal instinct that ‘seeing is believing.’
Why is it increasing?
In simple terms, deep fakes are increasing because the barriers to entry are getting lower, and because video is fast becoming the world’s go-to communication channel. Convincing deep fakes are popping up on social media, and they can now be made by anyone with a reasonable level of technical competence.
As with many attack methods, these techniques can move from the margins to the mainstream relatively quickly. What was once the preserve of a well-funded university laboratory can show up on an open-source code repository like GitHub in just a few short years.
What to look out for
As yet, the know-how required to create deep fakes means they are mostly confined to (relatively) harmless and comical spoofs. There are, admittedly, easily available services that can take a picture of a celebrity and make it talk; entertaining, but not in itself a risk.
But as those barriers to entry get lower, look out for deepfake attacks like these:
- It will soon be trivial to join a Teams call and respond on the fly using deepfake video. The deepfake could impersonate anyone: the HR team demanding your data, the boss asking you to pay an urgent invoice, IT requesting your login details… the list goes on.
- Criminals have used deepfakes to stage fraudulent remote job interviews, with the FBI reporting scammers impersonating innocent IT professionals to get roles where they have access to sensitive data.
- Watch out for fake video ads in which a trusted figure appears to endorse a product they have no connection with.
"Experts recently ranked deepfake technology as the most worrying use of artificial intelligence, one that could have serious implications for cybercrime and terrorism."