AI expert warns deepfakes are now the ‘biggest evolving threat’ and reveals two ways they’re now being used against you
DEEPFAKES are now the "biggest evolving threat" when it comes to cyber-crime.
That's what a leading cyber-expert told The U.S. Sun in a stark warning over the dangers of the face-faking technology.
Deepfakes are fraudulent videos that appear to show a person doing (and possibly saying) things that they never did.
The technology uses AI-powered software to clone a person's features – and map them onto something else.
Of course, AI is being used for plenty of sinister purposes – including generally making scams quicker to create and execute – but deepfakes are one of the most serious threats.
The U.S. Sun spoke to Adam Pilton, a UK-based cyber-security consultant at CyberSmart and a former Detective Sergeant who investigated cybercrime, about the threats we're facing.
"AI can generate highly convincing phishing emails with ease and this means that unskilled cybercriminals are making hay while the sun shines," Adam told us.
"The National Cyber Security Centre warned us in their latest annual report that cybercriminals are already using AI to develop increasingly sophisticated phishing emails and scams.
"The threat will continue to grow as the technology develops and the skills of those involved increase too."
"Without a doubt, the biggest evolving threat is deepfakes," he continued.
"Deepfake technology can create realistic video and audio impersonations of people."
There are two key ways that criminals are using deepfakes, Adam explained.
SCAM SCHEMES
The first sinister use of deepfakes is to trick you into making some kind of security mistake.
This might be as simple as a crook using a deepfake to pretend to be a loved one – and convincing you to hand over some cash.
The rise of deepfakes is one of the most worrying trends in online security.
Deepfake technology can create videos of you even from a single photo – so almost no one is safe.
But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.
For a start, there's much greater awareness about deepfakes now.
So people will be looking for the signs that a video might be faked.
Similarly, tech companies are investing time and money in software that can detect faked AI content.
This means social media will be able to flag faked content to you with increased confidence – and more often.
As the quality of deepfakes grows, you'll likely struggle to spot visual mistakes – especially in a few years.
So your best defence is your own common sense: apply scrutiny to everything you watch online.
Ask whether the video is something that would make sense for someone to fake – and who benefits from you seeing this clip.
If you're being told something alarming, a person is saying something that seems out of character, or you're being rushed into an action, there's a chance you're watching a fraudulent clip.
BAD NEWS
The second way deepfakes are being used for nefarious purposes is to spread fake news.
This is particularly worrying as voters head to the polls for upcoming elections in the United States and the United Kingdom.
"The World Economic Forum has ranked misinformation and disinformation as the greatest global risk over the next two years," Adam told The U.S. Sun.
"With a series of elections approaching for democracies around the world, it is easy to understand why.
"We continue to see the ability of AI to generate fake news articles, social media posts, and other content that spreads misinformation."