ASKING a simple question about dinner could save you from a costly scam powered by artificial intelligence.
Cybersecurity experts are trying to arm The Sun readers with easy tricks to defend themselves against AI crime.
AI can be used to generate startlingly convincing human-like "deepfake" voices that say anything cyber-criminals want.
And artificial intelligence can even clone the voices of your friends, family, and loved ones in a matter of seconds.
Experts have warned The Sun over how fast and easy it is for cyber-criminals to abuse these AI tools to craft dangerous scams.
"AI is becoming increasingly sophisticated, the responses are increasingly accurate," said security expert Adam Pilton, speaking to The Sun.
"The number of tools and resources we have access to has dramatically increased, from simple text responses, to pictures, audio, video and more.
"Deepfake creation tools are becoming more user-friendly and accessible, lowering the technical barrier for attackers."
Adam, a senior cybersecurity consultant at CyberSmart, said AI chatbots are now smarter than ever.
And AI "deepfakes" that create fraudulent audio or video content are more convincing too.
"We are seeing deepfakes that have more realistic facial expressions, lip movements, and voice synthesis," Adam told The Sun.
"This will make them even harder to distinguish from real videos and audio.
"This means that undoubtedly it will become increasingly difficult to distinguish between chatbots, AI-generated voices, and AI-faked videos, as technology continues to improve rapidly."
HOW TO BEAT THE AI DEEPFAKES
Thankfully there's some hope of beating the AI.
Adam pointed out that tech companies are making tools to spot AI fakes.
That means it'll be harder for fraudulent AI content to sneak past safeguards on popular apps.
But no system is foolproof – so you'll have to take your online safety into your own hands.
Adam explained that if you receive a phone or video call from someone you know asking for money or sensitive info, you'll want to ask some quick questions.
It might even be as simple as asking about dinner.
"The simplest defence is to ask questions that only you would know," Adam told us.
CREATE A SAFE WORD!
Here's some advice for staying safe from The Sun's Head of Technology and Science ...
Artificial intelligence is now very powerful – so smart that it can convincingly replicate the voices and faces of people you know.
That means criminals can use AI to trick you into handing over cash or info by posing as friends, family, or loved ones.
It's easy to give up hope. Surely we're all doomed, right?
Well one of the best defences against this is very simple: a safe word.
Speak to your partner, for instance, and set up a simple safe word or phrase.
Try to pick something very strange and random – not the street you live on or your favourite band.
Then if you ever receive a call from them asking for money or info, ask them to tell you the safe word.
You'll instantly be able to verify whether you're talking to the real person.
If they say they've forgotten (genuinely, or because they're a scammer), try asking questions that only they'd know the answer to – specific memories that wouldn't have been posted online.
And unless it's urgent, check with them in person if possible – or call them back yourself directly and verify that way instead.
"These questions shouldn't be based on information that is generic such as the company you work for, the football team you support or even a secret word you've agreed to.
"It must be completely random and based on something only you and that person would know.
"So, with a family member, it could be what you ate for dinner the night before or a memory from a Christmas gone by.
"With a colleague, it could be the event at which you first met, or who you spilt a drink on at the office party.
"No attacker or AI would be able to respond accurately to such a question."
But he warned: "That is assuming you didn't post a picture of your dinner on social media last night!"