A TINDER swindler scammed a victim out of thousands of pounds by pretending to be a young, hot, and rich single from New York.
This dream girl was part of a new wave of AI chatbots, plaguing dating apps with fraudulent money-making schemes that have left a trail of heartbreak in their wake.
Peter, 43, who didn't want to give his last name, told The Sun how he was scammed out of an eye-watering $22,000 (£17,337) in a tragic rom-con.
He revealed how he was wooed by a pretty woman, who claimed to be 24, when she messaged him first on Tinder.
The pair chatted over the app, spoke on the phone and even video-called - with him none the wiser.
But after he was lured into the trap, he realised the entire thing had been AI-generated.
It was a quick Google search of the images that made him realise the pictures being used by his "girlfriend" were fake.
Experts told The Sun that Peter's story is a chilling cautionary tale as AI becomes more and more powerful.
More importantly, AI creations are becoming increasingly hard to tell apart from reality - making it even easier for scammers to entrap their victims.
Peter now realises that the first red flag was the fact that his "lover" messaged him first.
"No 20-something-year-old woman is messaging a middle-aged man first on a dating app," Peter told The Sun.
He added that he had been single since his divorce a decade ago, and had been desperately using dating apps in the hope of finding somebody.
Still, Peter said that all had seemed well when the pair first started chatting.
They spoke on Tinder for a few days, before swiftly moving over onto WhatsApp.
Tinder says it has a dedicated fraud team that scans for fake profiles and red-flag language - and it urges users never to send cash to people they haven't met in person.
The pair then started speaking on the phone, and even video-called on a couple of occasions.
"I was falling," Peter said.
"It seemed perfect. She was gorgeous, educated, and very easy to talk to."
The AI chatbot had told Peter that she was a rich Asian woman living in New York.
"She said she owned several properties, and that she was an investor.
"She told me she had made her money through cryptocurrency and that she could help me do the same."
After a month of chatting, Peter said he thought he had found a real connection, and trusted her enough to invest his money.
He said that he initially sent the bot an instalment of $10,000 (£7,880), and after receiving "proof" of his successes a few weeks later, he sent a further $12,000 (£9,457).
It was only after he sent the second huge sum of money that Peter began to suspect something was up.
He said that he started to notice that many of her messages came through twice on WhatsApp.
Out of curiosity, Peter reverse-searched some of the images she had sent him on Google - and quickly wished he hadn't.
"When I found her images online it said she was an actress from China. My heart sank into my chest.
"I knew then that the whole thing had been deepfake AI. The video chats, the phone calls, everything was faked."
Peter says he has been in contact with his bank about the scam, but it is currently unclear whether any of the money will be returned to him.
"I’m trying not to beat myself up but I think this is a sign for me to stop using dating apps and stop dating in general," he said.
"I've deleted Tinder, WhatsApp, and I blocked and reported the account, but it was too late.
"It's completely turned me off dating. I'm not sure I'll ever find somebody now."
"Eerily convincing deepfakes can be used with relative ease for highly sophisticated social engineering attacks
Chris Dyer
According to Cyber Security Technical Consultant Chris Dyer, the use of AI in dating fraud is becoming increasingly common.
He told The Sun: "Scammers are using AI to whip up convincing fake profiles and have automated chats without the need to put in the hours themselves to see return on investment.
"Eerily convincing deepfakes can be used with relative ease for highly sophisticated social engineering attacks.
"Nowadays you can use AI and deepfake technology to create a completely false online identity that even works with live video calls."
Dyer warned that AI technology is becoming so advanced that a video call can no longer be trusted as a way of verifying someone's identity.
He says the tools have become so accessible that it's now worryingly simple to fake a seemingly real person over a live call.
He worries that this will add yet another layer to the trust issues people already face online.
"It used to be that we could not trust everything we read online without corroborating evidence, but now that we know of AI models that can create realistic and imaginative scenes purely from text input, even that corroboration can easily be falsified," he says.
"My biggest concern for the general public is that not enough is being done to bring awareness to this potential issue.
"I foresee many victims of scams where they have been presented with too much plausible and believable content, which then triggers them to send money to who the target believes is a loved one."
How to protect yourself from AI scammers
Be critical of everything you see online
Dyer warns that fake imagery and videos are becoming more widespread.
As a result, it's important to stay on your toes and not take everything you see online at face value.
Never transfer money without research
Generating heartbreaking and convincing stories or images is easier than ever before.
Scammers can do it with the push of a button, and ask you to send money through channels that are difficult to trace - like crypto.
If you're asked to send a substantial sum of money, stop and think it over carefully.
You should independently verify anyone's identity before acting.
Verify unexpected calls
Some 69 per cent of individuals struggle to distinguish between human and AI-generated voices.
If you receive a call from an unknown number, be wary.
Even if the voice is saying they are a friend or family member, take care to verify the caller's identity.
You can do this by asking specific questions that only they would know.
Experts also suggest keeping an eye out for:
Odd body parts
You should pay close attention to any people or animals in an image.
AI is known to struggle with the details of living beings, especially hands.
It's not uncommon to see AI-generated images with abnormally long or short fingers, missing fingers, or extra fingers.
Oddly rendered ears, eyes and body proportions are another sign of AI involvement.
Absurd details
AI has also been known to mess up everyday objects.
Glasses, jewellery and handheld items are just some of the things it struggles with.
Some AI-generated images have placed pens upside down in hands.
Often, AI forgets to match earrings, or to make sure that rings go all the way around fingers.
Strange lighting or shadows
Watch out for shadows and lighting that seem off.
Sometimes AI can create a shadow that's pointing the wrong way, or feature lighting that doesn't make sense given the setting.
AI also tends to smooth out skin, stripping away the blemishes found on real faces.
Weird backgrounds
There are some subtle nuances in AI-generated backgrounds that you should keep an eye out for.
Unnecessary patterns on walls or floors can give away an AI-generated image - especially where the pattern abruptly changes.
Dyer added that scammers have been using AI to polish their phishing attacks - many of which revolve around cryptocurrencies.
He explained that these sorts of fraudulent attacks are perfect for AI because of the "semi-lawless nature" of the world of digital coins.
It's much easier to get victims to funnel money into channels that are difficult to trace - making it a prime opportunity for scammers.
He said: "Scammers create a false 'buzz' around crypto scams in stages, which can start with market manipulation and compromising trusted sources.
"AI can then follow up to create feelings of FOMO and sense of urgency to trigger actions without their targets having time to consider the consequences."
But, Dyer maintains that it's not all doom and gloom.
He said: "While AI is being used maliciously in the dating world, it's not all scams and scares.
"AI is doing some genuinely helpful stuff in the background, like refining how we find matches and keeping the creeps at bay.
"It also helps weed out those fake profiles - AI fighting against AI.
"There's a lot of potential for AI to make finding love smoother and easier, if it's used wisely by the right people."
Dyer said that it's important to remain vigilant, and to remain aware of the potential issues with AI. But he has hope for the future.
He concluded: "It's a bit of a tightrope walk, but with some smart regulations and savvy implementations of AI tech, there's hope for a future where AI helps more than it hinders."