
SINCE the dawn of generative AI, an advanced form of machine intelligence, social media has been flooded with computer-produced images of people.

These AI-generated people, generally referred to as deepfakes, are usually a Frankenstein-cocktail of hundreds of different faces that the AI has 'seen' on the internet.

The image bears all the hallmarks of a deepfake: extra fingers, gibberish text, and a nonsensical five-seater arrangement for a plane. Credit: X @fmanjoo

But a worrying trend may be emerging.

A woman claims a deepfake doppelganger of a dead relative has surfaced online, in social media memes, fundraisers, and even puzzles.

Scrolling on social media platform X (formerly Twitter), Sara Burningham, a podcast and documentary producer at Slate Magazine, stumbled across a "janky AI" image of five elderly men dressed like war veterans.

Below the photo, a caption reads: “The real heroes in America are not in Hollywood.”


The image bears all the hallmarks of a deepfake: extra fingers, gibberish text, and a nonsensical five-seater arrangement for a plane.

But what struck Burningham most was that the man seated closest to the 'camera' was her dad, who died 14 years ago.

"It stopped me in my tracks. I wanted to find out where this image had sprung from," she wrote in Slate.

"I did a reverse image search and found the image is circulating all over the place."

While there are some minor differences between her dad and the man in the image, like the line of his beard and the shape of his glasses, she and her brother agreed it was "unmistakably him".


He was also not an American veteran, as the picture suggests, but a Canadian oil engineer-turned-environmentalist.

Burningham believes an image generator scraped a photo of her dad from social media, like the wedding snaps she posted in 2007.


But the AI failed to "jumble it up enough to make it into someone new".

It's possible that an AI image generator could accidentally conjure up a 'random' face that already exists, or existed.

"Our memories have become forever digital debris to be sucked in, digested, and reanimated by machine learners," added Burningham.

"Our lives, our dead, and their data are becoming a kind of digital compost.

"The ways in which our information is now totally beyond our control make all of us queasy or, if we think about it a little longer, angry."

'Years-old digital shadow'

Experts have urged people to start preparing for their "digital death", so that the remnants of their lives, what Burningham called her dad's "years-old digital shadow", aren't left trapped in the digital sphere forever.

A recent Which? poll revealed that three-quarters of people have no plan for what to do with their digital assets after they have passed away.

While such planning is largely intended to stop emails, photos and social media accounts being locked up and left inaccessible to loved ones, it also removes some of the ammo AI can use to create deepfakes.


There are currently no legal rules for how digital assets are dealt with when you die. 

The watchdog has encouraged people to share account details with family or friends before they die and consider including a letter of wishes.

What are the arguments against AI?

Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.

Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have grown as Meta rolls out its AI assistants across platforms like Facebook and Instagram. Regulators have responded: in 2016, the EU adopted legislation to protect personal data, and similar laws are in the works in the United States.

Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI giving out the wrong health information.
