
META has released a new tool that can detect AI-generated audio and potentially undermine the creation of dangerous "deepfakes."

The tech behemoth announced the release of AudioSeal, a tool designed to label and detect synthetic audio, on Tuesday.

Meta announced the release of its AudioSeal technology on Tuesday. It can embed and detect inaudible watermarks in AI-generated audio segments. Credit: Getty

AudioSeal simultaneously trains a generator that embeds an inaudible watermark and a detector that picks up the watermarked fragments within longer audio clips.
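Meta has published the code openly, so the two halves can be used together. The snippet below is a rough sketch of that embed-then-detect flow, assuming the open-source audioseal Python package follows the usage Meta has documented; the model card names and method names shown are assumptions, not details confirmed in this article.

# Rough sketch of AudioSeal's embed-then-detect flow; package name, model cards
# and method signatures are assumptions based on Meta's published examples.
import torch
from audioseal import AudioSeal

sample_rate = 16_000
wav = torch.randn(1, 1, sample_rate)  # dummy one-second mono clip, shaped (batch, channels, samples)

# Generator: produces an imperceptible watermark signal that is simply added to the audio
generator = AudioSeal.load_generator("audioseal_wm_16bits")
watermark = generator.get_watermark(wav, sample_rate=sample_rate)
watermarked = wav + watermark

# Detector: trained jointly with the generator to spot that signal
detector = AudioSeal.load_detector("audioseal_detector_16bits")
score, message = detector.detect_watermark(watermarked, sample_rate=sample_rate)
print(f"Probability the clip is watermarked: {score:.3f}")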

Deepfakes began as still images and video clips featuring celebrities' faces superimposed on others' bodies, often in an exploitative or pornographic context.

However, with the growth of artificial intelligence technology, it is easier than ever to produce synthetic video as well as audio with an eerie degree of accuracy.

In January, a fake robocall of President Joe Biden told Democrats not to vote in the New Hampshire primary. The man responsible for the plot was later indicted on charges of voter suppression and impersonation of a candidate.

Meta addressed the potential use of AI by malicious actors in a press release.

"Generative AI tools are inspiring people to share their creations with their friends, family, and followers on social media," the release read.

"As with all AI innovations, it’s important that we do our part to help ensure responsible use of these tools."

While the watermark is imperceptible to the human ear, AudioSeal can detect it.

Meta dubbed the tool "the first audio watermarking technique designed specifically for the localized detection of AI-generated speech."

When analyzing longer audio clips, AudioSeal uses localized detection to pick out the segments generated by AI.


This makes it possible to identify AI-generated segments in content like podcast episodes.
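In practice, that amounts to scanning a long recording in chunks and flagging the spans where the detector fires. The helper below is purely illustrative, reusing the detector loaded in the earlier sketch; the function name, window size, and threshold are invented for this example, and AudioSeal's actual detector works at a much finer, per-sample granularity.

# Illustrative only: scan a long recording window by window and flag the spans
# where the (assumed) detector fires, to localize AI-generated segments.
def locate_watermarked_segments(wav, sample_rate, detector, window_seconds=5, threshold=0.5):
    window = window_seconds * sample_rate
    flagged = []
    for start in range(0, wav.shape[-1] - window + 1, window):
        chunk = wav[..., start:start + window]
        score, _ = detector.detect_watermark(chunk, sample_rate=sample_rate)
        if score > threshold:
            flagged.append((start / sample_rate, (start + window) / sample_rate))
    return flagged  # (start_sec, end_sec) spans likely to contain AI-generated audio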

AudioSeal differs from traditional watermarking methods in that its localized detection approach skips the complex decoding step those systems rely on, making it faster and more efficient.

This innovative design increases detection speed by 485 times compared to previous methods, Meta says.

AudioSeal has been released under a commercial license and is free to download on GitHub, a platform where developers share code.

The tool could theoretically be used to identify misleading AI-generated "deepfakes". Credit: Getty

The team at Meta has big aspirations for AudioSeal - but as audio watermarks have yet to be adopted widely, the tool's utility may be limited.

Social media companies could hypothetically use watermarks to identify misleading AI-generated content, but there is currently no singular industry standard.

Another downside is that watermarks are easy to manipulate and can be removed using software like Audacity or Adobe Audition.

Meta is not the only tech giant taking a stab at audio watermarking software.

DEFENCE AGAINST THE DEEPFAKES

Here's what the Head of Technology and Science at The Sun and The U.S. Sun has to say...

The rise of deepfakes is one of the most worrying trends in online security.

Deepfake technology can create videos of you even from a single photo – so almost no one is safe.

But although it seems a bit hopeless, the rapid rise of deepfakes has some upsides.

For a start, there's much greater awareness about deepfakes now.

So people will be looking for the signs that a video might be faked.

Similarly, tech companies are investing time and money in software that can detect faked AI content.

This means social media will be able to flag faked content to you with increased confidence – and more often.

As the quality of deepfakes grows, you'll likely struggle to spot visual mistakes – especially in a few years.

So your best defence is your own common sense: apply scrutiny to everything you watch online.

Ask if the video is something that would make sense for someone to have faked – and who benefits from you seeing this clip?

If you're being told something alarming, a person is saying something that seems out of character, or you're being rushed into an action, there's a chance you're watching a fraudulent clip.

In November, Google's DeepMind team announced the release of its SynthID technology in collaboration with YouTube.


The tool was deployed through DeepMind's Lyria model, an AI music generator. This integration allows SynthID to determine if Google's AI tech has been used in the creation of a track.

News of the release came just months after the company released a beta version of SynthID for images created through Google Cloud’s Vertex AI.
