
GOOGLE has made a bombshell U-turn that opens up the possibility of its AI tech powering weapons as the world enters a chilling new era of warfare.

The company had previously vowed its AI technology would never be used for purposes that were "likely to cause harm".

USS Fitzgerald became the first AI warship to be deployed. Credit: Alamy

Google has changed its policy to allow the development of weapons using its AI technology. Credit: Getty

Illustration of AI-powered weapons: attack drones, tanks and robot dogs in a warzone.

But now that pledge has been scrapped, opening the door for Google AI to power high-tech battlefield weapons and spy gear.

AI has been compared to nuclear weapons in the way it could transform the world's warfare and destabilise geopolitics.

It is already being used to develop huge unmanned attack drones, self-mending warships, hydrogen-powered tanks and self-steering submarines.

Two Google execs, senior vice president James Manyika and AI chief Demis Hassabis, defended the move.


They said that businesses and governments need to work together on AI which "supports national security".

The idea of using AI on the battlefield is controversial, but the men argued that the approach needed updating since AI had become so widespread.

They wrote: "Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.

"It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself."

Google banned its AI technology from being used in weapons in 2018 after a protest from its own staff over a contract with the US Defense Department to use it to analyse drone footage.

Firms across the AI sector are becoming more willing to partner with defence and surveillance projects.

OpenAI, the developer of ChatGPT, announced in December a partnership with defence tech company Anduril to develop anti-drone technology for the warzone.

OpenAI had also previously ruled out military uses for its tech, but changed its own policy last year.


Many experts are concerned about the implications of unchecked AI development.

Top scientists last year urged governments to realise that AI is not "a toy" and could be catastrophic for humankind.

In a report calling for stricter regulations on the tech, researchers said AI - in the wrong hands - could be used to produce biological weapons and someday make humans extinct.

However, many AI weapons projects are steaming ahead nonetheless.

South Korea has unveiled plans for the world's first hydrogen-powered tank with an AI-controlled gun. Credit: Hyundai Rotem

The United States military is testing unmanned AI aircraft like the Kratos XQ-58 Valkyrie. Credit: Instagram

The UK Royal Navy is at work on an autonomous weapon, a submarine nicknamed 'Cetus' that can steer itself. Credit: Instagram

The US Navy recently deployed the world's first AI warship.

USS Fitzgerald, first launched in January 1994, has been redeployed with an innovative artificial intelligence program used by the Pentagon.

The software uses machine learning to predict maintenance issues before they have even happened, allowing crews to avoid major malfunctions and stay on the water.

It takes 10,000 sensor readings every second from all over the ship - including the hull and the mechanical and electrical systems.

The AI algorithm then interprets the data to make maintenance recommendations to the crew.
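The Pentagon has not published how the ship's software works, but the basic idea of predictive maintenance can be sketched simply: watch each sensor's readings against its recent baseline and flag readings that drift far outside it, so the crew can act before a part fails. The class and thresholds below are illustrative assumptions, not the real system, which uses machine learning rather than a simple statistical check.

```python
# Illustrative sketch of predictive maintenance on streaming sensor data.
# The ship's real software is not public; this stands in for it with a
# simple z-score anomaly check over a rolling window of recent readings.
from collections import deque
from statistics import mean, stdev

class SensorMonitor:
    """Flags a reading as anomalous when it deviates far from the
    rolling baseline (a stand-in for the real machine-learning model)."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.window = deque(maxlen=window)  # recent readings only
        self.threshold = threshold          # z-score cutoff

    def check(self, reading: float) -> bool:
        """Return True if the reading suggests an emerging fault."""
        anomalous = False
        if len(self.window) >= 30:  # wait until a baseline exists
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(reading - mu) / sigma > self.threshold:
                anomalous = True    # recommend maintenance before failure
        self.window.append(reading)
        return anomalous

monitor = SensorMonitor()
# Steady vibration levels, then a spike a crew would want flagged early.
for value in [1.0, 1.1, 0.9, 1.05, 0.95] * 10:
    monitor.check(value)        # normal readings: nothing flagged
print(monitor.check(9.0))       # prints True: far outside the baseline
```

In practice a ship would run one such model per sensor stream, which is why taking 10,000 readings a second is useful: the more baseline data, the earlier a genuine drift stands out from noise.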

Militaries across the world are already racing to find an edge on the battlefield.

The Marine Corps completed the second test flight of the Kratos XQ-58 Valkyrie, an unmanned aircraft piloted by AI, in February last year.

This development marked a milestone in implementing Project Eagle, the service's aviation modernization strategy.

Major General Scott Cain, Air Force Research Lab commander, said last year: “AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions."


And South Korea has unveiled plans for a new AI-powered tank, which would be the world's first powered by hydrogen.

Rotem, a subsidiary of motor company Hyundai, released its plans for the hi-tech war machines, which would be used by the South Korean army.

What are the arguments against AI?

ARTIFICIAL intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:

Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and that as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and would not function otherwise.

Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.

Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, legislation was created to protect personal data in the EU, and similar laws are in the works in the United States.

Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as an AI giving out incorrect health advice.
