Killer drones that pick their own targets and robot war submarines that steer themselves – AI weapons of the near future
MILITARIES across the globe are racing to develop AI-powered weapons in a frantic attempt to best each other.
While proponents argue that enhanced capabilities could reduce collateral damage, critics say the AI arms race could quickly spiral out of control.
The United States Marine Corps has welcomed developments in this arena with open arms.
The service completed the second test flight of an unmanned aircraft called the Kratos XQ-58 Valkyrie in February.
This development marked a milestone in implementing Project Eagle, the service's aviation modernization strategy.
In addition to building weapons piloted by artificial intelligence, the Marine Corps is exploring ways to pair unmanned aircraft with crewed ones rather than doing away with human pilots entirely.
“AI will be a critical element to future warfighting and the speed at which we’re going to have to understand the operational picture and make decisions,” Major General Scott Cain, Air Force Research Lab commander, said in a press release.
Cain anticipates autonomous operations will continue to "evolve at an unprecedented rate."
But the use of autonomous weapons is a double-edged sword.
These tools reduce the number of fighters sent into conflict while simultaneously maximizing the damage inflicted on the enemy.
Drones and other autonomous devices can lock onto targets with unfathomable precision, fulfilling their purpose as killing machines.
While this may seem like a net positive, such unchecked brutality will only cause the arms race to intensify, possibly spurring more destructive weapons development on both sides.
A global coalition has banded together to ensure the responsible use of artificial intelligence in wartime.
Last February, the United States government launched the Political Declaration on Responsible Military Use of AI.
The document aims to guide states’ development and deployment of military AI.
Countries are urged to develop AI that is consistent with obligations to International Humanitarian Law, which is meant to limit the effects of armed conflict and protect those who are not fighting.
The declaration already has 52 signatories. Notably absent is Russia, a country that has helped stoke the AI arms race.
Russia's invasion of Ukraine kickstarted the development of AI-powered weaponry as the embattled country scrambled to counter one of the most powerful militaries in the world.
This pressure, coupled with a steady flow of investment, turned Ukraine into a hotbed of development for autonomous drones.
Vyriy is just one company working on developing drones that fly themselves and lock onto targets. The devices use basic computer vision algorithms that analyze and interpret images.
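Vyriy hasn't published its software, but a basic building block of this kind of computer vision is picking a target's location out of a camera frame. Here is a minimal, purely illustrative sketch in Python with NumPy — the brightness-threshold approach, the synthetic data, and every name are assumptions for demonstration, not the company's actual method:

```python
import numpy as np

def locate_target(frame: np.ndarray, threshold: int = 200):
    """Return the (row, col) centroid of pixels brighter than the
    threshold, or None if no such pixels exist in the frame."""
    ys, xs = np.nonzero(frame > threshold)
    if ys.size == 0:
        return None
    return (int(ys.mean()), int(xs.mean()))

# Synthetic 100x100 grayscale frame with a bright 10x10 "target"
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:50, 40:50] = 255

print(locate_target(frame))  # → (44, 44)
```

Real systems use trained object detectors rather than simple thresholds, but the output is the same kind of thing: pixel coordinates the flight controller can steer toward.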
A Google form reviewed by The U.S. Sun offers a free course to Ukrainians interested in learning how to build drones.
"Within the program, you will learn the skills of assembling a civilian 7-inch FPV drone for free, which in the hands of our military is capable of destroying the occupiers' equipment and burning enemy tanks!" it reads.
Another Ukrainian drone manufacturer, Swarmer, can deploy dozens of its drones simultaneously.
Thanks to AI, the devices can adapt to each other's movements and coordinate strikes without sustained human involvement.
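Swarmer hasn't disclosed how its coordination works. A textbook decentralized flocking rule (Reynolds-style cohesion and separation) gives a rough flavor of how drones can adapt to each other's positions without sustained human input — everything below is an illustrative assumption, not the company's algorithm:

```python
import numpy as np

def flock_step(positions, cohesion=0.05, separation=0.2, min_dist=1.0):
    """One synchronous update: each agent drifts toward the group
    centroid (cohesion) and pushes away from neighbors closer than
    min_dist (separation). No central controller is needed."""
    new = positions.copy()
    centroid = positions.mean(axis=0)
    for i, p in enumerate(positions):
        v = cohesion * (centroid - p)
        for j, q in enumerate(positions):
            if i == j:
                continue
            d = p - q
            dist = np.linalg.norm(d)
            if 0 < dist < min_dist:
                v += separation * d / dist
        new[i] = p + v
    return new

# Three agents converge into a loose formation over 50 steps
drones = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
for _ in range(50):
    drones = flock_step(drones)
```

Each agent only needs the positions of its neighbors, which is why a swarm like this can keep flying in formation even if the link to a human operator drops.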
Some tools are in development, while others are already in action on the battlefield. AI-powered drones have been targeting Russian oil refineries, for example.
And other countries are scrambling to develop autonomous weapons, even without the looming threat of conflict.
MSubs, a British tech firm, secured a £15.4 million ($19.8 million) contract from the UK Royal Navy in 2022 to construct an autonomous submarine codenamed "Project Cetus."
Named after a mythical sea monster, the crewless machine will be able to operate up to 3,000 miles from home for three months.
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers counter that the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.
Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. Lawmakers have responded: the EU adopted legislation to protect personal data in 2016, and similar laws are in the works in the United States.
Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI dispensing the wrong health information.
Chinese researchers have found yet another use for artificial intelligence in wartime - aiding in the weapons design process.
The Chinese military uses AI to develop huge electromagnetic weapons that can launch projectiles into orbit.
In 2022, a group of Chinese researchers from the Naval University of Engineering in Wuhan used AI to develop what they claimed was the world’s smallest and most powerful coilgun.
After the AI provided a set of optimized data points, the researchers changed the weapon's parameters, decreasing its size and boosting its energy output based on the computer's suggestions.
The kinetic energy of the bullet passing through the barrel was more than twice what was needed to fire a fatal shot.
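The Wuhan team's method isn't public, but AI-assisted design of this kind typically works as a parameter-optimization loop: the computer proposes candidate parameter sets, scores each against a model, and hands the best candidates to engineers. A toy random-search sketch — the energy model, the units, and the size constraint are all hypothetical:

```python
import random

def muzzle_energy(coil_turns: int, current_ka: float) -> float:
    # Toy scoring model, hypothetical units: energy grows with the
    # number of coil turns and the square of the drive current.
    return 0.5 * coil_turns * current_ka ** 2

def random_search(trials: int = 1000, seed: int = 0):
    """Sample random parameter sets and keep the highest-energy one
    that fits inside a (made-up) size budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        turns = rng.randint(10, 200)
        current = rng.uniform(1.0, 10.0)
        if turns * 0.5 > 60:  # toy constraint: coil length <= 60 mm
            continue
        energy = muzzle_energy(turns, current)
        if best is None or energy > best[0]:
            best = (energy, turns, current)
    return best

print(random_search())
```

In practice the "score" would come from an electromagnetic simulation rather than a one-line formula, but the shape of the loop — propose, evaluate, keep the best — is the same.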