US Army experimenting with AI bots and training them as war advisers but experts warn it’s too ‘high-stakes’
THE US Army is said to have trained chatbots to provide battle advice in a war game simulation.
This experiment was conducted by the US Army Research Laboratory.
It's hoped the test could show whether AI can be used to improve battle planning.
Researchers used OpenAI’s GPT-4 Turbo and GPT-4 Vision models to conduct their tests.
The bots were asked to react to scenarios in the military sci-fi video game StarCraft II.
"The development of Courses of Action (COAs) in military operations is traditionally a time-consuming and intricate process.
"Addressing this challenge, this study introduces COA-GPT, a novel algorithm employing Large Language Models (LLMs) for rapid and efficient generation of valid COAs," the researchers wrote.
Commanders in the study were able to input text and image prompts and receive AI advice.
The AI bots acted as a military commander’s assistant in the test.
They were said to respond in seconds with detailed proposals.
The person receiving the advice could then ask for it to be refined before acting on it.
Experts have advised against using AI models in war situations.
According to New Scientist, Carol Smith of Carnegie Mellon University in Pennsylvania is one of these experts.
"I would not recommend using any large language model or generative AI system for any high-stakes situation," she warned.
"COA-GPT's capability to rapidly adapt and update COAs during missions presents a transformative potential for military planning, particularly in addressing planning discrepancies and capitalizing on emergent windows of opportunities," the researchers concluded.