
Killer AI robots for military use can autonomously pick their targets

Is it possible to globalise responsible AI in the military domain?

By Susan Fourtané Published about a year ago 3 min read

(This article uses British spelling.)

Whilst everyone is talking about GPTs, I wonder if we are losing sight of the bigger bots that could be just over the horizon.

The most worrisome of them all: killer robots with embedded autonomous artificial intelligence.

Last year, the government of the Netherlands hosted the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM).

Fifty countries attended the diplomatic conference. Russia was not invited due to its invasion of Ukraine. I find it incredibly disturbing that wars are still happening in the 21st century, but that is not my focus right now.

The aim of the conference was to create a roadmap for the responsible use of Artificial Intelligence (AI) in military operations. By its end, delegates had set norms and boundaries and discussed ethical standards for the use of AI in military matters, supposedly grounded in international agreements.

Although the responsible development, deployment, and use of Artificial Intelligence in the military domain must be given a higher priority, I wonder if this is, indeed, the case.

The future of wars: AI and autonomous bots target specific individuals for extermination

AI and robotics development is happening at the same time that governments are deploying the technology with little concern, in my opinion, for the damage it can cause. After all, they must think, it's war. Extra destruction is not a concern for anyone.

The military is currently using AI for recognition, surveillance, and situational analysis. Ideally, in a field that deals with life and death, you would expect a conscious human to be involved in the decision-making process.

And unlike WarGames, what I am describing is not a film but reality.

Even though some say the prospect of fully independent killing machines remains far off, the reality shows that this is not the case.

I remember attending a manufacturing conference a few years ago where there was a talk about drones capable of following a target until that target was terminated. The technology was presented as a useful tool for law enforcement combating criminals.

It was the first time I remember feeling fear about a technology being deployed. And the technology I saw demoed years ago is now available.

One of the conference sessions on the future of war was called "Regulating Slaughter Bots".

Regulation usually happens when the technology already exists. You wouldn’t regulate something that doesn’t exist, would you?

The reality is that AI with the potential to autonomously pick targets for destruction is not part of a futuristic movie any longer.

If you, like many, think this technology is not really so close, think again. The fact that you are only learning about it now doesn't mean that autonomous killer AI has not been developing and evolving in the background for many years, unnoticed.

They are called secret weapons for a reason.

About 10 years ago, at the conference I mentioned, we were introduced to autonomous drones able to target specific individuals. These highly sophisticated drones were initially intended for deployment by law enforcement. A video demonstrated how, once an individual was targeted for extermination, there was nowhere to hide.

And that was a decade ago.

A new age of warfare: AI robots that kill entirely on their own


Take drone swarms and the use of AI in nuclear command and control systems, for example.

Call it robots, machines, or weapons, AI is already being used in the Russia-Ukraine war.

This conflict, among other things, has accelerated a trend toward autonomous military machines able to select and attack targets without human intervention.

Experts have warned that it may be only a matter of time before either Russia or Ukraine deploys weapons that kill autonomously.

The inevitability of autonomous robot warfare


Autonomous robots for warfare, unless embedded with extremely advanced and sophisticated artificial intelligence (AI) and machine learning (ML) capabilities, might find it difficult to discriminate between legitimate targets and civilians, and could harm the latter.

They could carry out tasks on their own, without any human supervision. And there is the risk of the technology being hacked, or falling into the wrong hands.

Every technology can have beneficial uses and destructive ones. The ongoing conflicts clearly show that the human race has not advanced enough to be responsible, live in harmony, and use technology for the betterment of humanity.

Instead, destruction is one of the main characteristics of human society, which is not yet ready to take further steps in deploying any kind of autonomous, AI-embedded machines for warfare.

About the writer: Susan Fourtané is a science and technology journalist, professional writer, dead media archeologist, photography enthusiast, a free-spirited maverick, and sometimes a hermit. She travels capturing the essence of what she finds interesting.

Tags: artificial intelligence, future, opinion, tech, humanity


Comments (3)

  • Addison Alder, about a year ago

    Notably, Israel attended REAIM but did not sign up. Because they've been using AI targeting for years, including against human and structural targets for airstrikes, and also directly on individuals with automatic border guns. It's literally incalculable how many innocent people they have killed with AI targeting. Sources: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes https://eandt.theiet.org/2023/02/17/global-summit-calls-responsible-use-ai-military https://apnews.com/article/technology-business-israel-robotics-west-bank-cfc889a120cbf59356f5044eb43d5b88 This story is important and terrifying – but also well-written and enjoyable!

  • It's so sad to think that these are all being used for war, murder and all sorts of violence!

  • ReadShakurr, about a year ago

    Awesome content
