In 2013, Amnesty International said that some US drone strikes in Pakistan could amount to war crimes after reports of civilian casualties emerged.
Drone strikes were already controversial.
The concept of a remote operator, coldly detached from the battleground, makes many people uncomfortable. He sees his targets only in the grey and white filter of the thermal camera. He doesn’t hear the explosions or the screams, only the intermittent radio chatter. He is often hundreds of miles away, safe and secure from the warzone he is ‘fighting’ in.
Many of these operators also control the drones with a video-game-style controller. Ironically, in 2007, the Call of Duty series featured a level in which the player operated an AC-130 gunship through an interface closely resembling the real thing. The level was met with acclaim from some and unease from many.
Despite this, lethal drones offer a country’s armed forces many strategic advantages. US-led drone strikes played a major part in helping Iraqi forces and US-backed groups in Syria retake territory formerly held by ISIS. That they could do so without ‘boots on the ground’ was an advantage only unmanned aerial vehicles could provide.
Artificial Intelligence (AI) adds a new dimension to this challenging subject.
In 2019, the scholar Arash Heydarian Pashakhanlou argued that artificial intelligence might eventually replace fighter pilots. The movie ‘Top Gun: Maverick’ explores the same idea: its protagonist is told that pilots like him will eventually be replaced by unmanned aircraft.
In 2022, ChatGPT burst onto the scene. Many people, from copywriters to coders, feared they would lose their jobs to the model’s extraordinary capabilities.
While these concerns were probably overstated at the time, there’s no doubt that as AI continues to improve, businesses will outsource more work to it and, in some cases, replace human roles.
This isn’t just because AI is cheaper but because it has the potential to be even more effective than humans. In 2023, ChatGPT passed university-level law and business school exams.
The logic is that these advances will also reach the vehicle industry. Tesla has made strides with its self-driving technology; could something similar happen in aviation?
The problem goes back to the issue of civilian casualties. Ask ChatGPT the same question 1,000 times, and 999 of the answers might be correct. One error might not be a big deal when someone’s writing an article or building a brief, but in a military setting, people could die.
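To see how quickly that kind of error rate adds up, here’s a rough back-of-the-envelope sketch in Python. The 99.9% accuracy and 1,000-decision figures are purely illustrative, echoing the example above; they aren’t measured benchmarks for any real model:

```python
# Illustrative sketch: how a small per-decision error rate compounds over many decisions.
# The 99.9% accuracy and 1,000-decision figures are hypothetical, echoing the example above.

accuracy = 0.999      # 999 correct answers out of 1,000
decisions = 1_000     # independent decisions the system makes

# Probability that at least one of those decisions goes wrong
p_at_least_one_error = 1 - accuracy ** decisions

print(f"Chance of at least one error across {decisions:,} decisions: {p_at_least_one_error:.1%}")
# Roughly 63% - a near coin-flip-or-worse chance of failing somewhere, despite 99.9% accuracy.
```

In other words, an error rate that looks negligible per answer becomes close to a certainty once the system operates at scale.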
AI doesn’t just make mistakes; it also has no conception of morality at all. Models don’t have a conception of anything: they simply produce outputs that score well against the objectives they were trained on.
However, the idea of a machine operator, more skilled than its human counterparts, has too many potential benefits to discard entirely.
Therefore, the solution is for these machines to operate only in a purely defensive, non-lethal capacity.
Israel’s Iron Dome air-defence system intercepts incoming rockets at a reported success rate of around 90%. What if AI could push that towards 100%?
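To see what those remaining percentage points mean in practice, here’s a purely illustrative sketch (the barrage size is hypothetical, and the 90% figure is simply the commonly cited rate above):

```python
# Illustrative only: what different interception rates mean in absolute terms.
# The barrage size is hypothetical; this is not real engagement data.

incoming_rockets = 1_000

for interception_rate in (0.90, 0.99, 1.00):
    leaked = incoming_rockets * (1 - interception_rate)
    print(f"At {interception_rate:.0%} interception, roughly {leaked:.0f} of "
          f"{incoming_rockets:,} rockets get through")
```

At 90%, around 100 rockets out of a 1,000-rocket barrage still get through; even 99% leaves about 10. Closing that last gap is where the appeal of AI lies.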
Could AI provide foolproof protection against threats to civilians without ever being used in a lethal capacity?
It’s hard to say, especially when national militaries are always trying to one-up each other technologically.
There is hope. The US recently destroyed the last of its sarin stockpile, a sign that international regulation can work.
Perhaps there ought to be some pressure from manufacturers and software developers too.
I’m reminded of Isaac Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
We don’t use AI to write content, but you might.
Check out our five must-know tips for creating marketing content with AI.