Throughout history, warfare has been irrevocably changed by the advent of new technologies. Military historians have identified several such technological revolutions.
The first was the invention of gunpowder in ancient China. It gave us muskets, rifles, machine guns and, eventually, all manner of explosives. It is indisputable that gunpowder completely changed how we fight wars.
Then came the invention of the nuclear bomb, which raised the stakes higher than ever: a war could be ended with a single weapon, and life as we know it could be ended with a single nuclear stockpile.
And now, war—like other spheres of life—has entered the era of automation. AI will cut through the “fog of war”, changing where and how we fight. Smaller, cheaper and increasingly capable off-the-shelf systems will replace large, expensive, manned platforms.
We’ve seen the beginnings of this in Ukraine, where both sides are developing sophisticated armed drones, Russia is reportedly deploying AI-enabled “smart” mines that detonate when they detect movement nearby, and Ukraine has successfully used autonomous “drone” boats in a major attack on the Russian navy at Sevastopol.
We see this revolution happening in Australia with our own forces. All of this raises the question: why did the government’s recent Defense Strategic Review fail to take seriously the implications of AI-enabled warfare?
AI has entered the Australian military.
Australia already has a variety of autonomous weapons and vessels it could deploy in a conflict.
Our Air Force expects to acquire several 12-meter-long uncrewed Ghost Bat aircraft to help keep its expensive F-35 fighter jets safe in combat.
At sea, the Defense Force is trialling a new type of uncrewed surveillance vessel, the Bluebottle, developed by the Australian company Ocius. And under the sea, Australia is building a prototype six-meter-long uncrewed Ghost Shark submarine.
We can expect many more such technologies in the future. The government’s recently announced $3.4 billion defense innovation accelerator aims to fast-track military technologies, including hypersonic missiles, directed-energy weapons and autonomous vehicles, into service sooner.
So how do AI and autonomy fit into our bigger strategic picture?
The recent Defense Strategic Review looked closely at whether Australia has the defense capabilities, posture and preparedness needed to defend its interests over the next decade and beyond. You would expect AI and autonomy to be a significant concern, especially since the review recommends a nontrivial $19 billion of spending over the next four years.
However, the review mentions autonomy only twice (both times in relation to existing weapons systems) and AI only once (in relation to the second pillar of the AUKUS agreement).
Countries are preparing for the third revolution.
Around the world, the major powers have made it clear they see AI as central to the future of warfare.
In the UK, the House of Lords is holding a public inquiry into the use of AI in weapons systems. In Luxembourg, the government recently hosted an important conference on autonomous weapons. And China has announced its intention to become the world leader in AI by 2030: its plan for the development of new-generation AI declares that “AI is a strategic technology that will lead the future”, both militarily and economically.
Similarly, Russian President Vladimir Putin has said that “whoever becomes the leader in this sphere will become the ruler of the world”, while the United States has adopted a “third offset strategy” and is investing heavily in AI, autonomy and robotics.
Unless we pay more attention to AI in our military strategy, we may end up fighting wars with outdated technologies. Russia saw this to its detriment last year when its Black Sea flagship, the missile cruiser Moskva, was sunk after reportedly being distracted by a drone attack.
Future regulation
Many people (myself included) hope that autonomous weapons will soon be regulated. Earlier this year I was invited as an expert witness to a meeting of governments in Costa Rica, where more than 30 Latin American and Caribbean nations called for regulation, many for the first time.
Regulation would likely take the form of ensuring meaningful human control over autonomous weapons systems (although we have yet to agree on what “meaningful control” looks like).
But regulation won’t make AI go away. We can still expect AI, and some levels of autonomy, to become critical components of our defenses in the near future.
There are settings, such as mine clearance, where autonomy is highly desirable. Indeed, AI will be immensely useful in managing the information space and in military logistics, where its use does not raise the ethical challenges it poses in other settings, such as the use of lethal weapons.
At the same time, autonomy creates strategic challenges. Alongside cutting costs and increasing capability, it is shifting the geopolitical order: Turkey, for example, is emerging as a major drone superpower.
We have to prepare
Australia must consider how it would defend itself in an AI-enabled world, where terrorists or rogue states can launch swarms of drones against us and it may be impossible to identify the attacker. A review that ignores all of this leaves us unprepared for the future.
We also need to engage more constructively in the ongoing diplomatic discussions about the use of AI in warfare. Sometimes the best defense is found in the political arena, not the military one.
- Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.