Faced with an enemy fighter jet, there’s one sensible thing a military drone should do: split. However, in December 2002, caught in the crosshairs of an Iraqi MiG, an unmanned US Predator was instructed to stay put. The MiG fired, the Predator fired back and the result, unhappily for the US, was a heap of drone parts in the southern Iraqi desert.
This incident is often regarded as the first dogfight between a drone, properly known as an unmanned aerial vehicle (UAV), and a conventional, manned fighter. Yet in a way, the Predator hardly stood a chance. US and British UAVs are operated remotely by pilots sitting thousands of kilometers away on US turf, so maneuvers are hobbled by signal delays of a quarter-second or more. This means evading missiles will always be nigh-on impossible — unless the UAVs pilot themselves.
In July this year, amid a haze of dry ice and revolving spotlights at the Warton aerodrome in Lancashire, BAE Systems launched a prototype UAV that might do just that. With a development cost of more than £140 million (US$225 million), the alien-looking Taranis was billed by the UK Ministry of Defence (MOD) as a “fully autonomous” craft that can fly deep into enemy territory to collect intelligence, drop bombs and “defend itself against manned and other unmanned enemy aircraft.” Lord Drayson, the UK minister for defense procurement from 2005 to 2007, said Taranis would have “almost no need for operator input.”
Illustration: Yusah
Taranis is just one example of a huge swing toward autonomous defense systems: machines that make decisions independent of any human input, with the potential to change modern warfare radically. States with advanced militaries such as the US and the UK see autonomy as a route to longer reach, greater efficiency and fewer repatriated body bags. The UK government’s Strategic Defence and Security Review, published last month, cited it as a means to “adapt to the unexpected” in a time of constrained resources. However, behind the technological glitz, autonomous systems hide a wealth of ethical and legal problems.
For some military tasks, armed robots can already take care of themselves. The sides of many allied warships sport a Gatling gun as part of the Phalanx system, which is designed to fire automatically at incoming missiles. Israel is deploying machine-gun turrets along its border with the Gaza Strip to target Palestinian infiltrators automatically. For this “See-Shoot” system, an Israeli commander told the industry magazine Defense News, a human operator will give the go-ahead to fire “at least in the initial phases of deployment.”
Phalanx and See-Shoot are automated systems, but they are not autonomous, a subtle yet crucial difference. A drinks machine is an example of an automated system: You push a certain button and out drops the corresponding bottle. In a similar way, the Phalanx Gatling gun waits for a certain blip to appear on its radar, then fires at it. Autonomous systems, on the other hand, perform much more complex tasks by taking thousands of readings from the environment. These translate to a near-infinite number of input states, which must be processed through lengthy computer code to find the best possible outcome. Some believe it’s the same basic method we use to make decisions ourselves.
High-profile armed systems such as Taranis have the true nature of their autonomy kept secret, but other projects hint at what might be in store. At the Robotics Institute of Carnegie Mellon University in Pennsylvania, researchers are using Pentagon funding to develop a six-wheeled tank that can find its own way across a battlefield. An early prototype, which tipped the scales at six tonnes, was nicknamed the Crusher thanks to its ability to flatten cars. The latest prototype, known as the Autonomous Platform Demonstrator (APD), weighs nine tonnes and can travel at 80kph.
The key to the APD’s autonomy is a hierarchy of self-navigation tools. First, it downloads a basic route from a satellite map, such as Google Earth. Once it has set off, stereo video cameras build up a 3D image of the environment so it can plan a more detailed route around obstacles. To make minor adjustments, lasers then make precision measurements of its proximity to surrounding terrain.
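That layered approach can be sketched in a few lines of Python. This is purely illustrative: every function name, data structure and number below is invented for the sake of the example, and the APD’s actual algorithms are not public. The point is only that each tier refines the plan handed down by the tier above it.

```python
# Illustrative sketch of a three-tier navigation hierarchy like the one
# described for the APD. All names and values here are invented.

def coarse_route(start, goal):
    """Tier 1: a rough waypoint list taken from an overhead satellite map.
    A real planner would run a graph search over the map here."""
    return [start, goal]

def refine_around_obstacles(route, obstacles):
    """Tier 2: stereo cameras build a 3D picture of the surroundings;
    any waypoint that turns out to be blocked becomes a detour step."""
    refined = []
    for waypoint in route:
        if waypoint in obstacles:
            refined.append(("detour", waypoint))
        else:
            refined.append(waypoint)
    return refined

def laser_adjust(lateral_offset_m, terrain_distance_m, clearance_m=1.0):
    """Tier 3: lasers give precise range to nearby terrain; nudge the
    vehicle sideways whenever it drifts inside the clearance margin."""
    if terrain_distance_m >= clearance_m:
        return lateral_offset_m
    return lateral_offset_m + (clearance_m - terrain_distance_m)

route = refine_around_obstacles(coarse_route("depot", "ridge"), {"ridge"})
print(route)  # the blocked waypoint is replaced by a detour step
```

The important design feature is that the cheap, coarse tier never needs to be re-run when the environment changes; only the lower, local tiers react, which is what lets the vehicle make "minor adjustments" on the fly.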
Dimi Apostolopoulos, principal investigator for the APD, said its payload could include reconnaissance systems or mounted weapons, primarily for use in the most dangerous areas where commanders are loath to deploy human soldiers.
“Strange as it may sound, we believe the introduction of robotics will change warfare,” he said. “There’s no doubt about that. It’ll take a lot of people out of the toughest situations. And my belief is that this is a good thing for both sides.”
Other research in military robots ranges from big to small, from impressive to bizarre. At the robotics lab Boston Dynamics, engineers funded by the US Defense Advanced Research Projects Agency (DARPA) are developing a four-legged robot that “can go anywhere people and animals can go.” Called BigDog, the robot uses sensors and motors to control balance autonomously, trotting over rugged terrain like a creepy headless goat.
Perhaps more creepy is DARPA’s research proposal to hijack flying insects for surveillance — in other words, harness a biological “UAV” that is already autonomous. According to the proposal, tiny, electro-mechanical controllers could be implanted into the insects during their metamorphosis, although some researchers have said this idea is a little too far-fetched.
What is clear is that there is huge investment in military robotics, with UAVs at the forefront. The RAF has five armed Reaper UAVs and five more on order. The US is way ahead, with the Pentagon planning to increase its fleet of Reaper, Predator and other “multirole” UAVs from 300 next year to 800 in 2020.
As Gordon Johnson of the US Joint Forces Command famously said of military robots: “They don’t get hungry. They’re not afraid. They don’t forget their orders.”
His statement was reminiscent of a line in the 1986 blockbuster Short Circuit by Newton Crosby, a scientist who had created a highly autonomous military robot: “It doesn’t get scared. It doesn’t get happy. It doesn’t get sad. It just runs programs!”
In that film, the robot went AWOL.
What happens if real-life military robots go wrong? Although we are a long way from the sophisticated robots of science fiction, the military are still considering how to tackle potential failure. In June, Werner Dahm, then-chief scientist of the US Air Force, released the USAF “vision” report Technology Horizons, in which he argued that autonomous systems, while essential for the air force’s future, must be put through “verification and validation” (V&V) to be certified as trustworthy.
Military systems already have to undergo V&V using a method largely unchanged since the Apollo program. It’s what Dahm calls the “brute force” approach: systematically testing every possible state of a system until it is 100 percent certifiable. Today, says Dahm, more than half the cost of modern fighter aircraft is in software development, while a huge chunk of that cost is in V&V. Yet as soon as one contemplates autonomous systems, which have near-infinite input states, brute-force V&V is out of the question. Although Dahm says V&V could be made easier by designing software to “anticipate” the testing process, he believes we will ultimately have to satisfy ourselves with certification below 100 percent.
“The average citizen might say, well, 99.99 percent, that’s not good enough,” Dahm said. “There are two important responses to that. One, you’d be surprised the car you’re driving isn’t 99.99 percent [certified] in most of what it does ... and the other part of the answer is, if you insist on 100 percent [certification], I’ll never be able to get the highly autonomous system.”
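Dahm’s “brute force” point can be made concrete with a toy calculation (the sensor counts below are invented for illustration, not drawn from any real aircraft): the number of input states to test is the product of each input’s possible values, which stays manageable for an automated system but explodes for an autonomous one.

```python
from math import prod

# Toy illustration of why exhaustive ("brute force") V&V scales badly.
# The sensor counts are invented for illustration only.

def brute_force_cases(input_levels):
    """Number of distinct input states if every combination must be tested."""
    return prod(input_levels)

# An automated system, like a drinks machine: a dozen buttons, one at a time.
print(brute_force_cases([12]))         # 12 test cases -- trivially exhaustible

# An autonomous system fusing ten sensors, each with just 256 readings:
print(brute_force_cases([256] * 10))   # 256**10, about 1.2e24 test cases
```

Even at a billion tests per second, the second figure would take tens of millions of years to enumerate, which is why Dahm argues for certification below 100 percent rather than exhaustive testing.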
Even existing military robots, which are human-operated, have become controversial. Some believe the CIA’s use of UAVs to target alleged insurgents in Pakistan goes against a 1976 executive order by former US president Gerald Ford to ban political assassinations. Yet for autonomous systems, with humans gradually taken out of the loop, it gets more complicated.
“If a machine that has learnt on the job shoots at an ambulance rather than a tank, whose fault was it?” asked Chris Elliott, a barrister and systems engineer. “Who has committed the crime?”
Elliott’s concerns are echoed by other lawyers and scientists. Noel Sharkey, a professor of artificial intelligence at Sheffield University, says it is impossible for autonomous robots today to distinguish reliably between civilians and combatants, a cornerstone of international humanitarian law. He also believes robots lack the subtle judgment to adhere to another humanitarian law: the principle of proportionality that says civilian casualties must not be “excessive” for the military advantage gained.
“It’s not always appropriate to fire and kill,” Sharkey said. “There are so many examples in the Iraq War where insurgents have been in an alleyway, marines have arrived with guns raised, but noticed the insurgents were actually carrying a coffin. So the marines lower their machine guns, take off their helmets and let the insurgents pass. Now, a robot couldn’t make that kind of decision. What features does it look for? Could the box be carrying weapons?”
The issue is autonomous strike — that is, a robot making its own firing decision — and here opinions differ.
An MOD spokesperson said via e-mail that, in attack roles, “there will remain an enduring need for appropriately trained human involvement” in operating UAVs “for the foreseeable future.”
Dahm believes the USAF holds the same view, though it is hard to find in the service’s latest UAV Flight Plan.
“Increasingly, humans will no longer be ‘in the loop’ but rather ‘on the loop’ — monitoring the execution of certain decisions,” it reads. “Simultaneously, advances in AI [artificial intelligence] will enable systems to make combat decisions ... without necessarily requiring human input.”
It adds, however: “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions.”
A 2008 paper by the US Office of Naval Research also admits that there are ethical and legal obstacles to autonomy. It suggests a “sensible goal” would be to program autonomous robots to act “at least as ethically” as human soldiers, although it notes that “accidents will continue to occur, which raise the question of legal responsibility.” The paper also considers the idea that autonomous robots could one day be treated as “legal quasi-agents,” like children.
Rob Alexander, a computer scientist at the University of York, thinks this would be a step too far.
“A machine cannot be held accountable,” he said. “Certainly not with any foreseeable technology — we’re not talking about Star Trek androids here. These things are machines and the operators or designers must be responsible for their behavior.”
There are broader issues. In his recent book Cities Under Siege: The New Military Urbanism, Stephen Graham, a human geography expert at Durham University, argues that autonomy is the result of shifting warfare from fields to cities, where walls and hideouts “undermine” the hegemony of advanced militaries. However, the real danger, Graham says, is that autonomous robots reduce the political cost of going to war, so that it no longer becomes a last resort.
“You don’t get the funeral corteges going through small towns in Wiltshire,” he said.
Joanne Mariner, a lawyer at Human Rights Watch, voiced the same concern.
Given the limitations of current robotics, the deeper ethical and legal issues of autonomy will, for the near future, stay largely hypothetical. According to Dahm, autonomy will have more imminent uses as part of large military systems, performing tasks that are becoming too laborious for humans. Satellites, for example, could autonomously filter reconnaissance data so they transmit only those images displaying recognizable targets. Indeed, military commanders already use software with elements of autonomy to help in certain fiddly tasks, such as organizing the deployment of munitions. As the years go by, more tactical decisions, mundane at first, could be handed to machines.
The natural reaction is that we’re paving the way for a dystopian future akin to various science fiction films, a world taken over by self-aware robots. However, that would be missing the point: In exchanging flesh and blood for circuits and steel, it is the precise opposite of artificial intelligence we should be afraid of.
As Sharkey said: “I don’t think we’re on the path to a Terminator-style future. Those robots were clever.”