Last month marked the 17th anniversary of the Sept. 11, 2001, terror attacks. With it came a new milestone: The US military has been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where the US is officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of this year.
The wars stemming from the terror attacks continue with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare, one that will greatly increase its capacity to wage war everywhere forever.
On Friday, the US Department of Defense closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (JEDI). JEDI is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as US$10 billion over 10 years, which is why big tech companies are fighting hard to win it.
Illustration: Yusha
Not Google, however: A pressure campaign by its workers forced management to drop out of the running.
At first glance, JEDI might look like just another project to modernize information technology. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With about 3.4 million users and 4 million devices, the department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.
However, the real force driving JEDI is the desire to weaponize artificial intelligence (AI) — what the department has begun calling “algorithmic warfare.”
By pooling the military’s data into a modern cloud platform and using the machine-learning services that such platforms provide to analyze that data, JEDI will help the Pentagon realize its AI ambitions.
The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center, which will oversee the roughly 600 AI projects under way across the department at a planned cost of US$1.7 billion. This month, the US Defense Advanced Research Projects Agency, the Pentagon’s storied research and development wing, announced it would be investing up to US$2 billion over the next five years into AI weapons research.
So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons — Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.
However, AI has already begun rewiring warfare, even if it has not taken the form of literal terminators. There are less cinematic, but equally scary ways to weaponize AI. You do not need algorithms pulling the trigger for algorithms to play an extremely dangerous role.
To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself is not particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and about 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.
The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy, but who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?
This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries — a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of US Marine legend Smedley Butler, the forever war is one of the longest cons yet.
However, the vagueness of the enemy also creates certain challenges. It is one thing to look at a map of North Vietnam and pick places to bomb. It is quite another to sift through vast quantities of information from all over the world to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI — or, more precisely, machine learning — comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.
The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.
“We have analysts looking at full-motion video, staring at screens six, seven, eight, nine, 10, 11 hours at a time,” says the project director, Lieutenant General Jack Shanahan.
Maven’s software automates that work, then relays its discoveries to a human. So far, it has been a big success: The software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software onto the drones themselves so they can locate targets in real time.
Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven.
Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: Sharper knives can help the military save lives.
However, in the case of weaponized AI, the knives in question are not particularly sharp. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms — algorithms that cannot recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?
However, the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. Last year alone, the US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these do not suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage.” Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.
Furthermore, the line between civilian and combatant is highly porous in the era of the forever war.
A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action,” even if they were not one of the targets.
The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures,” which can be as vague as being a military-aged male in a particular area.
In other words, the problem is not the quality of the tools, but the institution wielding them.
AI will only make that institution more brutal.
The forever war demands that the US see enemies everywhere. AI promises to find those enemies faster — even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a classified machine-learning model associates with hostile activity.
Call it death by big data.
AI also has the potential to make the forever war more permanent, by giving some of the US’ largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military, but algorithmic warfare will bring big tech deeper into the military-industrial complex and give billionaires like Amazon chief executive officer Jeff Bezos a powerful incentive to ensure the forever war lasts forever.
Enemies will be found. Money will be made.