Last month marked the 17th anniversary of the Sept. 11, 2001, terror attacks. With it came a new milestone: The US military has been in Afghanistan for so long that someone born after the attacks is now old enough to go fight there. They can also serve in the six other places where the US is officially at war, not to mention the 133 countries where special operations forces have conducted missions in just the first half of this year.
The wars stemming from the terror attacks continue with no end in sight. Now, the Pentagon is investing heavily in technologies that will intensify them. By embracing the latest tools that the tech industry has to offer, the US military is creating a more automated form of warfare, one that will greatly increase its capacity to wage war everywhere forever.
On Friday, the US Department of Defense closes the bidding period for one of the biggest technology contracts in its history: the Joint Enterprise Defense Infrastructure (JEDI). JEDI is an ambitious project to build a cloud computing system that serves US forces all over the world, from analysts behind a desk in Virginia to soldiers on patrol in Niger. The contract is worth as much as US$10 billion over 10 years, which is why big tech companies are fighting hard to win it.
Not Google, however: A pressure campaign by its workers forced management to drop out of the running.
At first glance, JEDI might look like just another project to modernize information technology. Government IT tends to run a fair distance behind Silicon Valley, even in a place as lavishly funded as the Pentagon. With about 3.4 million users and 4 million devices, the department’s digital footprint is immense. Moving even a portion of its workloads to a cloud provider such as Amazon will no doubt improve efficiency.
However, the real force driving JEDI is the desire to weaponize artificial intelligence (AI) — what the department has begun calling “algorithmic warfare.”
By pooling the military’s data into a modern cloud platform and using the machine-learning services that such platforms provide to analyze that data, JEDI will help the Pentagon realize its AI ambitions.
The scale of those ambitions has grown increasingly clear in recent months. In June, the Pentagon established the Joint Artificial Intelligence Center, which will oversee the roughly 600 AI projects under way across the department at a planned cost of US$1.7 billion. This month, the US Defense Advanced Research Projects Agency, the Pentagon’s storied research and development wing, announced it would be investing up to US$2 billion over the next five years into AI weapons research.
So far, the reporting on the Pentagon’s AI spending spree has largely focused on the prospect of autonomous weapons — Terminator-style killer robots that mow people down without any input from a human operator. This is indeed a frightening near-future scenario and a global ban on autonomous weaponry of the kind sought by the Campaign to Stop Killer Robots is absolutely essential.
However, AI has already begun rewiring warfare, even if it has not taken the form of literal terminators. There are less cinematic, but equally scary ways to weaponize AI. You do not need algorithms pulling the trigger for algorithms to play an extremely dangerous role.
To understand that role, it helps to understand the particular difficulties posed by the forever war. The killing itself is not particularly difficult. With a military budget larger than that of China, Russia, Saudi Arabia, India, France, Britain and Japan combined, and about 800 bases around the world, the US has an abundance of firepower and an unparalleled ability to deploy that firepower anywhere on the planet.
The US military knows how to kill. The harder part is figuring out whom to kill. In a more traditional war, you simply kill the enemy, but who is the enemy in a conflict with no national boundaries, no fixed battlefields, and no conventional adversaries?
This is the perennial question of the forever war. It is also a key feature of its design. The vagueness of the enemy is what has enabled the conflict to continue for nearly two decades and to expand to more than 70 countries — a boon to the contractors, bureaucrats and politicians who make their living from US militarism. If war is a racket, in the words of US Marine legend Smedley Butler, the forever war is one of the longest cons yet.
However, the vagueness of the enemy also creates certain challenges. It is one thing to look at a map of North Vietnam and pick places to bomb. It is quite another to sift through vast quantities of information from all over the world to identify a good candidate for a drone strike. When the enemy is everywhere, target identification becomes far more labor-intensive. This is where AI — or, more precisely, machine learning — comes in. Machine learning can help automate one of the more tedious and time-consuming aspects of the forever war: finding people to kill.
The Pentagon’s Project Maven is already putting this idea into practice. Maven, also known as the Algorithmic Warfare Cross-Functional Team, made headlines recently for sparking an employee revolt at Google over the company’s involvement. Maven is the military’s “pathfinder” AI project. Its initial phase involves using machine learning to scan drone video footage to help identify individuals, vehicles and buildings that might be worth bombing.
“We have analysts looking at full-motion video, staring at screens six, seven, eight, nine, 10, 11 hours at a time,” says the project director, Lieutenant General Jack Shanahan.
Maven’s software automates that work, then relays its discoveries to a human. So far, it has been a big success: The software has been deployed to as many as six combat locations in the Middle East and Africa. The goal is to eventually load the software onto the drones themselves so they can locate targets in real time.
Won’t this technology improve precision, thus reducing civilian casualties? This is a common argument made by higher-ups in both the Pentagon and Silicon Valley to defend their collaboration on projects like Maven.
Code for America’s Jen Pahlka puts it in terms of “sharp knives” versus “dull knives”: Sharper knives can help the military save lives.
However, in the case of weaponized AI, the knives in question are not particularly sharp. There is no shortage of horror stories of what happens when human oversight is outsourced to faulty or prejudiced algorithms — algorithms that cannot recognize black faces, or that reinforce racial bias in policing and criminal sentencing. Do we really want the Pentagon using the same technology to help determine who gets a bomb dropped on their head?
However, the deeper problem with the humanitarian argument for algorithmic warfare is the assumption that the US military is an essentially benevolent force. Many millions of people around the world would disagree. Last year alone, the US and allied strikes in Iraq and Syria killed as many as 6,000 civilians. Numbers like these do not suggest a few honest mistakes here and there, but a systemic indifference to “collateral damage.” Indeed, the US government has repeatedly bombed civilian gatherings such as weddings in the hopes of killing a high-value target.
Furthermore, the line between civilian and combatant is highly porous in the era of the forever war.
A report from the Intercept suggests that the US military labels anyone it kills in “targeted” strikes as “enemy killed in action,” even if they were not one of the targets.
The so-called “signature strikes” conducted by the US military and the CIA play similar tricks with the concept of the combatant. These are drone attacks on individuals whose identities are unknown, but who are suspected of being militants based on displaying certain “signatures,” which can be as vague as being a military-aged male in a particular area.
In other words, the problem is not the quality of the tools, but the institution wielding them.
AI will only make that institution more brutal.
The forever war demands that the US sees enemies everywhere. AI promises to find those enemies faster — even if all it takes to be considered an enemy is exhibiting a pattern of behavior that a classified machine-learning model associates with hostile activity.
Call it death by big data.
AI also has the potential to make the forever war more permanent, by giving some of the US’ largest companies a stake in perpetuating it. Silicon Valley has always had close links to the US military, but algorithmic warfare will bring big tech deeper into the military-industrial complex and give billionaires like Amazon chief executive officer Jeff Bezos a powerful incentive to ensure the forever war lasts forever.
Enemies will be found. Money will be made.