Warnings about the risks posed by artificial intelligence (AI) seem to be everywhere nowadays. From Elon Musk to former US secretary of state Henry Kissinger, people are sounding the alarm that super-smart computers could wipe us out, like in the Terminator films. To hear them talk, you would think we were on the brink of dystopia — that Skynet is nearly upon us.
These warnings matter, but they gloss over a more urgent problem: weaponized AI is already here. As you read this, powerful interests — from corporations to state agencies such as militaries and police forces — are using AI to monitor people, assess them and make consequential decisions about their lives.
Should we have a treaty ban on autonomous weapons? Absolutely. However, we do not need to take humans “out of the loop” to do damage. Faulty algorithmic processing has been hurting poor and vulnerable communities for years.
I first noticed how data-driven targeting could go wrong five years ago, in Yemen. I was in the capital, Sana’a, interviewing survivors of a US drone attack that had killed innocent people. Two of the civilians who died could have been US allies. One was the village policeman and the other was an imam who had preached against al-Qaeda days before the strike. One of the men’s surviving relatives, an engineer called Faisal bin Ali Jaber, came to me with a simple question: Why were his loved ones targeted?
Faisal and I traveled 11,000km from the Arabian Peninsula to Washington looking for answers. White House officials met Faisal, but no one would explain why his family got caught in the crosshairs.
In time, the truth became clear. Faisal’s relatives died because they got mistakenly caught up in a semi-automated targeting matrix.
We know this because the US has admitted that its drones attack targets whose identities are unknown. That is where AI comes in. The US does not have deep human intelligence sources in Yemen, so it relies heavily on massive sweeps of signals data. AI processes this data — and throws up red flags in a targeting algorithm. A human fired the missiles, but almost certainly did so on the software’s recommendation.
These kinds of attacks, called “signature strikes,” make up the majority of drone strikes. Meanwhile, civilian airstrike deaths have become more numerous under US President Donald Trump — more than 6,000 last year in Iraq and Syria alone.
This is AI at its most controversial.
The controversy spilled over to Google this spring, with thousands of the company’s employees protesting — and some resigning — over a bid to help the US Department of Defense analyze drone feeds. However, this is not the only potential abuse of AI we need to consider.
Journalists have started exploring many problematic uses of AI: predictive policing heatmaps have amplified racial bias in our criminal justice system; facial recognition, which the police are testing in cities such as London, has been wrong as much as 98 percent of the time; while shopping online, you might be paying more than your neighbor because of discriminatory pricing; and we have all heard how state actors have exploited Facebook’s News Feed to put propaganda on the screens of millions.
Academics sometimes say that the field of AI and machine learning is in its adolescence. If that is the case, it is an adolescent we have given the power to influence our news, to hire and fire people, and even to kill them.
For human rights advocates and concerned citizens, investigating and controlling these uses of AI is one of the most urgent issues we face. Every time we hear of a data-driven policy decision, we should ask ourselves: Who is using the software? Who are they targeting? Who stands to gain — and who to lose? How do we hold the people who use these tools, as well as the people who built them, to account?
Cori Crider, a US lawyer, investigates the national security state and the ethics of technology in intelligence. She is a former director of international human rights organization Reprieve.
Copyright: Project Syndicate