In the past 15 years, we have witnessed an explosion in the amount of digital data available — from the Internet, social media, scientific equipment, smartphones, surveillance cameras and many other sources — and in the computer technologies used to process it. “Big data,” as it is known, will undoubtedly deliver important scientific, technological and medical advances, but it also poses serious risks if it is misused or abused.
Already, major innovations such as Internet search engines, machine translation and image labeling have relied on applying machine-learning techniques to vast data sets. In the near future, big data could significantly improve government policymaking, social-welfare programs and scholarship.
However, having more data is no substitute for having high-quality data. For example, a recent article in Nature reports that election pollsters in the US are struggling to obtain representative samples of the population, because US law bars them from using automated dialers to call cellphones, even as Americans increasingly give up their landlines.
While one can find countless political opinions on social media, these are not reliably representative of voters, either. In fact, a substantial share of tweets and Facebook posts about politics are computer-generated.
In recent years, automated programs based on biased data sets have caused numerous scandals.
For example, in April last year, when a college student searched Google Images for “unprofessional hairstyles for work,” the results showed mostly pictures of black people; when the student replaced “unprofessional” with “professional,” Google returned mostly pictures of white people.
However, this was not the result of bias on the part of Google’s programmers; rather, it reflected how people had labeled pictures on the Internet.
A big data program that drew on such labeled images to evaluate hiring and promotion decisions might penalize black candidates who resemble the pictures returned for “unprofessional hairstyles,” thereby perpetuating long-standing social biases.
This is not just a hypothetical possibility. Last year, a ProPublica investigation of “recidivism risk models” demonstrated that a widely used methodology to determine sentences for convicted criminals systematically overestimates the likelihood that black defendants will commit crimes in the future, and underestimates the risk that white defendants will do so.
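To make the mechanism concrete, here is a minimal sketch in Python using entirely invented data (it models neither Google’s systems nor the tool ProPublica examined): when historical hiring labels quietly encode a group-based penalty, any model fitted to those labels will reproduce that penalty, even for equally qualified candidates.

```python
# Toy illustration of bias inherited from historical labels.
# All data is synthetic and hypothetical.
import random

random.seed(0)

def make_history(n=10000):
    """Synthetic hiring records: 'skill' is the only legitimate signal,
    but past decisions quietly favored group 'A' regardless of skill."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()                      # true qualification, 0..1
        favoritism = 0.2 if group == "A" else 0.0    # bias baked into past decisions
        hired = (skill + favoritism) > 0.6           # the biased historical label
        records.append((group, skill, hired))
    return records

def hire_rate_by_group(records, lo=0.45, hi=0.55):
    """Empirical hire rate per group among candidates of near-identical skill."""
    stats = {"A": [0, 0], "B": [0, 0]}
    for group, skill, hired in records:
        if lo < skill < hi:
            stats[group][0] += int(hired)
            stats[group][1] += 1
    return {g: hires / max(total, 1) for g, (hires, total) in stats.items()}

print(hire_rate_by_group(make_history()))
# -> roughly {'A': 1.0, 'B': 0.0}: identical skill, opposite outcomes.
# A model trained on these labels would learn group membership as a "predictor"
# and carry the historical bias into future hiring or promotion decisions.
```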
Another hazard of big data is that it can be gamed. When people know that a data set is being used to make important decisions that will affect them, they have an incentive to tip the scales in their favor. For example, teachers who are judged according to their students’ test scores may be more likely to “teach to the test,” or even to cheat.
Similarly, college administrators who want to move their institutions up in the US News and World Report rankings have made unwise decisions, such as investing in extravagant gyms at the expense of academics. Worse, they have made grotesquely unethical decisions, such as the effort by Mount Saint Mary’s University to boost its “retention rate” by identifying and expelling weaker students in the first few weeks of school.
Even Google’s search engine is not immune. Despite being driven by an enormous amount of data overseen by some of the world’s top data scientists, its results are susceptible to “search-engine optimization” and manipulation, such as “Google bombing,” “spamdexing” and other methods serving parochial interests.
A third hazard is privacy violations, because so much of the data now available contains personal information. In recent years, enormous collections of confidential data have been stolen from commercial and government sites, and researchers have shown how people’s political opinions, or even their sexual preferences, can be accurately gleaned from seemingly innocuous online postings, such as movie reviews, even when those postings are published pseudonymously.
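As a rough illustration of how such inferences can work, here is a toy sketch of a linkage attack, loosely in the spirit of the well-known Netflix Prize re-identification study; every name and title below is invented, and real attacks rely on far larger datasets and statistical matching rather than exact lookups.

```python
# Toy linkage (re-identification) attack on invented data:
# a pseudonymous record can be matched to a named public profile
# when the two share enough rare, overlapping items.

pseudonymous_reviews = {            # "anonymized" dataset: pseudonym -> titles reviewed
    "user_8841": {"Stalker (1979)", "Decalogue", "Blockbuster A"},
    "user_1129": {"Blockbuster A", "Blockbuster B"},
}

public_profiles = {                 # auxiliary data: real name -> titles reviewed publicly
    "Alice Chen": {"Stalker (1979)", "Decalogue"},
    "Bob Lee":    {"Blockbuster B"},
}

def reidentify(pseudo_items, profiles, min_overlap=2):
    """Return the named profile sharing the most items, if the overlap is telling."""
    best_name, best_overlap = None, 0
    for name, items in profiles.items():
        overlap = len(pseudo_items & items)
        if overlap > best_overlap:
            best_name, best_overlap = name, overlap
    return best_name if best_overlap >= min_overlap else None

for pseudonym, items in pseudonymous_reviews.items():
    print(pseudonym, "->", reidentify(items, public_profiles))
# user_8841 -> Alice Chen   (two rare titles in common are enough to link the records)
# user_1129 -> None         (only popular titles, so no confident match)
```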
Finally, big data poses a challenge for accountability. Someone who feels that he or she has been treated unfairly by an algorithm’s decision often has no way to appeal it, either because the specific result cannot be interpreted or because the people who wrote the algorithm refuse to provide details about how it works.
While governments or corporations might intimidate anyone who objects by describing their algorithms as “mathematical” or “scientific,” they, too, are often baffled by their creations’ behavior. The EU recently adopted a measure guaranteeing people affected by algorithms a “right to an explanation,” but only time will tell how this will work in practice.
When people who are harmed by big data have no avenues for recourse, the results can be toxic and far-reaching, as data scientist Cathy O’Neil demonstrates in her recent book Weapons of Math Destruction.
The good news is that the hazards of big data can largely be avoided. But they will not be avoided unless we zealously protect people’s privacy, detect and correct unfairness, use algorithmic recommendations prudently, and maintain a rigorous understanding of algorithms’ inner workings and of the data that informs their decisions.
Ernest Davis is a professor of computer science at the Courant Institute of Mathematical Sciences, New York University.
Copyright: Project Syndicate