Recent months could be remembered as the moment when predictive artificial intelligence (AI) went mainstream. While prediction algorithms have been in use for decades, the release of applications such as OpenAI’s ChatGPT — and its rapid integration with Microsoft’s Bing search engine — may have opened the floodgates for user-friendly AI.
Within weeks of ChatGPT’s release, it had already attracted 100 million monthly users, many of whom have doubtless already experienced its dark side — from insults and threats to disinformation and a demonstrated ability to write malicious code.
The chatbots that are generating headlines are just the tip of the iceberg. AI tools for creating text, speech, art and video are progressing rapidly, with far-reaching implications for governance, commerce and civic life. Not surprisingly, capital is flooding into the sector, with governments and companies investing in start-ups to develop and deploy the latest machine-learning tools.
Illustration: Yusha
These new applications combine historical data with machine learning, natural language processing and deep learning to determine the probability of future events.
Crucially, adoption of the new natural language processing and generative AI will not be confined to the wealthy countries and companies, such as Google, Meta and Microsoft, that spearheaded their creation.
These technologies are already spreading across low and middle-income settings, where predictive analytics for everything from reducing urban inequality to addressing food security hold tremendous promise for cash-strapped governments, firms and non-governmental organizations seeking to improve efficiency and unlock social and economic benefits.
The problem is that too little attention has been paid to the potential negative externalities and unintended effects of these technologies. The most obvious risk is that unprecedentedly powerful predictive tools could strengthen authoritarian regimes’ surveillance capacity.
One widely cited example is China’s “social-credit system,” which uses credit histories, criminal convictions, online behavior and other data to assign a score to every person in the country.
Those scores can be used to determine whether someone can secure a loan, access a good school, travel by rail or air, and so forth. Although China’s system is billed as a tool to improve transparency, it doubles as an instrument of social control.
Even when used by ostensibly well-intentioned democratic governments, companies focused on social impact and progressive nonprofits, predictive tools can generate suboptimal outcomes.
Design flaws in the underlying algorithms and biased data sets can lead to privacy breaches and identity-based discrimination.
This has already become a glaring issue in criminal justice, where predictive analytics routinely perpetuate racial and socio-economic disparities. For example, an AI system built to help US judges assess the likelihood of recidivism erroneously determined that black defendants are at far greater risk of re-offending than white ones.
Concerns about how AI could deepen inequalities in the workplace are also growing. Predictive algorithms have been increasing efficiency and profits in ways that benefit managers and shareholders at the expense of rank-and-file workers — especially in the gig economy.
In all these examples, AI systems are holding up a fun house mirror to society, reflecting and magnifying our biases and inequities.
As technology researcher Nanjira Sambuli has observed, digitization tends to exacerbate, rather than ameliorate, existing political, social and economic problems.
The enthusiasm to adopt predictive tools must be balanced against informed and ethical consideration of their intended and unintended effects. Where the effects of powerful algorithms are disputed or unknown, the precautionary principle would counsel against deploying them.
AI must not become another domain where decision-makers ask for forgiveness rather than permission. That is why the UN High Commissioner for Human Rights and others have called for moratoriums on the adoption of AI systems until ethical and human-rights frameworks have been updated to account for their potential harms.
Crafting the appropriate frameworks would require a consensus on the basic principles that should inform the design and use of predictive AI tools.
Fortunately, the race for AI has led to a parallel flurry of research, initiatives, institutes and networks on ethics. While civil society has taken the lead, intergovernmental entities such as the Organisation for Economic Co-operation and Development and UNESCO have also become involved.
The UN has been working on building universal standards for ethical AI since at least 2021. The EU has proposed an AI act — the first such effort by a major regulator — which would block certain uses, such as those resembling China’s social-credit system, and subject other high-risk applications to specific requirements and oversight.
This debate has been concentrated overwhelmingly in North America and Western Europe.
However, low and middle-income countries have their own baseline needs, concerns and social inequities to consider. There is ample research showing that technologies developed by and for markets in advanced economies are often inappropriate for less-developed economies.
If AI tools are simply imported and put into wide use before the necessary governance structures are in place, they could easily do more harm than good. All these issues must be considered to devise truly universal principles for AI governance.
Recognizing these gaps, the think tanks Igarape Institute and New America recently launched a Global Task Force on Predictive Analytics for Security and Development. The task force is to convene digital-rights advocates, public-sector partners, tech entrepreneurs and social scientists from the Americas, Africa, Asia and Europe, with the goal of defining principles for the use of predictive technologies in public safety and sustainable development in the Global South.
Formulating these principles and standards is just the first step. The bigger challenge will be to marshal the international, national and subnational collaboration and coordination needed to implement them in law and practice.
In the global rush to develop and deploy new predictive AI tools, harm-prevention frameworks are essential to ensure a secure, prosperous, sustainable and human-centered future.
Robert Muggah, cofounder of the Igarape Institute and the SecDev Group, is a member of the World Economic Forum’s Global Future Council on Cities of Tomorrow and an adviser to the Global Risks Report. Gabriella Seiler is a consultant at the Igarape Institute and a partner and director at Kunumi. Gordon LaForge is a senior policy analyst at New America and a lecturer at the Thunderbird School of Global Management at Arizona State University.
Copyright: Project Syndicate