It is commonly believed that the future of humanity will one day be threatened by the rise of artificial intelligence (AI), perhaps embodied in malevolent robots. Yet as we enter the third decade of the millennium, it is not the singularity we should fear, but rather a much older enemy: ourselves.
Think less The Terminator, more Minority Report.
We are rapidly developing literal mind-reading technology without any framework for how to control it. Imagine, for a moment, if human beings had evolved to be able to read each other’s minds. How would that have gone for us?
To answer this question, consider your own internal dialogues. It is safe to assume that every one of us has had thoughts that would be shocking even (or especially) to those closest to us.
How would those who might not wish us well have reacted to being able to hear what emotional rants go through our heads from time to time?
Would they have had the judgment to let them pass, recognizing them as just flashes of emotion? Or would some have responded opportunistically, taking advantage of thoughts we would otherwise not wish to betray?
Evolution did not enable us to read minds because that power might have ended our existence as a species. Instead, as our ancient ancestors organized into groups for protection, most of us learned what could be said and what was best left unspoken.
Over time, this became a highly evolved human trait that enabled societies to form, cities to rise, and even hundreds of stressed-out people to be jammed into a flying tube, usually without attacking their seat-mates.
It forms a core part of what we now call EQ, or emotional intelligence.
And yet technology is now beginning to threaten this necessary evolutionary adaptation in a fundamental way.
The first stage has taken place in social media. Facebook underscored this trajectory when Russian manipulation of the platform affected the 2016 US presidential election.
And Twitter, which empowers a user to dash off a passing thought or emotion that might then be shared with millions, amplifies this trend. Imagine how North Korean leaders struggled to interpret US President Donald Trump’s tweet of nuclear “fire and fury.”
Was it a real threat from a new and erratic US leader, or just a spur-of-the-moment exhalation, a mental flash without a filter that would best be ignored?
Back in the days of the bipolar superpower world, the iconic US-Soviet hotline telephone was installed as a way to clarify each side’s intentions, lest, through some misunderstanding, the world disappear beneath a nuclear mushroom cloud.
Today, in our much more complicated multipolar and asymmetric-threat-driven world, social media offers all who are willing a giant, unedited megaphone.
Social media has become a tool that can undermine democracy; and yet it is mere child’s play compared to what is now barreling our way.
Companies ranging from start-ups to multinational conglomerates have recently announced startling innovations that enable mind reading.
Elon Musk’s company Neuralink is seeking approval for human trials of a device implanted in users’ brains to read their minds.
Nissan has developed “Brain-to-Vehicle” technology that enables a car to read instructions from a driver’s mind. Facebook has funded scientists who use brainwaves to decode speech.
A recent paper in the science journal Nature explains how AI can create speech by analyzing brain signals.
Researchers at Columbia University have developed technology that can analyze brain activity to determine what a user wants and vocalize those desires via a synthesizer.
Clearly, these kinds of advances can offer real benefits, including helping those suffering from paralysis or neurological disorders. Early examples of neuroprosthetics, such as cochlear implants, which enable a deaf person to hear, or promising devices that could allow the blind to see, are already in use.
However, there are also darker potential applications, like enabling advertisers to micro-hone their offerings to individuals’ unspoken desires, or employers to spy on their workers, or police to monitor citizens’ possible criminal intent on a vast scale, akin to the way London residents today are tracked on CCTV.
An early warning is ToTok, one of the most downloaded social-media apps, which, it was recently revealed, the United Arab Emirates government had been using to spy on users.
What happens if mind-reading devices are hacked? It is difficult to imagine a more relevant area of data privacy than that which exists in the human brain.
Musk believes that brain interfaces will be necessary for humans to keep up with AI. This brings us back to Philip K. Dick’s science fiction horror story The Minority Report (the basis of the 2002 film). Consider the myriad thorny ethical, legal, and social-order implications of a police officer stopping a crime before it takes place by “assessing” an individual’s likely intent from their brainwaves.
When is a crime committed? When the thought takes place? When actions begin that manifest the thought in reality? When the gun is pointed? When the trigger finger tightens?
A principal challenge of technological innovation is that it usually takes society a long time to catch up, understand the broader implications of how a new technology can be used and abused, and provide appropriate legal and regulatory frameworks to govern its use.
In the second decade of this millennium, social media moved from a tool for connection to a platform with immense power to spread lies and manipulate elections.
Society is now grappling with how to harness the best of this innovation, while mitigating its potential for abuse.
Perhaps, before we have even figured that out, the third decade of the millennium will confront us with far more consequential technological challenges.
Alexander Friedman, an investor, is a former chief executive of GAM Investments, chief investment officer of UBS, chief financial officer of the Bill & Melinda Gates Foundation and a White House fellow.
Copyright: Project Syndicate