It is commonly believed that the future of humanity will one day be threatened by the rise of artificial intelligence (AI), perhaps embodied in malevolent robots. Yet as we enter the third decade of the millennium, it is not the singularity we should fear, but rather a much older enemy: ourselves.
Think less The Terminator, more Minority Report.
We are rapidly developing literal mind-reading technology without any framework for how to control it. Imagine, for a moment, if human beings had evolved to be able to read each other’s minds. How would that have gone for us?
To answer this question, consider your own internal dialogues. It is safe to assume that every one of us has had thoughts that would be shocking even (or especially) to those closest to us.
How would those who might not wish us well have reacted to being able to hear what emotional rants go through our heads from time to time?
Would they have had the judgment to let them pass, recognizing them as just flashes of emotion? Or would some have responded opportunistically, taking advantage of thoughts we would otherwise not wish to betray?
Evolution did not enable us to read minds because that power might have ended our existence as a species. Instead, as our ancient ancestors organized into groups for protection, most of us learned what could be said and what was best left unspoken.
Over time, this became a highly evolved human trait that enabled societies to form, cities to rise, and even hundreds of stressed-out people to be jammed into a flying tube, usually without attacking their seat-mates.
It forms a core part of what we now call EQ, or emotional intelligence.
And yet technology is now beginning to threaten this necessary evolutionary adaptation in a fundamental way.
The first stage has taken place in social media. Facebook underscored this trajectory when Russian manipulation of the platform affected the 2016 US presidential election.
And Twitter, which empowers a user to dash off a passing thought or emotion that might then be shared with millions, amplifies this trend. Imagine how North Korean leaders struggled to interpret US President Donald Trump’s tweet of nuclear “fire and fury.”
Was it a real threat from a new and erratic US leader, or just a spur-of-the-moment exhalation, a mental flash without a filter that would best be ignored?
Back in the days of the bipolar superpower world, the iconic US-Soviet hotline telephone was installed as a way to clarify each side’s intentions, lest, through some misunderstanding, the world disappear beneath a nuclear mushroom cloud.
Today, in our much more complicated multipolar and asymmetric-threat-driven world, social media offers all who are willing a giant, unedited megaphone.
Social media has become a tool that can undermine democracy; and yet it is mere child’s play compared to what is now barreling our way.
Companies ranging from start-ups to multinational conglomerates have recently announced startling innovations that enable mind reading.
Elon Musk’s company Neuralink is seeking approval for human trials of a device implanted in users’ brains to read their minds.
Nissan has developed “Brain-to-Vehicle” technology that enables a car to read instructions from a driver’s mind. Facebook has funded scientists who use brainwaves to decode speech.
A recent paper in the science journal Nature explains how AI can create speech by analyzing brain signals.
Researchers at Columbia University have developed technology that can analyze brain activity to determine what a user wants and vocalize those desires via a synthesizer.
Clearly, these kinds of advances can offer real benefits, including helping those suffering from paralysis or neurological disorders. Early examples of neuroprosthetics, such as cochlear implants, which enable a deaf person to hear, or promising devices that could allow the blind to see, are already in use.
However, there are also darker potential applications, such as enabling advertisers to micro-target their offerings to individuals’ unspoken desires, employers to spy on their workers, or police to monitor citizens’ possible criminal intent on a vast scale, akin to the way London residents today are tracked on CCTV.
An early warning is ToTok, one of the most downloaded social-media apps, which, it was recently revealed, the United Arab Emirates government had been using to spy on users.
What happens if mind-reading devices are hacked? It is difficult to imagine a more relevant area of data privacy than that which exists in the human brain.
Musk believes that brain interfaces will be necessary for humans to keep up with AI. This brings us back to Philip K. Dick’s science fiction horror story The Minority Report (the basis of the 2002 film). Consider the myriad thorny ethical, legal, and social-order implications of a police officer stopping a crime before it takes place because he or she could “assess” an individual’s likely intent by reading their brainwaves.
When is a crime committed? When the thought takes place? When actions begin that manifest the thought in reality? When the gun is pointed? When the trigger finger tightens?
A principal challenge of technological innovation is that it usually takes society a long time to catch up, understand the broader implications of how the new technology can be used and abused, and provide appropriate legal and regulatory frameworks to govern its use.
In the second decade of this millennium, social media moved from being a tool for connection to a platform with immense power to spread lies and manipulate elections.
Society is now grappling with how to harness the best of this innovation, while mitigating its potential for abuse.
Perhaps, before we have even figured that out, the third decade of the millennium will confront us with far more consequential technological challenges.
Alexander Friedman, an investor, is a former chief executive of GAM Investments, chief investment officer of UBS, chief financial officer of the Bill & Melinda Gates Foundation and a White House fellow.
Copyright: Project Syndicate