We are so surrounded by gadgetry nowadays that it is sometimes hard to tell where devices end and people begin. From computers and scanners to mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound, and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes, ears and even arms and legs, taking in the world around us and interacting with it through software and hardware.
This is no mere prediction. Brain-machine interfaces are already clinically well established — for example, in restoring hearing through cochlear implants. And patients with end-stage Parkinson’s disease can be treated with deep brain stimulation (DBS). Current experiments on neural prosthetics point to the enormous future potential of similar interventions, whether retinal or brain-stem implants for the blind or brain-recording devices for controlling prostheses.
Non-invasive brain-machine interfaces based on electroencephalogram recordings have restored the communication skills of paralyzed patients. Animal research and some human studies suggest that full control of artificial limbs in real time could further offer the paralyzed an opportunity to grasp or even to stand and walk on brain-controlled, artificial legs, albeit likely through invasive means, with electrodes implanted directly in the brain.
Future advances in neuroscience, together with the miniaturization of microelectronic devices, will enable more widespread application of brain-machine interfaces. This could be seen to challenge our notions of personhood and moral agency. And one question will certainly loom: if these technologies can restore function to those in need, is it right to use them to enhance the abilities of healthy individuals?
But the ethical problems that these technologies pose are conceptually similar to those presented by existing therapies, such as antidepressants. Although brain-machine interfaces might seem new and unfamiliar, the situations they create pose few genuinely new ethical challenges.
In brain-controlled prosthetic devices, an onboard computer decodes signals from the brain and uses them to predict what the user intends to do. Invariably, predictions will sometimes fail, which could lead to dangerous, or at least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or the user? Will a user need some kind of license and obligatory insurance to operate a prosthesis?
Fortunately, there are precedents for dealing with liability when biology and technology fail. Increasing knowledge of human genetics, for example, led to attempts to reject criminal responsibility, based on the mistaken belief that genes predetermine actions. These attempts failed, and neuroscientific findings seem similarly unlikely to overturn our views on human free will and responsibility.
Moreover, humans often control dangerous and unpredictable tools, such as cars and guns. Brain-machine interfaces represent a highly sophisticated case of tool use, but they are still just that. Legal responsibility should not be much harder to disentangle.
But what if machines change the brain? Evidence from early brain stimulation experiments a half-century ago suggests that sending a current into the brain can shift personality and alter behavior. And, while many Parkinson’s patients report significant benefits from DBS, the treatment has been associated with a greater incidence of serious adverse effects, such as nervous system and psychiatric disorders, and a higher suicide rate. Case studies have revealed hypomania and personality changes of which patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted.
Such examples illustrate the dramatic side-effects that DBS can produce, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter a patient’s personality. Patients will need to be trained to generate the neural signals that direct the prosthetic limb, and doing so might have slight effects on mood or memory, or impair speech control.
Nevertheless, this does not raise a new ethical problem. Side-effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the increased short-term risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication.
Similar safeguards will be needed for neuroprostheses, including in research. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention, and to respect the patient’s autonomous decisions. None of the new technologies warrants changing that approach.
Nevertheless, the availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants, because they do not regard deafness as a disability that needs to be corrected, but as a part of their life and cultural identity. To them, cochlear implants are an enhancement beyond normal functioning.
Distinguishing between enhancement and treatment requires defining normality and disease, which is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware, defines disease as a statistical deviation from “species-typical functioning.”
From this perspective, cochlear implants seem ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University and a disability scholar and activist, has described such treatments as “tyranny of the normal,” aimed at adjusting the deaf to a world designed by the hearing, ultimately implying the inferiority of deafness.
We should take such concerns seriously, but they should not prevent further research on brain-machine interfaces. Brain technologies should be presented as one option, but not the only solution, for, say, paralysis or deafness. In this and other medical applications, we are well prepared to deal with ethical questions in parallel to and in cooperation with neuroscientific research.
Jens Clausen is a research assistant at the Institute for Ethics and History of Medicine in Tuebingen, Germany.
COPYRIGHT: PROJECT SYNDICATE