We are so surrounded by gadgetry nowadays that it is sometimes hard to tell where devices end and people begin. From computers and scanners to mobile devices, an increasing number of humans spend much of their conscious lives interacting with the world through electronics, the only barrier between brain and machine being the senses — sight, sound, and touch — through which humans and devices interface. But remove those senses from the equation, and electronic devices can become our eyes, ears and even arms and legs, taking in the world around us and interacting with it through software and hardware.
This is no mere prediction. Brain-machine interfaces are already clinically well established — for example, in restoring hearing through cochlear implants. And patients with end-stage Parkinson’s disease can be treated with deep brain stimulation (DBS). Current experiments on neural prosthetics point to the enormous future potential of similar interventions, whether retinal or brain-stem implants for the blind or brain-recording devices for controlling prostheses.
Non-invasive brain-machine interfaces based on electroencephalogram recordings have restored the communication skills of paralyzed patients. Animal research and some human studies suggest that real-time control of artificial limbs could go further, offering the paralyzed the ability to grasp objects or even to stand and walk on brain-controlled artificial legs, albeit probably through invasive means, with electrodes implanted directly in the brain.
Future advances in neuroscience, together with the miniaturization of microelectronic devices, will enable more widespread application of brain-machine interfaces. Such developments could be seen as challenging our notions of personhood and moral agency. And one question will certainly loom: if these functions can be restored for those in need, is it right to use the same technologies to enhance the abilities of healthy individuals?
But the ethical problems that these technologies pose are conceptually similar to those presented by existing therapies, such as antidepressants. Although the technologies and situations that brain-machine interfacing devices present might seem new and unfamiliar, they pose few new ethical challenges.
In a brain-controlled prosthetic device, an onboard computer decodes signals from the brain and uses them to predict what the user intends to do. Inevitably, some predictions will fail, which could lead to dangerous, or at least embarrassing, situations. Who is responsible for involuntary acts? Is it the fault of the computer or of the user? Will a user need some kind of license and obligatory insurance to operate a prosthesis?
Fortunately, there are precedents for dealing with liability when biology and technology fail. Increasing knowledge of human genetics, for example, prompted attempts to deny criminal responsibility based on the mistaken belief that genes predetermine actions. These attempts failed, and neuroscientific findings seem similarly unlikely to overturn our views on human free will and responsibility.
Moreover, humans often control dangerous and unpredictable tools, such as cars and guns. Brain-machine interfaces represent a highly sophisticated case of tool use, but they are still just that. Legal responsibility should not be much harder to disentangle.
But what if machines change the brain? Evidence from early brain stimulation experiments a half-century ago suggests that sending a current into the brain may shift personality and alter behavior. And while many Parkinson’s patients report significant benefits from DBS, the treatment has also been associated with a greater incidence of serious adverse effects, such as nervous system and psychiatric disorders, and with a higher suicide rate. Case studies have revealed hypomania and personality changes of which patients were unaware, and which disrupted family relationships before the stimulation parameters were readjusted.
Such examples illustrate the dramatic side-effects that DBS can have, but subtler effects are also possible. Even without stimulation, mere recording devices such as brain-controlled motor prostheses may alter a patient’s personality. Patients will need to be trained to generate the appropriate neural signals to direct the prosthetic limb, and doing so might have slight effects on mood or memory function, or impair speech control.
Nevertheless, this does not raise a new ethical problem. Side-effects are common in most medical interventions, including treatment with psychoactive drugs. In 2004, for example, the US Food and Drug Administration told drug manufacturers to print warnings on certain antidepressants about the increased short-term risk of suicide in adolescents using them, and required increased monitoring of young people as they started medication.
Similar safeguards will be needed for neuroprostheses, including in research. The classic approach of biomedical ethics is to weigh the benefits for the patient against the risk of the intervention, and to respect the patient’s autonomous decisions. None of the new technologies warrants changing that approach.
Nevertheless, the availability of such technologies has already begun to cause friction. For example, many in the deaf community have rejected cochlear implants, because they do not regard deafness as a disability that needs to be corrected, but as a part of their life and cultural identity. To them, cochlear implants are an enhancement beyond normal functioning.
Distinguishing between enhancement and treatment requires defining normality and disease, which is notoriously difficult. For example, Christopher Boorse, a philosopher at the University of Delaware, defines disease as a statistical deviation from “species-typical functioning.”
From this perspective, cochlear implants seem ethically unproblematic. Nevertheless, Anita Silvers, a philosopher at San Francisco State University and a disability scholar and activist, has described such treatments as a “tyranny of the normal,” aimed at adjusting the deaf to a world designed by the hearing and ultimately implying the inferiority of deafness.
We should take such concerns seriously, but they should not prevent further research on brain-machine interfaces. Brain technologies should be presented as one option for, say, paralysis or deafness, not as the only solution. In this and other medical applications, we are well prepared to address the ethical questions in parallel with, and in cooperation with, neuroscientific research.
Jens Clausen is a research assistant at the Institute for Ethics and History of Medicine in Tuebingen, Germany.
COPYRIGHT: PROJECT SYNDICATE