An executive from a large technology firm on Thursday won a Nobel prize. The top prize for chemistry went to the head of Alphabet Inc’s artificial intelligence (AI) efforts, Demis Hassabis, along with two other key scientists, for a years-long project that used AI to predict the structure of proteins. The day before, Geoffrey Hinton, a former executive at Google who has been called a godfather of AI, won the Nobel Prize in Physics along with physicist John Hopfield, for work on machine learning.
It seems the Nobel Foundation is eager to mark AI advancements — and the notion that key scientific problems can be solved computationally — as worthy of its coveted prizes. That would be a reputational boon for firms like Google and executives like Hassabis. However, there is a risk, too, that such recognition obscures concerns about both the technology itself and the increasing concentration of AI power in a handful of companies.
Hassabis himself has long craved this accolade, having told staff for years that he wanted DeepMind, the AI lab he cofounded and sold to Google in 2014, to win between three and five Nobel prizes over the next decade.
At a news conference on Wednesday, he called the award “an unbelievable honor of a lifetime” and said he had been hoping to win it this time around.
Indeed, he initially shaped DeepMind as a research lab with utopian objectives, where many of its leading scientists worked on building AI systems to help cure diseases like cancer or solve global warming.
However, that humanitarian agenda faded to the background after the sale to Google and especially after the release of OpenAI’s ChatGPT, which sparked a race among tech giants to deploy chatbot-style technology to businesses and consumers.
DeepMind has since become more product-focused (information about its healthcare and climate efforts disappeared from its homepage, for example), although it has continued with health-related efforts like AlphaFold. Out of DeepMind’s roughly 1,500-strong workforce, a team of just two dozen people was running the protein-folding project when it reached a critical milestone in 2020, according to a video documentary about the effort.
The Nobel will surely give Hassabis a credibility boost at Alphabet, where he has been leading the company’s fraught efforts to keep up with OpenAI. Google’s flagship AI model Gemini has grappled with controversies over its frequent mistakes and the possibility it would choke off traffic to the rest of the Web. Now perhaps a smoother path has been paved for Hassabis if he wants to become Alphabet’s next chief executive.
The former chess champion is a consummate strategist and rivals Sam Altman as the world’s most successful builder of AI technology, having pushed the boundaries of fields like deep learning and reinforcement learning with game-playing systems such as AlphaGo, which beat world champion go players eight years ago. Hassabis was already talking about taking on protein folding during those matches.
The glow benefits Google, too. Recent challenges from antitrust regulators over monopolistic behavior have not helped its reputation as a company founded on the principle of “don’t be evil.” Now with two Nobel prizes linked to work done by its scientists, the tech giant can more easily frame itself as providing services that are ultimately good for society, as its lawyers have been arguing, and perhaps generate goodwill more broadly with the public and regulators.
However, we should not forget the tension between the high-minded goals professed by Big Tech and what their businesses are really focused on. Google, which derives close to 80 percent of its revenue from advertising, is now putting ads into its new AI search tool. For businesses, that adds a new layer of complexity to online advertising, while consumers face the prospect of wading through AI-generated information that Google is trying to monetize, and which could one day become more biased toward advertisers.
Remember also that Google’s prioritization of human well-being was called into question less than three years ago when it fired two leading AI ethics experts who had warned about the risks that its AI models could entrench bias, spread misinformation and consume vast amounts of energy, issues that have not gone away. A study in Nature last month, for instance, showed that AI tools like ChatGPT were making racist decisions about people based on their dialect.
The Nobel Prize is designed to recognize people who have made outstanding contributions to science, humanism and peace, so the foundation behind it has taken a bold stance in validating the work of AI and of one company in particular. The award to Hassabis — like the Nobel Peace Prize given to Barack Obama one year after he was elected as US president — feels a little premature. It is still unclear what kind of broad, real-world impact DeepMind’s protein-folding project will have on the medical field and drug discovery.
Let us hope the prize motivates well-endowed technology firms to invest much more in using AI for public service efforts like protein folding and in AI ethics research — and does not muddy the debate over the very real risks that AI poses to the world, too.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is author of We Are Anonymous.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.