Hours after the artificial intelligence (AI) pioneer Geoffrey Hinton won a Nobel Prize in physics, he drove a rented car to Google’s California headquarters to celebrate.
Hinton does not work at Google anymore. Nor did the longtime professor at the University of Toronto do his pioneering research at the tech giant.
However, his impromptu party reflected AI’s moment as a commercial blockbuster that has also reached the pinnacles of scientific recognition.
That was on Tuesday last week. Then, early on Wednesday last week, two employees of Google’s AI division won a Nobel Prize in chemistry for using AI to predict and design novel proteins.
“This is really a testament to the power of computer science and artificial intelligence,” said Jeanette Wing, a professor of computer science at Columbia University.
Asked about the historic back-to-back science awards for AI work in an e-mail on Wednesday last week, Hinton said: “Neural networks are the future.”
It did not always seem that way for researchers who decades ago experimented with interconnected computer nodes inspired by neurons in the human brain. Hinton shares this year’s physics Nobel with another scientist, John Hopfield, for helping develop those building blocks of machine learning.
Neural network advances came from “basic, curiosity-driven research,” Hinton said at a news conference after his win.
“Not out of throwing money at applied problems, but actually letting scientists follow their curiosity to try and understand things,” he said.
Such work started well before Google existed. However, a bountiful tech industry has now made it easier for AI scientists to pursue their ideas even as it has challenged them with new ethical questions about the societal impacts of their work.
One reason why the current wave of AI research is so closely tied to the tech industry is that only a handful of corporations have the resources to build the most powerful AI systems.
“These discoveries and this capability could not happen without humongous computational power and humongous amounts of digital data,” Wing said. “There are very few companies — tech companies — that have that kind of computational power. Google is one. Microsoft is another.”
The chemistry Nobel Prize went to Demis Hassabis and John Jumper of Google’s London-based DeepMind laboratory along with researcher David Baker at the University of Washington for work that could help discover new medicines.
Hassabis, the CEO and cofounder of DeepMind, which Google acquired in 2014, said his dream was to model his research laboratory on the “incredible storied history” of Bell Labs.
Started in 1925, the New Jersey-based industrial lab was, over several decades, the workplace of multiple Nobel-winning scientists who helped develop modern computing and telecommunications.
“I wanted to recreate a modern-day industrial research lab that really did cutting-edge research,” Hassabis said. “But of course, that needs a lot of patience and a lot of support. We’ve had that from Google, and it’s been amazing.”
Hinton joined Google late in his career and quit last year so that he could speak more freely about his concerns over the dangers of AI, particularly what happens if humans lose control of machines that become smarter than they are.
However, he stops short of criticizing his former employer.
Hinton, 76, said he was staying in a cheap hotel in Palo Alto, California, when the Nobel committee woke him up with a phone call early on Tuesday morning, leading him to cancel a medical appointment scheduled for later that day.
By the time the sleep-deprived scientist reached the Google campus in nearby Mountain View, he “seemed pretty lively and not very tired at all” as colleagues popped bottles of champagne, said computer scientist Richard Zemel, a former doctoral student of Hinton’s who joined him at the Google party.
“Obviously there are these big companies now that are trying to cash in on all the commercial success and that is exciting,” said Zemel, now a Columbia professor.
However, what is more important to Hinton and his closest colleagues has been what the Nobel recognition means to the fundamental research they spent decades trying to advance, Zemel said.
Guests included Google executives and another former Hinton student, Ilya Sutskever, a cofounder and former chief scientist and board member at ChatGPT maker OpenAI.
Sutskever helped lead a group of board members who briefly ousted OpenAI CEO Sam Altman last year in turmoil that has symbolized the industry’s conflicts.
An hour before the party, Hinton used his Nobel bully pulpit to throw shade at OpenAI during opening remarks at a virtual news conference organized by the University of Toronto in which he thanked former mentors and students.
“I’m particularly proud of the fact that one of my students fired Sam Altman,” Hinton said.
Asked to elaborate, Hinton said OpenAI started with a primary objective to develop better-than-human artificial general intelligence “and ensure that it was safe.”
“And over time, it turned out that Sam Altman was much less concerned with safety than with profits. And I think that’s unfortunate,” Hinton said.
In response, OpenAI said in a statement that it is “proud of delivering the most capable and safest AI systems” and that they “safely serve hundreds of millions of people each week.”
Conflicts are likely to persist in a field where building even a relatively modest AI system requires resources “well beyond those of your typical research university,” said Michael Kearns, a professor of computer science at the University of Pennsylvania.
However, this week marks a “great victory for interdisciplinary research” that was decades in the making, said Kearns, who sits on the committee that picks the winners of computer science’s top prize — the Turing Award.
Hinton is only the second person to win both a Nobel Prize and a Turing Award. The first, Turing-winning political scientist Herbert Simon, started working on what he called “computer simulation of human cognition” in the 1950s and won the Nobel economics prize in 1978 for his study of organizational decision-making.
Wing, who met Simon early in her career, said scientists have only begun to find ways to apply computing’s most powerful capabilities to other fields.
“We’re just at the beginning in terms of scientific discovery using AI,” she said.