Tech leaders have been vocal proponents of the need to regulate artificial intelligence, but they’re also lobbying hard to make sure the new rules work in their favor.
That’s not to say they all want the same thing.
Facebook parent Meta and IBM yesterday launched a new group called the AI Alliance that’s advocating for an “open science” approach to AI development that puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI.
These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. Safety is at the heart of the debate, but so is who gets to profit from AI’s advances.
Open advocates favor an approach that is “not proprietary and closed,” said Dario Gil, a senior vice president at IBM who directs its research division. “So it’s not like a thing that is locked in a barrel and no one knows what they are.”
WHAT’S OPEN-SOURCE AI?
The term “open-source” comes from a decades-old practice of building software in which the code is free or widely accessible for anyone to examine, modify and build upon.
Open-source AI involves more than just code, and computer scientists differ on how to define it, depending on which components of the technology are publicly available and whether there are restrictions limiting its use. Some use the term “open science” to describe the broader philosophy.
The AI Alliance — led by IBM and Meta and including Dell, Sony, chipmakers AMD and Intel and several universities and AI startups — is “coming together to articulate, simply put, that the future of AI is going to be built fundamentally on top of the open scientific exchange of ideas and on open innovation, including open source and open technologies,” Gil said ahead of its unveiling.
Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed.
“To state the obvious, there are near-term and commercial incentives against open source,” said Ilya Sutskever, OpenAI’s chief scientist and co-founder, in a video interview hosted by Stanford University in April. But there’s also a longer-term worry involving the potential for an AI system with “mind-bendingly powerful” capabilities that would be too dangerous to make publicly accessible, he said.
To make his case for open-source dangers, Sutskever posited an AI system that had learned how to start its own biological laboratory.
IS IT DANGEROUS?
Even current AI models pose risks and could be used, for instance, to ramp up disinformation campaigns to disrupt democratic elections, said University of California, Berkeley scholar David Evan Harris.
“Open source is really great in so many dimensions of technology,” but AI is different, Harris said.
“Anyone who watched the movie Oppenheimer knows this, that when big scientific discoveries are being made, there are lots of reasons to think twice about how broadly to share the details of all of that information in ways that could get into the wrong hands,” he said.
The Center for Humane Technology, a longtime critic of Meta’s social media practices, is among the groups drawing attention to the risks of open-source or leaked AI models.
“As long as there are no guardrails in place right now, it’s just completely irresponsible to be deploying these models to the public,” said the group’s Camille Carlton.
IS IT FEAR-MONGERING?
An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development.
Meta’s chief AI scientist, Yann LeCun, this fall took aim on social media at OpenAI, Google and startup Anthropic for what he described as “massive corporate lobbying” to write the rules in a way that benefits their high-performing AI models and could concentrate their power over the technology’s development. The three companies, along with OpenAI’s key partner Microsoft, have formed their own industry group called the Frontier Model Forum.
LeCun said on X, formerly Twitter, that he worried that fearmongering from fellow scientists about AI “doomsday scenarios” was giving ammunition to those who want to ban open-source research and development.
“In a future where AI systems are poised to constitute the repository of all human knowledge and culture, we need the platforms to be open source and freely available so that everyone can contribute to them,” LeCun wrote. “Openness is the only way to make AI platforms reflect the entirety of human knowledge and culture.”
For IBM, an early supporter of the open-source Linux operating system in the 1990s, the dispute feeds into a much longer competition that precedes the AI boom.
“It’s sort of a classic regulatory capture approach of trying to raise fears about open-source innovation,” said Chris Padilla, who leads IBM’s global government affairs team. “I mean, this has been the Microsoft model for decades, right? They always opposed open-source programs that could compete with Windows or Office. They’re taking a similar approach here.”
WHAT ARE GOVERNMENTS DOING?
It was easy to miss the “open-source” debate in the discussion around US President Joe Biden’s sweeping executive order on AI.
That’s because Biden’s order described open models with the highly technical name of “dual-use foundation models with widely available weights” and said they needed further study. Weights are numerical parameters that influence how an AI model performs.
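To make the idea concrete, here is a deliberately toy sketch (not any real model or library) of what it means for a model’s behavior to be determined by its weights:

```python
# Toy illustration: a "model" reduced to its weights.
# Real systems have billions of such parameters; "widely available
# weights" means these numbers are published for anyone to download.

weights = [0.8, -1.2, 0.05]  # numerical parameters learned during training

def predict(features):
    # The model's output is entirely determined by its weights:
    # change the numbers and you change the behavior, which is why
    # releasing them also releases the model's capabilities.
    return sum(w * x for w, x in zip(weights, features))

print(predict([1.0, 2.0, 3.0]))  # -1.45
```

Publishing the weights, rather than only offering the model behind an interface, is what separates an “open” release from a “closed” one in this debate.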
“When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model,” Biden’s order said. He gave US Commerce Secretary Gina Raimondo until July to talk to experts and come back with recommendations on how to manage the potential benefits and risks.
The European Union has less time to figure it out. In negotiations coming to a head Wednesday, officials working to finalize passage of world-leading AI regulation are still debating a number of provisions, including one that could exempt certain “free and open-source AI components” from rules affecting commercial models.