A new chatbot from OpenAI took the Internet by storm this week, dashing off poems, screenplays and essay answers that were plastered as screenshots all over Twitter by the breathless technorati.
Although the underlying technology has been around for a few years, this was the first time OpenAI brought its powerful language-generating system, known as GPT-3, to the masses, prompting a race to give it the most inventive commands. My favorite is: "Write a Biblical verse explaining how to remove a peanut butter sandwich from a VCR."
Beyond the gimmicky demos, some people are already finding practical uses for ChatGPT — including programmers who are using it to draft code or spot errors — but the system's biggest utility could be a financial disaster for Google by supplying superior answers to the queries we currently put to the world's most powerful search engine.
Google works by crawling billions of Web pages, indexing that content and then ranking it in order of the most relevant answers. It then spits out a list of links to click through. ChatGPT offers something more tantalizing for harried Internet users: A single answer based on its own search and synthesis of that information.
ChatGPT has been trained on millions of Web sites to glean not only the skill of holding a humanlike conversation, but information itself, so long as it was published on the Internet before late last year.
COMPARING SEARCHES
I went through my own Google search history over the past month and put 18 of my Google queries into ChatGPT, cataloging the answers. I then went back and ran the queries through Google once more, to refresh my memory. The end result was, in my judgment, that ChatGPT’s answers were more useful than Google’s in 13 out of the 18 examples.
“Useful” is of course subjective. What do I mean by the term? In this case, answers that were clear and comprehensive. A query about whether condensed milk or evaporated milk was better for pumpkin pie during Thanksgiving sparked a detailed — if slightly verbose — answer from ChatGPT that explained how condensed milk would lead to a sweeter pie. (Naturally, that was superior.) Google mainly provided a list of links to recipes I would have to click around, with no clear answer.
That underscores ChatGPT’s prime threat to Google down the line. It gives a single, immediate response that requires no further scanning of other Web sites. In Silicon Valley speak, that is a “frictionless” experience, something of a holy grail when online consumers overwhelmingly favor services that are quick and easy to use.
Google does have its own version of summarized answers to some queries, but they are compilations of the highest-ranked Web pages and typically brief. It also has its own proprietary language model, called LaMDA, which is so good that one of the company’s engineers thought the system was sentient.
Google does not generate its own singular answers to queries because anything that stops people from scanning search results would hurt Google's transactional business model of getting people to click on ads. About 81 percent of Alphabet Inc's US$257.6 billion in revenue last year came from advertising, much of it from Google's pay-per-click ads, data compiled by Bloomberg showed.
“It’s all designed with the purpose of ‘Let’s get you to click on a link,’” said Sridhar Ramaswamy, who oversaw Google’s ads and commerce business from 2013 to 2018, adding that generative search from systems like ChatGPT would disrupt Google’s traditional search business “in a massive way.”
“It’s just a better experience,” he said. “The goal of Google search is to get you to click on links, ideally ads, and all other text on the page is just filler.”
Ramaswamy cofounded a subscription-based search engine called Neeva in 2019, which is planning to roll out its own generative search feature that can summarize Web pages, with footnotes, in the coming months.
ChatGPT does not reveal the sources of its information. In fact, there is a good chance its own creators cannot tell how it generates the answers it comes up with. That points to one of its biggest weaknesses: Sometimes, its answers are plain wrong.
Stack Overflow, a question-and-answer site for coders, on Monday temporarily banned its users from sharing advice from ChatGPT, saying that the thousands of answers that programmers were posting from the system were often incorrect.
My own experience bears this out. When I put my 12-year-old daughter’s English essay question into the system, it offered a long and eloquent analysis that sounded authoritative — but the answer was also riddled with mistakes, for instance stating that a literary character’s parents had died when they had not.
CONFIDENT BUT WRONG
What is disturbing about this flaw is that the inaccuracies are hard to spot, especially when ChatGPT sounds so confident.
The system’s answers “typically look like they might be good,” according to Stack Overflow, and by OpenAI’s own admission, they are often plausible sounding.
OpenAI had initially trained its system to be more cautious, but the result was that it declined to answer questions it actually knew the answers to. By going the other way, the result is something like a college frat student bluffing their way through an essay after not studying. Fluent hogwash.
It is unclear how common ChatGPT's mistakes are. One estimate doing the rounds on Twitter is an error rate of 2 to 5 percent. It might be higher. That could make Internet users wary of relying on ChatGPT for important information.
Another strength for Google: It mostly makes money on transactional search queries for products, and navigational searches for other sites, such as people typing in “Facebook” or “YouTube.”
Those kinds of queries comprised many of the top 100 Google searches this year. So long as ChatGPT does not offer links to other sites, it is not encroaching too deeply on Google’s turf.
However, those issues could evolve over time. ChatGPT could become more accurate as OpenAI expands the training of its model to more current parts of the Web. To that end, OpenAI is working on a system called WebGPT, which it hopes leads to more accurate answers to search queries, which would include source citations.
A combination of ChatGPT and WebGPT could be a powerful alternative to Google, and ChatGPT is already giving more accurate answers than OpenAI’s earlier systems.
ChatGPT amassed 1 million users in about five days. That is an extraordinary milestone. It took Instagram 2.5 months to reach that number, and 10 months for Facebook.
OpenAI is not publicly speculating about its future applications, but if its new chatbot starts sharing links to other Web sites, particularly those that sell things, that could spell real danger for Google.
Parmy Olson is a Bloomberg Opinion columnist covering technology, and a former reporter for the Wall Street Journal and Forbes.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.