A year ago, Susan Wojcicki was on stage to defend YouTube. Her company, hammered for months for fueling falsehoods online, was reeling from another flare-up involving a conspiracy theory video about the Parkland, Florida, high-school shooting that suggested the victims were “crisis actors.”
Wojcicki, YouTube’s chief executive officer, is a reluctant public ambassador, but she was in Austin, Texas, at the South by Southwest conference to unveil a solution that she hoped would help quell conspiracy theories: a tiny text box from Web sites like Wikipedia that would sit below videos that questioned well-established facts like the moon landing and link viewers to the truth.
Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than US$16 billion a year.
However, on that day, Wojcicki compared her video site to a different kind of institution.
“We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”
Since Wojcicki took the stage, prominent conspiracy theories on the platform — including one on child vaccinations and another tying former US secretary of state Hillary Rodham Clinton to a Satanic cult — have drawn the ire of lawmakers eager to regulate technology companies. And YouTube is, a year later, even more associated with the darker parts of the Web.
The conundrum is not just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.
Wojcicki and her deputies know this. In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Do not rock the boat.
The company spent years chasing one business goal above others: “engagement,” a measure of the views, time spent and interactions with online videos. Conversations with more than 20 people who work at or recently left YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.
Wojcicki would “never put her fingers on the scale,” said one person who worked for her.
“Her view was: ‘My job is to run the company, not deal with this,’” they said.
This person, like others who spoke to Bloomberg News, asked not to be identified because of a worry of retaliation.
YouTube turned down Bloomberg News’ requests to speak to Wojcicki, other executives, management at Google and the board of Alphabet Inc, its parent company.
Last week, Neal Mohan, YouTube’s chief product officer, told the New York Times that the company has “made great strides” in addressing its issues with recommendation and radical content.
A YouTube spokeswoman contested the notion that Wojcicki is inattentive to these issues and that the company prioritizes engagement above all else.
Instead, the spokeswoman said that the company has spent the past two years focused squarely on finding solutions for its content problems. Since 2017, YouTube has recommended clips based on a metric called “responsibility,” which includes input from satisfaction surveys it shows after videos. YouTube declined to describe it more fully, but said it receives “millions” of survey responses each week.
“Our primary focus has been tackling some of the platform’s toughest content challenges,” a spokeswoman said in an e-mailed statement.
“We’ve taken a number of significant steps, including updating our recommendations system to prevent the spread of harmful misinformation, improving the news experience on YouTube, bringing the number of people focused on content issues across Google to 10,000, investing in machine learning to be able to more quickly find and remove violative content, and reviewing and updating our policies — we made more than 30 policy updates in 2018 alone. And this is not the end: responsibility remains our No. 1 priority,” the spokeswoman said.
In response to criticism about prioritizing growth over safety, Facebook Inc has proposed a dramatic shift in its core product. YouTube, meanwhile, has struggled to explain any new corporate vision to the public and investors — and sometimes, to its own staff.
Five senior personnel who left YouTube and Google in the past two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure.
YouTube’s inertia was illuminated again several weeks ago, after a deadly measles outbreak drew public attention to vaccination conspiracies on social media. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than 20 YouTube channels spreading these lies reached more than 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.
The company’s lackluster response to explicit videos aimed at kids has drawn criticism from the tech industry itself.
Patrick Copeland, a former Google director who left in 2016, recently posted a damning indictment of his former company on LinkedIn. While watching YouTube, Copeland’s daughter was recommended a clip that featured both a Snow White character drawn with exaggerated sexual features and a horse engaged in a sexual act.
“Most companies would fire someone for watching this video at work,” he wrote. “Unbelievable!!”
Copeland, who spent a decade at Google, decided to block the YouTube.com domain.
Micah Schaffer joined YouTube in 2006, nine months before it was acquired by Google and well before it had become part of the cultural firmament. He was assigned the task of writing policies for the freewheeling site. Back then, YouTube was focused on convincing people why they should watch videos from amateurs and upload their own.
A few years later, when he left YouTube, the site was still unprofitable and largely known for frivolity: a clip of David, a rambling seven-year-old drugged up after a trip to the dentist, was the second-most watched video that year.
However, even then there were problems with malicious content. At about that time, there was an uptick in videos praising anorexia. In response, staff moderators began furiously combing the clips to place age restrictions, cut them from recommendations or pull them down entirely.
They “threatened the health of our users,” Schaffer recalled.
He was reminded of that episode recently, when videos sermonizing about the so-called perils of vaccinations began spreading on YouTube.
That, he thought, would have been a no-brainer back in the earlier days.
“We would have severely restricted them or banned them entirely,” Schaffer said. “YouTube should never have allowed dangerous conspiracy theories to become such a dominant part of the platform’s culture.”
Somewhere in the past decade, he added, YouTube prioritized chasing profits over the safety of its users.
“We may have been hemorrhaging money, but at least dogs riding skateboards never killed anyone,” he said.
Beginning in about 2009, Google took tighter control of YouTube. It brought in executives, such as sales chief Robert Kyncl, formerly of Netflix, and set a technical strategy and business plan to sustain the site’s exploding growth. In 2012, YouTube concluded that the more people watched, the more ads it could run — and that recommending videos, alongside a clip or after one was finished, was the best way to keep eyes on the site.
So YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach 1 billion hours of viewing a day and rewrote its recommendation engine to maximize for that goal.
When Wojcicki took over, in 2014, YouTube was one-third of the way to the goal, she recalled in Measure What Matters, investor John Doerr’s book released last year.
“They thought it would break the Internet, but it seemed to me that such a clear and measurable objective would energize people and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.”
By October 2016, YouTube hit its goal. That same fall, three Google coders published a paper on the ways YouTube’s recommendation system worked with its mountain of freshly uploaded footage. They outlined how YouTube’s neural network, an AI system, could better predict what a viewer would watch next. The research noted how the AI could try to suppress “clickbait,” videos that lie about their subject and quickly lose viewers’ attention.
Yet it makes no mention of the landmines — misinformation, political extremism and repellent kids’ content — that have since garnered millions and millions of views and rattled the company. Those topics rarely came up before the US presidential election in 2016.
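The paper’s central idea, as described above, was to rank candidate videos by how long a viewer is predicted to keep watching rather than by whether the viewer will click. A minimal sketch of that shift, with invented names and numbers rather than anything from YouTube’s actual system, might look like this:

```python
# Hypothetical sketch, not YouTube's code: contrasting a click-optimized ranker
# with a watch-time-optimized one. Weighting candidates by expected watch time
# penalizes clickbait that wins the click but quickly loses the viewer.
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    p_click: float                 # model's predicted probability of a click
    expected_watch_minutes: float  # model's predicted watch time if clicked

def rank_by_clicks(candidates):
    return sorted(candidates, key=lambda c: c.p_click, reverse=True)

def rank_by_watch_time(candidates):
    # Expected watch time = P(click) x predicted minutes watched after the click.
    return sorted(candidates, key=lambda c: c.p_click * c.expected_watch_minutes,
                  reverse=True)

videos = [
    Candidate("clickbait_clip", p_click=0.30, expected_watch_minutes=0.5),
    Candidate("long_watch_clip", p_click=0.15, expected_watch_minutes=12.0),
]
print([c.video_id for c in rank_by_clicks(videos)])      # clickbait_clip first
print([c.video_id for c in rank_by_watch_time(videos)])  # long_watch_clip first
```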
“We were so in the weeds trying to hit our goals and drive usage of the site,” said one former senior manager. “I don’t know if we really picked up our heads.”
YouTube does not give an exact recipe for virality, but in the race to 1 billion hours, a formula emerged: Outrage equals attention.
It is one that people on the political fringes have easily exploited, said Brittan Heller, a fellow at Harvard University’s Carr Center.
“They don’t know how the algorithm works, but they do know that the more outrageous the content is, the more views,” she said.
People inside YouTube knew about this dynamic. Over the years, there were many tortured debates about what to do with troublesome videos — those that do not violate its content policies and so remain on the site. Some software engineers have nicknamed the problem “bad virality.”
Yonatan Zunger, a privacy engineer at Google, recalled a suggestion he made to YouTube staff before he left the company in 2016. He proposed a third tier: videos that were allowed to stay on YouTube, but, because they were “close to the line” of the takedown policy, would be removed from recommendations.
“Bad actors quickly get very good at understanding where the bright lines are and skating as close to those lines as possible,” Zunger said.
His proposal, which went to the head of YouTube policy, was turned down.
“I can say with a lot of confidence that they were deeply wrong,” he said.
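In practical terms, Zunger’s proposal amounted to adding a third outcome between removal and recommendation. A rough sketch of that kind of triage, using invented labels rather than YouTube’s own terminology, could look like this:

```python
# Hypothetical sketch of the three-tier idea attributed to Zunger: remove clear
# violations, keep "close to the line" videos but never recommend them, and
# leave everything else eligible for recommendations. Labels are invented.
from enum import Enum

class Tier(Enum):
    REMOVE = "remove"
    KEEP_NO_RECOMMEND = "keep_no_recommend"
    RECOMMEND_ELIGIBLE = "recommend_eligible"

def triage(violates_policy: bool, close_to_the_line: bool) -> Tier:
    if violates_policy:
        return Tier.REMOVE
    if close_to_the_line:
        return Tier.KEEP_NO_RECOMMEND   # stays on the site, excluded from recommendations
    return Tier.RECOMMEND_ELIGIBLE

def recommendation_pool(videos):
    # Only fully eligible videos ever enter the recommendation engine.
    return [v for v in videos if v["tier"] is Tier.RECOMMEND_ELIGIBLE]
```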
Rather than revamp its recommendation engine, YouTube doubled down. The neural network described in the 2016 research went into effect in YouTube recommendations starting in 2015. By the measures available, it has achieved its goal of keeping people on YouTube.
“It’s an addiction engine,” said Francis Irving, a computer scientist who has written critically about YouTube’s AI system.
Irving said he has raised these concerns with YouTube staff.
They responded with incredulity, or an indication that they had no incentives to change how its software worked, he said.
“It’s not a disastrous failed algorithm,” Irving added. “It works well for a lot of people, and it makes a lot of money.”
Paul Covington, a senior Google engineer who coauthored the 2016 recommendation engine research, presented the findings at a conference the following March. He was asked how the engineers decide what outcome to aim for with their algorithms.
“It’s kind of a product decision,” Covington said at the conference, referring to a separate YouTube division. “Product tells us that we want to increase this metric, and then we go and increase it. So it’s not really left up to us.”
Covington did not respond to an e-mail requesting comment.
A YouTube spokeswoman said that, starting in late 2016, the company added a measure of “social responsibility” to its recommendation algorithm. Those inputs include how many times people share and click the “like” and “dislike” buttons on a video.
However, YouTube declined to share any more detail on the metric or its impacts.
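YouTube has not published the formula, but given the signals it has named (satisfaction surveys, shares, likes and dislikes), a responsibility-style adjustment would presumably blend those inputs with raw watch time. The sketch below is purely illustrative; the weights and the function name are invented:

```python
# Illustrative only: YouTube has not disclosed its "responsibility" formula.
# This sketch blends watch time with satisfaction-style signals using invented weights.
def responsibility_adjusted_score(watch_minutes, survey_satisfaction,
                                  likes, dislikes, shares, views):
    """survey_satisfaction is assumed to be a 0-1 average of post-watch survey answers."""
    if views == 0:
        return 0.0
    like_ratio = likes / max(likes + dislikes, 1)
    share_rate = min(shares / views, 1.0)
    # Pure engagement would simply be watch_minutes; the satisfaction terms let
    # a heavily watched but poorly rated video score lower.
    satisfaction = 0.6 * survey_satisfaction + 0.3 * like_ratio + 0.1 * share_rate
    return watch_minutes * satisfaction
```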
Three days after Donald Trump was elected US president, Wojcicki convened her entire staff for their weekly meeting. One employee fretted aloud about the site’s election-related videos that were watched the most. They were dominated by publishers like Breitbart News and Infowars, which were known for their outrage and provocation. Breitbart had a popular section called “black crime.”
The episode, according to a person in attendance, prompted widespread conversation, but no immediate policy edicts.
A spokeswoman declined to comment on the particular case, but said that “generally extreme content does not perform well on the platform.”
At that time, YouTube’s management was focused on a very different crisis. Its “creators,” the droves that upload videos to the site, were upset. Some grumbled about pay, others openly threatened to defect to rival sites.
Wojcicki and her lieutenants drew up a plan. YouTube called it Project Bean. The plan was to rewrite YouTube’s entire business model, according to three former senior staffers who worked on it.
It centered on a way to pay creators that was not based on the ads their videos hosted. Instead, YouTube would pay on engagement — how many viewers watched a video and how long they watched. A special algorithm would pool incoming cash, then divvy it out to creators, even if no ads ran on their videos. The idea was to reward video stars shortchanged by the system, such as those making sex education and music videos, which marquee advertisers found too risque to endorse.
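As the former staffers describe it, the mechanism resembled a single revenue pool split in proportion to each creator’s share of total engagement. The sketch below is a hypothetical illustration of that idea, not Project Bean’s actual algorithm; all names and figures are invented:

```python
# Hypothetical illustration of an engagement-based payout pool, as Project Bean
# is described here: ad revenue goes into one pot and is split by each creator's
# share of total watch time, whether or not ads ran on their own videos.
def split_payout_pool(pool_dollars, watch_minutes_by_creator):
    total_minutes = sum(watch_minutes_by_creator.values())
    if total_minutes == 0:
        return {creator: 0.0 for creator in watch_minutes_by_creator}
    return {
        creator: pool_dollars * minutes / total_minutes
        for creator, minutes in watch_minutes_by_creator.items()
    }

# A demonetized sex-education channel would now get paid from the pool,
# but so would any channel that racks up watch time through outrage.
print(split_payout_pool(1_000_000, {
    "sex_ed_channel": 4_000_000,   # minutes watched; no ads sold against it
    "music_channel": 10_000_000,
    "outrage_channel": 6_000_000,
}))
```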
Coders at YouTube labored for at least a year to make the project workable. However, company managers failed to appreciate how the project could backfire: paying based on engagement risked making its “bad virality” problem worse, as it could reward videos that achieved popularity through outrage.
One person involved said that the algorithms for doling out payments were tightly guarded.
Had it gone into effect, this person said, it is likely that someone like Alex Jones — the Infowars creator and conspiracy theorist who had a huge following on the site before YouTube booted him in August last year — would suddenly have become one of the highest-paid YouTube stars.
Wojcicki pitched Project Bean to Google’s leadership team in October of 2017. By then, YouTube and other social media sites faced the first wave of censure for creating “filter bubbles” — directing people toward content that confirmed their pre-existing beliefs, then feeding them more of the same.
Wojcicki’s boss, Sundar Pichai, turned down YouTube’s proposal because, in part, he felt it could make the filter bubble problem worse, according to two people familiar with the exchange.
Another person familiar with the situation said the effort was shelved because of concerns that it would overly complicate the way creators were paid.
YouTube declined to comment on the project.
In November of 2017, YouTube finally took decisive action against channels pushing pernicious videos, cutting thousands of them off from advertising, or from the site altogether, virtually overnight.
Creators dubbed it “The Purge.”
The company was facing an ongoing advertiser boycott, but the real catalyst was an explosion of media coverage over disturbing videos aimed at children. The worst was “Toy Freaks,” a channel where a father posted videos with his two daughters, sometimes showing them vomiting or in extreme pain. YouTube removed Toy Freaks, and quickly distanced itself from it.
However, the channel had not been operating in the shadows. With more than 8 million subscribers, it had reportedly been among the 100 most-watched channels on the site.
These types of disturbing videos were an “open secret” inside the company, which often justified their existence with arguments about free speech, said one former staffer.
YouTube did plow money into combating its content problems. It hired thousands more people to sift through videos to find those that violated the site’s rules. However, to some inside, those fixes took too long to arrive or paled next to the scale of the problem.
As of 2017, YouTube had no policy for how content moderators should handle conspiracy theories, according to a former moderator who specialized in foreign-language content.
At the end of the year, fewer than 20 people were on the staff for “trust and safety,” the unit overseeing content policies, according to a former staffer.
A YouTube spokeswoman said that the division has grown “significantly” since, but declined to share exact numbers.
In February last year, the video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Soon after, policy staff suggested limiting recommendations on the page to vetted news sources.
YouTube management rejected the proposal, according to a person with knowledge of the event.
The person did not know the reasoning behind the rejection, but noted that YouTube was then intent on boosting watch time for news-related videos.
However, YouTube did soon address its issues around news-related content. In July last year, YouTube announced it would add links to Google News results inside of YouTube search and began to feature “authoritative” sources, from established media outlets, in its news sections. YouTube also gave US$25 million in grants to news organizations making videos. In the final quarter of last year, YouTube said it removed more than 8.8 million channels for violating its guidelines. Those measures are meant to help bury troubling videos on its site, and the company now points to the efforts as a sign of its attention to its content problems.
Yet, in the past, YouTube dissuaded staff from being proactive.
Lawyers verbally advised employees not assigned to handle moderation to avoid searching on their own for questionable videos, according to one former executive upset by the practice.
The person said the directive was never put in writing, but the message was clear: If YouTube knew these videos existed, its legal grounding grew thinner.
US federal law shields YouTube, and other tech giants, from liability for the content on their sites, yet the companies risk losing the protections of this law if they take too active an editorial role.
Some employees still sought out these videos anyway.
One telling moment happened early last year, according to two people familiar with it.
An employee decided to create a new YouTube “vertical,” a category that the company uses to group its mountain of video footage. This person gathered videos under an imagined vertical for “alt-right” content. Based on engagement, the hypothetical alt-right category ranked alongside music, sports and gaming as one of the most popular verticals at YouTube, an attempt to show how critical these videos were to YouTube’s business.
A person familiar with the executive team said they did not recall seeing this experiment.
In January, YouTube followed former Google employee Zunger’s advice and created a new tier for problematic videos. So-called “borderline content,” which does not violate the terms of service, can stay on the site, but will no longer be recommended to viewers.
A month later, after a spate of press about vaccination conspiracies, YouTube said it was placing some of these videos in the category. In February, Google also released a lengthy document detailing how it addresses misinformation on its services, including YouTube.
“The primary goal of our recommendation systems today is to create a trusted and positive experience for our users,” the document reads. “The YouTube company-wide goal is framed not just as ‘growth,’ but as ‘responsible growth.’”
The company has been applying the fix Wojcicki proposed a year ago.
YouTube said the information panels from Wikipedia and other sources, which Wojcicki debuted in Austin, are now shown “tens of millions of times a week.”
A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sport a small gray box. The text links to a Wikipedia entry for the MMR vaccine.
Moonshot CVE, the London-based anti-extremism firm, identified the channel as one of the most consistent generators of anti-vaccination theories on YouTube.
However, YouTube appears to be applying the fix only sporadically. One of iHealthTube.com’s most popular videos is not about vaccines. It is a seven-minute clip titled: “Every cancer can be cured in weeks.”
While YouTube said it no longer recommends the video to viewers, there is no Wikipedia entry on the page. The clip has been viewed more than 7 million times.