YouTube’s misinformation crisis was years in the making
A rule of thumb I have about covering big tech platforms is that you generally will not learn anything useful about a company from talking to its CEO. Even when the CEO is candid in their responses, which they often are, the platforms they run long ago ceased to be under their direct control. A big tech CEO these days is more like a head of state: trying to nudge a large, unruly population toward progress through legislation and speech-making, while managing the chaos that the worst of its citizens invent every day.
And so I am grateful to the New York Times’ Daisuke Wakabayashi today for handing in a profile of YouTube CEO Susan Wojcicki in which we actually do learn something. We see how her status as a division chief within Alphabet’s complex corporate structure has shielded her from some of the pressures faced by her peers at Facebook and Twitter. When her peers were first called before Congress, Wojcicki was not invited. While they have to answer to shareholders, she is able to keep her platform’s financial performance private. While platforms run by their founders take the brunt of the criticism — the platforms are their ideas, after all — Wojcicki’s status as a hired hand has insulated her.
Of course, as CEO of YouTube, Wojcicki faces a huge amount of criticism daily. But the Times’ profile shows how for much of her tenure, that criticism has run in directions that have little to do with the way that misinformation or hate speech spreads on the platform. Instead, Wojcicki’s core constituents for most of her tenure have been big advertisers, who make YouTube a viable business; and star creators, who populate the site with videos against which to sell that advertising. (To the tune of 500 hours of footage uploaded per minute, according to the report.) Wakabayashi writes:
One reason Ms. Wojcicki defies easy characterization is that her core function keeps changing. Today, her job is to be something like the standards czar of an anarchic civilization. Before that, when YouTube started home-growing celebrity icons, she was a budding media mogul. But in whatever role YouTube has needed her to assume, Ms. Wojcicki has not lost sight of the skill she learned early on at Google: how to keep advertisers happy.
Marc S. Pritchard, Procter & Gamble’s chief brand officer, who is responsible for one of the biggest advertising budgets in the world, said his company has had some rocky moments with YouTube in the last few years, and that Ms. Wojcicki has been a steadying presence.
With this context, it’s relatively easy to see how YouTube went from budding next-generation cable TV to the site described in Mark Bergen’s article for Bloomberg this month: continually caught flat-footed in the face of deadly viral “challenges,” murderous live-streams, conspiracy theorists, and white nationalists catapulted into popularity by YouTube’s own algorithms. It turns out that it doesn’t matter whether a platform is run by its founder or an outsider: the logic of a platform is to grow as big as you can, as fast as you can, and promise everyone that you will clean up the resulting messes eventually.
Like many of her peers who run platforms, Wojcicki comes across as sincere, determined, and skilled in the art of running a business. And yet coming away from the profile, I wonder if YouTube has yet to grasp the challenge ahead of it. Deep in the story, the CEO acknowledges that YouTube’s biggest challenge stems from so-called “borderline content” — videos that come close to breaking the site’s rules without quite going over the line.
Ms. Wojcicki said the third category, so-called borderline content, has been the most challenging. Earlier this year, the company announced that it was changing its algorithm to stop recommending material like conspiracy videos that can become a gateway to the unsavory.
Starting with the United States, YouTube said it would employ human raters from across the country to evaluate certain content. Those judgments will help inform what the recommendation engine flags. (Clearly, the algorithms need attention. This week, they mistakenly added information about the Sept. 11, 2001 terror attacks to footage of the Notre Dame fire.) YouTube said it plans to introduce the change to another 20 countries this year, deploying raters in each market to understand the preferences of local users.
What I don’t see here is an acknowledgement that borderline content is what YouTube incentivizes its users to produce. In a world of infinite video inventory, only the most strikingly original ideas stand out — and time and again, what has stood out on YouTube is video that shocks, outrages, and offends.
Facebook CEO Mark Zuckerberg described this problem with admirable clarity in a blog post last year. He wrote:
Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content.
This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.
As a site where people come to catch up with old high school classmates and talk to fellow area moms, Facebook is better positioned to survive in a world where the most polarizing content no longer appears in users’ feeds. YouTube, as a destination for entertainment, sits in a much different position. And so while I’m heartened that YouTube intends to prevent the spread of borderline content, it’s hard to overstate the degree to which the company relies on it today. Recall this chilling detail from Bergen’s story:
One telling moment happened around early 2018, according to two people familiar with it. An employee decided to create a new YouTube “vertical,” a category that the company uses to group its mountain of video footage. This person gathered together videos under an imagined vertical for the “alt-right,” the political ensemble loosely tied to Trump. Based on engagement, the hypothetical alt-right category sat with music, sports and gaming as the most popular channels at YouTube.
Perhaps YouTube will ultimately make good on its promises to de-radicalize itself. In recent months, the company has come under significant pressure to do so. Countries around the world are enacting legislation requiring the company to monitor user uploads more strictly, and even the Democratic speaker of the House in the United States is threatening to erode the safe-harbor provision in federal law by which YouTube is able to operate in its current form.
And the pressures are not just legislative in nature. On Wednesday, BuzzFeed reported that Google staffers were recently alerted that an employee had been diagnosed with measles. It took YouTube until February to de-monetize the many popular videos promoting the idea that vaccines like the one that protects against measles cause harm. And while it’s too much to lay blame for a nationwide measles outbreak at Google’s feet, the case underscores the degree to which crises amplified on social media will not discriminate when it comes to choosing their victims.
European Union lawmakers have approved legislation that requires platforms to quickly take down terrorist content, Colin Lecher reports. (In other splinternet news, the EU has also passed new regulations aimed at promoting competition among e-commerce platforms.)
Under the legislation, called the Terrorist Content Regulation, companies could be fined up to 4 percent of revenue if they consistently fail to remove terrorist content. The plan would apply to major companies like Facebook and YouTube, but much of the debate has focused on smaller platforms, as critics have charged that the plan places an undue burden on those companies.
The legislation approved by Parliament ultimately rolled back some of the more controversial parts of the plan, such as a requirement to constantly monitor uploads and filter for terrorist content. The approved plan also gives more leeway to deal with a first removal order, providing platforms with 12 hours to take down the content.
Joseph Cox used Amazon’s (controversial!) Rekognition software to find weapons in the Christchurch massacre video and wonders whether Facebook couldn’t deploy similar technology at scale to detect murder live streams more effectively:
Obviously this crude test is not supposed to be a fully-fleshed solution that a tech giant would actually deploy. But it still highlights that it is possible to detect weapons in live streams. So what is the issue that is stopping the swift surfacing, and if appropriate, removal, of live streamed gun violence via automated means?
Russell Brandom has a smart, beautifully written profile of Andrew Yang, a fringe presidential candidate who keeps saying smart things about what automation is going to do to our world:
A politician’s sincerity is always in doubt, and Yang is more doubtable than most. This is his first campaign, his first venture in electoral politics of any kind. Before 2017, he had no history of anti-capitalist activism. It’s easy to paint him as a vanity candidate who is indulging in fashionable socialism to build his thought-leader credentials, like a smarter, more detail-oriented Howard Schultz. But describing this collapse in person, Yang seems genuinely shaken and moved to do something — anything — to stave off the collapse. If his campaign comes off as doomed or absurd, it’s simply because he didn’t know what else to do.
“That is literally what drove me to run for president,” Yang says. “I thought to myself, realistically, my choices are to watch the society come apart or try and galvanize energy around meaningful solutions.”
Google’s effort to turn the Toronto waterfront into a laboratory of democracy has met with new resistance from citizens there:
Privacy advocates are concerned the project will increase surveillance and outsource government responsibilities to a private corporation.
“Canada is not Google’s lab rat,” said the association’s executive director and general counsel MJ Bryant. “We can do better. Our freedom from unlawful public surveillance is worth fighting for.”
Talia Lavin examines how far-right figures have used social networks to promote baseless theories that the Notre Dame fire was started by anti-Western zealots:
While baseless, racist conspiracy-peddling is an unfortunate but constant feature of social media — the background noise to any unfolding event — more mainstream conservative media proved to be just as susceptible to a narrative of civilizational conflict. On “The News and Why It Matters,” a video program and podcast on TheBlaze.com, former Fox News mainstay and current talk-radio host Glenn Beck floated the possibility of a coverup by France’s government. “If this was started by Islamists, I don’t think you’ll find out about it, because I think it would set the entire country on fire,” Beck told his co-hosts, adding that this was France’s “World Trade Center moment.” On Fox News, Tucker Carlson hosted far-right columnist Mark Steyn, who denounced France as “godless” and inveighed against the “post-Christian” country. As Carlson nodded along, brow furrowed, Steyn recounted a story of worshiping in the Basilica of Saint-Denis, which is now, he declared, “in a Muslim suburb,” and asserted that rebuilding Notre Dame, as President Emmanuel Macron had promised to do, would be pointless.
Nina Jankowicz looks at the state of information warfare in Ukraine, which continues to struggle against ongoing Russian misinformation campaigns despite having blocked access to Russia-based social networks and websites:
Unlike Washington, which has mustered hardly any official response to Russia’s use of disinformation to influence the 2016 presidential election, Kyiv has taken action. In May 2017, Poroshenko banned the Russian search engine Yandex and the social-media networks VKontakte and Odnoklassniki within Ukraine, a decision backed by the MIP. A year later, the government blocked an additional 192 websites that supposedly had pro-Russian sympathies, relying on the MIP’s advice to compile the list. The bans have, in one sense, served their purpose; officials say that overt Russian-originated disinformation has decreased. Yet as Zolotukhin alluded to in his conversations with me, that has not meant Moscow’s goals have not been met.
In response, Ukraine has been accused—by allies as well as critics—of pushing the boundaries of acceptable democratic behavior. “We received immediate feedback from all of our partners, saying, ‘Well, this is an attack on free speech and attack on free expression,’” Ivanna Klympush-Tsintsadze, Ukraine’s deputy prime minister for Euro-Atlantic and European integration, told me. “We had a really hard time explaining to our partners … don’t forget that we are a country at war. We are losing people every other day, if not every single day.”
Some of Instagram’s top meme-makers are organizing in the hope that collective action will offer an effective counterbalance to the capricious nature of platforms. A fascinating report from Taylor Lorenz:
A few things the IG Meme Union wants: a more open and transparent appeals process for account bans; a direct line of support with Instagram, or a dedicated liaison to the meme community; and a better way to ensure that original content isn’t monetized by someone else. “Having a public and clear appeal process is a big thing,” Praindo said. “People appeal now and get turned down, and they won’t know why.” (In a statement, an Instagram spokesperson said, “Each week we review millions of reports and there are times when we make mistakes.” She also said the company would soon be rolling out an option to appeal post removals.)
So far, the union’s message has been well received by the broader meme community. Administrators for accounts with millions of followers said they support the group’s efforts and would stand in solidarity with them. “I think the union is a good thing. There should be something like this,” said Sonny5ideUp, a memer with more than 1 million followers on Instagram. Jackson Weimer, a writer for Meme Insider who has also created several successful Instagram meme pages, said he thinks the union is a “good idea” and a necessary way to get Instagram to finally take memers seriously.
You had to assume this was happening from the moment Facebook announced a home speaker. The company confirmed the news today after a scoop from CNBC’s Sal Rodriguez.
Michael Hiltzik looks at the (doomed) shareholder proposals to reduce Mark Zuckerberg’s authority over the platform:
“Facebook operates essentially as a dictatorship,” observes the supporting statement for one of those proposals. “Shareholders cannot call special meetings and have no right to act by written consent. A supermajority vote is required to amend certain bylaws. Our Board is locked into an out-dated governance structure that reduces board accountability to shareholders.”
One of the four proposals would establish an independent chair, instead of leaving the chair and CEO positions both in Zuckerberg’s hands. Another would require majority votes for directors, so they couldn’t skate into their board positions purely on Zuckerberg’s say-so. The third would call for all shares, whether Class A or Class B, to have a single vote. A fourth calls for the board to consider “strategic alternatives” including a breakup of the company.
Pinterest is wasting a lot of breath in the run-up to its IPO trying to convince people it’s not a social network:
Pinterest calls itself a “visual discovery” platform for people to get ideas for different aspects of their lives, whether that’s curating a wardrobe, planning a vacation or wedding, or furnishing a new home. In a video to investors, Silbermann illustrates why Pinterest is unique. He describes social media platforms as a way to document the past and entertain oneself, while Pinterest is a “utility” for future activities.
“Social media at its best makes you feel socially validated, while Pinterest at best makes you feel creative and empowered to act,” Silbermann says.
Well here’s a nice story about the good that social media can do: in the wake of the Notre Dame cathedral fire, journalist Yashar Ali asked his 395,000 Twitter followers to consider donating to a crowdfunding campaign for three black churches in Louisiana that were recently gutted by arson. Within a day, the campaign was closing in on $1.5 million in donations.
Shouldn’t you be able to hide an awful reply to your tweet so your followers don’t see it? Well, Twitter says you will be able to in June. Or as I like to think of it, 17 Jack Dorsey podcasts from now.
Josh Constine profiles a new social network built for making tutorials. It’s an interesting attempt to unbundle YouTube:
Sick of pausing and rewinding YouTube tutorials to replay that tricky part? Jumprope is a new instructional social network offering a powerful how-to video slideshow creation tool. Jumprope helps people make step-by-step guides to cooking, beauty, crafts, parenting and more using voice-overed looping GIFs for each phase. And creators can export their whole lesson for sharing on Instagram, YouTube, or wherever.
Sure, why not:
The next phase of social media is about hanging out together while apart. Rather than performing on a live stream or engaging with a video chat, Instagram may allow you to chill and watch videos together with a friend. Facebook already has Watch Party for group co-viewing, and in November we broke the news that Facebook Messenger’s code contains an unreleased “Watch Videos Together” feature. Now Instagram’s code reveals a “co-watch content” feature hidden inside Instagram Direct Messaging.
The Washington Post is experimenting with giving readers in India direct access to reporters through the country’s favorite messaging app:
New Delhi Bureau Chief Joanna Slater and Correspondent Niha Masih will take people behind the scenes of their reporting and share regular updates on the election. Users who opt in to the channel will also be able to engage directly with Slater and Masih.
Sean Parker, the founding president and eventual critic of Facebook, writes a blurb for his former CEO on the occasion of Zuckerberg being named to a Time list.
Faced with tensions between the company’s idealistic belief in impartiality and “openness” and the realities of managing this global platform (public scrutiny, accusations of privacy abuses and government investigations), Mark will need to make hard choices. My hope is that he remains true to the ideals upon which the company was founded—choosing to promote universal values like decency over sensationalism, intimacy over social status, and human dignity over tribalism—or in Zuckspeak, simply: “goodness.”
And finally …
It’s easy to be discouraged by the recent global decline in democracy. So here’s a story about how the billionaire founder of Foxconn is running for president of Taiwan for one perfect reason: the Chinese goddess of the sea, Mazu, told him to:
“Today, Mazu told me I should be inspired by her to do good things for people who are suffering, to give young people hope, to support cross-strait peace,” Gou said, adding that the goddess had recently spoken to him in a dream. “I came to ask Mazu and she told me to come forward.”
I look forward to the presidential debates.
Talk to me
Send me tips, comments, questions, and a doctor’s note proving that you have been vaccinated for measles: [email protected].