YouTube Plans to Have 10,000 People Reviewing Content Next Year

The video site has examined nearly 2 million videos for violent extremist material since June

YouTube's headquarters in San Bruno, Calif., may be a little more crowded in 2018 (littleny/iStock)

YouTube CEO Susan Wojcicki provided another update on the Google-owned video site’s safety efforts.

Wojcicki announced in a blog post that YouTube’s goal is to have more than 10,000 people across Google “working to address content that might violate our policies” in 2018. She added that YouTube’s trust and safety teams have reviewed nearly 2 million videos for violent extremist content since June, up from the 1 million total announced in October.

She wrote: “We are also taking aggressive action on comments, launching new comment moderation tools and, in some cases, shutting down comments altogether. In the past few weeks, we’ve used machine learning to help human reviewers find and terminate hundreds of accounts and shut down hundreds of thousands of comments. Our teams also work closely with the National Center for Missing and Exploited Children, the International Women’s Forum and other child safety organizations around the world to report predatory behavior and accounts to the correct law enforcement agencies.”

She went into more detail on YouTube’s use of machine learning, saying that the video site has removed more than 150,000 videos for violent extremism since June, after deploying the technology to flag such content for human review. Wojcicki said 98 percent of videos removed by YouTube for violent extremism are flagged by its machine learning algorithms, and the technology is helping its reviewers pull nearly five times as many videos as they did before it was deployed.
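To make that division of labor concrete, here is a minimal, purely illustrative sketch of a flag-for-human-review workflow of the kind Wojcicki describes. It is not YouTube's actual system; the classifier score, threshold value, and queue structure are all assumptions for illustration. The point it shows is that the machine learning model only flags uploads, while removal still requires a human reviewer's decision.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Video:
    video_id: str
    extremism_score: float  # hypothetical classifier confidence, 0.0-1.0

@dataclass
class ReviewQueue:
    """Holds ML-flagged videos until a human reviewer makes the final call."""
    pending: List[Video] = field(default_factory=list)

    def flag(self, video: Video) -> None:
        self.pending.append(video)

FLAG_THRESHOLD = 0.8  # assumed value; a real system would tune this empirically

def triage(videos: List[Video], queue: ReviewQueue) -> None:
    """The model only routes likely violations to humans; it does not remove anything."""
    for video in videos:
        if video.extremism_score >= FLAG_THRESHOLD:
            queue.flag(video)

if __name__ == "__main__":
    queue = ReviewQueue()
    uploads = [Video("a1", 0.95), Video("b2", 0.40), Video("c3", 0.85)]
    triage(uploads, queue)
    print([v.video_id for v in queue.pending])  # -> ['a1', 'c3']
```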

Wojcicki wrote: “Our advances in machine learning let us now take down nearly 70 percent of violent extremist content within eight hours of upload and nearly one-half of it in two hours, and we continue to accelerate that speed. Since we started using machine learning to flag violent and extremist content in June, the technology has reviewed and flagged content that would have taken 180,000 people working 40 hours per week to assess.”

She addressed the desire for greater transparency, saying that YouTube will begin issuing a regular report next year “where we will provide more aggregate data about the flags we receive and the actions we take to remove videos and comments that violate our content policies,” and adding that YouTube is developing additional tools on that front.

On the advertiser side, Wojcicki said YouTube will “apply stricter criteria” when determining which videos and channels are eligible for advertising, so that brands don’t see their ads running alongside objectionable content. YouTube also plans to add to its team of ad reviewers.

She wrote: “We are taking these actions because it’s the right thing to do. Creators make incredible content that builds global fan bases. Fans come to YouTube to watch, share and engage with this content. Advertisers, which want to reach those people, fund this creator economy. Each of these groups is essential to YouTube’s creative ecosystem—none can thrive on YouTube without the other—and all three deserve our best efforts.”


David Cohen (david.cohen@adweek.com) is editor of Adweek's Social Pro Daily.
Publish date: December 5, 2017
{"taxonomy":"","sortby":"","label":"","shouldShow":""}