Matt Rivitz, founder of the advertising activism org Sleeping Giants, spent the months leading up to the coronavirus pandemic trying to reimagine what an ad network could be. Partnered with a non-governmental organization, the agency veteran wanted to formulate a more ethical version of the business model he sees as profiting off of misinformation and hate.
But as the onset of the pandemic made investment options dicier—and Rivitz began to better appreciate the scope of change that would be needed—he switched gears. He has now started working with a startup called Nobl that uses language detection to identify patterns around conspiracy theories and hate content and funnel programmatic money that would be spent on it to more quality outlets.
Meanwhile, the normal work of Sleeping Giants and the model of consumer activism it helped pioneer continues as ever. This week, several advertisers including T-Mobile, Disney and Papa John’s distanced themselves from Tucker Carlson’s Fox News show over comments he made about Black Lives Matter protests after widespread backlash.
Adweek sat down with Rivitz to talk about the new undertaking, why platforms still aren’t doing enough to crack down on misinformation and the problems with programmatic business models.
This interview has been edited for length and clarity.
Adweek: Why did you choose to team up with this company specifically?
Matt Rivitz: Basically it provides some pretty radical transparency to a business that doesn’t have any right now.
I’ve had so many advertisers come to me and say, “Can you give me your blacklist?” and I never wanted it to be my point of view, because my point of view doesn’t represent everyone. You kind of have to know it when you see it. We wanted to create something that can do it through language instead of doing whitelists and blacklists and having to keep all those decisions under wraps. It’s just basically like, “We’ll show you where you are according to our algorithm. And you can make that call.”
What kinds of problems with the blacklist system led you to this?
It’s inefficient. It doesn’t account for all the sites that open every single day that are going to be trying to take advantage of the system. We’re heading into an election, and there’s going to be a vast number of sites that are going to open that are going to be highly inflammatory on any side of an issue, and they’re going to be able to monetize themselves because the system is so opaque. Google and Facebook aren’t really doing a great job—in my mind—of policing their ad network because they want to monetize everything.
Platforms like Facebook and Google are also using natural language AI to root out some of the same content you are talking about. Why are those efforts inadequate in your eyes?
It’s not in their business interest. It’s not my business interest either; it’s a mission-driven thing for me. But I don’t need to make a billion dollars—I just want to clean up the internet. Right now, the business model is to reward engagement. And it just so happens that hate and divisiveness and conspiracies have more engagement than facts. So in order to change that, we need to change the game a little bit.
What do you think of Twitter’s recent efforts to add more fact-checking disclaimers and Facebook’s refusal to do so?
Look, Twitter has avoided doing the right thing for a very long time. And they’re rightfully enforcing their terms of service a bit now, but they haven’t gone all the way. They’ve definitely made bigger steps than most platforms, I will say, in trying to root out a lot of hate and harassment on the platform. It still happens all the time—someone published my address again just the other day. It happens so often now, it’s no big deal anymore.
But Facebook unfortunately continues to dig in. They won’t enforce the rules that they themselves wrote, and I think that’s really shameful. The fact that they’re going to allow ads from politicians with verifiable lies is tremendously damaging to democracy—especially because they have so much reach. And I think that Facebook groups, where you can’t see what’s happening and white supremacists and extremists are able to write whatever they want and organize without anyone ever finding out, are amazingly dangerous. They don’t enforce their hate speech policies, and they don’t enforce their incitement to violence policies either.
Instagram just put out a banner supporting Black Lives Matter at the top of their feed, and I just think it’s tremendously hypocritical because no one has allowed more racism and harassment on their platforms than Facebook. I mean, they caused a genocide.
And getting back to Nobl, you think that it will require technology on a bigger scale to address some of these problems?
I do. We’re sitting on a major problem that’s caused by a type of business. So we need a business-type approach. Twitter doesn’t scale; you’re only gonna reach so many people. There have probably been 4,400 advertisers that have opted out of Breitbart after they found out that they were on there. That’s one site. Ads continue to show up there because Google and Facebook won’t remove the site from their networks, despite the fact that it’s breaking their terms of service pretty regularly, at least the way I’m interpreting them.
I’ve had so many marketers that have said, “OK, what do we do now?” and I’m like, “I don’t know, get better.” But that’s really hard to do. I don’t believe that the tools they have right now are sufficient, because these brand safety problems keep happening, and it’s hard to be a responsible marketer if you don’t have the right tools. I think, in a way, it’s a business problem. And I’m more than happy to be a part of a business that’s part of the solution.