Facebook kicked off a series of explanatory posts it calls Hard Questions with a detailed look at how the social network combats terrorism, penned by director of global policy management Monika Bickert and counterterrorism policy manager Brian Fishman. Announcing the series, vice president of communications and public policy Elliot Schrage listed the questions it plans to tackle:
- After a person dies, what should happen to their online identity?
- How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
- Who gets to define what’s false news and what’s simply controversial political speech?
- Is social media good for democracy?
- How can we use data for everyone’s benefit, without undermining people’s trust?
- How should young internet users be introduced to new ways to express themselves in a safe environment?
Schrage added that the social network welcomed further topic suggestions, which can be emailed to firstname.lastname@example.org.
In the initial post of the series, Bickert and Fishman provided details on:
- How Facebook uses artificial intelligence—including image matching and language understanding—in its counterterrorism efforts.
- Where humans come into play—both users and experts retained by Facebook.
- Partnerships with governments, technology industry companies and other organizations.
Highlights of their post follow:
Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities. Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, we know that the internet does play a role—and we don’t want Facebook to be used for any terrorist activity whatsoever.
When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.
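The matching step Bickert describes can be sketched as a lookup against a database of fingerprints of previously removed content. Facebook has not published its implementation, and production systems use perceptual hashes that survive re-encoding and cropping; the minimal sketch below substitutes a plain SHA-256 digest and a hypothetical `KNOWN_BAD_DIGESTS` set purely to illustrate the upload-time check.

```python
import hashlib

# Hypothetical database of digests of previously removed images.
# Real systems use perceptual hashes (robust to re-encoding),
# not exact-match SHA-256 digests as shown here.
KNOWN_BAD_DIGESTS = {
    # SHA-256 of the bytes b"test", standing in for a removed image
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest(data: bytes) -> str:
    """Return the hex digest used as the lookup key for an upload."""
    return hashlib.sha256(data).hexdigest()

def should_block_upload(data: bytes) -> bool:
    """Block an upload whose fingerprint matches known removed content."""
    return digest(data) in KNOWN_BAD_DIGESTS
```

Because the check runs at upload time, a matching file is rejected before it is ever published, which is what "never reaches the platform" means in practice.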
We have also recently started to experiment with using AI to understand text that might be advocating for terrorism. We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.
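The feedback loop described above can be illustrated with a toy classifier: reviewed posts feed back into the model, which adjusts its weights whenever it scored an example incorrectly. This bag-of-words perceptron is purely illustrative and bears no resemblance to Facebook's production models; the class name, labels, and sample texts are all hypothetical.

```python
from collections import defaultdict

# Toy text classifier: 1 = flagged as violating, 0 = benign.
class TextSignalModel:
    def __init__(self, threshold: float = 0.0):
        self.weights = defaultdict(float)  # per-word weights
        self.threshold = threshold

    def score(self, text: str) -> float:
        return sum(self.weights[w] for w in text.lower().split())

    def predict(self, text: str) -> int:
        return 1 if self.score(text) > self.threshold else 0

    def feedback(self, text: str, label: int, lr: float = 1.0) -> None:
        """Feedback loop: nudge word weights whenever a reviewed
        example was scored incorrectly, improving the model over time."""
        error = label - self.predict(text)
        if error:
            for w in text.lower().split():
                self.weights[w] += lr * error

model = TextSignalModel()
# Hypothetical examples drawn from already-reviewed, removed posts.
reviewed = [("join the cause support violence", 1),
            ("family photos from the picnic", 0)]
for _ in range(3):  # several passes over the reviewed examples
    for text, label in reviewed:
        model.feedback(text, label)

print(model.predict("support violence now"))  # 1 (flagged)
```

The key property is that human review decisions (the labels) drive the weight updates, so each pass over newly reviewed content sharpens the text-based signals the paragraph describes.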
Because we don’t want terrorists to have a place anywhere in the family of Facebook applications, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe.
In the past year, we’ve also significantly grown our team of counterterrorism specialists. At Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility. This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. Within this specialist team alone, we speak nearly 30 languages.
Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that is impossible for companies to develop independently. We have learned much through briefings from agencies in different countries about ISIS and Al Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration by organizations such as the EU Internet Forum, the Global Coalition Against Daesh and the U.K. Home Office.
Image courtesy of whitemay/iStock.