Facebook announced that 99 percent of the ISIS and al-Qaeda terror-related content it removes from the social network is detected before users flag it and, in some cases, before it even goes live.
Head of global policy management Monika Bickert and head of counterterrorism policy Brian Fishman wrote in the latest installment of the social network's Hard Questions series that automated systems, including photo and video matching and text-based machine learning, enable Facebook, once it becomes aware of a piece of terror content, to remove 83 percent of subsequently uploaded copies within one hour.
Facebook has also enlisted the help of more experts, following up this summer’s creation of the Global Internet Forum to Counter Terrorism with Microsoft, Twitter and YouTube by announcing that it has expanded several partnerships in recent months, including with Flashpoint, the Middle East Media Research Institute, the SITE Intelligence Group and the University of Alabama at Birmingham’s Computer Forensics Research Lab.
Bickert and Fishman said those organizations help Facebook flag pages, profiles and groups potentially associated with terrorist groups for review. They also share photo and video files found elsewhere on the web that are associated with ISIS and al-Qaeda, which the social network can run against its matching algorithms to remove those files or prevent them from being uploaded to Facebook altogether.
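The matching workflow described above, in which known files are fingerprinted so that re-uploads can be blocked automatically, can be sketched in miniature. This is a hypothetical illustration, not Facebook's actual system: the `HashMatcher` class and its methods are invented for this example, and production systems use perceptual hashes that survive re-encoding and cropping, whereas the plain SHA-256 digest used here only catches exact byte-for-byte copies.

```python
import hashlib


class HashMatcher:
    """Toy registry of fingerprints for previously removed content."""

    def __init__(self):
        self.known_hashes = set()

    def flag(self, content: bytes) -> None:
        # Record the digest of a file already identified for removal.
        self.known_hashes.add(hashlib.sha256(content).hexdigest())

    def is_known(self, content: bytes) -> bool:
        # Check a new upload against previously flagged digests.
        return hashlib.sha256(content).hexdigest() in self.known_hashes


matcher = HashMatcher()
matcher.flag(b"previously-removed image bytes")

print(matcher.is_known(b"previously-removed image bytes"))  # True: exact copy
print(matcher.is_known(b"slightly altered image bytes"))    # False: digest differs
```

The limitation shown in the last line is why partner organizations sharing the files themselves, rather than just their hashes, is useful: a new hash can be computed for each altered variant as it is discovered.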
They wrote, “Often analysts and observers will ask us at Facebook why, with our vast databases and advanced technology, we can’t just block nefarious activity using technology alone. The truth is that we need not only technology but also people to do this work. And in order to be truly effective in stopping the spread of terrorist content across the entire internet, we need to join forces with others.”