Some of the web’s biggest destinations for watching videos, including YouTube and Facebook, have quietly started to remove extremist content from their sites, according to Reuters, citing two people familiar with the matter.
The sites are deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources told Reuters. The two sources would not say how videos in the databases were initially identified as extremist.
The technology in use looks for “hashes,” digital fingerprints that allow all content with matching fingerprints to be removed rapidly.
Such a system would catch attempts to repost content already identified as unacceptable, but would not automatically block videos that have not been seen before.
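The matching process described above can be sketched in a few lines. This is a simplified illustration, not the companies' actual system: real deployments reportedly use robust perceptual fingerprints that survive re-encoding, whereas the cryptographic hash below only matches byte-identical reposts. The sample video bytes and the `banned_hashes` set are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints ("hashes") of content
# already identified as unacceptable.
banned_hashes = {
    hashlib.sha256(b"example banned video bytes").hexdigest(),
}

def fingerprint(video_bytes: bytes) -> str:
    """Compute a digital fingerprint for an uploaded video."""
    return hashlib.sha256(video_bytes).hexdigest()

def should_block(video_bytes: bytes) -> bool:
    """Flag a repost whose fingerprint matches known banned content."""
    return fingerprint(video_bytes) in banned_hashes

# A repost of already-banned content is caught,
# but a never-before-seen video passes through.
print(should_block(b"example banned video bytes"))  # True
print(should_block(b"brand new footage"))           # False
```

As the article notes, this design catches reposts of known material but cannot automatically block a video that has never been seen before; new content only gets blocked once its fingerprint is added to the database.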
The companies would not confirm that they are using the method or discuss how it might be employed. But numerous people familiar with the technology told Reuters that posted videos could be checked against a database of banned content to identify new postings of, say, a beheading or a lecture inciting violence.
The companies now using automation are not publicly discussing it, two sources said, in part out of concern that terrorists might learn how to manipulate their systems or that repressive regimes might insist the technology be used to censor opponents.
The two people familiar with the still-evolving industry practice confirmed it to Reuters after the Counter Extremism Project publicly described its content-blocking system for the first time last week and urged the big internet companies to adopt it.