Facebook announced on Thursday that, in response to complaints that it has not done enough to monitor extremist content, it will introduce artificial intelligence to work alongside human moderators in reviewing and removing content.
After the recent attack in London, Prime Minister Theresa May called upon big internet companies including Facebook to take a more active stand in the fight against terrorism. “We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet — and the big companies that provide internet-based services — provide,” she stated.
Facebook believes that over time human moderators will become less necessary, as the artificial intelligence will be able to teach itself to identify key phrases that have already been flagged, identify Facebook users who belong to extremist Facebook groups, and monitor those who repeatedly view these pages or create fake Facebook accounts to spread content. As of now, 150 human moderators who collectively speak 30 languages are working with the artificial intelligence to spot posts that violate Facebook’s terms of use.
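The phrase-matching step described above can be sketched in very rough form as a lookup against previously flagged phrases. The phrase list and function below are purely illustrative assumptions for the sake of the sketch; Facebook has not published how its actual system works:

```python
# A minimal, hypothetical sketch of key-phrase flagging.
# FLAGGED_PHRASES is an illustrative stand-in for phrases that human
# moderators have previously flagged; it is not real data.
FLAGGED_PHRASES = {"join our fight", "pledge allegiance to"}

def flag_post(text: str, flagged_phrases=FLAGGED_PHRASES) -> bool:
    """Return True if the post contains any previously flagged phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in flagged_phrases)
```

In practice, a production system would go far beyond literal substring matching (using, for example, machine-learned text classifiers), but the sketch shows the basic idea of reusing human moderators' past decisions.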
J.M. Berger, a fellow with the International Centre for Counter-Terrorism at The Hague, believes that social media companies like Facebook are going to face a big challenge when it comes to monitoring extremist activity because that requires them to define what counts as terrorism. “The problem, as usual, is determining what is extremist, and what isn’t, and it goes further than just jihadists. Are they just talking about ISIS and Al Qaeda, or are they going to go further to deal with white nationalism and neo-Nazi movements?”
First Amendment rights to free speech also come into question, as does the point at which policing becomes overreach. Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, stated, “Will it be effective or will it overreach? Are they trying to discourage people from joining terrorist groups to begin with, or to discourage them from posting about terrorism on Facebook?”
These are all questions that Facebook, and the other social media companies being called upon to police their platforms more actively, will have to find answers to as they increase the amount of content they remove from their sites.
Featured Image Via Flickr/b_d_solis