Daily Management Review

Celebrations of Nice Attack Stemmed by Swift Action from Twitter and Facebook


07/18/2016




In a rare round of praise for a platform that has often struggled to contain violent propaganda, watchdog groups said that Twitter Inc moved swiftly to remove posts from Islamic extremists glorifying a truck attack in Nice, France.
 
A spate of violence over the past several months has posed numerous challenges to social media companies. Internet monitoring groups said that restrictions on social media were initially imposed following the unsuccessful military coup in Turkey. However, as events unfolded and numerous citizens broadcast live video on Facebook and sent tweets, the crackdown on social media in the country appeared to ease.
 
U.S. and French authorities are still determining whether the Tunisian man who drove a truck into Bastille Day crowds on Thursday, killing 84 people, had ties to Islamic militants.
 
According to the Counter Extremism Project, a private group that monitors and reports extremist content online, at least 50 Twitter accounts praising the attack used the hashtag "Nice" in Arabic. The group said that many of the accounts, which appeared almost immediately after the attack, shared images praising the carnage.
 
The pattern was similar to what was seen on Twitter following the attacks in Paris last year and in Brussels earlier this year. This week, however, Twitter, which once took a purist approach to free speech but has since revised its rules, acted much more quickly.
 
Twitter's rules have always banned violent content, such as advocacy of terrorism, and those rules have recently been made more explicit.
 
"Twitter moved with swiftness we have not seen before to erase pro-attack tweets within minutes. It was the first time Twitter has reacted so efficiently," Counter Extremism Project said in a statement.
 
Twitter had responded with unusual alacrity, said Rabbi Abraham Cooper, head of the Simon Wiesenthal Center's Digital Terrorism and Hate project.
 
Twitter said in a statement that it condemns terrorism and bans it on its site but did not provide any information about account suspensions.
 
Over the past two years, Twitter, Facebook Inc and other internet firms have ramped up efforts to quickly remove violent propaganda that violates their terms of service.
 
The major challenge both companies face is distinguishing between graphic images shared to glorify or celebrate attacks and those shared by witnesses documenting events.
 
Facebook's "community standards" dictate the types of content that are and are not allowed on the platform. Those standards explicitly ban "terrorism" and related content, such as posts or images that celebrate attacks or promote violence.
 
Even so, the company's policies on graphic images are more nuanced. Like most large internet companies, Facebook relies on users and eagle-eyed advocacy groups to report objectionable content to teams of human editors, who review each submission and decide whether a post should be deleted.
 
Internet companies have continually updated their terms of service over the past two years to establish clearer, and in many cases stricter, ground rules on what content is permissible on their platforms.

In response to pressure from U.S. lawmakers and counterextremism groups, Facebook and YouTube have recently moved toward implementing automated processes to block or rapidly remove Islamic State videos and similar material. Such measures, however, have not stopped Islamist militants from celebrating attacks online and even updating their tactics.
 
According to screenshots from the Wiesenthal Center, some Islamic State supporters used hashtags that were trending globally after the Nice attack, such as #PrayForNice, #NiceAttack and #Nice, so that their celebratory tweets would be shown to a wider audience.
 
(Source: www.reuters.com)