In response to political pressure in Europe over militant groups using the social network for propaganda and recruiting, Facebook Inc offered additional insight into its efforts to remove terrorist content.
Monika Bickert, Facebook's director of global policy management, and Brian Fishman, counter-terrorism policy manager, explained in a blog post that Facebook has ramped up its use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly.
Facebook's statement was met with skepticism by some who have criticized U.S. technology companies for moving slowly: the world's largest social media network, with 1.9 billion users, has not always been so open about its operations.
"We've known that extremist groups have been weaponizing the internet for years," said Hany Farid, a Dartmouth College computer scientist who studies ways to stem extremist material online.
"So why, for years, have they been understaffing their moderation? Why, for years, have they been behind on innovation?" Farid asked. He called Facebook's statement a public relations move in response to European governments.
Britain's interior ministry welcomed Facebook's efforts, while saying that technology companies needed to go further.
"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.
Facebook and other providers of social media such as Google and Twitter have been pressed to do more to remove militant content and hate speech by countries such as Germany, France and Britain, where, in recent years, civilians have been killed and wounded in bombings and shootings by Islamist militants.
Government officials have threatened to fine Facebook and to strip it of the broad legal protections it enjoys against liability for the content posted by its users.
The company said in the blog post that it uses artificial intelligence for image matching, which lets it check whether a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates.
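Facebook has not published its matching algorithm, but perceptual hashing is a common technique for this kind of matching. The Python sketch below, using a simple "average hash" and hypothetical file names, illustrates how an upload could be compared against fingerprints of previously removed images even after re-encoding or small edits:

    # Illustrative sketch only: Facebook has not published its algorithm.
    # A simple "average hash" shows the general idea of comparing an upload
    # against fingerprints of previously removed images.
    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to an 8x8 grayscale grid, then encode each pixel as one bit:
        # 1 if brighter than the grid's mean, 0 otherwise.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming_distance(a, b):
        # Count the differing bits between two 64-bit hashes.
        return bin(a ^ b).count("1")

    # Hypothetical fingerprints of content already removed from the platform.
    known_hashes = {average_hash("removed_propaganda.jpg")}

    def matches_known(path, threshold=5):
        # A small Hamming distance tolerates re-encoding and minor edits.
        h = average_hash(path)
        return any(hamming_distance(h, k) <= threshold for k in known_hashes)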
Last year, YouTube, Facebook, Twitter and Microsoft created a common database of digital fingerprints automatically assigned to videos or photos of militant content, to help each other identify the same content on their platforms.
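The companies have not made the database's schema or exchange protocol public; the following is a minimal sketch, assuming a pooled store to which each member platform contributes fingerprints of removed content and which any member can query:

    from dataclasses import dataclass, field

    @dataclass
    class SharedHashDB:
        # Hypothetical pooled store: fingerprint -> platform that first contributed it.
        hashes: dict = field(default_factory=dict)

        def contribute(self, fingerprint, platform):
            # Record a fingerprint of removed content, keeping the first contributor.
            self.hashes.setdefault(fingerprint, platform)

        def is_known(self, fingerprint):
            # Any member platform can check an upload against the pooled set.
            return fingerprint in self.hashes

    # Usage: one platform contributes a fingerprint, another checks an upload.
    db = SharedHashDB()
    db.contribute(0x9F3A1C5E7B2D4680, "facebook")
    print(db.is_known(0x9F3A1C5E7B2D4680))  # True: content already flagged elsewhere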
Facebook now analyzes text that has already been removed from the site for praising or supporting militant organizations, in order to develop text-based signals that can identify such propaganda.
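Facebook has not disclosed its models, but one standard way to turn removed posts into text-based signals is to train a classifier on them. A minimal sketch with scikit-learn, using hypothetical training examples:

    # Illustrative only; Facebook has not disclosed its models or data.
    # Train a classifier on hypothetical examples of removed propaganda
    # versus benign posts, then score new text.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "join our glorious fight",            # hypothetical removed post
        "support the brave fighters",         # hypothetical removed post
        "great recipe for weeknight dinner",  # benign
        "photos from our family trip",        # benign
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # A high score routes the post to human review rather than automatic removal.
    score = model.predict_proba(["rally to the fight"])[0][1]
    print(f"propaganda score: {score:.2f}")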
"More than half the accounts we remove for terrorism are accounts we find ourselves; that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists," Bickert said.
When asked why Facebook was opening up now about policies it had long declined to discuss, Bickert said recent attacks were naturally prompting conversations among people about what they could do to stand up to militancy.
In addition, she said, "We're talking about this because we are seeing this technology really start to become an important part of how we try to find this content."
Elliot Schrage, vice president for public policy and communications, said in a statement that Facebook's blog post on Thursday was the first in a planned series of announcements addressing "hard questions" facing the company. Other questions, he said, include: "Is social media good for democracy?"
(Source: www.reuters.com)