With the growing number of terrorist attacks in recent years, it is clear that businesses across the world must take a stand and do everything they can to help prevent them. This is especially true for social media platforms, where terrorists spread their message, recruit, and search for “inspiration.” Tech companies, particularly those in social media, are stepping up to the plate by implementing technology and security measures that prevent terrorist organizations from communicating and spreading their message through their networks. While most of these companies fight to preserve free speech, they recognize that they can help curb the reach of groups that strive to wreak havoc and cause misery.
With over 1.9 billion users worldwide, Facebook hosts more activity than any human team could fully monitor. While the platform had already implemented technology to fight copyright infringement and child pornography, it recognized the need to defend itself against other unacceptable content as well. Facebook has consulted with counter-terrorism agencies, law enforcement, and other government agencies, and is now using artificial intelligence (AI) to help block terrorist posts on its website. One technique is image matching, which removes known terrorism-related photos and videos. Another is language understanding, which analyzes text and removes posts that violate policy by praising terrorist groups or their actions. When a page, group, post, or profile is blocked for terrorist content, algorithms scan the profiles that engaged with that material and can block them as well. The algorithms also try to prevent blocked or banned users from creating new profiles and resuming the same behavior.
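Facebook has not published the details of its image-matching system, but matching against a database of known content is often done with perceptual hashes, which stay stable under small edits like recompression. Here is a minimal sketch of that idea using a difference hash (dHash) over a grayscale pixel grid; the function names, the tiny grids, and the distance threshold are illustrative assumptions, not Facebook's actual implementation.

```python
def dhash(pixels):
    """Difference hash of a grayscale pixel grid (rows of 0-255 values).

    Each bit records whether brightness increases left-to-right, so the
    hash captures the image's gradient pattern rather than exact pixels.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known(candidate_hash, known_hashes, max_distance=4):
    """Flag a hash within max_distance bits of any known-content hash."""
    return any(hamming(candidate_hash, k) <= max_distance for k in known_hashes)

# Tiny 3x4 grids: "near" differs from "known" by one slightly shifted pixel.
known = [[10, 50, 20, 80], [30, 30, 90, 10], [60, 5, 70, 70]]
near  = [[10, 50, 20, 80], [30, 30, 90, 15], [60, 5, 70, 70]]
db = {dhash(known)}
print(matches_known(dhash(near), db))  # the near-duplicate still matches
```

Because the hash encodes gradients, a lightly edited copy of a known image lands within a few bits of the original hash, which is why this family of techniques survives re-uploads that defeat exact file hashing.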
Facebook’s WhatsApp, a messaging application, has been under fire for encrypting conversations and thereby allowing terrorists to communicate freely on the platform. This became the subject of many media inquiries following a terrorist attack in London, after which authorities were unable to decipher the attacker’s last message. Although WhatsApp has not changed its encryption methods, the company provides all the information it can when ordered to by law enforcement.
In 2015, Twitter began working to combat extremists utilizing its platform, and the company has recently increased those efforts. It uses AI to scan the platform for posts that resemble entries in its internal database of terrorism-related text, imagery, and videos. Twitter reported that between July and December of 2016, almost 400,000 accounts were suspended for terrorism-related issues; 75% of those suspensions were prompted by internal spam-fighting tools.
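Twitter has not described how its scanning compares posts to its internal database, but text similarity is commonly measured with bag-of-words cosine similarity. The sketch below is a minimal illustration of that approach; the function names, example strings, and 0.8 threshold are assumptions for demonstration, not Twitter's actual system.

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words term counts for a lowercased post."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors (0.0 to 1.0)."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_similar(post, database, threshold=0.8):
    """Return True if the post closely matches any known item."""
    v = vectorize(post)
    return any(cosine(v, vectorize(known)) >= threshold for known in database)

database = ["join our cause today brothers"]
print(flag_similar("brothers join our cause today", database))  # reworded copy matches
print(flag_similar("the weather is nice today", database))      # unrelated post does not
```

Cosine similarity ignores word order, so reshuffled copies of a known post still score high, while ordinary posts that happen to share a common word score low. Production classifiers are far more sophisticated, but the compare-against-known-content pattern is the same.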
Our team unanimously condemns acts of terror and social content promoting terrorist groups, individuals, or acts. We are proud to see technology companies taking part in the fight against this propaganda, and we look forward to seeing how advances in AI language understanding and other technology will further deter these groups and individuals from using social media to spread their message. Stay tuned to our blog for more industry news and tech tips.