Artificial Intelligence for Counterterrorism?

  29 July 2019

The recent dispute between the Associated Press and Facebook over how successfully content posted by terrorist organizations is removed should be a wake-up call about the content moderation capabilities of these platforms. Facebook's data indicates that 99% of terrorism content is removed, while AP contends that the true figure is closer to 38%. The point here is that machine learning adds only a limited capability to human content moderation. The current state of the art in this area falls far short of expectations, which have been inflated by a fantasy built around the supposedly magical tool of artificial intelligence (AI).

Terrorist networks will continue to exploit advanced technology in areas such as social network mapping and recruitment, benefiting from the AI arms race. New AI technology in drones, among other things, will produce cheap versions that may easily fall into the hands of terrorists. There is no doubt that terrorist groups like ISIS will attempt to use every possible means to pursue terrorist activities. Gaps in content moderation on social media and communication networks will present opportunities for ISIS and others.

Machine learning has a technological aspect, a social context, and an industry dimension. It is at once a product of high technology and a market for it. The social context is where it affects people's daily lives: there is a growing AI presence that influences socio-economic conditions. This is an evolving phenomenon that requires social, political, legal and ethical evaluation, in addition to technological assessment.

Machine learning relies on algorithms known as classifiers. A classifier must be trained on data, and it works well only when the categories in that data are clearly distinguishable, no matter how massive the dataset is. Because it is fed labeled categories, it is fragile in unforeseen conditions; it has no cognitive ability comparable to a human's in this sense. That is why one should not expect machine learning to respond to the complexities of societal and cultural values. Automated tools built for one setting may be fragile in another. Yet it is also next to impossible to moderate content at today's scale of social media and related platforms with human capability alone. The need for machine learning is obvious.
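
To make the classifier point concrete, the sketch below (illustrative only, with hypothetical toy examples and labels) shows how such a tool is trained on labeled categories and why content that does not resemble its training data produces unreliable scores.

```python
# Minimal sketch of a text classifier for content moderation.
# Illustrative only: the examples, labels and query are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labeled training data: the classifier only "knows" the categories it is fed.
texts = [
    "join our fight, brothers, travel to the caliphate",   # flagged
    "watch our new martyrdom operation video",             # flagged
    "family picnic photos from the weekend",               # benign
    "recipe for grilled vegetables and flatbread",         # benign
]
labels = [1, 1, 0, 0]  # 1 = terrorist content, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Content using new slang, coded language or another language does not
# resemble the training data, so the score is unreliable -- the fragility
# described above.
new_post = ["commemorating the lions of the new province"]
print(model.predict_proba(new_post)[0][1])  # probability of being flagged
```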

Uploaders of such content are aware of the deficits of machine-learning-enhanced tools. They develop ways to bypass the filters of automated systems, modifying content until it stays on the platform for as long as possible. Human review would help automated tools discover their blind spots, but building truly effective filters may not always be possible. The industry side of machine learning does not like to disappoint its customers: providers may face considerable fines or penalties if governments are dissatisfied with their removal rates, so they err toward taking down benign posts as well. This over-filtering pushes machine learning toward the "artificial" side rather than the desired "intelligence" in content management.
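
A minimal sketch of that over-filtering dynamic follows; the scores, thresholds and post descriptions are hypothetical, but it shows how lowering the removal threshold to avoid missing terrorist content inevitably removes benign posts too.

```python
# Hypothetical illustration of the over-filtering trade-off.
def should_remove(flag_probability: float, threshold: float) -> bool:
    """Remove a post when the classifier's score crosses the threshold."""
    return flag_probability >= threshold

# Hypothetical classifier scores for a batch of posts.
posts = [
    ("recruitment video re-upload", 0.93),     # genuinely harmful
    ("news report quoting the video", 0.61),   # benign, similar wording
    ("satire mocking the group", 0.55),        # benign
    ("unrelated family photo", 0.05),          # benign
]

for threshold in (0.9, 0.5):
    removed = [title for title, score in posts if should_remove(score, threshold)]
    print(f"threshold={threshold}: removed {removed}")

# At 0.9 only the re-upload is removed; under pressure never to miss anything,
# a 0.5 threshold also removes the news report and the satire -- the
# "artificial" rather than "intelligent" outcome described above.
```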

The Christchurch attack illustrates what content moderation is up against. The shooter first circulated a manifesto to his supporters through a private communication channel, then used Facebook's livestream to broadcast the violence, and the video was multiplied by his supporters. Facebook removed the content as soon as it was recognized, but copies of the video and manifesto were edited countless times to bypass filters and continuously re-uploaded to reach the widest possible audience. There is a learning process among transnational terrorist networks, and ISIS takes part in it too; they improve and innovate to exploit weaknesses. What is needed is to prevent such content from going viral on these platforms in the first place. The problem with preventive measures is the risk of wrongly sweeping up benign content.
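
As a small illustration of why edited copies are hard to stop, the sketch below (illustrative only, with placeholder byte strings) shows that an exact-hash blocklist, the simplest form of re-upload detection, fails the moment a copy is re-encoded or watermarked; similarity-based matching is needed, and even that misses heavily altered versions.

```python
# Illustrative sketch: exact hashing cannot catch edited re-uploads.
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Exact cryptographic hash: any change yields a completely different value."""
    return hashlib.sha256(data).hexdigest()

# Placeholder byte strings standing in for the original and an edited copy.
original = b"<original livestream video bytes>"
edited   = b"<same video, re-encoded with a logo overlay>"

print(exact_fingerprint(original) == exact_fingerprint(edited))  # False
# A single crop, re-encode or watermark defeats an exact-hash blocklist,
# which is why platforms rely on perceptual (similarity-based) hashing and
# still miss many of the countless edited copies described above.
```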

The deletion of the historical record in this process is also problematic, since that record could be used in research to develop a better grasp of terrorist structures. Although this can be addressed with time-limited archives of moderated content, the removal of benign content remains the core of the problem. Companies are expected to limit such content, develop recognition systems, and operate within the confines of legal systems. Public-private and private-private partnerships, together with public debate on content moderation, will open up space for the necessary socio-political and industry environment and for better coordination against the various forms of terrorism-related content abuse.

ISIS has declared new provinces from Africa to Asia and from the Middle East to Central Asia, and it sent blessings to allegedly loyal groups after its defeat in Syria. Neither ISIS nor any other terrorist network has anything close to its own Silicon Valley. They have, however, benefited from the weaknesses and deficits of content-sharing platforms and AI-driven tools; for example, they can exploit recommendation engines, which can deliver extremist content to large audiences.

Automated tools are necessary because controlling or moderating this volume of content is beyond the capability of human teams alone. Machine learning is fragile, and human teams are overwhelmed by massive data. We have no option but to put all available capabilities to better use; in this case, machine learning for counterterrorism, with full recognition of its deficits and weaknesses. Machine learning is therefore a necessary component of this struggle, used alongside human cognitive capabilities to grasp the dynamics, alternatives and prospective activities of terrorists, including ISIS and others.

 

Read the original article on intpolicydigest.org.



