Terrorism in The Age of Artificial Intelligence

Muskan

Modern technologies are becoming more pervasive in our day-to-day lives. From online shopping to personalized movie and music recommendations, these emerging technologies have had a profound impact on us. Yet even as we acknowledge their benefits, an often overlooked possibility is their malicious use, which poses a threat to humanity. Researchers in the fields of AI and robotics are seeking to enhance their understanding of these emerging technologies from the perspective of crime and terrorism in the age of Artificial Intelligence.

In 2018, United Nations Secretary-General António Guterres stated, “While these technologies hold great promise, they are not risk-free, and some inspire anxiety and even fear. They can be used to malicious ends or can have unintended negative consequences.” Early adoption of emerging technologies that are poorly regulated and governed, as is the case with artificial intelligence, presents a way for terrorists or violent extremists to further their malicious agendas. There are also growing concerns around the protection of human rights, including the right to privacy and the freedoms of thought and expression. Any exploration of the use of Artificial Intelligence must therefore be accompanied by efforts to prevent possible human rights infringements.

Killer Drones and Terrorism in the Age of Artificial Intelligence

A few years ago, the word “drone” primarily referred to a stingless male bee. Thanks to rapid advancements in modern technology and bio-mimicry, the term now refers to unmanned aerial vehicles. According to the Washington Post, in 2016, in a “first-of-its-kind” attack, the Islamic State executed a drone strike that killed two Kurdish fighters and wounded two French Special Operations troops. From 2016 on, drone strikes were a regular occurrence in the Islamic State’s operations in Iraq and Syria. Owing to the seriousness of the threat, in 2019 the European Commissioner for the Security Union, Julian King, warned that killer drones could be used by terrorist groups to target European cities. The threat has been felt in India as well: in June 2021, a drone attack was executed at the Air Force Station in Jammu. The strikes on the Indian Air Force (IAF) Jammu base and sightings of UAVs near the Ratnuchak and Kaluchak military stations are clear examples of terrorists using advanced technologies to stay ahead of security forces.

In using drones in combat, terrorists demonstrate their ability and intent to adopt and capitalize on innovative technologies, a trend that will continue with the advancement of Artificial Intelligence. Some researchers also believe that AI will threaten physical security in newer ways. One report highlights that the major areas of concern include easy access to high-tech products and the possible use of autonomous vehicles to deliver explosives.

Swarming attacks are also a grave source of concern, since they can involve thousands of tiny killer robots. As Max Tegmark notes in his book Life 3.0: Being Human in the Age of Artificial Intelligence: “If a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.” Thousands of weaponized robots swarming over a crowd of civilians would be extremely difficult to defend against and could result in massive casualties.

Challenges of Lethal Autonomous Weapons

Autonomous weapons identify, select, and engage targets on their own, with little to no human intervention. Depending on the task, an autonomous system can augment or replace human operators, freeing them for more complex and cognitively demanding activities. Many experts assert that the military stands to benefit from autonomous systems that remove humans from dull, dangerous, and dirty tasks. However, objections regarding the morality and effectiveness of these weapons have generated fierce debate worldwide.

Experts argue that AI systems may operate with a different understanding of the environment than their human operators, particularly when a system strays from its original design. Furthermore, AI systems may be biased because of the quality of their training data. For instance, there have been repeated cases of racial bias in AI facial recognition systems caused by a lack of diversity in the images used to train them. If such biases remain undetected and are incorporated into systems with lethal effect, the repercussions on the battlefield could be severe.
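
To make this concrete, one common check is a per-group error audit: compare a model's error rate across demographic groups and flag large gaps. The short Python sketch below is illustrative only; the predictions, labels, and group tags are hypothetical stand-ins for what would, in practice, be a curated benchmark dataset.

```python
# A minimal sketch of a per-group error audit. The arrays below are
# hypothetical; real audits use curated, representative benchmarks.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy example: a large gap between groups signals a biased model.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(error_rates_by_group(y_true, y_pred, groups))
# {'A': 0.25, 'B': 0.5} -- group B is misclassified twice as often
```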

The biggest concern, however, is that autonomous weapons technology is difficult to contain; it is extremely complicated to prevent this technology from falling into the hands of non-state actors. As Paul Scharre, a former U.S. defense official, stated:

“We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.”

Well-organized terrorist groups with ample financing and manpower will go to great lengths to acquire these technologies, whether now or in the near future. Autonomous weapons would also make it harder to attribute attacks to a single culprit or group of individuals, since attackers would no longer have to risk their own lives. It is therefore important to reflect on how to limit access to autonomous weapons systems.

It is essential to note that, as of now, there is still a long way to go before new weapons systems are fully autonomous. Experts are still debating whether fully autonomous systems can be developed successfully, whether at the military level or in the private sector. Some argue that systems which eliminate the human factor completely should never be built, given the adverse effects they could have. A distinction should therefore be drawn between fully autonomous decision-making and semi-autonomy.

Moving forward, important questions include how to prevent non-state actors from developing autonomous systems themselves and what countermeasures could stop such systems. Although many questions remain unaddressed at this point, the risks arising from terrorists leveraging AI remain low at this stage; knives and guns are still the primary means of attack for most terrorist groups. Yet it is important to consider how new technologies could change the threat landscape as they become more widely available to non-state actors and terrorists.

Counter-Terrorism Initiatives

Some of the use cases of AI in countering terrorism are as follows:

Predictive Analytics for Counter-Terrorism

Using predictive analytics in counter-terrorism operations can make agencies more proactive by anticipating future attacks and intervening in time. To accomplish this, an AI model must be fed large volumes of data on the behavior of suspected individuals. By examining such data, a model can potentially predict a person’s likely future activities. With the explosion of data generated by individuals’ online behavior, especially on social media, there has been growing interest in analyzing how this information could be used to predict terrorist activities.
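
In outline, such a system amounts to training a classifier on behavioral features and scoring new profiles. The Python sketch below, using scikit-learn, is a minimal illustration under strong assumptions: the feature names, the toy training rows, and the labels are all hypothetical, and any real deployment would require far more data, rigorous validation, and legal and ethical review.

```python
# A minimal sketch of predictive risk scoring with a classifier.
# All features and training rows are hypothetical illustrations.
from sklearn.ensemble import RandomForestClassifier

# Each row: hypothetical behavioral features, e.g.
# [posts_per_day, extremist_keyword_ratio, network_centrality].
X_train = [
    [2.0, 0.01, 0.1],
    [5.0, 0.40, 0.7],
    [1.5, 0.02, 0.2],
    [6.0, 0.35, 0.8],
]
y_train = [0, 1, 0, 1]  # 1 = previously flagged case (hypothetical)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Probability-style risk score for a new, unseen profile.
risk = model.predict_proba([[4.5, 0.30, 0.6]])[0][1]
print(f"risk score: {risk:.2f}")  # closer to 1.0 = higher predicted risk
```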

However, it is important to note that the application of such a model is limited by the unpredictable nature of human behavior and the fact that most terrorist plots are prepared covertly. Moreover, human rights experts and civil society organizations have raised questions about the ethical implications of implementing such a model, as it could spark civil liberties challenges.

Identifying Misleading Content and Conspiracy Narratives

While fabrication and misrepresentation of information are not necessarily illegal, they can cause harm and contribute to the spread of terrorist and violent extremist narratives. By exploiting vulnerabilities in the social media ecosystem, terrorists or violent extremists can spread false information and mislead people, generating conspiracy narratives that undermine confidence in government and reinforce extremist messaging.

Although it is very difficult to stop the flow of misinformation on social media completely, identifying the bots and fake accounts created to spread false information is a crucial first step in combating the large volumes of misinformation spread by terrorists. Researchers have found that tweets by bots tend to focus on very narrow topics, while tweets by humans tend to be more diverse. AI tools can thus be used to identify bots and curb the spread of misinformation to some extent.
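
The “narrow topics” observation suggests a simple heuristic: measure the diversity of an account’s posts and flag low-diversity accounts as bot-like. The Python sketch below uses Shannon entropy over topic labels; the topic labels, sample accounts, and cutoff value are hypothetical, since a real pipeline would first infer topics with a topic model or classifier and tune the threshold on labeled data.

```python
# A minimal sketch of the narrow-topic heuristic: accounts whose posts
# concentrate on few topics get low entropy and are flagged as bot-like.
import math
from collections import Counter

def topic_entropy(topics):
    """Shannon entropy (in bits) of an account's topic distribution."""
    counts = Counter(topics)
    total = len(topics)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical per-post topic labels for two accounts.
human = ["sports", "news", "food", "music", "news", "travel"]
bot = ["politics", "politics", "politics", "politics", "politics", "news"]

THRESHOLD = 1.0  # hypothetical cutoff; tuned on labeled data in practice
for name, posts in [("human", human), ("bot", bot)]:
    h = topic_entropy(posts)
    label = "bot-like" if h < THRESHOLD else "diverse"
    print(f"{name}: entropy={h:.2f} -> {label}")
```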

Strengthening Defenses Against Weaponized AI

Countries will have to strengthen their defenses against drone attacks and weaponized AI. However, anti-AI defenses can be extremely costly and challenging, and building effective ones will take time, leaving citizens exposed in the meantime.

In view of the international interdependencies and cross-border implications of many technological systems, a regional and international approach is essential to ensure terrorists cannot exploit gaps in regulatory requirements that might expose vulnerabilities in AI systems. We need resilient governance structures that can counteract and mitigate the adverse effects of the malicious use of AI.

Wrapping up

There has been much debate questioning the impact of new technologies, and it is necessary to put these debates into perspective to understand possible threats that could quite easily spiral out of control. We must remain alert to emerging threats so that we are not crippled by a lack of imagination. Although terrorist groups have traditionally employed low-technology weapons such as firearms, blades, and vehicles, terrorism is not a static threat. The entry barriers to AI will fall as its applications become more widespread and the skills required to employ it become less specialized. The questions we should ask, then, are whether terrorists will use AI technology and, if they do, what the international community should expect.

An important counter-question, however, is whether these scenarios are simply built on our fear of what-ifs. After all, such scenarios can stretch as far as our imaginations allow. We must be careful not to fall victim to fear-mongering by escalating a situation based on speculation rather than evidence. This is especially important with innovative technologies, since a layperson may find it difficult to judge whether the scenarios foreshadowed will actually come to pass.
