Of all the European nations that have fallen victim to terrorism, France has arguably been hit the hardest and the most frequently. It is not something many of us like to talk about, but it is a problem that cannot be ignored. Extremist attacks are on the rise, both in frequency and in the casualties they cause, and we seem relatively powerless to stop them.
In the past few years alone, France has suffered dozens of attacks, including the killing of several Jewish children and French soldiers in 2012. Twin vehicle attacks in Dijon and Nantes in 2014 injured 21 people. In 2015 came the infamous Charlie Hebdo attack and the Bataclan attacks; the latter killed a staggering 130 people. A vehicle attack in Nice in 2016 killed 86 people celebrating Bastille Day, and several further attacks followed throughout 2017. France is not alone: the United Kingdom, Germany and Spain are also chief targets.
AI to detect extreme content
Modern technology offers us several different methods of stopping terrorist attacks on our soil, and one of the most important of these is prevention. Leading social media brands such as Facebook, Twitter, Instagram and YouTube have now enacted strict rules regarding ‘unacceptable content’. Each employs a number of moderators to review videos or content reported to them as extreme in nature. However, there are also AI systems working to locate and eliminate extremist propaganda from social media networks. This prevents vulnerable people from being radicalised, and significantly shortens the reach of terrorist organisations. Many of these systems use image matching to achieve this goal. When a new photo or video is uploaded, it is compared against a databank of known terrorist videos and images. If it matches, the upload is denied and a manual investigation by a human can take place. If it does not match, the video or photo is posted and hosted without issue. There is, of course, a loophole: image matching cannot defend against brand-new propaganda. Still, it does help prevent the re-posting of known content.
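The matching step described above can be sketched in a few lines of Python. This is a deliberately minimal toy: it fingerprints an upload with an exact cryptographic hash, whereas the real platforms use perceptual hashing (so a resized or re-encoded copy still matches). The databank contents and function names here are hypothetical.

```python
import hashlib

# Hypothetical databank of fingerprints of known extremist images
KNOWN_HASHES = set()

def fingerprint(image_bytes: bytes) -> str:
    """Fingerprint the raw bytes of an upload. (Real systems use perceptual
    hashes, which also survive resizing and re-encoding.)"""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> str:
    """Compare a new upload against the databank and return a decision."""
    if fingerprint(image_bytes) in KNOWN_HASHES:
        # Match: deny the upload and queue it for human review
        return "blocked: queued for human review"
    # No match: the content is posted and hosted without issue
    return "allowed"
```

Note the loophole the article mentions falls straight out of the code: content whose fingerprint has never been added to the databank is always allowed, so brand-new propaganda passes through until a human adds it.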
Facebook also uses an altogether more proactive AI that ‘reads’ posts and uses keywords to decide whether or not they promote hate speech or extreme content. There is a fine line between discussing terrorism and promoting it, but the AI can apparently tell the difference quite well.
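To make the discuss-versus-promote distinction concrete, here is a toy keyword screen. Facebook's actual classifier is a machine-learning model, not a keyword list, and every cue phrase below is a made-up placeholder; the sketch only illustrates the idea of weighing promotional language against discussion language.

```python
# Hypothetical cue lists (illustrative only; real systems learn these
# signals from data rather than hard-coding phrases)
PROMOTION_CUES = {"join us", "attack the"}
DISCUSSION_CUES = {"news", "condemn", "report"}

def flag_post(text: str) -> bool:
    """Flag a post as likely *promoting* rather than discussing extremism."""
    t = text.lower()
    promote = sum(cue in t for cue in PROMOTION_CUES)
    discuss = sum(cue in t for cue in DISCUSSION_CUES)
    # Promote only outweighs discuss when promotional cues dominate
    return promote > discuss
```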
Terror vs Tech
Technology can also be used to disrupt and hamper terrorist attacks already in progress. Believe it or not, this has already happened by accident, during the Christmas market attack in Berlin in 2016. Tunisian-born ISIS jihadi Anis Amri smashed a large haulage truck into a packed market, aiming to cause as much damage as possible to structures and people. But he was not a professional truck driver; he had only hijacked the vehicle, and he did not know that such trucks are fitted with automatic braking systems. If the truck detects an obstacle, it alerts the driver, and if the driver gives no response, the truck automatically slams on its brakes. This brought the lorry to a stop far sooner than the attacker had intended, limiting the death toll to 12 instead of a far higher figure. For comparison, the older truck used in the Nice attack had no automatic braking system, and that attacker killed 86 people. The system was designed for drivers incapacitated by a stroke or some other sudden emergency, not for terrorism, but when the time came it took on that role admirably, if not on purpose.
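The detect-alert-brake sequence described above can be sketched as a tiny decision function. The response window and state names are hypothetical; production automatic emergency braking systems are vastly more sophisticated, but the control flow is the same: obstacle detected, driver alerted, and brakes applied only if the driver stays silent.

```python
def aeb_step(obstacle_detected: bool, driver_responded: bool,
             seconds_since_alert: float, response_window: float = 1.5) -> str:
    """One step of a toy automatic-emergency-braking decision.

    response_window is a made-up figure for illustration.
    """
    if not obstacle_detected:
        return "cruise"                # nothing ahead, carry on
    if driver_responded:
        return "driver in control"     # driver reacted to the alert
    if seconds_since_alert < response_window:
        return "alert driver"          # warn first, give the driver a chance
    return "emergency brake"           # no response: slam on the brakes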
Due to the significant cost of acquiring such a large vehicle legally, terrorists are increasingly looking to hijack delivery trucks and haulage lorries. Because of the rise of these attacks, many such vehicles are now fitted with remote kill-switches, meaning the driver or the company’s HQ can remotely shut off the engine of a hijacked vehicle and render it useless to the hijackers. This could save a great many lives.
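The kill-switch idea reduces to a very small amount of logic once the vehicle is networked. The sketch below is purely illustrative: the class names, vehicle IDs and the in-memory "fleet" stand in for whatever telematics system a real haulage company runs.

```python
class Truck:
    """A hypothetical connected truck with a remotely controllable engine."""
    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id
        self.engine_enabled = True

class FleetHQ:
    """Toy fleet headquarters that can cut a hijacked vehicle's engine."""
    def __init__(self):
        self.fleet = {}

    def register(self, truck: Truck) -> None:
        self.fleet[truck.vehicle_id] = truck

    def report_hijacked(self, vehicle_id: str) -> None:
        """Driver or HQ flags the vehicle; the engine is shut off remotely,
        rendering it useless to the hijackers."""
        self.fleet[vehicle_id].engine_enabled = False
```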
Tech help after the attack
Sometimes, unfortunately, terror attacks cannot be prevented. Once they take place, however, the focus turns to finding those responsible and preventing them from causing further harm. It is also important for investigators to establish exactly how an attack happened so that they can prevent it from happening again, and to trace where the attackers acquired the tools to conduct such an act, whichever method they chose. With so much video surveillance, and with almost every member of the public carrying a smartphone, most modern terror attacks are recorded: hundreds of hours of footage from hundreds or thousands of witnesses. Combing through that footage manually takes a significant amount of time, and this is where artificial intelligence can step in to speed up the investigation when time is critical.
Software like BriefCam can analyse vast quantities of video data in a relatively short space of time. Investigators can very quickly find what they are looking for using only one person instead of dozens of officers; the search is swift and limits human error.
AI making the terrorist’s job a whole lot harder
There are literally hundreds of ways technology is being used to fight back against terrorism, and we can expect to see far more investment to amplify their effects. Technology has always been about locating, quantifying and then solving a need or problem, and terrorism is just the latest in a long line of them. As terrorism reaches out to affect us all, the tech industry is working hard to introduce new methods to prevent terrorism, limit terrorist attacks and bring those responsible to justice. One thing is for certain: artificial intelligence is making the terrorist’s job a whole lot harder. And that’s good news for everyone.