Facebook’s Anti-Terror Breaches


How Dual PR Disasters Rocked Facebook’s Moderator Department

Facebook’s moderator department is known as the invisible, omnipotent defender of the user from all things unwholesome. Moderators receive our reports when we flag something as inappropriate and have the ultimate say on whether or not it truly belongs on Facebook. The company is famous for its secrecy around the moderator department, which has fed various myths and legends around this relatively unknown, supposedly all-powerful role.

However, the last month has seen a number of devastating leaks affecting Facebook’s moderators. The first of these was an exposé by the Guardian into the day-to-day life of Facebook’s moderator workforce. What we all expected to see were ranks of dedicated, motivated individuals standing between us as Facebook users and the horrors of unmoderated or extreme content: beheadings, child abuse and violent attacks. What the Guardian found was decidedly different.

One moderator, speaking anonymously to the Guardian, described the job like this:

“There was literally nothing enjoyable about the job. You’d go into work at 9am every morning, turn on your computer and watch someone have their head cut off. Every day, every minute, that’s what you see. Heads being cut off.”

Other staff went on to describe the training and support they received as “absolutely not sufficient”, and said they were not given any “mandatory counselling”. The shocking nature of the material is clear from the initial interview stage: prospective moderators are shown images of child sexual abuse to test their resilience and whether or not they are suitable for the role.

Facebook state that they have many support and wellness procedures in place, both compulsory and optional, on par in style and quality with those in other workplaces that carry similar psychological stresses and pressures. Nevertheless, this was a damaging piece of publicity for Facebook’s most secretive department.

It was absolutely nothing, however, compared to the breach that occurred a few weeks later. Dozens of Facebook’s anti-terrorism moderators had their safety put in danger when their identities were accidentally revealed to the very people they were investigating.

Investigators first knew something was amiss when they began receiving friend requests on their personal Facebook accounts from high-profile terrorists and other dangerous criminals. This would be frightening for anyone, but even more so when you are the person actively working to foil the spread of terror and death.

The alarm was quickly raised, but the damage had already been done. The real-life identities of many moderators had been revealed, and the terrorists they were investigating were out for blood. Facebook focused closely on the individuals judged ‘most at risk’, treating their safety as its highest concern. At least one employee was forced to flee Ireland and hide in Eastern Europe for several months until the threat against him had died down.

In all, more than 1,000 employees across 22 departments were affected by the bug, which caused a moderator’s personal profile to appear as a notification in the activity log of any Facebook group whose admins that moderator had banned. If any admins remained, they could easily find the moderator’s identity in the group’s logs. Personal profiles – containing information on and images of the moderators, their families and their friends – were made clearly visible to people connected with some of the world’s most violent terrorist organisations.
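To make the failure mode concrete, here is a minimal, entirely hypothetical sketch in Python – not Facebook’s actual code, and the Account and Group names are invented purely for illustration. It models a group activity log that records the acting account on each moderation event: if moderators act from their personal profiles, any remaining admin who reads the log learns who they are.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    name: str
    is_personal: bool  # True if this is a real employee's own profile


@dataclass
class Group:
    name: str
    admins: list
    activity_log: list = field(default_factory=list)

    def ban_admin(self, actor: Account, banned: str) -> None:
        self.admins.remove(banned)
        # The flaw: the log entry stores the acting account itself, so the
        # actor's display name leaks into the group's activity log.
        self.activity_log.append({"action": "ban_admin",
                                  "target": banned,
                                  "actor": actor.name})


if __name__ == "__main__":
    group = Group(name="watched-group", admins=["admin_a", "admin_b"])

    moderator = Account(name="Jane Moderator (personal profile)",
                        is_personal=True)
    group.ban_admin(moderator, "admin_a")

    # A remaining admin reviewing the log now sees the moderator's
    # personal identity attached to the ban event.
    for event in group.activity_log:
        print(event)
```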

The bug was discovered in November 2016, though it had been exposing moderators’ details on actions dating back to August 2016.

Though the breach primarily affected employees in Dublin, Ireland, a large number of Facebook’s moderators are based in the Philippines and India – both countries with a substantial ISIS presence. Indeed, an ISIS-aligned group has currently taken over an entire city in the Philippines. That is not to say the Dublin moderators are in any less danger: one moderator said he lives in terror of opening his mail, fearing a pipe bomb in a parcel or some other act of vengeance.

The moderators are, quite understandably, angry that they were failed in this manner and feel insufficiently supported since the breach. They are also upset that they were made to log in to the moderator portal with their personal Facebook profiles rather than dedicated work profiles created for the role. In the words of one moderator: “They never warned us that something like this could happen”.

Since the breach, Facebook have announced they are trialling the use of admin accounts that are not linked to an employee’s personal profile.
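A minimal sketch of what such a setup might look like – again hypothetical, with invented names such as AdminAccount and log_ban, and not a description of Facebook’s real implementation – would attribute every moderation action to a dedicated, work-only identity, so nothing written to a group’s activity log reveals the employee behind it:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AdminAccount:
    """Work-only identity used for moderation; holds no personal details."""
    handle: str  # e.g. "moderation-team-042", never a real name


def log_ban(activity_log: list, actor: AdminAccount, banned_admin: str) -> None:
    # Only the dedicated handle is recorded, so any remaining group admins
    # reading the log learn nothing about who the moderator really is.
    activity_log.append({"action": "ban_admin",
                         "target": banned_admin,
                         "actor": actor.handle})


if __name__ == "__main__":
    log = []
    log_ban(log, AdminAccount(handle="moderation-team-042"), "admin_a")
    print(log)
```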

These recent breaches involving Facebook’s moderator teams have put the company’s internal workings into the spotlight and laid bare the difficulties many companies face in a fast-paced, constantly changing industry. Facebook is now rallying in response and putting particular effort into addressing the causes.

The bug itself has already been fixed, and significant effort is going into making sure something similar never happens again. Time will tell whether Facebook has done enough to improve the treatment of its moderator staff and offer them more support.