Could Human Moderators and Algorithms Ban Hitler from Facebook?

Facebook, Twitter, YouTube and other big social media platforms are the “greatest propaganda machine in history.” That verdict was recently delivered by Sacha Baron Cohen, an English actor and comedian, and amplified both by the traditional media and by users of the “greatest propaganda machine” itself.

An excerpt from Sacha Baron Cohen’s speech, uploaded on YouTube by Guardian News on November 23, 2019.

Would Hitler buy Facebook ads?

Cohen, known primarily for his satirical characters “Ali G,” “Borat” and “Brüno,” lashed out at social media and Internet giants at the Anti-Defamation League summit in New York on November 21. Blaming major social media platforms for an upsurge in “hate crimes” and “murderous attacks on religious and ethnic minorities,” Cohen denounced the algorithms these platforms use for favouring content that promotes “hate, conspiracies and lies.”

The comedian was particularly angry with Facebook for not vetting the political ads the platform ran. He claimed, “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.” Cohen’s full remarks can be read here.

Cohen is best known for his satirical and often controversial fictitious characters, including Borat. Source: Giphy

The fix

So, how do we fix increasingly powerful and pervasive social media? Cohen proposed a two-pronged solution: the US government should be more assertive in regulating social media sites, while the platforms themselves should be more aggressive in policing content.

While some government regulation of tech giants is perhaps unavoidable, the second part of the strategy, content moderation by the platforms themselves, seems too riddled with technical and political issues to placate social media critics.

At the moment, social media sites appear to use two main mechanisms for user content moderation – human moderators and algorithms.

Imperfect humans

Over the last several years, social media companies have recruited tens of thousands of people around the world to screen and delete content that users flag as violent or offensive. How exactly these networks of human content moderators operate is shrouded in secrecy. What evidence is available, however, suggests that these moderators are undertrained and overstressed, and that the way they do their work is inconsistent, confusing and often illogical.

At a more fundamental level, users appear to have serious doubts about whether social media sites are capable of policing what they post and share, and whether they should be entrusted to do so. A recent study by the Pew Research Center suggests that while 66% of adults in the United States believe social media sites should remove offensive content from their platforms, only 31% trust these sites to determine what exactly constitutes offensive content.

Besides, human moderators inadvertently allow their own biases to affect their decisions, and there is evidence of all kinds of bias on display in content moderation.

Source: Giphy

Even less perfect algorithms

Social media platforms use increasingly sophisticated artificial intelligence algorithms to detect and remove content that contains hate speech, violence, terrorist propaganda, nudity and spam.

The inherent problem with these algorithms, however, is that they lack contextual and situational awareness. In practice, this means that in deciding whether certain content should be removed, algorithms cannot distinguish between nudity in Renaissance art and sexual activity, or between violence in a movie and violence in a user-uploaded video. As a result, the messes algorithms make in content moderation often require human moderators to step in and clean up.
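To see why context matters, consider a deliberately simplified, hypothetical sketch of context-blind filtering (nothing like the platforms’ actual systems): a keyword filter that flags posts by matching words. The term list and the `naive_moderate` function are invented for illustration only; the point is that the rule sees words, not meaning, so it removes a harmless film review while letting genuinely hateful phrasing through.

```python
# Hypothetical, minimal sketch of a context-blind keyword filter.
# FLAGGED_TERMS and naive_moderate are invented for illustration.
FLAGGED_TERMS = {"attack", "kill", "bomb"}

def naive_moderate(post: str) -> str:
    """Return 'REMOVE' if the post contains any flagged term, else 'KEEP'."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "REMOVE" if words & FLAGGED_TERMS else "KEEP"

examples = [
    "I will attack you on your way home",                         # real threat    -> REMOVE
    "The film's opening attack on the beach is brilliantly shot", # film review    -> REMOVE (false positive)
    "People like you should simply disappear",                    # hateful intent -> KEEP   (false negative)
]
for post in examples:
    print(f"{naive_moderate(post):6s} | {post}")
```

Real systems use trained models rather than word lists, but the underlying weakness is the same: without an understanding of context, the signal they key on is only loosely correlated with what the content actually means.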

Besides, just like the human moderators that they are supposed to replace, algorithms have been shown to be biased against certain demographic groups.

Artificial intelligence algorithms so far remain ill-equipped for content moderation on social media. Source: Giphy

What’s next?

It looks like, for the time being, social media companies will have to rely on a combination of human moderators and algorithms to vet and remove offensive content. After all, humans are trainable, and algorithms can always be improved.
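One plausible shape for that combination, sketched below purely as an assumption on my part rather than any platform’s documented pipeline, is a triage: an automated classifier handles the clear-cut cases, and anything it is unsure about goes to a human moderator. The `score` stub and the thresholds are invented for illustration.

```python
# Hypothetical sketch of a hybrid human + algorithm moderation triage.
def score(post: str) -> float:
    """Stand-in for a trained model estimating P(post violates policy)."""
    text = post.lower()
    if "attack" in text:
        return 0.95   # model is confident the post is violating
    if "fight" in text:
        return 0.50   # model is unsure
    return 0.05       # model is confident the post is fine

def triage(post: str, remove_above: float = 0.85, keep_below: float = 0.15) -> str:
    """Automate the confident cases; route the ambiguous middle to a human."""
    p = score(post)
    if p >= remove_above:
        return "auto-remove"
    if p <= keep_below:
        return "auto-keep"
    return "send to human moderator"

print(triage("I will attack you"))            # auto-remove
print(triage("That boxing fight was great"))  # send to human moderator
print(triage("Lovely weather today"))         # auto-keep
```

The appeal of this division of labour is exactly the one the paragraph above points to: the algorithm scales, while the human supplies the contextual judgement the algorithm lacks.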

Do you think social media sites should be policing user content? Do you trust these sites to determine which content is offensive? I’d love to hear from you in the comments!