Could Human Moderators and Algorithms Ban Hitler from Facebook?

Facebook, Twitter, YouTube and other big social media platforms are the “greatest propaganda machine in history.” The verdict has recently been rendered by Sacha Baron Cohen, an English actor and comedian, and amplified by both the traditional media and users of the “greatest propaganda machine.”

An excerpt from Sacha Baron Cohen’s speech, uploaded on YouTube by Guardian News on November 23, 2019.

Would Hitler buy Facebook ads?

Cohen, who is mostly known for his satirical characters “Ali G,” “Borat” and “Brüno,” burst out against social media and Internet giants at the Anti-Defamation League summit in New York on November 21. Blaming major social media platforms for an upsurge in “hate crimes” and “murderous attacks on religious and ethnic minorities,” Cohen denounced the algorithms these platforms use for favouring content that promotes “hate, conspiracies and lies.”

The comedian was particularly angry with Facebook for not vetting the political ads the platform ran. He claimed, “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.” Cohen’s full remarks can be read here.

Cohen is best known for his satirical and often controversial fictitious characters, including Borat. Source: Giphy

The fix

So, how can we fix the increasingly powerful and pervasive social media? Cohen proposed a two-pronged solution: the US government should be more assertive in regulating social media sites, while the platforms themselves should police content more aggressively.

While some government regulation of tech giants is perhaps unavoidable, the second part of the strategy, content moderation by platforms, seems to be too ridden with technical and political issues to placate social media critics.

At the moment, social media sites appear to use two main mechanisms for user content moderation – human moderators and algorithms.

Imperfect humans

Over the last several years, social media companies have recruited tens of thousands of people around the world to screen and delete content that users flag as violent or offensive. How exactly these networks of human content moderators operate is shrouded in secrecy. What evidence is available, however, suggests that these moderators are undertrained and overstressed, and that their decisions are inconsistent, confusing and often illogical.

At a more fundamental level, users appear to have serious doubts about whether social media sites are capable of, and should be entrusted with, policing what they post and share. A recent study by the Pew Research Center suggests that while 66% of adults in the United States believe social media sites should remove offensive content from their platforms, only 31% trust these sites to determine what exactly constitutes offensive content.

Besides, human moderators inadvertently allow their own biases to impact their work, and there is evidence of all kinds of biases displayed by content moderators.

Source: Giphy

Even less perfect algorithms

Social media platforms use increasingly sophisticated artificial intelligence algorithms to detect and remove content that contains hate speech, violence, terrorist propaganda, nudity and spam.

The inherent problem with these algorithms, however, is that they lack contextual and situational awareness. In practice, this means that in determining whether certain content should be removed, algorithms cannot distinguish between nudity in Renaissance art and sexual activity, or between violence in a movie and in a user-uploaded video. As a result, the mess algorithms make of content moderation often requires human moderators to step in.
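The context problem is easy to demonstrate with a toy sketch. The filter below is a deliberately naive, hypothetical keyword matcher (real platforms use far more sophisticated machine-learning models, whose workings are not public), but it illustrates the same failure mode: without context, benign text gets flagged while genuinely threatening text slips through.

```python
# Toy illustration (hypothetical): a naive keyword-based content filter
# has no notion of context, so it flags benign posts and misses coded threats.

FLAGGED_TERMS = {"nude", "attack", "kill"}

def naive_filter(text: str) -> bool:
    """Return True if the post should be flagged for removal."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

# A post about Renaissance art is flagged...
print(naive_filter("Botticelli's nude figures defined Renaissance art"))  # True
# ...while a threatening post written in coded language passes untouched.
print(naive_filter("We know where you live. Expect a visit soon."))       # False
```

Both outcomes are errors, and each one ends up in a human moderator’s review queue.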

Besides, just like the human moderators that they are supposed to replace, algorithms have been shown to be biased against certain demographic groups.

Artificial intelligence algorithms so far remain ill-equipped for content moderation on social media. Source: Giphy

What’s next?

It looks like for the time being, social media companies will have to rely on a combination of human moderators and algorithms to vet and remove offensive content. After all, humans are trainable and algorithms can always be improved.

Do you think social media sites should be policing user content? Do you trust these sites in determining which content is offensive? I’d love to hear from you in the comments!

10 thoughts on “Could Human Moderators and Algorithms Ban Hitler from Facebook?”

  1. What an interesting read! It’s probably a big struggle for the social media platforms to decide whether a message is offensive or not, because every human can see a message from a different perspective. Of course, some messages are pure hate, but for some others it can be hard for a human to judge whether they are offensive or not.

    • I agree. Deciding what is offensive or not is inadvertently affected by our own biases and politics. I think this is one of the reasons why social media companies outsource the content management task to teams based abroad, where people don’t have a stake in political debates around issues that are hot in the west.

  2. Really liked your blog. I agree that those in power, and those who do not have the best interests of all of humanity at heart, would and do use social media to promote anti-Semitic and racist views. And the large tech giants who make millions of dollars will not really intervene, because their bottom line is more important. Yet, if we allow censorship, it could be a slippery slope into the loss of freedom of speech. I do believe, however, that there should be some policing by those responsible for developing and maintaining social media sites. I would hope that, as a society progressing into the 21st century, new tech companies will develop a policing algorithm that balances freedom of speech against the promotion of hate.

    • Thanks for your comment! I agree that some content moderation is unavoidable. There are clear cases when content fuels hate and violence. I also agree that there are often less clear-cut instances of content that is being moderated. In such instances, algorithms could indeed be useful, although the parameters that algorithms work with still have to be set by human beings.

  3. WOW.

    What a loaded question. Again, someone has already hit on the slippery slope argument.

    I disagree with Cohen’s claim that “the US government should be more assertive in regulating social media sites”. The American government, in no uncertain terms, should have no say over what I post to a Canadian website.

    What I can agree with, however, is the idea that governments as a whole need to take a more active role in monitoring what is said online. Once again, technology has advanced faster than we’ve been able to build rules around it.

    We already have laws surrounding hate speech and false advertising; it shouldn’t be too much to expect the powers that be to apply the same rules for hardcopy content to the digital arena.

    (Oh… and I absolutely loved Cohen’s speech.)

    • Thanks for your comment! That the US government should be more assertive in regulating social media is indeed very controversial. If they decide to do so, I see no reason why nations such as China, Russia, Iran or Turkey wouldn’t want to limit their populations’ access to western government-controlled platforms. After all, the content that is increasingly deemed offensive to audiences in North America and Western Europe is very different from what people in the rest of the world consider offensive.

  4. This was a very interesting read. Great blog, Alexander. So many questions are raised here that we don’t have the answers to. However, discussing potential answers to this problem is a start.
    I do believe one main issue is that there are no internationally agreed-upon policies or benchmarks to determine what is offensive or what is considered hate speech (or maybe there are, and I am missing them?). Hate speech here in Canada may not be considered ‘hateful’ in Spain, China or Turkey. So of course there would be inconsistencies on social media: the whole world is engaged on Facebook and Twitter. What I also find is that each social media platform has differing standards, so there is no consistency among social media companies either.
    Finally, I also think that social media companies don’t want their policies to be known, or anything to be defined, because that would limit people from venting their true feelings on their platforms. And that would be the beginning of the end for them. Unfortunately, ‘hate’ and negativity sell and get people commenting. This is what social media platforms want to continue.

    • Thank you for your comment! This is exactly what I wrote in my response to another comment under the blog: there are no universal parameters for determining what constitutes offensive content. There is international law that clearly bans incitement to violence against certain protected categories, but this law is helpless when it comes to content that increasingly large audiences in North America and Western Europe deem offensive.

  5. I agree with your conclusion. The algorithms can take out a bunch of offensive content, while the mods handle whatever is left. Though it doesn’t make it a piece of cake to deal with the possible hundreds, thousands (and even millions) of users on social media each day. Algorithms will definitely need to evolve past where they are now, especially when they block TOO much content and enjoyable videos and posts get blocked (or worse happens) because of some bug (it happened to a famous YouTuber’s live stream audience, Markiplier’s). The story you can look at from here:
    Anyways, nice post!

    • Tyler, thanks for sharing your thoughts! I did not know about the story you mentioned, the one where someone’s content was being removed from YouTube. I think I need to read more about YouTube, a platform I know little about.
