Could Human Moderators and Algorithms Ban Hitler from Facebook?

Facebook, Twitter, YouTube and other big social media platforms are the “greatest propaganda machine in history.” The verdict has recently been rendered by Sacha Baron Cohen, an English actor and comedian, and amplified by both the traditional media and users of the “greatest propaganda machine.”

An excerpt from Sacha Baron Cohen’s speech, uploaded on YouTube by Guardian News on November 23, 2019.

Would Hitler buy Facebook ads?

Cohen, mostly known for his satirical characters “Ali G,” “Borat” and “Brüno,” lashed out at social media and Internet giants at the Anti-Defamation League summit in New York on November 21. Blaming major social media platforms for an upsurge in “hate crimes” and “murderous attacks on religious and ethnic minorities,” Cohen denounced the algorithms these platforms use for favouring content that promotes “hate, conspiracies and lies.”

The comedian was particularly angry with Facebook for not vetting the political ads the platform ran. He claimed, “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.” Cohen’s full remarks can be read here.

Cohen is best known for his satirical and often controversial fictitious characters, including Borat. Source: Giphy

The fix

So, how can we fix increasingly powerful and pervasive social media platforms? Cohen proposed a two-pronged solution: the US government should be more assertive in regulating social media sites, while the platforms themselves should be more aggressive in policing content.

While some government regulation of tech giants is perhaps unavoidable, the second part of the strategy, content moderation by platforms, seems too riddled with technical and political issues to placate social media critics.

At the moment, social media sites appear to use two main mechanisms for user content moderation – human moderators and algorithms.

Imperfect humans

Over the last several years, social media companies have recruited tens of thousands of people around the world to screen and delete content that users flag as violent or offensive. How exactly these networks of human content moderators operate is shrouded in secrecy. What evidence is available, however, suggests that these moderators are undertrained and overstressed, and that the way they do their work is inconsistent, confusing and often illogical.

At a more fundamental level, users appear to have serious doubts about whether social media sites are capable of policing what they post and share, and whether they should be entrusted with that task at all. A recent study by the Pew Research Center suggests that while 66% of adults in the United States believe social media sites should remove offensive content from their platforms, only 31% trust these sites to determine what exactly constitutes offensive content.

Besides, human moderators inadvertently allow their own biases to affect their work, and content moderators have been shown to display all kinds of biases.


Even less perfect algorithms

Social media platforms use increasingly sophisticated artificial intelligence algorithms to detect and remove content that contains hate speech, violence, terrorist propaganda, nudity and spam.

The inherent problem with these algorithms, however, is that they lack contextual and situational awareness. In practice, this means that when determining whether certain content should be removed, algorithms cannot distinguish between nudity in Renaissance art and sexual activity, or between violence in a movie and violence in a user-uploaded video. As a result, the messes that algorithms make in content moderation often require human moderators to step in.
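
To make that limitation concrete, here is a deliberately naive sketch in Python. It is not any platform’s actual system – the FLAGGED_TERMS blocklist and the should_remove function are invented for illustration – but it shows why a filter that only looks at surface features cannot tell an art-history post from explicit material.

```python
# A deliberately naive, illustrative moderation filter. Real platforms use far
# more sophisticated machine-learning models, but the core weakness is similar:
# a model scoring surface features has no grasp of context.
FLAGGED_TERMS = {"nude", "nudity", "blood", "weapon"}  # hypothetical blocklist

def should_remove(text: str) -> bool:
    """Flag a post if it contains any blocklisted term, regardless of context."""
    words = {word.strip(".,!?\"'").lower() for word in text.split()}
    return not words.isdisjoint(FLAGGED_TERMS)

# Both posts trip the same rule: the filter cannot separate Renaissance art
# from the explicit content the rule was actually written for.
print(should_remove("Botticelli's Venus is perhaps the most famous nude in art"))  # True
print(should_remove("Posting nude photos of someone without consent is banned"))   # True
```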

Besides, just like the human moderators that they are supposed to replace, algorithms have been shown to be biased against certain demographic groups.

Artificial intelligence algorithms so far remain ill-equipped for content moderation on social media. Source: Giphy

What’s next?

It looks like, for the time being, social media companies will have to rely on a combination of human moderators and algorithms to vet and remove offensive content. After all, humans are trainable and algorithms can always be improved.

Do you think social media sites should be policing user content? Do you trust these sites to determine which content is offensive? I’d love to hear from you in the comments!

COM0015 – Blog #4: Out of the box and into the social media sphere

Everyone knows – well, hopefully everyone knows – that once you put something out there on social media, it’s out there forever. There’s no taking it back because a screenshot can live forever.

The person you portray on your various platforms is the person most of the world will see you as. Not just your posts, but your likes and dislikes, your music and movie preferences, your employer and alma mater. It’s all there to find.

And social media platforms and marketing experts have figured this out – and are taking advantage of it in a big way. I didn’t realize until recently just how targeted a marketer could be when building an advertising campaign on social media. But every bit of information you put online can be used to find you and try to sell you something.

Newspaper ad buys are steadily decreasing as companies turn online and to social media.

A Facebook ad campaign, for example, can target people living in certain areas, with a specific job title, who like pages A and B, or who like pages A and B but not C. The list goes on and on. Marketers can now reach the exact audience they want at a very affordable cost. In the old days (less than a decade ago), they would have had to spend huge amounts of money on an ad buy and hope that the right people saw it amongst the masses.
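
The boolean logic behind that kind of targeting is surprisingly simple. Here is a minimal Python sketch – the UserProfile fields, the page names and the matches_audience rule are all hypothetical, and Facebook’s real ad tools obviously operate at a vastly larger scale:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Hypothetical, simplified profile; real platforms hold far more signals.
    city: str
    job_title: str
    liked_pages: set[str] = field(default_factory=set)

def matches_audience(user: UserProfile) -> bool:
    """Toy targeting rule: Ottawa accountants who like pages A and B but not C."""
    return (
        user.city == "Ottawa"
        and user.job_title == "Accountant"
        and {"Page A", "Page B"} <= user.liked_pages
        and "Page C" not in user.liked_pages
    )

users = [
    UserProfile("Ottawa", "Accountant", {"Page A", "Page B"}),
    UserProfile("Ottawa", "Accountant", {"Page A", "Page B", "Page C"}),
    UserProfile("Toronto", "Teacher", {"Page A", "Page B"}),
]
print([matches_audience(u) for u in users])  # [True, False, False]
```

The filtering itself is trivial; the marketer’s advantage comes from the platform already holding these attributes for billions of users.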


Is this the new face of the polling industry?

But I’m also amazed at the other ways in which our social media information is being used. Polling firms that use artificial intelligence to scan social media platforms can produce far more accurate results than traditional telephone polls. These systems analyze a person’s feed and infer their opinions – some of which that person might never have the courage to tell a stranger on the other end of a call.
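
The post doesn’t reveal how these polling firms actually work, but the general idea – score each post in a feed and aggregate the scores into an overall leaning – can be sketched in a few lines of Python. The word lists and scoring below are invented for illustration; real systems rely on trained language models rather than hand-written lists.

```python
# Toy opinion estimator: score each post against tiny hypothetical word lists,
# then aggregate the scores into an overall leaning for the whole feed.
POSITIVE = {"support", "love", "great"}
NEGATIVE = {"oppose", "hate", "terrible"}

def post_score(post: str) -> int:
    words = [w.strip(".,!?").lower() for w in post.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def feed_leaning(feed: list[str]) -> str:
    total = sum(post_score(p) for p in feed)
    return "favourable" if total > 0 else "unfavourable" if total < 0 else "neutral"

feed = [
    "I love the new transit plan",
    "Great news for commuters!",
    "I oppose the downtown tolls",
]
print(feed_leaning(feed))  # favourable
```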

The information on social media says so much more about us than we even realize. It can tell a company what we’ll buy, or a pollster how we’ll vote. Organizations that figure this out sooner rather than later will produce more effective campaigns for much less money.

And maybe we’ll start to see Facebook ads for things we want to buy.

All photos courtesy of Stocksnap.io.