Could Human Moderators and Algorithms Ban Hitler from Facebook?

Facebook, Twitter, YouTube and other big social media platforms are the “greatest propaganda machine in history.” The verdict has recently been rendered by Sacha Baron Cohen, an English actor and comedian, and amplified by both the traditional media and users of the “greatest propaganda machine.”

An excerpt from Sacha Baron Cohen’s speech, uploaded on YouTube by Guardian News on November 23, 2019.

Would Hitler buy Facebook ads?

Cohen, who is mostly known for his satirical characters “Ali G,” “Borat” and “Brüno,” lashed out at social media and Internet giants at the Anti-Defamation League summit in New York on November 21. Blaming major social media platforms for an upsurge in “hate crimes” and “murderous attacks on religious and ethnic minorities,” Cohen denounced the algorithms these platforms use for favouring content that promotes “hate, conspiracies and lies.”

The comedian was particularly angry with Facebook for not vetting the political ads the platform ran. He claimed, “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’.” Cohen’s full remarks can be read here.

Cohen is best known for his satirical and often controversial fictitious characters, including Borat. Source: Giphy

The fix

So, how can we fix increasingly powerful and pervasive social media? Cohen proposed a two-pronged solution: the US government should be more assertive in regulating social media sites, while the platforms themselves should be more ferocious in policing content.

While some government regulation of tech giants is perhaps unavoidable, the second part of the strategy, content moderation by platforms, seems too riddled with technical and political issues to placate social media critics.

At the moment, social media sites appear to use two main mechanisms for user content moderation – human moderators and algorithms.

Imperfect humans

Over the last several years, social media companies have recruited tens of thousands of people around the world to screen and delete content that users flag as violent or offensive. How exactly these networks of human content moderators operate is shrouded in secrecy. What evidence is available, however, suggests that these moderators are undertrained and overstressed, and that the way they do their work is inconsistent, confusing and often illogical.

At a more fundamental level, users appear to have serious doubts about whether social media sites are capable of policing what users post and share, and whether they should be entrusted with that task. A recent study by the Pew Research Center suggests that while 66% of adults in the United States believe social media sites should remove offensive content from their platforms, only 31% trust these sites to determine what exactly constitutes offensive content.

Besides, human moderators inadvertently allow their own biases to affect their work, and there is evidence of content moderators displaying biases of all kinds.

Source: Giphy

Even less perfect algorithms

Social media platforms use increasingly sophisticated artificial intelligence algorithms to detect and remove content that contains hate speech, violence, terrorist propaganda, nudity and spam.

The inherent problem with these algorithms, however, is that they lack contextual and situational awareness. In practice, this means that in determining whether certain content should be removed, algorithms cannot distinguish between nudity in Renaissance art and sexual activity, or between violence in a movie and violence in a user-uploaded video. As a result, the messes algorithms make in content moderation often have to be cleaned up by human moderators.
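To see why context matters, here is a deliberately naive sketch I wrote purely for illustration; it is not how any real platform works. A filter that flags every post containing a blocklisted word treats a film review and a genuine threat in exactly the same way.

```python
# Toy illustration only: real moderation models are far more sophisticated,
# but the failure mode shown here, a complete lack of context, is the same in kind.

BLOCKLIST = {"kill", "attack", "shoot"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any blocklisted word, regardless of context."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & BLOCKLIST)

posts = [
    "The final battle scene, where the hero has to shoot his way out, is stunning.",
    "I will attack you on your way home tomorrow.",
]

for post in posts:
    print(flag_post(post), "-", post)
# Both posts are flagged: the filter cannot tell a movie review from a genuine threat.
```

Real systems replace the blocklist with machine-learned classifiers, but without an understanding of context they still struggle to tell depiction from intent.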

Besides, just like the human moderators that they are supposed to replace, algorithms have been shown to be biased against certain demographic groups.

Artificial intelligence algorithms so far remain ill-equipped for content moderation on social media. Source: Giphy

What’s next?

It looks like, for the time being, social media companies will have to rely on a combination of human moderators and algorithms to vet and remove offensive content. After all, humans are trainable and algorithms can always be improved.
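As a rough sketch of what such a combination might look like (my own assumption, not a description of any platform’s actual pipeline), an algorithm could score each post, act automatically only on the clearest cases and route everything ambiguous to a human reviewer.

```python
# Hypothetical sketch of a hybrid moderation pipeline; the thresholds and
# scores below are invented purely for illustration.

def moderate(model_score: float) -> str:
    """Decide what to do with a post, given a model's offensiveness score between 0 and 1."""
    if model_score >= 0.95:        # near-certain violation: remove automatically
        return "remove"
    if model_score <= 0.20:        # near-certain benign: publish without review
        return "publish"
    return "send to human review"  # anything ambiguous goes to a person

examples = [("obvious spam link farm", 0.98), ("cute cat photo", 0.05), ("dark political joke", 0.60)]
for text, score in examples:
    print(f"{text!r} -> {moderate(score)}")
```

The appeal of this division of labour is that the algorithm absorbs the sheer volume while humans handle the cases where context and judgement matter most.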

Do you think social media sites should be policing user content? Do you trust these sites to determine which content is offensive? I’d love to hear from you in the comments!

Broken Guitars and Fat Cats: Customer Service Lessons from the Airline Industry’s PR Disasters

Ask any expert about the impact social media has had on public relations, big brands or customer service. Chances are the answer you’ll get will include at least one reference to the public relations disaster that befell United Airlines in 2009.

United Breaks Guitars

The story began in early 2008 when Dave Carroll, a Canadian musician, had his pricey guitar damaged by United Airlines baggage handlers. After speaking to dozens of customer service representatives and failing to get the company to pay the $1,200 repair cost, Carroll wrote a song about his experience, recorded a video to go along with the song, and uploaded it to YouTube.

Following a year of unsuccessful attempts to get United Airlines to compensate him for a damaged guitar, Dave Carroll made this video in 2009. As of this writing, the video has been watched more than 19.5 million times.

The video, United Breaks Guitars, quickly went viral, and the story was picked up and amplified by mainstream media. Carroll gave hundreds of interviews, telling everyone who cared to listen about his experience. United Airlines’ executives tried to minimize the damage to the company’s reputation by finally agreeing to compensate the musician, but their efforts came too late. According to the BBC, the airline’s share price dropped 10 percent shortly after Carroll’s video went viral.

It is much harder to estimate the longer-term damage the incident has done to United Airlines’ brand. As for other big international brands, particularly in the airline industry, they must have learnt that in the age of social media, a negative customer experience can quickly escalate into a major PR disaster.

Aeroflot’s Fat-Cat Debacle

Well, the lesson appears to have been lost on Russia’s largest airline, Aeroflot. Over the last few weeks, the company has experienced a public relations fiasco comparable to that of United Airlines a decade ago.

On October 30, Mikhail Galin missed his connecting flight in Moscow after Aeroflot check-in staff said his cat, Victor, was too heavy to travel in the aircraft cabin with him. The airline insisted that pets heavier than eight kilograms had to travel in the luggage hold, and Galin’s furry friend was two kilograms over the limit. As the man later explained [ru] on Facebook, Victor had been distressed by the first leg of the journey, a four-hour flight from Riga, Latvia’s capital, to Moscow. Galin feared that an eight-hour flight in the cargo hold to Vladivostok, in Russia’s Far East, would severely traumatize the cat.

Galin came up with an ingenious plan to get Victor accepted into the aircraft cabin. He posted the cat’s picture on Facebook and asked his friends to help him find Victor’s look-alike in Moscow. As soon as a similar-looking but slimmer cat was found, Galin purchased a ticket to Vladivostok and had Aeroflot’s check-in staff weigh Victor’s look-alike and confirm that the pet was fit to travel in the cabin. Once Galin received his boarding pass, he parted with the impostor and its owner, and boarded the plane.

Galin shared this photograph of his overweight cat, Victor, on Facebook soon after pulling his now-famous cat-swap trick and boarding the plane.

After Galin and Victor made it safely to Vladivostok, their story was widely shared and discussed on social media. Given the special status cats enjoy on the Internet, as well as the fact that an estimated six out of 10 Russians own at least one cat, reactions to Galin’s cunning albeit legally shady scheme to ensure his cat travelled in comfort were overwhelmingly jubilant.

However, the mood was not shared by Aeroflot’s executives. Once the story was brought to their attention, they stripped Galin of his frequent flier status and cancelled all air miles that he had accumulated. The company issued a statement explaining its pet travel policies and accusing Galin of “deliberate violation” of these rules.

Aeroflot’s reaction sparked a huge social media outcry in Russia. Memes ridiculing Aeroflot’s rigid policies and supporting Galin took the country’s social media by storm (some of the best memes can be viewed here and here). Celebrities, athletes and politicians also weighed in, boycotting the company and sharing messages with hashtags that could be roughly translated as “pets are not luggage”, “let Victor fly” and “we are all the fat cat”. Other airlines scored PR points against Aeroflot by offering Galin a special “feline VIP” frequent flier status. The man was also bombarded with offers of free cat food, spa treatments for Victor, free stays in pet-friendly hotels and movie vouchers. The story became so big that even the country’s president was asked to comment on it.

This caricature created by Sergey Elkin was shared by Radio Svoboda. In the image, an Aeroflot plane is depicted as chasing and barking at a fat cat. Source: Radio Svoboda on Twitter.

Just like United Airlines did a decade ago, the Russian airline had a quick change of heart, apologizing for its rash response and offering [ru] Galin the company’s shares as compensation. And just as was the case with United Airlines, the move came too late to stop the wave of negative publicity and social media ridicule from causing serious damage to Aeroflot’s brand.

Lessons for Social Media and Customer Service Teams

The PR disasters experienced by United Airlines and Aeroflot will not prevent such mishaps from happening again. Baggage handlers will inevitably continue damaging baggage, while overworked and stressed check-in staff will continue alienating customers.

What these two fiascos should change, however, is the way big brands, both in the airline industry and elsewhere, handle customer feedback on social media.

Build relationships

In the age of social media, companies should focus on building relationships with their customers and personalizing these relationships wherever possible. This approach calls for a departure from standard operating procedures that require customer service staff to act within the rigid boundaries of company policies, rules and standards. In other words, companies need to ensure that when their staff communicate with customers, particularly those with grievances, they sound like humans capable of empathy and emotion rather than dispassionate bureaucrats. As Dr. Natalie Petouhoff, a customer service and social media expert at Forrester Research, suggests, brands should be aware of the “frustration customers feel with companies that act like monolithic monsters”.

This shift requires that companies invest in training their staff in positive customer service and empower them to make on-the-spot decisions that make customers happy, even if these decisions do not always align with policies and rules. Or, as John Deighton, a professor at Harvard Business School, puts it, brands “need to cultivate good judgement and free their employees to use it”.

Engage online

When customers share stories of poor customer service on social media, companies should listen to and engage in these conversations before they get out of hand. Such engagement should aim to turn a negative customer service experience into a positive one, while ensuring that the transformation is interesting enough for social media audiences to tune in.

For instance, management expert Bart Perkins suggests that instead of trying to buy off Dave Carroll after his video went viral, United Airlines could have mitigated the impact of the ensuing PR disaster, while also scoring some positive publicity points, by employing the same tools that Carroll had used, namely creativity and humour. They could, for example, have responded with a funny video of their own. They could also have organized a contest for the best sung responses to Carroll and shared the winning songs online. Perkins urges companies to remember that when a story that can potentially affect their brand is unfolding online, “by choosing not to engage, they are letting the opponent win all the debate points.”

Focus on the positive

Over the longer term, companies should focus proactively on creating positive customer experiences and on promoting positive offline experiences online. Negative stories are less likely to develop into PR calamities when they involve companies that are generally known to deliver good service. Besides, as customer experience expert Blake Morgan argues, companies that focus on the positive in turn encourage their customers to do the same.

I understand that while building relationships, engaging online and focusing on the positive should give companies a good start, these steps alone are not enough to promote their brands and address negative publicity online. What else should companies do to adapt their customer service and public relations to the realities of a world increasingly saturated with social media? Do you know of any companies that have successfully completed this transformation?

Do Algorithms and Echo Chambers Make Us Nasty?

I’ve recently read an interesting book, Ten Arguments for Deleting Your Social Media Accounts Right Now. Written by Jaron Lanier, who only a decade ago was regarded as the “Silicon Valley digital-guru rock star,” the book presents a number of powerful arguments for quitting social media platforms like Twitter and Facebook. While most of Lanier’s arguments sound too familiar to raise many eyebrows, he offers a novel and illuminating analysis of the heavy toll that social media is taking on political debate and political activism.

Jaron Lanier talks about his book, Ten Arguments for Deleting Your Social Media Accounts Right Now

Algorithms favour assholes

Lanier suggests that a strong trend towards negativity and polarization is hard-wired into the algorithms that make social media platforms so addictive. It is hard to disagree with this take if you follow political conversations on Twitter, where particularly hateful and obnoxious posts tend to attract the most attention. As users flock to comment on and register their outrage about the nastiest posts, conversations gravitate towards the most extreme viewpoints.

Politicians and activists of all stripes adapt to this algorithm-dictated, outrage-is-everything pattern by reframing their positions on controversial issues as Twitter-style statements in which there is no place for nuance. Bot and troll armies operated by malicious actors then drive the polarization even further by spreading misinformation. Social media users become increasingly confined to and influenced by opinions within their social media “echo chambers”, and we end up losing our ability to see nuance and to empathize with people outside those chambers. Or, in the words of Lanier, social media algorithms turn users into “assholes” and reward those who behave like ones.
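To make Lanier’s claim concrete, here is a toy model of engagement-based ranking. It is my own simplification, not a description of Twitter’s or Facebook’s actual algorithms: if a feed simply sorts posts by predicted engagement, and outrage reliably generates clicks, replies and shares, the angriest post rises to the top.

```python
# Toy model of engagement-based ranking; the posts and numbers are made up for illustration.
posts = [
    {"text": "A nuanced take that weighs both sides",      "predicted_engagement": 12},
    {"text": "A cute dog picture",                          "predicted_engagement": 40},
    {"text": "OUTRAGEOUS lie by the other side, share it!", "predicted_engagement": 950},
]

# A feed that optimizes purely for engagement puts the most provocative post on top.
for post in sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True):
    print(post["predicted_engagement"], post["text"])
```

Nothing in this tiny model is malicious in itself; the nastiest post wins simply because it is the most engaging, which is precisely Lanier’s point.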

Source: Giphy

Echo chambers?

But how exactly do otherwise nice individuals, who greet their neighbours and support co-workers’ charity drives in their daily lives, turn into “assholes” when discussing politics online? What Lanier’s otherwise very informative book leaves unclear is the mechanism that turns social media users into nasty human beings who troll other users and share offensive content.

The book offers only a partial explanation by suggesting that platform algorithms reward hateful and polarizing content. Many other authors, scholars and journalists have argued that the way social media platforms organize users into communities inevitably creates “echo chambers”, which solidify and reproduce particular political opinions to the point where users become unwilling to consider, or even tolerate, opposing or more nuanced opinions. This is the view I used to gravitate towards, particularly after realizing that the list of people I followed on Twitter looked surprisingly similar to the list of people I agreed with.

Source: Giphy

The key assumption underlying the “echo chamber” argument is that long-lasting exposure to certain political views, combined with insulation from opposing views, drives political polarization. This assumption, however, has been questioned by a recent study conducted by a group of scholars of American politics. The authors surveyed a substantial group of Democratic and Republican Twitter users and had them follow accounts expressing opposing political views. When the respondents were re-surveyed some time later, the researchers found that instead of bringing the users closer together, exposure to opposing political views actually increased their polarization.

While this study challenges the core assumption behind the “echo chamber” argument, it does not leave me any closer to understanding what exactly causes otherwise polite and well-behaved individuals to post and share insulting political content online.

Do you have an explanation? Have you read anything interesting that could help me find an explanation? If so, please let me know in the comments section below.