Robots have been expected to become an integral part of our society for years. Now robotics offers an avenue to fight “hate speech” (Leetaru, 2017). Forbes contributor Kalev Leetaru writes that deep learning bots may be a key to eliminating hate speech on social media. These bots have grown more intelligent over the years, to the point of high sophistication in their ability to analyze human text and imagery (Leetaru, 2017). The idea is to release them in large numbers and have them report, counter, and overwhelm writers of hate speech online (Leetaru, 2017). They would do this by identifying specific words, phrases, or meanings and then generating a report of abusive behavior on social platforms (Leetaru, 2017). In doing so, they would gather specific data on account names, timestamps, and response rates so as to eliminate any bias (Leetaru, 2017). The data would then be uploaded to the platforms to notify them, and the bots would also write responses back to the authors to encourage “self-censorship” (Leetaru, 2017).
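To make the reporting pipeline concrete, here is a minimal sketch of the detect-and-report step in Python. This is purely illustrative: the article describes sophisticated deep learning bots, while this toy version stands in for the trained model with a simple phrase blocklist. All names here (`FLAGGED_PHRASES`, `AbuseReport`, `scan_post`) are hypothetical, not part of any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional

# Hypothetical blocklist standing in for a deep learning classifier.
FLAGGED_PHRASES = {"example slur", "another insult"}  # placeholder terms


@dataclass
class AbuseReport:
    """Structured data the bot would upload to the platform."""
    account: str          # account name of the author
    timestamp: str        # when the post was scanned (UTC)
    matched_phrases: List[str]


def scan_post(account: str, text: str) -> Optional[AbuseReport]:
    """Return a report if the post contains a flagged phrase, else None."""
    matches = [p for p in FLAGGED_PHRASES if p in text.lower()]
    if not matches:
        return None
    return AbuseReport(
        account=account,
        timestamp=datetime.now(timezone.utc).isoformat(),
        matched_phrases=matches,
    )
```

A real system would replace the blocklist with a learned model and add the response-writing step, but the output shape (account name plus timestamp plus evidence) matches the kind of bias-free, data-driven report the article describes.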
When I read this article, I originally thought of the movies; the idea of robots fighting off online vandals seems unrealistic. But then I thought about how far we have come in robotics and engineering. I like the idea of having a coordinated, completely cohesive fleet of robots monitoring platform activity without any inherent bias in their policy. The article touches on the fact that policies regarding behavior on platforms are not always properly followed because they rely on some form of human discretion, even within the coding algorithms embedded in the platforms (Leetaru, 2017). The reality that these bots can achieve well-written human speech is also amazing; they could shut down an argument on behalf of victims, all while gathering data on the perpetrator. They can help companies and platforms monitor behavior and respond faster and smarter to online threats and hate.
On the other hand, these bots could also swing too far to the other extreme, cutting out free speech. Leetaru notes this issue as well; it is a worrying byproduct of invasive technologies of this nature (Leetaru, 2017). I think the key here is to have strict control over the testing period, and, as with most procedures, companies that use the bots need to set guidelines and policies based on a collective thought process rather than on one person or a small group of people. The only remaining issue is the question of control: should governments have the right to oversee such a large-scale program, or should individual companies pay to have control?
What are your thoughts on the use of robots to defend the public’s ideals on social media?
You can read the full article here!
Leetaru, K. (2017, February 4). Fighting Social Media Hate Speech With AI-Powered Bots. Forbes. Retrieved from https://www.forbes.com/sites/kalevleetaru/2017/02/04/fighting-social-media-hate-speech-with-ai-powered-bots/print/