The Challenge of Online Abuse Detection: Breaking the Back of the Beast

Thilini Wijesiriwardene

How many conversations did you have this week? How many of them were face-to-face, with someone across from you, or over telephone/VoIP? And, most importantly, how many of you were caught up in the reply-comment loop of social media? With the dawn of the internet and social media, we have acquired the ability to maintain conversations with vast audiences and to put forth opinions for the whole world to see. Twitter is a social networking site founded in 2006 by Jack Dorsey. From his first tweet that year, Twitter has come a long way. Today it has become many people's go-to platform for scanning news and gossip online (isn't this what most of us do nowadays; we prefer to scan a few sentences rather than read a lengthier article).

Apart from this, Twitter has given all of us direct access to the lives of many figures deemed influential and famous in a myriad of fields, ranging from entertainment to religion to politics. You can know what your president thinks about a pressing issue and how other vital figures respond to his or her outlook. This may seem like your personalized grapevine. Our opinions can be less censored and more relaxed. We could, if we wished, address thousands of followers instantly. For famous figures, it is an appealing route to bypass the traditional media and reach their fans. As delightful as it seems, this path of direct access to one's perspectives can sometimes result in disaster rather than benefit. Am I not entitled to rant on Twitter? Can't I express my honest opinion on "my" Twitter account? Aren't we allowed to have disagreements and different views? All of these questions point to the genuinely peculiar and dangerous nature of abuse and harassment on Twitter.

Be it a celebrity or an ordinary citizen, harassment and abuse on Twitter have reached and affected the lives of many users. Why is Twitter still struggling to bring this issue under control? The simple answer is that we are incredibly nuanced in our expressions; in this time of technology and connectivity, we can create and understand new cultural contexts more than ever before. Because of this ability of ours, filtering Twitter streams against lists of offensive or profane words to identify hate and abuse is both ineffective and counterproductive. Censoring tweets based on the use of words classified as offensive or harassing does not necessarily cut it, and it can anger users who use such words in their regular conversations.

One of the most recent examples depicting the difficulty of online abuse detection sprang from a controversial tweet by Roseanne Barr, a renowned American comedian. She posted a racially charged tweet about Valerie Jarrett, a former senior advisor to President Barack Obama, who is an African-American woman. Barr's tweet did not include any offensive or profane words per se, but it was instantly picked up by followers and interested parties as racist and eventually got her sitcom canceled by ABC. Barr's tweet read:

“if the muslim brotherhood & planet of the apes had a baby=vj”

As humans, we connect millions of concepts with each other in our minds, and these are colored with a multitude of cultural contexts. We are extremely quick to understand the connotations of discourse. That is how this tweet was recognized as racist by humans. Can the same filtering be achieved by algorithms? As of today, no algorithm comes close to the human brain in understanding the overtones of human communication, and no algorithm possesses the context, including socio-cultural, linguistic, and other dimensions, that humans are equipped with. But it gives us hope to know that we may be able to make algorithms more "aware" of context. What if there were knowledge graphs that connect the concept of the Planet of the Apes with apes, and apes with the concept of racial stereotyping of African Americans? That would make algorithms less inept at "understanding" context. If Valerie Jarrett were an entity in the same graph, with links pointing to the political nature of the position she held, it would also help an algorithm put things in perspective.
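To make this idea concrete, here is a minimal sketch of how such a lookup might work. The graph, its node names, and its edges are invented for illustration only; a real system would draw on a large curated knowledge resource rather than a hand-written dictionary.

```python
from collections import deque

# A toy knowledge graph as an adjacency list. These nodes and edges are
# illustrative assumptions, not a real curated resource.
KNOWLEDGE_GRAPH = {
    "planet of the apes": ["apes", "film"],
    "apes": ["racial stereotyping of African Americans"],
    "valerie jarrett": ["senior advisor to the president"],
    "senior advisor to the president": ["politics"],
}

SENSITIVE_CONCEPTS = {"racial stereotyping of African Americans"}

def reaches_sensitive_concept(start, graph, sensitive, max_hops=3):
    """Breadth-first search: does `start` link to a sensitive concept
    within `max_hops` edges?"""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, hops = queue.popleft()
        if node in sensitive:
            return True
        if hops == max_hops:
            continue
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops + 1))
    return False

# A surface reading of the tweet finds no profanity, but the graph
# connects the mentioned concept to a racially charged stereotype.
print(reaches_sensitive_concept("planet of the apes",
                                KNOWLEDGE_GRAPH, SENSITIVE_CONCEPTS))  # True
```

The point of the sketch is only that a few hops through encoded world knowledge can surface an association that no profanity filter would ever see.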

As hopeful as it seems, it would be naive to believe that encoding world knowledge in knowledge graphs and similar structures is the only answer to the question of providing context to algorithms. There is rich and often overlooked content embedded in the metadata of online conversations. What is this content that an algorithm might glean context from? When a message is posted on Twitter, the audience has many ways to react to it. They can reply to it and/or retweet it (as it is, or with minor changes of their own). This reaction-oriented behavior of the audience can be leveraged to identify "controversial" content. In this case, Roseanne Barr's tweet might be picked up by an algorithm if it were "taught" to monitor for controversial content. From the rate at which replies pour in within short time intervals, and from the number of retweets with minor changes (mostly disapproving of the original racially charged tweet), an algorithm could "learn" that the original tweet is controversial and flag it for more thorough inspection, perhaps by a human.
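One way to sketch this reaction-based heuristic is below. The field names and thresholds are assumptions made for illustration; they do not come from any real Twitter API, and in practice the thresholds would be tuned on real data.

```python
from dataclasses import dataclass

@dataclass
class TweetActivity:
    # Counts observed in a short window after posting; these field names
    # are illustrative, not part of any real Twitter API response.
    replies_per_minute: float
    retweets: int
    modified_retweets: int  # retweets where the user altered/quoted the text

def looks_controversial(activity, reply_rate_threshold=5.0, modified_share=0.5):
    """Flag a tweet for human review when replies pour in unusually fast
    and a large share of retweets modify (often to disapprove of) the
    original text. Both thresholds are assumed values to be tuned."""
    if activity.replies_per_minute < reply_rate_threshold:
        return False
    if activity.retweets == 0:
        return False
    return activity.modified_retweets / activity.retweets >= modified_share

quiet = TweetActivity(replies_per_minute=0.2, retweets=10, modified_retweets=1)
storm = TweetActivity(replies_per_minute=40.0, retweets=200, modified_retweets=150)
print(looks_controversial(quiet))  # False
print(looks_controversial(storm))  # True
```

A rule this simple would obviously flag benign viral tweets too; the intent is only to show that reaction metadata alone, with no text analysis at all, already carries a usable signal.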

The approach above also emphasizes another critical fact: tweets rarely stand by themselves. If you evaluate the abusive or harassing nature of tweets by looking only at single tweets, and not at the entire conversation (when one is available), the evaluation will be far less effective. Looking at the conversation can reveal the type of relationship between the parties engaged in it; friends may use certain flagged words in a playful manner with no intention to harass, while strangers using similar words might actually intend harassment. The nature of the conversation, whether it is a heated argument or a casual exchange, also provides context. It is therefore fruitful to look at entire conversations, not at individual tweets taken out of their context.
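A crude sketch of conversation-level scoring follows. Everything here is assumed for illustration: the flagged word list, the down-weighting factor for friends, and the idea that friendship could be read off the follower graph (for example, mutual follows) are all hypothetical choices, not a validated model.

```python
import string

# Illustrative flagged words; a real list would be far larger and curated.
FLAGGED = {"idiot", "loser"}

def conversation_score(conversation, flagged_words, are_friends):
    """Score a whole conversation instead of isolated tweets.
    `conversation` is a list of tweet texts exchanged between two users.
    `are_friends` would come from the follower graph (e.g. mutual
    follows) -- an assumption for this sketch. Flagged words between
    friends are weighted down, since they are often playful."""
    weight = 0.3 if are_friends else 1.0
    score = 0.0
    for text in conversation:
        words = (w.strip(string.punctuation) for w in text.lower().split())
        score += weight * sum(1 for w in words if w in flagged_words)
    # A long back-and-forth (many turns) nudges the score upward,
    # as a rough proxy for a heated argument.
    score *= 1 + 0.1 * max(0, len(conversation) - 2)
    return score

between_friends = ["you absolute idiot :D", "haha loser, see you tonight"]
between_strangers = ["you absolute idiot", "stay off my feed, loser"]
print(conversation_score(between_friends, FLAGGED, are_friends=True) <
      conversation_score(between_strangers, FLAGGED, are_friends=False))  # True
```

The same two flagged words thus yield a much lower score between friends than between strangers, which is exactly the distinction that single-tweet filtering cannot make.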

As social creatures, in every social situation, both online and offline, we tend to release specific markers that can be picked up and analyzed. It is essential to identify and study the markers released in online interactions, because today our online decisions, experiences, and secrets increasingly influence our offline lives: online advertisements have been shown to change our votes offline, online harassment can lead to dire results offline, and so on. Believing that an algorithm will one day "understand" the undertones of communication as humans do, and be equipped with context as humans are, may seem far-fetched. But as researchers, it is worthwhile and exciting to try to leverage these markers and context to make algorithms more capable.