The Challenge of Online Abuse Detection: Breaking the Back of the Beast

Thilini Wijesiriwardene

How many conversations did you have this week? How many of them were face-to-face conversations, or conversations over the telephone or VoIP? And, most importantly, how many times were you caught up in the reply-comment loop of social media? With the dawn of the internet and social media, we have acquired the ability to maintain conversations with vast audiences and to put forth opinions for the whole world to see. Twitter is a social networking site founded in 2006 by Jack Dorsey. From his first tweet in 2006, Twitter has come a long way. Today it has become many people's go-to platform for scanning news and gossip online (isn't this what most of us do nowadays; we prefer to scan a few sentences rather than read a lengthier article).

Apart from this, Twitter has given all of us direct access to the lives of many figures deemed influential and famous in a myriad of fields, ranging from entertainment to religion to politics. You can know what your president thinks about a pressing issue and how other vital figures respond to his or her outlook. This may seem like your personalized grapevine. Our opinions can be less censored and more relaxed. We could, if we wish, address thousands of followers instantly. For famous figures, it is an appealing route to bypass the traditional media and reach their fans. As delightful as it seems, this path of direct access to one's perspectives can sometimes end in disaster rather than benefit. Am I not entitled to rant on Twitter? Can't I express my honest opinion on "my" Twitter account? Aren't we allowed to have disagreements and different views? All of these questions point to the genuinely peculiar and dangerous nature of abuse and harassment on Twitter.

Whether you are a celebrity or an ordinary citizen, harassment and abuse on Twitter have reached and affected the lives of many users. Why is Twitter still struggling to bring this issue under control? The simple answer is that we are incredibly nuanced in our expressions; in this age of technology and connectivity, we can create and understand new cultural contexts more than ever before. Because of this ability of ours, filtering Twitter streams against lists of offensive or profane words to identify hate and abuse is largely ineffective and can even be counterproductive. Censoring tweets based on the use of words classified as offensive or harassing does not necessarily cut it, and it could anger users who use such words in their regular conversations.

One of the most recent examples that depicts the difficulty of online abuse detection stems from a controversial tweet by Roseanne Barr, a well-known American comedian. She posted a racially charged tweet about Valerie Jarrett, a former senior advisor to President Barack Obama, who is an African-American woman. Barr's tweet did not include any offensive or profane words per se, but it was instantly picked up by followers and interested parties as racist and eventually got her sitcom canceled by ABC. Here is Barr's tweet:

“if the muslim brotherhood & planet of the apes had a baby=vj”

As humans, we connect millions of concepts with each other in our minds, and these are colored by a multitude of cultural contexts. We are extremely quick to understand the connotations of discourse. That is how this tweet got picked up as racist by humans. Can the same filtering be achieved by algorithms? As of today, no algorithm comes close to the human brain in understanding the overtones of human communication, and no algorithm possesses the context, including socio-cultural, linguistic, and other dimensions, that humans are equipped with. But there is hope that we can make algorithms more "aware" of context. What if there were knowledge graphs that connect the concept of Planet of the Apes with apes, and apes with the racial stereotyping of African Americans? That would make algorithms less inept at "understanding" the context. If Valerie Jarrett is an entity in the same graph, with links pointing to the political nature of the position she held, that would also help an algorithm put things in perspective.

As hopeful as this seems, it would be naive to believe that encoding world knowledge in knowledge graphs and similar structures is the only answer to the question of providing context to algorithms. There is rich and often overlooked content embedded in the metadata of online conversations. What is this content that an algorithm might glean context from? When a message is posted on Twitter, the audience has many ways to react to it. They could comment on it and/or retweet it (as is, or with minor changes of their own). This reaction-oriented behavior of the audience can be leveraged to identify "controversial" content. In this case, Roseanne Barr's tweet might be picked up by an algorithm "taught" to monitor for controversial content. From the replies pouring in within short time intervals and from the number of retweets with minor changes (mostly disapproving of the original racially charged tweet), an algorithm could "learn" that the original tweet is controversial and might need to be flagged for more thorough inspection, perhaps by a human.
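
To make this concrete, here is a minimal sketch (not part of any deployed system) of how such a reaction-based signal might be computed. The ReactionWindow schema, the threshold values, and the scoring rule are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ReactionWindow:
    """Audience reactions to one tweet over a short observation window (hypothetical schema)."""
    replies: int          # reply tweets seen in the window
    retweets: int         # plain retweets
    quote_retweets: int   # retweets with text added or changed by the retweeter
    window_minutes: int   # length of the window

def is_controversial(w: ReactionWindow,
                     reply_rate_threshold: float = 5.0,
                     quote_ratio_threshold: float = 0.3) -> bool:
    """Flag for human review when replies pour in quickly or a large share of the
    retweets add their own (often disapproving) commentary."""
    reply_rate = w.replies / max(w.window_minutes, 1)
    total_retweets = w.retweets + w.quote_retweets
    quote_ratio = w.quote_retweets / total_retweets if total_retweets else 0.0
    return reply_rate >= reply_rate_threshold or quote_ratio >= quote_ratio_threshold

print(is_controversial(ReactionWindow(replies=120, retweets=40, quote_retweets=35, window_minutes=10)))  # True
```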

The approach mentioned above also emphasizes another critical fact: tweets rarely stand alone. Therefore, evaluating the abusive or harassing nature of tweets by looking only at single tweets, and not at the entire conversation (if available), is not as effective. Looking at the conversation could reveal the type of relationship between the parties engaged in it; friends could be using certain flagged words in a playful manner without any intention to harass, while strangers using similar words might actually intend to harass. The nature of the conversation, whether it is a heated argument or a casual exchange, also provides context. It is therefore fruitful to look at entire conversations, not at individual tweets taken out of context.

As social creatures, in every social situation, both online and offline, we tend to release specific markers that can be picked up and analyzed. It is essential to identify and examine the markers released in online interactions, because today our online decisions, experiences, and secrets increasingly influence our offline lives: online advertisements have been shown to change our votes offline, online harassment tends to lead to dire consequences offline, and so on. Believing that an algorithm will one day "understand" the undertones of communication as humans do, and be equipped with the context humans have, seems far-fetched. But as researchers, it is worthwhile and exciting to try to leverage these markers and this context to make algorithms more capable.

Challenges and Opportunities for Sentiment Analysis over Social Media for Dynamic Events such as an Election

Monireh Ebrahimi and Amir Hossein Yazdavar

Previous efforts to assess people's sentiment on Twitter have suggested that Twitter may be a valuable resource for studying political sentiment and that it reflects the offline political landscape. According to a Pew Research Center report, in January 2016, 44% of US adults reported having learned about the presidential election through social media. Furthermore, 24% reported using the social media posts of the two candidates as a source of news and information, more than the 15% who used both candidates' websites or emails combined (Pew Research Center). The first presidential debate between Trump and Clinton was the most tweeted debate ever, with 17.1 million tweets (First Presidential Debate Breaks Twitter Record).

Many opinion mining systems and tools have been developed to provide users with the attitudes of people towards products, people, and topics and their attributes or aspects. One of the most frequently used techniques to gauge the public's attitude, including its preferences and support, is sentiment analysis. However, sentiment analysis for predicting the result of an election is still a challenging task. Though apparently simple, it is empirically very hard to train a successful model for conducting sentiment analysis on tweet streams for an election. Among the key challenges are changes in the topics of conversation and in the people about whom social media posts express opinions. In this blog, we provide a brief overview of our sentiment analysis classifier and highlight some of the challenges we encountered while monitoring the presidential election at Kno.e.sis using our Twitris system. We should note here that the Twitris-enabled election predictions by Kno.e.sis and the Cognovi Labs team were among the very few that succeeded while the vast majority of predictions failed (Election Day #SocialMedia Analysis #Election2016 08Nov2016, Cognovi Labs: Twitter Analytics Startup Predicts Trump Upset in Real-Time).

We first created a supervised multi-class classifier (positive vs. negative vs. neutral) for analyzing people's opinions about different election candidates. To this end, we trained our model for each candidate separately. The motivation for this segregation comes from our observation that the same tweet on an issue can be positive for one candidate and negative for another; the sentiment of a tweet is very candidate-dependent. In the first round of training, in July 2016 before the conventions, we used 10,000 labeled tweets collected for 5 candidates (Bernie Sanders, Donald Trump, Hillary Clinton, John Kasich, and Ted Cruz) on 10 issues, including budget, finance, education, energy, environment, healthcare, immigration, gun control, and civil liberties. In addition to excluding retweets, tweets were tested for similarity using a Levenshtein distance ratio to ensure that no two tweets were too similar. Afterward, through many experiments over different machine learning algorithms and parameter settings, we found our best model with respect to F-measure. Our best model for Clinton uses SVM with TF-IDF vectorization of 1-3 grams, positive and negative hashtags for each candidate, and the number of positive and negative words (sentiment score), and achieved 0.66 precision, 0.63 recall, and 0.63 F-measure. Through manual error analysis, however, we noticed the importance of considering more comprehensive features, such as the number of positive and negative words, to avoid some outrageous errors. Therefore, we conducted some experiments with the number of positive words, the number of negative words, and LIWC as features. Surprisingly, these features improved our F-measure by only around 1%. Apart from that, we also used a distributed vector representation of training instances obtained from word2vec models pre-trained on Twitter and Google News instead of a discrete/traditional representation; however, the performance decreased. Finally, we achieved the best performance using a CNN, with three variants: random initialization, static (pre-trained word2vec), and non-static (fine-tuned word2vec) embeddings.
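
For illustration, the following is a minimal sketch of the kind of per-candidate pipeline described above, using scikit-learn. It is not our actual Twitris code: the example tweets and labels are placeholders, the candidate-specific hashtag and sentiment-word features are omitted, and SequenceMatcher.ratio() stands in for the Levenshtein-based similarity ratio.

```python
from difflib import SequenceMatcher

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def deduplicate(pairs, threshold=0.9):
    """Drop (tweet, label) pairs whose text is a near-duplicate of one already kept."""
    kept = []
    for text, label in pairs:
        if all(SequenceMatcher(None, text, k).ratio() < threshold for k, _ in kept):
            kept.append((text, label))
    return kept

# Hypothetical per-candidate training data (labels: positive / negative / neutral).
clinton_tweets = [
    "So proud to cast my vote for Hillary today",
    "Hillary's plan would be a disaster for this country",
    "Watching the debate tonight with no strong feelings either way",
]
clinton_labels = ["positive", "negative", "neutral"]

pairs = deduplicate(list(zip(clinton_tweets, clinton_labels)))
texts, labels = zip(*pairs)

# One classifier per candidate: TF-IDF over 1-3 grams feeding a linear SVM.
clinton_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),
    LinearSVC(),
)
clinton_model.fit(list(texts), list(labels))
print(clinton_model.predict(["I am with her all the way"]))
```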

Challenges

1. Fast-paced change in dataset

The most challenging part is creating a robust classification system that copes with the dynamic nature of tweets related to an election. An election is very active (or dynamic): every day, people talk about new aspects of the election and the candidates in the context of new events. Therefore, important features used to classify sentiment may soon become irrelevant, and newly emerging features would be neglected if we did not update the training set regularly. Furthermore, in the political domain, unlike many other domains, people mostly express their sentiment toward the candidates implicitly, without using sentiment words extensively. This makes the situation worse and more challenging. Another factor that may exacerbate the problem is differentiating transient important features from lasting or recurring ones; such features may disappear and then reappear in the future [1]. In the context of the election, for example, this can happen because of temporal changes in what each candidate's supporters talk about. Given this non-stationary characteristic of the election, we may encounter a concept drift/dataset shift problem, that is, learning when the test and training data have different distributions. Most machine learning approaches assume an identical distribution for the training and test sets, although in many real-world problems the test/target environment changes over time. This phenomenon is an important factor in selecting our classification model; among classification models, SVM is one of the most robust to dataset shift.
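
One simple way to notice that the data is drifting away from the training set is to compare vocabularies over time. The sketch below is an illustrative heuristic, not the method we used; the Jaccard-based score and the top_k cutoff are assumptions.

```python
from collections import Counter

def vocabulary_drift(training_tweets, recent_tweets, top_k=200):
    """Rough drift signal: 1 minus the Jaccard overlap between the top-k terms of the
    window the model was trained on and the top-k terms of the most recent window.
    A score creeping toward 1 suggests the training set is going stale."""
    def top_terms(tweets):
        counts = Counter(word.lower() for tweet in tweets for word in tweet.split())
        return {word for word, _ in counts.most_common(top_k)}

    old_top, new_top = top_terms(training_tweets), top_terms(recent_tweets)
    union = old_top | new_top
    if not union:
        return 0.0
    return 1 - len(old_top & new_top) / len(union)
```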
Figure: Two active-learning workflow models with a human annotator in the loop
All of these challenges make active learning necessary; in fact, we can seldom apply machine learning to a real-world problem successfully without it. There are two possible models of active learning useful to our problem, as shown in the figures above. Both models are expensive because they involve a human in the loop for the labor-intensive and time-consuming task of annotation. Annotation is even more challenging here due to both the short length of tweets and the inherent vagueness of political tweets. A question may arise as to why we have not used an unsupervised approach, such as a lexicon-based approach, when the annotation is so challenging and our annotated dataset becomes obsolete so fast. The answer is that in political tweets people often do not use many sentiment words; hence, the performance of a lexicon-based method would be low. Empirically, we employed the MPQA subjectivity lexicon [2] to capture the subjectivity of each tweet; however, the accuracy of this model did not go beyond 0.49.
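
As an illustration of the general idea (not a reproduction of the two models in the figures), here is a minimal pool-based uncertainty-sampling sketch: the tweets on which the current classifier has the smallest decision margin are sent to the human annotator first. It assumes a fitted multi-class pipeline exposing decision_function, such as the TF-IDF + LinearSVC sketch above.

```python
import numpy as np

def select_for_annotation(pipeline, unlabeled_tweets, batch_size=50):
    """Pool-based uncertainty sampling: pick the tweets the current model is least
    sure about and send them to a human annotator."""
    scores = pipeline.decision_function(unlabeled_tweets)   # shape: (n_tweets, n_classes)
    top_two = np.sort(scores, axis=1)[:, -2:]               # two highest class scores per tweet
    margins = top_two[:, 1] - top_two[:, 0]                 # small margin = uncertain
    most_uncertain = np.argsort(margins)[:batch_size]
    return [unlabeled_tweets[i] for i in most_uncertain]
```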

Despite the costs, updating the training set regularly was the most effective measure for keeping the classifier reasonably good during the 2016 election. In fact, no matter how well our system may have been working, what worked well up until yesterday could become useless today after a new political event, piece of propaganda, or scandal. For example, our first model, trained during the primaries, performed quite poorly during and after the conventions. Therefore, feeding a new training set to the model was the key task for keeping the system reliable. To do so, we updated the training data in a timely manner, e.g., every few days. It may also be worth trying to include more important or influential tweets in the training data; to achieve this, we collected data mostly at specific times, such as during the presidential debates.

2. Candidate-dependence

Most sentiment analysis tools work in a target-independent manner. However, a target-independent sentiment analyzer is prone to yield poor results on our dataset because, after the conventions, a huge number of our tweets contained the names of both candidates. “I am getting so nervous because I want Trump to win so bad. Hillary scares me to death and with her America will be over” and “I don't really want Hillary to win but I want Trump to lose can we just do the election over” are examples of such tweets. Based on our observations, about 48% of our instances contained variants of both Clinton's and Trump's names. In such cases, the sentiment of those tweets may be misclassified for a given candidate because of interference from features related to the other candidate. The state-of-the-art approaches for supervised target-dependent sentiment analysis can be grouped into syntax-based and context-based methods. The first group relies merely on POS tagging or syntactic parsing for feature extraction (e.g., [3]), while the second defines a left and right context for each target [4]. [4] demonstrates that the latter outperforms the former in the classification of informal texts such as tweets. To further enhance performance, sentiment lexicon expansion work such as [5] can be used to extract sentiment-bearing, candidate-specific expressions, which can then be added to the feature vector of a classifier. In our case, since we trained one classifier per candidate, we can include instances containing the names of more than one candidate in the training sets of both classifiers. The key is to include features related to the target candidate in its corresponding classifier and exclude the irrelevant ones in both the training and testing phases. To do that, we can use either dependency relations or proximity (similar to the two aforementioned works) to include the on-target features and ignore the off-target ones. Similarly, in the testing phase, depending on the classifier, we should include or exclude some of the features from our feature vector.
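
A rough sketch of the proximity idea follows: before building the feature vector for a given candidate's classifier, keep only the tokens that fall within a small window around a mention of that candidate. The alias lists and the window size are illustrative assumptions, not the exact configuration we used.

```python
import re

# Illustrative alias lists; a real system would include many more spellings and handles.
CANDIDATE_ALIASES = {
    "clinton": {"hillary", "clinton", "hillaryclinton"},
    "trump": {"donald", "trump", "realdonaldtrump"},
}

def on_target_text(tweet: str, candidate: str, window: int = 4) -> str:
    """Keep only tokens within `window` words of a mention of the target candidate,
    approximating proximity-based context for target-dependent sentiment analysis."""
    tokens = re.findall(r"\w+", tweet.lower())
    aliases = CANDIDATE_ALIASES[candidate]
    keep = set()
    for i, token in enumerate(tokens):
        if token in aliases:
            keep.update(range(max(0, i - window), min(len(tokens), i + window + 1)))
    return " ".join(tokens[i] for i in sorted(keep))

print(on_target_text("I want Trump to win so bad. Hillary scares me to death", "trump"))
# -> "i want trump to win so bad"
```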


3. The importance of identifying the user’s political preference

The ultimate goal of sentiment analysis over political tweet streams is predicting election results. Hence, obtaining information about the political preferences of users can provide a more fine-grained source of insight for a political pundit or analyst. Inspired by [6], we developed a simple but effective algorithm to categorize users into 5 groups: far left-leaning, left-leaning, far right-leaning, right-leaning, and independent. The idea behind our approach is the tendency of users to follow others who share their political orientation: the more right-leaning (left-leaning) accounts a user follows, the higher the probability that the user is right-leaning (left-leaning). Therefore, we collected a set of Twitter users with known political orientation, including all senators, congresspersons, and political pundits. We then estimate the probability that a user is left-leaning (right-leaning) by calculating the ratio of the user's left-leaning (right-leaning) followees to his or her total number of followees. Finally, we decide the political preference of a user by comparing this ratio with a threshold T. Gaining this information about users helps improve social media-based prediction of the election.
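
A simplified sketch of this idea is shown below, collapsing the five groups into three for brevity (the "far" categories would just add more thresholds). The seed handles and the threshold value are illustrative, and political_leaning is a hypothetical helper, not our production code.

```python
def political_leaning(followee_ids, left_seed, right_seed, threshold=0.6):
    """Estimate a user's leaning from the known-orientation accounts they follow.
    `left_seed`/`right_seed` are sets of accounts with known orientation (senators,
    congresspersons, pundits); `threshold` plays the role of T in the text."""
    known = [f for f in followee_ids if f in left_seed or f in right_seed]
    if not known:
        return "unknown"
    left_ratio = sum(f in left_seed for f in known) / len(known)
    if left_ratio >= threshold:
        return "left-leaning"
    if 1 - left_ratio >= threshold:
        return "right-leaning"
    return "independent"

left_seed = {"SenWarren", "SenSanders"}
right_seed = {"SenTedCruz", "SpeakerRyan"}
print(political_leaning({"SenWarren", "SenSanders", "nytimes"}, left_seed, right_seed))  # left-leaning
```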


4. Content-related challenges (hashtags)

Recently, there has been a surge of interest in distant supervision, i.e., training a classifier on a weakly labeled training set [7,8]. In this method, the training data is labeled automatically based on some heuristics. In the context of sentiment analysis, using the emoticons :) and :( (and other similar emoticons) as positive and negative labels, respectively, is one form of distant supervision. Hashtags are also widely used for machine learning tasks such as emotion identification [9]. Similarly, people use a plethora of hashtags in their tweets about the election. Moreover, as mentioned before, due to the dynamic nature of the election domain, the quality, quantity, and freshness of labeled data play a vital role in creating a robust classifier. It is therefore tempting to use the popular hashtags that each candidate's supporters use as weak labels for our dataset. However, our analysis for the 2016 election showed that people widely use hashtags sarcastically in the political domain, and using popular hashtags for automatic labeling leads to a huge number of incorrectly labeled instances. For example, throughout the election, only 43% of tweets containing #Imwithher were positive for Clinton, and the hashtag was used sarcastically in 27% of tweets. Furthermore, our experiments show that using those hashtags as features for our classifier does not boost performance either.
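
For reference, here is what emoticon-based weak labeling looks like in its simplest form; hashtag-based labels are deliberately left out for the reason just described. The marker sets are illustrative, not exhaustive.

```python
POSITIVE_MARKERS = {":)", ":-)", ":D"}
NEGATIVE_MARKERS = {":(", ":-("}

def weak_label(tweet: str):
    """Assign a noisy (distantly supervised) label from emoticons.
    Returns None when no marker is present or the markers conflict."""
    has_pos = any(marker in tweet for marker in POSITIVE_MARKERS)
    has_neg = any(marker in tweet for marker in NEGATIVE_MARKERS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return None

print(weak_label("Great debate performance tonight :)"))  # positive
```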


5. Content-related challenges (links)

Existing techniques for tweet classification rely merely on the tweet content and ignore the content of the documents the tweet points to through a URL. However, based on our observations in the 2016 election, around 36% of tweets contain a URL to an external link. Similarly, for the 2012 election, [6] demonstrates that 60% of tweets from very highly engaged users contain URLs. Those links are crucial: without them, the tweet is often incomplete and inferring the sentiment is difficult or impossible even for a human annotator. Our hypothesis, therefore, is that incorporating the content, keywords, or title of the document a URL points to as features will yield a significant performance gain. To the best of our knowledge, there is no work on tweet classification that expands tweets based on their URLs. However, link expansion has been successfully applied to other problems such as topical anomaly detection [10] and distant supervision [11].
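
A minimal sketch of link expansion under this hypothesis might look like the following: append the title of each linked page to the tweet text before vectorization. It uses the third-party requests library, does naive regex-based title extraction, and skips the caching, redirect handling, and rate limiting a production crawler would need.

```python
import re

import requests  # third-party: pip install requests

def expand_urls(tweet_text: str, timeout: float = 5.0) -> str:
    """Append the <title> of every linked page to the tweet text so the classifier
    sees some of the context the URL carries."""
    expanded = tweet_text
    for url in re.findall(r"https?://\S+", tweet_text):
        try:
            html = requests.get(url, timeout=timeout).text
            match = re.search(r"<title[^>]*>(.*?)</title>", html, re.S | re.I)
            if match:
                expanded += " " + match.group(1).strip()
        except requests.RequestException:
            pass  # dead or slow link: fall back to the bare tweet text
    return expanded
```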

6. Content-related challenges (sarcasm)

Based on our observations, 7% of Trump-related tweets and 6% of Clinton-related tweets are sarcastic. Among these sarcastic tweets, 39% and 32%, respectively, were classified incorrectly by our system. To date, many sophisticated tools and approaches have been proposed to deal with sarcasm. Looking closer at these works, they mostly focus on detecting sarcasm in text, not on how to cope with it in the sentiment analysis task. This raises the interesting question of how sarcasm may or may not affect the sentiment of tweets and how to deal with sarcastic tweets in both the training and prediction phases. Riloff et al. [12] proposed an algorithm to recognize a common form of sarcasm that flips the polarity of a sentence; such polarity-reversing sarcastic tweets often express positive (negative) sentiment in the context of a negative (positive) activity or situation. However, Maynard et al. [13] show that determining the scope of sarcasm in tweets is still challenging; the polarity reversal may apply to part of a tweet or its hashtags but not necessarily the whole. As a result, dealing with sarcasm in the sentiment analysis task is an open research issue worth more work. In terms of the training set, our hypothesis is that excluding sarcastic instances from the training set will remove noise and improve its quality.


7. Interpretation-related challenges (Sentiment Analysis versus Emotion Analysis)

The study of sentiment has evolved into the study of emotions, which has finer granularity. Positive, negative, and neutral sentiments can be expressed with different emotions, such as joy and love for positive polarity, anxiety and sadness for negative, and apathy for neutral sentiment. Our emotion analysis of who tweeted #IVOTED in the 2016 US presidential election revealed that Trump followers were joyful about Trump on election day. Though sentiment analysis favored Clinton in the early hours, emotion analysis showed support (joy) for Trump. In fact, emotion is a better criterion for predicting people's actions, such as voting, and there are usually large emotional differences among tweets of the same polarity. Hence, emotion analysis should be a component of any election prediction tool.


8. Interpretation-related challenges (Vote counting vs engagement counting)

Most, if not all, of the aforementioned challenges affect the quality of our sentiment analysis approach. It is also very important to correlate a user's online behavior and opinion with their actual vote. Chen et al. [6] show the greater importance of highly engaged users in predicting the result of the 2012 election. There are two plausible explanations for this. First, the more a user tweets, the more reliably we can infer his or her opinion. Second, highly active people are usually more influential and more likely to actually vote in the real world. That is why an election monitoring system should report user-level normalized sentiment in addition to tweet-level sentiment; it is then the analyst's task to consider both of these factors in prediction.
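
A minimal sketch of reporting both levels is shown below: tweet-level counts alongside user-level counts in which each user contributes one (majority) vote. The (user_id, sentiment) input format and the majority rule are simplifying assumptions; they do not capture the engagement weighting discussed in [6].

```python
from collections import Counter, defaultdict

def user_level_sentiment(tweets):
    """Aggregate per-tweet sentiment into one vote per user, so prolific users do not
    dominate the tweet-level counts. `tweets` is an iterable of (user_id, sentiment)
    pairs with sentiment in {"positive", "negative", "neutral"}."""
    per_user = defaultdict(Counter)
    for user_id, sentiment in tweets:
        per_user[user_id][sentiment] += 1
    # Each user contributes their majority sentiment exactly once.
    return Counter(counts.most_common(1)[0][0] for counts in per_user.values())

stream = [("u1", "positive"), ("u1", "positive"), ("u1", "positive"), ("u2", "negative")]
print(Counter(s for _, s in stream))   # tweet-level: 3 positive, 1 negative
print(user_level_sentiment(stream))    # user-level:  1 positive, 1 negative
```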


9. The importance of location

An application that predicts election results must weight each state's influence by its number of electoral votes. Many tools and approaches have been developed for both fine-grained [14] and coarse-grained (Twitris) location identification in tweets, for purposes such as disaster management (Hazards SEES: Social and Physical Sensing Enabled Decision Support for Disaster Management and Response) and election monitoring. In the latter case, the geographic location of a tweet or the location in the user's profile can be used to estimate the user's approximate location. During the 2016 election, the spatial aspect of our Twitris system played a crucial role in helping end users predict the election.
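
As a toy illustration of electoral-vote weighting (not our Twitris implementation), state-level support shares can be turned into an electoral-vote tally. The support numbers below are made up, while the electoral-vote counts are the 2016 figures for those three states.

```python
# Hypothetical per-state shares of positive sentiment for one candidate, keyed by state code.
state_support = {"OH": 0.52, "FL": 0.49, "PA": 0.51}

# Electoral votes for the same states (2016 apportionment).
ELECTORAL_VOTES = {"OH": 18, "FL": 29, "PA": 20}

def projected_electoral_votes(state_support, threshold=0.5):
    """Translate state-level support into an electoral-vote tally: a state's votes go
    to the candidate whenever their support share crosses the threshold."""
    return sum(ELECTORAL_VOTES[s] for s, share in state_support.items() if share > threshold)

print(projected_electoral_votes(state_support))  # 18 + 20 = 38
```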

10. Trustworthiness-related challenges (Bots)

What happens when a large number of participants in a conversation are biased robots that artificially inflate social media traffic, manipulate public opinion, and spread political misinformation? A social bot is a computer algorithm that automatically generates content on social media and tries to emulate, and possibly change, public attitudes. For the past few years, social bots have inhabited social media platforms. Our analysis demonstrates that a large portion of the pro-Trump and pro-Clinton tweets during the first and second debates originated from automated accounts (How Twitter Bots Are Shaping the Election, How the Bot-y Politic Influenced This Election). Indeed, we witnessed bot wars during the election. Recently, pinpointing the sources of bots has attracted many researchers. Supervised statistical models have been built using feature sets ranging from network features (retweets, mentions, and hashtag co-occurrence) [15], to user and post features (e.g., language, geographic location, account creation time, number of followers and followees) [16], to timing features (content generation and consumption, measured via tweet rate and inter-tweet time distribution) [17]. Content features are based on natural language cues measured via linguistic analysis, e.g., part-of-speech tagging [18]. At Kno.e.sis, by examining the source that generates a tweet (checking whether or not it originates from an API), our system has found bots in support of both Trump and Clinton with high precision but low recall. A more sophisticated approach is needed to improve the recall.
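
At sketch level, the source-based check amounts to inspecting the source field that the Twitter API attaches to each tweet, which names the client that posted it. The list of "human" clients below is illustrative and incomplete, which is one reason such a rule trades recall for precision.

```python
# Clients that correspond to a human typing into an official Twitter app (illustrative, not exhaustive).
HUMAN_SOURCES = {
    "Twitter for iPhone",
    "Twitter for Android",
    "Twitter Web Client",
}

def looks_automated(tweet: dict) -> bool:
    """First-pass heuristic: a tweet whose `source` field names anything other than an
    official client is treated as potentially API-generated (bot-like)."""
    # The Twitter API's `source` field is an HTML anchor such as
    # '<a href="http://twitter.com/download/iphone" ...>Twitter for iPhone</a>'.
    source_html = tweet.get("source", "")
    return not any(name in source_html for name in HUMAN_SOURCES)
```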




References

[1]
Pinage, Felipe Azevedo, Eulanda Miranda dos Santos, and João Manuel Portela da Gama. "Classification systems in dynamic environments: an overview." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 6.5 (2016): 156-166.

[2]
Wilson, Theresa, Janyce Wiebe, and Paul Hoffmann. "Recognizing contextual polarity in phrase-level sentiment analysis." Proceedings of the conference on human language technology and empirical methods in natural language processing. Association for Computational Linguistics, 2005.

[3]
Jiang, Long, et al. "Target-dependent twitter sentiment classification." Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, 2011.

[4]
Vo, Duy-Tin, and Yue Zhang. "Target-Dependent Twitter Sentiment Classification with Rich Automatic Features." IJCAI. 2015.

[5]
Chen, Lu, et al. "Extracting Diverse Sentiment Expressions with Target-Dependent Polarity from Twitter." Sixth International AAAI Conference on Weblogs and Social Media. 2012.

[6]
Chen, Lu, Wenbo Wang, and Amit P. Sheth. "Are Twitter users equal in predicting elections? A study of user groups in predicting 2012 US Republican Presidential Primaries." International Conference on Social Informatics. Springer Berlin Heidelberg, 2012.

[7]
Purver, Matthew, and Stuart Battersby. "Experimenting with distant supervision for emotion classification." Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2012.

[8]
Go, Alec, Richa Bhayani, and Lei Huang. "Twitter sentiment classification using distant supervision." CS224N Project Report, Stanford 1.12 (2009).

[9]
Wang, Wenbo, et al. "Harnessing Twitter 'Big Data' for Automatic Emotion Identification." Privacy, Security, Risk and Trust (PASSAT), 2012 International Conference on and 2012 International Conference on Social Computing (SocialCom). IEEE, 2012.

[10]
Anantharam, Pramod, Krishnaprasad Thirunarayan, and Amit Sheth. "Topical anomaly detection from twitter stream." Proceedings of the 4th Annual ACM Web Science Conference. ACM, 2012.

[11]
Magdy, Walid, et al. "Distant Supervision for Tweet Classification Using YouTube Labels." ICWSM. 2015.

[12]
Riloff, Ellen, et al. "Sarcasm as Contrast between a Positive Sentiment and Negative Situation." EMNLP. Vol. 13. 2013.

[13]
Maynard, Diana, and Mark A. Greenwood. "Who cares about Sarcastic Tweets? Investigating the Impact of Sarcasm on Sentiment Analysis." LREC. 2014.

[14]
Ji, Zongcheng, et al. "Joint Recognition and Linking of Fine-Grained Locations from Tweets." Proceedings of the 25th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2016.

[15]
Lee, Kyumin, Brian David Eoff, and James Caverlee. "Seven Months with the Devils: A Long-Term Study of Content Polluters on Twitter." ICWSM. 2011.

[16]
Kramer, Adam DI, Jamie E. Guillory, and Jeffrey T. Hancock. "Experimental evidence of massive-scale emotional contagion through social networks." Proceedings of the National Academy of Sciences 111.24 (2014): 8788-8790.

[17]
Yang, Zhi, et al. "Uncovering social network sybils in the wild." ACM Transactions on Knowledge Discovery from Data (TKDD) 8.1 (2014): 2.

[18]
Davis, Clayton Allen, et al. "BotOrNot: A system to evaluate social bots." Proceedings of the 25th International Conference Companion on World Wide Web. International World Wide Web Conferences Steering Committee, 2016.

Bots in the Election

At the Kno.e.sis Center at Wright State University, we continue to refine our Twitris technology (licensed by Cognovi Labs LLC) for collective social intelligence to analyze social media (especially Twitter) in real time. Kno.e.sis and Cognovi Labs teamed up with the Applied Policy Research Institute (APRI) early in the year and created some tools to monitor the debates. See the press coverage on TechCrunch. From the time we first began following the nominees on Twitter, one thing became clear: Donald Trump was considerably more popular than his competition during the primaries as well as the general election. To be honest, I had never considered the possibility that social bots might be playing a role in this popularity.


After the conclusion of the first debate, all parties who had watched our "Debate Dashboard" were shocked, not just by the volume of tweets but by the sentiment and emotion, which appeared more positive for Trump than for Clinton. When we compared this with coverage from major media outlets, we became more and more concerned that our tool had some serious flaws. Because of articles we had seen discussing the large support Trump had on Twitter, we decided to focus on sentiment. Up until a few days before the election, we continued to update and improve our sentiment analysis algorithm.


Notwithstanding the improvements in precision to our sentiment classifier, we continued to see Trump as the clear leader. As the debates came and went, our data remained consistent, and we began an urgent quest for an explanation. We added gender analysis because media outlets were telling us that women were down on Trump and would be a major force in the election. Our analysis did not show this, despite having 96% precision in determining female and male users. We developed a proprietary process to separate users into left-leaning and right-leaning groups; we could even say whether a user was strongly or loosely associated with a particular political party. Unfortunately, analyzing the data by political association didn't help either: surprisingly, many strongly left-leaning users were anti-Hillary, only a bit behind the right-leaning users.


After the second debate, we began to see many articles pop up about social bots. Once we began to look into the issue, we found many articles from early in the year talking about Trump's "Bot Army" (Trump's Biggest Lie? The Size of His Twitter Following). We had our aha moment. That article references The Atlantic's use of a tool called BotOrNot. We decided to try to use this or a similar tool during the last debate to remove bot accounts and analyze the remaining data.


BotOrNot is a tool developed at Indiana University Bloomington in collaboration with the University of Southern California (Marina del Rey). The tool computes over one thousand metrics by looking at the user account and analyzing retweets, hashtags, metadata, etc. It performed extremely well in the DARPA Twitter Bot Challenge, correctly identifying all of the known bots (though it did incorrectly mark some additional users as bots). We were excited to learn that the authors had made their tool available through an API endpoint, and we decided to run our tweets through the system to test the speed at which we could process users. Twitris at this point was processing nearly 35 tweets per second for the election analysis alone, and it very quickly became clear that their service would not be able to handle the volume of data we would be consuming.


Though the final presidential debate was only several days away, we still had hope that we would find an answer. We saw Prof. Philip Howard, from the University of Oxford, mention in The Washington Post that his group considered any user who tweets more than 50 times in one day to be a bot. It would be relatively simple to create an index of users, increment the count per tweet, and check the index quickly as the tweets roll through. We might have done this had our team been free at the time, but some were working on bug fixes, others on improving sentiment, and still others on fixing some infrastructure issues we were experiencing. Our corpus of tweets for the election campaign was on its way to exceeding 60 million tweets, and a robust implementation would have required more time than anyone had to offer.
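
For what it's worth, the index we had in mind would have been only a few lines; the sketch below shows the idea. The parse_created_at and flag_as_likely_bot helpers in the usage comment are hypothetical placeholders, not existing functions.

```python
from collections import defaultdict
from datetime import datetime, timezone

DAILY_LIMIT = 50  # Howard's heuristic: more than 50 tweets per day looks automated

tweet_counts = defaultdict(int)  # (user_id, date) -> number of tweets seen so far

def register_tweet(user_id: str, created_at: datetime) -> bool:
    """Increment the per-user daily counter as tweets roll through the stream and
    return True once the user crosses the 50-tweets-per-day threshold."""
    key = (user_id, created_at.astimezone(timezone.utc).date())
    tweet_counts[key] += 1
    return tweet_counts[key] > DAILY_LIMIT

# Usage inside a streaming loop (hypothetical helpers):
# if register_tweet(tweet["user"]["id_str"], parse_created_at(tweet["created_at"])):
#     flag_as_likely_bot(tweet)
```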


I suppose that now is as good a time as any to define what a "bot" is. Phil Howard says, in the Washington Post article mentioned above, that "A Twitter bot is nothing more than a program that automatically posts messages to Twitter and/or auto-retweets the messages of others". I personally like this definition, but it leaves a little wiggle room when tools like BotOrNot, which focus primarily on the user account, are considered. A Twitter user can be a real, living, breathing human being and still exhibit bot-like tweeting habits. I think this is the reason Howard's group settled on the 50-tweets-per-day statistic instead of relying on a classifier. A user can tweet for themselves part of the time but still have some automated process that tweets certain things on their behalf. There are many reasons someone would do this, for example, to increase their Klout score (imagine a LuLaRoe seller, YouTuber, or blogger). There are many companies you can pay to automate this kind of activity for you. Some, like Linkis' "Convey" (more on this later), work by finding influential tweets and tweeting them on your behalf. These tweets are fairly easy to spot because they attach "via @c0nvey" to the end of the original tweet.
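
Signature-based detection of such services is straightforward; the sketch below flags tweets carrying the "via @c0nvey" suffix. The signature list is illustrative and would grow as other automation services are identified.

```python
import re

# Signatures appended by known tweet-automation services; "via @c0nvey" is the one
# discussed here, and others would be added as they are discovered.
AUTOMATION_SIGNATURES = [
    re.compile(r"via @c0nvey\s*$", re.IGNORECASE),
]

def is_automated_tweet(text: str) -> bool:
    """Flag tweets carrying a known automation signature, regardless of whether the
    owning account is otherwise human."""
    return any(signature.search(text) for signature in AUTOMATION_SIGNATURES)

print(is_automated_tweet("Great analysis of the debate https://t.co/x via @c0nvey"))  # True
```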


In the end, what we did was develop a system that is able to quickly and accurately weed out tweets that were not authored by humans, even if the user account is owned by an actual human. Let's see how well our system stacks up against the BotOrNot service. We collected all of the bots found over one fifteen-minute period and ran them through BotOrNot. 67.16% of the accounts behind the tweets we labeled were determined by BotOrNot to be bot-owned accounts. I was a little disappointed by this, so I took a look at the users that BotOrNot dismissed as human. In a first pass, I applied Howard's 50-tweets-per-day rule; doing so increased the percentage of accurately labeled bot tweets to 73.88%. One of the screen names found among this group actually contained the word "bot". The most important thing for us at this point was to make sure that we weren't getting a lot of false positives, so we looked at each account one by one and at each tweet in our system, both those classified as "likely bot" and those classified as "likely human".


 
Figure: Real-time labeling of bot and human tweets in Twitris




Many of the users with low per-day tweet counts had a combination of the two (human-authored and automated tweets), and looking at the tweets labeled "likely bot", nearly all of them contained "via @c0nvey". Digging a bit further, I found that the Convey service has been accused of tweeting on behalf of unwitting users in the past. Here's a thread from one Twitter user describing his experience:
Figure: Screenshot of a Twitter thread about the Convey (@c0nvey) service tweeting on a user's behalf


So, our system is able to accurately detect automated (bot) tweets even when the user is not a bot, and our detection rate looks reasonable compared to other services. We see bot traffic at a rate of about 5% per day in our election campaign, though some days reach nearly 8%.


Moving Forward (oh, and Fake News)


Elections come and go. Once they are gone, we seek to apply our findings to future work. Being able to detect bots is great, but what else can we do with that? Well, post-election we have all learned the term "fake news" (content that is entirely made up, not grounded in truth or reality), something that not many people were concerned about before: see the Google Trend.
Figure: Google Trends interest over time for the term "fake news"
Where does all of this fake news come from? There have been troves of news reports blaming Facebook and Twitter for altering the outcome of the election. Obviously, Facebook and Twitter themselves weren't creating pro-Trump news (Facebook in particular was accused of suppressing pro-Trump trends); however, some blame these companies for not alerting people that the news they were "allowing" to spread was fake. I don't think that is fair. These companies rely on the fact that they don't publish news to avoid lawsuits (for more, read this article on Facebook, the "News Feed", and Section 230 of the Communications Decency Act).


During the evaluation of our bot detection system, we noticed something interesting: a large majority of the bot-labeled tweets contained links to dubious-looking news stories. Because of our ability to identify these bot tweets, we can exclude them from analysis when considering a brand-centered campaign (like Samsung during the Note 7 battery "situation"). It is important to note that, even after eliminating "fake news" and bot tweets from our analysis, Trump was always winning. We saw the same thing with the Brexit referendum earlier this year, where Twitris helped us correctly predict the outcome before the polls closed. There is clear evidence that high tweet volume translates into success (except for Bernie Sanders, but there may be, ahem, other reasons for that). It seems to me that for bots and "fake news" to have swayed the election, they would have needed to be ready to go as soon as Trump announced that he was running, but what we have seen is that he was always ahead.


We will continue to find new ways to leverage everything we learned from the 2016 election. If you want to stay up to date on our analysis, please sign up to receive Cognovi Labs' newsletter at www.CognoviLabs.com or join Kno.e.sis on Facebook, and while you are there, check out the other post-election analyses.