"We suck at dealing with abuse and trolls on the platform and we've sucked at it for years."
This is Twitter CEO Dick Costolo's assessment, an honest, unfiltered comment from a leaked internal company memo obtained by The Verge earlier this week. Costolo's blunt admission comes on the back of two particularly high-profile incidents of cyberbullying on the world's biggest microblogging site - writer Lindy West did a report for NPR about her confrontation with a Twitter troll (who created an account impersonating her recently deceased father to attack her), while Robin Williams' daughter Zelda shut down her account after being targeted with a series of vicious tweets. Both cases shine a light on the impacts of Twitter abuse - abuse that people are experiencing every day. The dangers of such attacks are even more resonant when you consider the damage they can cause to teenagers and adolescents, amongst whom social media is a way of life. But at the end of the day, what can Twitter actually do to stop behaviour that's existed since the beginnings of human communication?
The Growing Monster
Anyone who uses social media has come across some level of 'trolling' behaviour at some point. Facebook is fighting its own battle against the issue, but Twitter is in a slightly tougher spot due to the platform's emphasis on open communication. While Facebook has evolved to become more of a meeting place for close friends and family members, Twitter's where people go to voice their opinion on current issues, to add their thoughts to the global conversation. That openness also leaves people vulnerable to the slings and arrows of online commentators. You can block them out, of course, using Twitter's 'block user' option, but that doesn't stop them from opening a new account and continuing their abuse. And the damage caused by just one tweet can be significant.
To get an idea of the level of the issue, I ran a few searches in Topsy to find out how many times common trolling terms have been used on Twitter over the past week:
Of course, not every one of these mentions is abuse - many are used in other, non-abusive contexts - but that's a cumulative 16,200 mentions of these terms within the last seven days alone. Even if only a quarter of those mentions are actual bullying, that's still 4,050 instances to be addressed. What's more, with studies showing that cyberbullying is now among the main causes of suicidal thoughts in children and adolescents, the need to address those instances couldn't be more pressing. But how do you detect and contain such ugly behaviour effectively, whilst operating a platform that encourages open discussion and debate?
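The back-of-envelope arithmetic above can be sketched as follows. The per-term figures below are placeholders (the article's source data isn't reproduced here); only their 16,200 total and the one-quarter assumption come from the text.

```python
# Rough estimate of genuine abuse among tracked trolling-term mentions.
# Per-term counts are hypothetical placeholders chosen to sum to the
# article's reported weekly total of 16,200.
term_mentions = {
    "term_a": 5400,
    "term_b": 3800,
    "term_c": 4200,
    "term_d": 2800,
}

total = sum(term_mentions.values())
estimated_abuse = total // 4  # assume only a quarter are actual bullying

print(total)            # 16200
print(estimated_abuse)  # 4050
```

Even under that conservative quarter assumption, the weekly volume remains in the thousands.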
Detecting Hate
This is the core issue that Twitter - and really, every social network - is facing: how can you detect and eliminate negative behaviour amongst such a huge volume of social interactions? Five hundred million tweets are sent every day - half a billion messages that need to be scanned and sorted in order to pinpoint offenders and revoke their access. Every day. It's a monumental task, and one which would require thousands of man-hours to accomplish manually. Detection could be left to automation - tweets can already be searched by sentiment, so Twitter could conceivably create an algorithm that highlights users with consistently low sentiment scores and flags them for review. Such a system might yield positive results (and is likely already in operation on some level), but there's still the issue of stopping abusive tweets in real time. That's simply impossible - there's no way to pre-emptively stop someone tweeting something, so there's no way to completely shield users from exposure to abuse. The only hope is detection, then penalisation, of the offending members.
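The kind of review-flagging speculated about above might look something like this sketch. Everything here is an assumption for illustration: the threshold, the function names, and the toy keyword-based scorer (a real system would use a proper sentiment model).

```python
from statistics import mean

NEGATIVE_THRESHOLD = -0.5  # assumed cutoff for "consistently low sentiment"

def flag_low_sentiment_users(tweets_by_user, score):
    """Return users whose mean sentiment score falls below the threshold.

    tweets_by_user: dict mapping username -> list of tweet texts
    score: callable returning a sentiment score in [-1.0, 1.0]
    """
    flagged = []
    for user, tweets in tweets_by_user.items():
        if tweets and mean(score(t) for t in tweets) < NEGATIVE_THRESHOLD:
            flagged.append(user)
    return flagged

# Toy scorer for illustration only: marks tweets containing hostile keywords.
HOSTILE = {"hate", "worthless", "die"}

def toy_score(text):
    return -1.0 if set(text.lower().split()) & HOSTILE else 0.5

tweets = {
    "troll_account": ["i hate you", "you are worthless"],
    "normal_user": ["lovely weather today", "great match last night"],
}
print(flag_low_sentiment_users(tweets, toy_score))  # ['troll_account']
```

Note this only surfaces accounts for human review after the fact - it does nothing to stop an abusive tweet being seen in real time, which is exactly the limitation described above.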
The Steps to Protection
In an interview with BuzzFeed after the leaked memo, Dick Costolo noted some of the measures Twitter is investigating to address trolling, including streamlining the reporting process and altering the terms of service that can be bent to benefit those spreading online hate. Such measures are, and will always be, difficult - take Facebook, which employs a team of people to sift through and filter the horrors of humanity away from our news feeds. Many of those employees end up psychologically scarred by the things they have to see on a daily basis - and those are the real terrors, the content that genuinely must be addressed and removed every day. But offence is, of course, subjective, so those teams are also dealing with a raft of other issues and reports on top of the more serious violations - their work can literally save lives if they get to it in time, so a level of priority needs to be assigned to each report. But who's to say my 'minor' complaint isn't major to me? How can assessors know what my mental state is and how the words of others affect me? Someone saying they hate me might be nothing, something I should just be able to get over - but contextually, it could be devastating.
It's that ambiguity that further complicates the battle against online bile - identifying the issues that genuinely need to be addressed amongst those that are less impactful, from an overall standpoint. In an ideal world, no such call would have to be made; all reports would be treated equally. But in the real world, capacity is a necessary consideration - it's not possible to cover everything.
In future, the likely solution to the detection and eradication of online trolls lies in big data and analytics. Studies have already shown the power of social media data to detect and categorise users' personality and psychological leanings, and it won't be long before similar analysis can determine levels of psychological distress, or highlight people in danger, based on their traits and habits. Such advanced analytics could lead to a system that effectively highlights the most at-risk users, ensuring their reports are dealt with faster - or even dealt with before the user makes any such report. It's advances like this that underline the positive possibilities of social data and analytics.
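The triage idea described above - handling the highest-risk reports first rather than treating the queue as first-come, first-served - can be sketched with a simple priority queue. The class and its risk scores are hypothetical; in the scenario the article imagines, those scores would come from behavioural analytics rather than being supplied by hand.

```python
import heapq

class ReportQueue:
    """Abuse reports served highest-risk first (heapq is a min-heap,
    so risk scores are negated on insertion)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, report_id, risk_score):
        heapq.heappush(self._heap, (-risk_score, self._counter, report_id))
        self._counter += 1

    def next_report(self):
        return heapq.heappop(self._heap)[2]

queue = ReportQueue()
queue.add("minor-insult", risk_score=0.2)
queue.add("sustained-harassment", risk_score=0.9)
queue.add("impersonation", risk_score=0.7)

print(queue.next_report())  # sustained-harassment
```

The design choice here mirrors the article's point about capacity: when not everything can be reviewed in time, ordering by estimated risk means the reports most likely to involve someone in danger are seen first.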
But on top of all this, we too can make a positive impact. I've long said that there's no need for negativity on social media - if you don't like what someone has to say, don't follow them. There's no need to engage in personal or vicious attacks when there are plenty of other people you likely do want to listen to and engage with. The rules and parameters of our online communities haven't been solidified yet, haven't been embedded into our culture the way traditional communications processes have. There's an opportunity for us to make social media a place where bullying and unnecessary hatred is simply not acceptable - a new world order, to some degree. In that sense, it's all of our responsibility to report abusers when we see them, to offer assistance to those who may be in need - to basically not turn a blind eye to harassing and overly negative behaviour. We can't expect the platforms alone to save us, but we can contribute in our own ways and help ensure such activity is not accepted. Hopefully, through our combined efforts, we can change the perspective on how we impact the people around us and the roles we play when expressing our own thoughts and beliefs.