After Twitter announced its decision to ban political ads entirely this week, it was interesting to note the mixed responses to the approach.
Some praised Twitter for taking a stand, while others questioned whether it would really make that much of a difference either way. On Facebook, targeted political ads were a major factor in the attempted voter manipulation around the 2016 US Presidential Election cycle - but on Twitter, it wasn't so much its ads that were the problem, but the automated bot armies which were deployed to boost certain tweets.
Various research reports have found large clusters of Twitter bots that have been used to amplify certain messaging. In April 2017, in the wash-up from the election, UK researchers uncovered several huge, inter-connected Twitter bot networks, with the largest incorporating some 500,000 fake accounts, many of which were summoned to amplify political messaging.
As reported by Recode at the time:
"During the third presidential debate, Twitter bots sharing pro-Trump-related content outnumbered pro-Clinton bots by 7 to 1. And in the span between the first and second debates, more than a third of pro-Trump tweets were generated by bots, compared with a fifth for pro-Clinton tweets."
So given findings like this, will Twitter's political ad ban really make much of a difference?
In a timely alignment, this week Twitter also released the latest version of its Transparency Report, which provides an overview of all the actions it's taken based on information requests and enforcement actions.
The report also, interestingly, includes stats on accounts removed due to 'artificial amplification' - "Actions to make an account or concept more popular or controversial than it actually is through inauthentic engagements."
As you can see here, Twitter's taking action against millions of accounts each month in an effort to eliminate such activity. Twitter says that it's been focused on "deterring potentially spammy accounts at the time of account creation; often before their first Tweet", which has led to significant improvements on this front. And while Twitter likely won't be able to stop such activity outright, it is worth noting that the platform is taking action, and that it has been ramping up its efforts over the past couple of years in order to break up these bot networks.
Indeed, Twitter has also been implementing new rules around this, like changes to its API to limit the use of mass follow and retweeting, while it's also shifted its reporting metrics to a new 'mDAU' or 'monetizable daily active users' count, which will theoretically enable it to remove more fake profiles without such actions impacting its broader usage stats.
The question is valid - do Twitter ads really matter when its bot problem is far more pressing? But the answer, arguably, is that Twitter is improving on both fronts, which should make its political ad ban a more effective, impactful approach, and limit the spread of misinformation.
In addition to this, Twitter's latest Transparency Report also shows a 105% increase in accounts being locked or suspended for violating the Twitter Rules, while 50% of the tweets the platform takes action on for abuse are now being proactively surfaced by Twitter's detection technology, rather than as a result of user reports.
Twitter has long been criticized for failing its users on these fronts - for not taking action against those who break the rules, and for not protecting users from abusive tweets. Twitter is now improving, but because more of this enforcement happens proactively, with fewer user reports involved, the improvements attract less attention. That's how it should be, but it also likely means that Twitter's not getting the credit it deserves for the work it's done in addressing these issues.
There's more in Twitter's full Transparency Report, including information on government information requests and legal demands.
And while Twitter still has work to do on all fronts, it is worth noting the progress that it's made - and how that could, potentially, mean that its political ads ban will be more effective than some expect.