‘Shadow banning’ sounds much cooler than it actually is.
Rather than being some mystical element, shadow banning is, as described by Twitter:
“Deliberately making someone’s content undiscoverable to everyone except the person who posted it, unbeknownst to the original poster.”
So rather than banning you or blocking your content outright, your reach is simply reduced, making your posts much less visible. That approach has become increasingly viable in the age of algorithmic feeds, where not all of your posts will be seen by all of your followers anyway.
Twitter has provided this definition because it has been accused by various groups of implementing shadow bans, specifically on political voices. Twitter had offered some explanation on this in the past, dismissing the concern, until this happened earlier in the week:
Twitter “SHADOW BANNING” prominent Republicans. Not good. We will look into this discriminatory and illegal practice at once! Many complaints.
— Donald J. Trump (@realDonaldTrump) July 26, 2018
When the President starts tweeting about it to his 53 million followers, you pretty much have to take action, which Twitter has done in a detailed explanation of ‘shadow banning’ and how Twitter’s algorithm-defined timeline may have led some to believe they were falling victim to censorship.
First, Twitter has sought to clarify the platform’s stance:
“We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile). And we certainly don’t shadow ban based on political viewpoints or ideology.”
Of course, that explanation – especially the ‘you may have to do more work to find them’ part – only incited more anger amongst those who’ve felt victimized by Twitter’s perceived ‘bans’. Maybe Twitter doesn’t shadow ban, but restricting content from follower timelines is the same thing, right?
Twitter explained further:
“We do rank tweets and search results. We do this because Twitter is most useful when it’s immediately relevant. These ranking models take many signals into consideration to best organize tweets for timely relevance.”
Those signals incorporate these key factors:
- Tweets from people you’re interested in should be ranked highly
- Tweets that are popular are likely to be interesting and should be higher ranked
- Tweets from bad-faith actors who intend to manipulate or divide the conversation should be ranked lower
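Twitter hasn’t published its actual ranking model, but conceptually, the three factors above combine into a single relevance score per tweet. A minimal sketch of that idea – with entirely hypothetical weights, signal names, and scoring logic, since none of these details are public:

```python
# Hypothetical sketch of signal-based tweet ranking. Twitter's real model,
# weights, and signal names are NOT public; everything here is illustrative.

def rank_score(affinity, popularity, bad_faith):
    """Combine the three signals into one score.

    affinity:   0..1, how interested the viewer is in the author
    popularity: 0..1, normalized engagement on the tweet
    bad_faith:  0..1, likelihood the author is a bad-faith actor
    """
    # Made-up weights: affinity and popularity raise a tweet,
    # a bad-faith signal pushes it down the timeline.
    return (2.0 * affinity) + (1.0 * popularity) - (3.0 * bad_faith)

def rank_timeline(tweets):
    # Higher scores surface first; low-scored tweets are shown later,
    # not removed -- which is the distinction Twitter draws.
    return sorted(
        tweets,
        key=lambda t: rank_score(t["affinity"], t["popularity"], t["bad_faith"]),
        reverse=True,
    )
```

With these made-up weights, a tweet from an author you interact with often would outrank a more popular tweet from an account flagged as likely bad-faith – demoted in ranking, but still present if you go looking for it.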
There are still questions in there, right? On the last point, who determines ‘bad-faith actors’? What does ‘divide the conversation’ mean?
Definitions like this are what’s led various social media platforms into problems with their processes, as often such rules are handed to teams of people who have to interpret and apply them as they see fit. Wording like this can lead to alternate understandings and varying application – we saw this recently with Twitter’s verification process, which was being applied in widely different ways by different Twitter teams, and within different regions.
Further explaining the last point, Twitter says these are some of the key signals it uses when determining ‘bad-faith actors’:
- Specific account properties that indicate authenticity (e.g. whether you have a confirmed email address, how recently your account was created, whether you uploaded a profile image, etc)
- What actions you take on Twitter (e.g. who you follow, who you retweet, etc)
- How other accounts interact with you (e.g. who mutes you, who follows you, who retweets you, who blocks you, etc)
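To make those signal groups concrete, here is a rough sketch of how account properties and negative interactions could feed a single ‘bad-faith’ score. All field names, thresholds, and weights are hypothetical – Twitter has described the signal categories, not the formula:

```python
# Hypothetical 'bad-faith' scoring sketch, based only on the signal groups
# Twitter describes publicly. Field names and weights are illustrative.

def bad_faith_score(account):
    score = 0.0

    # Account properties that indicate authenticity
    if not account.get("confirmed_email"):
        score += 0.3
    if account.get("account_age_days", 0) < 7:
        score += 0.2
    if not account.get("has_profile_image"):
        score += 0.1

    # How other accounts interact with you: mutes and blocks,
    # weighed against follower count (actions like follow/retweet
    # patterns would feed in similarly).
    negative = account.get("mutes", 0) + account.get("blocks", 0)
    positive = max(account.get("followers", 1), 1)
    score += min(negative / positive, 0.4)

    return min(score, 1.0)  # clamp to 0..1
```

Under this toy model, a week-old account with no email confirmation, no profile image, and lots of mutes would score far higher than an established, rarely muted one – exactly the kind of judgment that, as the article notes, is hard to pin down in a way everyone will accept.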
Twitter says that they know this approach is working because they’re seeing fewer abuse and spam reports, which seems like a good indicator, but still, there are probably going to be some issues, some elements lost in translation.
The key issue which sparked the most recent queries on shadow bans came from people who claimed that their accounts were virtually invisible in search. Twitter has acknowledged this error, and claims to have fixed it.
“[We’ve] identified an issue where some accounts weren’t auto-suggested in search, even when people were searching for their specific name. To be clear, this only impacted our search auto-suggestions. The accounts, their tweets and surrounding conversation about those accounts were showing up in search results. As of yesterday afternoon, this issue was resolved.”
So Twitter doesn’t shadow ban and the key concern raised has been fixed. Case closed, right?
Unfortunately, this won’t be the last we hear of it – there will undoubtedly be some who believe that Twitter or Facebook or Instagram is censoring their posts, there will always be users who believe their voices are being silenced, particularly given the influence that algorithms now have over what people see.
And while Twitter’s explanation is good, and it’s a positive that they’re taking the time to provide more transparency into how their ranking systems actually work, the growth of social as a medium for sharing divisive political content will increasingly put the platforms themselves in difficult editorial positions like this, where clarity on such rules is absolutely critical.
The problem is that total clarity may not be possible; there will always be some flexibility in such rules that’s up for interpretation.
That’s not to say the platforms should just give up and let everything flow, one way or another, but as we’re also seeing with Facebook’s recent suspension of controversial broadcaster Alex Jones, the position they’re now in, where they can dictate the reach of such content, requires a delicate balance.
And no matter what, they likely won't be able to please all sides.