Live-streaming is the hyped feature of the moment, with Periscope leading the way (though Facebook seems to be closing the gap by the day). And with 2 million daily active users consuming the equivalent of more than 40 years of video content on the platform every day, there's clearly a lot going on in live-streams - and a lot of interest in the value and connective power the medium brings to our wider social media experience.
But along with the increased use of live-streaming come new challenges. One of the earliest criticisms of live-streaming was that it could easily be used to broadcast pay-per-view and subscription-based content, subverting the rights of broadcasters. Another challenge is in policing questionable content - just recently there was an incident where a group of teenagers live-streamed themselves having sex, and another where a French woman broadcast her own suicide.
Because the content is live, there's no filter, and that can lead to issues - with the content itself, as noted, but also with bullying and harassment, with many users jumping into live-stream comments to spew offensive remarks, which has reportedly put many people - particularly female users - off the medium entirely.
(Image via Embed.ly)
And while such issues are not isolated to Periscope, it's another knock against the app's parent company, Twitter, which has long had a poor reputation for protecting its users from abuse - even former CEO Dick Costolo noted last year that "we suck at dealing with abuse and trolls on the platform".
And the real challenge in live-streaming is that content is coming through live and in real-time - so how can you stop it before it's seen?
In an effort to tackle on-platform abuse, Periscope has implemented a unique solution. Rather than eliminating comments entirely (a key function of the app) or relying on automated sentiment detection to police what's shared (as each stream and audience will have a different tolerance for what's acceptable), Periscope is introducing a new 'trial by jury'-style system for potentially offensive remarks.
It works like this, as outlined on Periscope's Medium blog (a rough sketch of the flow follows the list):
- During a broadcast, viewers can report comments as spam or abuse. The viewer that reports the comment will no longer see messages from that commenter for the remainder of the broadcast. The system may also identify commonly reported phrases.
- When a comment is reported, a few viewers are randomly selected to vote on whether they think the comment is spam, abuse, or looks okay.
- The result of the vote is shown to voters. If the majority votes that the comment is spam or abuse, the commenter will be notified that their ability to chat in the broadcast has been temporarily disabled. Repeat offenses will result in chat being disabled for that commenter for the remainder of the broadcast.
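Periscope hasn't published any implementation details, but conceptually the flow reads like a simple crowd-vote loop. The minimal sketch below illustrates the idea; the class and method names, the five-voter jury size, and the single-strike escalation are all assumptions for illustration, not Periscope's actual code.

```python
import random
from collections import defaultdict

# Illustrative sketch of a crowd-vote moderation flow; names, jury size
# and escalation rules are assumptions, not Periscope's implementation.
JURY_SIZE = 5

class Broadcast:
    def __init__(self, viewers):
        self.viewers = set(viewers)
        self.muted_for = defaultdict(set)   # reporter -> commenters they no longer see
        self.strikes = defaultdict(int)     # commenter -> count of temporary chat disables
        self.banned = set()                 # chat disabled for the rest of the broadcast

    def report_comment(self, reporter, commenter, comment, get_vote):
        # The reporter immediately stops seeing this commenter's messages.
        self.muted_for[reporter].add(commenter)

        # A few other viewers are randomly selected to vote on the comment.
        pool = list(self.viewers - {reporter, commenter})
        jury = random.sample(pool, min(JURY_SIZE, len(pool)))
        votes = [get_vote(juror, comment) for juror in jury]  # 'abuse', 'spam' or 'ok'

        # A majority vote of spam/abuse temporarily disables the commenter's chat;
        # a repeat offence disables chat for the remainder of the broadcast.
        if sum(v in ('abuse', 'spam') for v in votes) > len(votes) / 2:
            self.strikes[commenter] += 1
            if self.strikes[commenter] > 1:
                self.banned.add(commenter)
        return votes
```

In practice the voting would be asynchronous and time-boxed (Periscope says the whole process lasts a matter of seconds), but the core idea - sampling a small, random jury from the live audience and acting on its majority verdict - is what the list above describes.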
It's an interesting solution, one which takes the onus off the broadcaster - for whom moderating while trying to broadcast could become cumbersome - and places it on the crowd to dictate what's acceptable. How effective it will be is another question - it's certainly better than nothing, but tolerance levels will, of course, be dictated by the crowd, and as noted in a post from Embed.ly (the source of the above screenshot), there are times when the abusive viewers outnumber those there to actually watch.
But either way, it's an interesting move for Periscope, and one which shows they are trying - actively looking for solutions to limit the impact of on-platform abuse.
According to Periscope:
"...the entire process above should last just a matter of seconds. That said, if people don't want to participate, broadcasters can elect to not have their broadcasts moderated, and viewers can opt out of voting from their Settings."
Interestingly, TechCrunch has also posted an article today which looks at how Facebook's tackling a similar problem, using image recognition technology to stop the spread of offensive content before it even goes live. Again, that can't happen in live-streaming, which is, by definition, happening live, but it's interesting to consider how evolving technologies could be used, in future, to recognize and detect offensive material before it has a chance to cause any damage (Twitter, too, has its own image recognition AI for detecting offensive content, though it's likely nowhere near as advanced as Facebook's).
In the case of live content, the challenge remains a difficult one, and one to which there are no real answers - though given the rise of live-streaming, particularly with Facebook's push to promote their own Live offering, it's likely to become a bigger focus as more people come to utilize the option.
Maybe, given the nature of streaming, the only way to police the content is by empowering the viewer communities themselves. And maybe they'll follow the lead of platforms like Reddit and form their own, solidified parameters around what's acceptable and what's not.
And while the system will require you, as a viewer, to give your opinion on a Periscope comment every now and then, that action - taking a moment to read and vote - could save someone else from having to deal with the fallout from such remarks.
And in that, one simple click could be more powerful than you realize.