Habbo, Facebook and the IWF: Child Safety Lessons
It's been a big ol' week in terms of child online safety and reporting ...
Habbo Hotel led the way with its exposé on Channel 4 News, about which you will surely have seen and read much.
I'm not going to add too many words to the ongoing debate, partly because others are expressing my feelings very well. Take a look at Jeremy Wilson in The Kernel, whose analysis of Sulake's position within the 'moral panic flavour of the moment' lampoons the CEO's statement that "It has provided a global wake-up call for both my company and our poorly-regulated industry" with the wry comment: "The conviction of Matthew Leonard for online child sex offences after contacting the majority of his 80 victims through Habbo clearly wasn’t a sufficient wake-up call for Sulake…"
... and partly because no website is absolutely secure: climbing the moral high ground invites a steep moral tumble back down. All I would say is that there are a lot of people in the industry - for example, all those who contributed towards UKCCIS's 'Good practice guidance for the moderation of interactive services for children' - who care deeply about the safety of minors and are devoting resources to the best moderator talent and technology: resources which do, of course, come off the bottom line. But as Sulake have discovered to their cost, it's an investment worth making.
Possibly as a reaction to the lack of response users received when they complained to the Habbo Hotel moderation team, Facebook then went public with its Facebook Reporting Infographic, illustrating what happens when you hit the 'report' button on the social networking site and showing the paths a report takes depending on the choices made through the process:
The accompanying Facebook blog post explains that reports are triaged to four separate teams:
- Safety
- Hate and harassment
- Access
- Abusive content
"In order to effectively review reports, user operations is separated into four specific teams that review certain report types — the safety team, the hate and harassment team, the access team, and the abusive content team. When a person reports a piece of content, depending on the reason for their report, it will go to one of these teams. For example, if you are reporting content that you believe contains graphic violence, the safety team will review and assess the report. And don’t forget, we recently launched our support dashboard, which will allow you to keep track of some of these reports.
"If one of these teams determines that a reported piece of content violates our policies or our statement of rights and responsibilities, we will remove it and warn the person who posted it. In addition, we may also revoke a user’s ability to share particular types of content or use certain features, disable a user’s account, or, if need be, refer issues to law enforcement. We also have special teams just to handle user appeals for the instances when we may might have made a mistake."
Back in April, Facebook released the Support Dashboard: a portal designed to help you track the progress of the reports you make to Facebook. The dashboard shows whether your report has been reviewed by Facebook and alerts you when a decision has been made about it. Update: this isn't yet available in the UK, nor is there any launch date in sight.
Internet Watch Foundation
As with eModeration's own escalation flow processes, Facebook's teams will take a call on whether content is illegal in any way and, if so, report it through to the appropriate authorities.
Which brings me to the last item on my list: the success story released by the Internet Watch Foundation (IWF) yesterday that an anonymous public report led to a man’s arrest, conviction and the rescue of three children from sexual abuse. In a particularly harrowing tale, the IWF's statement detailed the steps which were taken to get the content removed and, working with the Child Exploitation and Online Protection Centre (CEOP), to trace and arrest the offender within one week of the original complaint.
So what's the takeout from this week?
At the nub, it's that any website hosting user-generated content - and especially those aimed at children - needs to devote sufficient and appropriate resources to its moderation teams, tools and processes, and scale these as it grows. It's that we need to be transparent about these processes, and learn from others' best practice, regardless of the state of regulation in any territory. And finally, it's that our actions as watchdogs really can count. Thank you so much to the person who reported what they found on the internet to the IWF, and stopped the misery for three abused children.