Child sexual abuse images

We automatically block access to child sexual abuse images. Our customers don’t have to take any action to block these images – nor can they unblock access to them. We’re not directly required by law to block child sexual abuse images, but we consider it legitimate to take this action as it prevents the commission of a crime – the viewing of a child sexual abuse image. We do this voluntarily to protect children.

We were the first communications provider to develop technology to block these images when we introduced our blocking system, Cleanfeed, in 2004. Since then, almost all other communications providers in the UK have introduced similar technology. It’s also been adopted in other countries around the world.

Who decides which images are blocked?

The Internet Watch Foundation (IWF) gives us a list of images to block. There are around 1,000 to 3,000 blocked images on its list at any one time. For understandable reasons, the list of blocked images isn’t publicly available.
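As a purely illustrative sketch of how a filter might consult such a list (this is not a description of Cleanfeed’s actual design; the file format, hashing scheme and function names below are assumptions), a requested address could be normalised and checked against a set of one-way hashes of the listed addresses, so the list itself is never held in readable form:

    import hashlib

    def load_blocklist_hashes(path):
        """Load SHA-256 hashes of blocked URLs, one per line (hypothetical format)."""
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}

    def normalise(url):
        # Minimal normalisation for illustration; a real system would do far more.
        return url.strip().lower().rstrip("/")

    def is_blocked(url, blocklist_hashes):
        # Hash the normalised URL and test membership against the confidential list.
        digest = hashlib.sha256(normalise(url).encode("utf-8")).hexdigest()
        return digest in blocklist_hashes

Storing only hashes is one way a provider could check requests against a confidential list without keeping the addresses themselves in plain text.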

The IWF has detailed procedures it uses when deciding which images should be blocked. There’s an appeals procedure against its decisions.

How effective is Cleanfeed?

We don’t claim Cleanfeed solves the problem of access to this kind of material. Determined people wanting to find child sexual abuse images can often find ways around blocking mechanisms. But Cleanfeed does help prevent inadvertent access, and so cuts the overall number of successful attempts to access these images.

What happens if there’s an attempt to access a blocked site?

Until 2013, people attempting to visit blocked sites or images were shown a 404 error page indicating the content hadn’t been found. In 2013, we changed our approach. Today we display a web page explaining that the site contains illegal child sexual abuse images and offering links to counselling services.
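To illustrate the difference between the two approaches (an illustration only; the status codes, page wording and helper names below are assumptions, not a description of our production systems), a blocking system might once have answered with a bare “not found” error and would now return an explanatory page with links to support services:

    BLOCK_PAGE_HTML = """<html><body>
    <h1>Access has been blocked</h1>
    <p>The address you tried to reach has been identified as containing
    illegal child sexual abuse images and access to it is blocked.</p>
    <p>Links to counselling and support services would appear here.</p>
    </body></html>"""

    def blocked_response_pre_2013():
        # Old behaviour: a generic error, indistinguishable from a missing page.
        return 404, "Not Found"

    def blocked_response_today():
        # Current behaviour: explain why the page is blocked and signpost support.
        return 403, BLOCK_PAGE_HTML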

This change of approach was significant. Since we introduced Cleanfeed, public concern about online images of child sexual abuse has grown. The new web page acknowledges that some people try to access this material deliberately. The generic message might alarm people who don’t intend to access the material. But it’s vital we deter people from accessing child sexual abuse images. In doing so, we help protect children.

What information do we collect about access to child sexual abuse images?

Using Cleanfeed, we record the number of times we block access to these types of images.

But these statistics can’t tell the difference between the deliberate and the accidental. For example, someone might click on an email link without realising it points to child sexual abuse images. On top of that, the email client might automatically try to connect to these sites, possibly multiple times, to display hyperlinked images.

From the end of January 2015 to early November 2015, the average number of attempts to retrieve a child sexual abuse image notified to us by the IWF was 36,738 every 24 hours.

What should happen in the future?

By introducing Cleanfeed in 2004 we’ve played a vital role in blocking access to child sexual abuse images. We are proud of that. But it’s helpful to reflect on how we might improve the existing model – to keep it fit for the future.

Under the Protection of Children Act 1978, taking, making, showing, distributing, or possessing with a view to distributing indecent photographs of children is a crime. This statute is central to the IWF’s assessments. But there’s currently no law forcing us or other communications providers to block the child sexual abuse images the IWF identifies; we do this voluntarily to protect children and cut crime.

There have been no legal challenges to the IWF model (not surprising, given the nature of the material). But the IWF is a registered charity funded by the EU and the online industry. (We were among the co-founders and are one of the many members that contribute to its running costs.) Despite its expertise, it has no official standing to determine whether or not material is unlawful; its blocking list is not decided by any judicial authority.

Where blocking is done by “interception”, strict legal rules apply. This is how Cleanfeed works. So this voluntary blocking activity could in principle pose a legal risk for us under those rules. In reality, though, we’re comfortable we’re within the law because we’re seeking to make the web safe for everyone and to prevent crime.

There’s a small chance that on occasion some of the material we block may be legal. We may face complaints, or even legal challenges, as a result. On balance, we’re happy to take the tiny risk of legal action over our responsibility to protect children.

Nevertheless, we think the current voluntary blocking system could be strengthened by giving it legal force. The IWF’s work is really valuable in setting an internationally agreed benchmark for judging unlawful material; few would disagree with that. But a new scheme that gave legal force to the IWF’s assessment of material, alongside compulsory blocking, would reassure everyone that all blocked material was illegal and would make all communications providers participate.

Could the IWF approach be used for other types of content, such as extremist or radicalisation material?

Behind every image of child sexual abuse is a grave offence against a child. Publishing or possessing these images is illegal. Other online content on subjects like extremism and radicalisation may of course be similarly reprehensible, but it’s often trickier to pin down as unlawful.

The content in question could be open to interpretation. People could have different views on its potential impact and so its lawfulness. Or it could be that the only way to judge the content would be to analyse the intentions of the person who created it.

So, it’s complicated. For example, it’s an offence under the Terrorism Act 2006 to publish or disseminate, intentionally or recklessly: "a statement that is likely to be understood by some or all of the members of the public to whom it is published as a direct or indirect encouragement or other inducement to them to the commission, preparation or instigation of acts of terrorism".

We provide a reporting button for our customers to alert the Counter Terrorism Internet Referral Unit to any terrorist material online. But it won’t always be clear whether material is illegal in this context. The decision would likely need a close look at the words, images and overall impact of the content. That kind of judgement is always going to be subjective in a way that assessing an image of child sexual abuse isn’t. Communications providers should not be the people making these judgements.

Furthermore, there’s no body with the right standing (like the IWF) to define the boundaries between lawful and unlawful content. And where there’s doubt, the tendency might be to over-block material, infringing people’s right to free expression.

How do we currently deal with extremist/radicalisation material?

We don’t automatically block it. But lots of it will be filtered out by the parental controls we offer our consumer customers.

If someone tells us about this type of content, we use a specialist firm to help us quickly assess the content and determine whether it should be blocked. If we agree with the person flagging it to us, we include it on the blocked sites list within the relevant category in BT Parental Controls. There is no expert body or guidance on how to deal with this. We don’t make a legal assessment of the material because we have no legal authority to do so. But we believe this approach makes us less likely to infringe the right to free expression – whatever we put into one of the categories within our parental controls, it’s the customer who decides whether to apply the filters.

We follow the same steps regardless of who raises a concern. Adopting our standard process, we have in the past reviewed content from a list of material which, in the view of the police, was unlawfully terrorism-related. Some of this content was already on our list of blocked sites in BT Parental Controls’ filters – under categories like hate and self-harm, social networking or media streaming. After looking more closely at the other sites, we added many to the blocked list under the hate category in BT Parental Controls.

There’s no perfect solution. BT Parental Controls is not without challenges and compromises when it comes to blocking potentially unlawful material. However, it’s a pragmatic solution for a difficult issue with no clear external processes, and one which allows us to pay due regard to the right to free expression.

How will this material be dealt with in the future?

The government and media have called for the internet industry to help tackle extremist content because of the serious threat it poses to UK security. In its recently published “Counter-Extremism Strategy”, the government said that communications providers play a critical role in tackling extremist content online.

We understand the government’s concern. But there’s a limit to what communications providers can do: extremist material is not always easily or accurately identified, and much of it sits on encrypted social media sites that our filters can’t easily block. Even when we can block it, content often quickly reappears on another website.

One suggestion is to strengthen communications providers’ Ts&Cs to choke off extremist material.17 We don’t think that’s a workable approach. Quite aside from the inherent difficulty in deciding what’s “extremist”, Ts&Cs will inevitably end up varying from provider to provider. We need a better understanding of the role and function we and our peers in the social media industry play, or we could end up straying into an inappropriate situation where corporations are asked to make decisions about people’s legal rights.

Any exceptions to the principle of open internet access must be clear and consistent. They should be under a transparent legal framework. There should ideally be a better, independent, legal process to evaluate this type of material and to decide what content should be taken down or blocked, with independent oversight to check requests, on behalf of the general public.

These safeguards are vital when looking at content whose impact, by its nature, needs careful weighing up. A court process – like the one we helped establish for blocking access to copyright-infringing sites – could work for requests to remove or block access. That way, an expert and independent court makes judgments on what’s legal and what’s not.

Or an independent and expert body could be empowered to make binding decisions (subject to certain caveats) on illegality. This could work on similar lines to our proposed model for assessing and blocking child sexual abuse images.

Either of these routes would possibly require new legislation. But even voluntary arrangements for automatically blocking intrinsically unlawful material could need a legal framework (because of the Net Neutrality regulation).

This framework would have to consider proportionality, and be in place by December 2016, as required under EU law. And ideally the processes for blocking child sexual abuse images would also be formalised with legislation to remove any residual doubt about their legality.


17 HM Government, Counter-Extremism Strategy 67