Webwatch: Government unveils ‘world first’ plans for tough new online safety laws

The Government will introduce “world first” internet safety laws designed to make the UK the safest place in the world to be online, new proposals claim.

A white paper on online harms, published jointly by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, proposes strict new rules that would require firms to take responsibility for their users' safety, as well as the content that appears on their services.

It suggests punishing social media companies with large fines or blocking access to their services.

Under the oversight of an independent regulator, internet companies that break these rules could even see senior management held personally liable for the failings.

A newly introduced duty of care will require firms to take more responsibility for the safety of users and more actively tackle the harm caused by content or activity on their platforms.

The regulator will have the power to issue “substantial fines, block access to sites and potentially impose liability on individual members of senior management”.

However, the proposals have prompted warnings that oversight should not amount to state censorship.

A 12-week consultation will now take place before ministers publish draft legislation.

The proposed measures are part of a Government pledge to make the UK one of the safest places in the world to be online, and come in response to concerns over the growth of violent content, material encouraging suicide, disinformation and the exposure of children to cyberbullying and other inappropriate material online.

A number of charities and campaigners have called for greater regulation to be introduced, while several reports from MPs and other groups published this year have also supported the calls for a duty of care to be implemented.

Prime Minister Theresa May said the proposals were a sign the age of self-regulation for internet companies was over.

“The internet can be brilliant at connecting people across the world – but for too long these companies have not done enough to protect users, especially children and young people, from harmful content,” she said.

“Online companies must start taking responsibility for their platforms, and help restore public trust in this technology.”

The Home Secretary, Sajid Javid, added that tech firms had a “moral duty” to protect the young people they “profit from”.

“Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online,” he said.

“That is why we are forcing these firms to clean up their act once and for all.”

The proposed new laws will apply to any company that allows users to share or discover user-generated content or interact with each other online, the Government said. They will cover companies of all sizes, from social media platforms to file-hosting sites, forums, messaging services and search engines.

It also calls for powers to be given to a regulator to force internet firms to publish annual transparency reports on the harmful content on their platforms and how they are addressing it.

Companies including Facebook and Twitter already publish reports of this nature.

Responding to the proposals, Facebook’s UK head of public policy Rebecca Stimson said: “New rules for the internet should protect society from harm while also supporting innovation, the digital economy and freedom of speech.

“These are complex issues to get right and we look forward to working with the Government and Parliament to ensure new regulations are effective.”

Peter Wanless, chief executive of children’s charity the NSPCC – which has campaigned for regulation for the past two years – said the proposals would make the UK a “world pioneer” in protecting children online.

“For too long social networks have failed to prioritise children’s safety and left them exposed to grooming, abuse, and harmful content,” he said.

“So it’s high time they were forced to act through this legally binding duty to protect children, backed up with hefty punishments if they fail to do so.”

However, there have been warnings that the proposals could amount to state censorship.

“These things are always justified as being for good, kind and worthy objectives, but ultimately it’s giving power to a state regulator to decide what can and cannot be shown on the internet,” Victoria Hewson, of the Institute of Economic Affairs think tank, told the BBC.

“Maybe the authorities should be trying to stop these things at source.”

Former culture secretary John Whittingdale warned ministers risked dragging people into a “draconian censorship regime” in their attempts to regulate internet firms.

Writing in the Mail On Sunday, he said he feared the plans could also “give succour to Britain’s enemies”, giving them an excuse to further censor their own people.

HOW WILL AN ONLINE HARMS REGULATOR WORK?

Plans for new laws making tech giants and social networks more accountable for harmful content online have been set out by the Government, in a bid to make the UK one of the safest places in the world to be online.

Here is everything you need to know about the long-awaited white paper:

Will there be regulation?

An independent regulator will be responsible for ensuring tech companies abide by a new duty of care and code of practice.

The Government is currently consulting on whether this should mean the creation of a brand new regulator or whether it should be housed within an existing regulator, such as Ofcom.

What will the regulator do?

It is proposed that the regulator be given powers to ensure all companies affected by a new regulatory framework fulfil their duty of care.

Clear safety standards will be set out, and companies will be required to report to the regulator on how they are meeting them.

Tech firms could be issued with substantial fines for any proven failures, and required to publish a public notice detailing where they went wrong.

The Government is also consulting on giving the regulator even tougher powers to make individual senior managers criminally liable for any breaches.

This could extend to preventing offenders from appearing in search results, app stores or links in social media posts, before requiring internet service providers to block non-compliant websites or apps entirely as a last resort.

What is considered an online harm?

The Government has set out a wide-ranging list of what it considers online harms, covering both clearly illegal content and harms with a less clear legal definition.

Illegal harms that will be tackled include:

  • Child sexual abuse and exploitation
  • Terrorist content and activity
  • Organised immigration crime
  • Modern slavery
  • Extreme pornography
  • Revenge pornography
  • Harassment and cyberstalking
  • Hate crime
  • Encouraging or assisting suicide
  • Incitement of violence
  • Sale of illegal goods or services, such as drugs and weapons
  • Contempt of court and interference with legal proceedings
  • Sexting of indecent images by under 18s

Harms that will also be covered, but which have a less clear legal definition, include:

  • Cyberbullying and trolling
  • Extremist content and activity
  • Coercive behaviour
  • Disinformation
  • Violent content
  • Advocacy of self-harm
  • Promotion of female genital mutilation

It will also make companies liable for exposing children to content that is legal for adults, such as pornography.

Who will regulation affect?

Any companies that let users share or discover user-generated content or interact with others online will be affected by the regulations – particularly social networks such as Facebook, Instagram and Twitter.

However, it will also stretch to other parts of the web, including file hosting sites, forums, messaging services and search engines.

Why is the Government cracking down on online content?

The Government wants to stamp out a host of online harms, such as illegal activity and content ranging from terrorism-related material to child sexual exploitation and abuse, and the encouragement or assisting of suicide.

It also wants to tackle areas that are not illegal but it believes could be damaging to individuals, particularly children and vulnerable people.

It has come to the conclusion that self-regulation is no longer working and therefore wants to introduce new, legally binding measures that make the tech companies hosting the content responsible for blocking or removing it swiftly.

The urgency to act has been highlighted by a number of cases, such as that of teenager Molly Russell, who was found to have viewed content linked to self-harm and suicide on Instagram before taking her own life in 2017.

More recently, material relating to terrorism has also been a concern, following the mosque attack in Christchurch, New Zealand, which was livestreamed on Facebook.

What next?

A 12-week consultation on the proposals will now take place before the Government publishes its final proposals for legislation.

Copyright (c) Press Association Ltd. 2019, All Rights Reserved. Picture (c) Dominic Lipinski / PA Wire.