Digital Parenting | 06 Nov 2023

Online Safety Act 2023: Everything you need to know

The new law has been hailed by organisations such as the NSPCC, yet it continues to be hotly debated. We explain what it is and what it means for families.

The Government says the law will make the UK “the safest place in the world to be online.” First proposed in 2017, the Online Safety Bill has now been passed into law as the Online Safety Act 2023. Its proponents claim it will protect children from harmful material by forcing websites and apps to remove illegal content, and give adults better tools to screen out content they don’t want to see.

What is in the Online Safety Act?

In a word, lots. Its first draft, presented to Parliament in 2021, was a hefty 145 pages long. The latest iteration has mushroomed to almost twice that size. It’s a complicated piece of legislation, in part because it aims to regulate a highly complex area.

The law applies to any ‘user-to-user service’ accessible within the UK – that is, any kind of online service where content created, uploaded or shared by one user can be encountered by another.

If that sounds broad, that’s because it is – the Act potentially covers thousands of businesses and organisations whose websites and apps have any kind of messaging, sharing or forum function, not just household names such as Facebook, Twitter, TikTok, Snapchat and YouTube. However, the websites of news organisations – such as broadcasters and newspapers – are exempt, as are their comments sections.

How does the Online Safety Act aim to keep kids safe?

The law imposes Duties of Care on any user-to-user service. The most important of these is to protect the online safety of both children and adults. For adults, services have additional duties to protect ‘content of democratic importance’ and ‘journalistic content’.

In practice, this means they’ll have to conduct and publish risk assessments of the potentially harmful content on their services – and remove any such content they find.

‘Harmful content’ includes material that was already illegal in the UK, such as anything encouraging terrorism or child abuse. The Act adds newer categories, such as content encouraging self-harm, deepfake pornography, fraudulent paid-for advertising and cyber-flashing – sending someone an unwanted nude photo or video.

How will the law be enforced?

Ofcom, which currently regulates organisations such as broadcasters, telecoms companies and postal services, will have responsibility for enforcing the law. If user-to-user services don’t remove harmful content, Ofcom will have the power to block access to them within the UK by working with internet service providers such as Vodafone, search engines and app stores.

This is a big addition to Ofcom’s responsibilities. The closest equivalent in its current portfolio is investigating news broadcasters for alleged breaches of impartiality – and such investigations can take months.

Now that the Bill has received Royal Assent, the regulator will draw up codes of practice designed to help companies comply with the new rules. What will be in these codes remains to be seen.

What happens if the rules are broken?

A company can be fined up to £18 million or 10% of its global annual turnover, whichever is greater – and in the most serious cases, its senior managers could face prison.

Is pornography other than deepfake porn covered by the Online Safety Act?

Any pornographic online service – whether or not it has any user-to-user functionality – must use age verification to prevent children from accessing it.

Opponents of the law worry that this will lead to treasure troves of data being accumulated on adults’ private habits. Such sensitive data could become a tempting target for hackers intent on extortion – as the 2015 breach of the Ashley Madison dating site showed.

Is it true the Online Safety Act will block ‘legal but harmful’ content?

While the duty of care on user-to-user services will compel them to prevent children from seeing ‘legal but harmful’ content, such as content promoting eating disorders, the equivalent duty covering adults has been removed from the law.

Even so, the biggest platforms and services will have to offer adults better (and better advertised) control over the content they are exposed to – for example, filtering out unverified users to stop anonymous trolls in their tracks, or screening out topics and themes they’d rather not see.

Why is the Online Safety Act controversial?

The original prohibition on ‘legal but harmful’ content for adults was seen by critics as unjustifiable censorship, since it would have restricted access to such content online but not offline.

That provision has been removed now, so there’s nothing to worry about?

For critics of the law, the biggest overarching concern is that it penalises not the people who create illegal and harmful content but the people or companies that enable it to be seen.

For the law’s proponents, this is justified because the recommendation algorithms on services such as social networks enable such content to be consumed and shared far more widely than it otherwise would be. However, as we’ve seen, the law potentially applies to countless websites, apps and services, whether or not they are social networks and whether or not they use recommendation algorithms.

Critics are also concerned that the duty of care to block ‘legal but harmful’ content for children will lead to inconsistent standards of what is and isn’t harmful as defined by a multitude of private companies, organisations and other providers of user-to-user services.

Is it true the Online Safety Act will ban WhatsApp?!

Some messaging apps, such as WhatsApp, iMessage and Signal, use end-to-end encryption. This technology protects the privacy of users by making it impossible for anyone other than the sender and recipient to read their messages – not even the companies that provide those messaging services can access them.
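
For the technically curious, here’s a minimal sketch of the principle in Python, using the open-source PyNaCl library (our choice purely for illustration – real messaging apps use far more sophisticated systems, such as the Signal protocol). A message encrypted with the recipient’s public key can only be unlocked by the matching private key, which never leaves their device:

    # A minimal illustration of end-to-end encryption with public-key cryptography.
    # Requires the PyNaCl library: pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each person generates a key pair; private keys stay on their own devices.
    alice_private = PrivateKey.generate()
    bob_private = PrivateKey.generate()

    # Alice encrypts a message that only Bob's private key can unlock.
    # Anyone relaying the message - including the service - sees only ciphertext.
    alice_box = Box(alice_private, bob_private.public_key)
    ciphertext = alice_box.encrypt(b"See you at the school gates at 3pm")

    # Bob decrypts the message with his private key.
    bob_box = Box(bob_private, alice_private.public_key)
    print(bob_box.decrypt(ciphertext).decode())

Because the service only ever handles the ciphertext, there is no master key a company could hand over on request – which is why scanning encrypted messages would mean re-engineering the system itself.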

The law, on paper, gives Ofcom the power to compel services to scan encrypted messages for illegal and harmful content. However, security experts argue that this is not technically possible without weakening end-to-end encryption so much that communications would become far more vulnerable to hackers. The makers of the affected messaging apps, including Meta and Apple, stated that they would withdraw their services from the UK rather than use alternative, weaker forms of encryption that are easier for governments and hackers to unlock.

The Government has since stated that Ofcom will not use this power until it is technically feasible to do so. So services that use end-to-end encryption, such as WhatsApp – along with similar forms of encryption protecting financial services, password managers and cloud computing – remain available in the UK.

Will ‘tech bosses’ really be jailed for breaking the law?

As the law is so new, it’s hard to say. For proponents of the law, this provision is an incentive for user-to-user services to take their duties of care seriously. Opponents are concerned it will inadvertently lead to UK-based executives and employees blocking content that isn’t harmful, simply to avoid any prospect of prosecution.

So is the Online Safety Act fit for purpose?

It depends on who you ask.

“The NSPCC worked on the Online Safety Bill for over five years, so we were delighted to see it being approved and look forward to the Bill becoming law soon,” says Rani Govender, senior child safety online policy officer at the charity, speaking shortly before the Bill received Royal Assent.

The charity is, she says, fully aware that “we can never remove all risk to children’s safety and wellbeing online, but we can ensure tech companies step up and take an active role in protecting the children who use their services. The Online Safety Bill will introduce a vital regulatory framework to ensure that companies can no longer overlook the risks that their platforms pose to children through requiring them to meet new safety duties.”

Not everyone is so convinced, however. “I think the Online Safety Bill has been developed with the best of intentions, but the assumption that platforms, and platforms alone, can address harms that occur, is a naïve one,” says Andy Phippen, Professor of IT Ethics and Digital Rights at Bournemouth University. “I’d much rather see a broad range of measures, including effective education and people trained in online harms – supporting young people – rather than assuming technology alone can solve this.”

He worries, too, that attempts to protect children from harmful content could have unintended and negative consequences: “filtering based upon sexual keywords will also block content on sex education, gender and identity.”

James Baker, Campaigns Manager at Open Rights Group, which campaigns on digital rights issues, believes it is likely that some of the law’s intended outcomes will not be achieved: “for example, young people may find ways around age verification technology in order to be able to access adult content.”

It is, he says, “vital that parents don’t rely on this Bill for keeping their children safe. Parents can apply controls to individual devices to restrict content, but above all they need to keep talking to their children about what they are seeing online. We need to empower children to navigate the online world safely.”
