Viewpoint | 09 May 2024

Why Ofcom’s new measures are a step in the right direction for children’s online safety

Nicki Lyons, Vodafone UK’s Chief Corporate Affairs & Sustainability Officer, considers the potential impact of Ofcom’s new code of practices on the online safety of children.

Recently, I was fortunate enough to be asked to join the NSPCC Child Safety Online Taskforce. While my job at Vodafone – not to mention my other role as a parent to three children – meant I already had a fairly good handle on online safety issues, this taskforce has been a really eye-opening experience.

Volunteering alongside 10 other senior executives from a range of industries, I’ve heard – and shared – many different insights into children’s online safety. And, of course, a key topic of conversation in this circle has been the recently passed Online Safety Act.

The Act officially became law in 2023, and this week brought the latest development. Ofcom, the regulator responsible under the Act, has published a new set of proposed measures that social media and other online services must abide by to improve child safety online.

Online Safety Act 2023: Everything you need to know

The new law has been hailed by organisations such as the NSPCC, yet it continues to be hotly debated. We explain what it is and what it means for families.

From ‘what’ to ‘how’

These measures, which are open for public consultation until 17 July 2024, are made up of more than 40 practical steps for online services and platforms. Ranging from risk management reviews to age assurance checks, they aim to contribute to an online environment that is safer by design.

‘Safety by design’ is the idea that, rather than simply retrofitting safeguards after an issue has occurred, platforms will be consciously built to minimise harm through the thoughtful development of their processes and services.

For a more tangible example, picture your local supermarket. If a child attempted to buy an item that was age-restricted, they would face a number of barriers that are now so commonplace, we don’t even think twice about them. These range from security monitoring to automatic age checks at self-service checkouts and policies like Challenge 25.

Though it may not be a perfect analogy, the online landscape that children find themselves in today has historically lacked these built-in protective measures. That is what makes this latest development so noteworthy.

Because, while the Online Safety Act originally set out what needs to change when it comes to child safety online, these new codes of practice effectively explain how some of these developments will now take place.

Among other things, they cover the blocking of harmful content by recommender systems, improved complaint and reporting services, and new tools to help children protect themselves.

I’m therefore hopeful that these new measures will eventually have a material effect, whether that’s through more robust age verification, better content moderation or stricter controls over algorithm-based systems.

‘We need parental controls to protect our kids, but we also need to talk’

As Vodafone UK launches Digital Parenting Pro, a content controls hub for parents and carers, Nicki Lyons, Chief Corporate Affairs & Sustainability Officer, reflects on how resources like this can protect kids from unsuitable content and help families have more informed conversations around online safety.

Artificial intelligence, real-world impact

This last point on algorithms sits at the heart of much of this debate, so it is encouraging to see Ofcom specifically call out algorithmic systems as an issue that companies need to act upon.

This is a topic we have recently been raising awareness of through ‘The Rise of the Aggro-rithm’, launched by Vodafone UK to coincide with Safer Internet Day 2024.

The campaign, which highlighted how harmful AI algorithms target children, was based on startling research that revealed:

  • 69% of boys aged 11-14 have been exposed to online content that promotes misogyny and other harmful views.
  • 59% of boys are led to this content through innocent and unrelated searches due to AI algorithms.
  • 22% of parents have noticed gradual change over time in the language their sons use to talk about women and girls, while 70% of teachers have seen a rise in sexist language in the classroom.

Though these figures make for difficult reading, especially as a parent, they do help remind us of just how serious an issue child safety online remains.

How to talk to your teenage sons about online toxicity

If you’re a parent struggling to talk to your adolescent or preteen about the worrying internet content they’re consuming, these tips from a clinical psychologist could help.

Turning aims into actions

As a result, it is an issue that must stay high on the agenda for all of us – whether as a parent, a caregiver or someone who works in a field that can have a direct impact, which probably includes more of us than you might first assume.

That’s certainly one thing I’ve taken from being part of the NSPCC Child Safety Online Taskforce. Sitting alongside those from the media, advertising, legal services, education, tech and beyond, it’s clear that we all have a role to play in this fight.

Vodafone’s toolkit, created in conjunction with NSPCC, supports parents in having online safety conversations with their children. Meanwhile, anyone who wants to have their say in helping to create a safer internet can sign Global Action Plan’s petition to keep safety by design a top priority of the Online Safety Act.

Stay up to date with the latest news from Vodafone by following us on LinkedIn and Twitter/X, as well as signing up for News Centre website notifications.