How YouTube is failing children, and what it means for designing AI-moderated experiences

October 2022
Darren Menachemson

If your child uses YouTube without supervision, they have probably watched an animated video with Peppa Pig weeping as a dentist shoves a needle into her mouth, and then screaming as he extracts her teeth. Or the one where she is attacked by zombies, in the dark. Or the one where Frozen’s Elsa is burned alive. Or the one where a demon makes one of the Paw Patrol commit suicide.

Children are being exposed to harmful and exploitative content, and AI moderation can’t seem to stop it

These and similar videos, in their thousands, float around YouTube, often a few scrolls and two clicks away from some innocuous search keyword.

Other less violent but equally disturbing videos also target children, featuring bizarre, repetitive footage that has been strung together by algorithms rather than human content creators. These meaningless fever-dreams show eggs being unwrapped by a disembodied set of hands, or costumed superhero characters with unsettling faces marching across the screen.

They are off-putting, valueless, sometimes sickening, and intuitively not suitable for children.

Such videos, from the most violent to the lowest-quality, carefully game YouTube’s algorithm to target pre-schoolers and pre-adolescents. They have done so since 2014. They earn large sums of money for the perpetrators who upload them, generating millions of views from their target audience.

The problem has been called Elsagate, a neologism based on an early example involving (again) Frozen’s beloved character Elsa.

In 2017, YouTube tried to deal with the problem by updating its policies and removing, demonetising or age-restricting Elsagate content. In 2018, it removed almost 60 million videos that it considered “hateful”, spam or otherwise in violation of its terms of use. Of these, 279,600 were removed for “child safety” reasons. There can be little doubt that Google does not want these types of videos on its platform.

But as YouTube’s scale has ballooned, it has run into an ethical problem that goes to the heart of how it has designed its systems.

The bad, bad problem of scale

For YouTube, scale means hundreds of hours of footage, across 80 languages and 91 countries, being uploaded every minute. Given these numbers, it’s not surprising that YouTube relies on artificial intelligence and sophisticated analytics to implement ethical controls over content.

Such intelligent algorithms offer a powerful tool to deal with volumes that have gone beyond what a purely human workforce can manage. YouTube is not exceptional in this.

Specialised AIs and machine learning have been part of the private sector’s toolkit for a while, helping to keep us buying more goods or (in YouTube’s case) glued to our iPad screens as the software works out what makes us tick and what will keep us engaged.

Governments are also getting in on the action: whether it’s finding bad guys or targeting those in need, AIs are starting to emerge as a way of creating positive impact at scale.

For YouTube, AI has been somewhat effective. But for situations like Elsagate, it has not been nearly effective enough.

Controlling the bad stuff (and failing)

So YouTube upped its moderation ante.

YouTube, as far as we know, relies on a handful of methods to identify and demonetise, age-restrict or remove “bad” videos (a rough sketch of how these might combine follows the list below). These include:

  • Algorithms, which pick up and flag many but not all violations
  • User reporting of bad content, which for Elsagate means that by the time an adult sees and chooses to report a video, it has probably been watched by dozens to tens of thousands of young children
  • A human workforce addressing violations, reportedly numbering around 10,000 staff
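
To make the interplay between these signals concrete, here is a minimal, purely illustrative sketch in Python. None of it describes YouTube’s actual system: the class, thresholds and actions are assumptions invented for the example, and the point is only to show how an automated risk score, viewer reports and a human review queue might be combined into a single triage decision.

```python
from dataclasses import dataclass


# Illustrative only: YouTube has not published its moderation pipeline, so the
# field names, thresholds and actions below are hypothetical assumptions. The
# sketch shows how the three signals listed above (an automated classifier,
# user reports and a human review workforce) might be triaged together.
@dataclass
class Video:
    video_id: str
    classifier_risk: float   # 0.0-1.0 score from a hypothetical content classifier
    user_reports: int        # number of viewer reports received so far
    targets_children: bool   # e.g. inferred from titles, tags and other metadata


def triage(video: Video) -> str:
    """Return a moderation action for one uploaded video (toy logic)."""
    # High-confidence algorithmic detections are actioned immediately, because
    # user reports only arrive after children have already watched the video.
    if video.classifier_risk >= 0.9:
        return "remove"

    # Uncertain cases that target children, or that viewers have reported,
    # are escalated to the much smaller human review workforce.
    if video.targets_children and (video.classifier_risk >= 0.5 or video.user_reports > 0):
        return "human_review"

    # Remaining borderline content is age-restricted (it could equally be
    # demonetised) while it waits for more signal.
    if video.classifier_risk >= 0.5:
        return "age_restrict"

    return "allow"


if __name__ == "__main__":
    example = Video("abc123", classifier_risk=0.62, user_reports=3, targets_children=True)
    print(triage(example))   # -> human_review
```

Even in this toy version the structural weakness is visible: anything the classifier misses stays up, and reaches its audience, until a viewer report or a human reviewer catches up, which is exactly the gap Elsagate content exploits.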

So why was I able to open YouTube today and, in a single reasonably benign search (“spiderman”), find bad Elsagate content, and a sidebar packed with horrific Elsagate content targeting the very young? One of the nastier of these (though far from the worst), whose title is a mix of English and Cyrillic characters and whose metadata clearly targets the youngest children, shows realistic CGI dinosaurs stalking an upsettingly fearful preadolescent boy. The video has over 10 million views.

The answer is that these measures are a finger in the dyke of a problem woven into the foundations of YouTube’s business model. YouTube has created a platform that prioritises getting as much content up as possible, as quickly as possible.

While it retains a human workforce and a user tip-off function, it’s the AI that must bear the brunt of the load given the scale. With this reliance on AI moderation, it has created the parameters for the Elsagate catastrophe.

The age of digital abrasion

How much harm will society accept from digital innovation in exchange for public and commercial benefit? What is the role of government in regulating this space, versus the role of self-regulation?

It’s hard to say, but there is no doubt that Elsagate is a live, continuing and seemingly insoluble example where real damage is being done to real children. Self-regulation has failed here.

We are in an age of what I have named “digital abrasion”, where what digital technology makes possible, what’s ethical, and what’s accepted, all come into constant conflict with each other.

Video websites like YouTube enable the world to upload inconceivable volumes of content — so much so that it’s beyond hard to consistently discriminate the harmful from the benign. Solutions brought to the table seem either ineffectual (like AI moderation) or counter-trend (like slowing down video content upload to a human-reviewable level).

At some point, we will need to ask ourselves what world we are designing for. Is it one of inevitable surrender to the seductions of technological possibility, where we embrace the glut, harm and all?

Or do we want a world where we confront our addiction to intoxicating technology and try to design the world we actually want to live in? And ‘design’ is the right word here. This is not just a technology problem, or a social problem, or a regulatory problem, or a criminal problem.

Taking on the digital ethics challenge

This is a design ethics problem. And like so many design ethics problems, it sits across the technology, social, regulatory and law enforcement dimensions.

We won’t solve it with an algorithm, or with an extra 2,700 content reviewers clogging up YouTube’s break-rooms. Any real solution will need to pursue a vision that is based on the aspirations and values we hold as a society, rather than being at the mercy of an “Up Next” list of videos developed using a set of values that, as a society, we never granted license to.

We need to think big, and target not just the problem, but the models that are causing it.

As society decides whether we must hold ourselves accountable to a higher vision, Elsagate videos quietly continue to burrow into children’s viewing experience. That’s the reality, and it’s happening again, somewhere close, tonight.

Darren works to create values-based digital futures for society. He is a global partner in ThinkPlace, a world-leading innovation company that works on public good challenges. Darren serves as ThinkPlace’s Global Chief Digital Officer, and is Executive Director of the ThinkPlace Foundation. Previously, he has worked as an official in the Australian Government, shaping the design of service delivery and compliance management. Darren is a non-executive director of Rise Above, which supports people in Australia’s Capital Region who have received a cancer diagnosis.
