
Investigating AI-generated Child Sexual Abuse Material: contemporary issues and challenges, 13 November 2025

Our workshop provides a contemporary snapshot of the challenges facing investigators around AI-generated Child Sexual Abuse Material.

The Internet Watch Foundation and We Protect Global Alliance are at the forefront of threat assessment work around AI-generated Child Sexual Abuse Material and will provide unique insight into current trends and investigation challenges.

We Protect Global Alliance is a membership organisation that unites over 300 governments, the private sector, civil society and international organisations with the aim of protecting children from the growing threat of online sexual exploitation and abuse by driving forward collective global action.

‘Generative AI’, also called ‘Gen AI’, refers to AI technologies that can create new content, including text, images, audio and video. Today, many Gen AI tools are available to anyone with internet access; many are free to use and require no technical expertise.

While generative AI is a groundbreaking technology with transformative potential, there has been a rise in incidents where it has been used to create child sexual abuse material (CSAM). This poses a considerable challenge to police and other law enforcement agencies.


The Increasing Threat of AI-generated CSAM

Research conducted by the Internet Watch Foundation and We Protect Global Alliance has indicated an alarming rise in both the incidence and the seriousness of AI-generated CSAM.

The Internet Watch Foundation, whose expert analysts regularly examine thousands of photographic and AI-generated images each year, also reports the emergence of AI-generated child sexual abuse videos, highlighting the rapid advancement of AI models and generators.

Increasingly, deepfake videos shared in dark web forums take adult pornographic videos and add a child’s face using AI tools. 

There is also a noticeable increase in AI-generated child sexual abuse imagery on the clear web, including on commercial sites.

We Protect Global Alliance is concerned that children may not understand the risks of generative AI, making them vulnerable to predators who use AI to create fake interactions or manipulate their trust.

Offenders often target children in online gaming communities and virtual spaces, where AI-generated avatars or bots appear friendly and harmless, making it hard for children and young people to recognise potential dangers.  

Children with limited digital literacy or those in low-resource areas are even more vulnerable, as they may not fully grasp online safety or have the support to report exploitation. 

The Threat of AI Companion Apps

We will also hear from the Government’s Defence Science and Technology Laboratory (DSTL), which has conducted innovative research on the threat of AI Companion Apps and their potential use in CSAE crimes.

AI Companion Apps offer virtual friends, romantic partners or confidantes powered by artificial intelligence. One of the most popular is Replika. DSTL experts are concerned these apps could be misused by lone criminals and organised gangs. Forensic researcher Dan Drury will present the results of his research.


The day will include:

  • An outline of the current threat, globally and in the UK, its seriousness, and a prediction of the future threat, including other new and emerging technologies to be aware of

  • An overview of the global legislative landscape for responding to AI-generated CSAM

  • An overview of revised terminology guidelines for referring to CSEA crimes

  • A look at how AI-generated CSAM images are created and an insight into deepfake technology

  • Detection and investigation challenges: advice on how investigators can detect when the images are AI-generated

  • An overview of the criminality committed, the current investigatory powers available, and a consideration of future legislation

  • An insight into the offenders committing the crime with a focus on peer-to-peer child sexual abuse

  • A look at victims and the impact on them

  • An insight into the potential criminal use of AI Companion Apps

  • A look at the impact of such material on frontline responders (e.g. content moderators, police officers)


Held online via Teams.

Certificates of CPD available


Our Speakers

  • Hannah Swirsky, Head of Policy and Public Affairs, Internet Watch Foundation


  • Katie Burke, Policy Manager, We Protect Global Alliance


  • Dan Drury, Digital Forensics Researcher, Digital Threat and Technical Exploitation, Defence Science and Technology Laboratory (DSTL)

HOW TO BOOK

Cost: £199.50 + VAT (GBP) per delegate per workshop (LEA and Government Agency rate); £249.50 + VAT per delegate (Industry rate).

Group bookings: We offer various discounts for group bookings depending on numbers; please contact us for details.

Booking: Please send the delegate name(s), email address(es) and purchase order (made out to The Investigator) to booking@the-investigator.co.uk, or telephone +44 (0)844 660 8707 for further information.

Payment can be made by PayPal or debit/credit card. The meeting link will be sent out 7 days before the event.
