Understanding the Risks of AI-Generated Child Sexual Abuse Material

2025

AI technologies are evolving fast — and so are the ways they can be misused. A recent report from Stanford University explores the growing issue of AI-generated child sexual abuse material (AI CSAM), drawing on the perspectives of educators, platform staff, law enforcement, legislators, and victims.

The report highlights alarming developments, such as the rise of “nudify” apps* — tools that use AI to create fake explicit images of individuals, often without their consent. It sheds light on how these apps are being used among peers, especially in school environments, where preventive education is often lacking and responses are inconsistent. Poor handling of such cases can further traumatise young victims.

Beyond schools, the report raises concerns about the broader online ecosystem. Many platforms report abusive content without indicating whether it is AI-generated, leaving law enforcement to fill the gap. AI-generated CSAM, the report argues, can be just as harmful as real imagery — not only for those depicted, but also for those tasked with detecting and removing it.

*A “nudify” app is a type of software or AI tool that can take an image of a person — often fully clothed — and generate a fake version of that image in which the person appears nude. These apps use generative AI to manipulate photos in a realistic way, often without the person’s knowledge or consent.