Deepfake Harassment: How explicit AI is the new frontier in digital abuse in schools
- SLO Communications
- Mar 19
- 3 min read
In the UK last week, the Association of School and College Leaders (ASCL) announced that teachers are reporting increased bullying, abuse and the malicious use of explicit AI images, also known as deepfakes, against pupils and staff on social media.
Last month in Australia, 60 victims were identified after an unknown number of misogynistic, sexually explicit, AI-generated images of female students at Gladstone Park Secondary College were circulated online. Two students have been suspended and an investigation is pending. This follows a widely reported, first-of-its-kind story from 2023, in which over 20 young girls in Almendralejo, Southern Spain reported receiving AI-generated naked images of themselves. Some students went on to blackmail the victims, threatening to share the images on social media unless a sum of money was paid.
The Role of AI and Technology
These incidents reflect a worrying and growing trend of explicit deepfake images. Deepfakes are realistic-looking images or videos that manipulate someone’s likeness to create fake, harmful content, such as nude photos. Although the victims never posed for these pictures, the images look completely real. They are built from existing photos, often taken from the victims' social media accounts, which are then altered using an artificial intelligence application. Blackmail for financial gain is also a regular feature of this online harassment.
In the incident in Spain, it was confirmed that the hyper-realistic artificial intelligence creations were made with the freely available ClothOff app. Promoted with the slogan "Undress anybody, undress girls for free", the app allowed users to remove the clothes from anyone appearing in their phone's picture gallery. It costs €10 to create 25 naked images.
Impact on the victims
This trend has serious consequences for its victims. Although the explicit images of the girls are not real, the parents of the Spanish schoolgirls say their daughters' distress at seeing the pictures is very real indeed, causing significant harm to their psychological and mental wellbeing. This is in line with empirical research showing that victims of deepfake bullying can suffer emotional distress, damaged reputations, anxiety or depression.
What are the consequences?
As AI technology becomes more accessible, it’s important to understand its potential for harm, especially in the hands of young people who may not fully grasp the consequences. There are stark consequences for the perpetrators, which include not only the creator of the images but also anyone who willingly engages with and shares the content.

What is being done and what can we do?
To address AI deepfakes being used as a form of bullying in schools, we need a combined effort from students, teachers, parents, and lawmakers. Schools must teach students about digital responsibility and the harmful effects of deepfakes, and many are now trying to raise awareness of the risks and teach digital ethics. Parents can talk to their kids about online behaviour and privacy, and can stay up to date themselves on online behaviours, trends and terminology. Staying up to date with us here at Safer Lives Online and subscribing to our updates is a good place to start!
Laws must be updated to protect against harmful deepfakes, with consequences for those who create or share them. The United Nations recently published a report outlining the issue. Many jurisdictions, including some American states, are working to develop laws but are finding challenges with categorisation and definitions. Elsewhere, such as in the UK, deepfakes fall under intimate image abuse laws and, where the victims are underage, even under child sexual abuse image laws.
Social media platforms must address this issue by monitoring and removing harmful content quickly. By working together, we can reduce the impact of this dangerous form of digital bullying.