AI and Deepfake Abuse on Women

By Maria Onatt

Artificial Intelligence (AI) has quickly become one of the most transformative tools of our time. Many of us use it daily, from asking automated software or chatbots for movie recommendations to having it write our emails. The age of AI creates many new opportunities, but in the wrong hands it can cause enormous damage and contribute to a new form of gender-based violence. One of the most horrifying and ignored consequences of AI is deepfake abuse, and women and girls are the group disproportionately affected by it. What remains deeply troubling is the widespread silence of mainstream media on this issue.

What are deepfakes?
Deepfakes are images or videos that appear real but are entirely fabricated: the depicted situation never happened. They are manipulated images and videos created using AI tools, and in the future such images may come to be called "sexual digital forgeries" instead. When it comes to sexually explicit content, there are two types of deepfakes:

AI inserts a person’s face onto an existing pornographic video or image.

AI can also create entirely new sexually explicit content from just a few pictures of a person (taken from social media, for instance) and a text prompt.

The dangers of deepfakes
One particularly disturbing development involves so-called "Nudify Apps", which use AI to undress people in pictures: a clothed image of someone can be transformed into a nude. These apps are advertised openly on mainstream social media platforms used by large numbers of children and teenagers, including Instagram, TikTok, Snapchat and X.

The worst-case scenario played out in 2023 in a small village in Spain, where 12-year-old boys used these nudifying apps to create and spread nude pictures of their female classmates. After this case gained worldwide attention, young girls from other countries came forward saying that the exact same thing had happened to them, yet there were no consequences for either the perpetrators or the creators of these apps.

Impact on young women and girls
The lack of response from the criminal justice system, or the media, shows that this case is far from isolated; it is a symptom of a larger, deeply rooted problem. Deepfake abuse affects women and girls at far higher rates than men, and individuals from the LGBTQI+ community are also disproportionately affected by this form of abuse.

The legality of deepfakes
The legal system is still trying to catch up with this new technology. One of the main challenges is the real versus fake binary. The owner of one of the most successful deepfake pornography sites argues that the content is not real and therefore consent is not needed.

However, victims of deepfake abuse counter: if it is not real, why does it feel so real and have such consequences for their lives? Tech companies including Microsoft have attempted to make it mandatory to use a watermark if an image or video is AI-generated. Sadly, what seems like a solution, such as watermarks and labels, does not in fact prevent the harm; it merely acknowledges it after the fact.

What is even more disturbing is how realistic AI avatars are being used in gaming. Players can create avatars that resemble ex-girlfriends or celebrities and then simulate violence and sexual acts on them as part of the video game. The emotional and psychological implications are serious, yet the practice is still not treated as a serious problem, despite clear evidence of its harm.

Not just sexually explicit content
Deepfake abuse does not always have to be of a sexual nature. For example, if an image of a woman who normally wears a headscarf is altered to show her without it at a public gathering, especially together with other men, that can have enormous personal, professional and cultural consequences. This is one of many examples that emphasize the need for intersectional and culturally sensitive approaches to regulation and prevention. The Muslim Women Research Centre AMINA has worked on a campaign focusing on how deepfake images affect Muslim and Black & Minority Ethnic women, whose experiences may differ from the general understanding of intimate image abuse.

Legislative action matters!
The UK was the first country to propose criminalizing the creation of deepfake content. As an immediate reaction, access to websites containing deepfake content was blocked in that region. This demonstrates how policy can have direct impact, but it needs to be bold and inclusive to be as effective as possible. The responsibility falls on civil society, governments and tech platforms to act as the main barriers stopping this from becoming a systemic issue that harms even more women and girls. As AI continues to evolve, we must stay vigilant and become more aware of this new form of gender-based violence. It is not just about protecting the pictures of girls, women and people from other marginalized communities, but about defending dignity and safety in the digital age and creating a safe space for all in an increasingly online world.

Further reading:

The State of Deepfakes: Landscape, Threats, and Impact—Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen.

Criminalising the Creation and Solicitation of Sexually Explicit Deepfakes—Dr. Clare McGlynn.

Status of Young Women in Scotland 2024-25—focusing on young women's human rights, with findings about online safety & AI.

Maria Onatt is a 25-year-old ecofeminist who actively advocates for a safe and fair future for all women and girls.

Source: youngwomenscot.org, May 28, 2025