Regulating AI and sexually explicit content


Dear Minister Solomon,

We, the undersigned, are writing to express our profound concern about the proliferation of sexually explicit AI-generated content, specifically through tools like xAI’s “Grok”.

With the rise of Tech-Facilitated Gender-Based Violence (TFGBV), the recent use of Grok to generate non-consensual images of women and children is not merely a glitch or the work of a few bad users; it is a predictable harm built into the system – a harm that will intensify if the government maintains the status quo.

While the Office of the Privacy Commissioner (OPC) has taken a necessary step by expanding its investigation into X Corp and xAI, Canada’s current regulatory response fails to provide critical guardrails. Whereas the EU, UK, and Brazil have moved toward firm enforcement and substantial fines, Canada’s weakened legal framework prioritizes the interests of big tech over people’s safety.

The Current Legislative Gap

In cases of TFGBV, technology becomes a harmful tool used to control and intimidate, and AI is intensifying this form of violence. From cyberstalking to sexualized deepfakes, these tools are being used to target women, girls, 2SLGBTQIA+ people, and public-facing professionals. Despite this, Canada lacks overarching legislation to regulate AI models. With privacy laws written before the rise of social media, victims have little recourse beyond reporting to police, who are often limited by jurisdictional constraints, or filing a complaint directly with the very companies facilitating the abuse.

Recommendations for Pending Legislation

As the federal government considers the Online Harms Act (Bill C-16) and mandatory age verification (Bill S-209), we urge decision-makers to include the following protections:

Robust Legislative & Criminal Frameworks

  • Criminalize Non-Consensual Deepfakes: We support swiftly amending the Criminal Code to make the creation and distribution of non-consensual deepfake imagery a criminal offence.

  • Strengthen the Online Harms Act: Urgently pass comprehensive safety legislation that mandates a 24-hour takedown window for non-consensual sexual content and requires platforms to release public audits on the types of abusive content their tools produce, ensuring full transparency regarding which safety protocols were triggered and where they failed.

Accountability for AI Developers & Platforms

  • Mandatory Safety-by-Design: Legislation must force platforms to implement safety mechanisms before deployment, ensuring they cannot produce dangerous, violent, or non-consensual sexual content. This legislation should include algorithmic audits that allow government to review pre-release and post-release reports on safety protocols.

  • Accountability and Penalties: There must be bold action and serious financial penalties for non-compliance, comparable to those under the EU’s Digital Services Act, to ensure tech companies are held accountable for the tools they profit from. Mandatory audits of corporate practices and published transparency reports will increase accountability.

  • End “Bad User” Framing: Shift responsibility from the individual user to the platform, treating predictable system harms as a failure of the developer.

Data Privacy & Survivor Rights

  • Right to Deletion: We support forthcoming privacy legislation that provides users with the legal right to demand the immediate removal of personal data used to train AI models, especially when that data is used for sexualized content.

  • Consent-Centred Frameworks: Amend federal privacy law (PIPEDA) to ensure that using personal information for AI training requires explicit, valid consent.

Support for Survivors & Advocates

  • Survivor-Centric Support: Federal funding must be directed toward trauma-informed responses and specialized support for survivors navigating TFGBV.

  • Support Community Organizations, Including Women’s Organizations, Already Doing This Work: Provide direct funding and support for organizations at the forefront of fighting TFGBV. Education on AI that builds a clear understanding of its function and harms is needed to ensure transparency.

The Beijing Platform for Action set a global standard for women’s safety that we cannot afford to abandon in the current digital age. It is time for Canada to transition from inquiry to decisive legislation and implement a system-wide safety framework that treats gender-based violence as a core safety issue.

We look forward to discussing how the government will support the development of a robust AI safety framework that protects all workers and their families from digital violence.

Sincerely,

Battered Women’s Support Services
Canadian Centre for Women’s Empowerment
Canadian Council of Muslim Women
DAWN Canada
Ending Sexual Violence Association of Canada
La Fédération des femmes du Québec
LEAF – Women’s Legal Education and Action Fund
Unifor Canada
WomenatthecentrE
Women’s Shelters Canada
YWCA Canada