Project Details
Description
Our key hypothesis is the following: ‘Given that XR environments, and first and foremost AI, constitute a new and evolving field of technology, we anticipate gaps in legislation across the Five Eyes nations and India on the matter of accountability over AI-generated CSAM.’
Following that, the overarching questions that this research aims to address are:
• Does existing criminal legislation relating to CSEA/CSAM allow for criminal liability for AI-generated CSAM? This may concern software creators; holders of datasets that may be used to train AI; people who use AI software to create CSAM; people who access AI-generated CSAM; or any other party.
• Is there any other legislation that provides mechanisms of accountability with regard to AI-generated CSAM (including but not limited to civil law, industry codes or standards)?
• What types of punishment does legislation include as a response to the above?
• Are there any proposals for law reform to strengthen legislative provisions for accountability for AI-generated CSAM?
• Has there been any caselaw across the Five Eyes countries or India that has considered criminal liability or any other form of accountability for AI-generated CSAM offences?
Our aim is to:
• Examine legislation and caselaw across the Five Eyes countries and India to identify the strengths of these regulatory contexts with regard to countering CSAM created via generative AI
• Examine the regulatory weaknesses and gaps that may hinder effective safeguarding of children and prevention of CSEA and CSAM production, dissemination or possession via generative AI
• Inform legislation to ensure it is future-proofed in view of the increasing use of the content-generating capabilities of AI.
Key findings
With regard to Australia, the same issues were examined at both a national and a state and territory level, whereas in New Zealand they were examined only at the national level, as no relevant legislation exists on a devolved basis. It is worth noting that Australia has also enacted the Online Safety Act 2021 (OSA), which imposes certain duties on online service providers with regard to protecting Australians online, in particular children and vulnerable adult users. The eSafety Commissioner was established in Australia in 2015 (then the ‘Office of the Children’s eSafety Commissioner’) and serves as an independent regulator for online safety, with powers to require the removal of unlawful and seriously harmful material, implement systemic regulatory schemes and educate people about online safety risks. Under the OSA, there are currently enforceable codes and standards in force which apply to AI-generated CSAM, with civil penalties for services that fail to comply. In particular, the ‘Designated Internet Service’ Standard applies to generative AI services, as well as model distribution services.
The OSA was independently reviewed in 2024. The Review examined the operation and effectiveness of the Act and considered whether additional protections are needed to combat online harms, including those posed by emerging technologies (Minister for Communications, 2025). The Final Report of the Review was tabled in Parliament in February 2025.
The Australian Government has also recently conducted consultations on the introduction of mandatory guardrails for AI in high-risk settings, including a proposed guardrail to ensure that generative AI training data does not contain CSAM (Australian Government - Department of Industry, Science & Resources, 2024).
No such legislation exists in New Zealand, although there are ongoing discussions and legal reform suggestions around the potential introduction of similar legislation there.
In both Australia and New Zealand, existing definitions of CSAM, or similar terminology used in criminal legislation, are broad enough to capture AI-generated CSAM. As a result, and despite limited case law on the matter due to the emerging character of gen-AI technologies, sentencing decisions have emerged in the Australian states of Victoria and Tasmania involving offenders who produced gen-AI CSAM. In New Zealand, no cases have yet been identified in which offenders have been sentenced for offences involving AI-generated CSAM; however, press reports suggest that offenders have been charged in relation to such material. In addition, there are reports of the New Zealand Customs Service seizing gen-AI CSAM, suggesting that it considers it has jurisdiction to do so. No cases have been identified in Australia or New Zealand in which AI software creators or holders of datasets used to train AI have been held criminally liable in relation to the production of CSAM using their platforms, or have faced any other such charges.
In New Zealand, certain pieces of legislation (e.g., the Crimes Act 1961 and the Harmful Digital Communications Act 2015) do not appear to apply in cases of gen-AI CSAM that portrays purely fictitious children. This is to some extent expected, as both laws require harm to be inflicted upon an identifiable natural person, which is not the case where AI-generated CSAM contains purely fictitious children.
In both Australia and New Zealand, there are no pending reforms to expand criminal accountability in relation to gen-AI CSAM to AI software creators and dataset holders. Given that the definitions of CSAM in existing criminal legislation appear broad enough to capture AI-generated material, this is not surprising.
In the United States of America, the regulatory framework consists of federal laws and state-based laws. Federal CSAM statutes, together with case law, criminalise several categories of harmful material; still, significant ambiguities persist. Federal laws are relatively robust, but there is a gap with regard to the criminalisation of artificial CSAM that depicts purely fictitious children. Civil remedies, although significant, are limited in scope. Copyright and consumer protection laws offer some avenues for redress, but they are also limited.
Prosecutors typically require concrete evidence to prosecute, such as incriminating communications or attempts to sell or trade material, which are often hard to obtain. This challenge is compounded by more advanced AI models that generate hyper-realistic CSAM without training on authentic abuse imagery. As a result, even if the way AI models are trained becomes regulated, advanced models that can create realistic CSAM without being trained on such imagery would evade that regulation.
Drawing on copyright law, platforms and developers can be held liable if they knowingly contribute to the sharing of harmful content. However, online platforms are protected from civil liability for user-generated content, complicating efforts to hold them accountable for hosting AI-generated CSAM. Despite efforts to change the law in this area, balancing platform liability with the protection of free speech remains a major challenge.
The legal landscape is even more fragmented on a state level due to several outdated pieces of legislation around so-called ‘child pornography’, which fail to address newer forms of technology-facilitated child sexual abuse. State-level civil remedies are often inadequate, leaving gaps in accountability for users, developers, distributors and third-party beneficiaries.
On 16 April 2024, H.R. 8005, the Child Exploitation and Artificial Intelligence Expert Commission Act of 2024, was introduced to address the creation of child sexual abuse material (CSAM) using artificial intelligence (AI). This legislation would establish a commission to develop a legal framework that would assist law enforcement in preventing, detecting, and prosecuting AI-generated crimes against children.
In Canada, there is a distinction between federal law and province-based law. The federal Criminal Code lacks specific prohibitions against AI-generated CSAM; still, the relevant sections of the Criminal Code have been interpreted widely by the Supreme Court of Canada to cover several types of harmful material. Two exceptions remain: the first is for material that has been created only for personal use, and the second is for works of art that lack intent to exploit children. Canadian federal law criminalises the non-consensual distribution of intimate images; however, whether these provisions apply to AI-generated CSAM is uncertain. Enforcement is the responsibility of provincial agencies, and civil remedies for victims vary widely across provinces, creating a patchwork of protections in which access to relief depends on the victim’s location.
Privacy laws in Canada offer some avenues for assistance, but they are not tailored to the specific harms associated with AI-generated CSAM. Copyright law offers a potential, although complex, avenue for addressing AI-generated CSAM.
Lastly, in Canada, a significant proposed law reform was the Online Harms Act. If passed, this Act would have created a new regulatory framework requiring online platforms to act responsibly to prevent and mitigate the risk of harm to children on their platforms. Under the Act, online platforms would have had a duty to implement age-appropriate design features and to make content that sexually victimises a child or re-victimises a survivor inaccessible, including via the use of technology, to prevent CSAM from being uploaded in the first instance. A new Digital Safety Commission would have overseen compliance and been charged with the authority to penalise online platforms that fail to act responsibly. Accordingly, the Act would have created a legally binding framework for safe and responsible AI development and deployment that could potentially apply to restrict or penalise content that would otherwise be legal, but that poses a substantial risk of sexual exploitation or revictimisation of a child. However, prorogation of the Parliament of Canada ended the current parliamentary session; as a result, all proceedings before Parliament ended, and bills that had not received Royal Assent, including the Online Harms Act, were ‘entirely terminated’ (Lexology, 2025).
Status: Active
Effective start/end date: 15/01/24 → 31/12/25