House debates

Monday, 28 July 2025

Bills

Criminal Code Amendment (Using Technology to Generate Child Abuse Material) Bill 2025; Second Reading

10:18 am

Kate Chaney (Curtin, Independent):

I move:

That this bill be now read a second time.

Overview

This is a bill to make it an offence to download child-sexual-abuse-material generators.

Right now, it is possible to access and download these sickening technologies from websites, app stores and the dark web.

The bill focuses on a particular type of artificial intelligence tool designed specifically to create unlimited, on-demand material depicting the sexual abuse or exploitation of children, often tailored to specific preferences. Images can be deleted before detection, and the proliferation of this material makes it harder for law enforcement to identify actual child victims.

Possessing a single image is already illegal. But the capacity to infinitely produce, delete, and reproduce abusive images through AI tools represents a new and urgent threat.

This is by no means the only legislation that needs to be passed on this topic, but it would plug an immediate hole in the Criminal Code.

Context – AI Generally

Artificial intelligence has many potential benefits—for much-needed productivity, innovation and efficiency, and no doubt benefits that we can't even begin to imagine.

Even within the sphere of child safety, AI can enhance monitoring and reporting tools, ease the burden on frontline responders and help to locate child victims faster.

But, like any new technology, those benefits are accompanied by new risks.

Some of the risks are about exacerbating harms we already understand—about privacy, scams, disinformation and enabling further harm on issues like child sexual exploitation and abuse.

Beyond the risks we already understand, AI potentially opens up new categories of harm that we are only just beginning to get our heads around.

Regulating AI

Regulating artificial intelligence is very challenging, and we're still working through what should be policed, how and by whom. Technology is developing so rapidly that it's hard to find a workable current definition of AI for regulatory purposes.

Assigning responsibility between developers, deployers and users is complex, and AI is a global issue—it crosses jurisdictional borders.

In the coming years, we will need to legislate more broadly for transparency, safety and responsibility, to ensure we can reap the benefits of AI without blindly accepting the downsides. This requires a holistic approach and must be an urgent priority for this parliament.

In the meantime, when so many parents are concerned about what role government should be playing to protect their children, we must plug the most urgent holes in our existing legislative framework as they emerge.

Purpose of the bill

This bill plugs an urgent and alarming hole—AI technologies designed specifically to generate child abuse material.

They're available on the dark web and on app stores, and, as ABC reported:

Intelligence company Graphika reported late in 2023 that non-consensual explicit generative AI tools had moved from being available on niche internet forums into a "scaled" online and monetised business.

It found there had been more than 24 million unique visits to the websites of 34 of these tools, and links to access them had risen sharply across platforms like Reddit, X and Telegram.

This bill simply creates a new offence under the Criminal Code to prevent people from downloading these tools. It also creates an offence of downloading data for the purpose of generating child sexual abuse material using AI tools.

Rationale

These tools need to be specifically addressed for a few reasons:

There is no good reason for the existence of these AI tools and plenty of good reasons that they should not be downloaded by Australians. The only defences proposed in the legislation relate to the use of these tools by law enforcement officers and for research purposes.

Similar legislation is being introduced in the United Kingdom and the EU, as regulators struggle to keep up with technological developments.

Broader changes needed

This issue has come to my attention through the work of the International Centre for Missing and Exploited Children (or ICMEC) Australia, an organisation that strives to end online facilitated child exploitation and abuse.

In the last couple of weeks, ICMEC Australia hosted a national roundtable on child safety in the age of AI, and, as well as identifying this legislative gap, the roundtable identified a range of reform priorities needed to 'keep children safe'.

A whole-of-system response is required, with more work on prevention and education, and greater responsibility on technology companies for detection and prevention, backed by safety by design and the promised duty of care.

Government action

We need an urgent response to this from the government. While a holistic view is important, we need to plug the holes in the current legislation to deal with these emerging harms.

The government has not yet responded to last year's statutory review of the Online Safety Act, and acknowledged in 2023 that 'existing laws likely do not adequately prevent AI facilitated harms before they occur'. AI-generated material is also not covered under the current five-year action plan under the National Strategy to Prevent and Respond to Child Sexual Abuse, which expires next year.

Conclusion

There is plenty of work to be done to make AI take-up safe and consistent with our shared values.

This bill addresses a very specific harm that could easily be addressed within the framework of our existing Criminal Code. I urge the government to consider this amendment with urgency to protect Australian children from harm.
