House debates
Monday, 27 October 2025
Private Members' Business
Artificial Intelligence
5:41 pm
Alison Penfold (Lyne, National Party)
Artificial intelligence is well and truly here. It's reshaping the world we live in, but Australians are worried. People in my electorate are worried. Trust in AI is low. With all due respect to the mover of this motion in lauding how the Albanese government is preparing the community and business and government officials to utilise and take advantage of AI, the Albanese government has not stepped up to protect vulnerable people and our institutions from its use. It's being happily used in health care, finance, defence, logistics, construction, retail and government, but the crims, the con artists and the foreign actors have it too. And big tech isn't necessarily playing with a straight bat either.
I'm pleased to note the government's decision to enshrine crucial copyright protections for musicians, writers, journalists and artists, which were threatened by big tech and their AI systems. I'd particularly like to acknowledge and thank Holly Rankin from my electorate, known to many as ARIA Award nominated Jack River, for her advocacy and leadership on this issue.
In researching this topic, I came across numerous detailed papers and submissions to parliamentary inquiries published by Good Ancestors, a forward-thinking charity which believes that AI is not just another technology but one that could change almost every aspect of our lives. It proposes some sensible reforms to address AI harm, including the introduction of an AI act and the launch of an AI safety institute. In its submission to a New South Wales upper house inquiry, it noted that there are numerous threats and recommended that government list and restrict toxic AI products, like undress AIs, unpredictable AIs, autonomous AIs and rogue AIs. It also suggested AI developers should be liable if their AIs engage in harmful, unpredicted behaviours. For example, AI technology has not just advised but instructed people on how to take their own lives. This is surely one example where an Australian developer of AI that enabled such a response should be held severely accountable.
I do see, however, the many benefits of AI, but I also see the risks and how government may check them. In essence, we need to consider how this all-encompassing technology is safely accommodated in our lives, including ensuring that we have a choice as to whether we use it or not. While AI can be used to put the finishing touches to a person's letter, proposal, project, policy or body of work, it's really a matter for each person to develop their own position on their AI use. Ideally, in a liberal democracy, that is not an area for government to regulate. But then there is the use of AI for villainous, deplorable or senseless purposes.
This is where the government must step in. The dissenting report from senators McGrath and Reynolds, the coalition members of the Select Committee on Adopting Artificial Intelligence, makes for chilling reading: AI presents an unprecedented threat to Australian cybersecurity and privacy; due to the recent exponential improvements in AI capability and the unprecedented level of publicly available personal information, foreign actors can now target our networks, systems and people; and existing laws do not adequately prevent AI-facilitated harms before they occur, nor provide an adequate response after they do. It concluded, in part:
The Federal Government—
the Albanese government—
has neglected its responsibility to deal with any of the threats that the exponential growth of the AI industry poses to the Australian people and their entities.
I want to note, however, that the Department of Industry, Science and Resources has done some good work in its Voluntary AI Safety Standard, which gives guidance on how to safely and responsibly use AI and outlines what legislation may look like to manage its improper use. The department has identified areas for mandatory treatment, including how to manage data quality, identify and mitigate risks, ensure regulatory compliance, enable human intervention and respond to people impacted by AI harm. It's a massive body of work, but government needs to be ahead of the game not behind it.
While the Albanese government is providing some means to upskill people in the use of AI, it has done nowhere near enough to protect those same people. This motion suggests that the government is more focused on its PR than on protecting the public. Big tech and AI developers also need to step up. If business wants to use AI at scale, it needs to go beyond any regulatory responsibility in AI development; it must obtain society's explicit approval to deploy it. That means AI needs to earn its social licence, and fast.