House debates

Monday, 27 October 2025

Private Members' Business

Artificial Intelligence

5:26 pm

Jo Briskey (Maribyrnong, Australian Labor Party)

I move:

That this House:

(1) acknowledges the enormous changes that artificial intelligence (AI) will create for Australia and Australians;

(2) welcomes the Government's commitment to ensuring that AI:

(a) contributes positively to a Future Made in Australia;

(b) delivers benefits to all Australians, not just a small number of individuals and businesses; and

(c) is developed, deployed and used in a way that keeps Australians safe; and

(3) recognises the work being led by the Government to ensure that Australians are ready to take advantage of AI, including more than:

(a) $47 million for the Next Generation Graduate program;

(b) one million free 'introduction to AI' scholarships delivered through TAFE NSW to give Australians the fundamental skills to adopt and use AI; and

(c) $17 million to create four AI Adopt Centres, which are supporting businesses across the country to use responsible AI enabled services to enhance their businesses.

Artificial intelligence is changing the way we communicate, work and live. Just as with every major shift in our economy—from the industrial age to the digital one—how we manage this change will determine who it works for. Handled well, AI can lift productivity, open new markets and make work safer and more rewarding. Handled poorly, it can leave small businesses struggling to keep up and working people without a fair say in their future. That's why the Albanese Labor government is determined that Australia's AI future will have people, particularly workers, front and centre. If technology is going to transform our society, it must do so in a way that strengthens it, not undermines it. AI could add over $100 billion a year to our economy and create tens of thousands of jobs by 2030, but those gains must be shared fairly between workers, small business and the communities that power our nation. Labor is backing the opportunities through the $15 billion National Reconstruction Fund, including $1 billion for critical technologies like AI, and a new network of AI Adopt Centres to help small businesses innovate responsibly, ensuring that benefits reach everyone, not just big tech.

We're also working with the unions, through the AI employment and workplace relations working group, to make sure technology involves workers and evolves with them, not against them. Unions like the Finance Sector Union and the Media, Entertainment and Arts Alliance are leading the way, partnering with employers to address AI safety through consultation, transparency and upskilling. This collaboration is already delivering real results. Just today, the Attorney-General announced that our government will protect Australia's creative industries by ruling out any carve-outs for the tech sector when it comes to applying copyright laws to training AI. That's how progress should happen—through partnership, not imposition.

Last sitting week, I met with Safe Steps, Victoria's family and domestic violence support service. They spoke about how new technologies could help frontline workers respond faster and more effectively to women and families in crisis while keeping empathy at the heart of every response. They spoke of Kids Help Phone in Toronto, Canada's national youth mental health helpline, where AI is already being used to save lives. Their system triages messages, identifying high-risk cases, such as when a young person expresses suicidal thoughts, and moving them up the queue. It automatically transcribes and summarises notes, cutting hours of admin and allowing counsellors to focus on care. Importantly it learns from the caller's own words, allowing responders to use language that feels familiar and respectful. It's not replacing people; it's empowering them to do their work better. That's the model we should strive for: technology that strengthens human connection, not replaces it.

The potential for AI to transform our society and economy cannot be overstated. Already 68 per cent of Australian businesses use AI, yet we still lag behind the United States and China in training employees and employing skilled AI workers. If we don't understand the technology shaping our economy, we can't chart our own course as a sovereign nation. That's why Labor is building capability at home and strengthening cooperation abroad, developing skills locally while ensuring AI serves people, not power. Australia must shape its own future, not be swept up in an AI arms race.

Through one million free TAFE and university places and $47 million for the Next Generation Graduates Program, we're giving Australians the skills to guide this technology responsibly. Through the Industry Growth Program, we're helping small businesses innovate safely with strong data and privacy protections, because technology without trust won't succeed. Australians deserve confidence that their information is secure, their rights are protected and AI is used fairly and transparently.

When Labor talks about technology, we don't talk about disruption for its own sake. We talk about progress, the kind that lifts people up and doesn't leave people behind. AI isn't the future; it's here now. The question isn't whether it will reshape our economy but whether working people and small businesses will have the power and protection within it. With experts leading, workers trained, small businesses supported and a government that puts fairness first, Australia can make AI a tool for good—one that builds opportunity, strengthens work and delivers a fairer, smarter future for us all.

Colin Boyce (Flynn, Liberal National Party)

Do I have a seconder for the motion?

Tania Lawrence (Hasluck, Australian Labor Party)

I second the motion and reserve my right to speak.

5:31 pm

Tom Venning (Grey, Liberal Party)

Artificial intelligence—specifically, generative artificial intelligence—presents an enormous challenge for businesses, students, families, governments and departments. We welcome the notion that AI should contribute to the future of Australia's economy and that it should be deployed safely and for the benefit of all, but small funding announcements are not enough. Labor is failing to manage the transition to AI effectively. They are behind the curve. Their complacency is putting our security, our economy and our children at risk. Before entering public life, I wore two hats. I was a farmer, but I was also a strategy consultant. Working at NAB, I helped roll out their generative AI programs. I saw its power firsthand, and I learned how efficient it can be, but this experience also taught me something else. AI is a double-edged sword, and one that can easily cut into cybersecurity.

The Annual Cyber Threat Report 2024-2025 from the Australian Signals Directorate should be a wake-up call for this government. In FY24-25, under Labor, the Australian Cyber Security Hotline received over 42,500 calls, a 16 per cent increase from the previous year. The Australian Cyber Security Centre also responded to over 1,200 cybersecurity incidents, an 11 per cent increase. During FY24-25, ASD notified entities more than 1,700 times about malicious cyber activity, an 83 per cent increase from last year. Australians are being relentlessly targeted by sophisticated internet scams, and AI is now the weapon of choice for online criminals. Less than two weeks ago, we saw the Prime Minister's phone number, amongst many others, skimmed by AI. This government has left the door open.

The technologies I saw drive productivity at NAB, freeing up humans for complex tasks, require a workforce ready to harness them, which is why the coalition is right to call for a stronger focus on back to basics and foundational knowledge in our schools. NAPLAN results have shown roughly one-third of students are not meeting expectations in literacy and numeracy. We are setting them up to fail in an AI economy. You can't be a prompt engineer if you don't have the skills of logic, critical thinking and communication. The $47 million for the Next Generation Graduates Program and the TAFE NSW 'Introduction to AI' scholarships are a fine starting point, but they are token gestures compared to the challenges ahead for the next generation.

Finally, let's turn to the growing challenge that threatens to undermine Labor's own climate agenda: the levels of energy required to power AI. The very engines of the AI revolution will put pressure on Labor's reckless renewables rollout and stifle its already impossible emissions targets. The government's response is an unfunded hope that our 'advantage in renewable energy' will solve the problem. Hope is not a strategy. There also exists a significant opportunity with respect to data centres as a growth industry. Industry leaders won't just give us contracts because we want them. We will get data centres if, as a nation, we deliver energy that's cheaper than other places. It's as simple as that. Organisations looking to operate at scale are going to be looking at a competitive advantage, not emissions targets or where the power comes from. Australia risks missing out on this next big boom, and I would argue we're missing out on it already.

A cybersecurity failure has made Australians soft targets for AI-powered crime. An education failure is leaving our children unprepared for the jobs of tomorrow, and an energy infrastructure failure risks turning the AI boom into a renewables crisis. Opportunity knocks, but will this government answer? Australia needs a serious and strategic plan for artificial intelligence—specifically generative artificial intelligence—that is built on foundations of cybersecurity and educational excellence. If not, we leave Australians dangerously exposed and ill equipped for the future. The AI future is here. It's powerful and it's exciting, but, under this Labor government, it's also unmanaged, unsecured and fundamentally at risk.

5:36 pm

Tania Lawrence (Hasluck, Australian Labor Party)

I thank the member for Maribyrnong for this timely motion, because the Minister for Science, Technology and the Digital Economy, Tim Ayres, and his assistant minister, Andrew Charlton, will be delivering the government's National AI Capability Plan before the end of this year. The plan will encompass three overarching goals: capturing the economic opportunity of AI, spreading the benefits of AI and keeping Australians safe. Realising the benefits of AI will require a commitment to the common good that I hope will be shared across the parliament.

AI is not merely another app or tool that will change work around the edges. Rather, it's a revolutionary change. AI is expected to contribute up to $116 billion a year to Australia's GDP, to create an additional 150,000 jobs by 2030 and to increase annual labour productivity growth by over four per cent across the next decade. As a result, the Albanese Labor government is making AI a national priority by supporting Australian AI companies through the $15 billion National Reconstruction Fund—which has $1 billion set aside specifically for critical technologies, which, of course, includes AI; through the $392 million in the Industry Growth Program to support innovative SMEs; through the research and development tax incentive—which supported $980 million in business R&D expenditure on activities associated with AI, machine learning and robotics, a 50 per cent increase on the previous year; and through the Medical Research Future Fund, a $24.5 billion ongoing fund with 'artificial intelligence and digital health' as one of its funding principles.

The OECD envisages AI transforming whole industries: transport, with autonomous vehicles—already we can see that at the cutting edge in Western Australia in the mining sector; optimisation in agriculture, where, again, Western Australia is leading the way, with farmers embracing the technology and its connectivity to space and satellites; automation in financial services; better fraud detection; marketing and advertising; science and technology; and, of course, health research. Natasha Banks of Day of AI suggests that in the near future most jobs will be augmented by AI and that there'll be new jobs arising, such as AI engineer, but there'll be many more hybrid occupations where technical expertise is combined with background experience in areas such as health care, education, energy or public policy. She foresees a great need for upskilling across the workforce.

Last month the ACTU called for AI agreements, including guarantees around job security, skills development and retraining. Assistant Secretary Joseph Mitchell quite rightly stated that we want to see good AI—something not merely efficient but that serves people's real interests. This approach is already being adopted in the European Union under their AI Act. I note that Minister Rishworth, who is alive to the skills challenges referred to above, stated in September, in an interview with the Guardian, that it makes commercial sense—if you want to get the job right and if you want to have the best adoption of AI—to consult with the people doing the job.

Just this month I heard from Gerard Dwyer, national secretary of the SDA, on this topic, where he addressed the national conference delegation of that union. He foreshadowed the work that is before us and before every government in the world at the same time. He asked, 'Will we build an AI ecosystem that has humans at the centre, or will we just allow one to evolve that has a small number of wealthy tech bros hoodwinking us into believing that their interests are our interests? Those who promote a light touch on AI are actually out of touch. People, workers and our communities need to be protected by strong AI guidelines. The argument that we have to choose between productivity gains or protections is a false frame. We can have both, and we have achieved both with previous technological advances. Our lawmakers need to put their citizens at the heart of their decision-making.'

I want Dwyer's question and his answer to guide our work here as we consider the necessary legislation that will allow us to benefit from AI while avoiding the pitfalls that the application of any new technology can bring. As the Treasurer stated, we want to make Australian workers, businesses and investors beneficiaries, not victims, of that change. We have already seen an example of that today, with our Attorney-General making clear the protection of copyright laws, so we know we can do it, and we will continue to work in the interests of Australians as we harness AI to support our efforts and endeavours.

5:41 pm

Alison Penfold (Lyne, National Party)

Artificial intelligence is well and truly here. It's reshaping the world we live in, but Australians are worried. People in my electorate are worried. Trust in AI is low. With all due respect to the mover of this motion in lauding how the Albanese government is preparing the community and business and government officials to utilise and take advantage of AI, the Albanese government has not stepped up to protect vulnerable people and our institutions from its use. It's being happily used in health care, finance, defence, logistics, construction, retail and government, but the crims, the con artists and the foreign actors have it too. And big tech isn't necessarily playing with a straight bat either.

I'm pleased to note the government's decision to enshrine crucial copyright protections for musicians, writers, journalists and artists, which were threatened by big tech and their AI systems. I'd particularly like to acknowledge and thank Holly Rankin from my electorate, known to many as ARIA Award nominated Jack River, for her advocacy and leadership on this issue.

In researching this topic, I came across numerous detailed papers and submissions to parliamentary inquiries published by Good Ancestors, a forward-thinking charity which believes that AI is not just another technology but one that could change almost every aspect of our lives. It proposes some sensible reforms to address AI harm, including the introduction of an AI act and the launch of an AI safety institute. In its submission to a New South Wales upper house inquiry, it noted that there are numerous threats and recommended that government list and restrict toxic AI products, like undress AIs, unpredictable AIs, autonomous AIs and rogue AIs. It also suggested AI developers should be liable if their AIs engage in harmful, unpredicted behaviours. For example, AI technology has not just advised but instructed people on how to commit suicide. This is surely one example where an Australian developer of AI that enabled such a response should be held firmly to account.

I do see, however, the many benefits of AI but also the risks and how government may check it. In essence, we need to consider how this all-encompassing technology is safely accommodated into our lives, including ensuring that we have a choice as to whether we use it or not. While AI can be used to put finishing touches to a person's letter, a proposal, a project, a policy or a body of work, it's really a matter for each person to develop their own position on their AI use. Ideally, in a liberal democracy, that is not an area for government to regulate. But then there is the use of AI for villainous, deplorable or senseless purposes.

This is where the government must step in. The dissenting report from senators McGrath and Reynolds, the coalition members of the Select Committee on Adopting Artificial Intelligence, makes for chilling reading: AI presents an unprecedented threat to Australian cybersecurity and privacy; due to the recent exponential improvements in AI capability and the unprecedented level of publicly available personal information, foreign actors can now target our networks, systems and people; and existing laws do not adequately prevent AI-facilitated harms before they occur, nor provide an adequate response after they do. It concluded, in part:

The Federal Government—

the Albanese government—

has neglected its responsibility to deal with any of the threats that the exponential growth of the AI industry poses to the Australian people and their entities.

I want to note, however, that the Department of Industry, Science and Resources has done some good work in its Voluntary AI Safety Standard, which gives guidance on how to safely and responsibly use AI and outlines what legislation may look like to manage its improper use. The department has identified areas for mandatory treatment, including how to manage data quality, identify and mitigate risks, ensure regulatory compliance, enable human intervention and respond to people impacted by AI harm. It's a massive body of work, but government needs to be ahead of the game, not behind it.

While the Albanese government is providing some means to upskill people in the use of AI, it has done nowhere near enough to protect those people too. This motion suggests that the government is more focused on its PR than protecting the public. Big tech and AI developers also need to step up. If business wants to use AI at scale, it needs to go beyond any regulatory responsibility in AI development; it must obtain society's explicit approval to deploy it. That means AI needs to earn its social licence—and fast.

Colin Boyce (Flynn, Liberal National Party)

There being no further speakers, the debate is adjourned and the resumption of the debate will be made an order of the day for the next sitting.