House debates

Monday, 28 July 2025

Motions

Artificial Intelligence

4:52 pm

Jerome Laxale (Bennelong, Australian Labor Party)

I welcome this motion from the opposition and particularly from my good friend the member for Casey, although I might not agree with some of his editorial comments. It's not every day that we get a moment of consensus in this place, but, when it comes to artificial intelligence, we all know that the stakes are high and that the opportunity is enormous.

AI has the potential to transform our economy, to lift productivity, to solve complex national challenges and to make everyday life easier for Australians. Whether it's improving health care, boosting productivity, helping small businesses with day-to-day operations or helping individuals navigate complex data, the impact is already being felt right across Australia. In fact, some modelling tells us that AI could contribute up to $200 billion a year to Australia's GDP by 2030 and create an extra 150,000 jobs. But those outcomes don't just happen by default; they require deliberate action, they require investment, they require guardrails and, importantly, they require public trust. That's exactly what our government is seeking to do. We're taking a clear-eyed, proactive and strategic approach to the opportunities and the risks of artificial intelligence.

Our vision is for Australia to be more than just a consumer of AI; we want to be developers, deployers, adaptors and trusted users. And we're building that capability from the ground up. That's why we've allocated a billion dollars through the National Reconstruction Fund to support critical technologies like AI, backing sovereign capability and helping turn great Australian research into commercial products and services. We've also committed $17 million to set up AI Adopt centres across the country so that small and medium businesses can access the tools and advice they need to adopt AI safely and effectively. At the same time, we're investing in the workforce that will power this future, with $47 million going into the Next Generation Graduates Program to train job-ready talent in AI and emerging technologies. That work is also being done through programs like the Industry Growth Program and the R&D Tax Incentive, which supported nearly $1½ billion in AI-related projects last year. We're also backing start-ups and researchers to scale their work and push the boundaries to see what's possible. All of this adds up to a serious and sustained commitment not just to using AI but to leading in its development and application right here in Australia for the world.

But capability without trust gets us nowhere. At last count, 50 per cent of Australians use AI regularly, but only 36 per cent say they trust it. This is a serious gap, and, if we don't close it, we risk losing the social licence to use these tools at scale.

A huge shout-out to Simon Kennedy—no, not the member for Cook, who's here—who lives in Lane Cove. He's the head of the Australian Association of Voice Actors. Simon is a local comedian and a voice actor. He understands the inevitability of AI in his workplace and in ours, but not at the expense of his personal intellectual property. Unchecked, AI can mimic Simon's voice from only three seconds of audio. Simon's voice is his, and it should not be used without his consent.

Ethics in AI is so important, which is why we're embedding it into everything we do on AI. Australia was one of the first countries to develop a set of national AI Ethics Principles. These guide responsible development and ensure that our approach aligns with community standards. We're also making sure that existing laws apply to AI—privacy protections, consumer laws, anti-discrimination legislation, online safety regulations and the list goes on. These frameworks already provide critical safeguards, and we're ensuring that they keep pace with technology. We're investing in the scientific understanding of AI systems, not just how they work but how they fail. We want to know where the risks lie and how we can mitigate harm before it happens, and we're engaging globally. We've signed on to the Bletchley Declaration, the Seoul Declaration and the Hiroshima AI Process. We're active participants in an international network of AI safety institutes.

You'll see from that list—and I could go on, but I'm running out of time—a very different characterisation of what this government is doing in the AI space. It's very different to that presented by the member for Casey in moving his motion. I'm sure we'll have a bit of back and forth here, but this is a really crucial piece of public policy that we need to undertake, and it's the Albanese government that will do it.
