House debates

Monday, 23 March 2026

Private Members' Business

Artificial Intelligence

12:06 pm

Kate Chaney (Curtin, Independent)

At its heart, this motion speaks to the government's current hands-off approach to artificial intelligence. AI is already reshaping economies, labour markets and the information environment, and, while its ultimate trajectory remains uncertain, the scale of potential change is enormous. Yet, despite this, Australia has done remarkably little strategic thinking about what this transformation means for our economy, our institutions and our society. That gap was apparent in the government's long-awaited national AI plan. It set out three worthy objectives: capturing the opportunities, spreading the benefits and keeping Australians safe. But, beyond that, its 37 pages largely collated existing announcements. Somehow both the techno-optimists and the techno-pessimists walked away dissatisfied. There was little ambition in relation to capturing AI's upside and limited reassurance on safety.

While I commend the establishment of the AI Safety Institute, the plan did not offer much of a sense of preparedness for the future. The underlying message felt like, 'Let's sit back and see how this develops.' That approach is also reflected in investment. Australia is not investing much in AI compared to other countries. Over the last five years, Canada has invested six times more than us in AI. Singapore has invested 15 times more than us. The UK and Germany have both invested three times more than us per capita. To be fair, part of this tentative approach reflects genuine uncertainty. No-one knows exactly how powerful AI will become or how quickly. Perhaps today's excitement will amount to little more than a generation of very effective chatbots. But, globally, there's an incredible amount of capital riding on the bet that the impact of AI will be much greater than that.

We should be shaping our own future. At present, many of the most consequential decisions are being made by the tech oligarchs on the other side of the world. We must seize the reins. So how do we do this? First, we must deal with the risks and opportunities that already exist. We're already seeing real harms from AI: psychological distress from chatbot interactions, sophisticated AI-enabled scams and fraud at scale. A sensible starting point is a digital duty of care, requiring platforms to take reasonable steps to prevent foreseeable harm. This must extend to include AI chatbots. We should ensure our regulators have the technical expertise to identify and disrupt AI-powered scams. We must also start to develop policy to manage AI-enabled disinformation campaigns, particularly during elections.

On the opportunity side, AI's potential in science and research is already clear. Australia should be investing heavily here. The government has already committed more than $360 million through existing programs. That's a strong foundation, but it must be clearly directed towards AI-powered research, and it must be scaled up.

Second, we should pursue no-regrets policies, policies about our future that make sense regardless of how AI ultimately develops. Strengthening the AI Safety Institute is an obvious one. The institute has a significant responsibility in monitoring risks from AI and working with policymakers and regulators to manage them. Yet funding for our AI Safety Institute is about one-sixteenth of comparable efforts in the UK. It needs more funding. It also needs to be protected from the bureaucracy of the standard Public Service so it can move nimbly and independently. The government should also consider giving the institute more powers to gather information so it can effectively monitor the AI risk landscape.

Another no-regrets policy is investing in an AI-ready workforce, from university training to mid-career transition pathways, and deepening international collaboration with trusted partners on safety standards and governance. These investments will never be wasted.

Finally, we must start planning for the most significant future risks. There are some very dramatic predictions about how AI could change our workforce, society and economy. These may or may not come to pass, but, with the speed of AI, we can't afford to wait for them to happen before thinking about how we might respond. Even if the likelihood of some of these scenarios is low, the consequences are so significant that we need to be prepared. A large chunk of the workforce could become unemployable. Economic value could be concentrated in a small number of companies across a wide range of industries, and our tax bases could be undermined. All of these shocks would require government intervention, and it would take time to build the social licence needed. What is clear is that this hands-off approach is not good enough. The government must take an active approach, because we must ensure that Australian voices are determining our future, not big tech.
