Senate debates
Wednesday, 3 September 2025
Statements by Senators
Artificial Intelligence
12:56 pm
Matt O'Sullivan (WA, Liberal Party, Shadow Assistant Minister for Fisheries and Forestry)
Only a few technological advancements have revolutionised the world as we know it: the telephone in the 19th century, the computer in the mid-20th century, the internet and later the smartphone in the late 20th century and early 21st century—and now artificial intelligence. In late 2022, the world was introduced to ChatGPT, OpenAI's conversational AI tool, which amazed the world with its ability to comprehend. Powered by a large language model, it was unrivalled in its capacity to solve problems, write essays and answer endless questions with minimal effort from users. It took the world by storm, with OpenAI reporting it had amassed one million users in its first five days alone. This has now grown to over 400 million active users each week. Christopher Columbus's exploration of the New World ushered in new opportunities and trade routes, and after five centuries we are once again on the verge of a new world. This era, which is just 12 to 24 months away, will be a fundamentally different world to the one we inhabit today.
Artificial intelligence is rapidly approaching a threshold where it doesn't just regurgitate information but reasons and creates in ways previously unique to humans. This new frontier, known as artificial general intelligence, or AGI, is not decades but mere months away. OpenAI's GPT-5, DeepMind's Gemini, xAI's Grok and Anthropic's Claude are rapidly advancing towards AGI, with models capable of planning, reasoning and decision-making that in some areas rival human performance.
Once limited to simple query answering—like you would give to Google or a search engine—AI now matches or surpasses humans in tasks such as comprehension, translation, coding and complex problem solving. This development changes everything. Let me say it again. This development changes everything. The world as we know it is rapidly changing around us. Goldman Sachs estimates that AI could automate the equivalent of 300 million full-time jobs globally. Locally, AI is already being integrated into an array of workplaces. We're seeing AI-backed technology writing out doctors' notes and divorce agreements, and even sorting through fresh produce. No job is exempt, with job displacement set to affect all levels of work, from data entry, content generation and administrative assistance to knowledge-based workers—policy advisers, dare I say—marketers, paralegals and even coders.
Although this shift will generate new opportunities, it also comes with urgent risks. The issues are multifaceted and indeed complex. Firstly, AI is accelerating faster than our ability to contemplate regulation, let alone actually regulate it. While many countries are busy putting safeguards in place, Australia has so far contributed little more than recommendations to the AI policy conversation. The European Union passed the world's first comprehensive AI act in March 2024, creating clear obligations for developers and protections for citizens. The UK launched its own strategy, including an AI safety institute. Sadly, Australia is missing in action.
In July, US President Trump issued sweeping executive orders titled 'Winning the AI race'. These orders ban ideologically driven AI models in federal agencies. They mandate neutral systems, and they rescind prior safety mandates, favouring innovation-focused action plans instead. China's recent AI summit promoting global solidarity included a proposal to base a UN-style AI body in Shanghai. This is a clear power grab that should alarm anyone who values transparency and rights-based AI rules. The world's leading surveillance state cannot be trusted to oversee global AI governance. Without immediate implementation of effective and coordinated frameworks, AI technology will rapidly outpace Australia's capacity to manage it. We need frameworks that mitigate serious risks, including sovereign risk, biases, disinformation, propaganda, foreign electoral interference, online harm, cybercrime and copyright violations.
It's not just safety and protections that Australians must regulate; our long-term economic and fiscal survival is also at stake. We need to answer the hard questions: Who should benefit financially from AI? Who should be taxed when algorithms and AI replace human labour? How should IP be protected and properly remunerated if used? How do we make sure global tech companies pay their fair share of taxes in Australia when they expect to freeload on our infrastructure, our energy and our data? We cannot outsource our digital future. Australians must be creators and curators, not just consumers, of this new frontier. If we're serious about sovereign capability in AI, we need to get serious about energy. AI runs on electricity, not ideology, and right now our energy grid simply isn't fit for purpose.
Microsoft and other tech giants are turning to nuclear energy to meet the staggering power demands of their AI infrastructure. Meanwhile, China, which is unburdened by the ideological constraints of net zero at any cost, is building coal-fired power stations to ensure that it can dominate the AI age. If Australia wants to compete in this space, we need to put energy security and affordability back at the centre of our policy. That means keeping all options on the table, including nuclear, and being honest about the limitations of the current approach. Technology has long played a significant role in bringing people closer, driven by the desire to solve the problems closest to home. The cochlear implant was developed in part by Australian researcher Graeme Clark, who was motivated by his father's struggle with hearing loss. Innovation has always been best fuelled by the impulse to protect, connect or restore something essential for people.
AI takes us one step further. It shapes not just communication but also our thoughts, beliefs and how we see ourselves in the world around us. It's not out of the question that our identity will become increasingly homogenised and shaped by AI to reflect dominant international narratives instead of focusing on national and local factors. Social media platforms powered by AI algorithms are already key to driving this trend. In this new era, knowing what you believe and why will become more important than ever. Our national identity and the virtues that underpin its formation, as well as ideas of individual dignity, moral equality and the duty to protect the vulnerable, cannot be assumed. As historian Tom Holland observes in Dominion:
To live in a Western country is to live in a society still utterly saturated by Christian concepts and assumptions.
Even our most secular values are shaped by this inheritance. These foundations must be taught, protected and actively championed, especially as we enter an era in which global AI systems may be indifferent to them at best or hostile to them at worst. Our teachers, classrooms and education system must equip children for this new reality. Students must be encouraged to learn reasoning, rationalisation and the formation of opinions. Privacy for students and teachers must also be addressed in Australia's AI policy development. I say all of this not as a doomsayer but as a technological optimist. My colleagues and family know me as an early adopter of technology. I love the latest advancements and innovations, and I remain optimistic about a future where technology continues to make our lives smarter, faster, easier and, indeed, more connected. But optimism is not a strategy. Regulation must keep pace with innovation, and ethics must persist alongside AI capabilities, because right now technology is outpacing our laws, our policies and our institutions. The time to act is not now; it was yesterday. We have to urgently catch up and do what we can to make sure that AI policy in Australia is fit for purpose not only for where we are now but, indeed, for where we're going into the future.