House debates

Monday, 19 June 2023

Private Members' Business

Artificial Intelligence

5:22 pm

Julian Hill (Bruce, Australian Labor Party)

I move:

That this House:

(1) notes that the development and implementation of artificial intelligence (AI) is rapidly accelerating globally and in Australia;

(2) acknowledges that while there is much uncertainty surrounding both the development and adoption of AI technologies, and that 'AI' is a term used to describe a variety of techniques and applications, what is clear is that these technologies will transform human society, how we experience our lives and how we understand reality;

(3) recognises that harnessing the benefits of AI presents enormous opportunities for Australia, including:

(a) the potential for AI to boost productivity and revolutionise many industries;

(b) the capacity to transform our economy with advances in every conceivable field of human endeavour;

(c) new employment opportunities through human-centred AI;

(d) improving health, wealth and equality outcomes for all Australians including through improved government service delivery; and

(e) enhancements to environmental sustainability through better-informed decision making and accelerated scientific discovery;

(4) further notes that in order to safely harness these benefits, Australia must also act to mitigate the profound risks posed by AI, including:

(a) immediate and tangible threats to job security and industrial relations;

(b) the risk that AI could perpetuate or amplify existing biases and discrimination;

(c) the risk that AI could perpetuate or enable new forms of disinformation and misinformation;

(d) social and democratic harm through the use of AI in cyber attacks and large-scale disinformation campaigns;

(e) further digital marginalisation and inequality; and

(f) the threat of social disruption and national security risks;

(5) recognises that notwithstanding positive efforts underway to address matters related to AI—including responsible AI standards and policy—Australia has broader capability and governance gaps and needs to ensure that regulatory oversight of AI development and adoption in Australia is fit for purpose;

(6) affirms that:

(a) AI is one of the most transformational technologies of the 21st century, on par with the industrial revolution;

(b) the level of risk posed by unchecked AI, and the scope of policy development needed to curtail this risk, warrants urgent attention;

(c) industry leaders are calling for additional government action and regulatory cooperation;

(d) AI governance, regulation and public-good investment is too important to be left to industry or technical experts alone; and

(e) the Australian Parliament and Government have a responsibility to consider and act thoughtfully and promptly in responding to these changes; and

(7) further notes the recent regulatory moves underway in other jurisdictions, including diverse approaches to AI governance in the EU, the USA, China and the UK; and

(8) recognises that all Members of Parliament have a responsibility to engage with the transformative challenges presented by AI, and together explore what Australia should do to:

(a) foster and contribute to a national debate about AI;

(b) seize the enormous opportunities that AI technology will continue to generate;

(c) mitigate, through appropriate regulatory measures, community anxieties and the profound risks posed by unchecked AI; and

(d) deliver an Australian approach to AI governance and regulation informed by values of democratic participation, nation building, social justice, equality, consumer protection and international cooperation.

It's so important that MPs engage with and debate this. ChatGPT has fuelled public awareness of artificial intelligence, but large language models are just the canary in the coalmine. AI technologies are set to transform human society, how we experience our lives and how we understand reality. Like a new pair of glasses or hearing aids that are difficult to remove, AI will shape our perception of life as it influences what we see, hear, think and experience online and offline. Our everyday life will be augmented by having a superbright intern always by our side. Service delivery by governments and businesses will be transformed, unleashing creative destruction in some sectors of the economy but hopefully driving Australia's next big productivity boost.

Our understanding of the universe and the world has been developed for millennia through reason, the scientific method and faith. Yet over the next generation, living with non-human pseudo-intelligence will challenge established notions of what it is to be human. Advanced AI systems trained on massive datasets are already spotting new links and patterns. Many appear true but are beyond human comprehension. Others are downright wrong, as a result of faulty data or algorithms. AI is maths. It's not magic. It's not a modern-day Delphic oracle to be worshipped or blindly obeyed. Citizens and policymakers must urgently get a grip.

While AI has the potential to generate enormous wealth, exponentially more powerful AI technologies unaligned with human ethics and goals bring unacceptable risks. Put simply, superpowerful AI is a sociopath, a shoggoth disguised with a smiley mask by engineers trying to align it to prevent misuse. But how might authoritarian governments, bad actors or rogue citizens misuse AI to trash human rights, attack democratic societies, scale up scams and organised crime, and harness dangerous knowledge and capabilities otherwise unavailable? It's more serious than the opposition's big fear, of course—that ChatGPT may support the Voice to Parliament! But, just as humanity was right to worry about the risky gain-of-function virology research, we're right to worry about uncontrolled generative AI. Imagine unleashing this pseudo-intelligence with self-executing power connected to the internet without intermediating human judgement.

Safer AI technologies, well aligned to human values and needs, are only possible with public intervention. Decisions that shape the future of our society cannot be left to the private interests of technologists and multinationals alone. Governments must act in the public and the national interest to establish guard rails and determine how and where to apply both the accelerator and the brake, harnessing the benefits of AI while mitigating and managing the risks.

AI use must support our established Australian values of social justice, equality, democratic participation and nation-building. Our new government is now acting to address Australia's AI capability and governance gap. Australia has the chance now to cherry-pick from the diverse approaches emerging globally and to craft our own world-leading AI regulatory response, one which unashamedly voices, champions and embeds our values and the famed Australian commitment to a fair go. None of this is easy.

In saying this next bit, I point out that it's not government policy. I propose the establishment of a time-limited AI commission, just for five or six years, located right at the centre of government and bringing together industry, public servants, academia and civil society. Its functions could include fostering public awareness and education; rapidly building capability across the public sector; formulating options to guide policy responses, including sovereign capability; reviewing proposals; and engaging with the leading international thinking and debate around AI. What you might call, in a non-catchy phrase, a light-touch, cross-disciplinary institutionalisation would see ethicists, lawyers, philosophers, psychologists, economists, doctors, sociologists, educators and public administrators as equal partners with the scientists and the technologists. We cannot accept pushback from scientists and the multinationals saying this is too complex. Parliamentarians have to be engaged with this debate. None of it's easy. A parallel could be drawn with the Climate Change Authority, the Australian Cyber Security Centre or the Canberra Commission on the Elimination of Nuclear Weapons.

I believe we also need to regulate access to the most powerful tech. We don't allow people to pick their kids up in battle tanks or businesses to run around with rocket launchers, and I'm pleased the government is acting now.

Bridget Archer (Bass, Liberal Party)

Is the motion seconded?

Dan Repacholi (Hunter, Australian Labor Party)

I second the motion and reserve my right to speak.

5:27 pm

Jenny Ware (Hughes, Liberal Party)

I rise to speak on the motion brought by the honourable member for Bruce, and I thank him for bringing this important motion about artificial intelligence. I agree with most of the comments made by the honourable member in his speech, particularly that parliamentarians must do more around artificial intelligence.

Artificial intelligence is one of the most transformational technologies of the 21st century. It is, in my view, equivalent to the Industrial Revolution. It is the most significant technological development since the creation of the internet itself. While most of the public commentary on artificial intelligence focuses on its potential risks, I prefer that we also recognise the enormous opportunities and benefits that will flow from this technology. We should not be afraid of this technology. We need to embrace it, and we need to work out how we can make it work in our world.

We're at an exciting tipping point in the way the world operates, and the technological advancement we are seeing around AI will be a major and exciting change in the world's economic development. It is very important, therefore, that Australia is on the front foot with this. I note that the honourable member for Bruce said that it's important that governments get on the front foot with AI. I support that view, and that's why I think it's a shame the Albanese Labor government has not proposed a minister for the digital economy, for example, because such a role would really assist in driving the nation forward on the very important front of AI.

It's a shame, for example, that this government in its last budget did not prioritise investment in AI. There was no new funding in the budget for AI; the roughly $20 million per year for five years that was announced was paid for by redirecting funding from within the Industry portfolio. I note that some parts of industry weren't particularly impressed with the budget. For example, the CEO of Sapia.ai, Barb Hyman, said:

This is a massive missed opportunity from the Federal Government to surge ahead in what is fast becoming the race to not only pioneer but leverage new AI technologies.

…   …   …

… this innovation is time-sensitive. By the time it's a focus globally, it will be too late.

There are significant economic opportunities for Australia. These include the emergence of new businesses and more jobs, whether they are pure digital economy businesses or businesses looking to be supported by AI. The changing nature of jobs will expand education offerings from universities to VET and into the school education system. AI also offers the much-needed improvement in productivity that our country requires, enabling businesses to achieve more while retaining and enhancing our international competitiveness as other high-tech countries accelerate their uptake of AI.

There are challenges that can be addressed, and it is important that these challenges are managed by government rather than used by government to stifle opportunity. For example, regulation will be required for the use of data in terms of attribution, transparency and misinformation. National security matters will need to be addressed, as the power of AI will have profound reach into our security. Legal frameworks will need to be established. These are vital to maintaining Australia's high global standing for stable governance structures while protecting citizens and businesses as they make use of all that AI provides.

There will be ethical considerations that will need navigation. Business models will need regulation. Data security and individual privacy will need to be addressed; that is one of the issues that must remain paramount. Job displacement will need supporting education and training policies. Finally, the way artificial intelligence engages and collaborates with humans, or the human interaction requirements, will need to be carefully managed. Other countries are developing their approaches to AI while our current government appears to have missed another opportunity, with a lack of investment in the budget. Let's hope that the next budget fixes that mistake.

Therefore, to conclude: the Australian government now needs to move quickly, otherwise we risk missing out on business investment, economic growth, new jobs, wealth generation and improvements in lifestyle under a government that so far appears to lack focus and investment in this area. I do commend this motion to the House.

5:32 pm

Dan Repacholi (Hunter, Australian Labor Party)

I rise before the House to support the motion put forward by my colleague the member for Bruce and to speak on a matter that promises to redefine our future, artificial intelligence. AI, as we call it, is a force of change. It's a technology that is sweeping across the globe, and Australia is not isolated from this. In the face of AI, there is of course some uncertainty. AI includes a range of technologies and applications, each with their own possibilities and challenges. But one thing is crystal clear: AI is transforming our society, our experience of life and how we perceive reality. I also know that there's a bit of apprehension out there about what it could mean for jobs, our society and our way of life. But let me tell you: just as we've embraced the wealth of our land through mining, it's time for us to embrace the wealth of our minds through AI.

Harnessing the benefits of AI could unlock huge potential for our nation. Imagine a productivity surge across industries and a transformation of our economy with new job opportunities created through AI, and picture enhanced health, wealth and equality outcomes for all Australians through improved government services delivery. Think about the possibilities of better environmental sustainability decisions driven by AI and the acceleration of scientific discoveries. Australia is not just a country rich in minerals; we're also a country rich in technical talent, innovation and technological capabilities. We're the country that invented wi-fi. I want to see us enhance our technology potential in much the same way that we've harnessed the power of our resources sector.

Let's be clear: AI doesn't have to be about replacing jobs. It could be about enhancing them. It could be about freeing up our dedicated Australian workers so that they can focus on what really matters to them. It's about equipping our workforce with the tools they need to perform their jobs more effectively and efficiently. It's about creating new opportunities, new industries and new jobs that we can't even imagine just yet. That's why we as a government need to step up. We need to invest in our people, invest in our businesses—invest in our future. We need to ensure our education system is equipping our young people with the skills they need to thrive in this new era. We need to support our businesses, large and small, in adopting and integrating AI technologies.

Yes, with every revolution comes a set of risks. We must acknowledge and mitigate these as we embrace the AI wave. From job security risks to risks of amplified biases and discrimination, from disinformation and misinformation threats and potential social and democratic harm to the threat of social disruption and risks to national security, we have our work cut out for us. As the motion says, the transformational power of AI is on par with the Industrial Revolution of the 19th century. To manage the potential risks, urgent policy development and a steadfast regulatory approach are called for. The responsibility of AI governance and regulation cannot be left to industry or technical experts alone. We, as members of this parliament, bear the responsibility to act thoughtfully and swiftly in response to these changes.

Governments around the world are taking diverse approaches to AI regulation. The European Union, the United States, China and the United Kingdom are all stepping up to the challenge, and we must do the same. We must foster a national debate about AI, seize the colossal opportunities that it will continue to generate, and mitigate the risks posed by unchecked AI. We must carve out an Australian approach to AI governance and regulation—an approach that champions democratic participation, nation-building, social justice, equality, consumer protection and international cooperation. Let's lead Australia into this brave new world with foresight, responsibility and courage.

Remember, we're a country of innovators and builders. We have always been at the forefront of change. There's no reason for us to stop now. So let's embrace AI, let's harness its potential and let's make sure Australia continues to lead. AI isn't some distant, abstract concept. It's here and it's now. It's helping doctors diagnose faster and more accurately, it's assisting farmers to manage their crops more efficiently, and, Madam Deputy Speaker, it's even helping to write speeches for parliamentarians.

5:37 pm

Allegra Spender (Wentworth, Independent)

I thank the member for Bruce for moving this important motion. I've just asked ChatGPT to give me a five-minute speech, and it looks like it's covered most of what I've got to say today, but I'm still going to work off what I put together originally.

The changes to technological capability we have become aware of in the last year are nothing short of miraculous. The technological promise of AI is so great that we will doubtless see huge, unexpected changes from its adoption in the years to come. I genuinely believe that AI could be as transformative in this century as the transistor was in the last. The opportunities will be enormous and dispersed across every part of our economy. AI will help farmers maximise crop yields and minimise pesticide use. It will help businesses find their customers and ensure their products are the right fit. Just last week, I met with a radiologist who had started adopting AI tools to help improve the accuracy of their diagnoses. Lives will be saved because of these tools.

There are incredible opportunities for AI to help Australians start their own businesses, supporting entrepreneurship and innovation. That's not least because Australia has very strong data, which is an incredibly important underpinning of AI. It will enable, I think, further growth in the tech sector here in Australia, which already does incredible work and could do so much more to help Australians reach their potential.

But of course this comes with risk. The first that comes to mind is the increasing use of deepfakes, which are getting to the point where they're indistinguishable from genuine content. Seeing is no longer worth believing. I worry about the effect it will have on our society, particularly as malicious actors use deepfakes as a tool of disinformation. We know misinformation and disinformation already pose serious risks to our democracy. AI may supercharge those risks.

A second risk, and one about which many are concerned, is job displacement. For many people, AI has so far been experimental in their jobs rather than a replacement for them, and there is suddenly the opportunity for AI to genuinely augment individual productivity. But there will be changes to the job market because of AI, and that will affect people in our community.

There are other risks. If you ask one of the commonly available AI systems, such as ChatGPT, about the risks, they flow quickly across your screen: bias and discrimination, as systems trained on datasets that contain inbuilt discrimination will amplify and perpetuate those biases, which we have already seen affect not only women but also minority communities such as LGBTIQ people; security and privacy risks, as AI enables surveillance and data collection to increase exponentially; and ethical and moral concerns, as life-and-death decisions are delegated to AI, where a lack of transparency and even explainability leaves AI making life-changing decisions that very few of us can comprehend and for which no-one is clearly accountable.

These risks are real, and our understanding of these risks is inadequate. So there is a strong argument for regulatory intervention with AI, and I support the principle of ensuring that Australians are protected from the downsides as much as reasonable.

There is also a danger if we overregulate, if we try and protect ourselves from every possible negative effect of change and end up missing out on the real benefits of reform. This is what we've seen in so many areas of government action. It's why the burden of regulation falls so heavily on business and why we've experienced lacklustre economic productivity and wage growth for the last 15 years. Let's not make the same mistake with AI. Let's get on the front foot and ensure we have a regulatory framework that is fit for purpose.

I very much agree with the member for Bruce on the recommendation that our first responsibility is to get educated, get educated across the parliament and get educated across this country. Let's work with AI users, let's work with businesses, workers, researchers and experts. Let's develop our understanding and develop the regulatory framework at the same time to make sure that it is fit for purpose. By embracing AI with a balanced and proactive approach, we can ensure that businesses realise their potential, drive economic growth and become more competitive in the global market.

Government must foster an environment that encourages innovation, prioritises ethical considerations and invests in essential skills and infrastructure. We need to do everything possible to ensure Australians fully benefit from this incredible technology and what it has to offer but also protect our country, our values and the most vulnerable.

5:42 pm

Jerome Laxale (Bennelong, Australian Labor Party)

Never fear! I did not use AI to write this speech, but I did use AI to turn each section of this speech into the style of a different Labor prime minister, so, for those playing at home, let's see if you can identify which PM inspired which section.

We're starting with the first section: We must stand firmly committed to the safe and inclusive adoption of transformative technologies such as AI. We need to embrace the immense potential that AI holds while ensuring that its development and deployment align with the interests and values of our community.

AI encompasses a wide spectrum of technologies that permeate every sector of our economy. Studies suggest that AI could contribute a staggering $1 trillion to $4 trillion to our economy over the next 15 years.

We have already witnessed awe-inspiring applications of AI in tackling real-world challenges. From combating the impacts of climate change to revolutionising the development of life-saving vaccines, AI has consistently demonstrated its value in resolving everyday problems. A prime example lies within my own electorate of Bennelong, where Medtronic has crafted an intelligent endoscopy module. This groundbreaking technology, empowered by AI, has been granted approval by the TGA in Australia and amplifies the prospects of early detection of colorectal cancer, an ailment that is highly treatable if caught in its early stages.

Did we get that one? No? Maybe?

This is the next section: Nonetheless, it is essential to confront the legitimate concerns and risks that accompany the rapid development of AI. The accelerated progress of AI models has understandably raised apprehensions that these technologies may be deployed prematurely without a comprehensive understanding of their implications. An unchecked rollout of AI jeopardises jobs and industrial relations and exacerbates biases, deepening social divisions. Disinformation and misinformation amplified by AI can erode our democracy and social fabric. We must be wary of social disruption and national security threats stemming from AI's malicious misuse in cyberattacks and the spread of deceptive information. It is incumbent on us to confront these challenges head on. We must address issues such as algorithmic bias, data privacy, security, potential misuse of AI and the dissemination of misinformation. By doing so, we can foster an environment that harnesses AI's immense potential while safeguarding our society's wellbeing and interests.

That one was a bit easy.

Julian Hill (Bruce, Australian Labor Party)

That was Kevin Rudd, wasn't it?

Jerome Laxale (Bennelong, Australian Labor Party)

Well done. Congratulations.

The next one: Rest assured that ministers of this government are diligently working together to ensure the adoption of generative AI that is guided by ethical considerations, privacy protection, prevention of online harm, and responsible usage. Furthermore, we have started to take proactive measures to establish regulatory frameworks for AI in Australia as trailblazers in the field. We were among the first nations to develop national AI ethics principles, underscoring our commitment to responsible AI development. Through proactive steps, this government demonstrates our unwavering dedication to ensuring that AI development and deployment in Australia align with our values, priorities and the best interests of our citizens. Go the Rabbitohs—okay; that one was the Prime Minister!

The next one: Our commitment to the safe and responsible implementation of AI goes beyond mere policy discussions, because we do not want AI to do us slowly. In the 2023-24 budget we have allocated a substantial $41 million to bolster industry adoption of responsible AI practices through the National AI Centre. Furthermore, we have allocated a significant sum of $132 million to the eSafety Commissioner. This allocation is testament to our unwavering dedication to safeguarding the wellbeing of Australians in the digital realm.

We firmly believe in the importance of widening the pipeline of STEM talent in Australia. By nurturing an environment of inclusivity and providing support for development of STEM skills, we can cultivate a skilled workforce that drives innovation in the field of AI while upholding our core Labor values of safety, responsibility and inclusivity.

An honourable member: That's got to be Gillard!

Ha-ha!

We recognise the pressing nature of this issue and the imperative to develop robust policies that address the risk of AI.

An honourable member: No AI will live in poverty?

I want to finish by commending the member for Bruce for his passion in this area. I think he's been at the forefront in this parliament of the uptake of AI and of promoting its use, as well as ensuring that we as politicians take real care in understanding the good and the bad of AI. So, looking at his robust motion today, I had to put my name down to speak to it, because this is an exciting time but also a concerning one, so it's good that we've got members like the member for Bruce here.

5:47 pm

David Coleman (Banks, Liberal Party, Shadow Minister for Communications)

I also want to start by honouring the member for Bruce. He could not have picked a more significant topic, and he has been talking about it for a number of months. The reality is that AI will dwarf in significance the vast majority of issues that we discuss in this building. This is really quite a remarkable point in history, as this technology really starts to come to practical fruition.

For me, the first time I used ChatGPT—which obviously is the one getting the headlines—it reminded me very much of the first time I used Netscape, the first commercial web browser, when I was at university in the mid-nineties. That moment, for me, was like, 'Wow, the world is about to change dramatically.' And it did. I think we are at a similar inflection point with AI. I'm not generally a hype merchant. There have been a lot of technologies that have come and gone over the years that I've had little to say about or have been quite sceptical of, like cryptocurrency. But this is very different.

There are a number of things we need to do as a society to respond to this. The first is the basic principle of 'first do no harm'. As others have pointed out, there is a risk of jumping too aggressively into the regulatory sphere here, and that could be very problematic, because the last thing we want to do is suppress innovation and investment in AI technology in Australia. In Australia we've always been good at consuming technology. We've always been very early adopters of technology. Where our record is less clear is in the development of IP related to technology. We've done a lot of good things, and people often point to examples such as wi-fi, ResMed and Cochlear, but in some aspects of technology we've been a stronger consumer than we have been a creator. It's really important that we be at the forefront of the creation of intellectual property.

It's also important that we take a leading role in the assessment of risk, which is very real, and that we lead in that regulatory role. Everyone says that there are huge opportunities here but that they come with huge risks. That insight is in no way unique, but the question is: what are you going to do about it? It is important for the government to take action soon. We welcomed the release of those government reports a couple of weeks ago in relation to the consultation process. That was good, but we do need to see the government take action. We are a little perplexed as to why the Minister for Industry and Science seems to be leading this process on his own. Clearly there's a regulatory question concerning the internet which one would think would be in the domain of the Minister for Communications, and we would like to see a broader response to these issues from the government.

We also want to see the government take action right now on the particular issue of the use of intellectual property within generative AI models. Basically what's happening now is that a lot of IP of Australian companies is being taken for use in generative AI models, but there is no compensation being provided, as it would be in the normal course of events under intellectual property law. The government needs to act on that. That issue has been known for a while now, and they need to take action.

On the question of mitigating risks, it's a question of sovereignty. Just as every other technology has, ultimately, been controlled within the realms of our sovereignty and democracy, so too must AI be. That is the framework through which you need to think about this. There's complexity in AI products that are created through open source. There is a whole range of issues about how one regulates products where the creators themselves are not entirely clear on how their products work or how they manifest themselves in the market. There's huge complexity. It's very likely that international cooperation will be required, and OpenAI has had some interesting things to say about that which the government should look at very closely.

I acknowledge, and thank, the member for Bruce for raising this very important issue.

Bridget Archer (Bass, Liberal Party)

The time allotted for this debate has expired. The debate is adjourned, and the resumption of the debate will be made an order of the day for the next sitting.