Senate debates

Thursday, 3 August 2023

Adjournment

Artificial Intelligence

5:40 pm

David Shoebridge (NSW, Australian Greens)

The rise of artificial intelligence has rapidly brought to light many of the opportunities and challenges that we will face as the technology develops and inevitably becomes a part of our everyday life. One of the applications of artificial intelligence which needs thoughtful and proactive consideration by this parliament is the use of generative AI, which is a type of artificial intelligence technology that can produce a raft of content, including text, imagery, audio and synthetic data. A common use of this technology is for what many refer to as deepfakes, and I'm sure a lot of Australians have already seen this type of content on their social media feeds with varying degrees of persuasiveness. The reality is that as this technology develops it will become harder and harder to differentiate between content produced by generative AI and what we could call genuine content.

If you want to get a sense of how soon this change will fundamentally impact on us, then here's a striking fact from a 2022 report by Europol, the European Union's law enforcement agency. By 2026, 90 per cent of online content may be generated or manipulated by generative AI. Let that sink in for a minute: in just three short years the vast majority of the content we will be seeing online may not be genuine and authentic. The same report warns that this may lead to people no longer having a shared reality and may create social disharmony. If this is our future then we need to prepare for the impact it will have on how Australians perceive authority and the information media, and the likely loss of trust in authorities and official facts—any kind of common shared understanding of reality.

The truth is we're already seeing this happen, and this trend will likely be placed on steroids with the increasing impact of generative AI. There is a myriad of ways in which generative AI can be weaponised, including through invasions of privacy, gendered online violence, cyber-enabled crime, propaganda and disinformation, military deception creating international crises, and the erosion of trust in institutions. That's quite a list, and it's actually far from complete. We rely so heavily on the internet in today's society. It is genuinely frightening to think that, in a few short years, we may be literally dominated by artificial intelligence.

Two examples highlight the serious challenge we face from generative AI and why this parliament needs to be proactive in addressing these risks. The first is within the legal system, where the technology can be used to generate doctored, false or fake evidence which can circumvent the rule of law. It could make it extremely hard for courts to determine the truth and will undermine the pursuit of justice. We've seen evidence of this already around the world, and, as legal experts have raised, most judges and many defence and other counsel may not even consider that deepfake material could be submitted as evidence. As this develops, it will inevitably have a profound and potentially detrimental impact on legal proceedings, where the outcome of legal trials may well be, as we've seen in a number of highlighted cases already, influenced by materials produced through generative AI. We need to address this before it develops into a crisis.

The other sphere where generative AI can be weaponised to cause harm is the political sphere, where generative AI may be used to create content in which elected officials, members of parliament or even our Prime Minister may apparently say things that they never did. We saw examples of this during the war in Ukraine, where a deepfake video of Ukrainian President Zelenskyy appeared to tell Ukrainian soldiers to surrender. Now, granted, that video was not the most convincing deepfake, but the fact remains that as the technology develops this will inevitably no longer be the case. How will citizens differentiate between a deepfake video and a genuine one? How will anyone establish the truth in the 24 short hours before an election happens?

The time is now for the parliament to take this issue seriously and take proactive steps to ensure it protects Australians, our democracy, our justice system and social cohesion. The eSafety Commissioner just last week raised the alarm on allowing big tech to self-regulate artificial intelligence and cautioned against it. The reality is that self-regulation simply does not work, and we cannot rely on corporations and billionaires, who are driven by the profit motive, to adequately regulate this technology and protect us. We cannot sit idle and rely on reactive measures when so much is— (Time expired)