House debates

Tuesday, 20 June 2023

Adjournment

Artificial Intelligence

7:44 pm

Michelle Ananda-Rajah (Higgins, Australian Labor Party)

AI has exploded into the public consciousness thanks to ChatGPT. I started working on AI during my doctorate in 2009, focusing on a rare infection that affected patients with compromised immune systems. I looked at developing a tool that analysed radiology reports and the accompanying CT scan images of the chest; in other words, it performed text and image classification. Over that 10-year journey of inquiry, up until I decided to make the transition to politics, I developed this tool and deployed it at the Alfred Hospital. That journey also gave me a deep understanding of artificial intelligence: its potential, but its limitations as well.

Looking back now, I would say 'AI' is actually a misnomer. Artificial intelligence is not really what these models are about. What they do deliver is augmented intelligence, and I think that is a more befitting term, because what underpins these applications is a mathematical model; that is all it is. It is a mathematical model made up of statistical probabilities. The model is trained, usually on large volumes of data, whether images or language, in order to come up with a prediction. The model is either rewarded or penalised depending on how close it gets to ground truth, and therein lies the rub. What is ground truth? As humans we take ground truth for granted but, to a machine, ground truth can mean many things. It is honestly a case of garbage in, garbage out. If you train an AI model on a vast amount of unlabelled data, it may well end up producing outputs that are not quite accurate, and we are seeing that now with ChatGPT.
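To make that reward-and-penalty loop concrete, here is a minimal sketch, assuming a toy one-variable logistic model and made-up numbers rather than any real clinical tool; every name and value in it is illustrative:

```python
import math
import random

def predict(w, b, x):
    # Toy logistic model: outputs a statistical probability between 0 and 1.
    return 1 / (1 + math.exp(-(w * x + b)))

def train(data, steps=2000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(data)    # y is the ground-truth label
        error = predict(w, b, x) - y  # the "penalty": distance from ground truth
        w -= lr * error * x           # nudge the parameters to shrink the penalty
        b -= lr * error
    return w, b

# Four labelled examples: small inputs are class 0, large inputs are class 1.
labelled = [(0.2, 0), (0.4, 0), (1.6, 1), (1.8, 1)]
w, b = train(labelled)
print(predict(w, b, 1.7))  # approaches 1.0 when the training labels are sound
```

The 'learning' here is nothing more than nudging numbers toward whatever labels the loop is given; if those labels are garbage, the same arithmetic faithfully learns garbage.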

Although this application is able to write essays and prose, it has inherent problems. We are seeing that with fabricated references. ChatGPT is being used by university students to write essays, and it is also inventing references, fabricating them outright. Yet they look so authentic that they are causing major headaches for university lecturers and tutors, who then have to fact-check what an AI application has produced.

The issue in medicine particularly is one of bias. This is not something we spend enough time talking about. When you have large amounts of publicly available data, that data is often messy. It is untidy, and it needs to be organised into ground truth. The problem here is that we are drowning in information but essentially starved of knowledge. If we are expecting AI to come up with wisdom, I think we are not going to get to that point; it is not going to be the salvation of humanity, I can guarantee you that. Why? Because bias is inherent to AI. It permeates every step, from ideation to implementation.
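As a hedged follow-on to the earlier sketch, again with a toy model and invented numbers, the messiness described above can be simulated by flipping a share of the ground-truth labels, which typically degrades what the model learns:

```python
import math
import random

def predict(w, b, x):
    # Same toy logistic model as in the earlier sketch.
    return 1 / (1 + math.exp(-(w * x + b)))

def train(data, steps=5000, lr=0.5):
    # Nudge the parameters toward whatever labels we are handed.
    w, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        error = predict(w, b, x) - y
        w, b = w - lr * error * x, b - lr * error
    return w, b

# Clean data: inputs below 1.0 are class 0, inputs above are class 1.
clean = [(x / 10, 0) for x in range(10)] + [(x / 10, 1) for x in range(10, 20)]
# "Messy" data: roughly 40% of the ground-truth labels are flipped.
messy = [(x, 1 - y) if random.random() < 0.4 else (x, y) for x, y in clean]

for name, data in [("clean", clean), ("messy", messy)]:
    w, b = train(data)
    correct = sum((predict(w, b, x) > 0.5) == y for x, y in clean)
    print(f"{name} labels -> accuracy on true labels: {correct / len(clean):.0%}")
```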

The problem with AI is that it risks amplifying the biases we already have. As the British philosopher Miranda Fricker puts it, we come with baked-in biases and attitudinal fallout from a semi-toxic social environment. What that means is that our own biases infect the AI models we are trying to create. It starts with the teams, which are usually highly gendered, male dominated and devoid of the kind of experience needed to create models that are representative of the community they are trying to serve. Then the data itself is highly imbalanced: usually dominated by groups that are well represented in the community, with marginalised or underserved groups in the minority. That alone means an AI model can amplify not just medical harms, compromising patient safety, but social harms as well, as the sketch below illustrates. I would say to the population: be careful what you wish for. AI is not all that it is cracked up to be.
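To illustrate the imbalance point with a deliberately crude sketch, using synthetic counts and no real patient data, a model can score highly on a headline metric while failing the minority group entirely:

```python
from collections import Counter

# 95 records from the well-represented group, 5 from the underserved group.
labels = ["majority"] * 95 + ["minority"] * 5

# A lazy "model" that always predicts the most common label it has seen.
most_common = Counter(labels).most_common(1)[0][0]

correct = sum(prediction == truth
              for prediction, truth in zip([most_common] * len(labels), labels))
print(f"Always predicting '{most_common}' scores {correct / len(labels):.0%}")
# 95% accuracy, yet every record from the minority group is misclassified:
# the headline metric hides exactly the harm described above.
```

The design point is that nothing in the model is malicious; the imbalance in the data alone is enough to make the underserved group invisible to it.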