House debates
Thursday, 26 March 2026
Statements on Significant Matters
Royal Commission into the Robodebt Scheme
11:41 am
Kate Chaney (Curtin, Independent)
I want to start by sending my condolences to the thousands of Australians who were wrongly pursued under robodebt, which will remain a very dark stain on our history. Robodebt didn't happen because of a couple of bad actors. It happened because a culture of compliance replaced a culture of integrity, and ministers and departments prioritised budget outcomes over human lives.
The NACC's findings lay bare the limits of its ability to deliver justice to those impacted by robodebt, especially in the absence of public hearings, and the lack of criminal referrals provides cold comfort to the hundreds of thousands of Australians who were wrongly pursued by robodebt, particularly the families who lost loved ones. This has left the families who have fought tirelessly for the truth for the last nine years devastated.
The simple truth is that robodebt was an unlawful system introduced to improve a government's bottom line by punching down on some of the most vulnerable Australians. Robodebt was cruel, it was illegal and the government knew it. It was a shocking low point of unethical behaviour in the Australian Public Service. The Albanese government must learn from the mistakes of past governments, particularly when alarm bells are ringing on NDIS and aged-care assessments right now.
So what can we do differently to prevent this happening again? There are a number of things, but part of the answer lies in how government uses automated decision-making. To restore trust in the Public Service, Australians deserve full transparency on how government decisions are made when their lives are affected, especially when those decisions are automated. Robodebt exposed the very real risks of automated decision-making in government. Automated decision-making, or ADM, is the use of automated technology to make decisions or to assist humans to make decisions. It can be low risk. For example, the government uses ADM to automatically calculate, process and transfer your Medicare rebate after you visit the doctor. But it can also be high risk, and the use of high-risk ADM is only increasing, particularly with the growth of AI technologies.
Today, automated tools are being used or proposed in complex, high-impact areas like aged care and the NDIS. In these systems, human assessments feed into automated tools that determine support packages, and in some cases human decision-makers cannot override the result, even on review. These are decisions that profoundly affect people's lives, health and dignity. Without transparency on how these systems work, there's growing concern and fear in our communities. If we fail to act, we're at risk of repeating the same pattern we saw with robodebt—different systems but the same underlying failure.
The royal commission saw this clearly. Among its recommendations was the need for a clear, legislated framework governing automated decision-making in government to ensure transparency, accountability and legality. That recommendation was made nearly three years ago. Since then, there have been consultations. There are guidelines. There are policies covering some uses of artificial intelligence. But we still do not have a comprehensive, mandatory framework that applies across government, including to the kinds of rules-based systems that caused robodebt in the first place. While the warning was heard, the action has not followed, and it's increasingly ridiculous to hear the government talk about the coalition's robodebt failures when it has done nothing to actually make the system better—nothing to ensure it won't happen again.
That's why I'm putting forward a new idea to push the government to action on this. I'm calling for a new framework to govern the use of automated decision-making in government. This must be legislated and it must be mandatory. At its most basic level, we need a framework to ensure government decisions are safe, lawful and fair. A robust framework ensures automated systems are tested before use, appropriate to the task and governed by clear lines of accountability. It would help prevent decisions that are unlawful by design, it would ensure humans remain accountable where human judgement is required and it would create clear pathways to correct mistakes quickly before harm escalates. This is the minimum standard that Australians should expect after robodebt.
It's not just about preventing harm. Automated decision-making also presents genuine opportunities. When it's designed well, it can make governments faster, more consistent and more responsive. It can reduce backlogs, it can streamline routine rules-based decisions and it can free up public servants to focus on complex cases where empathy, judgement and discretion are needed most. In a system under pressure from growing demands, constrained resources and rising expectations, these efficiencies do matter. But those benefits will only be realised if the government has the confidence to deploy automation responsibly and if the public has the confidence that it will be used fairly. Without a clear framework, departments are left to navigate these risks on their own. Some become overly cautious. Others push ahead without sufficient safeguards. A clear, legislated framework would give certainty, would allow innovation within boundaries and would support better decision-making.
None of this is possible without trust. Robodebt did enormous damage to public trust in government decision-making, particularly in automated systems, and that damage has not healed on its own. I saw this clearly in my own community consultation that I undertook last week. More than 750 constituents responded to my survey about automated decision-making. More than three-quarters of respondents said they were uncomfortable with the government using automated systems to help make decisions about them. An overwhelming majority supported mandatory, legislated rules. The message was clear: people don't trust ADM.
Trust is not built through slogans or assurances. It's built through transparency, accountability and meaningful rights of review. Without a framework, every new automated system will be met with scepticism and fear, no matter how well intentioned. With a framework, we can rebuild confidence and allow government to genuinely harness the benefits of responsible automation.
So what does a good automated decision-making framework actually involve? Well, at a high level it needs three pillars. The first is transparency. People have a fundamental right to understand decisions that affect them, particularly when those decisions are made by machines. A strong framework requires transparency at three levels. At the system level, there should be public visibility over where automated decision-making is used. A public register of ADM systems would allow scrutiny, accountability and informed debate. At the decision level, people must be informed when an automated system has been used in a decision about them, and this should never be hidden. And, at the explanation level, people must be given meaningful reasons for decisions in plain language. If a system cannot explain how it reached an outcome, it has no place in decisions that affect people's rights or livelihoods. Transparency is simply essential to build trust in ADM.
The second pillar is strong decision-level controls. Not all decisions are the same, and a framework must be risk based. Before any automated system is deployed, its risks should be assessed. Some decisions, particularly those involving discretion, complexity or significant potential harm, may simply be inappropriate for automation. Where automation is used in high-risk contexts, clear safeguards are essential. There must be a human who's accountable for the decision and its outcomes. Responsibility cannot be delegated to an algorithm. For high-impact decisions, there must be meaningful human involvement, including the ability to override or correct an automated outcome. Automation should support decision-making, not replace responsibility.
The third pillar is review and oversight. One of the most damaging aspects of robodebt was how difficult it was to challenge. When people did challenge it, the system pushed back repeatedly. A good framework ensures people can quickly raise concerns and seek review, particularly where delay could cause serious harm. It also requires ongoing monitoring and testing of automated systems so that errors are identified early, not after years of damage. Critically, there must be independent oversight. Without an independent body empowered to enforce the rules, a framework risks becoming aspirational rather than real. Independent oversight ensures compliance, accountability and continuous improvement. Rules only matter if they're followed; the robodebt scheme was unlawful at the time, but, without a well-resourced oversight body, it was left unchallenged. A new framework governing the use of ADM in government is essential and will only work if it includes strong oversight.
Robodebt was a profound failure, and it's time we did something so that it doesn't happen again. We need a legislated framework for automated decision-making, and this must include requirements for transparency, strong decision-level controls and provisions for oversight and review. This framework would support the government in unlocking the efficiency gains of automation while imposing only minimal, risk-based compliance requirements on departments and agencies. It would also build trust. It's time to show Australians that we've learnt from robodebt and that we're prepared to do what's needed to prevent it from happening again.