House debates
Tuesday, 3 March 2026
Bills
Appropriation Bill (No. 3) 2025-2026, Appropriation Bill (No. 4) 2025-2026, Appropriation (Parliamentary Departments) Bill (No. 2) 2025-2026; Second Reading
6:10 pm
Andrew Leigh (Fenner, Australian Labor Party, Assistant Minister for Productivity, Competition, Charities and Treasury)
In the 1950s a significant change took place in medicine. The advent of evidence-based medicine saw that field move from an approach which had previously prized grey-bearded experts towards a more scientific and critical one, which looked to test new treatments. Streptomycin, the polio vaccine and other treatments were evaluated using randomised trials. This was a significant step forward for medicine, saving thousands of lives. Treatments which were previously thought to be effective turned out, when tested against a rigorous control group, to be ineffective. Treatments which had been thought to be long shots turned out to save lives.
The shift towards evidence-based medicine has marked one of the most crucial changes within medicine over recent centuries. Right now, public policy is going through the same transition. Effectively, what we're trying to do with a randomised trial is to identify the counterfactual: what would have happened if the treatment hadn't been put in place. You can think of this a bit like Gwyneth Paltrow in Sliding Doors. In that movie, we get to see both alternatives: what happens when she catches the train and what happens when she doesn't. But real life isn't like that. We only get to see one pathway through, and, in evaluating new medicines or new policies, it's critical to have in our minds what would have happened if the policy hadn't been delivered.
Randomised trials assign people to the treatment and control groups through the toss of a coin. With a large enough sample, that approach ensures the two groups are identical on both the observables and the unobservables. I've spent much of my career as an economist using natural experiment approaches, trying to tease out a treatment group and a control group using variation across states, regression discontinuities or instrumental variables. But what we're trying to do in each of those instances is to benchmark against a randomised trial, approximating what we would have found if we'd been able to put a randomised trial in place.
In many instances in public policy, we now do have those randomised trials. When New South Wales was facing the question in the late 1990s as to whether to establish drug courts, it set up just such a randomised trial. That trial assigned people randomly to go through either a drug court with its specialist drug addiction treatment programs or the traditional criminal justice process. The evaluation found that drug courts passed a cost-benefit test. Those offenders assigned to the drug court were less likely to re-offend subsequently. You didn't have to place any weight on the wellbeing of drug offenders. You just had to look at the benefits to the community. Randomised trials can help cut through ideology in order to ascertain what works and build up that evidence base.
We've seen a suite of randomised trials put in place in educational interventions. I acknowledge the work of Richard Holden and others at the University of New South Wales, evaluating what happens when culturally appropriate questions are placed on tests. For example, if students are able to think in terms of the Parkes telescope—a local reference—rather than the Sydney Town Hall, they turn out to do better on the same questions. I see the member for Bass nodding. Her experience as a teacher reflects the understanding that it isn't simply about testing students on an arbitrary set of facts; it's also about making those questions culturally appropriate. That randomised trial conducted by the UNSW team provides insights that are going to be valuable in education.
In the United States we've seen the advent of randomised trials looking at what happens with high-quality housing programs. The Moving to Opportunity study provided housing vouchers for people in high-poverty neighbourhoods to move to low-poverty areas. It found a significant improvement in fields such as mental health and particularly a significant benefit to children whose families moved when they were very young.
In Australia, we've had a randomised trial of high-impact case management for long-term homeless people. The Journey to Social Inclusion evaluation conducted by Sacred Heart Mission in Victoria showed how difficult it is to make an impact on the lives of the most vulnerable. It found extremely low employment rates in both the treatment and the control group: a reminder not that we should give up on these challenges but that they are hard, and we need to do more in order to establish the efficacy of programs intended to help the long-term homeless.
Increasingly, systems are being put in place in order to conduct more randomised trials. Our government established the Australian Centre for Evaluation within Treasury in order to conduct more rigorous experiments right across the policy spectrum. We understand that taxpayer dollars are provided in a sacred trust to governments, and that it is our obligation to do with those dollars the very best that we can. The Australian Centre for Evaluation has worked with the Department of Employment and Workplace Relations, the Department of Health, Disability and Ageing, the Department of Social Services and other agencies in order to build the evidence base. They produced a recent report, Strengthening evaluation in the Australian government 2026-2030, which turns that intellectual case into a set of operating instructions right across the Commonwealth.
The Australian Centre for Evaluation, led by Eleanor Williams, has conducted a scan of all of the available randomised trials within Australian public policy going back to the 1970s. That evaluation, that scan, has been important in terms of identifying what evidence is there and what more could be added. The Australian Centre for Evaluation has also ensured that they are using their own techniques, evaluating their own education programs, their own training programs, through randomised trials. I commend them for the work that they're doing in order to bring a greater sense of rigour.
Sometimes randomised trials test significant programs, such as drug courts or housing vouchers to move to low-poverty neighbourhoods; sometimes they test simple tweaks. The BETA behavioural insights unit within the Department of the Prime Minister and Cabinet has run trials that simply inform those doctors who are superprescribers of antibiotics that their propensity to prescribe antibiotics is higher than average. That randomised trial saw a significant decrease in doctors' propensity to overprescribe: a straightforward tweak, followed up through administrative data, which had a real impact in saving money for taxpayers and in reducing antimicrobial resistance.
This illustrates the point that randomised trials can also improve democratic accountability. Philosopher Ana Tanasoca and I have written about the value of randomised trials in strengthening democracy. At a time when democracy is under strain from populist forces around the world, it is important that governments are able to show to taxpayers that we are using the very best available evidence. One way in which we're able to do that through randomised trials is by showing how cleanly we are comparing the treatment and control groups. Randomised trials are an important way of systematising evidence. At a time when there has been a reproducibility crisis within a whole lot of fields of social science, it's absolutely vital that we are able to show clearly how the evidence is being built.
I acknowledge in the chamber the member for Banks, Zhi Soon, who spent considerable professional time before he entered parliament working on randomised trials and other forms of rigorous evaluation. He brings to this place a rigour of thought which is reflected in the work the Australian Centre for Evaluation is doing. I acknowledge, too, the leadership role that the Australian Centre for Evaluation has played through the OECD. They helped the OECD convene a workshop last year on randomised trials and other rigorous evaluations, which led to a recent policy paper, Unleashing the policy potential of rigorous impact evaluation and randomised trials.
Bringing together the expertise of people like David Halpern from the UK has been critical in order to build that rigour and international evidence sharing. The Wellcome Trust, the UK's Economic and Social Research Council and other funding bodies are now supporting work to build more rigorous evidence and to distil the evidence that exists. In the UK the What Works Network, through centres such as the Education Endowment Foundation, ensures that policymakers have at their fingertips the best available evidence. The Education Endowment Foundation has conducted more than 200 randomised trials, and about half of all pupils in British schools have been part of one of its trials. That rigour is not only helping education policymakers; it's also helping principals and teachers themselves, who are able to turn to the Education Endowment Foundation site and see presented in very straightforward terms what works and what doesn't, how big the impacts are, in months of learning, and how much the program costs. In this way they can make a very scientific comparison of what's available.
I also commend the Paul Ramsay Foundation, whose experimental grants round funded, to the tune of $2.1 million, seven different randomised trials aiming to test what works in social policy at a modest cost. This is an example, as the Arnold Foundation in the United States and Jon Baron and others have shown, of the power of low-cost evaluations. More than 100 non-profit organisations stepped forward to apply for the Paul Ramsay Foundation's funding to support experimental programs. They recognise the value of rigorous evidence in supporting policy programs.
We should hold fast to our passion for tackling big policy challenges, but we should hold lightly to any particular solution. We should be willing to acknowledge, just as those researchers who are trying to find a cure for cancer do, that programs that look good in the lab may not work in the field, and we should be open to moving on to other solutions. If you take 10 medical drugs coming out of the lab, only one of them, on average, will make it through three phases of clinical trials and to market. So, too, in the area of social policy, we should recognise that much of what we value will not necessarily have the impact that we intend it to have. The rigour that the Australian Centre for Evaluation brings is important in terms of building the policy case.
Alongside this, we need to do a better job of building systematic reviews. Rather than making decisions based on a single study, we should be drawing together all of the available evidence, weighting more highly the higher quality evidence and bringing out what health researcher Julian Elliott has called 'living evidence reviews'. Living evidence reviews came out of COVID, with the notion that you needed to continuously update the evidence. The Campbell Collaboration in social science and the Cochrane Collaboration within medicine have done a very good job of producing systematic reviews, but we now need to make them continuously updated so policymakers and practitioners can immediately reach out and find out what is the best evidence on any particular topic that they're engaged in. Better distillation of the evidence and better creation of new evidence will help to build the case for better economic and social reform.
Our government is committed to evidence-based policy. You can see that in the record number of randomised trials we're putting in place and in the rigour with which we are training public servants through the evaluation profession. I commend all of those public servants who've engaged in the Australian Centre for Evaluation's training, which is raising the quality of evaluation understanding right across the Public Service. With rigour and a 'try, test, learn' approach, we can better serve the Australian people.