Senate debates
Wednesday, 4 March 2026
Motions
Online Safety Act 2021
9:02 am
Fatima Payman (WA, Australia's Voice)
In line with general business notice of motion No. 386 given yesterday, I move:
That the Senate—
(a) notes that the statutory review of the Online Safety Act 2021, which was completed in October 2024, remains unimplemented by the Government;
(b) further notes that the statutory review of the Online Safety Act 2021 found that the eSafety Commissioner is ill-equipped to protect the Australian women and girls who are subjected to harassment, abuse and cyberbullying on a daily basis; and
(c) calls on the Government to implement the recommendations of the statutory review of the Online Safety Act 2021.
Colleagues, imagine waking up to find your face circulating online, not in a family photo, not in a news article, not making headlines, but in a pornographic deepfake. You open your inbox and read, 'We are going to rape you. My bat is going to fit nicely in your skull. I will cut off your throat.' Your home address is published. Your family's names are circulated. Images appear of you hanging in a noose and of a gun pointed at your head. You call the police, you are told it is serious, you attend a station, you are handed a pamphlet, you contact the cyber security hotline, you are referred back, you contact the eSafety Commissioner and you are told that this does not meet the threshold. That was the experience of Perth activists Caitlin Roper and Lyn Kennedy. It was the experience of women who campaigned against video games that simulated rape. After more than 20,000 games were removed, the backlash came—coordinated, sustained and vicious—with threats to rape, threats to kill, deepfake pornography, doxxing. And when they sought help, they were told that the law could not act.
That is not a failure of effort by the police or regulators; it is a failure of the threshold set by this parliament. The statutory review of the Online Safety Act, delivered in October 2024, found that the adult cyberabuse scheme is not fit for purpose. Only around six per cent of reports to the scheme meet the current legislated threshold—that's six per cent. When 94 per cent of Australians who report abuse are told, 'It does not qualify for removal,' we must ask whether we have set the bar too high for a civil take-down regime. Under the current law, an ordinary, reasonable person must conclude the material was intended to cause serious harm to an Australian adult before eSafety can act. Serious harm—that may be an appropriate standard in a criminal prosecution, but the adult cyberabuse scheme is not about imprisonment; it is about removing menacing, harassing and abusive content from public circulation. Requiring proof of serious harm in that context has proven disproportionately difficult. The review made a clear recommendation: lower the threshold.
Yesterday I introduced the Online Safety Amendment (Broadening Adult Cyber Abuse Protections) Bill 2026, which implements recommendation 18. It recalibrates the test so that an ordinary, reasonable person must conclude it is likely the material was intended to have an effect on a particular Australian adult and the material is, in all the circumstances, menacing, harassing or seriously offensive. At the same time, the seriousness requirement is strengthened. Material must not be merely offensive; it must be seriously offensive, representing a significant departure from the standards of morality, decency and propriety generally accepted by reasonable adults. This is the rebalancing that we need. It lowers one limb and raises another. It maintains an objective test, and that's what experts have been calling for. It preserves safeguards. It ensures that the scheme catches genuinely harmful conduct without sweeping in robust debate.
Colleagues, this is not theoretical or fanciful. We know from eSafety's own research that 70 per cent of adults have had at least one negative online experience in the past year. Women, First Nations Australians, culturally diverse communities, LGBTQI Australians and people with disability are disproportionately targeted. Organisations across this country see the consequences every single day. Phoenix Australia works with victims of sexual trauma, including those who experience sustained harassment and threats that create ongoing psychological harm. Women's legal services report increasing numbers of clients whose online abuse escalates into fear for their safety. Headspace and Lifeline speak about the impact of digital harassment on young people's mental health. And mental health organisations are telling us, time and time again, that online abuse is not just words on the screen. It has real-world consequences in the form of anxiety, hypervigilance, depression and withdrawal from public life. Advocacy organisations such as Collective Shout have warned that women are increasingly silenced by fear. Disability advocates speak about targeted harassment campaigns designed to exhaust and intimidate people.
This is the environment that Australians are navigating every single day—and I know what it's like to stand in that environment. Nearly two years ago, I crossed the floor of the Senate chamber on a matter of conscience. I knew it would generate anger, but I did not anticipate the volume and persistence of the threats that have since followed. You can grow accustomed to the criticism because it's just part of public life, but what you do not grow accustomed to are the threats to kill you or threats against your husband, your five-year-old nephew or even your British shorthair kitten, or messages describing how someone will shoot you. The routine contact with the Australian Federal Police about individuals making credible threats is not just exhausting; it's time-consuming and it's something that we should never have to subject our teams and our officers to. The vicarious trauma that impacts our staff is definitely something that needs to be addressed.
Going back to the online world, it's not abstract when you see your address circulating or your family members' names mentioned. This isn't about politicians feeling uncomfortable; it's a broader pattern of behaviour that we're seeing. It's disproportionately affecting women, especially those who choose or have the bravery to speak up.
When Caitlin Roper told my office that eSafety dismissed the material because it referred to an organisation rather than an individual, even where individuals were specifically tagged in threats to murder them, it exposed how the current threshold can operate in practice. The regulator is constrained by the legislation we have written, so, if we want a different outcome, we must change the law. The amendments in the bill do not criminalise speech or curtail freedom of speech. They do not remove the reasonable-person test. They do not eliminate safeguards. They simply adjust the threshold so that targeted abuse can be addressed before it escalates further.
The amendments also apply to material provided before commencement. Again, this does not create criminal liability. It simply enables the regulator to issue takedown notices for material that meets the recalibrated threshold regardless of when it was posted. Again, we see that as a protective measure rather than a punitive measure.
Freedom of expression is fundamental, and it is protected under article 19 of the ICCPR, but it is not absolute. The same covenant recognises that restrictions may be imposed where necessary and proportionate to protect the rights and reputations of others—the right to dignity, the right to safety and the right not to be subjected to unlawful attacks on honour and reputation. I see this bill as being proportionate. It maintains a high seriousness bar. It ensures that only material representing a significant departure from accepted standards is captured.
I implore each and every one of you to look into the bill that I introduced yesterday. In three months' time, I will become a mother, and I will bring a baby girl into this world. I think often about the world she will grow up in. Will she feel that participating in public debate is worth the risk? Will she believe that her voice matters? Will she inherit an online environment where threats of rape and violence are dismissed as the cost of speaking up, or will she grow up in a society that clearly says, 'This crosses a line'?
We cannot control every individual who hides behind a screen and acts as a keyboard warrior. We cannot control every individual who hides their identity and continues perpetrating the harassment online. But we can ensure that our laws reflect the reality of digital life. We can ensure that, when someone reports targeted abuse, they're not turned away because the threshold is unrealistically high for a civil scheme. We can ensure that our regulator has the tools necessary to act.
We often speak in this chamber about the kind of country we want to build together. We speak about respect, equality and safety. The online world is no longer separate from the real world. It shapes reputations, careers, relationships and mental health. When threats to rape and kill are normalised in comment sections, when deepfake pornography circulates without consequence and when victims are handed pamphlets instead of protection, trust in institutions erodes.
The statutory review has given us a road map. The recommendation is clear. The evidence is compelling. The harm is real, and the experts have been speaking to us and making sure that we understand that this will not just go away. The bill I introduced yesterday, like I said, is proportionate, it's measured, and it's evidence based. It ensures that the adult cyberabuse scheme functions as intended, and it sends a signal that this parliament will not ignore the lived experience of targeted online abuse.
Colleagues, we have a responsibility to consider what kind of digital environment we are shaping. In three months' time, when I look at my little baby girl, I want to be able to tell her that, when women are being silenced by threats and intimidation, this parliament did not look away. I want to tell her that we chose dignity over indifference, that we chose protection over paralysis, that together we chose to act to stamp out this harassment and abuse. I commend the motion to the Senate, and I urge each and every one of my colleagues to support the passage of the bill when it comes to be debated in the future. Thank you.
9:16 am
Sarah Henderson (Victoria, Liberal Party)
It's my pleasure to rise and speak on what is a very important issue for all Australians, and that is the online safety of Australians. We all want to know that, when we go online, we are not going to be subjected to some of the behaviour that we've just heard described in Senator Payman's speech.
I want to start by making a couple of important corrections to the representations that Senator Payman has made. Firstly, I think it's very important to understand that there is the criminal law in this country. Many of the terrible things that Senator Payman outlined—threats to kill, threats to menace, deepfake porn and the like—are already covered by very strong existing provisions in both our civil and criminal law.
Of course, starting with the criminal law, it is an offence to threaten to kill, to threaten to harm or to menace, to harass or to intimidate someone using a carriage service, and there are very significant consequences for that. It was never the intention of the parliament to enact the online safety framework and cut across the critical role that the criminal law plays in protecting people from those sorts of heinous acts, regardless of whether they are online, on the street or otherwise in the community.
I also want to briefly reference—and I congratulate Senator Payman on her pregnancy; it's very exciting that she's about to give birth to a baby girl—what Senator Payman has gone through personally. She talks about what she endured after she crossed the floor, and I make the very brief point that I think a lot of what she endured was isolation and condemnation by fellow members of the Labor Party after she exercised her conscience, crossed the floor and was forced to leave the party—or expelled. I want to put that on record.
I also want to make the very important point that the adult cyberabuse scheme in the Online Safety Act was designed to target serious, targeted harm, not lawful disagreement. The statutory test requires that material be intended to cause serious harm—harm that is serious, not trivial, subjective or merely offensive. That high threshold was not accidental. It was a conscious safeguard to prevent regulatory overreach, so I want to flag, in relation to the private senator's bill, that lowering the definition of adult cyberabuse risks converting a harm-based safety regime into what I would say is, potentially, a speech-policing mechanism.
In relation to the first example that Senator Payman gave, concerning some of those heinous threats that were made, I am interested to understand the detail, because there is no doubt that, where there is any suggestion of criminal conduct or doxxing, the eSafety Commissioner does have very important powers to act, but it sounds to me, Senator Payman, as though these are principally criminal matters. As I say, there are very important provisions in our criminal law to deal with that sort of conduct.
I do want to raise my concerns in the context of the full Federal Court decision handed down just a couple of weeks ago, in which the court ruled in favour of the children's rights activist Celine Baumgarten, finding that the eSafety Commissioner had improperly issued a take-down notice to X seeking to remove a post in which Ms Baumgarten had raised concerns about a queer club at a Melbourne primary school. This was a damning finding against the eSafety Commissioner, because what we know is that the eSafety Commissioner has been using these so-called 'informal notices', writing to the online platforms, saying that posts are not within the terms and conditions of the platforms' operations and requesting take-downs of those posts. However, what the court has found is that those informal notices did actually constitute take-down notices and that they were, in fact, in breach of the Online Safety Act because they did not meet the threshold for adult cyberabuse. So what the eSafety Commissioner has been doing, in hundreds of cases every year under the adult cyberabuse scheme, is issuing notices that don't comply with the law.
I do want to put on record that the eSafety Commissioner's office has now changed the way it goes about issuing those informal notices—after Celine Baumgarten took that matter to the Administrative Review Tribunal—making it very clear that there is no obligation for the platform to take any action and that it's a voluntary scheme only. That's in stark contrast to the sorts of notices that were previously being issued, which, of course, the full Federal Court found were unlawful. Those notices included a reference to section 7 of the Online Safety Act, suggesting very clearly that the conduct complained of was a breach of the Online Safety Act, specifically the cyberabuse scheme.
One of the recommendations in the Online Safety Act review is recommendation 14. That says:
For the avoidance of doubt, the legislation should make it clear that informal requests for takedown are legal and legitimate as they lead to quicker results for individuals who are often in severe distress.
Well, clearly, we've now had this full Federal Court decision, which makes it clear that you can't turn something that's unlawful into something that's legal just by stating so. Recommendation 14 is, I would put it, a way of trying to circumvent the current law, which the parliament, as I say, has very deliberately crafted to ensure that the eSafety Commissioner has powers to act in limited cases.
What I'm concerned about in relation to the Celine Baumgarten case is that she has, in my view, raised very legitimate concerns about extreme gender activism at a primary school. In her post, she took significant issue with the fact that the school, she felt, was indoctrinating young children aged between eight and 12 in radical gender ideology. She said:
Children should NOT be learning about sexualities at such a young, impressionable age.
This is foul. Leave the kids ALONE.
She certainly did identify a particular teacher. This teacher had actually published some information about the queer club in a school newsletter, and this, of course, ended up online.
My concern about lowering the threshold is that there is a real risk that, if we turn the adult cyberabuse scheme into one which prevents someone from being deeply offended, we are then getting into a whole new world of stifling free speech, and we know that this is a fundamental right for all Australians. This is a fundamental right. I have to say that we are proud of the intention to combat some of the risks that we are seeing now online. As I say, the online safety of all Australians is incredibly important, but in recent times we have seen a number of decisions by the eSafety Commissioner that I would suggest are really a bridge too far. They are really stepping into the area of stifling free speech, and that is a fundamental right of every single Australian.
I want to refer to the evidence that the eSafety Commissioner gave in estimates when I was questioning her about this case. This, of course, was before the full Federal Court made this decision. In a statement that I released on 25 February I said that the eSafety Commissioner should clarify her evidence, because, when I asked about Ms Baumgarten's post, at one point the eSafety Commissioner did actually say that she thought this was adult cyberabuse and then she corrected her evidence. So it was quite confusing, and then the general counsel for the eSafety Commissioner made it clear:
We didn't see it as adult cyberabuse. That's our assessment: it wasn't adult cyberabuse.
We have to get the balance right. That is critically important. The Federal Court scrutinised the limits of the eSafety Commissioner's powers under the act, and I think that what this case has illustrated is that we've got to be very careful to ensure there are objective legal standards. There's a lot of constitutional sensitivity surrounding the regulation of online speech, given there is an implied freedom of political communication in our Constitution, and, if that threshold were lowered, there are real concerns that, in a case such as that of Celine Baumgarten, concerns about freedom of speech would intensify.
There was another case, involving a person by the name of Chris Elston, known as Billboard Chris. That case also demonstrated overreach: when he took the eSafety Commissioner on, she was again found to have exceeded her powers. This was back in 2023-24, when the eSafety Commissioner issued a removal notice over a post that referred to an Australian transgender activist. The commissioner formed the view that the material constituted adult cyberabuse under the act, and the post was geoblocked in Australia following the notice. This was challenged in the Federal Court and, again, the decision was made that this material did not meet the statutory threshold of being intended to cause serious harm. That is the really key issue here. The act provides that there must be an intention to cause serious harm.
If we were to adopt the approach of Senator Payman in relation to conduct which causes deep offence, then we would be straying into a very, very different world. Someone can say something where there's no intention to cause offence, but, by changing the act quite dramatically to include that and many other recommendations in the online safety review, I think we are then facing a whole new world in terms of the government's obligation to protect freedom of speech. I certainly think there is merit in looking at issues where perhaps there are some gaps in the law. Senator Payman identified that there was no room for the eSafety Commissioner to act because it involved an organisation, not a person. I'm not suggesting that there is not merit in perhaps having a look at some aspects of the Online Safety Act, but I am concerned that we are entering a whole new world where the eSafety Commissioner is using the Online Safety Act in a way that the parliament did not intend. That, as I said, is a bridge too far.
We've got to remember that serious offence is not necessarily harm. Democracies can be noisy. Conversations can be robust; they can often be uncomfortable. Political speech can be confronting, offensive, passionate, even harsh. But offence is not the same as cognisable harm. So, if subjective distress becomes the benchmark, almost any controversial view could be suppressed. This could have a chilling effect on public debate. We cannot lower the threshold to cross this bridge. It is a bridge too far.
9:31 am
Sarah Hanson-Young (SA, Australian Greens)
I rise to contribute to this debate put forward by Senator Payman this morning. I want to thank Senator Payman for bringing this issue forward, because it is important. Almost 18 months after this government introduced the social media ban—a very blunt instrument—they have failed to respond to the 67 recommendations of the Online Safety Act review. These are 67 recommendations which go to the very heart of what needs to be done to make our online world safer for everybody—safer for children, safer for adults, safer for young people, safer for old people, safer for women, safer for men, safer for everybody. But the government has been dragging the chain on implementing these recommendations precisely for some of the reasons that Senator Henderson has suggested. They don't really want to take on those who profit from and use the online space to peddle their fear, to peddle their hate, to peddle their misogyny, all under the false claim of freedom of speech.
If you're not allowed to abuse people and threaten people and hurt people in the supermarket, in the street, at your local sporting club, in your schoolyard or in your workplace, why on earth should you be allowed to do it in the online space, which now, of course, is the public square? There are big, big forces, vested interests, pushing back against holding those who profit off this hate online to account. The big tech companies—those that are making massive profits of billions and billions and billions of dollars a year—don't want these types of laws enacted. They don't want the eSafety Commissioner to be able to do the job effectively, because they are making billions off seeing hate, misogyny and abuse circulate and go viral online. The entire business model of social media companies and the digital tycoons is fuelled by individuals spreading hate, abuse and nastiness. It is why we need not only to ensure the eSafety Commissioner has more powers to do the job, crack down, hold individuals to account and take down damaging, abusive, dangerous content, but also to ensure that, as individual users online, we can control what comes into our feeds.
The algorithms that power our social media feeds and the online space are all designed to profit from fear, misogyny, racism, hatred and abuse. The tech companies—Elon Musk, Mark Zuckerberg and all of those big tech billionaires—have monetised fear and hatred. They do it through their algorithms. They control how viral it goes and how far that hate and abuse spreads. They turn it up or turn it down. They don't care about that end user—they don't care about us as individuals online, as people, as humans. All they care about is how much money each post, each eyeball or each click is making for them. They have become merchants of hate and fear, and they use their algorithms to profiteer.
That is why we need to implement not just these recommendations—and the government needs to stop dragging its feet—but laws that allow each individual user to control their own algorithms. We should be able to make the choice as to what we want to see in our feeds or not. We shouldn't have advertising rammed into our feeds even when it's dangerous and harmful. We shouldn't be having young people and kids bombarded with gambling advertising online, or alcohol ads, or ads that promote pornography sites, or ads that feed into young women's unhelpful and unhealthy body issues—or ads for quack medicine, phony drugs that help you get skinny or phony drugs that make you happy. We shouldn't have these companies being able to dictate what type of hurtful information is pushed into our feeds. If we don't want to see it, we should be able to turn it off. We should be in control of our own feeds. We can only do that if we regulate and make it legal for us, as customers—as the user, as the individual, as the people—to be able to control what we see and what we don't see. We should be able to turn off misogynistic posts. We should be able to turn off racist rubbish. We should be able to tune out of content that is unwanted and unwarranted. The tech bros don't want us to have control of our own feeds because they want to be able to push whatever nasty and viral content they want that is making them maximum profits at the time. They want to be able to control what goes into every single one of our phones, our tablets and our smart televisions.
This is about choice. Yes, we need to crack down on harmful and hurtful and abusive information, comments and content. People who are posting that abuse need to be held accountable—but those who are making money from this hatred are those that really need to be taken on. We have to hit them where it hurts, and that is their business model. They should not be able to ram advertising down the throats of minors online. They should not be able to control everything that comes into your feed or my feed or our children's feeds online. It is about choice.
When I hear the coalition and the coalition spokesperson stand here and make contributions about freedom of speech—what about the freedom to choose? What about the freedom to have control over our own algorithms and not just have it outsourced to billionaires that are making billions and billions and billions of dollars in profit at the expense of the safety of women and the safety of people in vulnerable situations—those that are struggling with eating disorders, those that are struggling with racist comments and abuse, or young men who just want to be young men growing up in the world and figuring out how they fit? They shouldn't be bombarded with misogynistic rubbish that tells them the only way to be a man is to be a bully and the only way to be a man is to be abusive to women. They should be able to opt out of that. They should not be bombarded with misogyny just because it makes people like Mark Zuckerberg millions and billions of dollars.
I urge the government to get on with the important steps of reform in this space. They promised to move on a duty of care bill. We still haven't seen it. Where is it? Eighteen months later, we've still got nothing. It is as if they introduced the social media ban for under-16s and thought: 'Oh well, job done. Move along. Nothing to see here.' Meanwhile, abuse, hatred, misogyny and racism thrive online. But they think they have done something.
There's a reason that the social media companies didn't want the government to touch their algorithms and the transparency of how they work and of what data is being used. As individual users online, we should have control of our own data. We should be able to decide whether it gets sold to advertisers. We should be able to choose who gets to see it and who doesn't. We should be able to choose what's in our feeds and what isn't, what we want to see and what we don't. That is freedom. Spare me the tears from the coalition about freedom of speech. How about freedom to choose what is on our phones and what isn't, and what Facebook, Meta, Instagram, Twitter and TikTok can make money off: our content, our children's content, our young people's content and our mums' and dads' content? Australians should have the right to choose whether our data is used for advertising and what our algorithms show in our social media feeds. That would instantly make a big difference to the abuse that we see towards women, towards people of colour and towards minorities. It would see a huge decrease in the amount of abuse online, because there wouldn't be people making money off it. You want to make the online world safer? You have got to hit them where it hurts, and you have got to stop the merchants of hate and fear from being able to make money off our social media feeds.
9:44 am
Matt O'Sullivan (WA, Liberal Party, Shadow Assistant Minister for Fisheries and Forestry)
The coalition opposes this motion and is likely to oppose the private senator's bill introduced by Senator Payman, the Online Safety Amendment (Broadening Adult Cyber Abuse Protections) Bill 2026. But I just want to say, having listened to Senator Payman's contribution on this, she presented a very thoughtful and considered speech. I think that what's behind the bill and what you're trying to achieve is notable and definitely worth commending. While I disagree with what it might achieve and how it would achieve it, I do want to seriously commend you, Senator Payman, for the work that you've done.
You raise a very important issue. I've got children, and they've come through those very difficult teenage years and are older teenagers now. But I've watched them and their friends deal with the challenges of the online world that, as a 47-year-old, I didn't have to deal with when I was a teenager. Young people these days are dealing with things that obviously generations before them didn't have to deal with. It is important that we tackle this issue. It's important that the government deals with this seriously. It's a global issue and it's something that needs to be dealt with.
This issue of online safety has been recognised by the coalition and indeed also by the Albanese Labor government, which started the Statutory Review of the Online Safety Act in November 2023. But that was 2½ years ago. Ms Rickard was appointed to undertake that work and provide a report to the minister by 31 October 2024, which was done. Ms Rickard's report was extensive, and more than 150 public submissions were received. The report, all 200-plus pages, made 67 recommendations, and, while the report was completed in October 2024, the minister sat on it for nearly six months before tabling it publicly in parliament in February 2025. That was over a year ago. Despite receiving the report over a year ago, the minister's most recent statement notes that the government is 'continuing to carefully consider' its recommendations.
You could be critical of the government for taking so long. You could be critical of the government for sitting on this, for sitting on its hands, and criticism is rightfully due to be put onto the government. But the government is right to carefully consider these matters because matters of freedom of speech are critical to a functioning democratic, civil society. This is a vexed issue. Of course we must protect children and of course we must put in place measures that provide that protection. No-one wants to see children, teenagers, put at risk online, but, if we overreach when it comes to restrictions on speech, then who knows what the consequences could be?
I'm pleased that the government is carefully considering it. They shouldn't use that as an excuse not to take the proportionate action that's required, but I urge them to carefully consider these things. If we just run at a million miles an hour into implementing some of the recommendations in that report that might have unintended consequences and a chilling effect on freedom of speech, then the consequences for this country—and indeed we're leaders in the world on this—would be significant. We absolutely need to deal with these things carefully.
But we do need a digital duty of care. The government has called on the public to have their say, but what's happened since? What's the government actually doing? I think it's right that Senator Payman and even Senator Hanson-Young—while I disagree with a lot of what she was saying, there were some elements, which I'll come to in a moment, that I do agree with, particularly when it comes to choice; I absolutely agree—are critical of the government. We want to know what you're actually planning to do and we want to see action. Parents want to see action. Parents want to understand what it is you're actually going to do, and of course freedom lovers want to understand it too, because we know that there is a propensity, particularly with this government, to overreach when it comes to impinging upon our rights and freedom of speech. So we want to make sure that what you're going to do and implement is appropriate and proper.
Children are of course the ones that are most vulnerable to cyberbullying, grooming and other nefarious online abuse. It's not just children though. Adults—particularly vulnerable adults—are susceptible. So it is right that the responsibility is put upon platforms to prevent this online abuse from happening in the first place. Now, while the Minister for Communications is busy attending sporting events and jetting off overseas on promotional tours, Australians at home are still waiting on the Albanese Labor government to take appropriate steps and action.
I want to go to something that Senator Hanson-Young said about choice. I think she's actually right when she says that users of these platforms ought to have choice over what they are fed on their feeds and over what information of theirs is then shared and used. I think that's right. In fact, I don't just think it; I absolutely agree with that. It should be the domain and right of the individual, of the user. I worry that, if we're giving the choice to the platforms themselves or, dare I say, to government or to the eSafety Commissioner to determine and make the ruling on what information can be shared and spread on people's feeds, that's where we start to go into dangerous territory. But we absolutely should have these platforms enabled to give greater choice and transparency to the individual users. It actually is the epitome of freedom of choice and freedom of speech when we as individuals are empowered to make choices about what our information is used for and what information we receive. Absolutely, it should be the right of parents to have more than a say and to put in place the controls that are necessary to manage the information that's being passed on to their children.
The review acknowledged that freedom of speech must be protected in order to preserve online safety. Senator Henderson, my colleague, has called for a full investigation into the eSafety Commissioner after the Federal Court ruled that the regulator exceeded its powers by issuing a takedown notice to X—Twitter. The case Senator Henderson referred to involved the children's rights activist Celine Baumgarten, who posted concerns in 2024 about a Melbourne primary school teacher promoting gender ideology. Despite an internal finding that the post did not meet the legal threshold for adult cyberabuse, the eSafety Commissioner sent X a complaint alert resembling a formal removal notice citing the Online Safety Act. Both the Administrative Review Tribunal, the ART, and the full Federal Court found that the commissioner acted outside of her statutory authority, leading to the post being reinstated.
This raises serious concerns about free speech, the misuse of regulatory powers and whether the commissioner attempted to suppress views about gender activism in schools.
Adult cyberabuse is a significant and serious issue, but, if a post does not reach the requisite threshold, what the eSafety Commissioner did was effectively to stifle free speech, when we know free speech is, as I have already discussed, one of the most important and fundamental rights that we have in this country. If we undermine this, we actually undermine the very fabric of the society that we live in. You only need to look at countries across the world—none more so than what we're seeing on our televisions right now, as I stand here—to see oppression at its extreme when it comes to freedom of speech, where you see a country eroded.
So it is of course of great concern. It's something that the minister needs to answer. The minister needs to answer how often the commissioner has improperly used take-down notices and whether individuals have been properly informed of appeal rights. Importantly, we really should have an answer on what the total cost is to the taxpayer for the legal proceedings.
In conclusion, the government should not just shove this report in the bottom drawer; they should respond in a considered way, absolutely, as I have outlined. The report raises serious concerns around online safety, particularly as it pertains to children and young people. This government needs to be accountable in how it stewards the safety of the next generation. In the same breath, though, we cannot hide censorship behind the mask of safety. The coalition will always seek to protect Australians' freedom of speech, and it's committed to ensuring that the measures implemented by this government do not encroach on this very important and fundamental right.
9:56 am
Tammy Tyrrell (Tasmania, Independent)
I will start by saying I will support Senator Payman's Online Safety Amendment (Broadening Adult Cyber Abuse Protections) Bill 2026, which makes it easier for people to take action on online safety concerns without damaging freedom of speech. We need to take a proper look at our Online Safety Act and make it better reflect the independent review's recommendations. The government has taken great strides in improving online safety over the last few years, like fast-tracking the legislative review of the Online Safety Act, increasing funding to the eSafety Commissioner and committing to a digital duty of care. But we do need to acknowledge that Senator Payman's private senator's bill is fixing a problem that the government has failed to fix.
It's been more than a year since the release of the report of the independent review of the Online Safety Act, providing the government with 67 recommendations on how to keep Australians safe online. That means it has been more than a year since the government was handed an easy-win how-to guide on keeping Australians safe, yet the government hasn't even provided a formal response, let alone taken any action to improve our Online Safety Act. It has sat on its hands and relied on crossbenchers to follow up and do the work for them. That's what's happening here. Senator Payman is just trying to implement one of the key recommendations from the review. She's now moving a motion to make the point that we need action to protect Australians online.
The government isn't acting fast enough. The one thing we did get from the government off the back of the Online Safety Act review was the commitment to a digital duty of care, but we haven't heard much from Minister Wells since the very brief consultation last year. Where is it up to? When will it be ready? A digital duty of care is really exciting; we need to have it as soon as possible, whilst ensuring we get it right. It would put the onus of protection back on the platforms, not on victims online. It would use Safety by Design to actually fix the problem at the source—to fix the problem at the system level. It would overhaul the system to be a proactive one, rather than just tinkering around the edges with our current whack-a-mole regulatory approach. That's real change, not just political speak.
And hey, yes, we passed stronger hate-speech laws earlier this year, but when are we going to take stronger action on hate speech posted online by bots? We don't even know when a robot posts or when a human posts. Bots don't need to sleep. They can post 24/7, artificially amplifying hateful and divisive content. That hurts our social cohesion, all without us even knowing what is coming from a human being and what's coming from a robot. That's why we need to make social media platforms label bot accounts. This wouldn't censor content, and it wouldn't silence free speech. It would just let people know who they're really talking to online and who is commenting and posting on their news feeds. It's a simple change that would make a big difference to keeping Australians safe online.
We know that, last Friday, the Attorney-General met with all her state and territory counterparts to discuss how to stop online hate. What was actually achieved at that meeting? Where's the announcement? Once again, it's radio silence from the government when it comes to protecting Australians online. For over a year now, you have sat on a report on how to stop online hate, and you know there are easy solutions, like labelling bots on social media, so where's the action? Why are you wasting the time of the attorneys-general? The people want less talking and more action, please.
To the government: I urge you to please be bold. Don't leave it to the crossbench to follow up and action reports from legislative reviews. Do your jobs and release a government response to the review that you commissioned. Be transparent about where work is up to on the digital duty of care, and, in the first instance, make the platforms label bot accounts online. Give the people the information they need to stay safe online.
10:01 am
David Shoebridge (NSW, Australian Greens)
I want to thank Senator Payman for bringing this motion on and bringing the issue to the Senate. It was in October 2024 that the Labor government got the statutory review, which says, at its core, that we need to do something to keep people safe online. That's what the review says—things need to happen to keep people safe online. One of the core recommendations was to implement a digital duty of care and have at least some underpinning for decency and standards and protections online.
That was in October 2024. The world is moving quickly when it comes to the online space. I don't know if the Labor government hasn't noticed this, but, since October 2024, there have been major changes—this constant rolling maul of technological change, the expansion of AI and the creep of that into even more parts of our online world. We need a nimble government willing to actually get in front of this or, at the very least, not be 10 or 20 years behind it. We absolutely need action from the government.
They keep saying that they're doing something on a digital duty of care, an online duty of care. They keep saying that, but where is the consultation? There has been no effective public consultation and no identifiable public process. Are they going to do what they did 18 months ago when it came to banning children from social media? Are they just going to rush into parliament with a thought bubble which won't achieve their policy outcomes? Is that their plan—just wait until a further crisis develops and then rush in a poorly drafted, poorly conceived, ineffective piece of legislation on a digital duty of care and say, 'Problem solved'? I fear that's the actual plan, if you could call it that.
Why do we need to be taking this seriously? Why should government be taking proactive steps to protect us from some of the worst risks of AI? Why should we be expecting the parliament and the government to be taking steps to protect us from the obvious harms of AI? It's because we're seeing the damage happening right now. We're seeing chatbots that are reinforcing delusions which potentially lead to psychosis. We are seeing chatbots encouraging suicidal ideation and self-harm. We are seeing chatbots, AI-run chatbots, literally facilitating sexual harassment and the grooming of minors, and we are seeing, across the online world, AI promoting misinformation and extremism. And the government's response is: crickets, nothing, some vague statements about a digital duty of care that perhaps might come out at some future time without consultation with the public.
Well, I say, and the Greens say, that we need to start this urgently now—a public consultation on duty of care, with a concrete proposal before us that will at least put some standards in the online world that we can hold platforms to account for, such that we can actually give people some protection and not just do what Labor is doing at the moment, egged on by One Nation and egged on by the coalition, who say, 'No rules: whatever the billionaires want, whatever the big platforms want; we're just going to sit frightened in a little corner and let the online world descend into hatred, extremism and online violence.' That is not an answer.
We need a digital duty of care now—one that works, one that holds these platforms to account—and for once to have this parliament and this government stand up to the billionaire backers of this AI attack on our basic freedoms and liberties and stand up for the rights of ordinary Australians.
10:06 am
Tyron Whitten (WA, Pauline Hanson's One Nation Party)
I'd like to start by saying that I wholeheartedly agree with the eSafety Commissioner's goal of removing child exploitation material online. This is absolutely what we want to see as a nation, and I believe that harsher punishments are needed for those who produce or access this material. However, the scope of the eSafety Commissioner, Julie Inman Grant, has been expanded well and truly beyond the safety of children online. She has turned the agency into a worldwide censorship machine that is being used to target political enemies.
Free speech is a fundamental value of the Australian culture. It should be in the Constitution. But the uniparty does not want the Australian people to have the protection of the Constitution. After all, it was the coalition that appointed Julie Inman Grant to patrol the Australian public under the guise of protecting them. The uniparty is not for the people. It is not for the rights of Australians to say what they think and to see the events of the world and judge for themselves. It is a blight on our country that we allow bureaucrats to decide what Australians are mature enough to see.
This hasn't gone unnoticed around the world. The US Congress has demanded that the eSafety Commissioner, a US citizen, present herself before Congress to give evidence in its review of worldwide censorship regimes. In his request, Jim Jordan, the chairman of the House Judiciary Committee, refers to the eSafety Commissioner as a 'noted zealot for global takedowns' and cites her travel to the US in September 2025 to be a keynote speaker at a Stanford University event. I would like to read a passage from the letter from Congress about how they viewed the event:
The stated purpose of this event was to 'bring … together policymakers, academics, and experienced Silicon Valley experts to discuss the state of compliance and enforcement of existing regulations related to online trust and safety'.
Put plainly, the roundtable sought to facilitate cooperation with global censorship by bringing together foreign officials who have directly targeted American speech and represent a serious threat to the First Amendment.
This clearly shows the antagonistic effect that the eSafety Commissioner has had on one of our most important allies—an ally that has much more regard for the freedom of its people than the government of Australia does.
What has the eSafety Commissioner been ordering to be taken down that has prompted such strong language from the US? It was the video of the stabbing of Bishop Mar Mari, an Islamist attack on a Christian bishop and an example of the government's complete failure to address Islamist extremism, that they decided should be censored, not only in Australia but around the world. And videos of the slaying of Iryna Zarutska and the assassination of Charlie Kirk were geoblocked in Australia—once again, both videos that the far left of politics found inconvenient to their cause. I asked about the blocking of these videos during estimates. I was told they were blocked because of the National Classification Scheme and that the content was unclassifiable and therefore not suitable for anybody to view.
There is some important work to be done by the office of the eSafety Commissioner. However, the scope needs to be limited and monitored closely. We have seen an appetite for this office to reach beyond its duty to protect children and into the realm of ideological censorship. Adults do not need a government guardian to tell them what is and is not safe for them to view. If they knowingly view anything that is child exploitation material, they should be subject to the full force of the law. It is up to parents, not the government, to decide what is suitable for their children. No-one loves kids more than their parents do—certainly not the government. I have no issue with education, but taking away parents' rights to choose what is right for their children is gross overreach. Provide parents with the tools and leave the parenting to them.
Last of all, I would like to point out the ridiculous cost of eSafety's failed litigations. In seeking to extend the ban on the video of the stabbing of Bishop Mar Mari, they were made to pay out over $600,000, not including their own legal costs. Australian taxpayer money was spent trying to have that video censored. And eSafety ordered that Billboard Chris's criticism of transgender ideology used on children be taken down. It's a criticism that most Australians would agree with. That was complete ideological censorship, and it cost the Australian public $66,000 in legal costs. The recent case of Celine Baumgarten, who raised the issue of a queer club being advertised at a school, had an estimated cost of $50,000.
What a waste of taxpayer money, which should be used to pursue the eSafety Commissioner's real objective of removing child exploitation material. The recommendations of this report do not address the need to severely limit the scope of this office to what it should be focusing on, away from government censorship. One Nation would completely reform the eSafety office for this purpose. We would scrap it and start again if necessary. Australians do not need censorship. We do not need the government to parent our children. If you don't like what's online, switch off. Stop the censorship now.
10:11 am
Malcolm Roberts (Queensland, Pauline Hanson's One Nation Party)
Senator Payman's motion calls for the recommendations of the review into the Online Safety Act to be fully implemented. One Nation agrees in part. We agree with supporting the work of the eSafety office, which was in place long before Julie Inman Grant took on the position of global internet commissar. We do not support the eSafety Commissioner's crusade to become the world internet policeman.
Dave Sharma (NSW, Liberal Party, Shadow Assistant Minister for Competition, Charities and Treasury)
The question is that the motion as moved by Senator Payman be agreed to.