Senate debates
Thursday, 5 March 2026
Statements by Senators
Cybersafety
1:56 pm
Tammy Tyrrell (Tasmania, Independent)
On 20 January, new hate speech laws were passed in the wake of a horrific, hateful act of antisemitic terrorism. Whether people agree with those laws or not, the conversation reflects a broader need to deal with hate and violence in Australia, including online. But, with how complex our online world has become, laws alone can't do all the work. Alongside regulation, we also need sensible, commonsense ways to help people understand what they're actually seeing, especially online, where so much hate is spread. One simple step I think we need to take is labelling bots on social media. Right now, when I scroll through posts of Senator Hanson collapsing tragically on the Senate floor—which is green, by the way—it really needs to show whether a real person or a bot has made that picture. I know I'm not the only one. Bots can post around the clock, start arguments and amplify hateful views to seem more popular than they really are. Labelling bots wouldn't stop anyone from speaking. It wouldn't delete content or shut down debate. If you know a comment is coming from a bot—like Senator Hanson falling on the Senate floor—you can take that into account instead of assuming it reflects genuine public opinion.
That kind of transparency supports the aim of the new hate speech laws by reducing harm in a quieter, more practical way. Rather than relying only on enforcement, it helps people make informed decisions for themselves. It also ties directly into digital literacy. When people understand how automation and manipulation work online, they're more equipped to think critically and less likely to be pulled into hate or violence. To help combat hatred in this country, we need to look at our online spaces more critically, and that's why I'm pushing to label bot accounts so we know when we're engaging with real people and when we're engaging with machines.