Senate debates

Wednesday, 8 March 2023

Statements by Senators

Artificial Intelligence

1:13 pm

Ross Cadell (NSW, National Party)

Artificial intelligence, or AI, is a field of computer science that focuses on creating intelligent machines that can perform tasks typically requiring human intelligence such as problem-solving, speech recognition and decision-making. While the development of AI has the potential to revolutionise many industries and make our lives easier, there are also many dangers associated with this technology.

One of the main dangers of AI is the potential for job loss. As machines become more intelligent and capable of performing complex tasks, they will begin to replace human workers in many industries, particularly in those industries that rely heavily on manual labour and repetitive tasks. This has the potential to lead to significant job losses and economic disruption, particularly for workers who lack the skills or education needed to adapt to this rapidly changing job market.

Another danger of AI is the potential for bias and discrimination. AI systems are only as unbiased as the data they are trained on. If the data is biased or discriminatory, the AI system may be as well. This can lead to discriminatory outcomes in areas such as hiring, lending and criminal justice, exacerbating existing social inequalities.

AI also represents a risk to privacy and security. As AI systems become more prevalent, they will generate large amounts of data about individuals and their behaviour, which can be exploited for malicious purposes such as identity theft, blackmail or surveillance. Additionally, the security of AI systems themselves is a concern, as they can be hacked and are vulnerable to cyberattacks and the like.

Perhaps the greatest danger of AI is the potential for it to surpass human intelligence and become an existential threat to humanity. This scenario, known as 'the singularity', is a theoretical point in the future where machines become more intelligent than humans and are able to improve their own capabilities at an exponential rate, leading to an unpredictable and potentially catastrophic outcome. While the singularity remains a hypothetical scenario, there are already examples of AI systems behaving in unexpected and potentially dangerous ways. For example, in 2016, Microsoft launched a chatbot named Tay on Twitter that was designed to learn from its interactions with users; however, within just hours of its launch, Tay began tweeting racist and sexist messages, reflecting the biases of those who were interacting with it.

To mitigate these dangers, it is important that we develop AI in a responsible and ethical manner. This includes ensuring that AI systems are transparent and accountable so that we can understand the decisions they make and hold them accountable for their actions. It also means ensuring AI systems are trained on diverse and unbiased data and that their outcomes are continually monitored and audited for fairness and accuracy. Ultimately, the development of AI is a complex and multifaceted issue. There is no one solution that can completely eliminate all the risks associated with the technology. However, by being aware of the potential dangers of AI and taking proactive steps to mitigate them, we can ensure that this technology is used to benefit society in a safe and responsible manner.

Up until that point, every single word of that speech was written by AI. I simply asked an AI system to write a 500-word speech on the dangers of AI. It was 514 words, but, apart from that, it did a pretty good job. I'm not saying that the team of my staff in suite SG 108 should be concerned for their future, but the real drama is, given the potential influence of input data and bias on outcomes—the 'rubbish in, rubbish out' principle—how do we start thinking about the jobs of tomorrow, what AI will do and how it will work? What are the areas we, as legislators, need to think about focusing our education and training on so that we don't train people in jobs that will be replaced? What are the infrastructure and capital investment decisions we need to start planning and working on now?

Any government's first job is to protect its people, always. The use of AI in all aspects of commerce will only increase at an exponential rate. We don't need to start a bunch of training camps to train young John Connors to prepare for a battle against terminators in the future, but we do need to be cognisant that it would be negligent to leave large numbers of Australians exposed to potential replacement by machines and unemployment. The future of this is both exciting and concerning. We need to remain vigilant, we need to plan and we need to be ready so we can protect our people.