House debates
Tuesday, 10 September 2024
Committees
Employment, Education and Training Committee; Report
12:12 pm
Terry Young (Longman, Liberal National Party)
by leave—Firstly, can I thank all the members of the committee and the chair, the member for Bendigo, for their valuable contributions and probing questions. Can I also thank all of those who sent in submissions and gave evidence, either in person or electronically. Lastly, I'd also like to thank all the APH staff, including the technical media staff and the hardworking secretariat staff, led by Fran Denny, who worked tirelessly on what must be said was a complicated yet very enjoyable and intriguing subject: AI in the Australian education system, or Study buddy or influencer, as the report has now been named.
I tried to approach this inquiry from a perspective beyond that of the deputy chair of the committee or an elected member of parliament. I tried to see this from all the different perspectives: those of the student, the teacher, the business owner, the employee and the parent of a student, as well as a concerned citizen who may not be any of the above. One of the great challenges we face as legislators is getting the balance right. This is where we all have different views, even within our own party. This, of course, is one of the major benefits of the committee process: hopefully, with six to 12 committee members with different cultural, educational, geographical and vocational backgrounds, we can be challenged in our thinking to come up with a report that addresses the concerns of all of these different cohorts in our community. For me, as a former small-business owner and as a parent and grandparent, my greatest concerns were around getting the balance right: ensuring that future generations of Australians, from the business and employment point of view, won't be left behind the rest of the developed world in a highly competitive global market, balanced—when I put my parent and grandparent hat on—with ensuring that AI won't be used by governments, individuals, corporations or other organisations to coercively nudge our younger people in a particular direction.
After 18 months of listening to evidence and submissions, I must say that these concerns have not abated. In fact, they have only increased in their intensity. I have now seen the potential power of this technology, and, of course, if used in the correct manner, it will improve productivity, which is so important in business, particularly in a country like ours, where we have some of the highest, if not the highest, costs when it comes to employees, energy, red and green tape and tenancy, not to mention other costs associated with doing business, like insurance.
But we must ask ourselves: at what cost will these improvements in productivity come? Will they risk critical thinking? If problems are just submitted to platforms like ChatGPT and the like, and answers are just churned out according to the algorithms and parameters that the human developers of these platforms decide, where will students learn to simply figure things out for themselves? This is also where the moral issue of coercive control comes in, because, if the same answer is being given to the same question on one dominant platform, do we risk our individuality and basically become sheep? Or could it be worse, with platforms whose answers are always skewed to the left or the right, depending on which platform is being used? I have observed firsthand the development of social media platforms and YouTube, where I personally only receive recommendations that are all the same. This, of course, pushes people almost unknowingly further left or right in their thinking, which I think is very unhealthy for us as a society, as neither far-right nor far-left ideologies are healthy for individuals or society as a whole. Balance is always the key.
I also have concerns around the validity of the information. It must be remembered that AI draws its conclusions only from the most common consensus, mainly from the internet. In fact, ChatGPT was brutally honest about its own failure to present the correct information when I asked it, 'If ChatGPT had been around in 1500 AD and I asked it if the world was flat, what would the answer have been?' The very honest answer I was given was: 'As the general scientific and religious belief of that day was that the earth was flat, ChatGPT probably would have said the earth was flat,' which we all know is incorrect. Apologies to any flat-earthers out there!
That brings me to my final comments. Whilst I have real concerns about generative AI, they are somewhat curtailed by what I can only describe as my delight in the current generation, who have in some ways been forced to develop one of the best BS filters of all time. This generation have been bombarded almost since birth with information on devices handed to them, in many instances, way too early for young developing minds, in my opinion. Ironically, as often happens, this negative practice has produced the positive outcome of a filter that I believe will stand them in good stead as they navigate this brave new world, which will increasingly include more and more machine learning in our everyday lives.
My final words to all are to constantly ask the question: what is the source of this information I'm receiving? Question, question, question the information given, and don't take it as gospel. I commend the report to the House.