TER #238 – Teachers, AI, and the Law with Michael Waterhouse – 17 Jan 2024


Support TER Podcast at www.Patreon.com/TERPodcast

Lawyer and mediator Michael Waterhouse outlines potential legal concerns around the use of AI in education, and what teachers need to consider when engaging with AI in teaching and learning, and in all other aspects of school business.

Timecodes:

0:00 Opening Credits
1:31 Intro
10:48 Interview – Michael Waterhouse
1:23:05 Patron Shoutouts


Feature Interview Transcript (lightly edited for clarity; originally prepared by Otter.ai)

Cameron Malcher 0:00
I’m joined now from Waterhouse Mediation by Michael Waterhouse. Michael, welcome to the podcast.

Michael Waterhouse 0:07
Thanks very much for having me, Cameron.

Cameron Malcher 0:09
Now, in the world of education, you’ve had a somewhat unique perspective and career. Can you tell us a little bit about your background in education and the law?

Michael Waterhouse 0:21
Sure. Well, perhaps if I go backwards in time a bit: until about two and a half years ago, I was General Counsel at the Education Department in New South Wales, and I was in that role, or its predecessors under different names, for approximately 20 years. Before that, I was in a policy role in the department negotiating Commonwealth-state relations. And at various times, I was a political advisor and Chief of Staff for the Education Minister. So I’ve got a range of different backgrounds. And now I’ve come to the role of being, as you announced, a mediator. I’m mediating legal disputes, and various disputes which I would describe as kind of asymmetrical disputes between, let’s say, organizations and individuals. I’m also giving seminars for schools on education law issues, like the ones we’ll be talking about today. And I’m still a lawyer, although I’m not doing as much legal work as I once did. In the education department, we had the Legal Services Directorate, which dealt with all of the legal issues of the department, and we also used external law firms to help represent us in various litigation and so on. So it’s quite varied, and we dealt with all of the legal issues that a large government department might have to deal with.

Cameron Malcher 1:55
Oh, I’m sure there are many stories you cannot tell.

Michael Waterhouse 2:00
Privilege and confidentiality, very important.

Cameron Malcher 2:03
Yes, I can imagine. What we’re going to delve into today is a topic that is still quite hot in education: the rising generation of artificial intelligence technologies and their potential impacts on education, but obviously looking at some of the legal consequences. Now, for context, it’s currently early December at the time we’re recording, and it’s only in the last week that the federal government has published the first official version of its guidelines and principles for using AI in education, with principles like: AI must be used to enhance teaching and learning, and student wellbeing must be factored into people’s thinking about the use of AI, and things like that. So we’re going to talk about some of the legal issues that arise from some of these principles, drawing on your experience of education and the law, and looking at some of the places where teachers, schools and systems might need to be aware of what potential legal complications may arise from even well-intentioned following of some of those principles and policies. So to dive right into it, one of the big ones, and one of the first things that arises from the federal government’s principles, is the idea of duty of care. They talk about their principle of human-centered use of AI and of fostering student wellbeing. So why don’t we start there: how does the rise of AI potentially affect duty of care for students?

Michael Waterhouse 3:47
Sure. Well, it’s a very important question to consider for teachers and for schools. And I suppose if I could just make some broad general comments first. First of all, at one point you said it’s still a hot topic. Well, I think it’s going to be interesting for a long time to come; I think we ain’t seen nothing yet. It’s very hard for us to know where we are on this potentially exponential curve, or this tsunami. And I think that means, because of rapid change, we need to be thinking about these questions very regularly. The answers we provide today, or the thought we give to what might go wrong today, we may well look back on in five years’ time or less and think: boy, we were naive back then; we just didn’t realize what was happening. So we need to be revisiting and rethinking these all the time. That’s the first general caveat: if you are listening to this podcast in a year’s time, it might be very out of date.

The second thing I would say is that there are not yet, to my knowledge, new laws passed. I saw there was a report that the EU is trying to enact a law, and it’s having a lot of difficulty in getting its members together to support a common law; they’ve been negotiating it for three days and are now taking a break. The UK has issued a white paper. Now, I don’t know the content of the EU law, because they haven’t passed it, and the UK one is a similar broad framework, but perhaps in more detail than the Australian one for schools at the moment. But without any laws being passed, there’s a body of existing law governing Australia, and new circumstances will fit into that law. So if we take, for example, the duty of care, which was your question: in the national framework, that’s referred to in, I would say, point two, which is human and social wellbeing. Generative tools are to be used in ways that do not harm the wellbeing and safety of any member of the school community, that’s teachers and students or anybody else; diversity of perspectives, which might relate a bit more to discrimination; and human rights and individual autonomy, again, probably more about discrimination. So that’s 2.1, using AI in ways that don’t cause harm. Now, teachers will be aware, probably from their early training, and probably they’ve heard it again and again over time, that a teacher has a duty of care to their students. In a practical sense, it is the school authority that carries it. In the case of government schools, that would be the Department of Education; in the case of non-government schools, it might be the system of schools which owns them, or it might be the individual school, so it might be the Catholic diocese, or it might be the trustees for a particular school board or whatever. It’s that school authority which has the legal responsibility. And in New South Wales, and probably most other states, the employer has a duty to indemnify the individual employee against that liability. So it’s the school authority that has to take it on, but they enact that through their employees, their teachers. So teachers have a duty to take reasonable care for the students who are in their care, and that’s just a very solid, basic common law principle. Now, the question here with AI is: what counts as reasonable care?

And reasonable care is basically avoiding reasonably foreseeable harm. And what counts as reasonably foreseeable? That sounds like a kind of nebulous concept; reasonable could mean anything. Well, it’s what a reasonable person would think. So the question is: would a reasonable person expect a teacher to do something to protect a student from harm in the circumstances in which they find themselves? For example, if we talk about the impact of AI, we’re already aware that there are responsibilities for schools to have policies against cyberbullying. And cyberbullying is something that can potentially be made worse, or more effective, by the use of AI, and I’ve seen some examples around the world of that. For example, in Spain there was a case of some students who took photos of their fellow students, used an AI tool to place them on a nude body, and then sent those photos around the schools as a form of cyberbullying and sexting. That was quite a serious one, investigated by the Spanish police. I checked it; it wasn’t a hallucination of the AI, it was a real case. Those kinds of cases, where deepfake technologies can be used to create false information that is then used for cyberbullying purposes against other students, are one example. Now, you might say: what’s the school’s responsibility for that? Well, in New South Wales there’s a ban on the use of phones, so in one way they’re removing the possibility of that happening in school hours. But nevertheless, the law is unclear as to whether, outside school hours, if there’s cyberbullying between students, let’s say it arises in the context of a dispute at school, is continued overnight at home in their bedrooms, and then comes back to school, and if the school becomes aware of that and is not doing anything about it, or has no policies to prevent it, then it could be liable for a failure of duty of care, because it has brought the students together and, in a sense, the fact that they are students is the cause of the conflict between them. So that’s one of the examples. Another, again potentially related, issue is self-harm. In my view, and this is kind of anecdotal, I wouldn’t say it’s a scientific conclusion from my perspective, self-harm has become worse in recent years because of some of the ways in which social media tools have become addictive, and AI may add to that. The question is: can those things be prevented? How does a school become aware of it? Does it have policies and practices that will allow students to report potential harm, or to get help if they are addicted, or that kind of thing? So anyway, that’s a kind of preliminary answer to your question.

Cameron Malcher 11:14
Well, there is one thing you brought up there that I think is also interesting to explore from a legal, employment and even policy perspective. We’re talking about a federal government framework for the use of AI, but as you indicated just there, the federal government is not the one that employs people or that has the obligation to have these policies in place. It’s state governments, it’s Catholic systemic structures, it’s independent schools and their governing boards that have to have these policies in place. So with these principles that have been published by the federal government, are they basically some nice ideas that state governments and other education employers will be expected to try and incorporate into policies? And if you’re talking about it from the perspective of who has the legal obligation, how does that work in the dynamic between, say, a state government employing thousands of teachers and the federal government, which doesn’t?

Michael Waterhouse 12:12
Okay, good question. So looking at the policy, it’s not a federal government policy alone. I’m not sure of the exact name, the National AI in Schools Taskforce, I think, is what it refers to. And it’s signed off by all of the state education departments as well as the Commonwealth department, by Independent Schools Australia, the National Catholic Education Commission, and a range of other bodies that are all national and state or individual systems. So it’s not just a Commonwealth Government policy; all of those bodies have come together to agree on this framework. That’s the first point. The second point is, as you rightly say, with some small exceptions the Commonwealth Government is not the employer of teachers. It’s each of the state governments, or each of the independent school bodies, whether individual schools or systems of schools, who are the employers; there are different arrangements within different systems as to who the employer is. So generally speaking, it will be the school authority. I don’t think the Commonwealth is a school authority; there may be some individual Commonwealth schools in rare cases, there might still be a school in the Jervis Bay Territory or somewhere like that. And it is the owners of the schools, which will be the state governments or those non-government school bodies, who are charged with the legal responsibility.

Cameron Malcher 13:48
And to your understanding, is there any significant difference in the legal approach to issues of duty of care and self-harm taken by the large education employers in particular, the different state and territory governments, or the different dioceses of the Catholic system, for example? Or do we have general agreement on the legal accountabilities of education systems around the country?

Michael Waterhouse 14:19
As a broad generalization, the answer to the last question is yes: there is basically the same law across Australia. That’s for two reasons. One is that the common law, the judge-made law of our land, is one national common law, and the ultimate court of appeal is the High Court of Australia. There was, for example, only a month or so ago, a very significant High Court case that came out of New South Wales (it was actually a church-related case, but it would apply equally to schools). It was about the limitation period, which, after the Royal Commission into child sexual abuse, was abolished on its recommendation; basically they said there’s a new framework where there is no limitation period for a case of that kind. So that’s a case that would apply right across Australia: a common law personal injury case. For civil litigation for personal injury, it’s the same law across Australia, and that’s the main thing that gives rise to the duty of care. The other part is work health and safety law. Although that’s not completely identical across Australia, it’s intended to be a common framework where basically each state enacts the same law and the Commonwealth enacts the same law. There may be some states that haven’t enacted it, but I think most have. So there’s the obligation of an employer to provide a safe place of work, both for people who might visit the site and for employees. Again, that applies as a duty with respect to both staff and students, and that’s very common across Australia. So basically the answer is yes, with all that qualification.

Cameron Malcher 16:25
So going from that issue of duty of care: one thing that’s increasingly contentious in the age of social media and education is managing privacy, and obviously cybersecurity. The example that you gave, of someone using AI to create images of a student’s head on a nude body, obviously goes to people’s right to represent themselves, or to keep their images and data private. What other potential issues with privacy and the security of student images and data do you see arising out of AI in education?

Michael Waterhouse 17:09
Yeah, well, that’s a very interesting one, and I think the courts will need to work through it. I mean, there haven’t been any cases that I’m aware of in Australia about AI and whether someone has a right to protect their image in that sense. I believe there’s a national law which is administered by the eSafety Commissioner, and basically it prevents cyberbullying, both in relation to students and adults, and in terms of sexualized kinds of conduct. It was crafted for the circumstance where, let’s say, an angry ex-partner takes private images and publishes them, as a kind of revenge porn, as they have called it. It’s designed to combat that and create some serious offences around it. And I think the interesting question will be, if something isn’t in fact an image of the person, but it’s something very close to their likeness, whether they will have the capacity to combat that. I suspect they would, but there might be marginal cases. I haven’t looked deeply into that law, and I could do some more homework on it, but I suspect there would be marginal cases where the image is similar but not the same, and where the cutoff point lies, for whether an image is of another person or not, will be difficult to determine. I think that’s one of the areas where the courts will have some new principles and new thinking to encounter. More generally, in terms of privacy, that’s a kind of protection of someone’s personal reputation, and when it comes to reputation, defamation might also be a factor: a law that comes in to allow a person to take action against someone who has published an image of them. You might recall, though maybe the younger listeners won’t, that perhaps about 25 years ago there was a famous case of a footballer who had a photograph taken of him in the shower, a slightly fuzzy photograph, and he took a defamation action against the newspaper or magazine that had taken the photo and published it. His claim, and I think he was successful, was that his reputation was damaged because readers would take it that he had given permission for a nude photograph to be taken of him and published, and that wasn’t true. So there are those kinds of things you could use as a way of protecting yourself. But defamation, as you’ve seen from some of the famous defamation cases that have been around in Australia in the last year, one of which is ongoing at the moment, is a very clumsy tool to try to fix this kind of problem. So again, the eSafety Commissioner tool might be better.

Cameron Malcher 20:27
Sorry, you mentioned the 25-year-old case about the footballer and defamation, but when we’re talking about image manipulation, and some of the things we’re talking about now with AI, the case that comes to mind for me is more recent, I think about 10 years ago, when the comedians who were part of The Chaser team published an image of journalist Chris Kenny engaged in a sexual act with a dog. It was obviously a manipulated image, and he sued them for defamation, and they ultimately settled before it went to court. But on the idea of manipulated images coming into defamation: I think that was a case where the comedians were trying to push the boundary of what constituted satire and fell under the laws for satire and commentary, and ultimately came off second best. So that’s one case where it seems to have worked. Actually, here’s a legal question for you: if it’s settled before going to trial, that doesn’t technically set a precedent, does it?

Michael Waterhouse 21:38
No. What would set a precedent is a decision by a judge of a Supreme Court or above, a decision that is authoritative and has carefully considered the legal issues: normally a Court of Appeal, or the Supreme Court, or the Full Bench of the Federal Court, or the High Court. A matter that’s settled out of court is normally done without admission of liability on either side, and often the results are kept confidential unless ordered to be released, as has occurred in the recent case I referred to.

Cameron Malcher 22:16
But then, beyond those issues of defamation and image privacy, there’s also student data privacy. Again, at the time we’re recording this, there have been some rather concerning exposures of ChatGPT, where someone was able to manipulate the prompts of ChatGPT to reveal private user data that was supposed to be secure. So how do you see that as an area of concern?

Michael Waterhouse 22:45
That is a real issue. So just to give you a quick summary of the principles of privacy: there are quite a lot of them, but essentially it’s about protecting personal information, and personal information is information from which you can identify the person. So name, email address, maybe social media account addresses, photographs, DNA, medical information, etc. Anything that’s about the person, or from which you can identify the person. And with that, you have to seek permission to collect it, you have to hold it securely, and you have to use it only for the purpose for which you’ve collected it. You can only release it elsewhere if you’ve got a lawful excuse, which might be for law enforcement purposes, or a court order, or something like that. So that’s the general set of principles that apply in the protection of privacy. Now, different privacy laws apply to different levels of schooling: there are state privacy laws that apply to government schools, and there’s the national Australian Privacy Act, which applies to non-government schools and to private companies. In the case of ChatGPT getting the information, and then someone, let’s say, tricking it into revealing the information to a person who is not authorized to get it, that would be, in my view, a potential breach by ChatGPT of privacy law, and the Australian privacy law would apply to it. Now, from my general awareness, it appears to me that if schools are going down the track, and some are, of using ChatGPT or any of those other large generative AI models, they will need to be doing so under some kind of agreement with a company that takes responsibility for the privacy. For example, Microsoft is the main owner of ChatGPT, and it’s flogging the latest version of ChatGPT in what they say is a secure and encrypted environment, so that businesses, and potentially schools, can presumably rely on the contractual obligation of the company, whichever company it is (I’m not promoting Microsoft), to keep the data secure from outside hacking, to make sure it’s not vulnerable to release, and to prevent abuse, cyber hacking, and the kinds of data breaches we have seen, like, for example, the recent one with Optus. Now, I think the issue in my mind is this: there’s so much competition, as I understand it from media reports and so on, between these companies, and they want to get products out as fast as possible and embed the AI into their systems. Is there a tension between the rapidity of that kind of development and the processes they have in place, which they’ve probably evolved over quite a period of time, to ensure security? That will be a difficult thing for a school or a school authority to be sure of: whether those measures are able to continue to be guaranteed by the company they’re dealing with. My impression, generally, is that the major companies like Microsoft and Apple and Amazon and so on, who have data warehouses and have kept data, have developed pretty strong controls around that over the last 10 years or so. The question is whether any of that is compromised by the speed of development.

I think that’s probably not a question for individual teachers; it’s a question of how privacy is dealt with by the systems. From an individual teacher’s point of view, the questions are: are they collecting information? Are they encouraging students to use tools that might be collecting information? There, I think it’s more a question of warning students, warning parents, getting permission from students and parents if they are using third-party tools, and knowing what to do if they become aware of a data breach: who to contact, how quickly, who to make it known to, and how to stop it.
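(A practical aside on that last point: the short Python sketch below shows the kind of basic redaction a teacher might apply before any text goes into a third-party tool. It is illustrative only, with made-up patterns and example data, not anything from the interview; real de-identification is much harder than this.)

    import re

    def redact(text):
        # Replace obvious identifiers. Real PII removal is much harder:
        # nicknames, context clues and quasi-identifiers all still leak.
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)     # email addresses
        text = re.sub(r"\b\d{8,11}\b", "[PHONE]", text)                # long digit runs
        text = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", text)  # naive First Last names
        return text

    print(redact("Jane Citizen (jane@example.com) was absent on 0412345678."))
    # -> "[NAME] ([EMAIL]) was absent on [PHONE]."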

Cameron Malcher 27:48
And so I suppose, from the perspective of the individual teacher, it seems to me like the main consideration, especially at this relatively early stage of AI being picked up and used in education systems, is: until your employer signs off on or endorses a particular product or model, just be very careful of what information you feed into it. I’ve seen stories on social media of teachers using AI to assist with report comment writing, for example. Well, to do that would involve feeding in personal information about students. Maybe not identifiable information, but I suppose teachers need to consider whether the information they’re putting into the system might potentially identify a student, and whether that may actually be too much, or go beyond privacy restrictions, until your employer says: this is the tool we’re using for that purpose.

Michael Waterhouse 28:44
Yeah, I completely agree with what you say. If you’re putting information into the current publicly available ChatGPT, I have no idea who could potentially use that information. I’m not quite clear, but my impression is that the training can keep going on with all of the information that’s in the system, including everything that’s been put in; part of it being free is that it helps the ChatGPT system train itself to respond more favorably to the kinds of requests it’s getting from its users. Now, even if anonymized data is put into a system, an eight-year-old female student, say, and her circumstances and this, that and the other, it may be possible, because there are various other things that are not personal information but are somehow still associated with the individual, for the information to be triangulated. And whether an AI model could, through the right kind of prompting, be coaxed into triangulating and identifying an individual is something that’s unpredictable to me, because of what I would say is a kind of lack of transparency about exactly how the tools work. Now, those models do seem to have protective intentions: if you try to do nefarious things, if you try to do something that looks like it might be cyberbullying, it will stop you. There are certain things where the model will say, I don’t think we should be talking about that, let’s talk about something else. So it may stop me, but I haven’t experimented to see what it does if you try to coax that kind of thing out of it.
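(To make the triangulation risk concrete: this minimal Python sketch, with entirely invented records, shows how anonymised-looking data can still single out one student once a few quasi-identifiers are known. It is a toy illustration, not anything discussed in the interview.)

    # Invented, anonymised-looking records: names stripped, but
    # quasi-identifiers (age, sex, class group) retained.
    records = [
        {"age": 8, "sex": "F", "group": "3B", "note": "needs reading support"},
        {"age": 8, "sex": "M", "group": "3B", "note": "excels at maths"},
        {"age": 9, "sex": "F", "group": "4A", "note": "frequent absences"},
    ]

    # Facts an outsider might already know about one particular student.
    known = {"age": 8, "sex": "F", "group": "3B"}

    # 'Triangulation': keep only the records consistent with those facts.
    hits = [r for r in records if all(r[k] == v for k, v in known.items())]

    if len(hits) == 1:
        # A unique match means the supposedly anonymous note is re-identified.
        print("Re-identified:", hits[0]["note"])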

Cameron Malcher 30:40
Okay. Well, it sounds like, as you said at the beginning, this is one of the areas where there’s still a lot of development and a lot of consideration to be given to policy and legal ramifications. So I wouldn’t mind moving on to the application of AI that is probably most front of mind for teachers, which is its impact on teaching and learning practices, particularly to do with assessment integrity and things like authenticity of work. And obviously, when it comes to assessment, the issues of cheating and plagiarism come into play. I’ve seen some hugely varied policies when it comes to AI and education. Just to tell a couple of anecdotes: I know of one university, for example, that has fully embraced AI-supported writing for students, while other universities are choosing to implement very strict use of Turnitin AI detection, even with its identified issues in actually identifying AI writing. So when it comes to classroom teachers, and I’m thinking particularly of the highly litigious experience of matriculation exams in years 11 and 12 and the need for assessment authenticity around those, where do you see some of the potential issues arising from AI that teachers might need to be aware of?

Michael Waterhouse 32:15
Yeah, well, look, I’m sure there have been a lot of educational experts, and I know you’ve talked to some previously on your podcast, on possible avenues for dealing with this from a teacher’s perspective. As you’ve said, I’ve seen at least one university where the tutor, I think it’s in marketing, says: I’m going to be expecting you to use AI, and I’m going to be expecting you to do very well at using AI, and expecting a much higher standard than I would have in the past, because you have access; and I’m going to be expecting you to show me how you’ve used the AI and how you’ve edited the AI, and things like that. So that’s one pathway. Another pathway, of course, is to go back to paper and pencil. And I guess there’s a spectrum of options in between, based on considering what it is you’re really trying to assess. So I guess it does put pressure back on teachers to think from first principles about what exactly they’re trying to assess and teach. And I certainly feel there is great opportunity from AI for teaching and learning and the individualization of student learning. I’m learning a number of languages, and I’ve found just asking ChatGPT to be my tutor in Spanish, all that sort of thing, fantastic. It’s very quick, it can give me some good examples, it’s not too boring, and it’s a way of practicing skills. So I think there’s a lot of benefit there. But then, yes, when you come to cheating, there are already other examples of online exams. For example, the Bar Association tests use online exams which require people to put special software on their computer for the purpose of the exam, and it blocks out everything else, so they know that everything you put in is your own: you don’t have access to the internet, and it makes sure you don’t have ChatGPT on your computer.

Cameron Malcher 34:38
And yeah, those kinds of lockdown browsers are the same things that get used for NAPLAN Online, for the online tests that are increasingly being rolled out.

Michael Waterhouse 34:47
That’s another sort of avenue. All right, so are there legal issues out of this? I think there are, in the sense that cheating and plagiarism are things that are morally wrong and for which there are student disciplinary rules. For example, under the New South Wales Education Act there are student disciplinary rules, and they include things like not cheating. In the New South Wales setting, NESA, the body that administers the HSC, has policies requiring students to certify that things are all their own work. So the question would be: if you set the kind of assignment that you have set to date, a student could just put the essay question into ChatGPT and get it to write the essay, and then perhaps manipulate it a bit more, say a bit more about this, give me some examples of that, use ChatGPT to edit its own work, and then dumb it down so it has a few errors in it. That might be a way in which it’s not all your own work. I guess the overall question with plagiarism and cheating is about student discipline. And as with traditional student discipline issues, schools need evidence on which to base the decision, they need to give students procedural fairness, and they make a decision on the balance of probabilities; if it’s a serious consequence, they need to be comfortably satisfied as to what happened. So I’d be very cautious about accusing a student of plagiarism simply because, for example, a filter system said there’s, say, a 50% probability that this is plagiarism. You’d certainly want to give the student a fair chance to explain: no, I didn’t plagiarize; here’s what I did, let me show you.

Cameron Malcher 37:11
Well, when we think again about the individual classroom teacher, and school-level management, and again acknowledging that caveat that we’re in the very early days and don’t know what the technology will look like six months from now, or what policies might look like six months to a year from now: if a teacher has a reasonable suspicion that an assignment has been completed or partially completed by AI, and is in breach of a school or system assessment integrity policy (like here in New South Wales, where we have the All My Own Work program in years 11 and 12), are they, from a legal perspective, better off letting it go than potentially raising it without solid evidence? I mean, if a student turns around and says, I didn’t do this, and all they’ve got is a hunch? And I want to make clear that teachers deal with this all the time: copying and pasting from the internet, or, while it’s not as big an issue in high schools as it is in universities, even contract cheating and paying people to complete an assignment for you. So it’s not as if the underlying issues themselves are new. But when it comes to, as you say, needing evidence of AI involvement, and the AI detection technologies being quite inadequate to be accurately relied on, what’s the best course of action for a teacher who suspects AI has been used unethically but may not have that concrete evidence?

Michael Waterhouse 38:51
Yeah, okay. Let’s say it’s a classroom teacher, and not someone who would normally have the delegation to make a serious disciplinary decision in relation to a student, like suspending or expelling them. If the classroom teacher has a reasonable suspicion, as you say, and I’ll come to what counts as reasonable in a minute, that a student has cheated, I think it’s their ethical responsibility, and it might even be considered their duty of care, to raise it with the student, because there could be harm that follows if a student thinks this is something that can easily be gotten away with. It’s a bit like a parent whose child is obviously lying to them: you don’t want your child to keep lying in future; you want to explain to them, look, it’s obvious to me that you’re lying if you’ve still got the cigarettes in your hand, or whatever. But of course you might be wrong, so you give them a chance to explain themselves, and you don’t necessarily move straight away to something punitive; it’s a learning opportunity about what is reasonable behavior. And I think that’s within the individual teacher’s remit. The teacher has to think carefully: okay, last month they came in with an essay that was very poor; this month they’ve come up with something so much better that I’ve never seen a student improve that much in one month; and when I talk to the student, they can’t explain how they wrote this, and they don’t really seem to understand the subject. With every little piece of new information I seek, my suspicion is confirmed. I’d be raising it with them, and if it’s a serious issue, then go up the hierarchy. And again, the school is not going to get into trouble, and the teacher is not going to get into trouble, if there is a reasonable basis to make the allegation. That doesn’t mean deciding that the allegation is true; it means raising it with the student: I’m thinking that you might have cheated, what do you say about that? And giving them a chance to make an explanation. I think that’s a reasonable step for a teacher to take. Now, for the ultimate disciplinary step, you might need a higher degree of evidence than for simply raising the allegation, if I could put it that way, because the suspicion only has to be reasonable.
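(To put rough numbers on why a detector score alone is thin evidence, here is a back-of-the-envelope Bayes calculation in Python. All three figures are invented for illustration; they are not claims about any real detector.)

    # Invented figures for illustration only.
    base_rate = 0.10        # suppose 10% of submitted essays are AI-written
    true_positive = 0.80    # the detector flags 80% of AI-written essays
    false_positive = 0.10   # ...but also flags 10% of genuine essays

    # Bayes' theorem: probability an essay is AI-written given a flag.
    p_flagged = base_rate * true_positive + (1 - base_rate) * false_positive
    p_ai_given_flag = (base_rate * true_positive) / p_flagged

    print(f"P(AI | flagged) = {p_ai_given_flag:.0%}")  # about 47%, close to a coin flip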

Cameron Malcher 41:24
Fair enough. Well, can we then shift the focus slightly, because teachers may also use ChatGPT or other AI models to assist with their planning, programming and resource preparation. How does that potentially raise issues for teachers as well? We hear a lot about this, and currently there are lawsuits against some of the large creators of these AI models over plagiarism, and over copyrighted content being used to train their models in ways that may not have been legal. So for teachers using the models, just as students may be using them to help with their assessments, where do teachers need to be cautious when using AI as part of their own professional practice?

Michael Waterhouse 42:10
Yeah. So I think, first and foremost, the issue of truth and accuracy is important. They should not be representing something as their own work that is not their own work. I think a good practice would be to say: this is generated by AI. For example, if I’ve used AI to generate pictures to illustrate something, I generally label it ‘generated by AI’. And there is, I guess, a potential for legal action for misleading and deceptive conduct if a school or a teacher has held themselves out to be doing something and it’s actually not their work, or it’s something different, or it’s false. So it’s not just a question of falsely holding something out to be your own work; it’s also a question of whether it’s accurate. So, checking: you might use ChatGPT or whatever to do some research, to put some of that stuff together, to gather some materials, to give you an outline, to write a first draft of an email; all of these kinds of things might save you a lot of work. But it’s still your responsibility to be the final editor and the final arbiter that what is seen is correct. You can’t just generate it and not read it and send it out; you need to read it and check whether there are hallucinations. And I’ve experienced the hallucinations. For example, when I was researching the cyberbullying issue a few months back, I asked for examples where people had been defamed by the use of deepfakes, and it gave me an example of Elon Musk being defamed like that, and an example of Oprah Winfrey being defamed like that, both of which were false; once I went to look, there wasn’t any such example. But it did give me a true example as well. So you’ve got to be very wary that anything you’re putting out to anybody else who might rely on it is accurate; you can’t just outsource the accuracy problem to ChatGPT. Now, coming to the issue of copyright: that is very complex, and in Australian law there are various decisions as to how much human ingenuity has to be put into something before copyright really arises in the material. The issue of whether copyright material has been used as part of the training of the large models, I think, is probably one that schools and teachers don’t have to worry about. But the question of whether they hold copyright in anything new that’s been produced, that is quite a difficult one to determine. There may not be copyright in that material; ChatGPT’s rules say users hold the copyright, but that’s a question of whether it exists. And it’s still possible that it could reproduce something very close to something that is already copyrighted, and if it does, that could be a problem. There could be some difficulties where something is very close to just copying something else that’s out there on the internet. So it’s important to check that it is original.

Cameron Malcher 45:47
Although I suppose, and please correct my understanding of this, when it comes to copyright material, the primary concern is people commercializing someone else’s work without permission or authority. For teachers in a classroom, you know, I know we have limitations on copyright, such as only being allowed to use a chapter of a book, or 10% of a book.

Michael Waterhouse 46:11
Yeah, there’s probably a good defence for that kind of thing in classroom use, as opposed to publishing elsewhere. The statutory licence for education providers, which applies to all schools and universities in Australia, gives that reasonable use of material for teaching purposes: as you say, a reasonable portion of a book. What counts as a reasonable portion of something that’s hidden behind whatever kind of search ChatGPT does is probably undecided at this point. There’s a very good site for teachers who are interested in this copyright issue, run by the National Copyright Unit, called Smartcopying, and they have a lot of good advice. They’re one of the signatories to this national framework, so I’m pretty sure they have been considering the copyright issues around all of that, and they will have some direct advice for teachers looking to think about the copyright implications of any use they might make of ChatGPT, etc.

Cameron Malcher 47:24
So I suppose, at this stage, the teachers who might have some of the biggest concerns are those who dabble in producing content for sites like Teachers Pay Teachers or Twinkl, actually commercializing some of their work as well.

Michael Waterhouse 47:37
That’s true, or people who are publishing textbooks or that kind of thing as well. Yeah. One final point to make: there’s a general duty between an employee and an employer to be truthful to the employer. So again, if you were to use an AI to generate something that is sent to your employer, or that represents something to your employer, that could be another area where you could get into trouble if you haven’t taken reasonable care with something you said. Let’s say, for example, you’re on playground duty, and an incident happened, and you had to write an incident report, and you just said to ChatGPT: write me an incident report about this thing, and it wasn’t true. That might be in breach of the duty of fidelity to the employer.

Cameron Malcher 48:35
Well, let’s move on to some other areas that come into school practice and teacher conduct as well. You flagged the idea of information access, which relates to standard three of the framework, about transparency. So what do you see as the issues here that teachers might need to be aware of?

Michael Waterhouse 48:59
Yeah, well, I think one of the things, certainly in government schools (there are different answers for government and non-government schools), is that government schools are subject to freedom of information laws. In New South Wales that’s called GIPA, government information public access; it has different names in different states. Those laws don’t apply to non-government schools, but non-government schools are still subject to privacy laws, where people can have a right of access to their own personal information. So in any of these kinds of things, there can be a right to access. Now, there might be information that you’ve put into a system, even a protected, non-hackable system as we were talking about earlier, where the information is manipulated and it spits something out. I mean, if you were using it to spit out student results in some way, there’s this idea of transparency: what is the black box doing? And I recall various instances where people such as the owners of ChatGPT, or Google Bard, couldn’t actually explain what it is the algorithm is doing when generating answers. The fact that you can’t explain what’s going on in the black box matters because, traditionally, you would have had a legal right, in a government school system, to obtain the information about you, and to obtain information about the policies, practices and decision-making rules that a school might have that affect you. In one sense, you’ve only got legal access to something that can be given to you, and the fact that something can’t be given to you is more of a policy than a legal issue in a way, but there is a kind of gap there in what’s going on. So I would make sure that teachers feel they can explain what is being done with information about students that is then coming out, if it’s used at all. And what I would say is: don’t use it if you can’t explain it. Now, it may be that really the responsibility is on vendors to explain it, and there can be a reasonable explanation from them. But I think there’s a reasonable expectation from the parent and student community that, in effect, decision making about them, whether it be educational decision making and so on, has some reasonable, rational explanatory basis that a teacher or a school can give them. That could be disciplinary issues, it could be marking and assessment issues, or it could be just: why are we going down this pathway? Why is it asking me to do this? You want to be able to understand the educational rationale of anything that, say, an AI tutor might do.

Cameron Malcher 52:21
And this is a really interesting consideration in light of our earlier conversation about privacy: particularly if the systems are built in such a way that the way the system uses the data inherently can’t be identified or controlled, can you actually control privacy in a meaningful way? That’s a really interesting conundrum.

Michael Waterhouse 52:44
I think it’s a potential tension between those two principles, both of which are desirable.

Cameron Malcher 52:52
That does then raise the question about the use of AI, and one of the topics you flagged here, which is decision making and procedural fairness. If we’re not sure how the algorithm actually uses information, beyond it being a privacy issue, what potential impact does that have? If people are, as we said, not outsourcing but utilizing ChatGPT or other tools as part of their professional practice, what other challenges does that throw up, if you simply don’t know how that information is being used and integrated with other systems and other data?

Michael Waterhouse 53:31
Yeah, it does throw up challenges. And I think, really, a lot of these challenges go to the question of how familiar the teachers are with the AI tool: how it works, what it does, how to make it do the things you want it to, and how to make sure you have the capacity to shape it all. It reminds me, in a sense, of when the piano was first invented. Suddenly you had this thing which could play any note on the scale, and anyone could come along and play it, and you can imagine they thought: oh, great, I can play Chopsticks. But Chopsticks wasn’t what made the piano great. It was the fact that Beethoven and whoever else could compose these wonderful things; it was the capacity to really engage with the instrument and explore the possibilities with human creativity that turned it into something fantastic. And I think that’s the same with AI. Sure, we can find little ways for it to help us, but unless we engage with it enough to understand its capacities, understand what the limits are, what is good and what is bad, I don’t think we should be using it in a professional sense. I do think we should be using it in a professional sense, but that means we need to learn the piano, as it were, and not just play Chopsticks, with AI. So, just to come back to your question in relation to decision making and procedural fairness: there are duties in decision making, let’s say decision making about disciplinary issues, whether about students or staff, where the decision makers need to allow the other person a fair hearing, need not be biased, and so on. These are areas where, although you might get decision support from an AI, it needs to be very clearly the human who’s making the decision, and who knows how the AI works. I had a little play with this beforehand. I tried putting in: I’m a deputy principal in a New South Wales government school; adopt the role of my legal researcher and disciplinary advisor; research disciplinary penalties for New South Wales government schools and powers under the Education Act; what would be a fair penalty to impose on a student who admitted bringing one gram of marijuana to school to sell to other students? That was my question to ChatGPT, and it gave me what I regard as a very good starting place. I’m not going to read out the whole thing; it’s quite detailed. Then I asked a follow-up question, and it gave me, again, a very good answer. This was probably GPT-4, which is part of Bing. So if I’m wanting to get my head around an issue like that quickly, it can be very helpful, but I wouldn’t want to rely on it, and I’d still want to be talking to legal services, or the lawyers who might advise the school, to make sure I’m getting the decision right. But it’s a good way to start; I was surprised how good this answer was.

Cameron Malcher 57:05
It’s funny you highlight that, and the accuracy of it, because I’m aware of a relatively recent study, I don’t know if it was actually an academic study or more of a survey, but there was a group that tested ChatGPT’s ability to help patients with medical diagnoses, and found that it significantly outperformed human doctors, not only in the accuracy of its diagnoses, but also in that the patients preferred its manner in responding to questions; it got a much higher rating on bedside manner. So it’s interesting to hear you talk about its accuracy in processing legal and decision-making processes in schools, because it’s very quickly producing similar results in a number of industries.

Michael Waterhouse 58:03
It makes sense to me. I mean, I assume there’s still a doctor, and not just an AI, treating those people for their cancer or whatever it might be. But yes, if it can pick up a false negative, in other words, if you test negative on the cancer screening or whatever it is, and you would have been told by the doctor that there’s nothing to worry about, but it can find something the doctor missed, and you really do need treatment for it, then it seems to me there is a duty to use it, if it’s better. That seems to me where the standard of medical care would go, so long as it’s more likely than not, so long as you’ve got some way of determining the standard of the decision maker. So I think in those cases the AI needs to be trained very carefully on the right kind of material to reach that kind of standard, but it wouldn’t surprise me that it can exceed human judgment capacities when well trained.

Cameron Malcher 59:09
Well, I realize this is getting off topic a little bit, but I’m bringing it up out of curiosity about what it might do in education. To go back to your example about cyberbullying, where you suggested it would be trained to bring up caveats like ‘this looks like inappropriate use’: my understanding was that in that medical context it similarly would say to people, you need to see a doctor, or, this is not full, proper medical advice. It was programmed to give those caveats but still engage with people. And I wonder what that might look like in an educational context, especially when we live in an era where teaching practices are so contentious, and where the debates about what constitutes effective evidence in education are currently quite contested. So that kind of caveat-driven provision of information will be interesting to see in an educational use as well.

Michael Waterhouse 1:00:09
Yeah. Well, in the example I just gave in relation to the student discipline issue, both of the responses said: please consult with your school’s legal adviser or the Department of Education for guidance. It doesn’t want to give advice willy-nilly like that; it wants to cover itself, which is sensible. But I still think it’s a pathway to help understanding and help decision making, so long as it’s not the final thing, certainly at this stage. As in that doctor case, you could get to a point where it’s helping teacher judgment, pointing out: I don’t think you’ve got that right, think about it again. That might be something it can enhance, so long as it’s not the final decision maker, I would say. I mean, AI is not a legal personality; it can’t be held legally accountable for something, whereas a human can, so a human has to take responsibility. Well, when I say a human, I mean a human or a legal entity, like a government department that is headed by a human.

Cameron Malcher 1:01:24
Well, that brings us to the last topic on this list, which I think is quite an interesting one. And I want to preface it by bringing up another example, from about eight or nine years ago, when Microsoft was first trialing earlier generations of AI chatbots on publicly available information. There’s quite a famous story that they used Twitter, because it was one of the largest repositories of public-facing information, as the training data for a chat AI, and very quickly it basically became a hugely racist and sexist entity, because of the volume of content it accessed from Twitter. Now, obviously, a lot of work has since been done to balance the way these models engage with this stuff (we keep saying ChatGPT, but it applies to AI models generally). But it does raise the issue that the AI models are trained on data, and that data may have a bias or a skewed perspective, because it’s either all from a particular cultural perspective or sometimes even from a negative perspective. So if teachers are using it, or even if students are using it, and the data set informing the model is skewed that way, what potential issues of discrimination does that raise for teachers?

Michael Waterhouse 1:02:56
Yeah, so look, I think there are discrimination pressures. And as you say, people have become aware of that, and they’ve tried to train the models on a wide diversity of information. But it’s not just about facial appearances; it could be about the things that people of diverse backgrounds are culturally interested in, their cultural practices, a lot of different things that might tend to treat people disadvantageously on the grounds of, say, race or sex or disability and so on. Now, one of the obvious cases where it comes up is employment decisions. If you are using an AI model to help you call people in for an interview, or something like that, based on the quality of their job application, and there were biases in the way it did that, that could be indirect discrimination: you’re treating some people less fairly, not directly on the grounds of race, but in effect creating an unseen barrier which correlates with race, or sex or whatever, and is therefore harder for a person of the protected class to overcome. That’s called indirect discrimination. And I could see that if there were unconscious biases within the material, that would be a problem. And, I’m not sure how this will work, but one of the issues to be assured about is this: if schools are contracting with companies to have a protected, non-hackable system, and to train the AI on that system’s own data, maybe you will reintroduce some bias because of the particular distribution of people you have within your system. For example, if there was a school that had predominantly people of a single race or ethnic background, it might be discriminatory against people who are not of that background, because most of the material in its own internal environment that the model is trained on is of the dominant background. So I think that’s again something to be aware of, and there could be discrimination claims on those grounds. The other thing is that it’s potentially also very helpful for people with disability: there could be better ways, through AI, of making material accessible, providing access to students, providing gradations of material suitable for the level a person is at, and extending them along those lines. So I don’t say it’s all dangerous; it’s always a balance of benefit and risk, I guess.

Cameron Malcher 1:06:24
Yeah. And I suppose that highlights something we didn't bring up at the beginning, but which has come up in a number of interviews I've done in the past: making sure people remember what this current generation of, well, it's been called AI, but "large language model" is the more technical term, actually is. These programs don't have anything we would call an understanding. They have a statistical view of language: if I say this, what's the most likely word to come next, in the context of your question? So on that idea of training you've brought up, the platforms themselves don't have anything we would call an understanding of those issues of race, or sexuality, or gender, or even an understanding of what policies mean. What they "know" is, in that context, the thing most likely to be said.
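For readers who want to see that "most likely next word" idea in its smallest possible form, here is a toy Python sketch of a bigram model over an invented sentence. Real large language models are neural networks trained on vast corpora rather than simple counters, but the underlying statistical principle, predicting the next token from what came before, is the same.

# Toy illustration of "given this word, what's the most likely next word?"
# A bigram counter over a tiny invented corpus -- real LLMs use neural
# networks over enormous corpora, but the statistical idea is similar.

from collections import Counter, defaultdict

corpus = ("the teacher marked the essay and the teacher returned "
          "the essay to the student").split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in the training text."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("teacher"))  # "marked" or "returned", depending on counts
print(most_likely_next("the"))      # whichever word most often follows "the"

The model has no notion of what a teacher or an essay is; it only reproduces the statistics of its training text, which is precisely why biased training data produces biased output.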

Michael Waterhouse 1:07:24
It’s philosophical questions there about when have we got to consciousness or intelligence? And, you know, what are the different philosophical models of consciousness and intelligence? You know, I won’t go down that rabbit hole, but

Cameron Malcher 1:07:44
when we start seeing an AI model pursuing its own legal status in the courts, we'll have a whole other discussion.

Michael Waterhouse 1:07:52
About whether it counts as a human? Yes.

Cameron Malcher 1:07:54
Yeah. Michael, are there any other general statements or advice for teachers that you might have about legal issues?

Michael Waterhouse 1:08:03
I thought I would just come back to the idea that there are general duties of care, and that a lot of these ideas apply to most of the areas we've been talking about. The duty of care will evolve with circumstances. If you look at the policies that school systems generally have, they have evolved through circumstances: we've got policies for inspecting trees because of a tragedy that happened when a tree fell over, we've got policies for preventing anaphylaxis because of a tragedy that happened in a school, we've got policies for safe swimming because of a tragedy that happened to a student. Those incidents weren't necessarily foreseen by the people involved at the time, but we learn from them, and we have to keep evolving.

If this technology is evolving very rapidly, even exponentially, we need to be looking all the time: what's happened, what did we think some time ago, what do we think now? The rate at which you need to revisit what counts as reasonable, and what the right policies are, is more rapid because of the rapid development. I know the national framework says it will be revisited and reviewed every 12 months, but it also says more frequently at their discretion. If you look at how much has changed in the last 12 months, there's no reason for me to think there'll be less change in the next 12; if anything, we might be on the rising part of the exponential curve. So I would be looking at these things more frequently. I wouldn't wait and see; if we look at it every term and it's the same as last term, that's fine, let's not worry about it, but I would be reviewing it frequently.

And I would be making sure there is expertise, that schools have enough experience among teachers who are engaged, who have looked at how the AI works, have played with it, have developed techniques for using good prompts, and have a continuing understanding of what the capabilities and the risks are. I think it's important that there's a corpus of knowledge in every school to do that. I'm sure there'll be a mixture of people within the teaching profession: those who are very interested and do that, and those who say, "there's so much change, I'll wait until they tell me what to do." The more we can bring the latter set towards the former, the better off schools will be in terms of responding to all this and having a good sense of what's a reasonable way of managing it. There's not one answer at any one time; it has to keep evolving.

Cameron Malcher 1:11:04
Well, maybe we'll need to revisit this conversation in six months with a, "Well, here we go."

Michael Waterhouse 1:11:10
"Well, back then... but now..." If only I'd known this.

Cameron Malcher 1:11:15
Well, Michael, for anybody who’d like to follow your work in this area, or to learn a bit more, is there anywhere you would like them to go?

Michael Waterhouse 1:11:20
Well, I'm on LinkedIn; they can find me as Michael Waterhouse there. Or I have a website, which is waterhousemediation.com. That's for mediation primarily, so if you're interested in that, that's fine. But I also have on there the seminars that I provide from time to time for schools, if people are interested in those; they're generally provided online.

Cameron Malcher 1:11:49
Excellent. I'll make sure there are links to both of those sites and your profile in the show notes for this episode. Let us hope that any teachers who engage with your work do so more in a professional learning sense than as parties to mediation, but anybody who wants to find out more can find those links in the show notes. Michael, thank you very much for your time and your insights. As I say, we may need to revisit this conversation in a short period of time if things do change rapidly, but thank you again.

Michael Waterhouse 1:12:13
Thanks very much for having me, Cameron. I've enjoyed it.
