You're listening to a CNA Podcast. So hey everyone, welcome to another episode of Deep Dive with Otelli and myself, Stephen. Today we're talking about AI,
or rather the use of AI when it comes to doing your homework. Are you an AI fan? Do you use it for work? Not homework, but work? Okay, who doesn't use it nowadays? Unless you are living under a rock. Everyone uses, you know, large language models to a certain extent, right? To improve writing. I mean, in our line of work, we use it
to refine the original sentences that I came up with. I like this term that I came across. It says, basically, to learn and to work efficiently. It helps me do what I do better. Yeah. So we can imagine a lot of students from the institutes of higher learning, they're also doing that. And as a matter of fact, recently, right, there was this issue that blew up in the news, when NTU accused three students from the School of Social Sciences of
academic fraud. Yeah. It's a very, very harsh term if you think about it. Saying that they basically use generative AI tools in their assignments and were given zero marks for that. Yeah, zero marks. That's a big blow. Of course, the students are upset with that and they're fighting back. They're asking the university to justify why they're doing that. You know, and trying to understand just where do you draw the line? When is it okay?
And when is it not okay, especially when it comes to students who are still in school? And today we have some people who will help us understand that issue better. So let's welcome Associate Professor Ben Leong. He's the Director of the AI Centre for Educational Technologies at NUS. Hello, it's a pleasure to be here and I'm happy to discuss this issue, which is quite close to my heart.
Jeremy Su, he's the co-founder of Nex AI and graduated from Ngee Ann Poly just last year. Yes. Happy to be here. Thanks for having me. I think this is going to be really interesting. So, okay, let's just start off with a very big picture question, right? Help us understand how AI is being used by both students and schools. So, Jeremy, from a student's point of view, because you just graduated, right? So, when it comes to the students, how they actually use it, they use it for quite a wide range of things, right? There is the obvious: writing
my essays, doing my homework, that's a very, very standard use case. We've had it since years ago, even before ChatGPT, because we had, for example, the original GPT-3 models, GPT-2, which were all pretty good autocomplete models. These were used by different students to, for example, write their essays or refine certain sentences. Okay.
And then right now, people are using it for more creative things. So things like research, they're able to trigger an AI, have it go online, search for a bunch of different information, pull it together, synthesize it, and then eventually create some sort of report. And so they don't need to do all this work anymore. That's a big part. I think specifically for me, what I've seen is more from the creative side. So in, for example, media school, people are using it in art.
People are using it to actually edit. So people are using it in many different ways which are not conventionally, I think, what most people think of when they think of AI in the academic sense. It's always just reports and essays. Ben, I'll ask you. So at the university, again, you know, you are facing this. Many students will also be using AI. And AI is, of course, a lot better now. It can literally write the essay for you. So at which point do you say, okay, that's all right? And at which point do you say, nah? Okay, so I think it's a...
It's a complex question, right? It really depends on the discipline that we're dealing with. So I teach computer science, mostly programming. In programming, you write code. And I think my colleagues who have students write essays, they have a different kind of requirement. It may be harder for us to detect plagiarism for coding, because the fact is that AI is really good at generating good code, and the code looks kind of standard now.
To be fair, plagiarism is not a new thing. So universities that use these AI detectors, like Turnitin and all that, does that detect the code? I think most of them are inaccurate. So the truth is that we have had very good code plagiarism detectors in the past, even before AI. And ever since I started teaching, we have caught students cheating by copying their friend's code. And generally what happens is we catch them because they write bad code.
Okay. Not because they write good code. I mean, if you have two clever people writing good code, it will look the same. There are some standard ways of writing code. So the general way to catch people copying code is that they have a friend who writes bad code and wrong code, and they both make the same mistake, right? Yeah.
And the best part is it came from somewhere that you don't know where it came from, right? So there's no way to kind of prove that, in fact. There's no easy way to prove. Because code is code. 2 plus 2 is 4. So when you write good code, good code is good code, as in it should all look pretty much the same, right? Yeah.
Correct. I mean, there are many ways to do the same thing. But you're right. There's a certain style that's considered good code. So my question then becomes, what's wrong with that? Because they're getting you the result that they need. And in the future when they work, why would they bother going through it themselves when they can use the AI to write it? That's an excellent question. So then you need to understand what we're doing in school here. There are two things that we are trying to do.
Number one is that we're trying to help the students learn something. Okay. And let me be blunt. I mean, homework is there for a reason, right? If I give you homework, frankly, it's going to cost us work, right? Because someone has to grade it. Of course, you can outsource that to AI. But frankly, you know, if we don't issue homework, then we do less work. But the homework is there to test something. No, no, no. There are these things called formative assessments. Right. And there are these things called summative assessments. Right.
So formative assessments are to help you learn something. You're going through the struggle to learn the stuff, right? Now, the problem with AI in this regard is that it allows the students to avoid this struggle, right? And they avoid learning, correct? I mean, frankly, if I give you a question... Okay, so they get the result without the learning process. And so they're not...
Okay, at some level, they are short-changing themselves, because their parents paid good money for them to come to university and get clever, right? Okay, so that's number one. Number two, there's also a very important aspect of this assessment thing, which is this thing called a summative assessment. Universities have a role to play in the economy, right? Because ultimately, these kids graduate and need to get jobs, right?
Okay, so let me ask a question. Why do we give first class honours and second class? We actually have a role in society to sort of grade
the students, right? So that employers have an easier time with the hiring, right? So if everybody gets an A, then it's debatable whether or not this is still a function of the university. I'm certain that if we give every student A's, the employers will not be happy. There's some harm that's done. But you see, okay, the whole process, as you mentioned, is for the student to go through the learning journey so that when they face other situations in the real world that AI maybe hasn't encountered yet,
then they can help solve it. So you're teaching them how to think, basically, right? But now with the AI doing everything for them, they miss out on this. But I've got a question as well. A lot of people are turning to large language models to learn stuff. Let's say you go into GPT or Gemini and whatnot. As you are learning stuff, right, it picks
up your learning style. It becomes individualized, so to speak. So then does that mean that the role of a teacher becomes... will teachers be left behind, I'm trying to say? Because, yeah, exactly, they're already used to learning from GPT. So when a teacher is trying to teach them, it becomes increasingly difficult for teachers to teach students. That's almost like having tuition, right? Because there's no point learning at school if I just have my private tutor at home. When it comes to this, I think
There are a few points, right? So the personalized learning pathway is always going to be better than the generalized one because it's simply, it's going to be a better fit. What we've seen is that motivated individuals who are trying to pick up some new skills or they're trying to accomplish a task or if they're even trying to do less work in school, they will turn to these AI models. And first of all, you mentioned large language models. There are all sorts of different models. They will turn to these tools to try and accomplish their task.
Now, the thing is, I think many people think that just because you use AI, you know, people become dumber. People don't think about it anymore. And actually, I'll quote what I'm seeing on Reddit. Students are saying that it's rare nowadays to see people think on their own, because they're sending their assignments to GPT first before even thinking.
But thinking is good. So the devil's in the details. How are they sending it in? The degree of specificity that you would need to include in a prompt, in an instruction, requires that you identify your own thinking and label and actually describe step by step exactly how you would go about
attacking this task. So if I were to just tell GPT, write this essay for me, you're not going to get that good of an essay. If you say, write in the style of so-and-so, it does a little better. If I want it to think really in-depth, go through this point, articulate this particular argument, quote an example here, make sure it makes sense in the grand scheme of things, and then go step-by-step, you end up with these prompts, pretty normal in industry I think, that run three, four paragraphs. So it's just about asking...
the right questions. It's always about asking the right questions. But are you actually learning anything? What is learning then? Yes, I think you are. So it's like when we first had Google as a resource.
How do you use it, and use it effectively, right? So you can use it the wrong way. You can use it the right way. Same thing with AI. You can use it to make whatever you're doing better, to learn more and understand better. Or you can just be lazy and chuck the whole problem at it and let it do everything for you. The question is, how are you using the AI? And at which point can you say it's okay to use it this way, but it's not okay to use it that way? Yeah, how do we differentiate misuse and misjudgment? I think there are the two things I mentioned that matter, right? Number one is the learning.
Number two is the fairness, right? Because the trouble now is, if some students use AI and they get a better grade than other students who don't use AI, it's so unfair. I'm like, you just needed to switch on your computer, that's all. Too lazy to do that again. No, no, no. Correct me if
I'm wrong, but suppose there's an honest student and a dishonest one, and the dishonest one actually gets a better grade. Then there's something wrong, right? Because then we have a society which actually rewards people who are dishonest. Is that something you want in Singapore? I don't think so. But is that also the fault of the way they're being tested? The professor may be a bit naive to say
you cannot use AI in this day and age? You know what I mean? Okay, let's set that aside. But I think if you ask me, what is the issue here? It's an issue of fairness. Okay, so to address this issue of fairness, if the professor says you can't use AI, people should not use AI, right? Okay. Now, in a case where the prof says, I don't care, you all use AI, then you all go ahead. But there will be a problem, right? Which is that then...
That's the whole point, right? Because you get pretty much the same thing. And worse still, worse still, you know, there are different models. There are paid models, there are unpaid models. And suppose this guy is rich, right? He buys the... Unfair advantage. And he may get unfair advantage again. We could argue the kids who have tuition versus those who do not also have an unfair advantage, right?
That's true, that's true. That's the way the real world works, right? It depends how far you want to bring that. But ultimately, that is the issue of fairness. Can I just add something about the fairness point? In that case, is it even a good proxy of the real world, because the real world is not really fair? People have to learn. I think the whole point of having the school and having that environment is that it's a simulated, almost one-to-one mimic of the real world, and it trains them to operate in the real world.
Does it help them learn in that case? That's an excellent philosophical question. But there are these two issues. The first issue is this, right? Which is that the learning part, I think the learning part is real. There are students that are really not getting... I mean, what I'm seeing... Okay, again...
This is not scientific. I'm actually trying to study this thing more carefully on the ground. But based on what I'm seeing on the ground today, there's a small group of students who are using AI very effectively. They are actually moving ahead faster, better than before. So you compare them today, the same kind of students, to those in the past, and these guys are doing better. And then there's the rest. Yep.
I really think the rest are actually getting dumber. Why do you say that? So you're saying dumber for not using... No, because they use AI, they are actually not as smart. Okay, so you are saying that because they have lost their authenticity, their creative juices, they are counting on...
So this is the part where we're just saying, do I get this project and just chuck the whole thing to AI and say, give me an answer? Or do I understand the project, understand what are the five tools I need, then go to AI and say, help me figure out these five tools? So there are different ways of using it. That's not how human psychology works. That's not how the kids think. So what I'm telling you, on the ground, there's a small group and they are actually moving ahead.
And actually, to be honest, we don't fully understand how the kids are using AI today, because it's quite new. It's been like two years. We don't watch over them 24-7. And they are doing some things that are kind of... We're waiting to see, because we don't know what we're going to hear from them. Really, kind of wild. There are some ways of using it that we don't understand. But then the question becomes, are we testing the students the wrong way?
Should we also evolve in, you know, how we assess and evaluate? Yeah, maybe more face-to-face discussion about a certain topic or... Excellent point. There are two kinds of disciplines. There's my kind of discipline, and there are the poor humanities people. Yeah, yeah. The difference is this. What we do is
we lock them in the exam hall, we lock them in a lab. But I do have some colleagues in the humanities where the grading is all done by essays. They are struggling, and I feel bad because I don't really have a good solution for them. - And you were in humanities, right? Were you in humanities as well? - I wasn't, I've been operating in tech for five years now.
So do you discuss among your friends, okay, what is the best way to go around this problem, to ask these large language models? How can we sort of get around the system? I mean, is there that sort of discussion? It's interesting you bring that up because I think many people now use the conversational models as almost that partner. So instead of asking friends, they just ask the model. The models will probably answer better. I think on the point of essentially essays being a poor measure, it's a horrible measure. Essays are the new MCQs.
Because what we learned about language from all the years of research in AI is basically that language, given the rules, given the syntax, given the semantics, the meaning of each word, if you just apply a few rules and keep in mind all the exceptions, you can algorithmically generate anything you want. And that's what all these models do. So it's basically standardized testing. It just doesn't look like standardized testing when you give an essay. 100% agree. It's horrible to test it that way. So I know you're not really in the humanities, because that's where it seems like the...
where it's questionable. A very grey area, you see. So therefore, I mean, can we come to the conclusion that something has to change? Because right now, if you ask a lot of students, it's not clear what exactly is the demarcation. You know, like, can I use AI? I can use AI, but to what extent?
You know, how do I know? And now there's a lot of anxiety among the students as well, because, oh, I don't want to... I know people that don't use AI because they're scared, and that doesn't make sense. I think that's wrong, because these are real-world resources, they're available to everyone. Why are we saying don't use it? But in the real world, I've...
But do you want to be labelled, risk being labelled an academic fraud? Because I wouldn't want that on my transcript or anywhere. That's where the school, in a sense, has failed to evolve and adapt to what is happening in the real world. Maybe they're not catching up fast enough? Can I try to address that point? So I think I've explained to you the two concerns we have in school. But the problem is this. We've always had this complaint from the students that, like I said, we lock them up in classrooms
in the lab, right? To do a coding exam, right? They're like, what the heck, bro? I mean, in real life, I get to go to GitHub and then I copy, copy, I'm better, right? Okay. So that's a very good point, right? Now, but you see, the whole point of school actually is that we are trying to train them to learn some things. Okay. And,
I don't know why people think that coding is easy, but I must tell you, it is not that easy. And the fact is this, I actually don't use AI for writing because, sadly, AI doesn't write as well as I can write. And it doesn't write code the way I can write code either. For coding, actually, sometimes I might use AI just because it's convenient, because
the truth is that we are trying to teach the students to a certain level. And the AI today, while it may seem magical to many people, is doing things that are relatively competent, but not expert level. But the challenge is this, right? If the students today never even reach that level of minimal competency, they will never cross the chasm. So you can think of this curve, right? There's this big bar of people, then there's AI, and then there are the
experts. I call this the AI chasm of death. So if they're not able to bring their skill level past this point, they will never cross over to become experts.
And that's where the money is. That's where the jobs are. So in other words, you're saying they need to have a good foundation of that learning first, so that they can evolve later. But then I will also argue, like mathematics, we learn about simultaneous equations and all that. I used to tease my uncle, who's a math professor. I said, well, at the end of the day, all I need to know is if I give $10 at a hawker centre, I get back $5 change when I buy my nasi lemak. The rest, I mean, I have a calculator, I have a computer. So I don't need those steps involved, right? So in a way,
that foundation, yes, some of it is lost, but some would argue that some of it is not needed in this day and age. Yeah, it's true. And I used to learn chemistry. I could do all the organic chemistry reactions. So isn't that what AI is doing too? No, no, no. Time out, time out. But you must understand what we're doing in school, right? Okay, I'm actually a teacher of teachers. My students are actually MOE teachers. So I tell them, actually, you can teach them anything you want, right? Because the fact is that they are not
in K-12 to learn simultaneous equations. They're not there to learn, you know, how benzene rings react with each other, right? They are there to learn how to learn. That's the key. And that's what we need in life, right? Let me address this question which I tried to address, which is, why don't we let the kids do what they would have done in real life? The reason is very simple. It's because we are trying to get them to reach a certain level of mastery so that hopefully... I must admit, I mean, every
math prof hopes that the kids will become math profs, and the computer scientist wants the kids to grow up to be good programmers, right? So we are trying in good faith to help our students acquire the skills to become masters in this. Some may not want to do it. I agree with you. We should be doing that, but maybe we need to be doing it a different way now that the tools have changed. No, no, no. There's no difference, because the learning is in your head. Okay, now let me explain what the difference is. Okay, so first of all,
I want to make a very bold claim. There are no revolutions in education. You know, like 10, 15 years ago, there were the MOOCs, right? They came out and said, oh, we're going to change the world and you'll be out of a job. Last I checked, I had 800 students last semester. It's not going to happen. And I'll tell you why. Because actually,
the business of education and teaching is not about communicating knowledge. We are in the business of managing motivation. Okay, so can I ask you, with large language models, you said your job is to motivate the kids, right? In general, yeah. In general, with LLMs, would you say the motivation has gone up or gone down? Okay, so that's the problem. So that's why I was also telling my students, who are now teachers, that the trouble is that we may think that, after 20 years, we're good teachers.
Yeah, maybe we have certain skills. The trouble is that the clientele has changed. The kids have evolved, you know. What has happened over the last 20 years? Mobile phones, social media. COVID actually also impacted the kids a lot. And now today, we have LLMs. And of course, the internet also affected everything. And that will keep changing. So you, as a recent student, just graduated, I mean...
Do you agree with what Ben's saying? I think for the most part, it's probably correct, but... Probably. Probably correct. But the implications of that, you know, if we really go with the implications of that, and I agree, you're in the business of motivation. I think it's quite telling because what it...
means, if you extrapolate it out, is that students who are not motivated in the first place, they're going to go to school, they're not going to be motivated, and it's going to be the same thing. You're just keeping the same student unmotivated and then giving them a bad score. So that's the value for those people, which is the majority of the people, who we are saying are not at that mastery, they're not at that level where they've passed the curve. And then for the other people, they're going in and getting this validation. So the value, I don't see it, basically, in this case. So Jeremy, if you had to teach students,
aspiring students who want to ace their exams, how to use AI, what would you say to them? I think one of the things is, I believe people are quite output-focused, especially in today's world. You can see this: people want instant results, and then after that they want to know how to replicate those instant results. It's very easy to hook people in if you show them that this is the output you could get, and you just need to figure out what's the secret sauce, what are the inputs that lead to this output. So I think actually one of the interesting things is, with AI,
I have started touching on a lot of topics which previously I wouldn't have, because when I ask, for example, ChatGPT, and currently I'm using Claude, I'm using Gemini, some other tools as well, it starts telling me at a very high level, this is how it works, this is how it works. It condenses the information down to a five-year-old level. Then I say, okay, cool. Now let's elaborate. Let's go into this particular topic. Let's dive even deeper. And so,
at all of these levels, I think my motivation constantly increases a little bit, a little bit, a little bit. And there's this compounding effect, rather than what's more common in a lot of educational institutions, where they start off kind of giving you the ABCs, but you don't see that you're going to write an essay. So you don't see the story. You don't see where the final outcome really is going to be. And I think all throughout, it's just kind of this hazy black box. Students are not that motivated, partly because the methodology is not there. Yeah.
But you also see students obviously abusing it, right? Was it that UCLA graduate we saw? It went viral because he posted, he showed he was, you know, using...
Yeah, ChatGPT, and he's graduated. But there will always be those who abuse the system. Should we stop and eliminate and create all these barriers to entry just because a small handful of people abuse the system? I think all systems are gameable. So then, is that where, again, the institutions need to set clearer, defined lines, like how to use what and what to use? I think we should be fair to the universities, right? It's only been two years now.
You know, it's not been that... Okay, it may seem like a lifetime that LLMs have been around, but I mean...
Things don't evolve and move so quickly. And to be honest, like I said, as I admitted, we're also not completely sure what the students are doing today, how they're using it. And that's part of the learning process. So I think the thing about fraud, like I said, this whole issue about fraud is really about rules. I mean, the prof, whoever's the instructor, has the right to set the rules. And the real question here is not whether you agree with the rules or not. It's that if the rule is broken,
what should be our response, right? I mean, you sound like anarchists. Are we just training students to be slaves? No, no, no. Fair enough. So you say, this is my class, this is what we're going to learn. You are not allowed to use, for example, you're not allowed to use a calculator for this math equation, whatever, right? But you make it clear, right? You make it clear. If you agree to sign up for the course, you play by these rules. I think they did make it clear, right? And it's on that basis that they sort of prosecute and correct.
It didn't seem like it's clear, because it's still in limbo. I mean, it's been... It's pending, you see. So if it's so straightforward... Why don't we just let NTU and the appeals panel do their work, right? So that would be the question: just what is clear and what isn't clear. As they go through it, they will then have to further elaborate and seek clarity. And even if it's not clear, I think we should be fair to the instructors, because this is a brave new world we are entering. And I...
So many of them are also struggling to cope. Many of them are, I mean, many of us, I think, are ill-equipped to... But the tuition centers can do it really well. Because we service some other tuition centers. I have friends doing personal tuition. I know other business owners who own tuition centers. Their teachers, they're kind of forcing it down on them and saying, okay, this is our business. We're at stake. If we don't adapt, we're going to die. So I'm going to give you guys a suite of new tools. Learn how to use and teach your students with it.
They seem to be adapting fine. So you're saying that the tuition centres are teaching the students how to use AI? No, I mean, they are using AI, and then they understand that the students will use it as well. So they come up with new kinds of ideas and methods of verification, whether the work is good and whether the thinking is solid. They're coming up with brand new ways. So in other words, it's like saying, here at the company, they say, we have this new system for applying for your leave and all that. Use it or you can't take leave. So I learned how to use it real quick, right? Same thing with AI.
But this is new to me. I mean, I'm not aware of what's happening in the tuition scene. But his argument being that we can force this change to be faster. And it feels to a certain extent that some of our educational institutions may be struggling to keep up with the rapid pace of change. Would you agree with that? I think we need to go back to what is our role in society, right?
I'm a little bit idealistic. I think our role is to help our students learn better. And this thing about learning stuff, right? So the fact is this, you're right. You are complaining about learning...
simultaneous equations. The fact is that many of the things they learn in college, they actually don't need to care about. But it's part of the process, and it's good to know more things. But if they're going to be a lawyer, if they're going to be a doctor, they'd better learn it well, right? Because if not, people are going to die, right? And as a software engineer, they need to learn the craft very well. So for that, in terms of learning the things that matter,
then I think we need to do what is right. And let me just tell you what I tell my students and I tell my teachers to tell their students, which is a very simple argument, which is the following. It is quite clear that if they use AI to do something they cannot do, they will never be able to do it.
Okay, they will never be able to do it. On the other hand, if they can do it already, and they use AI to do it and save some time, they can go home and date a cute girl. But if I see my other friends using ChatGPT and scoring higher marks than me, I also want to do it. The incentive structure is broken. There's a spike, right, if it's some random class. But then you're saying, good luck to this person, because if he gets into the real world, maybe he won't be able to think as... So I think what you're getting at is that...
Along the way, we may be losing some of that learning experience. They may not be learning how to think, how to analyze, and so on, because you now have a device, or AI, to do it for you. So at the end of the day, I do think that the people who actually put in the effort will have a much better chance of surpassing that barrier, which is the AI barrier, and becoming experts.
Yeah. So you are learning from the best: learning how he has learned to use technology, and teaching that to the next generation of students, which also means being a bit more open to allowing the technology to come in. Okay, I don't think we're going to come to a clear black and white answer today, but I guess my view is that we need to evolve a bit more and to be a bit more open
to the use of technology and that we need to change our way of assessment to find what is the best in that person and to see whether they've actually learned how to figure it out. I think that the truth is that the profs actually know their business. They know what they're trying to do, what they're trying to achieve for the students. And there is no one-size-fits-all kind of solution. And I hope you believe that
NTU and NUS, all of us are trying to do our best. Of course. The right things for students, right? They don't punish students because it's fun to do, or because they like publicity like this. I mean, it's not something that they do because... And I think that's why they're taking it seriously. They're still deliberating over the other two students. Yeah, and this is a learning platform for all of us, right? The students, the institutions, the lecturers, and hopefully...
from here on, you know, we'd be able to, or at least the students would have just a clearer idea of like, how do you make sure that you are learning it properly and not, I won't say not getting caught, but you know, you want to use AI. No, so you made a good point. I think what I'm going to ask you is, so if I'm a student now, should I go and ask my professor, a prof,
So for this next project, can I use AI? What can I do? What can I not do? And get those boundaries set up. Because as you mentioned, it's different for every subject, every different area. I think I need to tell you what is allowable. Allowable,
not available. You get it. No, correct. I come to my prof and I ask, what can I use it for? What are you allowing me to use it for? I think when I issue the assignment to you, I will tell you explicitly. That's the model, right? And that's the approach that the school is actually proposing, which is that the profs figure it out and communicate this clearly. But like I said,
be kind to the profs. I mean, we have enough students to deal with, and this is a new thing. It will take some time, I think, for the vast majority to actually figure this out. But generally, I think the guideline is that the school does ask that the instructors make explicit what the AI policy is. And generally, like I said, either you can use it or you cannot use it. If you can use it, normally you need to declare it. So at the end of the day, if it's clear...
and everybody knows what is allowed and what is not allowed, then there's no argument. No, no, no. But the NTU case is not so simple. The NTU case is... The NTU case, no, no, no, no, no. I'm not going to explain it to you. We have run out of time already. But the NTU case is that it's clear. Unfortunately, students these days will try to wriggle
their way around the rules even when things are clear. I have no comment on that, because we don't know the case well enough. I can't agree or disagree on whether it's clear or not. So we will have to let it play out on its own, and we will hear what the final outcome is. But I'm sure we'll get there. AI is here to stay and it's a part of our lives, so there's no running away from it. Okay.
Alright, and that wraps up this week's edition of Deep Dive. Big thanks to Tiffany Ang, Junaini Juhari, Joanne Chan, Sayewin Ho, Painting Shaza Dalila, Alison Jenner, and video by Marcus Ramos, Faith Ho and Hanida Armin.
Thank you so much. You know, the team is bigger than that. But one day we're going to take a video and show you everyone who's on the team, because you're probably thinking, who are these people? Maybe next week. Behind everything we do here, of course, is a big team effort. So thanks so much for joining us. Any comments, drop us a note. We'd love to hear from you. See you next week.