
Learnovate, AI and EduTech with Joon Nak Choi

2024/10/7

Analyse Asia with Bernard Leong

People
Joon Nak Choi
Topics
Joon Nak Choi: AI should assist humans the way Iron Man's Jarvis does, rather than making humans so dependent on it that they lose the ability to think independently. He gives the example that humans who over-rely on AI could end up like the obese humans in Wall-E, losing the capacity to move and to think for themselves. Using AI correctly means ensuring that humans retain critical thinking and the ability to solve problems independently.

Deep Dive

Key Insights

What is the potential impact of AI on education according to Joon Nak Choi?

AI has the potential to transform education by enhancing content creation, personalization, and feedback cycles. However, its short-term impact may be more limited than expected due to technological, organizational, and societal constraints. In the long term, AI could drive significant changes, especially as universities face resource pressures and the need to innovate.

How does Joon Nak Choi view the role of AI in essay grading?

Joon Nak Choi sees AI as a tool to assist in essay grading by providing rapid feedback to students. AI can draft detailed comments based on rubrics, which instructors can review and refine. This shortens the feedback cycle, allowing students to learn from their mistakes while the material is still fresh in their minds.
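As a rough sketch of how such a rubric-driven feedback loop might be wired up (this is not Learnovate's actual implementation; the rubric, the names, and the stubbed `draft_comment` standing in for a real LLM call are all hypothetical):

```python
# Hypothetical sketch of rubric-driven essay feedback. draft_comment()
# is a stub standing in for an actual LLM call; in practice the
# instructor reviews and edits every draft before students see it.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    description: str
    max_points: int

RUBRIC = [
    Criterion("thesis", "States a clear, arguable thesis", 20),
    Criterion("evidence", "Supports claims with relevant evidence", 30),
]

def draft_comment(essay: str, criterion: Criterion) -> str:
    # Placeholder for a model call: prompt an LLM with the essay text
    # plus this rubric criterion and ask it to draft detailed feedback.
    return f"[AI draft] Feedback on '{criterion.name}': {criterion.description}."

def draft_feedback(essay: str) -> list[dict]:
    # One draft comment per rubric criterion, queued for human review.
    return [
        {"criterion": c.name, "max_points": c.max_points,
         "comment": draft_comment(essay, c)}
        for c in RUBRIC
    ]

drafts = draft_feedback("Student essay text goes here...")
for d in drafts:
    print(d["criterion"], "-", d["comment"])
```

The point of the structure is the review step: the model only drafts, and the instructor stays in the loop before anything is released.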

What are the ethical concerns surrounding AI in education?

Ethical concerns include bias in AI-driven tools, data privacy, and the potential for AI to perpetuate inequality in educational outcomes. Joon Nak Choi emphasizes the need for human oversight to ensure AI is used responsibly and ethically in educational settings.

How does Joon Nak Choi describe the future of work with AI?

Joon Nak Choi envisions the future of work as a collaboration between humans and AI, where humans act as supervisors and AI as junior assistants. He warns against over-reliance on AI, which could lead to a loss of critical thinking and decision-making skills, likening it to the human descendants in the movie Wall-E.

What inspired Joon Nak Choi to start Learnovate?

Joon Nak Choi started Learnovate to address the challenges of grading essays and preventing cheating in online exams. His goal was to make essay grading more efficient and provide timely feedback to students, enhancing their learning experience.

What is the significance of feedback in education according to Joon Nak Choi?

Feedback is crucial in education as it helps students learn from their mistakes. Joon Nak Choi highlights the importance of shortening the feedback cycle using AI, which allows students to receive detailed and timely feedback, improving their understanding and performance.

How does Joon Nak Choi address the balance between AI and human input in coding education?

Joon Nak Choi advocates for a balanced approach where students use AI tools like GitHub Copilot to enhance productivity but remain vigilant in auditing and understanding the code. He warns against over-reliance on AI, which could lead to errors and a lack of critical thinking.

What are the key lessons Joon Nak Choi shares from his career?

Joon Nak Choi emphasizes the value of being a generalist rather than a specialist, avoiding hype-driven career changes, and recognizing that peak productivity often occurs in one's 40s and 50s. He also advises fully committing to a field before rebranding oneself.

How does Joon Nak Choi view the role of AI in personalized education?

Joon Nak Choi believes AI can play a significant role in personalized education by tailoring learning experiences to individual students. However, he cautions that societal and organizational limitations may slow its adoption, and incremental changes are more likely in the short term.

What is the long-term vision for Learnovate?

The long-term vision for Learnovate is to provide fine-grained assessments of student performance, offering detailed feedback to both students and educators. This data can also be shared with corporations to improve recruitment processes, demonstrating the value of education in preparing students for the workforce.

Chapters
This chapter explores Joon Nak Choi's perspective on how AI can transform education, emphasizing the importance of incremental changes and leveraging AI to improve existing processes before implementing more significant changes.
  • AI's role in education should focus on incremental improvements of existing processes.
  • Significant changes in education through AI are expected in the next 5-10 years.
  • The short-term focus should be on making existing things better using AI.

Transcript


Do you manage your own IT for distributed teams in Asia? Then you know how painful it is. Esevel helps your in-house team by taking cumbersome tasks off their hands and giving them the tools to manage IT effectively.

Get help across eight countries in Asia Pacific, from onboarding and offboarding and procuring devices to real-time IT support and device management. With our state-of-the-art platform, gain full control of all your IT infrastructure in one place. Our team of IT support pros is keen to help you grow. So check out ESEVEL.com and get a demo today. Use our referral code ASIA for three months free. Terms and conditions apply.

The humans are going to be empowered to become superheroes like Tony Stark. And because you have your loyal AI assistant Jarvis doing all the stuff in the background, and that's the example I always use when I actually give lectures on this topic. What ends up happening is that you need to make sure that you can use AI correctly.

If you offload too much, if you offload inappropriately, if you become too dependent on the AI for all the tasks where you shouldn't be dependent, then all of a sudden, you're no longer Tony Stark. You're one of these fat human descendants in Wall-E, right? The ones who can't even actually get back in their own chairs because they have forgotten how to walk. They have forgotten how to think. They're being fed a steady diet of whatever, of soda pop from a...

Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and we've heard about how AI will impact education, from assessing student assignments to personalized learning. What does the future of education look like with AI?

With me today is Joon Nak Choi, Adjunct Associate Professor from the Hong Kong University of Science and Technology and founder of Learnovate, to help us decipher the future of AI in education. JC, welcome to the show. Thank you very much.

Yeah, I think we got to know each other through the Undivided Ventures team, because we were both advising them on different aspects of the AI piece. But when I was in Hong Kong in June, we had a pretty long conversation relating to education and AI. And I think this specific intersection of edutech is now going to be supercharged by large language models. To start off the conversation, we always go into the origin story. So how did you start your career? All right.

I think I've got a long and convoluted career. And by the way, you can call me JC if that's a little bit easier. So I am Korean by birth, American by nationality, and a Hong Konger by residence. I'm not really sure where I'm from anymore, but I spent most of my formative years in the U.S. Like a lot of other undergrads, I went through my university progression thinking I was going to be a management consultant.

And I was that for a couple of years. But then I went back to get a PhD focusing on a field called social network analysis, which is the quantitative analysis of how people are linked to each other. It's always been a pretty influential field within technology. For instance, the Google PageRank algorithm is originally a social network algorithm from the 1970s. However, most people don't recognize it as tech, and my PhD training is in sociology.
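That kinship between PageRank and social network centrality can be seen in a minimal power-iteration sketch; the toy graph and damping value below are illustrative defaults, not anything from the conversation:

```python
# Minimal power-iteration PageRank on a tiny directed graph. The same
# machinery scores influence in a friendship network: nodes that many
# others point to accumulate rank.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iters: int = 100) -> dict[str, float]:
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:  # dangling node: spread its rank evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:  # otherwise split rank across outgoing links
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
```

Here "c" ends up ranked above "b" because both "a" and "b" point to it, which is exactly the kind of relational structure social network analysis studies.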

So like a lot of sociologists, I ended up migrating to a business school after I got my degree. The funny thing is the kind of research I was doing, on how people are connected socially, using quantitative algorithms: one day I went to sleep, and the next day it had been reclassified as machine learning.

A couple of years later, right around the time when I was actually doing a visiting professorship at Stanford, it somehow got reclassified from just machine learning to AI. So it's kind of interesting: I went from being a sociologist to being a management professor to being an AI guy in the span of about a decade, although really the kind of stuff I'm doing has never really changed.

So you have this very interesting balance of academia and startup life. Can you talk about how AI has factored into the different endeavors of your current journey? So the first time I saw something about a neural network was back in 2004. It was in one of my first classes back when I was a PhD student, and some of the psychologists were talking about this as the next big thing.

And honestly, at that time, I was like, yeah, this is actually going to be really cool. But it's so far into the future where it's not going to impact me for a while. And it did take me about 15 years to really get to it. But it's really interesting how it's impacted my career in the past five years or so.

The first time I was using large language models was back around 2018, 2019. I was working on a startup called Zector that actually applied some of this technology to survey data. And so we were looking at an earlier generation of models; I think we were working on BERT at the time. And I was like, wow, this is really cool, but the results aren't nearly good enough. A couple of years after that,

I actually founded Learnovate, which is more in the educational technology space. And then what ended up happening was all the stuff I'd been working on kind of blew up in November 2022. In other words, ChatGPT came out and everyone was like, wow. Never mind that the technology had existed for a long time, and the only real difference between GPT-3 and ChatGPT was a better user interface, right?

but it mattered, it hit the popular imagination. And because I've been trying to push for online proctoring using some visual processing algorithms,

people in Hong Kong somehow started seeing me as an AI and education guy. Yeah, that is the interesting thing, right? You started working on the first two iterations of what is now known as large language models. And I think it's only when somebody decided to throw the entire internet in as a data set that we found the scaling laws, and then we found the emergent behavior of ChatGPT. But rewinding back to this very interesting

journey that you have gone through till today, what are the key lessons you want to share with my audience?

Wow, there are so many key lessons. Actually, can you give me a little bit more guidance on what your audience might find useful? Anything like life lessons, you know, sometimes career advice. I think that's what they usually want. So in terms of life lessons and career advice, wow. People tell me I've actually had a very unusual career. Ever since I was an undergrad, I resisted specialization.

I was a jack of all trades. I would like to say I was a master of some. Some people would actually say I'm a master of none, right? It's hard to make it as a generalist sometimes, but it seems to be working right now. So I would say that there are really big opportunities if you go generalist. But at the same time, early in your career, it might be good to heed people's advice and go specialist.

What's another piece of career advice? I think that there is a tendency to get caught up in the hype whenever something new comes along. AI being one example, blockchain being another, and people go all in. And it's...

I think the career advice there would be to make sure you're fully committed to something before changing your personal branding. Because I think to do something like this without being fully committed is extremely dangerous. And I see a lot of people who were supposedly all in on blockchain, but not so all in today. So what's another piece of career advice? I've actually had the pleasure and, frankly, the honor to work with a lot of very talented students.

I'm seeing the young people achieving great things at a fairly young age now. I think that's also very possible, but I also think it's really important to remember that you can actually do your best work in your 40s and even into your 50s, right?

So there doesn't need to be the hurry and the urgency to try to do something big early on in your career, especially when you're doing startups. It actually turns out, according to research, that your 40s and into your 50s are the peak productive times for founders to produce the best startups. So...

I suppose that these are actually useful pieces of advice. Hopefully they're useful to your listeners or viewers. Oh, it's super useful to me. At least I know I'm not going to be irrelevant, because I'm a startup founder who's getting very close to 50 now and just initiated that journey.

But I want to come to the main subject of the day, which is about edutech and AI and some of the work that you have done in Learnovate. So maybe the first way to start is: how do you see generative AI transforming education, specifically when you think about content creation and personalization for learners? So I think before I actually jump into the potential that AI poses in education, which is quite substantial, I want to talk a little bit about the different hats I'm wearing.

First, I am a UST professor. I do a lot of teaching, and technically I am an adjunct, because it is actually not allowed under Hong Kong law for somebody at a public university to be full-time and also be doing a startup.

Having said that, I kind of have a quasi-full-time role at the university. I teach quite a bit. And I also, how should I say, I've been invited to participate in several university-level and even cross-university initiatives on AI and educational technology. So I see a lot of things on that end. And I also have my own startup, which is an HKUST spinoff.

And it is also funded partially by the university. I am actually very, very grateful for the university's support of all of this. And so I see a lot of different things from a lot of different angles, probably more than most people. I'm also involved with organizations such as the Digital Education Council, which, of course, will be having its annual meeting in Singapore at the beginning of November. So if anybody would like to meet me in Singapore, I'll be around that first week. Please do reach out to me.

And we'll have a lunch there as well, right? Yes. Yeah. So where do I begin? I think AI in general is a topic in which there's been so much hype, and also so many skeptics of the hype, that what AI can actually do and why AI should not be used for everything sometimes gets lost along the way.

And education is no exception to this rule. So what can AI do in education? I think on one hand, you have to be really careful not to get caught up into the hype. There's a lot of people who are talking about moving directly to a personalized learning model. There's a lot of people talking about completely changing the way education is done. The reality is that they're buying a little bit too much into the hype.

Because it's not just technological limitations, which are unfortunately all too real. It is also about organizational and societal limitations. What can you do? What are you allowed to do? Those are two different things. And there are limitations along the way there. So at the other extreme, you can actually do what a lot of the corporates are doing. And this is kind of the approach I'm putting forward for education in the short term as well.

How can you take existing processes and make them better? And Bernard, I believe you're the one who actually coined this term: there's a difference between doing better things and doing existing things better.

And I'm firmly a believer that we can actually do a lot of existing things better using AI in the short term while society kind of catches up. And then there is some talk in some circles within education about, how should I say, less incremental changes that will be coming in the next five to 10 years.

But in the next five years, where a lot of us will be making our careers and trying to build companies and really trying to make an impact, you really have to focus on getting things done over the next five years while planning for the longer term.

And in the next five years, I very much see the world following along the lines of incremental change: taking things that exist today, processes, organizations, the way people think, and making them better using AI, before we see bigger changes about five to 10 years from now when people are more ready.

So one of the things that I do in my class on the first day is to show the assignment that I gave them, a written assignment for my students. And then I basically run ChatGPT in front of them and tell them straight: I'm an AI practitioner, do not try to do that. However, you can definitely write the entire logic flow of your essay in points and use ChatGPT to help you write,

but not try to get GPT to write your essay. And that is totally allowed. So you and I, as educators, think a lot about this aspect of grading, but maybe what are some of the examples where you see generative AI actually making a very tangible difference in classrooms or on online education platforms? So to back up a little bit,

one of the points I've been emphasizing, and this is the same point I made at the World Economic Forum event about a month ago and at a conference Turnitin recently held in Hong Kong, is this:

Before we actually talk about how education will change and what kind of impact AI can make, we have to really think about the point of education. A lot of people don't understand the background of this. At first, higher education especially was about training values, character, and critical judgment in decision makers. And then higher education became about teaching practical skills, which is almost a German model from about 150 years ago.

And somewhere along the last 50 to 100 years, education shifted into this mode where we need to measure everything. And then what can you measure? How can you measure it? That kind of became the tail that wags the dog. So what are universities often going for? They're trying to get higher rankings. How do you get them? You do certain things and then everything else kind of follows. Now,

I feel that there's an opportunity for AI to change this. Because if you're looking for stuff that's measurable, a lot of times it becomes memorization and stuff like standardized exams. So AI devalues certain types of knowledge, the kind of knowledge that is memorized rather than something that is actually used. So then...

I think it gives us an opportunity to kind of move back towards more of a value and skills kind of education. In other words, it's not what you know, it's more how you use it. And AI can be actually a tool to kind of help gauge what you know and expand your set of knowledge. So I'm sure we're on the same page here.

And having said that, how does it affect the classroom at a day-to-day level? As you well know, I'm kind of an evangelist for integrating AI into teaching and learning. Learning being more important than teaching, of course, as it should be. One of the first places where this came to a head is the notion of the essay. An essay is far better than multiple choice or even short answer for assessing certain forms of knowledge.

Now, that being said, an essay is also very vulnerable to manipulation. So what happens if students are using ChatGPT or Gemini or something else to write their essays for them? There have been a number of different approaches. For instance, the folks that I met from Turnitin really got a head start with what is essentially an AI essay detector.

And of course, there are major limitations to this. And I believe that the folks at Turnitin are aware of this, so they're really trying to promote it as one source of input about whether somebody is using AI. But I'm from a different school of thought, where I think it's important to emphasize that it's not just the notion of integrity that matters, because you can always assign in-class essays, right?

And with in-class handwritten essays, nobody's going to be using AI to cheat on those. The more interesting use case for me is the idea that we should be teaching students to use AI to write their own essays.

In other words, I talk to a lot of corporates here in Hong Kong, as you do in Singapore. And one of the things they're really emphasizing is AI readiness. They want to recruit people that are able to use AI, not just abuse it, use it to be more productive and deliver better work.

And I think we as educators have a responsibility to kind of integrate that into our curriculum. I was on the record about a year and a half ago, amongst the first in Hong Kong at least, that I was pretty much requiring students to use AI to write essays, co-production, if you will. And this was a little bit controversial at the time. Of course, now a lot of other people have actually come around to this mindset.

But there's a class that I'm co-teaching with HKUST's Director of Educational Innovation, Sean McMinn. And Sean is one of the leaders in AI and education, not just regionally but also globally. If you haven't looked him up, take a look, because he's actually doing some really awesome work. He and I are co-teaching a new core class at HKUST

where we kind of look at AI: we look at what it does, we look at what it cannot do. We also simultaneously look at human intelligence: what are humans good at, what are humans not so good at, and then how do you combine them, not only at the individual level but at the level of teams and organizations. And a lot of what we teach is how to co-produce properly. In other words, how can you use AI without actually messing everything up?

And then there's an old American TV commercial from the war on drugs. It's a little silly, but I think it's a good analogy. They actually hold up an egg: this is your brain. Then they start frying it in a skillet: this is your brain on drugs. Now, I like to change it up and say, this is your brain. Crack it open, have it fry: this is your brain on AI.

Then the next thing I add is: let's make an omelette. So I think that is a metaphor for the entire approach that we use in that class, where we very specifically and explicitly talk about how the way we should be using AI is directly linked to the relative strengths and limitations of our human brains.

And so we do have a final essay for that class. We also have quizzes. But the quizzes are in-class handwritten essays to make sure they're able to think on their own. But the final essay is a group project in which students take it step by step, literally, writing reflections on the way about how they use AI. And then in the end, they'll present not only the end product,

but also the process. The end product is, once again, more or less an essay, but it is an essay on AI created using AI. And they actually have to be very mindful, using what we call metacognition. In other words, you have to be aware of the way you're thinking and how it's changing.

Can I interject here and just ask you this, right? I mean, in the essay situation, generative AI is capable of generating different tonalities, different forms of expression. It's like in literature, there is no one single source of truth to what you write, obviously.

or the way you think about it, from a very abstract point of view. But in the case of coding, which is the other big use I'm seeing of generative AI in education: some schools in the US and UK allow students to use GitHub Copilot or CodeWhisperer or equivalent software, while others don't. There are two different schools of thought. And I think the question is: if you are learning how to code, you should be learning how to read, audit, and

make sure that code doesn't go wrong. If you trust the AI too much, things will go wrong. It's the whole problem of leaving AI to do all the work. How would you address that balance in the coding situation I've just described, which I think a lot of educators are currently struggling with as well? So it is really interesting that you mention coding.

Because what most people don't recognize is that coding is just another language. And large language models are quite good at languages. So I think there are two extremes to be avoided.

At one extreme, you want to make sure you're using AI, because if you don't, I think there's some truth to the idea that AI is not going to replace you, but your competitor using AI will indeed replace you. For instance, in a place like Vietnam, where you're seeing a lot of programmers working

I'm actually hearing that a relatively small number of programmers who are embracing AI are able to be about 2-3x as productive at just getting stuff done. So I think if you're not using AI, you risk falling behind, especially in coding. At the other extreme, you have people who think that AI can do everything for you. And the simple reality is this is just not true.

Because AI, frankly, doesn't think. It doesn't reason. It doesn't do planning. Although some of the new models, like ChatGPT's o1, may actually be able to do some of that, it's still very slow and inefficient, and you're not going to get the best results. So it seems to me that the right

mix of roles should be the human as supervisor and AI as the junior-level assistant,

which is, of course, the model that everybody seems to be going towards nowadays, right? So I can add to your point about Vietnam. In my last corporate role, I had 12 Vietnamese engineers in my first year. We actually did this 15-month project for coding. We scoped it out, and we gave them 50% with and 50% without Copilot. We saw productivity improve by 50%.

What happened was, when we started giving everyone Copilot after the first month, the project went from 15 months down to nine months. And that was pre-ChatGPT. And when I tried it myself (I cannot produce production-level code, but just did it as a toy model, as you have rightfully indicated), relying on my memory of how the product itself worked, I actually replicated the entire thing in three weeks

with ChatGPT available. So that reduction in time is actually really significant if you really know how to use it. I do think that it gets better and better with time as you get more and more familiar with it, as you start understanding what you can use it for and what you shouldn't, and you get more efficient. And this is something in general that you see with most AI tools.

But I think it actually not only reduces the time required, it also expands the kinds of people who can actually do certain tasks. Because what people are good at is planning, architecture in technical lingo, just figuring out what you're trying to do and how you're going to do it and chopping it up into little pieces.

For each of the little pieces, if you can prompt AI to do it for you in a clear and concise way (it doesn't really need to be concise, but it has to be very clear, with no room for misinterpretation), you're going to get fantastic results. And you're going to get stuff at the fine-grained execution level of detail that might even be better than what humans are able to do.

So let me also give you an example of how this is transforming the nature of work. Do you have any questions before I jump to the next topic? I have a different question, actually about a point that you just made. There is actually bias, and maybe even sometimes data privacy issues, in these kinds of AI-driven education tools, right? There are two perspectives: the educational institution's point of view,

and the point of view of the outcomes that we want to generate for the students. So I'm not going to conflate them. How would you think about making AI solutions ethical and not perpetuating bias or even inequality in the outcomes, given the examples that we are talking about, whether it's coding, writing an essay, etc.?

So, Bernard, I'm actually going to take a pause on that question, because if you let me start talking about it, I'm going to go for three hours. Let me just jump back and wrap up with the changing nature of the workplace. I am

advising the master's program in business analytics at HKUST. One of my core arguments is that the kind of technical things that a traditional computer-science-oriented data scientist is able to do and what a modern business analyst is able to do are actually rapidly converging.

The reason for that is that the real advantage of a traditional data scientist is having some of the engineering and back-end execution capabilities that some business analysts may not be trained in. In general, the difference is that data scientists and computer engineers are

really good at the nitty-gritty of coding, whereas a business analyst understands the basics and can figure it out, but it takes more time, because they don't know every little nitty-gritty detail of Python. Now, what I'm actually finding in the past year and a half is that the business analytics students are starting to use ChatGPT to really start coding the nitty-gritty.

You don't have to go and figure out what the API interface for a model needs to look like. You can actually type that into ChatGPT, it'll draft it for you, and you can test it and see if it works. And all of a sudden, you've made the business analysts like 4x as productive while actually maintaining their competitive edge,

which is, of course, their ability to understand business processes and problems far better than most data scientists. I mean, the best data scientists do get it, and a good data scientist is supposed to understand it. But the percentage of data scientists who really understand processes and strategic objectives, I'd say it's fairly small. And if you're actually one of those data scientists, you're going to get promoted very quickly.
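That draft-then-test loop can be sketched in a few lines; the endpoint payload shape, field names, and the `build_scoring_request` helper below are purely hypothetical, standing in for whatever glue code an analyst might have ChatGPT draft:

```python
# The kind of nitty-gritty glue code an analyst might have ChatGPT
# draft rather than memorize: assembling the request body for a
# (hypothetical) model-scoring endpoint. Nothing here is a real API.
import json

def build_scoring_request(model_name: str, rows: list[dict]) -> dict:
    """Assemble the JSON body for a hypothetical model-scoring endpoint."""
    return {
        "model": model_name,
        "inputs": [{"features": row} for row in rows],
        "options": {"return_probabilities": True},
    }

# The analyst's job is the checking step: run the drafted code and
# verify the output matches the endpoint docs, fixing the draft if not.
body = build_scoring_request("churn-v2", [{"tenure": 12, "plan": "pro"}])
print(json.dumps(body, indent=2))
```

The value the analyst adds is not typing this out from memory but knowing what the request should contain and verifying that the draft actually works.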

So I see that there's an emerging role for greater hiring of people coming out of these business analytics programs, because they have the business acumen. They're able to function almost as MBA-lites and at the same time have the technical chops, and eventually take on the role of actually leading technical teams. So it's an interesting phenomenon that I think people are not

appreciating nearly as much as they should. It's devaluing the nitty-gritty knowledge of something like Python, because ChatGPT can actually draft it for you. Copilot can do it even better, right? And at the same time, it's really highlighting a skill set that a lot of people at the C-level have been telling me is badly needed amongst their technical staff, which is an understanding of the business side of things.

If I were to rewind back from the future of work and ask you now about things like privacy and bias in AI within education tools, what would be your response? So I think in general the future of work, to make a very general statement, is going to look like the Avengers. Specifically, the humans are going to be empowered to become superheroes like Tony Stark.

Because you have your loyal AI assistant, Jarvis, doing all this stuff in the background. And that's the example I always use when I actually give lectures on this topic. What ends up happening is that you need to make sure that you can use AI correctly.

If you offload too much, if you offload inappropriately, if you become too dependent on the AI for all the tasks where you shouldn't be dependent, then all of a sudden, you're no longer Tony Stark. You're one of these fat,

human descendants in Wall-E, right? The ones who can't even actually get back in their own chairs because they have forgotten how to walk. They have forgotten how to think. They're being fed a steady diet of whatever, of soda pop from AI. So one of the things I ask students at the beginning of the semester in all my classes is, would you cheat on your essay using AI?

And frankly, I've got a good relationship with most of my students, so they're pretty honest with me. And I don't know, about a quarter of them actually raised their hands. So I say, yeah, let me show you a movie clip. And I tell them, look, if you use AI inappropriately, for instance, having it write your essay for you without really doing the thinking, you're going to end up as one of those guys. Do you want to be one of those guys? Then a lot of them do change their minds. Yeah.

Just one question before I get to Learnovate. What's the one thing you know about the intersection of AI and education tech that very few do? I think that the potential in the short term might be more limited than people think it is. But the potential in the long term is much bigger than people think it is.

Because whenever you're looking at technology, you must not just look at technology, you must also look at the societal background. For instance, you're talking about AI and ethics and privacy and security and all that kind of good stuff, right?

There are legitimate fears about that, and I don't want to get into the technical details quite yet. But schools, universities even, are very cautious organizations by nature. After all, if you're connected to government and you actually have a social mission like we do, then it's important that you take these kinds of things seriously. Schools are very slow to adopt in a lot of ways.

For any kind of adoption, you have to actually go through multiple committees and make sure you have multiple approvals. And sometimes it can be slower than you'd like. However, what also needs to be considered is that the notion of the university itself is about to undergo a fairly major change.

Over the next five years, the usage of AI in educational institutions will mainly be an effort to take existing processes and shore them up and patch them up so they work better. But you're starting to see some big changes on the horizon. If you look at what's driving a lot of revenue for universities, it is actually mainland Chinese students who are studying overseas.

And overseas students actually pay additional tuition. And they're such a large percentage of the student base at some of these universities that the effects of restricting them are severe, in a place like Australia, for instance. From what I'm hearing, the Australian higher education sector is actually in something of a panic because they see massive revenue shortfalls coming in the near future.

At the same time, you're also seeing a renewed skepticism towards universities in a jurisdiction like the US, where one of the two major political parties is very anti-higher education, right? So you're going to see a lot of resource pressures. There's going to be a lot of pressure to innovate. And in five to ten years' time, you're going to see a lot more willingness to try to actually build something new and replace the existing model. But

In the short term, there will be a very strong effort to kind of patch up the existing model. And to be honest with you, I think a lot of the stronger universities will be just fine. HKUST, I think, for instance, is perfectly positioned for the future. So I don't think you're going to see as big of a change driven by AI as you might expect in the next five years. But you might see some massive changes being driven in the next five to 10 years. So...

I think that a lot of people are going to be disappointed and they're actually going to be thinking, oh, nothing's happening. Then they'll give up keeping an eye out for it in the next two or three years. Then all of a sudden, you're going to have some huge changes being announced maybe five, six years from now, driven largely by resource pressures. That is something that we learn in sociology. People don't want to change unless they're made to change.

and those resource pressures will cause a change to the business model. The biggest example that has come out of this is the University of Adelaide, a top-six school in Australia, which recently announced that they're getting rid of in-person lectures. They're going to move everything to personalized, pace-as-you-go education online. And we don't know if that's going to actually happen. There's been a massive pushback against it.

But you're going to see a lot more experimentation like that as certain universities are increasingly pressured. So I'm going to get to the conversation about Learnovate. What inspired you to start Learnovate? What is the overarching vision for the company? So it started out from my point of view as an educator.

So I was working pretty actively on Zector for a while. I was also teaching at HKUST, but in a true part-time role. I wasn't teaching all that much. But then there came a time where I transitioned out of Zector and into university teaching closer to a full-time basis. And this just happened to be when COVID hit.

So I was like, you know what? There are certain things I'm doing that are very tiring. One is actually dealing with the number of students cheating on online exams. What can I do about this? But the real passion for me was: how can I actually make essays easier to grade? Because I just don't believe in multiple choice.

I don't think it's appropriate for the university level. If you're expecting students to become critical thinkers, there are some things you can do with tricky multiple choice questions, but you really can't do too much with them. And don't get me wrong, there are times when I use very tricky multiple choice exams, but the format is fundamentally limited and doesn't allow students to exercise the kind of critical thinking or judgment or all the other things that they're going to need,

especially after AI kind of takes over the lower-level tasks. So I was assigning a lot of essays for classes with, I don't know, four sections, 270 students. And it got to the point where every week I was spending about a day and a half of work hours, about 12 hours a week, just grading, because I believed in the value of the essay as an educational tool.

At the same time, it just basically took a huge time commitment.

So I went with these two ideas. I talked with very senior management at HKUST. They put me in touch with one of our senior computer science professors, Huamin Qu. And basically we started building things and got university funding for it. Then it got spun off into its own company. Huamin's less involved now, but I'm kind of running it. And we're just kind of adding things.

This happened before ChatGPT came out. And once it came out, of course, it allowed us to do a lot more. Our focus shifted away from online proctoring to essays, not just grading but also commenting and feedback in a big way. And then along the way, I kind of got sucked into the educational innovation ecosystem here.

Once again, my colleague Sean has been fantastic. He's invited me to a number of different venues and conferences and international organizations, and I'm very grateful for that. That's where I was seeing a need for something along these lines: to make exams and other assessments less about just judging students, are you up to par, and more about also helping them learn. And the notion is that if you can actually get feedback to them fast enough,

quickly enough and in enough detail, with a human in the loop. Once again, I don't trust AI 100% at this point, so Learnovate's technology, all of it, involves a human in the loop. And the way we do it is we use the AI as a grading assistant. At a place like Stanford or Harvard, where you actually have more money than God, you can hire teaching assistants specifically to grade essays for you.

Most universities don't have those kinds of resources, so the instructor or the TA has to do it. But even when you ask your TA to do it, you give them a rubric, in other words, a set of guidelines by which to grade. And then once you give them a rubric, you say, "Hey, here's a big stack of papers. Have it back on my desk in two weeks." And of course, it's going to take four weeks, right? And then you as a professor have to go back through everything and check all the answers and make any changes, and then it goes back to the students.

Our technology, called Pregrade, does essentially the same thing with an AI grading assistant. It takes your rubric, it takes your giant stack of papers, and it gets back to you in a matter of minutes, or when you have a lot of material, maybe tens of minutes. Then you as an instructor can review everything. It still takes time, but a lot less time. And all of that puts you in a position to get feedback back to the students in, let's say, three days instead of three weeks.

And if you give feedback, especially the kind of detailed feedback that humans can't write, because it's a little bit too repetitive and takes too much time, but AI can draft and humans can check, you can actually give all of that back to the students right away. And then the exam or essay is fresh enough in their heads that they can actually learn something from it. So this has been an aspect of AI and essay grading that I didn't anticipate but that I actually think is the most important thing.

It's not even about saving time for the instructors. It's about shortening the feedback cycle to students so they can actually learn from their assignments. And I think it works really well at the undergraduate level.

With MBAs, I think the kind of feedback they're needing, AI probably can't handle 100% yet. They need more human time. But I think even with very smart undergrads, students seem quite happy with this.

So Learnovate is probably using things like natural language processing and large language models. How are those technologies put in place to make the grading experience better? The way you think about it is slightly different: people think you should be using the tool to minimize the amount of time spent grading the essays, but what you were actually thinking about is how to get the feedback cycle quicker and

able to draft that feedback. And there's a very big difference. You know, there's Goodhart's law: once you make a certain measure the target, you actually make it worse. How do you think the kinds of technologies you've developed at Learnovate can be applied to other parts of assessment in education, the pedagogies or the assignments that you give to students? So I actually think it's really interesting.

There's a lot of truth to the old adage that sometimes you learn more by doing than by thinking. And when I went into this, my goal was to save instructors time. But it's only after...

I'd actually been talking with more senior people at the intersection of education and AI that I realized how much of a need there was to give students more rapid feedback. I should have caught it as a teacher, and I had an intuitive understanding of it, but I wasn't verbalizing it like that. And so it's only by being involved that I've found that this is such an important need.

So I do think that there is a huge role to be played in shortening feedback cycles. Now, I think there's a lot of different ways where this technology can be used. If we do want to move to stuff like personalized education, it can't just be one big jump because there will be a societal and organizational rebellion. You need to take it in steps.

You can see what you can do, and you need to also see what you can't do. And you need to take it very slowly to make sure people are comfortable with it. The reality is that I think progress along those lines will be slow for exactly those reasons. I think people are not very comfortable with it. You may see it coming in faster with corporate training than anything else.

However, I do think there is another big immediate role to be played. And this is actually our next-generation focus at Learnovate. There is a need to better assess the outcomes of education. We need to know what the students are learning, partially to be able to give them better feedback more quickly. If you just have a whole bunch of comments on specific students, and you're talking about a 270-person class, you might not be able to get the insights from that right away.

You'd need to actually really look through it, and that could be a lot of work right there. So you need a good analytics platform, one that takes not just the grades but the grades at the level of the rubric, in other words the subcomponents of the grade, and also the comments that are being fed back to the students.

What we're actually doing right now is prototyping a tool that will take all of that and give much more fine-grained assessments: these are the different groups of students, this is what they're good at, and this is what they're not good at. And it gives that to the teacher in a very easy-to-use package.

So something like that, I think, has the potential to transform the nature of teaching and learning in the short term, because you can see exactly which groups of students are doing well at which things. So you can set up specialized review sessions and invite only those students that need them. You know exactly who they are, and you can actually invite them to join that TA session. Right.

That's one way to do it. I also think it's got bigger applications, because the time-honored student course evaluations are actually not a very good way of measuring teaching outcomes. If you're funny, a comedian, for instance, you may actually get high evaluations. Sometimes I wonder if I benefit too much from that, too. But they don't really measure what the students are actually learning.

If you can actually put this kind of technology, at the level of the comments and the level of the feedback and the subgroups, into the hands of the

people who are course coordinators, looking at multiple sections of the same course, then they can actually have a much better big-picture understanding of how the entire course, across all instructors, is doing. You can even put it in the hands of folks who are even more senior, for them to see which teachers are doing really well and whom you would like to reward.

So I think that's where some of this technology comes in, combined with more traditional dashboarding and analytics. But I also think the notion of the dashboard itself is anachronistic at this point, because why use traditional dashboards when you can use dashboards and explanation together? AI is changing all that, and I think there are going to be huge changes made.

I have two more questions. The first one is, what is the one question you wish people would ask you more about AI and edutech? Why are we doing this? I'll ask you that. Why are you doing this? A lot of people are running around like chickens with their heads cut off.

Yeah. So a lot of people are motivated by FOMO, fear of missing out, right? They're motivated by the hype. They're motivated by the fads. And they're really pushing the envelope not because they want to push the envelope; they're pushing the envelope because they read about it somewhere and they want to be looked on as innovators for their career progression. But what is this doing for the students? And when's the last time you thought about the actual end purpose of education?

So think about your goals. Acting on FOMO, fear of missing out, is going to lead to a disaster. Instead, really focus on what you're trying to accomplish, how you're going to get there, and how AI can be useful. And I think you'll get the answers right there.

And my traditional closing question: what does great look like for Learnovate Technologies, or maybe for you, in using AI to enhance education? Gotcha. I think great for Learnovate would look something like, in the early stages, a tool like what I just mentioned, right? Something that will help teachers teach

better, using traditional processes, and enhance the learning experience for the students. So that's what it would look like in the short term. In the long term, we have much bigger aspirations, because how can you actually connect all of this to what the corporates are trying to do?

One of the big pain points with corporate recruiting is their grades just aren't granular enough, right? And that's why the corporates are using standardized exams for incoming recruits before they even give them interviews, right? Do we even want to actually take the time to interview you? Do you have certain basic skills that we are looking for?

And what if we can give them, on a voluntary basis on the students' side, right, to respect their privacy and autonomy, much more fine-grained information at the level of the rubric about the things they did well on and the things they did not do so well on?

Just based off my conversations with some senior leaders in HR, I think this is something that they would value. And it's something that the universities could provide as a value-add, especially as pressure increases on universities over the next five years to demonstrate that they're doing a great job teaching. So I think that is going to be the long-term evolution of what we're trying to do.

But let's not worry about that for right now. Let's make sure that we're doing a great job with grading essays. And it took a little while.

If you look at the history of essay grading using AI, there was a big hype about it around early spring. It seemed like every other innovative secondary school teacher was putting out their own Poe bot to do this. But they were all proceeding without an understanding of the basics. Some of the smartest people I was talking with, the most forward-thinking, the most innovative,

they were making basic mistakes, like assuming that if you run a greater number of essays through Poe, then Poe's just going to get better at grading essays. It doesn't work like that, right? You actually have to have specific training parameters. You have to set it up to train. And even then, it's much harder than they might imagine, right? So I'm seeing a lot of these kinds of

initiatives disappearing. And then some of the pushback I get from some of the teachers, especially at the secondary level, is like, hey, AI essay grading just doesn't work. And my response is AI essay grading doesn't work because you're not doing it correctly. It's a lot harder than you might imagine. It took us much longer than we initially expected to make sure we get essay grading just right. But now I think we're at the point where it is more or less right for a lot of tasks.

And then if we can actually get this right, and if we can put the appropriate analytics around it to make it useful for teachers in the short term, I think I'll consider that a short-term win. It'll be useful. It'll make an impact. Then we can focus on making even bigger impacts. JC, many thanks for coming on the show. In closing, two quick questions. Any recommendations which have inspired you recently?

Actually talking to you. I really enjoyed meeting you. Basically, thanks for actually coming and speaking at my events in Hong Kong. I really appreciated that. Right. And also that lunch we had, I found it refreshing to speak to another leader in the field who really understands both the business and the technology sides.

More often I'm talking with people who are very aware of the business side. Sometimes I talk with engineers who understand the technology. But it's really rare to find somebody who understands both the business and the technology angles, and I think there's a big need for that. It was actually really inspiring to talk to you about this, so I really appreciate it. The other thing I really feel inspired by is the students.

The students are not afraid of change. The students are embracing change. The new core course that Sean McMinn and I are teaching at HKUST is turning into a fantastic course. It's one of the two options to satisfy the university's critical thinking requirements. We are combining the most advanced technology

with the most advanced teaching pedagogies. So half the time, these students are working in small groups to explain what backpropagation is to your 11-year-old sister using Miro boards.

And then you see what they produce, and it's funny, it's entertaining, it's heartwarming, and it's clear that the students understand what backpropagation is. You don't need to actually talk about the chain rule, all sorts of mathematical mumbo-jumbo that none of these people are going to get outside of an engineering department.

to actually teach people what AI is. It is possible for them to understand everything in plain and simple English. And now I'm seeing these kids, first-year students at HKUST, able to explain how AI models are trained to a layman, in English that an 11-year-old kid would understand. I have found that tremendously inspiring. And it makes me wonder if we're overcomplicating the whole thing. And if...

we can actually get everybody up to that level of AI literacy, I think the chances of us getting this right instead of screwing it up go up exponentially. How does my audience find you? Hopefully they didn't find me long-winded. I do like to talk, but I really appreciate the opportunity, and hopefully they find what I mentioned at least informative.

I'll point them to your LinkedIn, Learnovate, and everywhere else. Okay. All right. Thank you very much. And hopefully I can actually see you when I'm in Singapore for the Digital Education Council's annual meeting in the first week of November. Okay.

Okay, I should be in town. And of course, to everybody, we can be found everywhere, from YouTube onwards. And the new format now for the main site is that we're posting the transcript, which I actually spent some time editing. But I think I found an AI workflow that lets me do it faster. So I look forward to seeing you in Singapore, JC, and we'll continue to talk. Fantastic. Thank you very much, Bernard.