
Fixing education for the AI age

2025/6/3

LSE: Public lectures and events

Conrad Wolfram: I believe we are entering the AI age, a major new industrial revolution. The real world has fundamentally changed, and education must reflect that change or fall out of step. Computation has become ubiquitous, and it has fundamentally changed how decisions are taken in the world. We therefore need to rethink education, and maths education in particular, to fit the needs of the AI age. We need to build students' computational literacy so that they can think critically about computationally made decisions and use computational tools wisely. Current maths education focuses too heavily on hand calculation at the expense of computational thinking and problem-solving. We need to change the curriculum, bring computers into teaching, and teach students to use the right tools to solve real problems. We also need to fix the education system's incentive structures, encouraging innovation and reform rather than merely chasing improved exam results.


Welcome to the LSE Events Podcast by the London School of Economics and Political Science. Get ready to hear from some of the most influential international figures in the social sciences.

Okay, good evening everybody. Hopefully you can all hear me. My name is Emma McCoy. I am the Vice President and Pro-Vice Chancellor for Education at the London School of Economics and Political Science. I am very pleased to join both our online audience and our live audience here in the Sheikh Zayed Theatre for this event hosted by the Data Science Institute and the Department of Mathematics.

So the prominence of computational resources, and more recently the rapid growth of access to generative AI tools, has exposed major deficiencies in traditional education. Not only has it raised the urgent need to harness technology to improve the pedagogical process, it has also shown how the subject matter has diverged from what is needed to solve real-world problems.

Tonight's event will explore how we can leverage AI to reshape maths education and whether we are ready to dismantle outdated methods and embrace an AI-driven educational revolution.

To address these questions, we welcome Conrad Wolfram, physicist, mathematician, technologist and strategic director, European co-founder and CEO of Wolfram Research, the company behind Mathematica, Wolfram Language and Wolfram Alpha. For over three decades, Conrad has been a leading voice in reimagining how we teach and use maths in the digital age. Distilled in his book,

The Math(s) Fix, published in 2020, I believe, kind of pre the launch of ChatGPT and so on. That's "math" or "maths" fixed, depending on which side of the Atlantic you are. As usual, there will be the chance for you to put your questions to our speaker, and I will try to ensure a range of questions from both our online audience and our audience here in the theatre.

For those of you here in the theatre, please raise your hand when it comes to the Q&A and someone will bring you a microphone. Those of you joining us online can submit your questions through the Q&A feature at the bottom of your screen. Please also let us know your name and affiliation. For those social media users following the conversation, the hashtag for today's event is #LSEevents.

Now this event is being recorded and will hopefully be made available as a podcast subject to no technical difficulties. Quite relevant for today's talk. Finally, I will please ask that phones are put on silent to avoid any disruptions. And with that, it is now my pleasure to invite our speaker, Conrad Wolfram to the lectern. Well, thank you very much for the kind introduction. It's very nice to be here.

and I have a quite complex talk which I will try and go through quite quickly to give us some time to talk about it afterwards. And so I've entitled it "Fixing Human Education for the AI Age", the age I believe we are now entering. And really, I suppose the opening premise for this talk is to point out that the real world has changed and is rapidly changing,

and to ask the question, how should education react to that? And to try and work through some of the issues that I believe have not worked in the past that hopefully we can fix as we go forwards. So a central way in which the world has changed that I'm going to focus on is what I would call ubiquitous computation.

The fact that pretty much anywhere you are, certainly in a developed country, you have access to massive amounts of computational power. In fact, we each have multiple devices and the cloud and all sorts of things that we can immediately deploy to compute things for us. And that has fundamentally changed how decisions are taken in the world.

And I think one forgets just how fundamentally different it is from even a few years ago. So that is a major fundamental change in the world as we know it outside education. And I would also point out that I would consider computation as, so to speak, the enabling revolution of artificial intelligence.

It's the sort of core technology that's allowed artificial intelligence to happen and for us to be entering this new age. And if you want a comparison with the past, I think electricity is probably the most similar sort of epoch opening technology that there was.

I guess the first motor was run in the Royal Institution in 1821 by Faraday, I believe. And electricity, we are still discovering applications and learning how to deploy electricity 200 years later. I think computation is probably on some sort of trajectory like that, but maybe plus 150 years. So this is a big change in the world and a fundamental change to society. So we're entering...

what I call the AI age, and essentially a major new industrial revolution. It's hard to know, since I'm not old enough to have lived through the previous one, but this feels much more quintessentially human than what's happened before, because it's attached to the thing that we've always thought of as marking humans out: we can think in a way beyond

animals and other things. And now somehow machines seem to be able to do this thinking. And so this feels very human, potentially an attack on humans as well as a complement to humans. Maybe it felt like that when you suddenly had machines that could physically move things that previously humans were able to move. But there were animals that could move things before, and so forth.

It's certainly very fast moving and I think it also feels closer to the bone perhaps than previous revolutions.

Well, my day job, as Emma kindly mentioned, is we've been building technology for doing computation. And this has been everything from very technological things, things that real specialists, research people would do, much more recently to things that are much more general purpose, because computation has become a more general purpose thing than it was when we sort of set going.

And it's a slightly interesting trajectory. So I believe we're shortly going to be 37, and I believe that's June the 23rd, which we discovered with Wolfram Alpha was Alan Turing's birthday. But that was a coincidence. And Mathematica was launched in 1988. And in fact, Steve Jobs named it Mathematica, because we were bundled on his NeXT machines, on which the web was then invented at CERN.

So the pattern here is quite interesting. It's kind of an iteration between us and computation in the sense that computation was a very specialized thing 37 years ago. More or less.

Now, it is everywhere, all the time. And so we've been sort of iterating that. Hopefully, we've been driving some of that. But we've also been sort of benefiting from that. And now, you know, in every boardroom, in every company, in every government, people are discussing things that I would count as computation, data science, AI, how do we better make decisions on these sorts of things.

So this is a thing that sort of moved a lot between us. Okay, let me see what I can do with this. Is that better? Okay. And so if you look at the pattern up to 2022, we've seen...

computation, so to speak, get to be everywhere. So it's always been associated with, obviously, maths itself, sort of intertwined with computation. You can name things in different ways. Physics has been a very central user and driver of computation, and accounting of simpler mathematics in many ways.

But there are other things. So there are newly conceived areas that really have only come to be there since the rise of computation. So programming, data science, and of course AI we've discussed. Also many parts of finance. They really depend on sort of high-level computation happening in many places. This third column is quite interesting because these are subjects and areas that have existed for a long time but now have a sort of computational element that's driving them hard.

So biosciences is a very typical example. Most bioscience was very qualitative in many ways until, I don't know, a couple of decades ago. Now you have computational biology and other areas that are highly computational; that's the sort of cutting edge. And often they're quite hard areas to deploy computation in. But the point here is, in virtually every area of life,

Decisions depend on computation. Progress depends on computation in a way like it's never happened before. And one forgets just how much this has moved forward. So, a simple example. I'm going to have to maybe perch this somewhere. Or maybe, I don't know, we can switch to this one if that's easier. So, a typical thing one might do now...

which was unimaginable not long ago, is ask for a sequence in the human genome and look it up. So I'm going to try and... oh my gosh, I picked one and there was only one site for this particular thing. Maybe I'll shorten it a little bit and we'll get a few more. So what we're doing here is we're searching the human genome and we're finding everywhere where this shows up. And the...

I don't know, maybe I haven't evaluated this or something... there we go. Something funny's happened there, okay. But the point is, you know, the idea that you could take a whole human genome, sequence it, and find, you know, in a few seconds on a laptop,

where a particular sequence appears in the whole thing is a very new thing. I mean, right back, I think it was in the Clinton era in the US, they didn't know how long it would take to sequence a human genome. It might take decades. And now we can call out the results of this very quickly. And that's a typical sort of computational thing you couldn't have imagined a little while ago.
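The demo runs in Wolfram Language against the real genome; as a rough sketch of the same idea, here is the search in Python on a made-up toy sequence (the data and function name are illustrative, not the actual demo code).

```python
# Find every position at which a short pattern occurs in a long
# DNA-like string. Toy data stands in for a real genome.
def find_all(genome: str, pattern: str) -> list[int]:
    positions = []
    start = genome.find(pattern)
    while start != -1:
        positions.append(start)
        # search from the next character so overlapping matches count too
        start = genome.find(pattern, start + 1)
    return positions

genome = "ATGCGATACGATTAGCGATACG"
print(find_all(genome, "GATACG"))  # prints [4, 16]
```

A real genome is around three billion characters, but the same linear scan still runs in seconds on a laptop, which is the point being made above.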

Lord of the Flies, you know, analysis of sentiment, you know, this is a straightforward thing to do. You can suck in the code, you can suck in the book, you can look at, you know, what happened to the sentiments of the characters. This is something that's very straightforward. And by the way, the code for doing that is this. I mean, it was this long.

You know, you can just, you know, there is the Lord of the Flies, etc., etc., and off it goes and produces this. So the fact that you could imagine doing essentially a literature analysis like this is a new facet to how we make decisions. So that was, in a sense... actually, are we back on this? Okay. It sounds like it's functioning. Good, okay.
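The demo uses Wolfram Language's built-in sentiment classifier; here is a minimal Python sketch of the idea, with a tiny hand-made word lexicon and an invented sample text (a real analysis would use a trained model and the full book).

```python
# Score each sentence of a text with a tiny sentiment lexicon,
# giving a crude "sentiment over the narrative" curve.
LEXICON = {"good": 1, "happy": 1, "safe": 1,
           "dark": -1, "afraid": -1, "dead": -1}

def sentence_scores(text: str) -> list[int]:
    scores = []
    for sentence in text.split("."):
        words = sentence.lower().split()
        if words:  # skip empty trailing fragments
            scores.append(sum(LEXICON.get(w, 0) for w in words))
    return scores

sample = "The island felt safe and good. Then it grew dark. They were afraid."
print(sentence_scores(sample))  # prints [2, -1, -1]
```

Plotting those per-sentence scores over a whole novel gives the sentiment curve shown in the talk.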

Enter 2023 and generative AI sort of coming on stream. And the question is now, this high-level computation that is everywhere can now become sort of accessible to everyone. And I think this is the big potential drive that we have, and it's rather exciting. So, I mean, one way to look at this: ChatGPT-style LLMs are a major bridge between, in a sense, the human linguistic world and the computational world. And, you know, that in itself, as I was

sort of intimating, is built on, you know, 3,000 years of computational ideas and 50 years of mechanized computation. It's because we've got high-powered mechanized computation that we can even consider doing something like this. And of course, that's a major driver of what's happening. So...

Simple example here, you know, make me a program to show EU countries each marked with their GDP per person. So if I do this, we get a piece of code. This is in our new notebook assistant, and it's pulled out, you know, data from the European Union. Hopefully, if I evaluate this, we should get a, you know, a map showing us, here we go, showing us the GDP per population, you know, nicely plotted on this map.
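The generated code in the demo is Wolfram Language with live EU data; as a stand-in, here is the core computation sketched in Python. The country labels and figures are placeholders, not real data, and the real demo draws a map rather than printing numbers.

```python
# GDP per person = total GDP / population, for a handful of countries.
# Placeholder figures only; a real version would pull live country data.
gdp_billion = {"A": 4000, "B": 2800, "C": 2100}   # total GDP, in billions
population_million = {"A": 84, "B": 68, "C": 59}  # population, in millions

gdp_per_person = {
    country: round(gdp_billion[country] * 1e9
                   / (population_million[country] * 1e6))
    for country in gdp_billion
}
print(gdp_per_person)  # {'A': 47619, 'B': 41176, 'C': 35593}
```

The interesting part of the demo is not this arithmetic but the workflow: utter a sentence, get code like this generated, run it, and see the result on a map.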

So the fact that one can kind of utter something, get a piece of code written, and deploy the code, compute it, in a very short space of time, is pretty new. You could have written the code before,

But you couldn't really have gone through this sort of full workflow quite so quickly. I think this will open a lot of things. Just to point out, by the way, that you don't always want to interact with things linguistically. And the fact that you have sort of high-level code matters too. Let's see, is rainbow a... I think we have a rainbow-style color

a gamut. So I've reshaded that by typing something into the code. So sometimes you want to be abstract coding and sometimes you want to be linguistic and those are useful interactions as we go forward. But that's a pretty new sort of ability to be able to do those sorts of things. So that's a quick snapshot of the real world of computation. The sort of big issue here is that education has to reflect this change.

or it's out of step. And the first and most important part of this is it has to reflect the real outside subject matter that's going on. So, the world of mathematics, the subject, has changed. And it's changed dramatically because we now have computers that do what was the hardest step, and we'll come on to that. We can't ignore that in deciding how we're going to educate people, because otherwise we're going to educate people for the wrong subject.

So that's a crucial thing I want to talk about at some length. The other thing is, whenever you're doing anything which involves machinery, you should use today's machinery, not yesterday's, at least when you're learning. Use the stuff that you're actually going to use in the real world today, and as that changes over time, you need to learn how to adjust to it. I think the other thing that's specific this time is we've got to understand how the human is supposed to be involved in the future. The real world is changing in this

AI age in which we don't quite know what the human quotient of thinking is, to put it bluntly. And so we need to work out as part of education how we help to optimize that, you know, the human quotient in a sense. And one of the pieces of that is to work out very carefully:

What is "thinking", in quotes? What is the human thinking that we want to have happen? What is the human contribution? And we should be very explicit about that when we talk about education. So most of this talk, I'm going to talk about the subject, the content. Whenever I go to education conferences, which I don't do too much, but...

People immediately jump into the pedagogy. Oh, you know, it's great, we can now teach maths or physics or whatever it is, and we can do it with AIs. Wonderful. But the problem is we need to figure out how the real world changes the subject before we decide how we're going to deliver the subject. And there are great things we can do with modern technology in improving the pedagogy, but it needs to be on the right subject as our first priority. So I'm going to talk a lot about maths today.

Partly because I know something about that, partly because I think it is the most extreme case of the problem so far. And in a sense it's emblematic of the AI age we're entering, that the outside world is changing. It's changed pretty rapidly with respect to maths over a number of decades. It's the forerunner of a lot of the other changes we're going to see.

As history and things have evolved, maths has dramatically changed because it became mechanized; but now we've got mechanization of many other subjects. So I think we can learn a lot from maths, plus it really does need fixing in its own right. Another thing about maths is that people perceive it as the most important school subject with respect to the AI age we're entering. It also has a negative in many respects, which is that it is very quantifiably assessable. If you get 76% in your maths exam,

People believe that that's a very reliable measure of how you did because it seems to be very numerical. If you get 76% in your English exam, people might say, "Oh, well, there are different ways to measure this," although that's mostly been locked down more. So that's where the power of quantifiable assessment comes from, and it's not altogether positive. And a huge amount of time is spent on this through many years of a student's life. Huge numbers of students, essentially in every country,

It's one of very few core subjects that essentially everybody is forced to learn for years of their life. And yet I think it's more divergent from the real world than many of the other subjects that you see in school. And I think maths, certainly at school level, has done an absolutely terrible job of adapting to the real world. We need to fix that, and also not repeat it for the other subjects that are coming up for big change.

So back to Steve Jobs. Steve Jobs, early on when Mathematica came out, had this quote which I thought was rather nice: "Mathematica will revolutionize the teaching and learning of math by focusing on the prose of mathematics without getting lost in the grammar." A very Jobsian-style quote, somehow. I've highlighted "will" because this was 1988. And whether we're talking about Mathematica or maths in general, I thought this was very relevant to what I realized I've been talking about, which is that this is yet to happen.

And if you look at maths from 1988, certainly in schools, it's pretty similar to what you see now. And that seems extraordinary. So just to zoom out a moment, you know, the mainstream, a key mainstream need for this AI age we're entering is good decision taking at all levels possible.

So that means in life, in work, in society, across many people, using the power of computation. And I tend to talk about this, somewhat arbitrarily, with the idea that computational literacy is the terminology to describe the core need across society. We need a society that is literate to the extent that people can be sceptical about computationally taken decisions, have a sensible

reaction to them, and believe that they understand enough not to be scared by them but to be sensible with respect to them. That goes up to computational thinking, which is where you need specialist people to drive the subjects forward, to do high-powered data analysis and so forth. And in between those there's everything from, you know, just survival in a modern society to really adding value. So at all levels I believe we need

what I would call a core computational subject. I avoid the word maths because what people mean by it is very polarizing, but there's this core subject that I think we do need at all levels, from base survival to real value-add. I would add to that that I think we need something that is good for general-purpose reasoning, even if you're not explicitly using the maths process.

I mean, I think if you look back 100 years, philosophy was very much put in that category. I would argue that actually computational thinking is probably more relevant for many types of decision-making if you had to pick one. My mother was a philosopher, so I was avoiding philosophy anyway. But the...

I do think, you know, when you think of PPE, for example, at Oxford: is that really... You know, that was introduced, I believe, in the 1920s or so as a replacement for Greats, as in Classics, as the main core for thinking and for sort of preparing people for public life. And I would question whether that's exactly the right set of thinking skills that you really need. We need a slightly new core set, based on the fact that we do decision-taking in a different way. So, to be blunt...

A key educational crisis we have, certainly at school level, is that there is no core computational thinking, computational literacy subject. Maths ought to be that subject. But I would argue if you're talking about something like A-levels, it's 80% off track. I mean, not 20% off, something like 80% off. It's pretty much the wrong subject for achieving this. And I want to explain to you why I'm saying that. I mean, it's quite easy to summarize. No computers for computation.

In real-world maths, computers do all the calculating, pretty much. In educational maths, people do almost all the calculating. And there's your problem. And that is obviously diverging further and further and further. And there are many consequences to this problem, actually worse than you might initially think. Certainly worse than I initially thought. So essentially, this is the 80% subject mismatch. You're learning endlessly how to do the calculating.

What is maths? What is computation? Well, I think of it as a sort of four-step process. You're defining a problem. You are abstracting to a computable form: traditionally mathematical notation, now probably code.

And the reason you're doing that is because you can take many sort of apparently disparate situations and you can turn them into something for which we have hundreds of years of experience to compute an answer. So abstraction is incredibly powerful for getting answers. And so we then have something set up, might be an equation. To give an example, defining the problem might be, if we lock this room down, turn the air conditioning off, and I speak for too long, how long could we survive?

Hopefully not to test this. There might be some questions. It might be different people who survive different lengths of time, you know, depending on whether it's oxygen or food or whatever, etc. So the abstraction might be, you know, we turn this into an expression, you know, I don't know, x cubed plus whatever the expression is or an equation to solve. Computing is you're taking the question in abstract form to the answer in abstract form, you know, x equals 3.

And interpretation, you know, this was three hours, and does this make any sense? Or if it came out as minus two, then obviously it was wrong, et cetera, et cetera, et cetera. And maybe we need to do this multiple times to figure out what to do. So the big problem in sort of school-level education is we're spending essentially all the time on step three by hand.

And what we ought to be doing is using computers mostly for step three, like in the real world, and using humans much more for steps one, two, and four, which are much harder problems, much messier and much more complex. Because that's what's happening in the real world. We aren't doing simplistic problems that are very clean in the real world. That's the modern world of mathematics.
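The four steps can be sketched on the sealed-room example above; every figure in this sketch is a rough assumption for illustration, not a real physiological model.

```python
# Step 1 (define): if we seal the room, how long could we survive?
# Step 2 (abstract): time = usable oxygen / rate of consumption.
# Step 3 (compute) is one line of arithmetic; that's the cheap part.
def survival_hours(room_m3: float, people: int,
                   o2_fraction_usable: float = 0.05,
                   o2_per_person_m3_per_hr: float = 0.03) -> float:
    usable_o2 = room_m3 * o2_fraction_usable        # m^3 of usable oxygen
    consumption = people * o2_per_person_m3_per_hr  # m^3 consumed per hour
    return usable_o2 / consumption

hours = survival_hours(room_m3=2000, people=400)
# Step 4 (interpret): sanity-check; a negative or absurd value would
# mean the abstraction was wrong, and we should go around the loop again.
print(round(hours, 1))
```

The point of the example is that the value came from steps one, two, and four; the division in step three is the bit a machine does trivially.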

Now, of course, 2023 plus, actually, it's, you know, there's a bit of infringement on steps one, two, and four, right? Because the AIs are helping us define and abstract and interpret to some extent. And there's a big question as to, you know, who's doing what in the future. So sort of throwing that in. But the compute step has absolutely been done by computers almost universally for a long time by this point.

And so, just to be clear, in this process you may kind of go around it multiple times to get an answer that's good enough for what you require. And that's part of the process you need to learn: how often you do this, what you can assume, how you don't get fooled. You need real experience of real cases where there's mess all over the place. So if you remove the computer from education,

You will, in terms of mathematics, remove most of the modern context. Essentially no government in the world I've met actually understands this. You think you're educating people for the AI age because you want them to do maths for A-level or the equivalent in other countries.

The real world out there does messy problems with high amounts of computing power. That's what they do. And that's been possible. A lot of mathematicians don't like me saying this. But the fact is, maths wasn't that useful for many of these things until computers came out. It was great for physics, accountancy, some economics. It was lousy for anything biological, too much data, couldn't really compute anything useful, etc., etc.,

Computers have liberated mathematics from that problem because they have mechanized computing. It's become very cheap to compute. So most of the modern context, and the reason we're so bothered about people understanding mathematics and computation,

is enabled by computers. So if you take the computer out, you will remove almost all this context. You cannot do real problems that mimic the real world if you do not have a computer for the subject, in school or wherever you're doing education. So you end up essentially with the wrong subject. And that's sort of the core issue. It actually gets worse, because you end up learning a different process and a different computational tool set. So many of the tools you learn in school, you wouldn't use now.

They were great when you were trying to minimize the amount of computation. But that's not the optimization you're trying to achieve now, often. Actually, I had a teacher at school, a rather interesting character who was a friend of Alan Turing's, actually, and he had a saying: maths is the art of avoiding calculation. And he was right. That was exactly the point of it.

Because this step, step three, you couldn't do, and it was the hardest step. So everything was organized so that you minimized the difficulty of step three, and very sophisticated structures were set up to do this. But we've turned this on its head: step three is now incredibly cheap. Unbelievably cheap. One of the biggest turnarounds, I think, of any kind of mechanization in history.

So actually, where you're stuck is define, abstract, interpret. These are much, much more expensive steps. So if you end up removing the computer, you're training people largely to do the wrong thing. So a typical thing that happens, and a lot of things I've listed here: you use different algorithms. You try and only abstract one thing.

Because you need one computation, instead of saying, well, let's do 10 parallel abstractions and compare the results and see what happens. You know, the whole management of the automation is wrong, etc., etc. So you can look at this as a sort of cost-benefit analysis between these steps, and step three is cheap. Another thing to say here is that if you think of the ordering of the curriculum,

It's largely by sort of how hard it is to calculate things. So, a question I often ask: why is calculus not in the primary school curriculum for 10-year-olds? Right? I mean, I don't really understand. If I have this glass of water and I want to figure out how much water is in it, the idea that you might slice it into discs, which you can easily compute the volume of, and add it up and make them infinitely thin and so forth,

I don't know, I think that's amenable to many 10 year olds, or probably less than that. The idea of limiting things, can I make this smaller, smaller, smaller, bigger, bigger, bigger, is actually quite a young age type thing. So why do we leave it until, you know, late secondary? Well because the calculating's hard, integrals are hard to calculate, you need a lot of algebraic skills to do it by hand.
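The disc idea from the glass-of-water example is easy to make concrete; here is a minimal numerical sketch in Python, using a simple cylindrical "glass" so the exact answer is known (the shape and numbers are purely illustrative).

```python
import math

# Estimate a glass's volume by slicing it into thin discs and adding
# up pi * r^2 * dh for each one: integration without the algebra.
def volume_by_discs(radius_at, height: float, n_slices: int) -> float:
    dh = height / n_slices
    total = 0.0
    for i in range(n_slices):
        h = (i + 0.5) * dh            # midpoint height of this slice
        r = radius_at(h)              # radius of the glass at height h
        total += math.pi * r * r * dh
    return total

# A cylindrical glass of radius 3 and height 10; exact volume is pi*9*10.
print(volume_by_discs(lambda h: 3.0, height=10.0, n_slices=1000))
```

Making the slices thinner by raising n_slices walks straight into the idea of a limit, which is the concept the argument above suggests a 10-year-old can grasp.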

Completely the wrong way around. Let's get the concepts straight. You know, it's a magic function that does this; you can worry about the inside of it, if you're interested, at some other point. That's not the central thing you're trying to achieve for mainstream core mathematics education. Also, where is machine learning in the curriculum? It's nowhere. This is a problem, and it should be in elementary school. The idea that you can use machine learning as a mechanism for working things out is important. You don't need to know the insides of it at that stage.

Geometry is interesting because I was thinking about this. You'd think that 3D geometry was the thing you wanted to do first, because the world is 3D. The funny thing about this is that computers with mice and 2D screens have actually made people at a much earlier age be more au fait with 2D. So actually there's a funny interplay there as to which one might do first. Obviously you were unable to do 3D much before without computers. So here's a typical thing you might want to do

I think we should be doing in schools. You might have an example problem we've been trying with students: can I spot a cheat? We've divided the class in half and said we want half the students to actually toss a coin and note down the results, and we want the other half to cheat by doing something like I'm doing here and just typing in heads and tails. Then we do a little bit of analysis on it and send the results to the teacher.

So the little bit of analysis we did here showed... it appears that I failed most of the tests we put it to, and the guess would be that I cheated. And if I'm a slightly better student, I might actually try and write a program to do this, let's say, to see whether I could cheat...

show that I'm not cheating by using a proper random number generator and so forth. And so if I write myself a little program and I kind of throw this in, hopefully this format will work, and we'll see whether we managed to sort of show that I wasn't cheating. And indeed we passed all five tests in that case. So then you ask the students, you know, how did we figure this out? They're amazed that you can spot who cheated. And, you know, they start hypothesizing, you know, is it...

And sometimes, you know, they come out with, oh, yeah, maybe it's the patterns in the data. And so you can discuss this: credit card fraud detection, very simple thing. It's patterns in the data that seemed wrong. And in this case, most people type things in much too evenly, and that's how you can typically spot that they were cheating. Nothing like this sort of pattern of thinking appears in your A-level curriculum or anything like it.
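One concrete version of the kind of check behind the cheat-spotting demo is a run-length test; the threshold here is a crude illustration, not one of the five tests actually used in the demo.

```python
import math
import random

# A genuinely random sequence of n coin flips usually contains a run
# of roughly log2(n) identical results; hand-typed "random" sequences
# almost never do, because people alternate much too evenly.

def longest_run(flips: str) -> int:
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def looks_random(flips: str) -> bool:
    # Crude threshold for illustration: a run of at least log2(n) - 1.
    return longest_run(flips) >= math.log2(len(flips)) - 1

typed = "HTHTHHTHTTHTHHTTHTHTHHTHTTHTHTHH"  # over-alternating, human-like
real = "".join(random.choice("HT") for _ in range(200))
print(looks_random(typed))  # the typed sequence fails the run test
print(looks_random(real))
```

Credit card fraud detection rests on the same idea at scale: patterns in the data that look subtly wrong.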

It can't be, because it's not the way you would approach things and you don't have the computing power to actually do it. So this is a typical thing that just isn't there, and a very simple example of something I think ought to be there. So if you wanted a simple summary of the problem, it's just to walk through this. These are the mistakes that I think have been made in maths education, in outline, over the last few decades. So, you know, on the left is kind of what's been

you know, claimed or what one wants to think about and on the right is sort of what people think is the equivalent of that. So the basics of doing mathematics are not doing it by hand in my view. The basics of mathematics are following that four-step process and getting good experience of it so that you can really deploy it and not get fooled and get good results out of it.

Whether you happen to do step three by hand or not is a side issue to that, in my view, if you're looking at the maths that you want people to use, the maths that we're considering as sort of compulsory across school education anyway. Succeeding at education is not necessarily improving assessment results.

Fair is not equal to easily accessible and reproducible. One of the frustrating things is when people say, "Oh, if ten markers can get the same marks for a script, that sort of shows it's fair." Well, it doesn't, in my view, necessarily, because fair to my mind also means that what you're measuring is related to what your outcome is supposed to be, which is what the real world is supposed to be doing. In the real world, the number of times, for example, that as the CEO of a company I find there are five answers to a question and I know one of them is correct and the other four are wrong,

as in a multiple-choice question, is basically zero. That's not how decisions come to me. It's usually much fuzzier than that, and there isn't a particularly right or wrong answer. It's a judgment call across things. That's what we should be reflecting in our assessments, even if it's harder to assess. Again, the essence of something is not the same as the mechanics of a past moment. You know, as I often say, if photography was a mainstream subject, I think the first lesson would still be loading a film into a camera.

Because the mechanics would have got muddled up with what you're trying to do, which is, let's say you're interested in still photos and representing the world with still photography, it would have got muddled up. And that's sort of a typical mistake that gets made. The mechanics get burnt in when they should be moving. The essence adjusts, but the essence needs to be kind of what guides you. And I think this one's very important. How to work the machine is not how the machine works. Not the same thing.

How to drive my car is not the same as knowing how the fuel injection system works. And also learning how the fuel injection system works if you're driving an electric car isn't particularly useful either. So typically what we need people to do at the outset is learn how to work the increasingly complex machinery, work it in a way that doesn't get them fooled. But that is not the same as knowing how the insides work necessarily. And you see this everywhere. And the more automation layers you get...

the more this is different. You know, my computer, I don't really know how my computer works, right? It's got many layers. I bet nobody in the world really could know how every layer of the computer works, enough detail to reconstruct it. And that's typical of how society moves forward. Driving is not the same as building. Conceptual is not the same as by hand. Trustworthy is not the same as by hand. You need to be able to trust things that you've used machinery to work on.

So by hand is not an option. If you build a bridge, you can't do the calculations by hand. You can't do it. So you've got to find other ways to verify things. And a big problem for how the education ecosystem works is that there's this idea that you need evidence to make a change. Well, you can't make much of a change then. That's the way it works. There's a difference between what I call innovation-led evidence and evidence-led innovation. So innovation-led evidence, you try something...

you measure it and you see whether there's success, but you've got to take a jump. You can't just sort of, you know, not make a jump. Evidence-led innovation is like, oh, we need evidence before we make an innovation. Well, it's not going to work. And I was somewhat amused that some years ago I talked to somebody in the UK, actually, who'd been sent around the world by the government to gather evidence of, I think it was maths, actually, how maths was done around the world so that Britain could lead.

And this is a typical kind of mistake that occurs. And again, the other problem is we've got a risk profile difference. The government minister in charge has a certain risk profile, which basically says don't make too much change because I might get in trouble for doing that. But that's not lower risk for the population, because the population can simply end up not educated in what they need.

And so, lower risk is not necessarily no change. That's the problem. And those risk profiles are not very well aligned. So my message here is partly, you know, I've gone through this in some detail because I think these mistakes are terrible for mathematics, but we're just potentially about to repeat them on a whole slew of other subjects when AI changes the real world. So there are real consequences of this right now.

I think one of the things we're doing at the moment is needlessly rejecting many students. I think there are all sorts of people who are pretty good computational thinkers who are lousy at doing quadratic equations by hand. They don't care, they don't understand the point of it, just not their thing. And so, but you know, then they don't get their A grades in their exams and so they're rejected for lots of things that really didn't need them to do that by hand anyway.

So that's a problem, and I think it's disenfranchising people completely needlessly in some cases. I mean, there are people at the other extreme who actually might not be very good computational thinkers, who did very well at the existing maths, but I think it's a much smaller number, and they probably manage anyway because they're excited enough about it.

I think sort of in work and government, there are increasingly bad decisions happening. People in charge are not schooled in this way of thinking, and so they get misled. And I'll come on to that in some more detail because I think it's a really important thing to understand. And I think we're seeing societal dislocation where, in a sense, an expert stands up and says, you know, "I can predict that there'll be 242,000 deaths if, you know, you don't wear a mask. You do wear a mask, whatever you do," right?

where they're overclaiming for their prediction. They can't do that properly. There are lots of things they could say in essence. The person with the loudest mouth gets out there to say something. People in the population don't know how to assess this, and so a lot of mistrust is built up, because in the end the assessment isn't quite right, it wasn't exactly what was predicted, and it doesn't make much sense to the average person. There's no way by which they can evaluate this.

And so you get this kind of dislocation between how the decisions are being taken and the evaluation of those decisions. So I think these are all real consequences that we are starting to see. One that I will mention is by our own Department for Education. Early on in the pandemic, as you may know if you're in the UK, they cancelled public exams and assessed the grades.

And I wrote a blog post about it that was rather caustic: "A failure of computational thinking destroys public trust again." So the essence of what I was saying here, I mean, what they did was what they trained people to do at school. One of these problems of not using computers. So basically what they did is they had a lot of data from schools and students and everything else.

They then stripped most of it out and as I understand it to a large extent fitted a distribution for each subject in each school and then basically forced that distribution on today's cohort.

Okay, so what did they do? They made the calculating simple because they had lots of data and they made it simple into a distribution. And I named it the Procrustes-Ofqual algorithm. Procrustes after the Greek innkeeper who had to make everybody the right length for his bed and either chop them off or stretch them, right? Because it's essentially what happens.

And then, you know, all sorts of stuff was talked about, you know, the algorithm, we shouldn't use algorithms. Well, you're going to use an algorithm. It might be a human algorithm, it might be a computer, but it's an algorithm, right? So this was garbage, etc., etc. But this was pretty annoying, right? So in a sense, they were hoisted by their own petard of using failed computational thinking to address a messy data problem. I think they could have assessed the grades quite well.

And they didn't, because they simplified the thing exactly the wrong way around, which is what we're training everyone to do. So there were real consequences of this. And I mean, just to cap it all, right, the place where they couldn't apply this was in side subjects like ancient Greek, where they didn't have enough data. So they just took teacher assessments, right? And those subjects are mostly at private schools. So they literally couldn't have done more to annoy and infuriate everyone than what they did.
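A toy sketch of the kind of distribution-forcing being described might look like this; the function name, grade shares, and cohort are all hypothetical, and this is a deliberate simplification of whatever the actual algorithm did:

```python
def force_distribution(ranked_students, historical_shares, grades):
    """Assign grades purely from a school's historical grade shares,
    ignoring each student's own record: a toy version of
    'fit a distribution, then impose it on this year's cohort'."""
    n = len(ranked_students)
    results, start = {}, 0
    for grade, share in zip(grades, historical_shares):
        quota = round(share * n)  # Procrustes: the bed is fixed in advance
        for s in ranked_students[start:start + quota]:
            results[s] = grade
        start += quota
    for s in ranked_students[start:]:  # any rounding remainder gets the lowest grade
        results[s] = grades[-1]
    return results

# Hypothetical cohort, ranked by the school, with made-up historical shares.
cohort = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"]
print(force_distribution(cohort, [0.2, 0.5, 0.3], ["A", "B", "C"]))
```

Notice that nothing about any individual student's work enters the calculation; only their rank and the historical shape of the school's results do.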

But this is typical. This is typical across the sort of way decisions are being taken, because there isn't enough understanding. And this is a failure of the education system. So just to dig into this a bit further, other failures that we're seeing: simplistic quantification of complex issues. So just to describe this, you know, a typical scenario we have now is, and I kind of argue against computation here for a moment, that basically, you know, there is a primary fact somebody wants to model,

and they may do a good job or a bad job at modeling. Let's say for argument's sake they do a decent job. But the problem is they want to describe it simply, so they end up with a metric. They have a metric that just describes this, you know: the hospital performance was three, right? But a hospital is a complicated thing.

And so you end up with these sort of quantifiable metrics, but you ignore other effects because there's a sort of tremendous power given to the number. It has a huge marketing panache. And so you end up often with a worse decision than if you just qualitatively thought about it. And so the more we apply computation to more complex, messy problems without sufficient education and also the technology will come onto that, the worse this effect can get.
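One way to see how optimizing a metric can diverge from the real goal is a toy model like the following; both functions and all the coefficients are invented purely for illustration:

```python
def true_quality(depth, test_prep):
    """Hypothetical 'real' outcome: mostly driven by depth of work."""
    return 2.0 * depth + 0.2 * test_prep

def metric(depth, test_prep):
    """The measured proxy, which happens to reward test prep far more
    heavily than depth -- a mis-specified stand-in for the real goal."""
    return 0.5 * depth + 2.0 * test_prep

# One unit of effort, split (depth, test_prep) two different ways.
balanced = (0.8, 0.2)
gamed = (0.1, 0.9)
for name, (d, p) in [("balanced", balanced), ("gamed", gamed)]:
    print(name, "metric:", metric(d, p), "true quality:", true_quality(d, p))
```

Whoever optimizes the metric will choose the "gamed" allocation, which scores higher on the proxy while the actual outcome it was meant to represent gets worse.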

And so in the end, you end up with a sort of metric that doesn't represent what you wanted to achieve. But you end up optimizing the metric. I think this happens quite a lot in economics. I think it happens a lot in government, administration of things, wherever there's a complex effect. And so you end up with a rather sort of too small a bucket of effects. It was all a bit hard to model the more complex things, so we just didn't do it. We just modeled the simpler thing. We used that. A loss of trust, et cetera, et cetera. Lockdowns, there was a lot...

wrong with how some of that was done in respect to this. Although there was some good characterization by people like Chris Whitty who were very clear about the different effects they were trying to balance in doing this. So it wasn't all bad at all. And etc., etc. So I think beware of quantification beyond its ability to judge.

and beware of a number that seems to have too much power for its ability to represent the real situation. And we see this all over society, from exam grades, as we were just discussing, to some of these other metrics. And I think another sort of related problem is this sort of idea that you don't balance enough between the primary and secondary and tertiary effects, because you model the primary effect

And you kind of obliterate in your thinking the secondary and tertiary effects, which often can overwhelm the primary effect if you're not careful. So it's difficult, and often you don't bother to model all of those things, and so this is again sort of part of that same problem. Another part of this is claiming you can compute something when basically you can't. It's very odd what people... Remember some debate before Brexit, right?

in some Question Time or something, and members of the public were hammering the table saying, you must be able to tell us if house prices are going to go up or down if we Brexit, right? You must be lying if you're not being honest about this, etc., etc. Well, it's basically uncomputable. It's too complicated; there are too many error bars. But people have got so used to the idea that you can predict with computation, because they're not educated in the idea that actually it wasn't like that,

that they forget that, I mean, you know, they wouldn't have thought that 100 years ago. They just thought, we can't predict it. So there's a problem: there's no education in what, in practice, you can apply this to and where it works. And I think that's an important thing we need to get straight. So my question: where is all this stuff in the curriculum? It isn't there. Times a thousand. There are all these pieces, you know; cause versus correlation isn't really there. You know, all sorts of things about risk,

all sorts of things about ways that data analysis goes wrong. It's not there. So we need far broader outcomes than we have. And one of the things we were working on was trying to work out a good list of sort of required outcomes. So I won't go through these in great detail, but, you know, these span...

a lot of stuff from confidence to tackle new problems, which I think have definite things you can do, to generalizing to actually defining questions and things you do, to the actual concepts and tools that you want to be able to use.

You need to have as broad an outcome tree as that, otherwise you get into trouble in terms of what you're trying to achieve. Not ones that are just accessible with a simple number. Another saying I've sort of adopted is, you know, the old adage, a bad workman blames his tools, should now be adjusted, possibly to be more politically correct: a bad work person uses the wrong tools for the job.

One of the problems now is not that you've got three tools and you have those three tools, you just need to learn to work them well. The problem is you've got 100, 1,000, 10,000 tools. Which tools do you actually need for the job? That's actually typically the problem.

And so we need to educate people. In fact, one of those outcomes that I showed you is exactly that. You even get this problem in DIY. You know, it used to be like a screwdriver and a hammer. Now there are a zillion different tools you can use. And if you're an amateur DIYer, for example, it's actually quite difficult to know which to use. Much less of a problem than in computation, but the same kind of effect. So...

That's sort of the thesis as to why there's a big problem and we need to change it. I want to give a quick briefing on where some of this computational tech actually is at this point and some of the things we've been trying to do

to make technology that interfaces well with it. Some people say, you know, in a sense, if we make the ultimate technology, why do you need people to be well educated in it? Because it should just help them do the right thing. And I agree with that, except that we want to step up to the next level. If you've got the technology to help, you want to step up, and this is what's happened in previous industrial revolutions. The humans hopefully can go up to the next level. So, one of the things we've been working on is trying to build a unified language for all of computation.

So one way to represent everything you might want to talk about in computation, so it's systematic. And, you know, again, back to the electricity analogy, I think we're trying to do something similar to what Faraday and Maxwell did extremely successfully with electricity. There are a lot of electrical effects. They managed to build out many new effects, but also harmonize so that you really could see them as one effect.

or one set of effects very well connected. And that gave tremendous power to sort of moving the use of electricity forward in all sorts of ways. And so when you look at Wolfram Language, it's sort of a programming language, you know, as in instruct the computer what to do, but it's also a way to represent everything you might want to in computation and also a way to communicate as humans. At Wolfram Research, we often, it's quite funny, people are very versed in this,

we're discussing some problem and they'll say, "Do you mean this?" And they just write the piece of code to represent what they're talking about. It's just easier to write the code than it is to describe it in English. So I think that's some mark of success in terms of representing things.

So everything we've been trying to do is the same sort of symbolic expression. So maths, data, you notice the thing on the right here is all looking like a similar kind of thing. So whether it's images, documents like I'm using here, these are all represented in the same symbolic representation. And this unification gives tremendous power to what one can actually do. And the...

One thing that's exciting here is to think about spaces that we haven't built this into that many other people might think of building this into. So I bring up macroeconomics, for example, which are not, I think, as I understand, particularly well unified in representation. You know, even things like banks...

where the regulators don't have a good representation of them computationally. And many other things one might want to represent. There's a good example out in the US where somebody's taken chemical and biological experiments and represented them with our language and extended our language to do this. Now you can go on the cloud and represent an experiment which then actually gets done by robots and the results pumped out. But you can only do that if you have a representation of the thing to describe what you're trying to do.

And there are a lot of ways you want to query things, like interfaces, so that you can actually get to this. It really matters what the interface is. It really matters how you connect with the data, or else you as a human can't do well. And it's amazing to me how many, I don't know, board meetings where they're discussing things with data

I mean, this happened, I think, a lot in government, right? Where it's like, ah, well, here's a PDF printout of the possible scenarios. Well, you shouldn't have that. You should have something where you can argue about, you know, where are the sliders positioned while we're doing the live computation to see whether you actually, you know, can agree about what the effects are and what's important.

So an example from a few years ago was this paper I dug out, which is one of our notebooks. The author, an economic lawyer I should say, was in some hearing in the state of Texas about whether they should or shouldn't buy insurance for hurricanes.

And you can see this looks like a normal paper that you would kind of see in the normal way. But a bit further down, it's actually got a model or sets of models that you can actually start playing with. And so in this hearing, they were actually, you know, like pulling sliders around and agreeing or disagreeing about where the slider should be to try and make some of these decisions.

So these are typical sort of interfaces you need to computation to actually use it in a way that's usable. And this sort of notebook assistant thing that we've been trying to do is to bring together these different things, the typical notebook mixture of code and other things you might do to work things out. So a key question that people often ask me is, given that we now have AI linguistics, do we need to code, as in write actual code as humans?

I actually think the answer is yes, but I think it'll be much more about editing code than it will be about writing fresh code. But the idea that you need abstract code

as an important way to be precise and to represent things, I think is going to stay for a long time. I don't think that we can just talk in English. That was a big thing we gained from mathematics, by being precise and being able to abstract out. We don't want to throw that away because we can now do that in more cases. So I think there's a good interplay there. And in fact, I think we're in a sort of era where potentially we can have sort of

citizen programming, because the access to the programming is much greater. Many more people can partake, because they can edit bits of code rather than having to write it from scratch. The other thing that's critical, really critical, is automation. You've absolutely got to have good automation to achieve trust. This is a big topic, but the example I think I gave earlier, in a session I was talking at, is that when I was a child there were things called Instamatic cameras,

where basically every photo you took was bad because they did nothing. You pointed them and you clicked and they didn't focus and they had really lousy low-resolution film as well. Now you do the same thing in essence with an iPhone, but essentially every picture is really good technically.

And that's because you've got layers of automation that do a fantastic job. But on the surface, it seems like you're doing the same thing. In computation, it's quite hard to pick between those because it's quite hard to see what's happening. And that's something that we've been working very hard on trying to do because it's very hard to know whether you're being fooled. So the technology is important in building trust like that. So briefly, how do we deliver? So I've been talking about the subject, but very briefly, just to come back to the pedagogy, how do we deliver this stuff?

And does AI change that? And I think the answer is yes. You need to think about many facets. This is a rather complex diagram of sort of the different things you need to think about in thinking outcomes, computation outcomes, this process we're talking about, different grades that you might do this in, so to speak, as you're building up these different outcomes across different levels in the educational system.

And of course there are different sort of contexts and things and so forth. You know, is this starting from economics or is it starting from engineering or whatever? So that's sort of picture of it. And these outcomes I showed, we must be explicit about actual thinking we need. You know, abstraction of thinking techniques, intertwining creativity and process. These are pretty critical things. And, you know...

This thing of, in different disciplines, there are very few disciplines where it's sort of pure creativity and zero process. And computation is somewhere in between. You want some process to back up creativity. You want the sort of interplay. And the most success happens when you have that interplay. And that's true in many, many disciplines. And we really need to think explicitly about that. You know, what's for the AI, what's for the human?

These are open questions and we need to start to build this picture so that we're clear what we're trying to educate people for. This is an early cut. We've been trying to build a computational curriculum that assumes computers exist, and that's computerbasedmath.org and so forth.

And one of the things we've been doing is sort of, you know, trying to get some of these modules together. This is an example of a module to try and get people, you know, who are interested in cycling, you know, how fast could I cycle the Tour de France or the equivalent? And this is a module that shows a teacher...

This is the teacher version of the module, so it has teacher briefing notes as well as sort of the student piece. But we're trying to get them to understand, you know, at the outset, what is a model? Can I play with a model and understand something about the power needed to cycle at a certain speed, and how is this affected, and do I believe this model?
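A minimal version of such a power-to-cycle model might look like this; the coefficients are rough textbook-style guesses of my own, not the module's actual values:

```python
def cycling_power(speed, mass=75.0, crr=0.004, cda=0.3, rho=1.225, g=9.81):
    """Rough power (watts) needed to hold `speed` (m/s) on flat ground:
    rolling resistance plus aerodynamic drag. All coefficients here are
    illustrative assumptions (rider+bike mass in kg, rolling resistance
    coefficient, drag area in m^2, air density in kg/m^3)."""
    rolling = crr * mass * g * speed       # rolling resistance grows linearly
    drag = 0.5 * rho * cda * speed ** 3    # air resistance grows with v cubed
    return rolling + drag

# Play with the model: why does doubling your speed cost far more than
# double the power?
for v_kmh in (20, 30, 40):
    v = v_kmh / 3.6  # convert km/h to m/s
    print(f"{v_kmh} km/h -> about {cycling_power(v):.0f} W")
```

Because the drag term goes with the cube of the speed, a student can discover from the model, rather than from a derivation, why Tour de France speeds demand so much power.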

And then much later we're trying to get them to see, you know, how does air resistance fit with this? You know, just throw a complex thing at them. There's an equation for air resistance; you don't need to know how to derive it at this stage necessarily. Et cetera, et cetera. You know, can you load in a map of your local area and try to figure out how much energy it would take to go around it, and so forth. So we're trying to build things that sort of start from something like that and move forward

alongside projects and many other things. So you think about what is it you need to deliver for a new subject, and it's actually quite a lot of different things. The way this is typically built assumes a lot of knowledge on existing subjects, which doesn't work. The exciting new kid on the block is AI tutoring, and I showed that module we just talked about, but let me just show you this sort of video of a bit of tutoring happening on...

You know, what will the population of Doha be in 2100? And a bit of tutoring sort of helping through thinking about that open-ended problem. And it's amazing how, with the right setup, you can get the AI to do a pretty good job at helping even in open-ended problems. And so this is the kind of thing; I think there's a lot more to be done on it, but we can deliver this, and it's much more promising than it was a few years ago. And you need to be very clear on the difference between symbolic and statistical AIs.

Our Wolfram|Alpha is what I call symbolic AI. Generative AI is statistical.

The aim of achieving AI is, you know, there are lots of tools that might achieve this. We need to put them together in the best way to produce the results. And so, for example, for doing maths education, you really need both of these to do a good job. I don't think the generative AI by itself is particularly promising for the most part. It can fake it quite well, but actually you can do much better by combining these approaches.

And that's something we're trying to do with our tutor project to try and actually, you know, kind of, you know, you can load in a curriculum and try to actually get it tutored and so forth by building bits of computation around the generative AI and so forth. And as Emma kindly mentioned, I wrote a book called The Maths Fix. And the point of this was to try and make a proposal for, you know, what's the problem

Kind of, what's the fix I'm proposing and what are some of the political and other issues for achieving change? Try and actually mark this all out. But it opens up these big questions, you know, which we need to answer. What is really the job of education? People ask me this and I say, well, my simple answer is to enrich life. Not just in riches and money, but in how you value yourself and what you do. I'm not sure we're always there. I think we've zoomed a bit far away from that.

And I think a key way in which we want to do that is sort of to accelerate experience, in a sense. And you need real experience, messy experience, to be able to do that. I see this missing increasingly in education. I mean, you know, one of the experiences I have, for example, is if you annoy a teacher sufficiently, they get angry, right? It's quite a useful lesson to learn. But they're not really allowed to do that anymore.

because they're kind of told you must behave in a very fixed... I don't mean that they should go apoplectic and crazy, but actually it is important that you learn things like that, right? But a lot of this is sort of being... So I'm talking both about...

things directly to the curriculum, but things outside that. I learned a lot of things at school that were nothing to do with the curriculum at all about how people functioned and things happened. But some of that is being stripped away because we're trying to reduce risk in a slightly odd way. And I don't think you accelerate experience of what you actually meet in the real world. And we see this with some of our... Sometimes when we get younger recruits, so to speak, we see some of these effects. Actually, they're not really sure how to handle...

some of the things that happen in the real world in those respects. So don't cleanse the reality, right? Messy problems, machinery, human emotions. You know, it's funny, this sort of educational thing that's become more formulaic at a time when the formulaic stuff can be handled by machines better. So we need it less formulaic in a way. And I think we need to figure out how to do that with that. So

How do we make change here? I think employers, there is a lot that can be done, particularly with non-technical, so to speak, people who are not technically trained employees. And you could do a lot with fairly simple interventions so that people understood there's a core thing. Don't just teach them vocational, purely vocational things. You need a core that's been missing at school for how you take decisions in the world.

Universities, I think there's an interesting discussion about what is the core computational thinking subject that's powering everything else.

And I know in LSE you have an interesting setup which is slightly different to some universities for that. And there's also sort of what's the computational X for all subjects? There's history and there's computational history. There's economics, which has always been somewhat computational. There's other subjects that were not traditionally computational.

And so there's also an implementation rethink, etc. One thing I think is really important is that many of the things, if you put all the mess in, the conceptual is the practical, in a way. I mean, it's kind of like, it's high concept to work out how to operate in the real world today. Don't strip that away. It's not lower intellectual ability to figure out how to work in the real world in most cases today.

And people sometimes think that that's low concept to leave all the mess in there and to see the actual practical ways to do things. I don't think that's typically true. And it's kind of important to leave it in. And policy makers. I mean, we've got to fix this stuck ecosystem of education for subject change. If you want to uptick by 2% what somebody gets in a maths A-level, you can get a lot of funding to get that done. If you think you've got the perfect way to pedagogically improve their score in an A-level maths paper...

Everything's in your favor to try and do that. If you say, well, actually, we've got the wrong subject there, everything is completely against you. So we've got a system in all countries, including this one, where you really can't make a subject change. Because, you know, universities want to admit people with a different subject, typically. That's something that I think could be changed, actually, with a group of universities getting together and saying there are alternatives and we would accept them.

Schools say, "Oh my gosh, we ought to make a change if universities want to admit people like this." The government says, "Oh my God, this is all too risky," etc. So you literally can't make a change at all easily. Only side subjects make rapid change. And you can see this. For example, religious studies in the UK has rapidly gone through many changes.

It's a very UK-centric thing, but that's happened because it's considered much more of a side subject. The maths, which is so important, you can't change. So that's a problem we've got to fix in the incentive structure. And if you want to look at a comparison, the fixing, started by the US, of how startups could happen is a good example. Startups were really hard to do in, like, the 60s; all the incentives were against you. Really hard to do a startup. And then, you know,

the US basically led by saying, we really want startups, right? So we're going to unlock all of the ecosystem that's possible. Now every country is falling over itself to say, come and do a startup here, because the risk profile has been changed. So it's a risk profile problem, I think. And the funding is all siloed between materials and exams, all of which should be put together. So for policymakers, there is a lot to be done. And by the way, the UK has a structure that would allow this to be done.

So one could have technically an alternative computer-based maths A level, for example,

But it needs the incentive structure; it needs universities to admit people on this basis, and therefore schools to think it's a good idea to teach it, and then obviously one could go and convince people if there were enough in favour of that. So there is a possibility, and the UK is actually more flexible in being able to make that sort of change than many other countries, where it is run by the government in that sense. So anyway, it would be nice to see some universities lead on this; that would be great. To round out...

If we do all of this, what do we really achieve? I think we'll end up with first-rate human problem solvers, not third-rate human computers. And work up a level from the automation. Don't try and compete with it. You'll lose. Competing with what machines do best is a losing proposition. And I think we should really rapidly understand that we need to change that.

I purposely call it computational literacy at the lower end because I want to align it with literacy for all from the, I think, late 1700s when that really got going. At that time, a lot of people thought you didn't need everyone to be literate. It wasn't necessary.

You know, it's like some priests and aristocrats could tell you what you need to think and that would all be fine. And people were probably too stupid to be literate, etc, etc. Well, I think that's been, you know, one of the biggest things that's happened in education is that was proved dramatically wrong.

And countries like the UK that were quite early with that were very well prepared for the Industrial Revolution relative to others. And I think the same is true now of computational literacy. So we can leapfrog others, organizations and countries, if one really thinks about this. And this thing about sort of enfranchisement across society. I'm worried about this problem that we are losing a lot of people from the decision-taking process and they're feeling disenfranchised.

and we end up with a sort of divide between elites and non-elites on a new basis, which I think is pretty unhealthy. We need a new computational knowledge economy, which I think will rebuild trust. Thank you very much.

Thank you, Conrad. Come and have a seat. We do have some time for questions. Thank you so much for your presentation. So we'll open to the floor for questions. I do have the iPad for those online who would like to ask questions. So if you're in the room, do raise your hand,

but do wait for the microphone, and hold your horses, so that we can hear you. And if you're online, type your question in the Q&A box and I will try to capture some online questions if they appear as well. Do remember to state your name and affiliation, and keep yourself to one short question, because we have about 20 minutes and we'd like to get as many in as possible. So we have, in the green, first of all, just there. Thank you. Can I take your question? Just behind.

Thank you for that. You ran through numerous different stakeholders, but I noticed a critical one wasn't mentioned: the lecturers. And

I work for some institutes of higher learning and working on a startup right now. And I recognize, or I feel, that it's a two-sided marketplace, in fact, where only one side is viewed as the customer, the one that needs kind of growing existentially. So I wonder if you could speak to that.

that stakeholder group. What can they do? What should they be doing? How should they perhaps be interacted with by their institutions? What could policymakers do? I mean, there's a whole range of things there. Thank you. Thanks very much. Look, I think there's more flexibility in higher education than there is in most school education, because it's less prescribed and it depends on the institution. But I think there's a lot that can be done by lecturers, you know,

introducing as much real stuff with computation and computers as possible, as allowed by the way the whole thing is examined. Now, it depends a lot; I'm speaking broadly, having spoken to many universities across the world, and in some places there's a lot of freedom to innovate and just do a new curriculum.

And in some places, it's rather locked down. So I think it will depend a little bit on how much power you have to do new things, in your case, for example. But I'm afraid I see this in a rather mechanistic way across the whole ecosystem, which is that, in the end, people are driven by incentives. And I don't mean money necessarily; I mean the incentives of students, lecturers, and institutions, of what they're trying to achieve.

And so I think the thing to look at is: what is the incentive to make the thing happen, and is it aligned with the outcome you actually want? One wants to look at that for lecturers in general. Do they get the recognition for innovation in this area that will, hopefully, produce these good outcomes? So I suppose that would be my comment. I think that matches what you were asking. Yes.

Thank you. Next question: Bernard at the front. Bernard von Stengel, mathematician at the LSE. I think the problem is a little bit more basic, going back to The Two Cultures by C.P. Snow: there is a lack of numeracy. The fact that we had this disaster with trying to fit a curve based on ranking and so on is because people are still proud of not knowing mathematics.

And you find people who cannot give you change without a pocket calculator, and so on. I mean, this is pretty bad. And I think we should still address these questions, because people are still proud of not knowing mathematics because they feel left behind at an early age. They think that they're stupid, just because we don't teach them that it's a human activity like reading. Look, I think it's a funny intertwined problem.

It's a defence mechanism, mostly, that people, you know, sort of say they're bad at maths. Because they are told it's extremely important, and if they don't think that they're particularly good at it, then they kind of want to use a defence mechanism. But then, on the other hand, some of the mathematics we're telling them they have to be good at isn't the right mathematics, in my view. And you've got to strip it down into quite...

When you talk about numeracy, for example: how many times have you added 7/8 and, you know, 2/16? Well, that's an easy one. But 7/8 and 2/15 together, formally? I've never really done that, the little process for adding fractions. I mean, it's something you learn, but basically it's not something you do formally very much. So don't hang people's

esteem on whether they can do that kind of thing. But on the other hand, understanding proportions and actually seeing what they would look like seems very important to me as a practical matter. So I think we've got to be much more careful about what is important. And that's part of the reason those outcomes are divided between concepts and tools: there's a difference between the idea of equations and the quadratic formula.

And we need to be much clearer about that. Don't tell people they need to learn the quadratic formula as their main thing, when what they really need to understand is when they might apply equations. And so I think that has all played into this issue. And again, the whole marketing of maths: you get ministers standing up saying long division is absolutely critical, that as long as we can do long division at the end of primary, or whatever they say, that's very, very important.

That's pretty lousy marketing for maths. It's like, great, I really wanted to do long division. There are much better pieces of marketing we could do, which I think would make people more positive about saying that mathematics was what they wanted. I mean, I'm sad to say that I think the brand of mathematics, the word...

unfortunately has some bad connotations and there's a big question as to whether the word is the right word. I do have a question online from Jan Mertens.

So it appears that AI, ChatGPT, is also very able to define questions, abstract, and interpret results already. What makes you think that these stages of your process are not also going to be replaced? I think to some extent they will be. And the question will be: can humans manage this, so as to be in charge?

I mean, in a sense, we're all turning into managers. It's just not necessarily of people.

I mean, we're in a sense using AIs as workers and we're managers. And the question is how to manage that process. So, you know, in a sense, I mean, I run that process myself for doing things. I also manage many other people to effectively run that sort of process. Actually, in the future, I'll be managing AIs too. So, yeah, there's going to be an interesting interplay between human and AI to do that. And, you know, the question is, can humans add value to that? I actually think they will be able to. I think it's still...

It's extraordinary to look at generative AI, and there's much happening in other forms of AI as well, including things we're doing. I still think that well-educated humans have value to add, and we'll see how that plays out. Interesting. So, at the front. Thank you for the lecture. It was very interesting. I'm a sociologist. My definition of education is the transmission of culture from one generation to the next.

My question is, who does all the thinking in your world? Who does all the thinking in my world? Yes. We've got, if you want to go back 2,000 years, you've got Aristotle, Plato, blah, blah, blah, moving forward, people like Locke, Rousseau, who've contributed to our knowledge and understanding of the world. In the world of computation, and I know nothing about it, who does the thinking?

Well, a lot of people do the thinking. It's a question of... I mean, there are different levels of thinking. There's thinking to move the world of computation itself forward. I mean, there are the specialists who are, in a sense... What about society? But... I mean... In a sense, the issue we've got is that we need society to be... When I say the computational knowledge economy as opposed to the knowledge economy, we've moved from a sort of economy in which most people were...

doing things with our hands as the driving force, and probably doing less explicit thinking, to a society in which knowledge was the driving thing, the sort of Drucker definition, to one in which we are cohabiting that space to some extent with machines. But I'm arguing that we need to be educated to a higher level of thinking, if anything. And, I mean, it depends how you define society. Society to me is...

There's a kind of grouping of people with, hopefully, some sort of common process and forward progress that you can bring to bear. I think we need a greater range of thinking than is happening at the moment. One of the problems at the moment, and this is sort of the point I was making at the end, is that we need a broader range of thinking imbued in education. And it seems to have got narrower.

And the question is how to reverse that; you know, assessments have a lot to do with it. Because I think what you're saying, perhaps, is that the people you cite from history were very broad thinkers. They were not siloed at all. And I think that's actually what we require more of at this point. We've got machinery, often, to do the very specific pieces, not always, but sometimes, and we need broader thinking. But that's not really what's being

sort of optimised for in traditional education. I'm just going to take one from our online audience. So this is from Doug Morrison, and this is kind of an interesting one. He says his son is a maths teacher, so there's the question of how we actually bring this into schools in terms of our teaching population. But the question is: if a school does not adapt to the world as it is, what can individual pupils do to prepare for the real world? Yeah, this is a really hard one.

Look, I mean, in the end, you know, I suppose what you can do is to get through the formal stuff you're supposed to do as quickly as possible so you can spend more time doing real stuff and hopefully have the motivation left to do that. And, you know, hopefully that's not how it has to be, but at the moment that's sort of how it is.

So, I mean, I would encourage people to, I mean, we've got things that they can do online and look at. I think there are things other people, you know, have provided. I think if you want to think about computation particularly, I think think of each thing that you interact with in life and try and see what the computational angle might be on it.

And it's not easy to do that. And if you've got parents or teachers who are helpful to that, that makes it much more amenable. Or AIs, perhaps, as we move forward. But this is tricky because the trouble is that as the push for better and better grades goes on,

more and more people's time at every level is taken up with getting better grades, which almost drowns out some of these other effects. I mean, hopefully the review that's coming in the UK will slightly improve that. But I think that is... And there's a problem at every level. I mean, from the top schools down to the worst-performing schools, actually, there's a very similar problem in that respect. So I think try and carve out as much time as you can...

and do things that you find interesting and try and see if you can engage them with computational thinking. And hopefully we can be a little bit of help to that in some of the resources and things we have and so can some other people. I wish I had a better answer for you. Over there.

Thank you for your talk. My name's Amar. I'm a mathematics graduate from Cambridge University. I should thank you because I wrote most of the software for my PhD using your technology. Okay, good. Nice to hear that. So my experience at high school for mathematics was very much like you stated, full of computation. But at undergraduate, it was actually the exact converse. There was hardly any computation. It was all definitions, abstractions.

I don't use any of the theory that I learned, but I do think I learned how to think computationally. And I would argue most of the individuals who created the software we're talking about in LLMs and all this buzz that's been created did study things like mathematics and theoretical physics at university. So I guess my question is,

what can the schooling system learn from what universities are doing? Because the jump from high school to university for mathematics was gigantic; it really put a lot of people off. Is that a way we can encourage high schools to shift towards this computational thinking? This is a complex question. One of the problems at school is that it's not an elective subject. You chose to study maths at Cambridge, and you were good at maths to be able to do that

to start with. And then at Cambridge, in a sense, the mathematics served that well, from what you're saying. And I did some of it as well. So one of the problems is we've got compulsion to do this subject, which we're claiming is for everyone, essentially, at school level. And so I think

you somehow need to tie that to what most people need a reasonable fraction of the time for it to be justifiable. Now, the question then is the interplay between those people, everyone like that, and people who really are interested in mathematics, physics, whatever it is, in its own right, at a level to go to Cambridge or LSE or wherever it is.

And my personal belief, although I have no proof of this, is that we are losing a large number of people, some of whom would in fact go on to be interested in that, but who aren't going to start from being told to look under the hood first. So I think what we're doing is giving them a rather banal, mechanistic look under the hood of the maths that we're both describing from school.

And we're hoping that, from those people, some will happen to be excited about it, like you were, and I was to some extent, and then, with luck, when they arrive at university, what they got excited about is encouraged. I mean, I think we've just got to be clear: mathematics in that sense is not really a mass subject, for everyone to invent new bits of mathematics and do PhDs in mathematics.

And some people are very interested in that abstract thought. Some people just want it as a practical matter, and we're essentially doing neither most of the time at school. So I would argue actually for, in a sense, school mathematics to be either end of the spectrum but not bang in the middle. I mean, it's either practical, which I think would be more conceptual, as it turns out, if you were honest about it,

or I'm all for there to be interesting abstract things in it. And I think there should be more of that; there probably was more in the past, in Further Maths A-level in the UK, for example, than there is now. So we've kind of gone for, you know, it's got to be practical for people to use, but fallen between those two stools in an unhelpful way. So I suppose what can be learned is...

It's a different issue, but you need to allow space for both of those groups and you may need two different subjects, but I think you can do a lot with one subject that's better sort of specified, but not just falling in between those two stools. That's what I observe anyway. We were kind of talking about this earlier. I think

great mathematicians are probably great mathematicians despite their mathematical education rather than because of it. So I think there are some interesting questions we have to ask ourselves, particularly about how maths is taught at a younger age. I actually think there are some really exciting things going on in primary maths, and then when you get to secondary school it becomes very driven by the GCSE curricula and so on. Do we have any other questions at the front? Catherine.

- Hi, I'm Catherine Xiang, a faculty member here at LSE, but my research area and interest is applied linguistics and intercultural communication. So I really enjoyed your talk, and the discussion of how we can look at maths and what is required

in the age of AI. I just wondered, and am curious, about your thoughts a little bit beyond the subject matter. Particularly when you talk about using maths reform as a template, how would that apply, or what would we need to adjust and adapt, for another subject? For example, language education, which is quite different but also shares similar challenges.

So one of the main points is you've got to fix this ecosystem problem, so that when things change in the outside world, you're ready to change the subject matter in the curriculum. Now, I know less about language education and what's happening in its outside world, but obviously there are going to be big changes to what you're trying to achieve. There's cultural understanding you're trying to achieve, but there's also straight translation, and obviously some of that is going to get more mechanised.

But we've got to decide what the real world is, what representation of the real world we want, what it is we're trying to achieve and to educate people for, and then manage to get that put into the subject matter. And right now the process for this is crazy, and every possible incentive is against you.

I mean, just down to everything. Like, how do you write a curriculum for a school? You write a treatise of things people should learn; then somebody figures out what the related books should be; and then there are the exams, which are really the thing that drives what people actually learn in the end, because you're learning to the exam. So these are all rather disconnected steps in a traditional curriculum process, which in itself is very slow. I mean, I sort of imagine that, you know,

we still have sort of smoke-filled rooms, except they're probably not smoke-filled anymore, full of experts deciding what's good nourishment for education in all sorts of different areas. Again, one of the problems is you need people from the outside world. I mean, one of the problems with maths sometimes is you get maths educators deciding what needs to be taught. Well, sorry, but those people don't typically know what's actually happening at the moment in the real world. And if you're claiming maths is a general-purpose subject,

as you are claiming languages are, which you're trying to teach to a lot of people, who is it that's using it? Well, it's physicists, engineers, accountants, everybody. So you had better have those people represented in some way as well. And for languages, I think it's the same. So I guess what I'm arguing for is that you need a process for rapid deployment of subject matter back into education, and you need the incentives that drive that to work.

And that is completely misaligned in essentially every country. And it shows up. It has shown up in maths, and it will show up for languages, as presumably AIs are going to do a lot of the things that we were being educated for in languages, although not all. And we need to change what that is.

I think we're kind of running out of time. In fact, we have run out of time, I'm afraid. So I do want to thank everybody, and I'm sorry to those of you whose questions we weren't able to take. What an amazing presentation. Thank you so much. And thank you to everybody for joining today, particularly those involved in producing the event. So, once again, can you please join me in thanking our speaker? Thank you very much.

So in closing, if you've enjoyed tonight's event, there's lots more coming up at the LSE Festival, taking place from the 16th to the 21st of June. One highlight for your calendar is Google DeepMind's Chief Operating Officer, Lila Ibrahim, on Friday the 20th of June, who will share her vision for the future and discuss the transformative potential of AI in the years ahead. Thank you so much, everybody. Thank you.

Thank you for listening. You can subscribe to the LSE Events podcast on your favourite podcast app and help other listeners discover us by leaving a review. Visit lse.ac.uk forward slash events to find out what's on next. We hope you join us at another LSE event soon.