
338: From Extraction to Understanding: Martin Goodson, CEO of Evolution AI, on Why AGI Is The Wrong Goal

2025/6/2

AI and the Future of Work

People
Dan Turchin
Martin Goodson
Topics
Martin Goodson: I don't think language is the primary aspect of intelligence. After becoming a father, I noticed that children display intelligent behavior before they acquire language. For example, before my daughter could speak, she mimed hitting her head to show me what had happened to her. Have we placed too much emphasis on the role of language in intelligence? Today's machines are excellent at language but fall short in other areas. We should rethink the goal of artificial intelligence: not merely imitating human intelligence, but attending to broader forms of intelligence.

Dan Turchin: I agree with Martin. We should pursue the best possible partnership between carbon-based life and silicon-based machines. Machines excel at the tedious tasks humans are poor at, such as determining the size of a loan based on millions of pages of documents. We should think about how machines can better complement humans, rather than merely chasing the Turing test and fooling people into believing they are interacting with another human.

Transcript

So why do we think that language is this core thing that's primary? Why do we think that language is a primary aspect of intelligence? Clearly it's an aspect of intelligence, but there's a humanity that comes before language. So that is in no way an answer to your question, but it's something that I do think about. So maybe we are getting this wrong, because these machines that we've built

seem to be really great at language, but there's lots of other stuff that they're not very good at. Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleReign, the AI platform for IT and HR employee service.

Thanks to you, our loyal listeners, for helping our community grow. As I hope you know by now, we started a newsletter a few months back and it's become quite popular. We share fun facts and some additional clips that don't always make it into the main episode. We will share a link to subscribe to it in the show notes. Please join us there. If you like what we do, please subscribe to our channel.

please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment,

As you know, I just may share it in an upcoming episode, like this one from May in Nashville, Tennessee, who's in customer support for a tech company and listens while on her break. May's favorite episode is that great one with Daphne Jones, author of Win When They Say You Won't, from back in season three about overcoming imposter syndrome. It's a great listen. We'll link to that in the show notes.

We learn from AI thought leaders weekly on the show. The added bonus, you get one AI fun fact each week.

For today, Samuel Shen and Tom Westbrook write in Reuters Online about how Chinese financial institutions are accelerating their adoption of AI using DeepSeek, the China-built foundation model competing with ChatGPT and others. Over 20 financial firms, including SinoLink Securities and China Universal Asset Management, have followed suit, embedding AI into research, risk management, and investment strategies.

The shift reflects China's broader push for AI-driven financial innovation, a trend that carries geopolitical implications. As China rapidly deploys AI in finance, it challenges Western dominance in global fintech and data-driven capital markets. My commentary? The geopolitics of AI involve more than chip export restrictions. Foundation models want to be free.

It's naive to think the best models won't soon be freely available and accessible to all countries, regardless of how any administration uses tactics like trade barriers and economic sanctions to deter access. Let's prepare for and celebrate a world where the most ambitious technologists and entrepreneurs can solve the hardest problems unimpeded. Let's also insist on the responsible and fair use of AI.

If you don't agree with censorship or the values of any product's creators, know what you're using first and vote with your dollars and your downloads. Of course, we'll share a link to the full episode in show notes. Now shifting to this week's conversation.

Dr. Martin Goodson is the founder and CEO of Evolution AI, which he founded in 2012 as an early pioneer in the field of applying deep learning to improve optical character recognition, or OCR. Evolution AI was funded by one of the largest AI R&D grants ever awarded by the UK government, along with investment from venture capital firm Firstminute Capital.

Martin's a former Oxford University scientific researcher and has led AI research at several organizations. In 2019, he was elected chair of the data science and AI section of the Royal Statistical Society, the membership group which represents professional data scientists in the UK.

And without further ado, Martin, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space.

Hi Dan, really great to be here. Well, I don't know about illustrious background, but I'm happy to talk to you about why I got into the AI world. And really, it started when I was a teenager. I used to go to the library to study for my exams when I was 16 years old, but I would just read these books on AI. And I was absolutely fascinated by

These ideas about teaching machines how to think, how to learn, how to have a memory, how to represent information. So I've been really interested since then. You know, obviously, I've had a career in research, really in life sciences. So I was looking at using machine learning algorithms to analyze biological data. I've worked in various fields.

I worked in the research context in academia, but also worked in pharmaceutical companies and different things like that. And since then, when I left academia, I started to work in several tech startups where I was running AI research, and I've been through the whole traditional deep learning phase. And now, obviously, everything's moving over to generative AI. So you started by solving a problem at Evolution AI that is...

Pretty mundane. OCR, a lot of people would have said, is a solved problem, but clearly, pre the work of Evolution AI and modern approaches to OCR, it was kind of a blunt object. Where are there opportunities to improve optical character recognition with AI? Yeah, I thought it was a solved problem as well, but then I tried to use traditional OCR to actually do something. This was in another startup, you know, a really long time ago. We were trying to...

build some kind of tool that would read your tax documents and then calculate your tax so that you could, you know, upload it for the UK tax system. It would need to look at payslips and other kinds of information in PDF form. And I thought it was going to be really easy. You know, OCR, like you say, it's a solved problem. You just extract the data and then you...

do some fairly simple accounting calculations, and you should be able to fill this out. But we really failed at the first hurdle. That startup just failed because we just couldn't get the data out of the payslips. The really easy thing of just getting the data from the payslips, we just couldn't do it. The thing about OCR is it works really, really well at

optical character recognition. It recognizes characters really well. It can read out text on a page really, really well.

But most of the document-processing tasks that we do, it's just not like that. You're not just reading out characters. Humans are not just reading out characters when they read a document. They're understanding the overall layout. They're understanding the meaning of the words. And they're doing a lot of very complex processing, integrating information from the visual domain and the language, the linguistic domain,

and putting these things together to figure out what this document's about. How do I find the piece of information that I'm interested in? There are loads of subtle cues in a document that we just don't even think about. But obviously, when you're trying to do this computationally, you do need to think about it. And it was very difficult to do this kind of thing with traditional OCR. You know, it works great if you're looking at something like a check. You always know where the check number is on a check because it's in the same place.

All checks look basically identical to each other. OCR works great for that. But for anything more complicated than that, even something like an invoice, you just can't do it. So we decided we were going to do something about this. And the other thing that happened to me is that in 2011, I went to this deep learning workshop. There's a conference in the AI world that's now called NeurIPS; it used to be called NIPS.

In 2011, I went to this workshop about this whole new thing of deep learning. It was all about convolutional neural networks. And there were like 30 people in this room. Now, if you go to NeurIPS, there are going to be like 5,000 people at that kind of workshop. But it was a small thing. And 2011 was just before the whole explosion of deep learning happened.

But once I started to understand how this stuff was working, especially convolutional neural networks, I realized this was the perfect time to deploy this new technology to work on this old problem of understanding documents. And that's why we set the company up. So, gosh, you open the time capsule and go back to 2011. So you were doing this work when you were random-foresting these kinds of problems. And then you saw the rise, like you said, of CNNs and

all sorts of neural nets, and now we are looking at language modeling. How would you talk about the progress we've made or the evolution of data science and machine learning over the last decade plus? Yeah, I mean, for the last decade, my interest has been quite narrow. I'm really interested in data extraction from documents. So I can't really speak for the wider enterprise of data science, but for what we do,

The big difference with the generative models is that they work, and it didn't really work before. We really struggled to get good accuracy. And even though in some use cases it was adequate, the accuracy that we were achieving using traditional deep learning approaches, forget about random forests, even the deep learning approaches were just not really very effective. And now

the modern transformer-based models, they work. They work like humans work. They work as well as humans, if not better. And the really interesting thing is they can integrate information from different domains. So like I said earlier, what we're mostly interested in when we're reading a document is the image domain and the language domain, the linguistic domain.

Well, that's what humans use when they try and understand a document. Now, the fantastic characteristic of these transformer models, these generative models, is that they can integrate information from different domains. They can integrate information from images and from language. Plus, a really major benefit, particularly for what we do, is that they actually understand the world. So they understand finance.

So it's not just, you know... If you take an untrained person and ask them to interpret some financial document, it's not easy. Even if they're just doing data entry, and I say "just data entry" in inverted commas, it's not that simple. They are actually using a lot of background knowledge about finance. You need specialists, especially for some complex financial documents, even just to do the data entry, just to extract the information. You need to have background knowledge about finance.

So if you think about, you know, if you're trying to understand an income statement, even if you just want to extract the numbers, you need some background knowledge. You need to understand what does cost of goods sold mean? What would cost of goods sold mean in a particular industry? That information, that knowledge, that kind of background knowledge

You just couldn't encode that in previous generations of models. You can now with large language models. That opens up a whole world of financial analysis, and it opens up a whole world of even fairly mundane-looking tasks: taking the information from a balance sheet or an income statement, structuring it, and then using it in some risk model or valuation model. That was basically impossible before, but now we can do it.
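
As a rough illustration of what that looks like in code: a minimal, hypothetical sketch in which a vision-language model is asked to map an income statement onto a fixed schema. The `call_llm` helper and the schema fields are assumptions for illustration, not Evolution AI's actual interface.

```python
import json

def call_llm(prompt: str, image: bytes) -> str:
    """Stand-in for a real vision-LLM client (an assumption: swap in
    whatever API your model provider actually exposes)."""
    raise NotImplementedError

# Illustrative schema: these field names are invented for the example.
INCOME_STATEMENT_SCHEMA = {
    "revenue": "total revenue for the period",
    "cost_of_goods_sold": "direct costs of producing the goods sold",
    "gross_profit": "revenue minus cost of goods sold",
    "operating_expenses": "selling, general and administrative costs",
}

def extract_income_statement(document_image: bytes) -> dict:
    """Ask the model to map a document onto a fixed schema.

    The model's background knowledge does the heavy lifting: it knows
    that 'COGS', 'cost of sales', and 'cost of revenue' all refer to
    cost_of_goods_sold, which older OCR pipelines had no way to encode.
    """
    prompt = (
        "Extract these fields from the income statement as JSON, "
        "using null for anything not present:\n"
        + json.dumps(INCOME_STATEMENT_SCHEMA, indent=2)
    )
    return json.loads(call_llm(prompt, image=document_image))
```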

So you're the expert here, but I'm going to go out on a limb and say that the traditional problem that OCR solves, kind of an extraction problem, is now migrating into, call it, a cognition problem: interpreting and kind of allowing a human to speak to a document, as opposed to just extracting text or images from it. How are you thinking about the evolution of Evolution AI, your company, in a world where traditional extraction is commoditized?

Yeah, we think about this on a daily basis. I don't think it's quite commoditized yet, because the thing about OCR is it works really well at extracting characters. The accuracy level of OCR algorithms is really, really high now, so they don't really make mistakes anymore. If you have a good document, one that hasn't been photocopied ten times or scanned ten times, they basically don't make errors.

But like I said, they don't solve the problem because they don't actually understand documents. So they're a write-off for most tasks. However, LLMs, they can understand documents, which is great. They have all this background knowledge. They understand finance. They understand the overall picture. They understand the layout of the document. Awesome. They can extract the information that you're interested in. The problem is that information is subject to hallucinations and other kinds of errors.

So in some ways, it's great. You've got these two complementary worlds, the OCR system, which is really highly accurate, but doesn't understand the document, and LLMs, which understand the document and have quite a high error rate. If you are looking at some sensitive data, most of our clients are in financial services, so it's basically all sensitive. It's all important. There's not much tolerance for errors. You can't just sling everything into an LLM and hope for the best. You will get hallucinations. It will literally make up line items in a balance sheet.

We see this very frequently, and you need to check for it, so you need some scaffolding around the model to stop that happening. So that's one answer to the question. But let's assume that this is all going to be solved, and next year we'll have a new generation of LLMs that won't have any error rate when it comes to the extraction of characters on a page. Then, as a company, we have started to become really interested in additional things, not just extraction of information from documents,

but things like modeling, sensitivity analysis, projections, valuation, you know, much deeper interpretation questions, particularly about financial statements. We're mostly interested in financial statements right now. So we are building technology which goes beyond extraction of information; it's financial analysis of the documents.
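
To make the "scaffolding around the model" idea concrete, here is one minimal sketch under illustrative assumptions: because OCR reads characters near-perfectly on clean documents, any number the LLM returns that appears nowhere in the OCR output is a likely hallucination. The function and field names here are hypothetical, not Evolution AI's actual pipeline.

```python
import re

def normalize(num: str) -> str:
    """Strip currency symbols, commas, and spaces: '$1,234.50' -> '1234.50'."""
    return re.sub(r"[^\d.\-]", "", num)

def flag_suspect_values(extracted: dict, ocr_text: str) -> list:
    """Flag LLM-extracted values that the OCR engine never saw.

    A value that appears nowhere in the OCR output was likely invented
    or digit-swapped by the LLM and should be routed to human review.
    (A real system needs more tolerant matching: trailing zeros, units,
    scale factors like 'in thousands'.)
    """
    ocr_numbers = {normalize(tok) for tok in re.findall(r"\d[\d,.\-]*", ocr_text)}
    return [
        field
        for field, value in extracted.items()
        if value is not None and normalize(str(value)) not in ocr_numbers
    ]

# Example: the LLM invented an 'amortization' line that OCR never read.
items = {"revenue": "1,523", "amortization": "87.5"}
print(flag_suspect_values(items, "Revenue 1,523 Cost of sales 900"))  # ['amortization']
```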

You mentioned an LLM might hallucinate a line item in a balance sheet or a financial statement, which could obviously have significant implications for how the data is used. On this show, we talk a lot about how to exercise AI responsibly. And I'm going to assume, because of the way

these LLMs are trained to interpret these financial documents, that when they encounter things that are not as well represented in their training data, it could be language, it could be different cultures or societies with different financial regulations, different formats, presumably the models are going to be less accurate when they're analyzing

even in your narrow use case, financial documents that are less well represented in the training data. Is that a fair statement? Yeah, I mean, to be honest and transparent, most of the documents that we're looking at are in major languages, major European languages or English.

So, I mean, we get some documents in Chinese or Indian languages, you know, but they're still major languages. We're not seeing some low-resource language. We're not seeing a balance sheet from, you know, Tuvalu. I assume, though, you'd be reluctant to commit to the same level of accuracy for languages that are less represented, right? That's... You know what? I would say that actually the problem is not language, it's numbers. The problem is numbers. What we see is things like digits, right?

If you try an LLM API, just the standard, you know, leading models, you will see digits swapped in a number. You see this quite a lot. You just see the digits swapped. Or you might see a number which has been transposed into a different area of the table. So it's swapped rows. You might see rows being swapped, cells changing place in the table. These are kinds of hallucinations.
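
Swapped digits and swapped rows have a useful property: they usually break the arithmetic that a financial statement must satisfy. A hedged sketch of that check, with illustrative field names, assuming the extraction includes the statement's own reported totals:

```python
def check_balance_sheet(items: dict, tolerance: float = 0.01) -> list:
    """Check the accounting identities a balance sheet must satisfy.

    A swapped digit or a value transposed into the wrong row will
    almost always break one of these equalities, so failures are a
    cheap hallucination detector. Field names are illustrative.
    """
    errors = []
    # Identity 1: assets = liabilities + equity
    if abs(items["total_assets"] - (items["total_liabilities"] + items["total_equity"])) > tolerance:
        errors.append("assets != liabilities + equity")
    # Identity 2: the reported total equals the sum of its line items
    if abs(sum(items["asset_line_items"]) - items["total_assets"]) > tolerance:
        errors.append("asset line items do not sum to total assets")
    return errors

# Example: a single swapped digit (1,532 vs 1,523) trips both checks.
report = {
    "total_assets": 1532.0,          # model read 1,532; document says 1,523
    "total_liabilities": 900.0,
    "total_equity": 623.0,
    "asset_line_items": [1000.0, 523.0],
}
print(check_balance_sheet(report))   # both identities fail -> human review
```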

Really, they're definitely not the kind of traditional OCR error that you might get. That would be mistaking an I for a 1. That's the typical OCR error. That doesn't happen, but you get things which just move, the table just moves around. The cells of the table move around. So that is more of a problem. The language is not actually a problem, or at least we haven't seen it.

being such a problem. And I think that's because these models are really, really good at understanding language. So they tend to not make so many errors, because they just understand language. They know the kinds of things that are written in a financial statement. They have a deep, deep knowledge of the language that's used in a financial statement. So they tend to use that background knowledge in order to not make mistakes. You can't do that with numbers, because a number is just a number.

So these things are much more subject to errors in the numbers and digits that they read out. We've become accustomed to an environment where documents are authored for an audience that's primarily human. And I'm going to assume that in the future, the primary consumer of most documents will not necessarily be a human. It might be an AI agent, and then the human might be

having a conversation with the document after it's been processed, let's say, by an LLM. To the extent you agree with that, what do you think it means to have systems that maybe are authoring documents for an LLM as opposed to a human? Are we going to get better at formatting, adding metadata, things that might make documents more machine-friendly? Look,

The thing is, you don't need LLMs for this. You could just decide on a structured format for reporting financial information and then put it in a database and use an API to extract it. We don't need an AI to read this information. It's just numbers. You could just structure this in a sensible way. The real problem is that no one can agree how to represent these numbers correctly.
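
For illustration only, a sketch of what one such agreed structure could look like; the taxonomy and field names are invented, and the absence of any shared standard like this is exactly the point being made here:

```python
from dataclasses import dataclass

@dataclass
class FilingLineItem:
    """One agreed-upon, machine-readable line of a financial report.

    If filings arrived in this form, 'extraction' would be a database
    query, and no AI would be needed to read the numbers at all.
    """
    company_id: str
    period: str          # e.g. "2024-Q4"
    concept: str         # e.g. "cost_of_goods_sold", from a shared taxonomy
    value: float
    currency: str        # ISO 4217 code, e.g. "GBP"

item = FilingLineItem("GB-12345678", "2024-Q4", "cost_of_goods_sold", 1_250_000.0, "GBP")
```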

They just can't agree. And to some extent, there's an adversarial setup here in which people are trying to hide the information. So the problem here is not the technology. The problem is that people just can't agree. So I don't think that really changes in a world of LLMs. You have an intelligent entity that's trying to figure out what the data really means and maybe...

Maybe the person that created the financial tables was trying to hide something. And you need to go through a forensic process of trying to figure out what exactly underpins the data. I don't think that really changes whether it's an AI or a human doing that. That's a really good insight. It made me recall a fascinating conversation we had, I think it was last season, with the CEO of a company called April, a gentleman named Daniel Marcus, who was formerly the CTO of Waze, the mapping company that Google acquired.

And he's, April's basically an AI accountant.

So it automates doing your taxes. And he talked about solving the problem of mapping the world at Waze was actually quite similar to solving the tax problem. Because there's a set of rules, there's country-specific regulations. And when you train a model to understand all the patterns in tax forms, we have the IRS in the US, etc. It's analogous to training

a model to understand road signs and that sort of thing. Is that the kind of problem you encounter as well, where, like you alluded to, you're struggling with the lack of consistency? You know, with financial statements, there is no worldwide structure for how those get populated or produced. So it's a really hard machine learning problem to solve for that reason, right? I think it might be a bit more fundamental than that. I think...

These models, these vision language models, are underpinned by an architecture called a vision transformer. I won't go into the details, but it's basically an application of the transformer algorithm to visual information. And it seems like this

type of neural network architecture is not very good at understanding spatial relationships very accurately. Whenever you try and get an LLM to do something which requires very careful understanding of the visual image, it tends to struggle. It's really good at understanding the gist.

It can understand, oh, this image is two cats playing with a ball of wool. It's really good at that. But if you're asking it something very specific, like does this square intersect with this circle, it doesn't do very well. And most of the document tasks that we're trying to do, you need more than the gist. You really need to understand where exactly on the page the information is so that you can double check it, for example.

Humans double-check stuff. They give it to someone else and ask them to double-check it, or you get someone else on the team to double-check it. In most really critical use cases, you would get multiple humans looking at the same piece of information, for example. This spatial thing is kind of a problem with document

processing. And, like I said, this is a fundamental problem at a low level of the technology itself. Trying to do something about that, trying to use OCR-based methods in conjunction with the generative models, is challenging, because you don't know exactly where the information is supposed to be on the page. It should be trivial, but it's not. So we're grappling with these kinds of more fundamental problems, I'd say.
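
A hedged sketch of that OCR-plus-LLM combination, assuming the OCR engine returns word-level bounding boxes (most do): anchor each LLM-extracted value to the OCR word that matches it, recovering the precise page location that the vision model can't reliably provide.

```python
from typing import NamedTuple, Optional

class OcrWord(NamedTuple):
    text: str
    page: int
    x: float  # left edge, in page coordinates
    y: float  # top edge, in page coordinates

def locate_value(value: str, ocr_words: list) -> Optional[OcrWord]:
    """Find the OCR word whose text matches an LLM-extracted value.

    Vision-language models are good at the gist but poor at precise
    spatial relationships, while OCR knows exactly where each word sits.
    Anchoring the model's answer to an OCR box restores the 'where on
    the page' information needed for human double-checking.
    """
    target = value.replace(",", "").strip()
    for word in ocr_words:
        if word.text.replace(",", "") == target:
            return word
    return None  # not found in OCR output: treat the value as suspect

words = [OcrWord("Revenue", 3, 72.0, 410.5), OcrWord("1,523", 3, 310.2, 410.5)]
print(locate_value("1523", words))  # -> OcrWord(text='1,523', page=3, ...)
```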

Yeah, it makes sense. I'm going to go off topic here, but something that, based on your background, I really want to get your reaction to. So...

Alan Turing comes along in the 50s and says, you know, the way we'll know when we've achieved artificial intelligence, or even artificial general intelligence, is when we can get a machine to demonstrate that it appears to be, quote, finger quotes, thinking like a human. And so, you know, obviously, I've got an opinion here. I'll try to mute my personal opinion. But we've conflated this idea that what it means to replicate human intelligence is to

be able to interact with the machine using language like we'd interact with another human. From the seat of Dr. Martin Goodson, what is intelligence and is that the right way to kind of know when we should be planting the victory flag having achieved artificial intelligence? Oh, you mean...

You mean, is it the right way to figure out whether we have intelligence, to be using language? Yeah. Yeah, I don't have a good answer to that, but I have obviously sort of mused on this. One thing that I found when I became a father was that my children, they didn't really... Language didn't come first, before intelligence. So they kind of made...

jokes with me and tried to... Like, for instance, my daughter, she fell over and hit her head on the table. She was crying, and she was a tiny baby, really. She couldn't speak. But while I was comforting her, she went over to the table and just leant down and mimed hitting her head on it, to show me what had happened. So that's an intelligent act, in my opinion.

But she didn't have language. Language came a lot later. And also my kids, you know, jokes like physical humor. There's lots of physical humor with children before language. So why do we think that language is this core thing that's primary? Why do we think that language is a primary aspect of intelligence? Clearly it's an aspect of intelligence, but...

There's humanity that comes before language. So that is in no way an answer to your question, but it's something that I do think about. So maybe we are getting this wrong, because these machines that we've built seem to be really great at language, but there's lots of other stuff that they're not very good at. I'd argue many species in the animal kingdom are intelligent by most standard

definitions of intelligence, and yet they don't communicate using language, using the spoken word like we do. To a layperson, what would you say is intelligence? How do we know it when we see it? I think it's the ability to think and learn. One of the problems that we've had is that lots of people don't use a simple definition like that. They tend to use a definition which is something like

It's the capability that we would think of as needing human-like brains or something like that. That's not a good way of putting it, but they tend to place it against human intelligence.

Which doesn't really make any sense because, like you say, intelligence is widespread in the animal kingdom. And I'm much more interested in thinking about intelligence as something which is very basic. And obviously, some animals are more intelligent than others, but it's very, very widespread. The ability to understand information and act on that information and to learn

You get that in very, very simple animals. Very simple animals. You know, microscopic animals have the ability to learn. So I'm much more interested in thinking about intelligence as a much broader phenomenon rather than this very, very specific thing, which is human intelligence. It's just one species. So I'm sitting in the cradle of Silicon Valley here, and within about a five-mile radius of where I'm sitting, you know, you couldn't throw a stone and not hit a dozen

entrepreneurs or technologists who feel like achieving AGI, artificial general intelligence, is kind of the pinnacle of what we're trying to do in this field. In fact, we had the co-inventor of the term AGI, a gentleman named Peter Voss, on this show talking about the

vision for achieving AGI and a post-AGI world. What would you as a scientist say is the objective of the research and the innovation that we're doing, beyond OCR for financial documents? When should we plant the victory flag in the ground, if it's not just improving language modeling?

Well, you know, we're trying to do something quite simple. We want to make it easy for our customers to understand the documents that they need to understand so that they can make their decisions. Our customers, they want to do some quite boring things. You know, they're trying to figure out whether they should lend money to a company, for example. So they need to look at loads of documentation to do that. And instead of humans having to do those boring jobs, they just use our software. One of our customers looks at four million pages of bank statements a year.

They use our software to do that. Can you imagine the human endeavor that it would take, which is wasted endeavor, wasted intelligence, to extract information from 4 million pages of bank statements? So that's what our mission is, and that's what we do. We're not really interested in AGI. But in the broader context, I think we need, as a society, to ask ourselves: why are we trying to recreate human intelligence? It's not as if there's a shortage of human intelligence in the world. There are billions of human intelligences.

Billions. There is not a deficiency of intelligence, of human-like intelligence in this world. There's a surfeit of it. The difference is that they are powered by sandwiches rather than by electricity. Lots of electricity in the case of GPUs.

So is it the right goal? I think as a society, we need to ask ourselves, is this the right goal? Obviously, we all grew up with science fiction novels, and it's a really exciting idea that you can have human-like intelligences made of silicon. It's a super interesting idea. But is it the right problem to solve? I do wonder about that.

That's more profound than I think you intended it to be. And I agree, obviously, that the objective should be achieving the best possible partnership between

call them carbon-based us and silicon-based machines, life forms. And I think when we extract a lot of the things that are mundane or that humans just don't do very well, figuring out what size loan to make based on 4 million pages of documents, like, you know what? Turns out that's not something that

We've naturally evolved to do well. It's something that machines have naturally evolved to do well. I would love it if more entrepreneurs, technologists, et cetera, thought about this problem space in that kind of more enlightened way. What can we do to make machines better at complementing humans?

And I'd love it if we stopped talking about the Turing test and confusing humans into thinking that they're interacting with humans when they're interacting with machines. Is that consistent with your thoughts? Yeah, I agree with you. How do we get there? How do we get there faster, Martin?

I don't have a good solution for that. There is a lot of money and a lot of people heading in that direction. I'm not sure how we... You're an influencer. I know you host a big meetup of data scientists in your community. I think it's in London, right? That's right, yeah. What are the things that they care about? What could you talk to them about that might help us at least catalyze this kind of conversation?

Well, you know, we have our technical community, the London Machine Learning Meetup. I mean, it's based in London, but most of the meetings are online, so we get a broad listenership and viewership. But it's quite academic. We're really discussing academic papers, technical papers. Most of the community, if you ask what they're interested in, they're really interested in solving their day-to-day technical problems.

But the reason why I run that community is that I am interested in these bigger questions. I'm particularly interested in knowledge sharing and helping people to help each other. I mean, it's a problem in the UK that there is a lot of really good technical talent, but there aren't very good ways of communicating that information to each other. The

lines of communication between academia and the technical community are not very good. Definitely, the lines between the technical community and government are absolutely terrible. And that's something that I want to help with.

I agree that talking about these wider societal questions is useful. At the moment, I'm more interested in improving communication so that we can actually have a good and well-functioning AI industry in the UK. You know, for us in the UK, that's more of a problem, I think, and more of a barrier.

There's no point talking about the societal implications of AI if you don't have an AI industry. We have that problem. You don't have it in the US, but in the UK, we do. We are not a leader, even though the government likes to think that we are. We aren't. And along with many of my colleagues, I'm interested in building that community and building that industry in the UK. Well, if you get us there, you will have accomplished a lot. So it's also a worthy goal.

Hey, Martin, we're about out of time. I've got to get you off the hot seat. But before I let you go with one last important question, talk to us about your personal journey from being a researcher and an academic to now being the CEO of a well-funded, successful company tackling commercial problems. What have you learned about yourself along the way?

Oh, I've definitely learned that I'm a technical person through and through. You know, I went through a period of thinking, I'm the CEO, I can't do any technical work, I've got to just focus on the commercials. And lots of people were saying the same thing: you just have to leave all of that technical stuff behind. And what I realized was, when I did that,

there were lots of problems on the technical side of the company that I just wasn't aware of. I completely lost track of what was going on. And it was a huge problem. I realized that was...

All I was doing was just being a not-very-good salesperson. I had turned myself from being a pretty good researcher, a good programmer, a good developer, a good technologist, someone who's quite good at that, into someone who just wasn't very good at sales, which is completely pointless. And eventually I started to realize, no, that's wrong. Actually, my skills are in developing products using AI.

And also really, really understanding deeply the algorithms and developing algorithms. And so I decided, well, I'm just going to hire people to do the other stuff that I'm just not very good at. Those people are much better at it than I am. They're professionals. And I'm just an amateur. So I did that. And I now focus much more on the technical side of the business, which I think is where I'm happier. It just makes me happier. But I'm also much more effective, I think. Good wisdom.

Hey, Martin, where can the audience learn more about you and the great work that you and the team are doing?

Well, they can have a look at our website, evolution.ai, if they want to know more about data extraction. But if they're interested more generally in learning about AI, especially the technical side, keeping up with this stuff is quite difficult, so I would definitely check out the London Machine Learning Meetup. It has a YouTube channel as well; we record all of the talks. And come along to the next meetup. Yeah, that would be great. We'd love to see more people there. Good. We'll publish those links in the show notes.

Well, this was great. Thanks for hanging out. Really a pleasure to meet you. And certainly we're all rooting for you to succeed. Thanks so much, Dan. Really great to be here. You bet. Well, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleReign. And of course, we're back next week with another fascinating guest.