The primary differences include company size, resource availability, and collaboration culture. Google, with over 180,000 employees, has vast resources, allowing for experiments on thousands of GPUs. In contrast, Two Sigma, with around 2,000 employees, operates on a smaller scale, requiring more cost-conscious decisions. Collaboration is also critical in finance, where complex systems demand teamwork, code reviews, and rigorous testing, practices that are standard in tech but not yet as ingrained in finance.
Data quality in finance is often poor due to multiple factors, including human errors like 'fat-fingering' trades, engineering issues in data recording, and inconsistencies during events like mergers. These issues make financial data noisy and messy, requiring significant effort to clean and standardize before it can be used effectively in AI models.
In finance, ethical risks are less about disadvantaging specific groups (as in tech) and more about ensuring accurate decision-making. The primary concern is avoiding overpromising or misusing AI, which could lead to financial losses. Unlike tech, where bias might affect users directly, a firm like Two Sigma focuses on optimizing measures like risk and return; because it has no retail users, the consumer-facing bias concerns common in tech largely do not apply.
The breakthrough came when a small team, including Mike Schuster, developed a prototype using neural techniques instead of statistical methods. This prototype, tested on English-to-French translation, outperformed existing systems significantly. The success led to scaling the model across multiple languages, eventually running on 20,000 servers globally, revolutionizing machine translation.
Schuster expects incremental improvements rather than revolutionary changes. He anticipates models becoming more efficient in terms of energy use and cost, with advancements in software rather than just hardware. Data quality will remain a challenge, but filters to separate good from bad data may improve. Overall, AI will become more integrated into daily life, similar to how speech recognition and calculators are now commonplace.
Schuster warns against the hype and overpromise of AI advancements, which can mislead investors and decision-makers. He emphasizes the importance of grounding expectations in reality, as many doomsday scenarios predicted in AI have not materialized. Instead, he advocates for a balanced view, focusing on practical improvements and avoiding exaggerated claims.
Collaboration is crucial at Two Sigma due to the complexity of AI systems. Schuster highlights the need for teamwork, code reviews, and rigorous testing, which are standard in tech but less common in finance. Building a culture of trust and constructive feedback is essential for tackling large-scale projects and ensuring the reliability of AI systems.
You need to build a team that works together on a big system with things like, you know, code reviews and of course, unit and regression tests. And these things are all normal for people who grew up in tech, but not necessarily normal in every finance company, right?
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 314. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service.
Our community is growing thanks to you. I get asked all the time how you can meet other listeners. To make that happen, we launched a newsletter where we share weekly insights and tips that don't make it into the podcast, as well as opportunities for you to engage with each other. It's free and not at all spammy. We will never share your details without your permission. And of course, we will share a link to register in the show notes.
If you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen.
If you leave a comment, I may share it in an upcoming episode like this one from Lori in New Haven, Connecticut, who teaches psychology at Yale and listens while gardening. Lori's favorite episode is the one with Chris Fernandez, co-founder of Enso Data, about how AI is diagnosing and treating sleep disorders. It's a great conversation. We will, of course, link to that in the show notes. Thank you, Lori.
We learn from AI thought leaders weekly on this show. Of course, the added bonus, you get one AI fun fact each week. Today's fun fact, Martin Arnold and Ian Smith write in Financial Times Online that AI may make some people uninsurable.
Some experts have raised concerns about AI used in areas such as health insurance, where live data could increase personalization and lower costs for some consumers, but also risk making it harder for some unhealthier people or those without access to technology to get affordable coverage.
The EU insurance regulator says companies should make reasonable efforts to monitor and mitigate biases from data and AI systems, given the risk that algorithmic pricing models could end up discriminating inadvertently against certain people. Recently, the EU Insurance Commissioner bowed to pressure from insurance companies by announcing they won't be subject to EU AI regulation until at least next year. My commentary: regulation can't keep up with the pace of innovation. Let's stop pretending it can. It's incumbent on vendors to hold each other accountable.
It's equally important for consumers to get comfortable demanding to know how decisions that affect our personal lives were made and finding alternative services if we're not satisfied with the answers that we get. Of course, we'll link to the full article in today's show notes. Now shifting to today's conversation: Mike Schuster is head of the AI core team at Two Sigma.
He's an expert on AI trends in tech and finance, with over 25 years of experience in general machine learning. At Two Sigma, Mike leads a team of engineers and quantitative researchers to bring together AI efforts in areas like machine learning and deep learning across the firm's investment strategies. Mike also oversees advancements in the AI technologies the firm uses internally to realize greater efficiencies.
Before joining Two Sigma, Mike spent 12 years at Google, where among other initiatives, he worked on the Google Brain team to develop Google Translate. Dr. Schuster received his PhD in electrical engineering from the Nara Institute of Science and Technology in Japan. I would say over 314 episodes, there are only a few guests who
unequivocally have changed the world. Today's guest is actually one of those. I've been so looking forward to this conversation. Without further ado, Mike, it's my pleasure to welcome you to the podcast. Let's get started by having you share a bit more about that illustrious background and how you got into this space.
Okay, well, thank you very much. That was a really nice intro. You know, I'm just one of the people who have worked on this technology. So it's not just me. I just want to point that out. There were so many other people and we'll probably talk about that more, right? It's never just one person. But about my background, so people may notice my German accent. Some people notice it right away. Others, it takes a while. I grew up in Germany.
I studied electrical engineering in my hometown and I went to Japan to do my PhD. During that time, and actually a little bit before that, I got interested in speech recognition and machine learning. I basically taught myself a lot of these things at the time because it wasn't taught
in university. When I was in university, there was almost no computer science at the time, but it was a hobby of mine and my friends. After the PhD, I moved back and forth between Japan and California quite a few times, working in research and development: in a startup, at NTT, and at ATR, which was the research lab where I did my PhD, mostly in speech recognition and some machine learning. So
Then a bigger change was when I came to Google in 2006, working on the speech recognition team. It had, I think, 10 to 12 people at the time, and it stayed quite small for a while. This was a time when speech recognition was not really interesting to many people in the world. I mean, everybody thought this will never work.
But Google had the foresight to say, okay, let's keep working on this. And by 2007, 2008, after some other experiments, we put the first system on Android and actually on the iPhone, which was server-based. And that really took off. It was a really interesting time for us because the whole world started using speech recognition, which we
often cannot really imagine today, because that was just 15 years ago and today everybody is used to having speech recognition on the phone and in all kinds of environments. So
At Google, I worked a long time on speech, but then also on recommendation systems. Think about YouTube, for example: when you watch something, what video are you going to watch next? I worked for a while on that, which was actually done using neural language models at the time, a really interesting internal project. And then
I joined the brain team and pretty soon started working on translation, machine translation, one language to another language. And we have, I think, another question later that dives a little bit more into this. But this was a really interesting time because we...
Just a small team of us actually made a prototype work that worked really well. And we managed to change a lot of translation, at Google first and then in the world, although other people were working on the same thing. It was a very interesting time. Then in 2018 I came to Two Sigma, which completely changed everything.
My life, basically, because I moved from the West Coast to the East Coast into a new field. I had always been interested in finance, so I somehow had a
good feeling about it. I also believe that a background in speech recognition is actually quite useful here: it's a lot of data, large models, the data is noisy, and you need to deal with big systems, which is very similar to most quantitative finance jobs. So yeah, maybe I'll just leave it there. We can talk more about what we do here at Two Sigma in the later questions.
Let's actually pick up on that topic. So you're now at one of the most successful hedge funds. It's not obvious what an NLP guy is doing at a hedge fund. Describe your role and maybe just share a little bit more about Two Sigma. Right. So as you said in the beginning, I'm running what is called the AI core team. We currently have roughly 25 people, but AI means a lot more at Two Sigma. You probably have
altogether maybe 1,300 or 1,400 engineers or something and 300 PhDs. A lot of modelers work in all kinds of areas that you would probably call AI today. And maybe this is an interesting part: when I came in 2018,
I remember talking to the leaders, the founders, the CTO, and we were supposed to build a team that uses AI across the company. The founders and the leaders of the company had the vision, and I would say correctly, that what happens in tech will also at some point happen in finance: we want some of that technology. And they basically said to me, hey, we'll give you some freedom
and some people and so on; try to make us better, basically. That meant, okay, now it's on me to really make a plan, find problems that are applicable, be realistic about things, of course, right? It's not an ivory tower where you can just have ideas that nobody ever uses, right? And that turned out to be the right environment for me, or for us. And we built this team that is basically a mix of
engineering and what we call modeling here or research and development. And this works very well. I mean, we've made a lot of progress within the company and in many different areas. And it's basically getting better by the year, I'd say.
So famously, your team at Google doing Google Translate asked Jeff Dean for, I think, 2,000 GPUs. And kind of by the end of the week, they were provisioned. So I got to ask, how is it different building AI in a finance company versus a tech company? You know, I don't remember this exact tidbit that you just mentioned, but it sounds very typical for Google, right? So, I mean, this is maybe one of the biggest differences. So people always ask me, what is different between
you know, finance and tech. And I said, well, the number one difference is really, first of all, the size of the company. You know, when I joined Google, there were less than 5,000 people. When I left, there were 80,000. Today, they have 180,000 people.
At Two Sigma, we have about 2,000 people. Just that makes a huge difference because you know all the faces, you know there's only one small group taking care of, let's say, GPUs, which was completely different at Google, right? And it's not, I mean, obviously that is true for resources as well. At Google, we had sometimes people who run experiments on thousands of GPUs easily, right? That is more difficult here.
But it's also possible because, you know, obviously we're using the cloud and, you know, in the cloud you can do a lot of things. But all these things cost money and that is actually a thing that we keep thinking about all the time. And I believe that's not just a problem or an issue for us, but...
for basically everybody who works in this field, right? Nothing is free. Ideas are maybe free, but the work and the compute cost a lot of money. Maybe I can also focus on some more differences that I have seen when I think about it long term, right? Finance
probably used to be not as technical as it is today, especially in the company I am in now. I mean, Two Sigma has always felt like one of the leaders in technology in finance. When I came from Google, it was obvious that there were so many scientific people there.
Neural networks were known by probably a thousand people at least when I left, right? And many other machine learning techniques, the cloud, and so on. Here, all that maybe took a little bit longer. So neural networks were used; I don't actually know the exact year they were used for the first time, but it was before I came, right? So they were used.
Maybe not in the same way we're using them today, because today we have hundreds of thousands of parameters or millions of parameters, terabytes of data, and things like that. We train on multiple GPUs in the cloud, use technology like TensorFlow and PyTorch, obviously Python and Rust and many other languages as well. So this is very technical.
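To give a concrete sense of the stack Schuster describes, here is a minimal PyTorch training sketch. The model shape, data, and hyperparameters are illustrative inventions, not anything Two Sigma actually runs.

```python
import torch
import torch.nn as nn

# A small feed-forward regressor standing in for the kind of model one
# might train on tabular financial features (all shapes are made up).
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU if available
model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for real features and targets.
x = torch.randn(1024, 128, device=device)
y = torch.randn(1024, 1, device=device)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Scaling this from one toy batch to terabytes of data across multiple GPUs, with distributed data parallelism and a real data pipeline, is the "very technical" part he alludes to.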
And finally, one more difference I want to mention: when I came from Google, collaboration was a very important topic. Why is collaboration important? Because the technology you work on is so complex that you cannot do it yourself. Sure, there are always people who can do a lot by themselves, but in almost every advanced tech area, it has become
typical that the problems get so big that you need a team. That has probably already happened in finance too, but during my time at Two Sigma, it has happened more and more. Meaning you cannot find one genius who does everything and basically saves the company. You need to build a team that works together on a big system, with things like code reviews
and, of course, unit and regression tests. These things are all normal for people who grew up in tech, but not necessarily normal in every finance company. And building that trust, this may sound funny, but code review is something that, if you came straight from university, you may never have experienced: somebody else looking at your code and actually telling you what to do better. This is something you need to learn if you haven't done it.
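For readers who haven't worked this way, a unit or regression test in the style Schuster alludes to can be very small. Here is a hypothetical Python example in pytest style; the signal function and expected values are invented for illustration.

```python
# test_signal.py -- run with `pytest`; the signal and values are hypothetical.

def moving_average(prices, window):
    """Code under test: a simple trailing moving average."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def test_moving_average_matches_known_values():
    # Regression test: pin today's output so a future refactor
    # cannot silently change the behavior.
    assert moving_average([1, 2, 3, 4], window=2) == [1.5, 2.5, 3.5]

def test_moving_average_window_equals_length():
    assert moving_average([2, 4, 6], window=3) == [4.0]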
And to build that kind of atmosphere is, I believe, very important. And I think that is what has given us a lot of leeway in how we work and how we keep pushing these bigger projects forward that we have been working on. So on this podcast, we talk a lot about AI risk and the ethical issues surrounding AI. And
They're obvious when it comes to machine translation, when it comes to maybe using it for medical diagnoses or translating national security documents. I'm going to go out on a limb and say the risks and the ethics of using AI in finance are very different. Talk us through how you think about risk and ethical implications of what you do and maybe the potential impact of a false positive or a false negative.
Right. So first of all, I think when people in tech talk about risk, they may think about it a little bit differently than in finance, right? But so the technology in finance has been changing for a long time, continuously getting more advanced, more integrated and more complex. But in a way, the risks that...
let's say, normal people see, that hasn't changed at all, right? So let's say the risk is that you make a lot of money. Well, that's probably not a risk, but that you lose a lot of money is the risk, right? And basically, every finance company has risk teams and limits in place and all kinds of controls to really control that part of the equation, right? And you can say whatever technology you use...
let's say, I don't know, if the market drops by more than 3%, there is a certain thing that may happen. Somebody may call somebody: should we reduce the risk even more? Things like that have always happened. And I would actually argue
that this kind of risk is probably lower than before, right? It's hard to prove this kind of stuff, but there's so much more science and technology at work, which probably works better than ever before.
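As a toy illustration of the kind of automated control Schuster mentions, here is a sketch in Python. The 3% threshold and the names are hypothetical, and real risk systems are far more elaborate.

```python
DAILY_DROP_LIMIT = 0.03  # hypothetical: flag a >3% one-day market drop

def limit_breached(prev_close: float, last_price: float) -> bool:
    """Return True when the daily drop exceeds the limit."""
    drop = (prev_close - last_price) / prev_close
    return drop > DAILY_DROP_LIMIT

if limit_breached(prev_close=100.0, last_price=96.5):
    # In practice this is where "somebody calls somebody":
    # alert the risk team, consider reducing exposure.
    print("Daily drop limit breached; escalate to risk team.")
```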
This kind of risk, I don't see it as a problem. What is actually a risk, and this is maybe a completely different risk, is that the amount of hype and overpromise of certain advancements
can really throw people off and lead them in the wrong direction. And I have seen that not only in finance, but in so many other areas. This is, in a way, a typical internet thing: somebody promises you
certain returns in finance or something like that, right? Please invest your money here, and things like that. I say that is a real risk, because many of these things are not true. We've heard so many things over the last 15 years, not just in finance but in AI in general, that go into this doomsday scenario. And most, if not all of them, have not come true, right?
I think the change is definitely going to happen, but not in the extreme way it is often portrayed on the internet, in movies, or by people who in some cases are even experts on the topic, right? So if markets behave perfectly rationally, then all information
will be priced in very quickly. And traditionally that would be done by humans receiving all the signals from the market, every earnings report, right, everything out there.
But with humans competing against algorithms that have perfect recall and can instantly ingest and analyze much more data than a human can, it seems like the days of humans being able to make, quote, more rational trading decisions than algorithms are numbered. Is that far-fetched? Okay, that could be the case. But so far, it certainly hasn't happened, right?
I mean, in a way, you're right. One thing computers are good at is collecting a lot of information, looking at thousands of stocks or whatever you're trading at once, and things like that. A human can't do that. But so far there have been enough events
that make the data, what we call, non-stationary over time. Just look at the last few years: what has happened that hadn't happened that way before? COVID was a big change to the system. And then we had a war starting in Ukraine, which was also unexpected at the time. So all these things happen all the time.
Many of these events are maybe not familiar to everybody in the non-financial world. And every time, a human needs to go in and say, hey, guys, I don't think this is what we should do. So we are, I would say, quite far away from really automating this. I mean, one thing we have learned is that
we have hired, in fact, more people since we started with AI. A lot of people are sometimes worried about jobs and things like that. The only thing I can say, again, is that the reality is we currently have one of the lowest unemployment rates in the US ever. And certainly I've been hearing
about technology taking over things for a long, long time. And it's just not true. Jobs change, but they don't go away, right? For example, we have things like prompt engineering for LLMs today that we didn't have a few years ago. And there are many, many more examples like that, while some other jobs fade out.
Yeah, I'm not sure that was actually one of the questions you wanted to get into. But what I try to...
tell people all the time is, let's look at reality right now instead of trying to project something into the future that may happen, but who knows what will actually happen. And I want to comment on one more thing you said before, which was, of course, the famous efficient-markets hypothesis by the Nobel Prize winner: all the information is there, so basically there should be no
gain possible in financial markets, right? And that has been proven wrong for, whatever, 50 years now, right? Because the problem is, in practice, you cannot get all the information, right? There's a lot of engineering in between.
This is just too complex for the world to handle, right? And I think this will actually keep going. There will always be humans who make more correct long-term decisions than a machine can make that only has a limited amount of information about the past and the potential future.
So as the head of the AI core team at Two Sigma, you personally have some responsibility for, I assume, overseeing kind of the responsible use of AI on behalf of the organization. How do you think about that responsibility? Maybe any best practices or kind of policies that you use to train your team or kind of enforce best practices? What that means for us is we have to test 10 times. We have to write unit tests. We have to have regression tests for everything.
We have to make sure that the technology we develop actually works the way we intended and not in a different way. So when you think about the nature of AI, and I'll go out on a limb and say all AI is a data problem and AI by design is perfectly capable of replicating human bias, when you think about the kinds of data that you're training models on, I imagine it's a different set of data hygiene challenges.
How do you think about data quality when you're building models and assessing bias? Right. So let's talk about data quality first, maybe. So data quality in finance in general is actually quite terrible, right? So
There are many different sources. There are people fat fingering trades and type in 10 times the amount they actually want to type in. There are engineering problems with getting the data or recording the data in the right way that you have often
you miss out on certain periods or something, then there is a company, let's say merger or something else that where you got the wrong data on the wrong day. I mean, there's all kinds of things that happen all the time that make the data very, very noisy and messy. Sometimes people generate wrong data and it's out there. So this all happens. So data quality is quite terrible.
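As a minimal sketch of the kind of cleaning this requires, consider flagging possible fat-finger trades whose size is wildly out of line with a symbol's typical size. The column names and the 10x threshold below are hypothetical; real pipelines are far more involved.

```python
import pandas as pd

# Hypothetical trade records with columns "symbol", "timestamp", "size".
trades = pd.read_csv("trades.csv", parse_dates=["timestamp"])

# A trade more than 10x the symbol's median size is suspect
# (the threshold is an illustrative guess, not a real rule).
median_size = trades.groupby("symbol")["size"].transform("median")
trades["suspect"] = trades["size"] > 10 * median_size

clean = trades[~trades["suspect"]]
print(f"Dropped {int(trades['suspect'].sum())} suspect trades of {len(trades)}")
```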
But how does that influence biased decisions or responsible AI in that sense? One thing, again, that is different here: when people talk about biased decisions in tech, they usually mean something completely different, right? In tech, it often means that a certain group of people is more disadvantaged than another group, let's say by race or gender or something like that, right? And it always applies to users.
Here in finance, we don't have normal users in that sense. Of course, we have investors, but that is a very different story. An example of how we could introduce bias is if we used too much short-term data versus long-term data. But eventually, we want to optimize a lot of measures at once,
and we obviously try to do this the best way. There's no real risk that somebody gets disadvantaged in that sense. It doesn't really apply to us, because we don't have retail users, at least not as much as for, let's say, a bank or something like that, right? That is completely different. That's good perspective. So,
I often say the state of AI is comically immature. And for a lot of people, they raise their eyebrows when you say that because, well, gosh, LLMs can write haikus about cats really credibly. But when you think about the cost to run the models
the energy consumption, the speed to train them, the speed at inference time, the fact that these next-token predictors are just kind of bags of words. They don't really understand what they're outputting.
They don't really understand the physical environment. That's what I mean when I say that the state is immature. Fast forward five years, given what you know about AI and the state of research and development, what are some of the biggest impediments that you think will be removed when we're having a version of this conversation in five years?
Right. I think five years is actually a great time window because it's not 20 years, right? And so five years is something that we can somehow imagine at least, right? And if we...
So, you know, to predict exactly the future is, of course, hard, right? But especially, you know, I think you want to talk about LLMs as one part. But I personally think actually a lot of things will change less than what people actually expect. So...
Sure, some of these models will become better and they will, instead of winning a silver medal in the math competition, they may win a gold medal in the math competition, right? Okay, that may happen. Okay, and that is impressive. That is as impressive as when DeepMind's Go engine beat the world champion in Go, right? That was also very impressive and kind of unexpected at the time for experts.
But you talked about the energy use, right? When I hear this, I've actually read some interesting articles that predict exponential energy use by these systems and exponential improvements and all this stuff. And I always tell people, guys, I'm an engineer myself.
Nothing is exponential forever; things are exponential until they're not. We had this discussion this morning at the coffee chat: people said it's either a sigmoid, or it's a bubble and then it stops, right? And when you think about energy use or cost in that sense,
when people run out of money, they will stop training expensive models, right? That's how it's going to be, because money isn't only for training models, and the same with energy. What I do expect is that many of the models and algorithms will become more efficient, not necessarily only on the hardware side, but also on the software side.
I use humans as a good baseline, right? I mean, we have
a lot of neurons in our head and we use very little energy to make quite complex decisions and learn a lot. That is where I want things to go. I'm not sure we will achieve that in the next 48 months or whatever, but that should be our goal, right? We cannot make these models bigger and use more energy and just expect everything will keep going that way, right?
It's just unlikely. So the other thing is data quality. A lot of the data that these models are using is not of high quality, right? And
Maybe we can build filters that separate the good from the bad; people have tried this many times already, of course. But it is actually quite hard to do. We tried this for Translate as well; it's very hard to get right. So, overall, I expect things to improve, but not change our lives forever,
you know, not that much, right? There's going to be continuous improvement that we will just accept, similar to the way we accept speech recognition today, which we didn't have 15 years ago. We expect Translate to work. We expect image recognition to basically work, and so on, right? We didn't have all these things, and today we think it's totally normal. Sometimes I give people the example of a calculator:
imagine you had told somebody 100 years ago, there's going to be this machine, you can type in a bunch of numbers and multiply them, and the machine instantly gives you the answer. If you had told my grandma that, she would have said this is crazy, right? Today, it's absolutely normal. And I think this is just what we have to accept,
that especially younger people, you know, my kids grew up with a lot of technology that they find absolutely normal. There's nothing special about it. And I think this will keep going, right? We will accept AI in many areas as just normal.
I don't really see the outcomes that are often portrayed in the movies; they're often bad, right? Often extremes. And I just don't think that's going to happen. That's encouraging. Mike, we're up against time, but I'm not letting you off the hot seat without answering one last important question for me. So roll back the clock about eight and a half years: you and a small team,
in the Googleplex, not far from where I'm sitting right now in Mountain View, have this breakthrough: that using neural techniques instead of statistical techniques could lead to more rapid improvements in machine translation. Can you just take us back to those days and those early conversations? What was the aha moment?
Yeah, so this is an interesting question. First of all, just a little bit about the history, because I was at Google and in tech for so long, right? I had seen what happened in image recognition, in speech recognition a little bit, and games came a little bit later, but...
So at the time, some important papers came out, right? And some of the people who were working on these papers, Ilya Sutskever and all these guys, were sitting right next to me. There were especially two papers that were interesting: the sequence-to-sequence paper, from the people sitting right next to me, and the attention paper, which wasn't from us but from somewhere else; it came out in 2014. And, you know, there was a
very simple prototype on a very small database that showed good translation and had a lot of problems. People said, oh, this is great, and other companies knew about the same thing. The problem was, how can we make this work on these billions of examples that we have at Google, right? That was really the challenge. And this is something that
a team of three of us really focused on. We had a hundred things to solve that were in a way impossible, right? But eventually, in 2015, we had the model running on a hundred GPUs at once. And I remember trying it out. I went to the Translate team; they were really suspicious about this. And I said, please give us the data, we want to
try. And they said to me, well, what data do you want? I said, well, give me English to French or something like that, you know, something typical. And they go, English to French? Well, we have such a good system, you're never going to beat that, you know, so...
And we tried it out and it was immediately so much better that, you know, it was just, oh my God, this is going to be interesting, right? And it took a long time. We worked with the whole Translate team, which was, I don't know, 20 or 30 people, plus 30 people in production, to eventually make this run on six languages. And later,
together with the Translate team, on 103 languages, and that then ran on 20,000 servers in many data centers around the world, right? In the beginning it was really, really slow, 20 times slower than it should have been. Then we had a hardware development, the first TPUs, which we were basically using for that, and which in fact had hardware bugs and stuff like that on them that we had to work around. It was just a really, really interesting time, right? And
It felt great to work on something that is non-controversial, in a way, right? And I remember at some point, I think the Translate team made the decision to move from all the old stuff to the new stuff very quickly, after I had actually convinced the team lead to try out the recipe
with our tools, and he made it work for some obscure language pair, I remember, like Gaelic to English or something like that. And then he believed it, and then things started moving, right? It was a really interesting time.
It sends chills down my spine just thinking about the impact that you and that small team had. Well, again, I believe many other companies thought about the same thing at the same time, right? Microsoft, companies in Germany. But there were just so many things to get right at the same time, right? Another example is maybe the tokens that everybody talks about today.
We had used them in 2008, because we had written a system to make them work, to actually make Japanese and Korean work, right? There were so many different alphabets and symbols and stuff, and they were mixed. That's where we used these things first. And then, for translation,
we first had a 160,000-word vocabulary, and then we changed it to these tokens, because we had success with them in speech recognition, right? And again, it immediately changed the whole thing: it made things much better, much faster to train. And that's how that happened, right? It was an interesting time. Fascinating.
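To make the token idea concrete, here is a toy byte-pair-style subword learner in Python. It is an illustrative sketch of the general technique, not the wordpiece system Schuster's team actually built.

```python
import re
from collections import Counter

def most_frequent_pair(vocab):
    """Count adjacent symbol pairs across a corpus of {word: frequency}."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get) if pairs else None

def merge_pair(pair, vocab):
    """Merge every standalone occurrence of the pair into one symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    merged = "".join(pair)
    return {pattern.sub(merged, word): freq for word, freq in vocab.items()}

# Toy corpus: words pre-split into characters, with frequencies.
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for _ in range(10):  # the number of merges is a tunable knob
    pair = most_frequent_pair(vocab)
    if pair is None:
        break
    vocab = merge_pair(pair, vocab)
print(vocab)  # frequent fragments like "est" become single tokens
```

The payoff, as Schuster describes, is that a fixed, modest inventory of subword tokens can cover mixed alphabets and rare words that a 160,000-word vocabulary cannot.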
The fact is, over half the internet is in English, but far less than half the world's population speaks English. So when I said you and a small team changed the world, that wasn't an exaggeration. There's a great retelling of this story. You tell it better, but Gideon Lewis-Kraus, at the end of 2016, published an article in the New York Times with a great, in-depth version of this story. Right.
I remember talking to him for a long time, you know, every week, basically. He spent a lot of time with you and the team. Yeah, yeah, yeah. He really got to know you. Yes, he spent a lot of time with us. That's right. Quite an essay. Yes. But I prefer your version, Mike. Okay. Good. Well, Mike, thanks for hanging out. I really enjoyed this one. I've been looking forward to it and you didn't disappoint.
Okay. Thank you very much. Happy to be on your podcast. Excellent. Well, that's the great Mike Schuster from Two Sigma, formerly Google. And gosh, that's all the time we have for this week on AI and the future of work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.