Pushkin. Sometimes getting better is harder than getting sick. Waiting on hold for an appointment, standing in line at the pharmacy, the whole healthcare system can feel like a headache. Amazon One Medical and Amazon Pharmacy are changing that.
Get convenient virtual care 24-7 with Amazon One Medical and have your prescriptions delivered right to your door with Amazon Pharmacy. No more lines, no more hassles. Just affordable, fast care. Thanks to Amazon Pharmacy and Amazon One Medical, health care just got less painful. Learn more at health.amazon.com.
Hey, Happy New Year. We are very happy to be back. And I have one request before we start the show. I'm asking you a favor. And the favor is this. Would you please send us an email to problem at pushkin.fm and tell us...
what you like about the show and what you don't like about the show, and specifically, what kinds of things you want to hear more of and perhaps what kinds of things you don't want to hear. Again, it's problem at pushkin.fm. I'm going to read all the emails, so thank you in advance for sending them.
Claude Shannon is this huge figure in the history of technology. He's one of the key people who worked at Bell Labs in the middle of the 20th century and really came up with the ideas that made modern technology possible. But I'm going to be honest with you. I never really understood what Claude Shannon figured out that was such a big deal. But the people who know about technology, who know about the history of ideas, they say Shannon's a giant.
Claude Shannon is like the nerd's nerd. He's the techno-intellectual's techno-intellectual. And so for today's show, I wanted to understand what did Claude Shannon figure out and why is it so important for the modern world? I'm Jacob Goldstein, and this is What's Your Problem? My guest today is David Tse.
David is a professor of electrical engineering at Stanford. He has studied Shannon for decades. He teaches Shannon's work to his students. And David used Shannon's work to make a breakthrough in cell phone technology. And that breakthrough, that breakthrough that came to us via Shannon and Tse, it affects every phone call we make.
David and I talked about Shannon's key insights and about how David's own work built on Shannon. And we also talked about the big chunk of Shannon's life that was taken up with juggling and riding unicycles and building mechanical toys. But to start, we talked about how in the middle of the 20th century, Bell Labs wound up driving so much technological innovation. Yeah, so Bell Labs was the research lab of AT&T. Mm-hmm.
AT&T at that time was the phone company. Nowadays, we have many phone companies. We have Verizon, we have T-Mobile, etc. But in those days, there was only one phone company, and that's a monopoly. So a monopoly needs to justify its existence, so it doesn't get broken up by the government. It doesn't get broken up. Of course, it eventually got broken up. But at that time, it was a monopoly. And so one way of justifying
its existence is to say, okay, it says to the American people, to the government, that we will always spend a certain percent of our revenue on this research lab called Bell Labs. And whatever Bell Labs comes up with is kind of our contribution, not only to our bottom line, but also to the technology of the country. So they have this sort of public mission to prevent the government from breaking them up.
Yeah. And so therefore, it also allows researchers a very free rein to do research that's not necessarily tied to, like, say, a particular business unit. Okay? So they can be very creative. And that's the atmosphere of Bell Labs. So Bell Labs attracted a bunch of very smart people because smart people want to work on their own problem, not the problem that the manager gives them. Yeah.
That's one characteristic of smart people. And so, yeah, that was the heyday of Bell Labs. Lots of smart people inventing amazing stuff. The laser was invented there. Information theory. The transistor was invented there. Sort of almost all the foundations of the information age. Yeah. Whether it's hardware, algorithms, software.
in some sense, all have their roots at Bell Labs. So that was a contribution to mankind, actually, I should say, not only to America. So Shannon gets there at this time, right? He's there with, you know, when they're inventing certainly the transistor. What's he do? Tell me about his work there when he gets there. What's he working on? Yeah, so I think Shannon always had his own agenda. Uh-huh.
We know for a fact that he had been interested in the problem of communication, that idea of having a grand theory of communication, even back in 1938, I think '37, '38, because he wrote a letter at that time to a very famous person named Vannevar Bush. Vannevar Bush is very famous. He was, I think, president of MIT or dean of MIT, and then he became sort of a scientific advisor to the president.
And so he wrote a letter to Vannevar Bush in 1938 and said, hey, you know what? I'm really interested in this question of how to find one theory that unifies all possible communication systems. There's so many different communication systems out there, but I think there's something at the heart of every system. And I'm trying to get to the heart. And like nobody had thought of it in that way, right? It seems like part of his...
Part of why Shannon is such a big deal is like, as I understand it, people understood they were trying to figure out how to make the phone work better and they were trying to make movies be clearer or whatever, but there wasn't this idea that you could abstract it until Shannon came along. And the reason is very simple, actually, because if you have a physical system that you want to build, right? What do you see, right? You say, hey, take video, for example. I'm seeing you right now.
I'm not seeing you very clearly, I have to say. Yes.
I'm in a closet. I'm in a closet. Right. Then I would say, hey, how do I try to improve the image? Maybe I can try to fix this pixel or do some filtering of your noise. So I'm very tied to the very specific details of the specific problem, because, why? I'm the engineer. I need to improve the system, not in 10 years, but tomorrow. You don't need a theory of the system. You just want a clearer picture. Yeah.
Yeah, I'm in the weeds, right? I'm in the weeds. And Shannon, because of his training and also because of the atmosphere of a place like Bell Labs, could afford to step back and just look at the broader forest as opposed to the details of specific trees.
So, okay, Shannon's big idea comes out in this paper he publishes in 1948. The paper is called A Mathematical Theory of Communication. It's like his great work. Tell me about that paper. So that paper is actually a very interesting paper. In fact, when I teach information theory—
I teach from the paper itself because I thought it's an amazing way not only of learning information theory, but of learning how to write a scientific paper properly. And you know, not everyone does research in information theory, but everybody has to write. Every researcher has to express their ideas to their peers and to the audience. So that paper is very interesting. The first paragraph of the paper is already very interesting.
Because typically when people write a paper nowadays, they tell you, oh, how great my invention is. It's going to change the world. Every paper is going to change the world. But in fact, his first paragraph focused on telling you what his paper is not achieving. I mean, that's a master class. That's a master class.
Right? I mean, how many papers that you read nowadays tell you in the beginning, hey, you know what, guys? Expectation management here. This paper is not about this, not about this, not about that. Don't get your hopes up. Hey, don't get your hopes up. Yeah. Exactly. That's exactly what he did. Expectation management. Okay. Today, we would call it expectation management. Yeah.
And in those days, I guess, he just calls it honesty. And his whole point was, often people associate information with meaning. And then he said, in this paper, we ignore meaning. We ignore meaning. So that was the first thing he did.
Which is brilliant, because once you tie information to meaning, then you will never be able to make any progress. It's just too difficult and too broad and too vague a problem. Everybody gets stuck on this idea of meaning and what is meaning. And he's like, forget about meaning. So if we're going to forget about meaning, what is left?
Yeah, so actually the biggest, I think, breakthrough of that paper is to really focus on the thing that matters and cut away a lot of stuff that really doesn't, not that it doesn't matter, but it doesn't matter in terms of solving the communication problem. The communication, so then he said, okay, what is the communication problem? The communication problem is the following, is that there are multiple possibilities of a word
And my goal is to tell the receiver, destination, which of the multiple possibilities is the correct possibility. Yeah. And so in language, it's basically, it's a finite set. Language is a finite set. It's very large. But if we're speaking, and we both know that we're speaking English, then essentially you are hearing the words and decoding them. And you know that it is a series of words, and you just have to figure out which words. Right.
I mean, like that, for example? Yes, like that. Okay. So that's the frame he builds. Then what? Okay. All right. Then once you have this framing, right, then you can ask the question, okay, what is the goal of communication? The goal of communication is to communicate as fast as I can, right? And the natural question is, why is there a limit?
on how fast I can communicate to you. Because if there's no limit, then amazing world, right? We can communicate so fast. It's like instant telepathy. It's like you instantly beam me every thought in your head. Yeah, okay. Exactly. The natural question to ask, once you set up this finite set, as you mentioned, is, okay, given these finite sets, is there a limit on how fast I can communicate to you? Uh-huh.
And so that was the question at the heart of the paper. So he formulated this notion of a capacity: the communication system is like a pipe. It's like you're pushing water through this pipe, and the size of the pipe limits how fast you can push water through it. And analogously in communication, there's this notion of a size of a pipe, which is called a capacity. Uh-huh.
And he figured out a way of computing this capacity for different communication media.
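The best-known instance of that computation is the Shannon-Hartley formula for a band-limited noisy channel, C = B log2(1 + SNR). Here is a minimal Python sketch; the 3 kHz bandwidth and 30 dB signal-to-noise ratio are illustrative numbers for an old voice telephone line, not figures from the conversation:

```python
from math import log2

def awgn_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity of a band-limited Gaussian channel, in bits/s."""
    return bandwidth_hz * log2(1 + snr_linear)

# A classic illustration: a voice phone line with about 3 kHz of bandwidth
# and a signal-to-noise ratio of 30 dB (a factor of 1000).
snr = 10 ** (30 / 10)          # 30 dB converted to a linear ratio of 1000
c = awgn_capacity(3000, snr)
print(f"capacity is about {c:.0f} bits per second")  # roughly 30,000 bits/s
```

No matter how clever the modem, no scheme can push information through that channel faster than this number; that is the size of the pipe.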
Any communication medium, you can actually compute a capacity for that communication medium, and that limits how fast you can communicate information over that medium. Whether that medium is wireless over the air or over the wire line, like I'm talking to you: I communicate over the air, I talk to my wifi, the wifi goes through some copper cables, some optical fiber. All these are physical media, but you can compute
a capacity for each of these different media. And I know that part of the paper looks at, say, redundancy in various modes of communication. And on a related note, patterns, right? There's this whole section of the paper where he looks at the frequency with which letters occur in English and kind of builds an idea around that. Tell me about those pieces of the paper.
Yeah, so let's start with the word redundancy. Yeah, was that the wrong word? Was that the right word? No, no, no, no, no, no. That's not the wrong word, but it's actually the most important word, I would say. Okay. Almost. Because if you go back to the question, to the thing I was talking about, which is how fast you can communicate, right? Yeah. So what he discovered was actually there's no limit on how fast you can communicate. You can always communicate very fast.
But what the other guy hears is gibberish, and he cannot really distinguish what you're trying to say. It's like so much noise in the system that he cannot really figure out what you're saying. Even if you're face-to-face, right? Even if you're face-to-face, you're not going over the phone or whatever. If you talk too fast, the listener won't understand because you're going too fast. Yeah, and anybody who goes to a crazy professor's lecture would know about this. When a professor just keeps on talking, it's...
At a million miles per hour. Yeah. And the student just sits there and nobody understood a thing. Yeah. And the professor calls it a day when it's finished. Yeah. So basically what he's saying is that, hey, you know what? To make sure that the information goes through reliably, reliably, that's the first word. Yeah. You need to introduce redundancy, redundancy in your message. Okay. Yeah.
And what he figured out is in some sense the optimal way of adding redundancy. Because you know, you can always be stupid in adding redundancy. For example, I can keep on repeating the same word 100 times to you. And then you'll probably get it. And then I move on to the next word. But that would make me 100 times slower. Right? Right.
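That naive strategy, repeat everything many times and take a vote, is easy to simulate. A toy Python sketch, with all parameters invented for illustration: each bit passes through a channel that flips it 20% of the time, and repetition plus majority vote buys reliability at the cost of collapsing the rate to 1/n:

```python
import random

def send_with_repetition(bits, n, flip_prob, rng):
    """Encode each bit by repeating it n times, pass every copy through a
    channel that flips it with probability flip_prob, then majority-decode."""
    decoded = []
    for b in bits:
        copies = [b ^ (rng.random() < flip_prob) for _ in range(n)]
        decoded.append(1 if sum(copies) > n / 2 else 0)
    return decoded

rng = random.Random(0)
message = [rng.randint(0, 1) for _ in range(1000)]
for n in (1, 11, 101):
    out = send_with_repetition(message, n, flip_prob=0.2, rng=rng)
    errors = sum(a != b for a, b in zip(message, out))
    print(f"repeat x{n:>3}: {errors} bit errors, but the rate is only 1/{n}")
```

Errors vanish as n grows, but throughput vanishes with them; Shannon's theorem says you can get the reliability without paying that price, as long as you stay below capacity.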
And so that's not a very smart way of adding redundancy. So what he figured out is an optimal way of adding redundancy so that you can communicate reliably and yet at the maximum, what he calls, capacity limit. And that was a totally amazing formulation of the problem, actually, and highly non-obvious. And I think that is sort of the
amazing contribution of this guy, Shannon. It's optimization. He optimizes communication across any channel where you're balancing efficiency, or speed, and reliability. That is the trade-off. And he figures out how to optimize for that trade-off. Yes, yes. He figured out how to optimize that trade-off, but that trade-off turns out to be
Very interesting. It's a very interesting trade-off. So typically when we think about trade-off, we think about like a smooth curve, right? As when you tune something, then you can get better performance. But what he showed was that there's kind of like a cliff effect. Okay. And the cliff effect is that if you communicate below this number called capacity, then you can always engineer a system to make
the communication as reliable as you want. So reliable, it's completely clean. Wow. Whereas if you communicate above this number capacity, then there's nothing you can do to make your signal clean. It's just completely gibberish. So it's a very sharp trade-off that he identified. It's not a smooth trade-off.
And if you're running the phone company, that's exactly what you want to know, right? So then you can tune it all the way to capacity and then not try to tune it anymore after that, because it's not going to get any better. Correct. And that was the work of 60 years of engineering, to achieve his vision. His vision in 1948: it took people around 60 years to get there, to implement his vision.
Well, so you are part of that story, right? Let's walk you into the story now. So tell me about your work, and how you built on Shannon's work. Yeah, so I did my PhD in the 90s, okay, in the 90s. My advisor was a Shannon student. And so I learned information theory from him, okay? Now, at that time,
Information theory was almost a dead subject, okay? When I was a PhD student, the first thing my advisor told me, maybe he's following Shannon, is, hey, don't work in information theory. Wow. You'll never find a job with this stuff, okay? That's a tough moment. That must be a tough moment for you. It's pretty tough, yeah. Because at that time, there's not much progress made in the theory, right?
And there's no killer application either. There's no real killer application that needs all this sophisticated information theory. Okay? So it was like a dead field. Was there a while when people used it to, like, whatever, make landline phones work better? Like in the 50s or something? Were people like, oh, great, now we've got this theory and we can make the phone work better? Yeah. So the thing is that
The solutions that people came up with to achieve these capacity limits are very complicated. And the electronics, the technology, were just not enough to build these complicated circuits. So information theory did not have a very significant impact in the 50s, 60s, or even 70s. So it's like one of those cases where the theory is just too far ahead of the technology to be useful. Yeah.
And so people kind of start losing interest in the theory. It's like, oh, this is a bunch of math. It's not impacting the real world. And so students are drifting away from the field. But there are still always a few students, okay, who are just so enamored by the theory that they keep on pursuing it. And my advisor is one of the leading professors in this area. And he would have like one student every decade, every decade.
to do research in information theory. - You were that student? You were that student? - And I was not that student. - Oh, okay. - And I was not that student, okay? At that time, that slot was already taken by an earlier student. - Okay. - Who was way smarter than me. - Okay. - Who was way smarter than me. And that's it. He was the student of the decade in information theory, okay? Now, so I was assigned to work on some other problems, okay? Completely unrelated, okay? But anyway, the point though is that when I graduated, something happened, okay?
And that was the beginning of the wireless revolution. That was the time when only a million people have cell phones. And those cell phones, I don't even remember, it's like gigantically big. Yeah, like there's that famous scene from the movie Wall Street, right? That's the one that everybody talks about, where it's like bigger than a brick. People say brick, but it's actually bigger than a brick. It's like a big hardback book or something.
Yeah, and actually in those days, because there were so few of these phones, it was like a prestige thing. It was a prestige thing to have this brick. Yeah. Okay. Yeah. You couldn't get that brick. You had to be rich to get that brick. Yeah. Yeah. And so the wireless revolution was happening because people realized that, hey, you know what? Being able to communicate anytime, anywhere is really valuable. Yeah. And so people are now getting interested. And at that time, what people realized is that, whoa, this wireless
physical medium is really tough to communicate over, because the bandwidth is so limited and the noise is so much, right? The FCC was limiting the bandwidth allocation to these applications a lot. Uh-huh.
- The Federal Communications Commission. The government wasn't letting wireless companies use much bandwidth for cell phones. - Yeah, because all the bandwidth, most of them are allocated for military purposes. And there's only very little bandwidth allocated at that time for civilians. And so those bandwidth were auctioned out to companies with a very high price. And so it became very important to be very efficient in using this very expensive property. And then people realized, hey, if we want to be really efficient,
Then we need a theory which is about efficiency. So people start thinking, okay, all right, so information theory was dead, but now it's going to come back to life because we have this really important problem, really expensive spectrum that was allocated by FCC, and we want to squeeze as much out of it as possible. As much communication. We need a sort of mathematical theory of communication, if you will.
And that was the renaissance of information theory spurred by this amazing technology of wireless, which took us from 1 million phones to 10 billion phones today. Everybody has 1.1 phones. And information theory played a big role in that revolution.
In a minute, how David used Claude Shannon's 1948 paper to come up with an idea that we all use every time we make a phone call.
Hey, it's Jacob. I'm here with Rachel Botsman. Rachel lectures on trust at Oxford University, and she is the author of a new Pushkin audiobook called How to Trust and Be Trusted. Hi, Rachel. Hi, Jacob. Rachel Botsman, tell me three things I need to know about trust. Number one, do not mistake
confidence for competence. Big trust mistake. So when people are making trust decisions, they often look for confidence versus competence. Number two, transparency doesn't equal more trust. Big myth and misconception. And a real problem actually in the tech world. The reason why is because trust is a confident relationship with the unknown. So what are you doing if you make things more transparent?
You're reducing the need for trust. And number three, become a stellar expectation setter. Inconsistency with expectations really damages trust. I love it. Say the name of the book again and why everybody should listen to it. So it's called How to Trust and Be Trusted. Intentionally, it's a two-way title because we have to give trust and we have to earn trust.
And the reason why I wrote it is because we often hear about how trust is in a state of crisis or how it's in a state of decline. But there's lots of things that you can do to improve trust in your own lives, to improve trust in your teams, trusting yourself to take more risks, or even making smarter trust decisions. Rachel Botsman, the new audiobook is called How to Trust and Be Trusted. Great to talk with you.
It's so good to talk with you and I really hope listeners listen to it because it can change people's lives.
Let's talk for a moment about your role, right? Like you actually played an important role there. Yeah. So I was at Bell Labs. Uh-huh. Just like Claude. Just like Claude Shannon. Yeah. So I spent one year at Bell Labs as a so-called postdoc right after my PhD, before I moved to Berkeley to become a professor there. I spent one year there. And that's what people were talking about at that time at Bell Labs. Hey, this new thing, wireless technology.
information theory is coming back to life. We can try to use information theory and adapt it and extend it to this wireless communication problem.
And so that's when I said, whoa, this information theory I learned from Bob Gallager. Finally, there's a place to use it. Finally, I can actually make a living. Make a living out of it. Unlike what my advisor told me, it's not dead. It's coming back to life. And so that's sort of my start in the field. And yeah, so I did, I, you know, invented a bunch of stuff.
and actually connected this information theory to the real world. And every time you use a phone, you're using my algorithm, which is based on the theory of information. Huh. And so that's a cool thing to be able to say, first of all. That's a very good flex. Your algorithm, it's the proportional fair scheduling algorithm, right? Yes, yes. What is that? What's it do?
All right. So I should tell you a little bit of a story first, and then I'll tell you what it does. Okay. So that was the end of 1999, around 1999. I was doing all this information theory stuff at Berkeley, writing many papers. But then I always had a thought in the back of my mind, which is, hey, is this stuff going to be useful?
And so I went to, I decided to go to a company, a wireless company that actually built these things, to see whether this theory could be used. And the company I went to is called Qualcomm. Okay. I've heard of Qualcomm. You've heard of Qualcomm, but at that time it was a small company. It was not very big. Okay. And at that time they had this problem they were working on. Okay. Which is the following. All right. So in wireless communication, there's a concept called a base station. Okay. And the base station serves many cell phones
in the vicinity of the base station. It's called a cell. - Is it like a tower? Is it what we would call it? - Yeah, it's like a tower. That's right. It's always on the tower. There's some electronics there. And that cell, that base station, is supposed to beam information to many phones. - You still see them. You see them when, whatever, on top of a big building or when you're driving down the freeway, right? That's what you're talking about, yeah. - That's right. And sometimes on fake trees. - Yeah, I love the fake trees. In New Jersey, they love the fake trees, yeah.
New Jersey, that's right. New Jersey fake trees. Yes. So at that time, they were looking at this problem, which is, hey, okay, my bandwidth is limited, but I have many users to serve. Yeah. Okay. How do I schedule my limited resource among all these users? Right? Because I only have one total bandwidth. And so at that time, people were saying, okay, maybe something simple.
I give an equal slice of the time to each user, right? So say there are five users: I serve this user for a little bit, and I serve the second user for a little bit, and third user, fourth user. And the idea is you're switching really fast. You're just like switching kind of. Switching really fast. Yeah. Exactly. And then when I went there, I said, okay, good. This is a problem. It's a good problem. And I said, hey, instead of fixating on this particular scheduling policy,
Why don't we do a Shannon thing? A Claude Shannon thing? You thought of, you thought of, yeah, okay. The Claude Shannon thing is what? Is to look at the problem from first principle, not pre-assume a particular solution or particular class of solution even, and ask ourselves, what is the capacity of this whole system?
And how do I engineer the system to achieve that capacity? Okay. And it turns out that if you look at the problem this way, then it turns out that the optimal way of scheduling is not the one that they were trying to design. And the reason is because in wireless communication, there's a very interesting characteristic, which is called fading. Okay. When I talk to you over the air,
the channel actually goes up and down, strong and weak, strong and weak, very rapidly. What I mean is when I send an electromagnetic signal from the base station to the phone, that signal gets amplified and attenuated very rapidly. It goes up and down. Can we say it gets stronger and weaker? Stronger and weaker. Okay, yes.
And so the optimal thing that information theory tells you to do is actually not to divide the time into slots blindly, but really try to schedule a user when the channel is strong. And then from that, I designed a scheduling algorithm, which is more practical,
by sort of leveraging this basic idea from information theory. And so the base station is basically monitoring the strength of the incoming signals from all the different phones. Correct. And saying, oh, that one's strong. I'm going to grab that one. Oh, that one's strong. I'm going to grab that one. That's what's happening. Correct. Correct. And how does that... I mean, I get in a kind of big first principles way, sort of analogously it follows from Shannon, but is there anything sort of specific in...
in Shannon that leads you to this algorithm? - So remember, Shannon is a very general theory. - Yeah. - Okay? It basically says that given any communication medium or any communication setting,
you can try to calculate this notion of a capacity. So it's a very general theory. What I did was to apply it to a very specific context, which is this base station serving multiple user settings, and then apply his framework to analyze the capacity of that system. And in the process of analyzing the capacity, you can also figure out what is the optimal way
of achieving that capacity. Remember you mentioned capacity is really an optimization problem. And Shannon was able to solve this optimization problem in general, but now I specialize it in some sense to this pretty specific setting, except that this setting is used by everybody. But at that time, it was like, you know, research is about timing. And I was there at the right place at the right time because
Qualcomm turned out to completely dominate the entire third generation technology. Yeah, 3G. So when I was able to convince them that, hey, your way of doing things is no good, this way suggested by Shannon is actually far better, please use this way. It took me a few months, but I was able to persuade them to implement it. And then it got into the standard through that domination. And then every standard after that uses basically the same algorithm.
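The idea described above, serve whichever user's channel is momentarily strong relative to the service that user has been getting, is the core of proportional fair scheduling. Here is a minimal sketch, not Qualcomm's production implementation; the fading model and the smoothing constant alpha are made-up numbers for illustration:

```python
import random

def proportional_fair_schedule(rates_over_time, alpha=0.1):
    """Each slot, serve the user with the highest ratio of instantaneous
    rate to its own running-average throughput, then update the averages."""
    n_users = len(rates_over_time[0])
    avg = [1e-6] * n_users          # small positive seed avoids division by zero
    chosen = []
    for rates in rates_over_time:
        user = max(range(n_users), key=lambda u: rates[u] / avg[u])
        chosen.append(user)
        for u in range(n_users):
            served = rates[u] if u == user else 0.0
            avg[u] = (1 - alpha) * avg[u] + alpha * served
    return chosen

# Two users whose channels fade randomly: the scheduler rides each user's
# peaks while keeping long-run airtime roughly balanced between them.
rng = random.Random(1)
slots = [[rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.0)] for _ in range(2000)]
picks = proportional_fair_schedule(slots)
share = picks.count(0) / len(picks)
print(f"user 0 got {share:.0%} of the slots")
```

Dividing by the running average is what keeps a user with a permanently weak channel from starving: a weak user's average drops until even its modest peaks win the ratio test.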
So it was good because as I said, I'm at the right place at the right time. When you try to contribute to engineering, it's too late if the system is built already because people don't want to change the whole system to accommodate your new idea. But it was very early in the design phase. So you made this breakthrough in wireless communications using Shannon's work. Were there similar breakthroughs in other domains?
Any communication medium, right? It could be optical fiber. It could be a DSL modem. Underwater communication. Almost all these communication systems are now designed based on his principle. So the impact of this theory is kind of global. It's the entire communication landscape. There's a story
I read about Shannon, that when he was developing information theory, he takes a book off the shelf and he reads a sentence to, it's actually his wife, and it's something like, the lamp was sitting on the, and she says, table. And he says, no, I'll give you a clue. The first letter is D. And she says, desk.
And when I heard that story, what I thought of was large language models. Like, that sounds exactly like a large language model. And so I'm just fishing. I'm just curious. Like, does his work matter for machine learning, large language models, et cetera, or no? Yeah. So that's a very interesting point. Now, I'm not an expert by any means in AI or large language models. Yeah.
I'm not a professional researcher in that area. But I think you can actually see some commonality, right? Is that, you know, these models, in some sense, they don't care about meaning either. Yeah, very good. Very good, yeah. Right? Actually, it just came to my mind, this discussion is very interesting. Because it's really just patterns.
It's just which patterns are more likely than other patterns, right? The example you gave about DESK and LAMP is basically about patterns. And information theory is really analyzing sort of the number of possible patterns in some sense. So there is definitely a philosophical connection, I believe, starting from Shannon to these large language models.
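The guessing game is really a measurement of entropy: the more predictable the next letter, the fewer bits it truly carries. Shannon's own guessing experiments famously put English at roughly one bit per letter. A toy computation, with distributions made up for illustration:

```python
from math import log2

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Four equally likely symbols carry log2(4) = 2 bits each...
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))          # 2.0
# ...but a skewed distribution carries less: patterns mean predictability,
# and predictability means fewer bits are truly needed per symbol.
print(round(entropy_bits([0.7, 0.1, 0.1, 0.1]), 3))    # 1.357
```

That gap between the uniform and the skewed case is exactly the redundancy Shannon measured in English, and it is the same statistical structure a language model exploits when it predicts the next token.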
So let me ask you about one other, and this is one that you are professionally involved in. Cryptocurrency and blockchain. You have studied it and you started a company, right? Is there a connection between Shannon's work and cryptocurrency? Yeah. So what attracts me to work in this area of blockchain is that
Blockchain actually has one very common philosophical connection to information theory, which is the following. In blockchain, the problem is not communication per se. It's called consensus. It's a different problem, but it essentially allows a bunch of users at different places to come to an agreement on something. Okay?
Yes. Now, the goal of designing a blockchain is really to be so-called fault tolerant. Fault tolerant. Which means that even if, say, one-third of the users are bad guys and send you some gibberish messages, the remaining two-thirds can still come to an agreement. Okay. All right.
So you look at this problem, it's actually not that different from communication, information theory, because it's kind of combating. The bad guys are the noise, and the good guys are the signal. And the good guys are the signal, and they try to introduce redundancy to help them to fight against these bad guys. Yes. And there's an optimization problem where the more redundancy you have, the slower the system is, the more ponderous. And so you try, the optimization problem is to try to figure out
what is the maximum number of bad guys that you can tolerate while your system still works. That is analogous to the capacity problem. So I find the philosophical connection very appealing. And that's sort of one reason why I got attracted to work in this area. Why do you think more people don't know about Shannon? Like, all of the sort of intellectuals in technology say he's like...
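The "one third of the users are bad guys" figure he cites is the classic Byzantine fault tolerance bound. As a minimal sketch (my illustration, not anything from the episode) of the arithmetic:

```python
def max_byzantine_faults(n):
    # Classic consensus result: n users can reach agreement only if
    # the number of faulty ones f satisfies n >= 3f + 1, i.e.
    # strictly fewer than one third of the users are bad.
    return (n - 1) // 3

def quorum_size(n):
    # How many matching votes to collect so that any two quorums
    # overlap in at least one honest user, which forces agreement.
    return n - max_byzantine_faults(n)
```

With 4 users you can tolerate 1 traitor and need 3 matching votes; with 100 users you can tolerate 33. Pushing the tolerable fraction of bad guys up to that limit, at the least cost in redundancy, is the capacity-style optimization he describes.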
one of the great thinkers of the 20th century. But most people have never heard of him. Why do you think that is? So Shannon was actually a very shy person. Very shy person. He hated publicity. He hated when people interviewed him. You remember, right? He was basically a very modest person. Remember the first paragraph I told you about? Yeah. He tells you what he did not accomplish. Yeah.
He's a very modest, very shy person, not into publicity. And I think that sort of impacted not only him, but also everybody who works in that field. Uh-huh.
And they adopt this as kind of like a norm, right? That, hey, we should all be modest because, boy, look at this guy who accomplished so much and he's still so modest. Who are we? Who are we, right? So as a result, the field doesn't really sell itself very well. The marketing engine, the marketing DNA is not there. And so people don't know about it.
So I want to talk for a minute about the rest of Shannon's life. He writes this huge paper when he's in his early 30s, eventually goes on to be a professor at MIT. And he seems to spend a lot of his career juggling, riding a unicycle, building mechanical toys, building games. And he never, you know, does sort of great influential work again, right?
And I'm curious, you know, what do you make of that? How do you sort of fit his whole career together? So there's a theme that unifies all this in my mind, which is playfulness. Because in his mind, research is really about puzzles. He doesn't understand something. It's like a puzzle to him. And he's trying to figure out the pieces of the puzzle. Information theory was like that.
The puzzles, he sees all these real-world systems. They seem to all share some commonality, but nobody understood it. So there's a puzzle and he's always thinking about the puzzle. And finally, his paper basically solved that puzzle. So everything to him is playfulness. I think it's playing, it's a game. A puzzle, he needs to solve the puzzle. And that's his mind. That's how his mind works. So although the things he did pre- and post-information theory seem very different, in my mind there's actually quite a strong commonality.
We'll be back in a minute with The Lightning Round.
Hey, it's Jacob. I'm here with Rachel Botsman. Rachel lectures on trust at Oxford University, and she is the author of a new Pushkin audiobook called How to Trust and Be Trusted. Hi, Rachel. Hi, Jacob. Rachel Botsman, tell me three things I need to know about trust. Number one, do not mistake confidence for competence. Big trust mistake. So when people are making trust decisions, they often look for confidence versus competence. Number two, transparency doesn't equal more trust.
Big myth and misconception and a real problem actually in the tech world. The reason why is because trust is a confident relationship with the unknown. So what are you doing? If you make things more transparent, you're reducing the need for trust. And number three, become a stellar expectation setter. Inconsistency with expectations really damages trust.
I love it. Say the name of the book again and why everybody should listen to it. So it's called How to Trust and Be Trusted. Intentionally, it's a two-way title because we have to give trust and we have to earn trust.
And the reason why I wrote it is because we often hear about how trust is in a state of crisis or how it's in a state of decline. But there's lots of things that you can do to improve trust in your own lives, to improve trust in your teams, trusting yourself to take more risks, or even making smarter trust decisions. Rachel Botsman, the new audio book is called How to Trust and Be Trusted. Great to talk with you.
It's so good to talk with you. And I really hope listeners listen to it because it can change people's lives.

So I read that you recently asked people at your company to give five-minute talks. I'm curious why you did that. That's interesting to me. Why did you do that? So the shorter the talk, the harder it is to give. Yeah. So if you can't explain an idea in five minutes, then I think your idea is actually not very good. Aha.
That's good. Most good ideas, you can get the point across in five minutes. Remember, I'm an information theorist by training. So communication at the limit is what I'm passionate about. If you had to give a five-minute talk, what would it be about? About Shannon, I guess. He's my hero. He's my hero.
So one, you talked about the importance of timing in research, of not only finding the right problem, but finding the right problem at the right time, right? Both in terms of Shannon's work and in terms of your work. You know, you're also a professor and a, you know, a manager. Like, how do you help other people find the right problem at the right time?
Yeah, finding the right problem at the right time is probably the most difficult because, you know, time is everything. However, this is hard to teach. What you try to do is to be ready. So one very famous information theorist told me this. He said, you know, everybody will get lucky at some point in time in their career. However, most people, when they get lucky,
They're not ready, so they don't realize that they get lucky. And so they missed the opportunity. They went a different direction. Luck tells you you should go this way, but you went the other way. Lost it. That makes me so scared. And so what I teach my students is always be ready. It's like your muscles. You have to always train your muscles so that when you are lucky, you can capitalize on the luck. Yeah.
So you talked about Shannon's playful nature. Like he was a juggler. He rode a unicycle. You do anything like that? Do you have any weird hobbies? No. No. The only weird hobby is I love to talk to people like you. Fair. You love going on podcasts. That's the juggling of the 21st century. Who's your second favorite underrated thinker? My advisor. Ah, okay.
Bob Gallager. Bob Gallager, yeah. He taught me how to think about research because he learned from Shannon and I learned from him. And if you boil down what your advisor learned from Shannon and what you learned from your advisor, what would it be? What did you learn? Yeah, I learned about taking a very complicated problem, stripping it down to the essentials, and then formulating a problem around that and solving it.
That's an art. It's not something you can convert into a mathematical formula and teach students. It's just based on intuition, experience. And that's what Shannon taught my advisor. And that's what my advisor taught me. And that's what I try to teach my students. Really, teaching is not really about giving the formula. It's really just learning by examples. I observe what he does.
And then my students observe what I do as I interact with them. And hopefully this art will carry on from generation to generation. Finding the essence of the problem. Yeah. David Tse is a professor at Stanford. Today's show was produced by Gabriel Hunter Chang. It was edited by Lydia Jean Cott and engineered by Sarah Brugier. You can email us at problem at pushkin.fm.
I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem?