You're listening to a new episode of The Brave Technologist, and this one features Dr. Jeff Esposito, who's an engineering lead at Lenovo and has over 40 patent submissions in generative AI. Jeff had a long background in research and development at Dell and Microsoft before coming to Lenovo. He lectures on advanced technological development at various U.S. government research labs and believes that technology is at its best when serving the greater good and social justice.
In this episode, we discussed how Lenovo is shaping the future of AI with innovations like the Hive Transformer and Edgeguard, the impact of quantum computing and neuromorphic chips on AI's evolution, AI's role in building smarter cities through Lenovo's collaboration with NVIDIA and other partners, and why ethical AI matters and how technology must serve society's greater good. Now for this week's episode of The Brave Technologist. ♪
Jeff, welcome to The Brave Technologist, man. How are you doing? Great, Luke. Thanks so much for having me. It's a hoot to be here. I'm glad to have you here, too. And we're here at the AI Summit in New York. The very noisy AI Summit here in New York. Indeed, indeed it is. A lot of buzzing around. I know you're speaking at the conference. Do you want to share a bit with the audience about what you're talking about? Oh, sure. We're going to talk about AI futures, which is a nice way of saying equal parts what we'd love to be and what we are. And the only way to get to the future, if I can paraphrase and abuse poor Descartes one more time...
is that our anticipation and our belief in this result allow us to scaffold and build to that result. Excellent. So I think that's kind of the core idea I want to get across to people, that it isn't some mystical Jules Verne or Star Trek journey. It's simply saying, where are we and where do we want to go with what we have? Love it. I love it. Thank you. Can't wait to get deeper into this, too. I mean, when we think about things like next-gen infrastructure for the democratization of AI, what does that mean? Like, I mean, when you're thinking about, like, kind of breaking this down for, you know, people that are new to the space. Everybody likes to throw around the words autonomous and self-managing. But we actually do have that now. Codename VINA, Visual Insight Network for AI, and what I called Edgeguard, which the lawyers made me stop calling that and now have me call edgeguarding.
What's cool about them is that they're actually able to work on their own if they lose connection to the greater network. Okay? So when we talk about self-sufficient functionality,
There it is. Yeah. Right? So what it does is it does its best. Each of these technologies combines machine learning and symbolic logic to do the best it can with the information at hand and the retained history. So it combines the best of data-driven decision-making with trended history. Okay, okay. Excellent, excellent. How do you see this becoming more accessible, lowering barriers for smaller players in the space? Well, I think this, you know, your podcast, Luke, is an excellent vehicle for this.
Because, you know, everybody likes to talk about how patenting things is really supposed to be about educating the public and raising the standard of practice in engineering. But I really think good old conversations like this are a great way to get the word out and to make it really accessible. I agree.
By talking about it, it becomes that much more accessible, as opposed to, "How do I read patent 17715? Oh, do I have to?" Right, right. And I think too, especially in the context of you're at Lenovo, Lenovo is huge, it's just a staple, especially in engineering and the whole development community. It's a wonderful company and I am so proud of what we're doing to make the difference, to be relevant to the customers and to not just draw on a strong history
of innovation, but also to be in, all in, on what we can do with AI, innovatively, but more importantly, relevantly to the customer, 'cause it's not just tech for tech's sake. Right. If we can't produce the right results for our customers, then we're not really where we need to be, and I don't care if it's AI or good old COBOL and punch cards, you've got to be relevant. Yeah, yeah, no, that makes sense.
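Before moving on, here is a minimal sketch of the self-sufficient, hybrid behavior Jeff described a moment ago for VINA and Edgeguard: prefer the cloud model while connected, then fall back to symbolic rules plus retained local history when the link drops. The class, field names, and thresholds are illustrative assumptions; the real products' internals aren't public.

```python
# Hypothetical sketch only: an edge node that keeps working when the cloud drops out,
# blending a learned predictor with simple symbolic rules and retained local history.
from collections import deque

class EdgeNode:
    def __init__(self, cloud_client, history_size=1000):
        self.cloud = cloud_client                    # remote model; may become unreachable
        self.history = deque(maxlen=history_size)    # retained local observations

    def decide(self, observation: dict) -> dict:
        self.history.append(observation)
        try:
            # Preferred path: data-driven decision from the cloud while connectivity holds.
            return self.cloud.predict(observation)
        except ConnectionError:
            # Degraded path: a symbolic rule applied to the locally trended history.
            avg_load = sum(o["load"] for o in self.history) / len(self.history)
            action = "shed_load" if observation["load"] > 1.5 * avg_load else "run_normal"
            return {"action": action, "source": "local_fallback"}
```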
I think, let's dive in a little bit on these neuromorphic chips and quantum algorithms. They're cutting-edge concepts, right? How close are we to seeing these technologies in mainstream applications? Or maybe we can unpack what they are first for some folks that are not necessarily-- Okay, so when we talk about quantum computing and neuromorphic chips, we're talking about those things which adapt neural networks, and neural networks are nodes, associations, and circuits. And when we talk about quantum, everybody likes to say quantum. Yeah, it's such a buzzword. It's such a buzzword. I loved in Marvel's Ant-Man movie, the guy goes, "Do you guys just put the word quantum in front of everything?" But it's a classic physics experiment that's so exciting. And having done a postdoctoral fellowship in quantum computing, I can tell you that we're already there. And the whole idea, in my case, I focused on, oh God, forgive me for getting all biomedical buzzy. No, let's get in. All righty then. Pediatric oncology and genome research. Yeah. A marvelous and wonderful thing that boils down to how do we take the data we have
crunch it better, faster and truer and be able to help children, if not outright avoid having cancer,
make sure that we're better prepared to give them the right treatment as it begins to emerge. So to me, that's a beautiful balance of technology and relevance to society's greater good. And I'm going to sound terribly naive when I say this, Luke, but I deeply believe that technology must serve society's greater good or it's missing its purpose. So neuromorphic chips, quantum technology, it's there, okay?
but it's all about what is the application we need. Yeah. Right, so when you talk about AI and hybrid AI, what you're really talking about is reaching into a toolkit and getting the right wrench, the right pliers, and the right hammer for the job. Right. Right? Right. So that's really, really what we want to talk about. The wonderful thing about most engineers I know,
is that, yes, necessity is the mother of invention, but I'd also like to say that if necessity is the mother of invention, then desperation's a wonderful midwife because you're getting that baby out now. I love it. So the application loop of quantum technologies, neuromorphic chips, and more is happening, and not just in the lab, okay? But it's got to do with what are we solving? What are we trying to do to better the world? Right, right. That's what drives...
what combinations of technologies we bring. Yeah, that's great. That's really awesome insight. You're very kind. And I think we talked about patents, right? And I think there's over 40 patent submissions this year. Yeah, we have 40 patent submissions this year where we invented the world's first Hive Transformer, which if you will indulge me for just a quick second. No, please go dig in on that. That's what we want people to know about, right? Well, certainly. Well, thank you. So a Hive Transformer.
Let's talk about bees. What do bees do? They make little bees and they make honey. Where do they do it? In a hive. How do they do it? They use honeycombs, these beautiful geometric, symmetrical, mathematical manifestations of a greater mind. Okay? And so they make these things. They put in their nectar. So how do they know when the nectar becomes honey? Well, that's through a hive mind index. So the whole point of the hive transformer, while it's still a transformer model,
is that it's about pre-curation of data and planned and purposeful storage of that data. So rather than simply saying, "Hey man," and this is no hit on the beautiful work done in large language models, "I'm a huge honkin' large language model,
Right? I'm going to be everything to everyone. And the problem with that is ultimately at scale, if you're trying to be everything to everyone, you're not much to anyone. Right, right. So what we found with the Hive Transformer is that we are able to, through NLP, collect a list from a person talking to an avatar or a chatbot, and sub-second under...
ridiculous numbers, I'd have to look it up and tell you, but the benchmarks were all there. Sub-second response, we're able to capture, build, and have immediately ready for reference these honeycomb transformers that form into a hive. So this whole idea of the hive is marvelous because if you say to yourself,
What good is that? I mean, well, it's honey, but if we step back a moment and say, why can't we collect honeycombs into hives? Why can't we aggregate individual actions into skills? And if we can collect skills, can't we collect skills and aggregate them as talents? Okay, okay. And if we can have talents, how best do we describe different persona? Okay.
Okay. Perhaps the user, the human, needs talents for diagnosing their SR675 class version 2. In other words, what's wrong with my computer? Right, right. Well, we understand from the communication which keywords are triggered. We go to classification. We pull up the relevant hives.
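To make that keywords-to-hives step a bit more concrete, here is a tiny illustrative sketch: pre-curated "honeycombs" grouped into keyword-indexed hives, pulled up by a simple classification over the user's utterance. The names and structure are assumptions for illustration, not the patented Hive Transformer design.

```python
# Illustrative sketch (not the patented design): curated honeycombs grouped into
# keyword-indexed hives, retrieved by a simple keyword classification step.
from dataclasses import dataclass, field

@dataclass
class Honeycomb:
    topic: str
    facts: list          # purposefully stored, pre-curated snippets

@dataclass
class Hive:
    name: str
    keywords: set
    combs: list = field(default_factory=list)

def relevant_hives(utterance: str, hives: list) -> list:
    words = set(utterance.lower().split())
    # Keyword trigger -> classification -> pull only the hives that apply.
    return [h for h in hives if h.keywords & words]

hives = [Hive("sr675-diagnostics", {"sr675", "logs", "configure"},
              [Honeycomb("boot", ["check firmware level", "collect service logs"])])]
print([h.name for h in relevant_hives("my sr675 needs logs pulled", hives)])
```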
And now we're into it. Okay. And the chatbot goes from saying things like, "I can't answer that," or "Let me tell you once again something you already know about the weather," into, "Oh, so we're talking about this specific model of the Lenovo SR675. You need it configured and you need it diagnosed for the following logs. I've got you." Awesome. No, this is great. Because I think there's a lot of focus around this. And I know from what we're doing too where, you know, the...
You said it really well around the large language models. The whole web starts to look the same, everything starts to feel the same. If all you got is a hammer, the whole world's a nail. Exactly. And that never works with panes of glass, brother. Exactly. And we're looking at ways to kind of like augment that with like real-time data or even local models and things like that. And this sounds like a really interesting... Well, the Hive Transformer allows us to scale from way down on the watch all the way up to any kind of supercomputer you'd like to create. It was built...
to be adaptive and collective. So if you just watch nature, it gives us the answer to so many things. Are you guys looking at this mostly from kind of this consumer electronics angle at first, or is it bigger scale? Give us a sense, maybe there's something we could cover. I hate to sound this way, but I am a bit of a mad scientist. I love it. I'm not really a mad scientist. The most I ever get is slightly annoyed.
I'm sorry, I'm amusing myself unduly. This is great. So I was looking at this and I was unhappy with the benchmark speeds of certain large language models, and people would then invest all this time, and everybody would roll their eyes and say it's time to train it again. Let's see what we get this time. And I'm like, no, we're going about this wrong. We're all about what can we get. We should be about...
did we put it where we can find it? So I found myself thinking, of all things, of the Amish, who used to have in their barns labeled bins and drawers so they knew exactly where a screwdriver was, in what barn, in what bin, right, at what point. So I thought, well, gee whiz, and I
was looking out at some bees in flowers. And I'm going, oh no, it couldn't be this easy. And so I went after it. And then as I built the mechanism, I said, well, what are the applications, Jeffrey? It should never be tech for tech's sake. We can take this all the way down to the phone. We can take this all the way wherever we want in a cloud. Wow. We even have a project that I had
code name Hecate, which really had to do with humanoid robots being able to exchange information. Remember we just talked about skills and talents. So you have two humanoid robots. You have humanoid robot A learning how to pick up a screwdriver. You have humanoid robot B learning how to
carefully swing a hammer. Now these are close but disparate skills. So if we use something as simple as Bluetooth and the humanoid robots pass within sufficient distance, they do a diff on hives. And they acquire each other's hives. So now we've just cross-trained humanoid robots.
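A back-of-the-napkin sketch of that diff-and-acquire exchange follows. The skill representation and the transport are made up for illustration; it only shows the idea, not the project's actual protocol.

```python
# Toy sketch of the "diff on hives" exchange between two robots; the skill
# structures below are invented purely to illustrate diff-and-acquire.
def diff_hives(mine: dict, theirs: dict) -> dict:
    """Return the skills the other robot holds that this one is missing."""
    return {skill: steps for skill, steps in theirs.items() if skill not in mine}

robot_a = {"pick_up_screwdriver": ["locate", "grip", "lift"]}
robot_b = {"swing_hammer": ["locate", "grip", "swing_carefully"]}

# When the robots pass within range, each acquires the other's missing hives.
robot_a.update(diff_hives(robot_a, robot_b))
robot_b.update(diff_hives(robot_b, robot_a))
print(sorted(robot_a.keys()), sorted(robot_b.keys()))   # both now hold both skills
```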
So my work kids, and it's a shout out to Drew and Dave and Ash and Dinesh and Arun and Deepak and everybody out there, it's just wonderful. And they said, oh, but this is just like Iron Man. Please can we call it Codename Jarvis? And I'm like, okay. And they squealed. So it's become Codename Jarvis. Awesome, awesome. And it works. The technology works.
Because again, it's like, think of Legos. If you don't like hives, think of Legos. You snap them together, they become what you make. Right, no, I mean, and there's so much, like, so many people want to kind of dig in on this. I know at Brave, too, we are, too. I mean, you've got people with different devices, and our whole thing is around kind of like, you know, privacy and making sure that we're not, like, exposing users' data out to folks that they don't know. You know, it's all about like kind of, you know, having that kind of ownership over everything.
Well, responsibility. And whether you're playing with AI or you're just doing regular technology that involves more humans, the ethical questions remain pervasive but similar. Right, right. With an AI system, my whole approach to it has been,
proof of result and evidence of compliance. If your system, spiffy as it may be, can't provide that, we need to fix that. And that's the truth of it, because you want to be responsible, you want to do the right things the right ways, you want the technology to support where you want society to go.
No, makes sense. Can't think of a better segue. I know you mentioned before we started here, there's some Smart City stuff. Yeah, Smart City Barcelona. Can you dig into a little bit about what that's all about? My pleasure. So about five days before NVIDIA was going to announce...
that they were going to take the NIM AI blueprint and take it out. And so NIMs, let's talk about NIMs. Yeah, yeah, no, no, please unpack it. They're a wonderful technology. NVIDIA inference microservices. This is what you get when you let engineers name things. What it really is, Luke, is a set of containers
that holds specially trained language models, APIs, templates for usage, and more so that it's self-contained and pre-tuned. The advantage to the customer is that rather than have to go out and say, "Okay, let me start over here with this, and now let me see if I can't knock it down to that," and lose maybe two to three months trying to make it work, it's pre-built. It's purpose-built so you can take that. So I was in the room when the term got coined down in San Jose,
by accident, okay? I think I had lost my way and I was on my way to the men's room and I just ended up in a conference room making videos. And the product manager, who's brilliant, shout out to that fellow, wonderful guy, had said, we're going to do blueprints. And I'm saying, you have
geometrically complicated things because now you're going to have NIMs that plug in like Legos to other NIMs to solve more complex problems. Why don't you just call them recipes? Well, we've already called them blueprints. So what they really are are recipes that fit a specific use case. So if you want a chatbot that's also capable of XYZ, such as in this case, metropolitan NIM usage, which is a thing, then you would take and you'd have a blueprint for that.
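As a purely conceptual sketch of that recipe idea, here is one pre-packaged retrieval service chained to one generation service. The endpoints, ports, and JSON shapes are hypothetical placeholders, not NVIDIA's actual NIM API or any published blueprint.

```python
# Conceptual only: chaining two pre-packaged inference microservices into a "recipe."
# The URLs and payload fields below are hypothetical placeholders, not NVIDIA's API.
import requests

def ask_city_chatbot(question: str) -> str:
    # Step 1: a retrieval microservice narrows the question to relevant city records.
    retrieved = requests.post("http://localhost:8001/retrieve",          # hypothetical
                              json={"query": question}, timeout=10).json()
    # Step 2: a generation microservice answers using that retrieved context.
    answer = requests.post("http://localhost:8002/generate",             # hypothetical
                           json={"query": question,
                                 "context": retrieved.get("passages", [])},
                           timeout=30).json()
    return answer.get("text", "")

if __name__ == "__main__":
    print(ask_city_chatbot("Which intersections reported signal faults this week?"))
```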
So what we did down in Barthelona, and every time I call it Barthelona, I get corrected by the good people in Spain, because you pronounce it like that because the king did. And I'm like, you got it. I'm cool. And what we did down there is they had said to me, can you invent a couple things for us? Doc, whenever they call me Doc, I know I'm in trouble. They said, Doc, can you invent a couple things for us before we go into Barcelona? And I said, "When's that?" "Nine days." And I said, I pulled out the Star Trek response. I said, "Dammit, Jim, I'm a doctor, not a vending machine." And they laughed and they said, "We understand." And I kind of took it as a challenge. And so I woke up without sleep for a few days and we ended up with codename VINA, Visual Insight Network for AI. And, as I mentioned, I think, a moment ago, Edgeguard, which I was told to call edgeguarding. And so yes, Lenovo, I'm calling it edgeguarding.
Which really has to do with adaptation. Yeah. So with VINA, it's about quality of service and using predictive caching to identify different images. Okay. So that we can move
relevant images at high quality across the network, as opposed to just a gush and stream of images. Okay. Wait a minute, let me try and gather it afterwards. So it's using AI to refine AI. Okay. And Edgeguard has to do with the situation when we have a hybrid network that's also connected to the cloud, but also connected to the traffic lights, also connected to the stop signs, also connected to the kiosks,
and all of a sudden we lose connectivity to the cloud. Well, usually that means game over for a little bit till we reboot. But because of Edgeguard, what we do is it adapts and it has localized history and so now it forms a new network using the most recent history that's stored locally and it does its best until it comes back online. So you might see during simulation we saw
an acceptable loss in processing speed, but nothing that turned into all the traffic lights on West 23rd are off. So this is another example of hybrid AI where we balance symbolic logic with machine learning to do more than just
count beans, but to infer and to take those rules and be able to take action on those rules. And the popular term today is agentic. And all agentic is, is any system that takes action without direct human decision-making. And we've had that a lot of years. Yeah. Okay. So it is agentic. So when you combine different methodologies of
artificial intelligence, and again I have such a problem with that term, I'm so sorry. Because there's no little man in a box, there's no sentience. Right, right, right. It is a simulation, a simulacrum, an attempt to imitate that which we're familiar with. And at some point I'll bore you to tears about my ideas about robo-psychology. No, no, I love it. I'm working on my third PhD, this one in clinical psychology, because robo-psychology, Luke, isn't about the singularity coming and things being sentient, it's about how do people emotionally engage
and cope with assistants that aren't alive and are only imitating people. So for me, robo-psychology is understanding the human psyche in relation to our projected simulations of intelligence. Interesting. Now, you gave a stoplight example. Are there any others that kind of stand out, like in the city context? More in the city context. I think it's super interesting for
Well, sure, sure. So we talk about digital twinning. Yeah. And we talk about that as a means of planning the city. So we use digital twins to plan the city. We use responsive technologies to manage the operations of the city. So what we really have is the full range of our own minds allowing us to create
tools that help us go, where do I want to go versus where are we? And literally, where do I want to go? Because you have your traffic lights and your traffic intersection. So what would be lovely to adapt further is an improvement such that you don't just have the Garmin effect, no hit on Garmin. I probably shouldn't mention a brand name.
But rather than just following Google Maps and following all these things, you do more than store routes. You're able to take your own data on how you walk or how you ride or how you travel in an autonomous car and then use that inferentially to determine the best paths and the best times for you.
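A small, assumption-laden sketch of that "more than stored routes" idea: inferring a person's best departure hour per route from their own logged trips. The data and field names are invented purely for illustration.

```python
# Illustrative only: infer a preferred departure window per route from the
# user's own trip history, rather than just replaying stored routes.
from collections import defaultdict
from statistics import mean

trips = [  # (route, hour_of_day, minutes_taken) from the user's own history
    ("home->office", 8, 34), ("home->office", 9, 22), ("home->office", 9, 24),
    ("office->gym", 18, 15), ("office->gym", 19, 12),
]

by_route_hour = defaultdict(list)
for route, hour, minutes in trips:
    by_route_hour[(route, hour)].append(minutes)

# For each route, pick the departure hour with the lowest average travel time.
best = {}
for (route, hour), durations in by_route_hour.items():
    avg = mean(durations)
    if route not in best or avg < best[route][1]:
        best[route] = (hour, avg)

for route, (hour, avg) in best.items():
    print(f"{route}: leave around {hour}:00 (~{avg:.0f} min)")
```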
So it really can come together in a beautiful way to make the human experience very fulfilling. This sounds very useful for people. Imagine that. I know.
I'll never do that again, Luke. I'm sorry. You're right. I was wrong. No, no, no. I love it, man. I'm glad you enjoy. I enjoy your time, too. And let me just, like, just for folks that are listening, right? Like, a lot of this stuff, I know that you all are developing at Lenovo. Like, how open is this for other folks to start adopting? Or are these tools that people can use? Like, this hive, the tools, the Hive Transformer, these types of things, is it something, like, proprietary to Lenovo? That's kind of the nature of the patent, right? Right, right, right. We retain the rights and royalties for it for a period of X years before the United States government. So the idea is
that we will serve our customers meaningfully. We give them a unique advantage. As a result of these technologies, specifically the Hive Transformer and another element called the semantic cascade that takes RAG, Retrieval-Augmented Generation, it takes us further into what I call CARGO technology.
And so when someone asks me, "Cargo?" I go, "Beep, beep. Cargo, beep, beep." And then after they hit me, we talk about the fact that CARGO really stands for Context-Aligned Retrieval, Generation, and Orchestration. So we have that agentic element possible now on the fly.
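Here is a hedged sketch of what a retrieve, generate, orchestrate loop of that flavor could look like. The function names, the stand-in generator, and the action registry are illustrative assumptions, not the patented semantic cascade.

```python
# Hedged sketch of a retrieval -> generation -> orchestration flow. Everything here
# is a stand-in: the "generator" just echoes context, and the action is simulated.
def retrieve(query: str, hives: dict) -> list:
    return [fact for topic, facts in hives.items() if topic in query.lower() for fact in facts]

def generate(query: str, context: list) -> str:
    # Stand-in for a language model call: echo the aligned context.
    return f"Based on {len(context)} retrieved facts: {'; '.join(context)}"

def orchestrate(query: str, answer: str, actions: dict) -> str:
    # The agentic step: if the query maps to a known action, run it instead of just replying.
    for trigger, act in actions.items():
        if trigger in query.lower():
            return act()
    return answer

hives = {"sr675": ["collect service logs", "verify firmware level"]}
actions = {"collect logs": lambda: "Started log collection job (simulated)."}
query = "Please collect logs from my SR675"
print(orchestrate(query, generate(query, retrieve(query, hives)), actions))
```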
So all of our customers will receive the right combination of these inventions relevant to their use case. We're also working on bringing them fully to production as products that stand alone in and of themselves as a subscribable suite of tools. Oh, cool.
Right? And so as that goes on, you know, what is it? Nonaka and Takeuchi, the same people whose work helped bring us the agile method, were Japanese organizational theorists who made the point that ideas become more powerful and amplified the more openly they're shared with people. Right. So the whole argument for patenting isn't so much to retain control as it is to educate the public. Right. So really all anyone has to do to understand it
is to read the patent or reach out to us at Lenovo. And I'm always happy to spend a little time with people to help them understand what we've created. Oh, that's awesome. That's awesome. Yeah. I learn a lot of stuff that way myself. I mean, it's out there. Because I don't know it all. I don't even know half of it. There's so much out there too, right? Oh my God, there's so much beautiful stuff out there. Yeah, it's just, it's paralyzing sometimes. It is. So I think like,
And what other new capabilities do you foresee coming from the integration of advanced hardware and software? Well, I think that what we can do is trust our imagination and sense of right and wrong and let this take us forward. This is why I always stress that technology must serve the greater good to fulfill its purpose.
So rather than opportunistic technologies, I would rather see we commit ourselves to enabling technologies. There's a wonderful startup coming out of NYU that intends to help the blind
better navigate. And so we started to talk about using their actual sticks and building in sensors so that we're capturing actual data live, but also trending that back to a device, could be their own phone, again because of the Hive Transformer technology being able to go so tiny, little units of honeycombs. It's all about how many honeycombs did you need, how many hives do you need, right? So it's a beautiful vision they have and their intention is to make life better for the handicapped.
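Just to ground the "tiny honeycombs on the phone" idea, here is a hedged sketch of live cane-sensor readings being trended in a small fixed-size window. The sensor values, class name, and thresholds are invented for illustration and are not the startup's design.

```python
# Illustrative only: live readings from a sensor-equipped cane trended into a small,
# fixed-size window that can live comfortably on a phone.
from collections import deque

class HoneycombWindow:
    """Keeps a tiny rolling summary instead of the raw sensor stream."""
    def __init__(self, size: int = 32):
        self.readings = deque(maxlen=size)

    def add(self, distance_cm: float) -> None:
        self.readings.append(distance_cm)

    def obstacle_trend(self) -> str:
        if len(self.readings) < 2:
            return "insufficient data"
        return "closing" if self.readings[-1] < self.readings[0] else "clearing"

window = HoneycombWindow()
for reading in (210, 180, 150, 120, 95):   # simulated ultrasonic distances in cm
    window.add(reading)
print(window.obstacle_trend())              # -> "closing"
```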
We invent tomorrow to solve the problems of today. Yeah, yeah. We don't just turn it up to 12 because there's 12 on the dial. Right. Does that help? No, it does help. It does help. And that's why I want people to hear these things, right? Because a lot of the general public is scared around these things with AI, like, oh, my job's gone, or Terminators, or whatever. Well, that's that whole robo-psychology element that I want to bring to light. Because I feel that as we bring in new technologies,
The most important thing we can do is remember the importance of the human element. Yeah. Great old Renaissance thinker, Count Urbino, said "Essere umano," which is, it's essential to be human. So no matter how we may prize knowledge and the ability to make knowledge look like magic, it's all about the people. If we're not helping the people, if we're not relevant to the people, then we've missed the whole point.
Let's touch on why AI is better with Lenovo and NVIDIA. Of course, of course. Thank you. And that's a very kind softball of a question. But in fact, it's actually true, because I'm absolutely no salesman. I'm just a dumb engineer.
So here's the reason why. NVIDIA technologies, specifically the wonderful work they're doing with GPUs, and other components and software all around that, the NIMs, the NVIDIA inference microservices, all of this beautiful ecosystem of software, in combination with Lenovo servers and technologies, which, and I hate to use the term best in class, really are best in class. We have water-cooled capabilities on smaller servers. It's kind of freaky. It's so good, all right? But then you add in our
40-plus patents, which are breaking new ground and setting, I think, a whole new direction in terms of innovation around AI, making it more human, making it more functionally agentic.
I think that what happens, what's good for the customers, is that they end up getting capabilities they couldn't have otherwise. Awesome. Awesome. That's really, I think, why Lenovo and NVIDIA are best together. I think that makes sense. And, you know, stepping back a little more, I mean, like, you're an advocate for technology serving the greater good. We talked about some of the
I'm rabidly passionate about that. Yeah. I mean, like, how do you see next-gen AI enabling advancements in social justice and equality? That's a beautiful question. I think the whole thing is around how do we enable the greater good? Yeah. Because what we imagine is what we build. What we feed is what we find. Right. That old story about, you know, do you feed the bear or do you feed the maiden?
Right? So if we feed for the greater good, we'll create for the greater good. It's how we direct our mind. And sometimes it's super unintentional and we get some amazing stuff. Yeah. You know, Star Trek communicators to mobile phones. Yeah. So today we're talking about ideas like the Hive Transformer. We're talking about scaling down and scaling up.
as we need. And it's not so much a question of can we, but where do we want to go with it? Right. When you talk about serving the greater good, enabling social need, you have to allow people to have their authentic voice. You have to allow people to have the capacity to offer their two cents, their story. What are they going to contribute to tomorrow? And I think if everybody were to come to embrace that thinking, they'd realize that everyone has something of value to add.
And I think that the way that happens is, God bless open source, right? Right. The von Hippel mindset of innovation says, the community needs a barn. Yep. I don't care if it's making money. We're building a barn. And you get a barn. Right. And then you find out later that it makes money. And in contrast, you have the Chesbrough mindset of open innovation, which basically says,
Very positively, it's Kickstarter. If people want it, they'll pay for it. And if people pay for it, we ought to go do it because people want it. So in one way, it realistically says the authentic need can be expressed in a lot of ways, but usually through what people will pay for, patronage. So I think if we shift our models, especially at the corporate level, more towards listening to the authentic voice of the customer,
and doing just enough research and development into the market to produce what's needed, then we'll be relevant. And when we're relevant, the greater gestalt of people means we create good things for people. That's awesome, man. I can't think of a better note to kind of...
wrap things up on. And we covered a lot today and it's been just a fascinating discussion. I wonder though, is there anything that we didn't talk about that you might want our users or our audience to know about? Oh my God, that's a wonderful opening. And I think I just want people to believe in their own ability to imagine and build the future in a positive way with others. Awesome. That it isn't so much this idea of
closely held knowledge as a weapon. I think in the early to middle 21st century, it won't so much be "scientia est potentia," it'll be "scientia est commercium": knowledge is power, knowledge is trade. And it builds upon itself. The beauty of open source is we all make money as we can off the models. And if we keep true to that thinking,
Right? Then we'll have that. Now, patents tend to be very much a matter of holding ownership, but they're also good because they bring a methodical and structured way of sharing and showing how to do things. So if we can kind of balance the two approaches, we'll have a structured and methodical way of openly sharing information. And there's Nonaka and Takeuchi's upward spiral
going all the way up, right? Because if you have structure and you share and you listen to one another, then you create beautiful things. That's beautiful, man. I really appreciate it. Finally, where can people follow your work or if they want to reach out and say hello? Oh my God.
So I am on social media from time to time. Yeah. You can find me on LinkedIn. Okay. Lenovo puts my blogs out from time to time. So if you go to Lenovo and look up Dr. Jeff, you'll find me. Awesome. And I'll be happy to help and talk to anybody. Excellent. Well, thank you so much, Jeff. This has been really great. Really appreciate you coming on. Love to have you on again too sometime. Well, I'd love to do that, Luke. I really enjoy speaking with you. Excellent, man. Thank you very much. A genuine pleasure. Yeah, yeah. Have a good one. Have a great one. Alrighty.
Thanks for listening to the Brave Technologist podcast. To never miss an episode, make sure you hit follow in your podcast app. If you haven't already made the switch to the Brave browser, you can download it for free today at brave.com and start using Brave Search, which enables you to search the web privately. Brave also shields you from the ads, trackers, and other creepy stuff following you across the web.