Commonly, what happens is it's almost the same thing that keeps breaking again and again in very subtly different ways. And so being able to track that through AI and mitigate the problem quickly is what we do, so that people can focus on what we think they should do, which is creativity, which is where AI can't play a role. AI, at best, can regurgitate or mimic what it's seen before.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service. Our community is growing thanks to you, our loyal listeners. As you probably know by now, we recently launched a newsletter. Every week, we share tips and some additional AI fun facts that don't always make it into the week's show.
Go ahead and register and join us in that community. It's a fun way to engage with other listeners. We will, of course, share a link in the show notes. If you like what we do,
please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. As you know by now, if you leave a comment, I just may share it in an upcoming episode, like this one from Landon in Fort Wayne, Indiana, who's the CEO of a telehealth startup and listens while folding the laundry.
Landon's favorite episode is the discussion, this is from the Wayback Machine, with Ashu Garg, general partner at Foundation Capital, about the future of venture capital and how VCs hustle like entrepreneurs to close deals. Great conversation. We'll link to that one as well in the show notes. We learn from AI thought leaders weekly on the show, and of course, as an added bonus, you get one AI fun fact. Here it is for today.
Emily Shumway writes in HR Dive Online that there's no evidence of jobs being entirely automated by AI. Anthropic's assessment of over 4 million user-submitted AI prompts, part of its Anthropic Economic Index, found that most workers use AI to augment their work, not replace it. Workers using AI to augment work, at 57% of those surveyed, versus automating away work, at 43%, indicate that trend. Technical employees, particularly software engineers, made up the bulk of employees using AI for work tasks. Keep in mind this is biased by who uses Claude from Anthropic: of all the requests sent to Claude, Anthropic's AI assistant, 37.2% were in that job category. Very few occupations make substantial use of AI, according to that survey. Only about 4% of jobs use the tool for at least 75% of tasks, while a little over a third of jobs use AI for at least 25% of tasks. These findings mirror an assessment made by Indeed last fall
which confirmed that while generative AI could assist in various tasks, there were no skills for which it was, quote, "very likely to replace a human worker." My commentary, human nature changes more slowly than technology.
Even today's most advanced agentic AI apps aren't quite ready to replace the best humans. AI has demonstrated exceptional abilities to write PhD-level research reports and even code new apps. But the very human pursuit of knowing the right questions to ask, and telling stories that get other humans excited about the output of AI,
isn't replaceable. More important, and this is the point we reiterate weekly on this podcast: replacing human empathy and rational thinking should never be our goal. Let's strive to augment humans where we need help. Attempting to replace us is a fool's errand that will only erode trust and impede the incredible progress that we're making together. Of course, we'll link to that full article in the show notes. Now shifting to this week's conversation.
Gu Rao is a serial entrepreneur whose previous company, Portworx, was acquired by Pure Storage for $370 million in 2020. He and his co-founder Vinod Jayaraman are back at it with Newbird, which built Hawkeye, an AI-powered site reliability engineer, or SRE, as you'll hear us refer to it in the discussion, that can identify, diagnose, and resolve IT infrastructure issues.
Gu and the team closed a $22.5 million seed extension in December 2024 with M12, Microsoft's VC arm, which extended the $22 million seed round raised from Mayfield, StepStone Group, and Prosperity7 Ventures. Gu holds a bachelor's degree in computer engineering from Bangalore University and a master's degree from Penn. Go Quakers.
And without further ado, Gu, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about your background and what led you to start Newbird.
Sounds good, Dan. Thanks for having me, and happy Friday to those listening. And I like your intro, by the way, of AI and how humans should embrace or view AI in the workforce. It's not so much a shift as a continuation of that conversation, and that's how I'm going to frame today's discussion with you.
Just to answer your question by way of background, I've been in enterprise software most of my career. Myself and Vinod, my co-founder, have had a passion specifically for data science, and both of us focused on AI in our postgraduate degrees.
Being in enterprise software, you invariably take on the role of IT support. And so while I've been designing software as an architect and implementing enterprise software solutions, I've been up at two in the morning fielding support calls and looking through logs, metrics, alerts, and traces.
We're fortunate enough to live in this era where we have large language models that have codified, that have captured, so much knowledge out there. And so what we're doing at Newbird is extracting that knowledge and applying it to a very specific use case, which is to do the job that I don't want to be doing at two in the morning. So for those uninitiated
as to what it feels like to be awakened by that pager at 2:00 AM: describe a day in the life of an SRE, and then talk us through what a new day in the life of an SRE looks like when Hawkeye, the AI SRE, is taking the page.
There's never any good news that comes in on that page, right? It's always bad news. A typical alert could be anything, up to, on the catastrophic end, a customer calling you saying my environment is down completely.
And the broader that kind of message, the more your palms start sweating because you don't know where to start. And any good SRE will tell you that finding the problem is 99% of the work. Once you find the problem, finding the solution is almost always the easiest part.
And so you asked the question, what does that day look like for an SRE? A broad alert comes in, you don't know where to start. Is the problem on the networking stack? Was it something related to a recent code update that we did? Is there a problem with my database? Do I look at IO? And you're looking at so many different places and you're trying to find a needle in a haystack. You're actually trying to find multiple needles in multiple haystacks and you're doing correlations across this. Now,
For those that haven't lived this life, what does that mean? You're looking at literally thousands and thousands of lines of logs with error messages, and you're looking for a specific error message. You look at a time window in which that error message happens. That's just the start. You need to take that time window and go over to another database, and maybe you're looking at metrics, and you're looking at what was my CPU health like? What was my IO activity like? This takes hours and hours and
at two in the morning, and your site is still down, and you're constantly getting pinged on Slack and email saying, how much longer? Where do we stand? When is it going to come back up? This isn't a fun job at all. Now, why I'm describing it this way is that all of the work I'm talking about here, which is looking for error messages and correlating them with metrics and alerts,
this is something AI can now do. It has the knowledge in it to understand what an error message means. It has the knowledge in it to say, hey, look, I'm seeing a crash pattern here. I've seen the same thing before. There's a problem in Node.js with this version. I know this; I, as in the LLM, know this. And it can take that information, go on a very specific
fact-finding mission in yet another database, and come up with a recommendation. That really is kind of all we do here. I'm vastly simplifying this, but
the job of extraction, of knowing what information I need to look for, that is the hard part. And then posing that to the LLMs and saying, hey, you've seen all of these different problems before, how would you go about solving this? Once you do those two things, you've created a very useful tool for SREs to go in and interface with as a copilot and say, I'm facing this problem.
Give it a broad question and let it do the work that a human would do in many hours. GPUs can do this in a matter of minutes and come back with an answer. I'm happy with that. We're not trying to replace; you can't, as you mentioned, replace a human function. But what you can do is enable them to be superhumans.
Just like with the advent of web search: I could go look in my notebooks and come up with the information, or I could go to Google and come up with the information so much faster. And look at where we are today.
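To make that workflow concrete, here is a minimal sketch of the kind of log-to-metric correlation being described: find a known crash signature in the logs, then summarize the metrics around each occurrence. The crash signature, data shapes, and function names are all hypothetical, not Newbird's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical crash signature, e.g., a Node.js out-of-memory crash.
CRASH_SIGNATURE = "FATAL ERROR: Reached heap limit"

def find_error_windows(log_lines, pad_minutes=5):
    """Return (start, end) time windows around each matching log line."""
    windows = []
    for line in log_lines:  # each line: {"timestamp": iso8601 str, "message": str}
        if CRASH_SIGNATURE in line["message"]:
            ts = datetime.fromisoformat(line["timestamp"])
            pad = timedelta(minutes=pad_minutes)
            windows.append((ts - pad, ts + pad))
    return windows

def correlate(windows, metric_points):
    """For each window, summarize the CPU samples that fall inside it."""
    findings = []
    for start, end in windows:  # each point: {"time": datetime, "cpu": float}
        samples = [p for p in metric_points if start <= p["time"] <= end]
        if samples:
            avg_cpu = sum(p["cpu"] for p in samples) / len(samples)
            findings.append({"window": (start, end), "avg_cpu": avg_cpu})
    return findings
```

This is the grunt work being described: a human does it by hand across dashboards at 2:00 AM; an AI SRE can run the same pattern in seconds.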
So just so listeners have context: Google coined the term site reliability engineer, and that built on the concept of the traditional NOC, a network operations center, where you'd have humans physically sitting in a room with a bunch of blinking lights, and we distilled the business service down to blinking lights. The concept of an SRE was really thinking about the business impact of an outage or a degradation. So kudos to Google for innovating.
And now we're talking about a level of innovation beyond what anyone in the observability or SRE space has really thought about before. How do you think about the role that Newbird plays in this next iteration of what we do in site ops?
So you made a very good point: while Google coined the term SRE, it's not that the function didn't exist. You've always, in some way, shape, or form, had IT operations and people that are responsible for running production; a site is a very specific term to Google, and that makes sense. But now, and I'll get to your question in just a second, the term SRE is used more broadly.
For instance, in a bank, maybe there's somebody in charge of managing the payment processing pipeline. That's their site, and people kind of call the people managing that SREs. So there is always somebody responsible for managing production application infrastructure. And that's where Newbird comes in. What we're trying to do is empower those teams to do their job
better and more efficiently. For your audience, just to understand the lay of the landscape in IT operations, and because we mentioned this term observability,
apologies to those who completely understand this, but observability tools can easily take up to 30% of an IT operations budget. I'm talking about monitoring tools, things that are looking at logs, traces, metrics, and alerts, and incident management platforms like PagerDuty. All of these tools are necessary, and they do a really good job, but they require a very skilled human operator to make sense of the telemetry.
What we're doing at Newbird is leveraging the knowledge that exists in these LLMs to help these people do this faster, especially when it comes to repetitive tasks. In IT operations, commonly,
what happens is it's almost the same thing that keeps breaking again and again in very subtly different ways. And so to be able to track that through AI and mitigate the problem quickly is what we do, so that people can focus on what we think they should do, which is
creativity, which is where AI can't play a role. AI, at best, can regurgitate or mimic what it's seen before. So that's extremely useful for doing things that are repetitive. Self-driving cars driving down a road: I get it, it's a repetitive task. There's really no creativity in driving. And it's the same thing we're doing over here, responding to alerts and outages and crashes and performance issues.
It's something we've seen year after year after year. Let AI do that, and let humans work on more creative job functions. So you asked the question, how do we fit into this SRE organization? That's kind of where we fit in, which is:
the low-hanging fruit, the mundane activity, the repetitive tasks, let AI take care of that. Our product, which is called Hawkeye, is positioned exactly there. And so what we let people do is focus more on automation. How do we make our operations faster, better, more efficient, more self-service? Those are things that require creativity, and now IT teams can focus on them.
So you alluded to what I'll call a Cambrian explosion of observability tools over the last decade. And they've gotten, as you know as well as anyone, pretty good at using anomaly detection techniques to distill signal from noise and isolate where the human SRE goes and focuses their time. But Hawkeye takes it a step further. When we give the AI agency to take action, there's
more risk, but also potentially a ton more benefit to the SRE. Talk us through the kinds of tasks that today's SRE organizations are comfortable giving the AI agency to automate, and then where there's still some reluctance, where they want the AI to stop and the human to take over.
You brought up three points there, so let's take one at a time. The first one: 100%, the existing observability tools are really awesome. I mean, they do a really good job in search and storage. And what am I talking about here? For your audience, just so they understand:
the problem of observability is so large that people shouldn't dismiss the innovation that's happened there. Take just the storage problem alone. If you're storing telemetry from all of your applications, and let's take a cloud-native environment where you're running Kubernetes and you have literally thousands of pods,
Each pod is emitting hundreds of lines every minute. How are you even storing this, let alone searching it? And so these observability platforms have done an awesome job in storage, efficiently storing this data, allowing you to efficiently search it, do semantic searches, do deduplication, do correlation. All of that exists in the logging platforms. I mean,
They use things like Lucene and other technologies that allow you to do these kinds of searches. And that's just on logs. Then you go over to metrics, and you look at tools like Prometheus that can efficiently store and sort through columnar, time-series information. And that is also its own storage problem. So my point being,
All of this information exists, it is efficiently stored. So the problem is no longer can I observe what's going on in my environment. The problem is now shifting to can I act on it?
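As one concrete example of the kind of time-series query these platforms are built to serve, here is a minimal sketch against Prometheus's HTTP query_range API. The server URL is hypothetical and error handling is kept minimal; this is a sketch of the access pattern, not production code.

```python
import requests

PROM = "http://prometheus.example.internal:9090"  # hypothetical server

def cpu_usage_window(start_ts, end_ts, step="30s"):
    """Fetch non-idle CPU usage over a time window as (timestamp, value) pairs."""
    resp = requests.get(
        f"{PROM}/api/v1/query_range",
        params={
            "query": 'rate(node_cpu_seconds_total{mode!="idle"}[5m])',
            "start": start_ts,  # unix seconds
            "end": end_ts,
            "step": step,
        },
        timeout=30,
    )
    resp.raise_for_status()
    series_list = resp.json()["data"]["result"]
    # Each series carries its labels plus [timestamp, value-string] samples.
    return [(float(t), float(v)) for series in series_list for t, v in series["values"]]
```

The storage and retrieval side, in other words, is a solved problem; the open question is who, or what, reads the answer and decides what to do next.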
And for humans to be able to look through all of this information, even as efficiently as it's stored, and be able to draw conclusions, that's the gap that needs to be solved, and that's where AI can play a role. And that's where Newbird comes in. Now, you brought up a point about agentic systems, or agentic workflows. And here I want to draw a distinction, and this is my second point that I want to highlight.
When GenAI and LLMs came out, I think a lot of people drew an incorrect problem statement and they said, can I use LLMs to chat with my data? Because
Remember, I just set the stage for all of these observability platforms are doing a really good job in storing the information. Can I chat with that information? And people started to create chatbots. And what it does is it brings a natural language interface to your data. And in our mind, that's not the problem. Creating a chatbot is interesting if you know what you're looking for.
The problem statement is: can I create something that mimics the thought process of a job function, of what a person would do? And this is where agentic workflows come in. So I'm going to take your question to make the following point: there's a difference between chatbots and agentic systems. An agentic system is something that applies, or extracts, the reasoning capabilities that exist in these LLMs to do the job function
of what a person would do. In a chatbot, a human is still doing the reasoning. You're chatting with your data, you're asking a question, but you're driving the whole conversation. Whereas in an agentic system, the agent chats with the data, asks itself a question, and analyzes the answer to ask yet another question.
And it does this iteratively until it can come to some form of conclusion, or not. It may not conclude, and it may then seek to involve a human in the loop to say: I am stuck. I've gotten this far; can you help me further so I can continue my journey?
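A stylized sketch of that loop might look like the following. The `llm` and `telemetry` objects and their methods are illustrative stand-ins, not any real API; the point is the shape of the control flow, including the human-in-the-loop escape hatch.

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str          # "query" or "conclude"
    query: str = ""    # telemetry query to run next
    summary: str = ""  # final report text, when kind == "conclude"

MAX_STEPS = 10

def investigate(incident, llm, telemetry):
    """Iterate: plan a step, run it against real data, feed the result back."""
    context = [f"Incident: {incident}"]
    for _ in range(MAX_STEPS):
        step = llm.plan_next_step(context)  # the LLM drives the reasoning
        if step.kind == "conclude":
            return {"status": "resolved", "report": step.summary}
        result = telemetry.run_query(step.query)  # real data, fetched on demand
        context.append(f"Q: {step.query}\nA: {result}")
    # Stuck after MAX_STEPS: hand everything gathered so far to a human.
    return {"status": "needs_human", "context": context}
```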
And that's what agentic systems and agentic workflows do. That's how they complement the workforce, by taking on the task of a person. And in our world, and I'm describing this very broadly, to bring it specific to what we do at Newbird, what it means is: can Hawkeye respond to a PagerDuty incident or an alert on its own, and take it all the way from an incident, through your data, looking at the data, drawing a conclusion,
and coming up with a final report? That's the difference between what a chatbot is and what an agentic system is. Your third point in your question was, how are people relating to this? And I want to say that, look, with any new technology, there are skeptics, and it's all over the map. When cloud computing first came out, I remember, 20 years ago,
a lot of people would say, there is no way I am moving my infrastructure off-prem into the cloud; cloud is never going to survive. Nobody saw it as an interesting thing. And look at where we are today. So, like that, I think people embracing agentic systems in their enterprise is going to take time, probably as long as the cloud journey, because people have an inherent resistance, I think, to
processing new technology. And so to that point, you have to be careful, and even companies like Newbird have to be responsible in how they're advertising and promoting this. So we put in guardrails, and we deploy Hawkeye in read-only mode, which is to say,
at best, it's going to come up with a recommendation, but the human in the loop, the human operator, has to apply the final solution. We recently started working with some feature flag companies. I just had a conversation prior to this meeting today where some customers are saying, look, Hawkeye does a really good job, but
I get why you don't want to go directly from conclusion to action. Why don't we put this behind a feature flag, where Hawkeye can enable and disable feature flags, because the feature flags have been vetted by a human engineer? Makes sense to me. The ways in which it can go wrong are muted, and you have guardrails. And so I think people are going to start playing with this that way. That's a good example.
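A minimal sketch of that guardrail pattern, with invented flag names and a duck-typed flag client, might look like this: the agent never acts directly, each remediation maps to a flag a human engineer has vetted in advance, and anything unvetted falls back to a recommendation.

```python
# Map each remediation the agent knows about to a human-vetted feature flag.
# Flag names and the flags/executor interfaces are hypothetical.
VETTED_ACTIONS = {
    "restart_crashed_pod": "hawkeye.allow_pod_restart",
    "rollback_last_deploy": "hawkeye.allow_rollback",
}

def apply_remediation(action, flags, executor):
    flag = VETTED_ACTIONS.get(action)
    if flag is None or not flags.is_enabled(flag):
        # Read-only mode: surface a recommendation for a human to approve.
        return {"mode": "recommend", "action": action}
    # The action itself was vetted by a human engineer ahead of time.
    executor.run(action)
    return {"mode": "executed", "action": action}
```

The design choice here is that the human vets the action once, ahead of time, rather than approving every individual execution at 2:00 AM.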
So you and I have both carried a pager, and we probably still wake up in a cold sweat even though we don't anymore, thinking about those awful round-the-clock triage sessions where there's a P0, we're in the middle of a priority-zero incident, let's say an outage. Let's say we're at an e-commerce company or something, where maybe it's $100,000 per minute of downtime. That's not unrealistic for large sites.
With the traditional process, we'd immediately spin up a phone bridge, a voice bridge,
and all the stakeholders would come in. We'd have an emergency CAB, a change approval board meeting, and everyone would be there, you know, drinking coffee, whatever we need to do to stay awake at all hours. And it's a very human pursuit. Ultimately, there's someone who's heading the CAB, the change approval board, and on a high-profile issue, the CEO is getting
updates every 15 minutes, and there are humans, and there are throats to choke. Now, talk me through an environment where presumably Hawkeye is a member of that emergency CAB. What does it look like when it comes to how I'm now engaging an AI SRE? The stakes are still as high, the clock's ticking, et cetera, but what does that new team look like when multiple members, or at least one member, is not human?
A lot of this road is ahead of us, right? This is all new. Look, even for me today, I'm one of the creators of Hawkeye, and every day when I'm working on Hawkeye, we're always tweaking its personality, and every day
I'm impressed, like, wow, did it really come up with that answer? And then I have to go in and analyze how it came up with it. And I'll get to your question in a second. What I'm trying to say is, even for people like us, it's a little bit of shock and awe, and we're trying to wrap our heads around how all this works.
So I don't want to make it sound like I know exactly how the rest of the population is going to react to this, but I can hypothesize. And I think in the next generation of the war room, so to speak, the picture you painted:
When an incident is going on, there will always be humans in the foreseeable future. But they're going to have Hawkeye on Slack and they can ask Hawkeye questions so that they can come up with more concrete answers. And instead of saying, wait a second, let me look for it.
they have more meaningful information to contribute to that incident at hand. Because Hawkeye in the back is doing all the grunt work, the paperwork, the log diving, the metrics diving, the correlation, and all the dirty work that people don't wanna do so that the person can have a more informed decision. And you said a throat to choke.
Well, hopefully they're not being choked as much now, because they have richer information and can participate more meaningfully in that war room session. I think that's how we'll start. In the way distant future, will a group of Hawkeyes be working in tandem, exchanging information and resolving this on their own? Perhaps.
I would imagine that that war room would be far less interesting and entertaining, but that's the world we want to be in. It'll also consume less caffeine. Yeah, exactly. So there are really deeper questions here about the future of the org, organizational culture, and what's left for humans, which I think are really important. And one of the ancillary questions it leads me to ask you, as an expert, is:
when a human makes an error, we say humans are fallible, there's a lot of code, and, you know, hopefully it's a blameless culture, but at least we get to a root cause and we say it won't happen again. Who's responsible if Hawkeye does something that brings the system down? To be assigned, again, not blame, but is the root cause that the bot made an error? What's that like?
So look, ultimately we would hold the blame, as Newbird, right? We created this being, so to speak, this digital agent, so ultimately it all comes down to us. We can't step aside from that. So what do we do to mitigate this? Two things.
Look, ultimately, the way GenAI works, it is generating information on the fly. And by definition, you wouldn't really know; you can predict within some degree of accuracy what it's going to do. We have these things called evaluations that we run on the LLMs, so we roughly know, but you don't exactly know what it's going to do. And so there's always that variability.
Well, that's the whole point behind this. Otherwise, you have AI systems, and AIOps, so to speak, has been around forever, and that's based on machine learning techniques, so your statistical, deterministic
ability to determine what the outcome is going to be is a lot higher with those systems. They serve a purpose, but they're very narrow in applicability, with really high accuracy on certain things. So you would use it for, and I'll get to your question, but if you're doing credit card fraud detection, well, you would use that. You don't want that level of variability.
Where GenAI comes in is for these more creative, unknown human functions that we do. So with that comes the ability for it to make mistakes. And so now the question you're asking is, who shares the blame? Well, the entity that created this agentic system, which in this case is Newbird, would share the blame. So taking your question one step further, what do we do to mitigate this? Well, two things.
We understand that GenAI is subject to hallucinations, because it's creating information on the fly. So we purposely restrict what it can and can't create. In our case, we have a very strict grammar in which these LLMs can write code for us.
So when I said we extract information from the LLMs: what we actually do is, we're not sending the telemetry to the LLMs. We ask them to write a program using a language we have invented at Newbird, which we call RAIL, a story for a different time.
It is a very finite grammar, with a very finite way in which the LLMs can write code. The code that it generates mimics the thought process of an IT engineer. It has a very strict language, a certain set of commands it can execute, and mostly what it's doing is querying for information and correlating information across the different telemetry sources
which we interface with. So we are accessing the information from the customer's telemetry sources. Maybe they're using Datadog, maybe they're using Elasticsearch, maybe they're using Prometheus. Newbird is accessing that information just in time, on demand. So any numerical data we're looking at is never hallucinated and is not generated by the LLM; it's actually coming out of the database. So problem number one,
hallucination, is kind of a solved problem for us. When we say we are seeing an error message, we're not making that up. That error string exists in your database. We could not have made that mistake; the only way it could have happened is that error string got into your database some other way.
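To make that restriction concrete, here is a toy sketch of the pattern: the model emits a plan in a small, finite command language, and a validator refuses anything outside that grammar before any query runs. The grammar below is invented purely for illustration; RAIL's actual syntax isn't shown in this conversation.

```python
import re

# Invented three-command grammar; every line the LLM writes must match one rule.
GRAMMAR = {
    "SEARCH_LOGS": re.compile(r'^SEARCH_LOGS source=\w+ pattern="[^"]+"$'),
    "QUERY_METRIC": re.compile(r'^QUERY_METRIC name=\w+ window=\d+m$'),
    "CORRELATE": re.compile(r'^CORRELATE \w+ WITH \w+$'),
}

def validate(program):
    """Reject any line the finite grammar doesn't allow."""
    for line in program.strip().splitlines():
        command = line.split(" ", 1)[0]
        rule = GRAMMAR.get(command)
        if rule is None or not rule.match(line):
            raise ValueError(f"Outside grammar, refusing to run: {line!r}")
    return True

# Example: an LLM-generated plan that passes validation.
plan = '''SEARCH_LOGS source=payments pattern="connection refused"
QUERY_METRIC name=cpu_usage window=15m
CORRELATE payments WITH cpu_usage'''
validate(plan)  # raises if the model strays outside the allowed commands
```

The design choice is a whitelist rather than a blacklist: the model can only compose a handful of known-safe, read-only operations, so hallucination is contained by construction rather than detected after the fact.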
The second part of keeping things on the guardrails is transparency. Just like when you go in and ask a doctor, something's wrong with me, and the doctor gives you a pill: you don't just take it. You're going to ask, why am I taking this pill? What is wrong with me? And you have explainability, you have transparency, and the doctor is walking through his or her thought process, their reasoning as to why they think you're sick and why they're prescribing that medicine.
And so you can check that. If there's a problem in their thought process, you have many ways of cross-checking it and stopping that problem from happening. That's the same thing we do here at Newbird with Hawkeye: explainability and transparency. We have something called chain-of-thought reasoning, and when Hawkeye comes up with a hypothesis, it walks you through exactly how it came up with its reasoning.
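To illustrate what that kind of transparent output might look like structurally, here is a small sketch with an invented schema; this is not Newbird's actual report format, just the shape of a hypothesis that carries its evidence with it.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    question: str   # what the agent asked itself
    evidence: str   # what the telemetry actually showed
    inference: str  # what the agent concluded from it

@dataclass
class IncidentReport:
    hypothesis: str
    steps: list = field(default_factory=list)

    def explain(self):
        """Render the chain of reasoning so an engineer can cross-check it."""
        lines = [f"Hypothesis: {self.hypothesis}"]
        for i, s in enumerate(self.steps, 1):
            lines.append(f"{i}. Asked: {s.question}")
            lines.append(f"   Saw: {s.evidence}")
            lines.append(f"   Inferred: {s.inference}")
        return "\n".join(lines)
```

Like the doctor's explanation, every conclusion arrives with the questions asked and the evidence seen, so a human can reject a bad step instead of trusting a bare answer.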
So these are two things that we take into account when we're putting Hawkeye out there in the field and working alongside human engineers to come up with a solution. And I think that these are two very important things. I love the depth of that answer. One of the things I really like is that you've presumably intentionally branded Hawkeye not as a creepy bot avatar or as an attractive human avatar,
And by doing that, you establish some guidelines about what expectations you should have for it. And everything about the way you're describing how SRE teams would engage Hawkeye is that it's more tool-like. It has some agency, but it's tool-like. It's assisting the human. And yet, on a recent podcast, I heard you talk about, I'll use the term anthropomorphizing, but kind of treating Hawkeye like it's a being. You mentioned the word being, like it's human.
How do you reconcile those two where you've gone out of your way to not brand it as a human, but presumably you're encouraging customers to think about it as a member of your engineering team?
Yeah, Dan, that's an astute point. Look, some of that story is going to be written not by me, but by how people embrace agentic systems. And when you're in a new field like this, the last thing you want to do is be so opinionated about how you want somebody to perceive the product that you're not focusing on what the product can actually do.
I want to focus on what it can do and let people define how they want to interface with Hawkeye or other agentic systems.
I do think that there clearly is a personality in Hawkeye. Even when I'm interfacing with it, and we run through many different simulations and scenarios, I can see its chain-of-thought reasoning, and there is a personality there. So how did that personality get in? Well, some of it came from our runbooks, and some of it came from our training. A lot of it came from the foundational models and the vast amount of content that they've seen out there.
They've seen all of these different error messages and how people have reacted to it. So all of that is in there somewhere. There is a personality in these LLMs and agentic systems and we use a number of different LLMs, it's not just one. So when you put all of these together, there is some fingerprint of a personality that starts to emerge.
So you can't ignore that, and you can't just call it yet another software tool, because it is very different from another piece of software where I go in and
there's no personality there; it's just the menu bar, and you know exactly what you're doing. Here it's different. But at the same time, you also don't want to overemphasize that and force it down people's throats, because people are still trying to wrap their minds around what their life will look like with these agentic systems in the mix.
And it's not for me, or I don't think for anybody, to dictate that. That's for the people and these agentic systems to go figure out. In the meantime, what we can do is be ready to understand how people are interfacing with it and see how they want to augment the personality. And what am I talking about here?
Well, it turns out in our field, there are some engineers that like verbose responses. And then all of a sudden, there are people that like very factual, to the point, curt responses. And so I'll give you an example.
You're interviewing somebody for a job, two people walk in seemingly the same resume, identical, same number of years of experience, yet you hire one person over the other, why?
Almost always, it's because that person's personality clicked with you. And so we're going to have to face the same thing. People that are building agentic systems are going to have to understand that the system is, in a sense, up for an interview. It is trying to work alongside other human engineers in the organization, so other people have to appreciate its personality, its usability, and all of that. So
that's my long-winded way of not answering your question: we have to keep an eye on how we're going to build a personality for these things, even while we're not overemphasizing that this is a being. This one's unanswerable, but you're good at this, so I'm going to ask it anyway. I'm curious to get your reaction. So Gu and Dan are back here having a version of this conversation in a decade. We've always talked about in this space that the
future of IT ops is no IT ops, because it's self-healing. And everything we're talking about, using the principles of AI and GenAI upstream: we're detecting the bug in the code, we're adding a pod to Kubernetes, we're rebooting the load balancer, whatever, and these things go away before they generate the P0 we talked about.
Is that a pipe dream? It's kind of like full self-driving: it always seems like it's a decade away, or two years away. Do you think in a decade we'll get to the point where we're not talking about the relationship between human SREs and AI SREs, we're talking about no SRE function? I think so; 10 years is doable. I can see the path to that within 10 years. Look,
you can sit in a car today, I live in the Bay Area, so you can get self-driving cars in San Francisco, and you're sitting in the backseat, completely self-driving. Mind-blowing, right? I don't care who you are, it's just amazing to see what AI can do there. And if you can put your life in that kind of situation, well, I'm looking at IT systems.
This is a much easier problem that we're solving over here. So in a decade, 100%, for sure, we can get to no IT ops and let people focus on other things. Now, will it solve the hardest of problems, meaning go in and find that algorithmic flaw in your distributed application? Probably not. But running mundane IT operations: network outages,
bottlenecks, database capacity failures, IO problems, pod crash-loop backoffs. These are such easy things to solve. You'd think so. Well, maybe that and full self-driving will both be solved. I'm with you there. I'll take the over on that one. Yep. Hey, we're just getting started, and I feel like there are so many more interesting directions we could go here, but I've got to get you off the hot seat. We're almost out of time,
but not without answering one last important question for me. So now I want to shift to Gu Rao the human and the entrepreneur. You've had amazing success, obviously. You seem to have found the formula for entrepreneurial success. Yeah, absolutely. Very, very impressive. You're talking to a lot of first-time entrepreneurs,
aspiring entrepreneurs. What has your journey taught you about yourself? And what would you go back and tell the kid at Ocarina Networks, earlier in your career? What I've learned is that making mistakes has been my asset. The more you overthink things, that's the danger. And so I've been, in a weird way,
lucky to not overthink things and to figure things out as I go. Look, once you're in the deep end, you're going to figure out how to swim. You can't die; you're going to figure it out one way or another. Now, are you going to be the most elegant swimmer? Probably not.
But you're going to make it; you're going to figure out a way to live. And the journey may not look pretty. But if you try to make that journey pretty, then you're going to keep overthinking it and you're never going to jump in the deep end. Look, everybody has their own formula. For me, if you're asking,
just getting in there and figuring things out as the world builds around you, and dealing with situations as they arise, that's always been something I'm comfortable doing. But there has to be an underlying theme. You have to have some talent, some skills that you're going to apply to that situation. And for me, in this case, it's always been my passion for
computer science and programming. All of the various companies I've been part of have involved some form of cutting-edge algorithms, involving compression or deduplication, and it's been around data and data science, which is again what we're doing here. So that's been one common theme, but everything else has just been, hey, I'll go figure it out. One statement that you made, it's so simple, but it says so much about the entrepreneurial journey, which is that mistakes are an asset.
Yeah, exactly. Yeah, it's powerful. Hey, Gu, this has been an absolute pleasure. Thanks for hanging out.
Thank you, Dan. I really appreciated the conversation. Yeah. I'm happy to do this anytime. Where can the audience learn more about you and, of course, Newbridge and Hawkeye? Yeah, well, Newbird, yeah. Funny enough, you said Newbridge, and I just smiled, because the street that I live on is called Newbridge, Newbridge Drive. So anyway, Newbird. Yeah, yeah.
But you can go to newbird.ai and we're on all the various social media outlets. I personally am not the best at social media as you can see. I'm not really active on Twitter and things like that, but Newbird is. And so follow us there and on LinkedIn.
Right. Well, we're all rooting for your success. Thank you so much, Dan. I appreciate it. You bet. That is all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.