So this is where I think the human in the loop is important. The person is asking questions. We should try to go and disambiguate with them — "Did you mean...?" And that's a big part of this challenge-response that AI has to play with humans, rather than assume that it's a black box
and I can just deliver everything from a single-shot question. These are obvious things, but a lot of people are running fast and they're like, hey, I don't need to ask a second question. So going from a single shot to at least two shots will create a lot of value and reduce this "I don't know why they did what they did; they're not supposed to do that" kind of finger-pointing between AI and humans. It can actually be reduced by just the second shot itself.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service.
Our community is growing. As you hopefully know by now, we launched a newsletter late last year, late in 2024. Join us. There'll be a link in the show notes. You get things that don't always make it into the podcast, some additional fun facts. It's a fun community. If you like what we do, of course, please tell a friend.
Give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you share a comment, I may...
Offer it up to our listeners in a future episode, like this one from Maria in Austin, Texas, who works in marketing for a construction company and listens while riding her Peloton. Maria's favorite episode is that excellent discussion — a great one, if you haven't heard it — about the future of the creator economy with William Osman of YouTube fame,
whose homemade science videos have been viewed more than 500 million times. Great, funny conversation. William's a real talent. We learn from AI thought leaders weekly on this show. Of course, the added bonus: you get one AI fun fact. Today's fun fact: Dominic Legault published an article in Hacker Noon with the clickbaity title "Next Time You Hear Someone Say AI Will Replace Call Center Agents, Run." In it,
He writes that AI is making strides in areas like chatbots. However, it still has major limitations. People need to feel heard, understood, and supported, especially when dealing with frustrating or sensitive issues. AI may be able to process large amounts of data, but it lacks human empathy and emotional intelligence. He goes on to cite examples from Amazon and Del Taco, where automated ordering solutions were actually being operated by remote workers offshore.
and no AI was used despite their claims. My commentary: there's an important argument to be made that we shouldn't aspire to remove humans from customer support. It is not the argument that Dominic makes. Unfortunately, there's not much substance or data in Dominic's article. There is, however, a clickbaity headline, as I mentioned. We should automate repetitive tasks where machines outperform humans, but also free up humans to do what humans do best:
Be empathetic, apply critical thinking, and learn from very few examples. Let's strive to deliver the best service experiences. When that's the objective, we'll naturally find the right way for humans and machines to partner in the call center and everywhere else. And that's quite relevant to today's conversation. And of course, we'll link to that full article in the show notes. Now shifting to our conversation for today.
Today's guest is one of the most successful entrepreneurs and philanthropists in Silicon Valley. Dheeraj Pandey is perhaps best known for founding and leading Nutanix from inception through life as a public company in 2016. The company is currently valued at about $16.5 billion on the NASDAQ. In 2020, Dheeraj started DevRev, the agentic AI company that brings all data together to help companies deliver better customer experiences.
The company's raised more than $100 million to date from amazing investors like Khosla Ventures, using a unique approach that we'll discuss in a bit. Dheeraj is also a noted philanthropist, having founded Paramhansa with his wife Swapna to enhance human life through science and technology. He received his undergrad in computer science and engineering from IIT Kanpur and his master's degree in CS from UT Austin. Hook 'em Horns.
And gosh, without further ado, I've so been looking forward to this one. Dheeraj, it's a pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that illustrious background and how you got into the space. Thank you, Dan. It's a pleasure. And I came to the US in '97, and I was a PhD student at UT Austin.
And I was a systems guy: operating systems, distributed systems, networking. And I can see a lot of that come back in the last two years. So we've talked about some of this again. But in the first bubble — there was a big bubble back in the day, as you all recall, the telco bubble, the internet bubble — I figured I'd go into the industry. So I started working in the industry. I worked at...
at Trilogy Software in Austin, Texas for a year, year and a half. And then drove to California, been in California for the last 25 years and worked just building systems, data intensive systems, file servers and databases and data warehouses. And then in 2009, started a company that was focused on data, although it was data for data centers.
and powering a lot of virtualization workloads. It was a big sort of shift in the industry, going from physical machines to virtual machines. In many ways, the parallels here are going from humans to machines as well. So we'll talk about some of that virtualization that's happening right now. And within seven years, we took the company public, Nutanix. We were doing more than a billion by this time.
And then along the way, I figured the public cloud is here because we were shipping software to almost 20,000 customers and 30,000 sites. We were great at shipping software. We were great at doing customer support. NPS was very high. Even when we had done almost $8 billion worth of software selling, the net promoter score was 90+. So there was a whole method to the madness of delightful customers and
and what it meant to really connect all that stuff back to engineering. I figured there's great software to be built around this. In 2020, when COVID happened, I figured that the public cloud is actually here. I wanted to go and start a public cloud company.
And I'm a big convergence person; I've always believed the whole is greater than the sum of the parts. And that's what we did at Nutanix for data center teams. But that's what Amazon AWS was doing in many ways with infrastructure. And I think Apple did that to our personal lives as well with iOS. So this idea of really left-shifting a lot of the complexity was something that was very near and dear to me.
And if anything, you know, AI is the sort of thing that's driving everything. But what's behind AI is data. If you can't integrate everything and bring it all together, then models cannot reason. You know, models can't run workflows. Models really can't do much without data itself. So it's kind of a blood supply to the brain.
that has been a journey of the last four years. I've been a big student of AI along the way. And as you said, at the end of the day, it's about really balancing humans and machines. And the three beautiful words that you mentioned — being heard, being understood, and being supported —
is something that we have to do both in customer support and in product management. You and I have lived through a lot of cataclysmic technological shifts over the last several decades. Mainframe to desktop computing and desktop to cloud and PC to mobile and on-prem to SaaS and dot, dot, dot. Now we're in the midst of another cataclysmic shift. Is the shift
to AI-first software just another technological shift or is there something different this time around? Technology in the last 150 years has changed lives and improved lives.
I think right now the struggle that we have is we're still working 50, 60, 70 hours and complaining about work-life balance. I feel like in the next 25 to 50 years, we'll see another big thing come here. We'll probably try to self-actualize: maybe go figure out how to get to Mars, or solve climate change, and do many, many hard problems that have actually beset humankind for the last 50 years.
And the only way it will happen is if we actually leave some to the machines. And that's what I think is the journey that we're headed on right now. If we can focus on the hardest problems going forward, then we can leave some of the repetitive stuff, as you mentioned, to the machines. What's different this time? I think in the last 40, 50 years of tech, we were creating programming languages that
to instruct the machines. And that was in itself a journey, because it all started with punch cards; as you know, going back to mainframes, we were doing punch cards at that time. And that was actually atoms, not even bits. So the way to instruct the machine was handling atoms. Then came literal bits with machine language — you know, you're passing ones and zeros to machines to instruct them to do things.
And then we invented programming languages that had a little bit of English in them. Like C was a language which had a little bit of English in it. And then obviously C++, and then Java, and then, you know, Ruby and Python and Perl and all these things came about that were looking more and more English-like. Like JavaScript actually looked a lot more English-like, and it didn't need to be compiled. It was interpreted. So we started to interpret a lot of the languages. You know, machines were willing to take sort of human weaknesses — not knowing how to do syntax and things like that — and work around them. So we started to loosen sort of the guardrails of what programming meant, compared to 20, 25, 30 years ago when we had to really compile code and optimize a lot of the stuff, to where we were in the last 10 years. I think where we are now is when machines are saying, stop.
You don't need to invent a new programming language to instruct me to do things. How about I learn your language? And that to me is revolutionary. The fact that they're learning a language, and the new interpreter is an English interpreter. You know, it's a large language model; foundation models are interpreters now.
So I think that is big. And what does that mean? From punch card operators, who probably were hundreds of thousands back in the 70s, to maybe a few million people doing CC++, to maybe 10 million people doing Java, to right now where we are with JavaScript and Python, we're probably 40 million developers. But there's only 40 million people in the world who are instructing the machines how to do things. And that is unfair.
So I think AI's biggest contribution will be to take this number to maybe a quarter billion people. And these will be all sorts of people who probably never would have known how to write code but can now write prompts and instruct the machines. I think that to me is a big deal.
And the second piece — where machines are claiming that they could probably do as well as humans, or at least head in that direction — is speculation. They're saying, look, I can speculate. Because in the past, we looked at them as things that would do deterministic work. Like, it's very prescriptive: I give it an input, it goes through a program, it delivers one output.
I give you the same input, it goes to the same program, it delivers the exact same output. So it was supposed to be deterministic. I think what differentiated humans from machines was this idea of being speculative and non-deterministic, probabilistic, you know, be able to connect the dots. The fact that we can connect the dots through speculation, adjacency thinking, all this stuff was in the realm of humans.
I think for the first time, machines are saying, give me a little bit of that. I can probably do that as well. So I think it's fascinating where they're headed, where we're headed. I think as the human population dwindles — because obviously we're shrinking; the world population is not growing much further. Maybe Africa will continue to grow, but China has stalled and India is stalling. So we'll probably need some help in this direction, on this
productivity journey that we are on. I love the way you frame that, which leads me to maybe an obvious question. So you defined the whole industry that led to the explosion of storage technologies and hyper-convergence. You almost literally coined a lot of the terms that we use to describe infrastructure and storage.
Seems like a pretty radical departure, what you're doing at DevRev. Maybe talk us through kind of the through line and the founding vision and maybe the product, AgentOS. What does it do and how'd you back into that as the product? I think I'm a convergence person. I believe in bringing things together, as I was mentioning.
And Nutanix was all about that. Can we converge things and reduce silos? And if anything, AI would need those human models, you know, because with tech, we created models, object models. And like, okay, what's the sales model? What's a support model? What's a customer success model? What's product management and engineering models? You know, we had all these models that were actually buried in these work management products, whether you're doing work management in software or,
maybe operations or support or sales. So I think it's time to bring all these models together the way we understand our business and give it to AI so that it can do a few things for us. So we have to, what we call, build this knowledge graph. And the knowledge graph is the foundation for
what we have been thinking about business for the last 100 years, especially in the last 50 years, for sure. And we figured that if you don't converge the support model and the sales model and the engineering and the product model and so on, then we would not be able to do justice to what these machines can actually do with us and for us.
So we built a knowledge graph around our customers, products, people, work, users, their activity and sessions, and all the unstructured documents within the enterprise. So now you have a lot. And this is extensible. It's stretchable. You can keep adding more and more things to it.
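One way to picture the kind of extensible knowledge graph described here is typed nodes connected by named edges. This is only an illustrative sketch — the node types, names, and methods are invented for the example and are not DevRev's actual schema:

```python
# A minimal, illustrative knowledge graph: typed nodes (customers, tickets,
# documents, ...) plus edges that connect them. Extensible in the sense the
# text describes: new node and edge types can be added without changing the
# structure itself.

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> {"type": ..., **attributes}
        self.edges = []   # (source_id, relation, target_id)

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, **attrs}

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbors(self, node_id, relation=None):
        """All nodes connected from node_id, optionally filtered by relation."""
        return [t for (s, r, t) in self.edges
                if s == node_id and (relation is None or r == relation)]

kg = KnowledgeGraph()
kg.add_node("cust-1", "customer", name="Acme")
kg.add_node("ticket-42", "ticket", status="open")
kg.add_node("doc-7", "document", title="Payments FAQ")
kg.add_edge("cust-1", "filed", "ticket-42")
kg.add_edge("ticket-42", "references", "doc-7")

print(kg.neighbors("cust-1", "filed"))  # ['ticket-42']
```

Search, workflows, and analytics can then all be built over this one connective structure, which is the "whole is greater than the sum of the parts" point made above.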
But you have this connective tissue now, which we'll go and create a search engine around. So we have a search engine, which actually delivers enterprise search around everything: around your customers, your products, your people, your work — what they do, what they did in the last 90 days, things like that — around your users, and of course, around all the documents that you actually have created, the public documents, the private documents.
So we built a search engine on top of this knowledge graph. We built a workflow engine — like, how do you do automation around these things so that not everything is done by humans. And then an analytics engine, because if you look at analytics in the past, it was basically left out by SaaS folks. SaaS folks were like, okay, we don't believe that we can do analytics, because integrating is your problem.
So they left it to the buyer's IT department to go and do Snowflake or
Databricks or things like that. And we're like, well, we've got to left-shift some of that too. I mean, if you go to a chief customer officer and talk about AI, or a chief product officer and talk about AI, who's going to do the integration work for them? So we do a lot of that analytics for them as well. Now they can extend it and, you know, figure out what kind of questions they want to ask, but they don't need a data warehousing team and data pipelining team and ETL team
And a data analytics team to really figure out customer 360 or product 360 or user 360. I mean, these things have been so elusive in the past because IT didn't have time to deliver these things for a chief customer officer or chief product officer or a chief revenue officer. So we feel like we can left shift a lot of that stuff to us.
This can happen because in the Cloud, you can actually spin up a lot of warehousing and search engines and
you know, workflow engines and things like that. So that when we go to our end customer, which could be a chief customer officer, even a chief information officer, you know, they don't have to really kickstart new projects to really enable and get AI to deliver things. You know, we can deliver a conversational AI experience for customer experience, for example, in a matter of less than a month.
We can deliver enterprise search out of the box in less than a month because we've left shifted so much of the complexity of what it means to integrate data and bring data from your legacy Salesforce or legacy ServiceNow or legacy Atlassian Jira and things like that. But it all comes in one knowledge graph with one search engine, one analytics engine, and one workflow engine. So the solutions that we have been
going to our customers with is conversational AI for customer experience, which means not just our customers, but also their users and their customers. How do you put this on their website and things like that? And we don't like the word chatbot, because chatbots flatter to deceive. I think conversational AI really for the first time will use AI, as opposed to using rule engines and things like that, to deliver stuff.
We deliver enterprise search out of the box in very unique ways where you pay for consumption rather than subscription. Subscription is actually quite a waste in search. And at many places, we go and displace legacy apps that were built in that era of each department had its own apps, like customer support, for example. So there's Zendesk and Salesforce Service Cloud. We displaced them with this new stack. And
And I think these are the things that really have brought a lot of delight to our customers. So I get the knowledge graph pulling together these kind of disparate systems to help you understand the customer journey, and your conversational AI around it. Maybe if you could tell us a customer story. I just want to make sure I understand the personas and the value from the perspective of the enterprise as well as the end customer.
Yeah, so a customer like Bolt, for example, they actually are a payment provider for a lot of merchants. And they had Zendesk, which is a customer support software.
And AI was actually a bolt-on. Like, you know, what does AI even mean? In fact, ticketing systems didn't believe in conversations. They're like, what do you mean? So Intercom and Zendesk were fighting with each other. One of them said, you don't need tickets. The other one said, you don't need conversations. So where is the new world headed? Conversations are in the domain of machines because, you know, you want to really be able to answer in real time if a question arrives, right?
And so you need to solve for conversations. But at some point, those conversations will get elevated to a ticket. And that's where human handoffs happen. So you need a system that basically hugs both of these things tight.
And you can't have two companies come and deliver this because you know what the finger pointing really results from here, right? So Bolt was unable to do this with Zendesk, so they replaced all of Zendesk. We went and copied their million-odd tickets in a matter of a couple of weeks, and there was no professional services for this because
Our knowledge graph is built with an Airdrop technology. We do the ETL work — the extraction, transformation, loading work — through our technology, as opposed to making it a human problem. And all of a sudden, they have their legacy data in DevRev. They have a new, modern customer support system. But now they don't have to make a choice between conversations and tickets. They get both.
Now, what does it mean to get conversations to be truly machine-driven so that you don't need to have humans? You need a great knowledge base. What does it mean to actually create a knowledge base and things like that? Now, knowledge base and deflection is still a hard problem in search. You need to be really good at deflection rates and things like that. So now they deflect a very large percentage of the conversations; those don't have to get elevated to humans through tickets at all. In fact,
going forward, they've asked us, can they deliver this conversation experience to their merchants? Because their merchants have websites as well. So they would like to be a service provider conversationally, for not just payments but other things that happen in commerce as well. And we are working on a piece of technology which we call workspaces. So we can deliver workspaces where now Bolt can have a workspace for each of their merchants as well. So we are not just delivering value to our customer, which is Bolt, but also to our customer's customers and their users. So there's a big ability to really go after this value chain: customers, customers' customers, and customers' customers' users. A lot of this will get done through this concept of conversations, but it's powered by search, analytics, and workflows. I didn't think there was any debate about the future of tickets. I think tickets are an artifact of the past.
The new unit of record — you alluded to it, and I know you said Intercom and Zendesk were kind of battling it out. Isn't the future of service either no service or a conversation? In the future, what's the role of that kind of antiquated system of record where the, quote, unit of value is the ticket? So you brought this up in your introduction about the value of humans, like the place for humans in this whole thing. And I think tickets are the place for humans.
So when things need to be escalated to someone, it's through something, and that something is the ticket. Now, our goal is to reduce that number. Because earlier, we had a lot of repetitious, bottom-of-the-pyramid stuff that could have been done through conversations, and it will get done through conversations. If anything, there's a parallel thing that happens on the ops side with alerts and incidents.
Alerts and incidents should be in the realm of machines. They should do as much of deduplication and deflection and classification and clustering and categorization, all sorts of things machines can and must do.
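The deduplication and classification step described here can be pictured as grouping raw alerts by a fingerprint before anything is ever elevated to a ticket. This is a toy sketch with an invented schema; real systems use much fuzzier clustering:

```python
from collections import defaultdict

def dedupe_alerts(alerts):
    """Group raw alerts into candidate incidents by a simple fingerprint
    (service + error signature). Illustrative only: this shows the shape
    of the dedup/classify step machines should own, not a real pipeline."""
    incidents = defaultdict(list)
    for alert in alerts:
        fingerprint = (alert["service"], alert["signature"])
        incidents[fingerprint].append(alert)
    return incidents

alerts = [
    {"service": "payments", "signature": "timeout", "host": "a"},
    {"service": "payments", "signature": "timeout", "host": "b"},
    {"service": "search",   "signature": "oom",     "host": "c"},
]
grouped = dedupe_alerts(alerts)
print(len(grouped))  # 2 candidate incidents from 3 alerts
```

Only the groups that machines cannot resolve would then cross the line into a human-facing ticket, which is the noise reduction discussed below.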
But the moment they can't, it should be now a ticket. And a ticket could have a lifecycle of months maybe because it's a feature request. It basically is something that takes... I mean, unlike, let's say, e-commerce where an order gets delivered within an hour or maybe a couple of days or a couple of weeks, I think in the world of software development, for example, a ticket could last three quarters. So you do need that lifecycle and it actually will then...
kind of bore a hole into product management with enhancement requests, and that enhancement will get done by developers with, of course, parent issues and child issues and GitHub transactions. So there's a whole lineage of things that's getting done just to fulfill one order, which is a feature request. So I don't think tickets are dead. It's just that we need to reduce the number.
And AI's goal is to make it reductive. You need to reduce a lot of these things. I mean, right now, a big pain point in support, for example, is L3, L4 tech support. And most things are being done where every alert becomes an incident, every incident becomes a ticket. And just getting rid of that noise is a really hard problem. And AI must help because every day people are, I mean, even small companies get hundreds if not thousands of alerts.
and many of them actually are incidents. Like we are doing supervised fine tuning for incident management. As reasoning gets better and better, and you're able to reason why this outage could have happened, for example, and that's maybe because we upgraded something that was not fully tested. Maybe let's look at the GitHub code and see what transactions were recent.
I think you will start to see people saying, look, let the machine handle even some of the incidents that are deep and probably require a lot of different signals to be looked at and so on. But
Until that time, a ticket is still something that's elevated to the humans themselves. It's an important distinction that you're making. The ticket is a level of abstraction above an incident or an alert or something that's spoken in a language of machine to machine. Tickets are communicated in the language of humans. I'll buy that one. It's an interesting response. So on this podcast, we frequently talk about what it means to exercise AI responsibly. And you and the company are really...
pioneering — a lot of us in the space are really pioneering — the concept of replacing kind of traditional human tasks with automation. And we must continue to ask not just what could go right if it works, but what could go wrong. And I can think, just based on the story you told about Bolt, of potentially
advising, let's say, a merchant of Bolt about a purchasing decision or the cost of something or billing information based on biased training data or misinformation, or
advising an employee using DevRev on the internal side about a customer's journey that may be misleading or inaccurate due to things like bias in the training data. You could think about very real negative consequences. As techno-optimists, which we both are, what does it mean to you to think about exercising AI responsibly, given what we know about
about the potential impact of a false positive or potentially a false negative? Yeah, I mean, obviously a very deep question. And I would say that there's a Maslow hierarchy of needs that we need to actually go with here. And at the base of this, responsible AI is about security and making sure that people only see what they're supposed to see and nothing more. It's like the principle of least privilege, right?
we need to actually apply that here as well, because as self-service really takes off and becomes a prevalent thing, you have to make sure that the principle of least privilege is still applied to the whole enterprise knowledge graph and all this sales data and support data and engineering and product data and so on and so forth.
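Applied to retrieval, least privilege means dropping anything the user is not entitled to see before it ever reaches the model or the answer. A minimal sketch — the result schema and permission sets here are invented for illustration, not DevRev's actual model:

```python
def filter_results(results, user_permissions):
    """Enforce least privilege at retrieval time: keep only search hits
    whose access-control set intersects the user's permissions. Anything
    filtered here can never leak into a generated answer."""
    return [r for r in results if r["acl"] & user_permissions]

results = [
    {"doc": "pricing-internal", "acl": {"sales"}},
    {"doc": "public-faq",       "acl": {"sales", "support", "public"}},
]
support_user = {"support"}
print([r["doc"] for r in filter_results(results, support_user)])  # ['public-faq']
```

The design point is where the filter sits: before generation, not after, so the model never sees data the requester shouldn't.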
Second piece is around machine reliability — the fact that it should not be taking 15 seconds to respond at times, then take a minute to respond, and probably never respond sometimes, and so on. Very important, because otherwise we lose trust in the reliability of these machines as well. So it's equally important for us to really think of these systems the way we thought about e-commerce systems or online systems. Like, this has to be online all the time.
Design plays a big role in this, because some of these foundation models sometimes take time. If anything, the o1 and o3 models, for example, are actually doing a lot of next-token thinking, because they're reasoning and doing chain of thought and so on. And those traces — like, now I'm going to do this, and now I'm going to do this — you can't expose to the end user. So there are going to be delays involved
in the experience. And humans have done this too. I mean, there have been delays when you're holding on a call center call, a customer service call, but you've had to really put design around it. So we have to think about delays and things of that nature with design as well.
I think on top of that, it's really important to know what you're doing with, for example, search. Search can be about adjacency and things like that, but how do you really build the best vector-distance algorithms and custom embeddings and things of that nature? Because search can also be a big source of hallucination, for example. You can start to surface things that are very, very far off.
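A bare-bones illustration of the vector-distance idea: score documents by cosine similarity and refuse to return anything below a threshold, since an unguarded nearest neighbor is one way retrieval feeds hallucination. The vectors and threshold are invented for the example; production systems use learned embeddings and approximate-nearest-neighbor indexes:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, docs, threshold=0.8):
    """Return doc names whose embedding is close enough to the query.
    The threshold is the guardrail: without it, the nearest neighbor is
    returned no matter how far off it is."""
    scored = [(cosine_similarity(query_vec, vec), name) for name, vec in docs]
    return [name for score, name in sorted(scored, reverse=True)
            if score >= threshold]

docs = [("refund policy",   [0.9, 0.1, 0.0]),
        ("api rate limits", [0.0, 0.2, 0.9])]
print(search([1.0, 0.0, 0.0], docs))  # ['refund policy']
```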
So this is where I think the human in the loop is important. The person is asking questions. We should try to go and disambiguate with them — "Did you mean...?" That's a big part of the challenge-response that AI has to play with humans, rather than assume it can deliver everything from a single-shot question.
These are simple things, they're obvious things, but a lot of people are running fast and they're like, hey, I don't need to ask a second question. So going from a single shot to at least two shots will create a lot of value and reduce this "I don't know why they did what they did; they're not supposed to do that" kind of finger-pointing between AI and humans. It can actually be reduced by just the second shot itself.
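In code terms, the single-shot-to-two-shot move might look like the sketch below: before answering, check for an ambiguous term and come back with a "Did you mean...?" instead of guessing. The ambiguous terms, responses, and schema are all invented for illustration:

```python
# Illustrative map of terms that should trigger a clarifying question.
AMBIGUOUS_TERMS = {"account": ["billing account", "login account"],
                   "credit":  ["account credit", "credit card"]}

def answer(question, clarification=None):
    """Two-shot answering: if the question contains an ambiguous term and
    no clarification has been given yet, return a 'Did you mean?' prompt
    rather than a guessed answer."""
    for term, meanings in AMBIGUOUS_TERMS.items():
        if term in question.lower() and clarification is None:
            return {"type": "clarify",
                    "prompt": f"Did you mean: {' or '.join(meanings)}?"}
    return {"type": "answer", "text": f"Resolved: {clarification or question}"}

first = answer("How do I reset my account?")
print(first["prompt"])   # Did you mean: billing account or login account?
second = answer("How do I reset my account?", clarification="login account")
print(second["text"])    # Resolved: login account
```

The second shot costs one round trip; what it buys, per the discussion above, is fewer wrong answers and less finger-pointing.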
And then I think on top of that, this idea of lineage and provenance — you know, can you at least point to the things that you...
really are quoting right now also becomes very important in this as well. I think we talked about synthetic data a little bit with supervised fine tuning, how you really look at generating synthetic data and how you have evaluator models because see, I think only iron will sharpen iron. I think the genie is out now and you can't put this back in the bottle. So then the question is, how do you do quality assurance of the synthetic data that you're generating?
Now, if you say it's going to be humans, it's probably not going to work, because if machines generate a lot of data, how can humans go and look at all that data? So really thinking hard about a QA model, an evaluator model, a teacher model while the student's doing all this stuff
is the yin to the yang, actually. We are spending a lot of time on evaluator models, for example. And the big piece there is moderators and what do the moderators say. So when it comes to search, for example, when we answer questions, we send it to another model and say, what do you think? Does this answer look more precise? And this evaluator model is as big a part of our thinking
as just doing a quick search and trying to instantly gratify the end user. So we have to think the way we did software engineering in the last 30, 40 years: there are the software developers, and then there is a QA person. Now, obviously, we got rid of QA in the last 10 years, but there's still pair programming. The idea of these paired models is probably going to be just as important to keep building that trust.
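The shape of that answerer/evaluator pairing can be sketched in a few lines. Both "models" below are stand-in functions (keyword overlap, not real LLM calls), invented purely to show the loop: draft an answer, have a second judge score it, and escalate to a human when the judge doesn't sign off:

```python
def draft_answer(question, docs):
    """Stand-in for the answering model: pick the doc sharing the most
    words with the question. A real system would call an LLM here."""
    return max(docs, key=lambda d: len(set(question.split()) & set(d.split())))

def evaluate(question, answer):
    """Stand-in for the evaluator ('judge') model: score how much the
    answer overlaps with the question. Real evaluators are themselves
    models; this only shows the shape of the check."""
    overlap = set(question.lower().split()) & set(answer.lower().split())
    return len(overlap) / max(len(question.split()), 1)

def answer_with_judge(question, docs, min_score=0.2):
    candidate = draft_answer(question, docs)
    # Only ship the answer if the judge signs off; otherwise escalate.
    if evaluate(question, candidate) >= min_score:
        return candidate
    return "escalate to human"

docs = ["refunds are processed within 5 days", "api keys rotate monthly"]
print(answer_with_judge("how long do refunds take", docs))
```

The trade-off is the one named above: the extra evaluation call costs latency, in exchange for not instantly gratifying the user with an unchecked answer.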
But yeah, I think by and large, you know, we have to be thinking safe in many of these things and really thinking hard about lineage and provenance, which is to me, you know, that's why I mentioned very early on about how AI is becoming an engineering problem now. You know, just doing prompts and GPT wrappers only takes you so far. You mentioned trust is such a foundational component.
Sometimes it seems ironic: we're talking about artificial intelligence and we're talking about trust, which seems like such a human principle. Sometimes lost in this discussion is that humans are fallible. Humans give wrong answers too. And one of my hypotheses is that while we're kind of undergoing this global cultural experiment with AI, we're less tolerant of machines making the same mistakes that
we were tolerating when humans made them. And if you think about it — I say this frequently on the show — AI is perfectly designed to replicate human bias, so we shouldn't be surprised. And in fact, I think, to the very thoughtful answer you gave, there are ways that we can
use challenger techniques and various things to actually improve the accuracy of the AI's best answer. What's your perspective on that — the human and the machine, and kind of culturally this adoption of what it means to speak to a machine and maybe get an answer that we don't agree with? You know, it's interesting you mention this. Yesterday I was just having this exact discussion with our leaders and employees, because we probably have a caste system
that was always there in various forms with racism and things like that. But now it is emerging with machines as the sort of the lower caste. And it was there for 40, 50 years of tech because we said, be deterministic. Don't try to be probabilistic. You know, as I said, you know, early on,
But now for the first time, they're saying, but why can't I be speculative? So I think just giving them a little bit more leeway to actually do adjacency thinking, speculative thinking is itself going to be a big cultural shift. Because for that, we used to hire McKinsey.
And you're like, now can't I hire a machine to do that? But if we were to think about the problem statement: look, if you hire young people and they're not fully trained, they haven't gone through boot camp, they don't have the experience, they will make the same mistakes that machines probably are going to make. But we're going to be more tolerant of the failures that humans make
than we will be of machines. This is an era that you'll see the innovators and the early adopters basically say, "Look, we'll let the machine fail just as much as humans fail."
And then the late majority and the laggards will wait for this technology to get better and better before they even put their hands on it. And that's fine. I think you can see that in Europe today, for example. Europe regulates, and it's going to have to think hard about where it stands on AI. Even before we've done something like the first 10% of it, or maybe just the first 5%, there are so many regulation hawks in Europe.
And I think we have to be careful about those things. Look, we're being extremely unfair to machines when they've barely even been born. I mean, it's like a toddler less than two years old, and we're saying, no, no, no, you've got to do this, you've got to know X, Y, and Z, the rules of the world, and so on. So I feel like there's going to be a bell curve of adopters.
And the ones who adopt fast are the ones who are more progressive, and they'll probably see the advantages of these things before others do. And that's fine. I mean, Europe will probably be a laggard in many of these things compared to the U.S., compared to China, compared to India and many other countries. Caste system. I never thought about that analogy, but it's...
disturbingly appropriate. I teased in the opener that when you raised money from Vinod Khosla and others, you took a very novel approach. And because you're talking to a large audience of entrepreneurs and venture investors, I think it's
useful to have you tell us a little bit about how you funded DevRev. Yeah, Vinod and I have known each other for a very long time. In fact, my first job in California was at a Vinod-funded company when he was still at Kleiner Perkins. He hadn't started Khosla Ventures yet.
So it's been a 25-year journey with him. Now, he might not have fully known me back then, but then he funded me at Nutanix in 2011. They put in about $25 million over a couple of rounds, and we returned probably $600-700 million to them. He was the first institutional investor in OpenAI, and he was the first one to talk to me, in 2020, about GPT.
Because I barely knew anything back then. So I got curious and started reading more, and I'm like, wow, I've got to be more current. One thing led to another, and he basically said, look, I'll write you a check; go figure out what you want to do. And
over the course of the last four years or so, we've raised about $150 million. A lot of it is also crowdsourced: a lot of CEOs putting in $2-3 million each, and a lot of accelerators and smaller funds putting in $3-4 million over the last three, four years. We even did $7-8 million with blockchain technology. Basically, it was a token.
We call it RevD, and it was on the Polygon/Ethereum stack. People said, you know, I'm going to do $50,000, and in many ways it actually became an aggregate. It's not a token; it's a security. And if anything, some of that could come back now that we're looking at crypto one more time in the last two, three months. Maybe.
I think one of the things that I still want to do with blockchain is to really reward users for their usage. How you build a community with a DAO is something that's very top of mind for me. It's gotten back again in my... And once interest rates are lower and PLG is back, we feel like the way to really nurture a community is through gamification and reward systems,
which tokens can allow for. And imagine a CMO, a chief marketing officer, going and burning those tokens, buying them out with cash. That's the way to really do this. Now, how will taxes work? I don't know how tax will work here, and 1099s and all that stuff. But I feel like the community needs a way to be rewarded, and there's no other way to do this than with tokens.
It's a fascinating topic. I wanted the audience to hear a little bit about that background. I think we started about three separate conversations; we might have to have you back to continue one or more of them. But that topic of DAOs and tokens, and maybe the future of finance, is a fascinating one. Yeah, no, I mean, I don't know about Bitcoin and some of that, but I feel like there's something to be said about distributing rewards and tokens of gratitude, no pun intended, especially with your end users. That probably requires some disruptive thinking. Rewarding for participation or value creation in the community. Yeah.
You know, I'll let this one go way over time, but I'm not letting you off the hot seat without answering one last important question for me. I've heard you share the origin story: you know, Dheeraj with a suitcase and a little bit of money in your pocket, leaving your hometown in India. And gosh, look at you now. If you could...
have a conversation with that scared kid leaving India for the US out of IIT, what advice would you give him? I think one is, you know, humanism takes you a long way. And I was always a very affable, connecting person, not extroverted, but probably an ambivert.
I would say that, if anything, the more I grow, the more I realize, as you mentioned, what values are. And to really connect with people through humanism is the best way to do business. That also makes for a very honest company, because then empathy is not made up and brand is not made up. Brand is not campaigns; brand is what you deliver.
So I would actually go back and say, look, your bet on humanism is not a bad one. You have an open invite. Come hang out again, all right? Thank you. Also, before we wrap, I wanted to thank Steve Kaplan, a mutual friend of Dheeraj's and mine, who helped me prepare for this discussion. Thank you, Steve. And gosh.
That is all the time that we have this week on AI and the Future of Work. Of course, as always, I'm your host, Dan Turchin, and we're back next week with another fascinating guest.