Ray Wang believes decentralized human intelligence is the best model for AI because human intelligence is inherently decentralized, with people learning at different rates and possessing diverse skills, abilities, and powers. This variability makes collective human intelligence powerful, and centralizing AI would defeat the purpose of replicating this dynamic, adaptable system.
Ray Wang outlines five maturity levels of AI: 1) Augmentation, where machines help humans perform tasks more efficiently; 2) Acceleration, where tasks are completed at a much faster rate; 3) Automation with human supervision; 4) Agents, which bundle multiple skills to assist with tasks; and 5) Advisors, which can think and make decisions on behalf of humans.
The gap lies in the vendors' ability to articulate a compelling vision for AI while also providing practical on-ramps for enterprises to adopt and benefit from the technology. Vendors that fail to bridge this gap risk losing market relevance, as enterprise leaders prioritize solutions that align with their operational needs and deliver measurable value.
Ray Wang predicts billions will be wasted because many organizations lack clarity on the level of data precision required for AI-driven decision-making. Different industries have varying thresholds for accuracy—e.g., 85% accuracy is acceptable in customer experience but catastrophic in finance or healthcare. This mismatch leads to inefficient investments and unmet expectations.
Ray Wang envisions a future where industries like retail, manufacturing, and distribution share data across value chains to predict inventory, demand, and pricing more accurately. Similarly, sectors like communications, media, and tech will collaborate to understand customer preferences and monetize digital goods effectively, creating a give-get model for data sharing.
Ray Wang advises enterprises to focus on where and when to insert human judgment in AI processes. Organizations should assess whether they have enough data to achieve the required precision and identify tasks that require human oversight. The goal is not to replace humans but to enhance decision-making speed, accuracy, and quality.
Ray Wang emphasizes the need for transparent algorithms, explainability, and human-led AI models to ensure responsible regulation. He warns against centralizing AI regulation, as it risks perpetuating cultural biases and stifling innovation. Instead, he advocates for decentralized, culturally sensitive AI systems that reflect diverse ethical values.
Ray Wang envisions a future where human augmentation and autonomous robots play significant roles in daily life. He predicts a shift from consensual technologies to mindful technologies, where AI works on behalf of individuals rather than networks. He also highlights the potential for universal basic income and a purpose-driven economy as humanity transitions from menial tasks to more meaningful pursuits.
I actually think the best example of AI is human intelligence. It's super decentralized. People are learning at different rates. They have different skills, different powers, different abilities, right? That variability is actually what makes the collective human intelligence so powerful. To recreate that in a centralized model defeats the point.
Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 312. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service. Our community is growing. I get asked all the time how you can meet other listeners.
To make that happen, we just launched a newsletter where we share weekly insights and tips that don't always make it into the podcast, as well as opportunities to engage with the community. We'll share a link to register for that newsletter on Beehiiv in the show notes. Of course, if you like what we do, please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I may share it in an upcoming episode.
Like this one from PJ in Tallahassee, Florida (go Seminoles), who's a real estate agent and listens while jogging. PJ's favorite episode? Oh, it's that great one with Josh Drean, co-founder of the Work3 Institute, about how leaders create human-centric jobs in the era of AI. We will link to that episode as well in the show notes. We learn from AI thought leaders weekly on this show, of course. The added bonus: you get one AI fun fact.
Today's fun fact: Adam Davidson writes in Pocket-lint that tech companies need to stop wowing us with what their AI features will be able to do at some undisclosed date in the future and start being more realistic with our expectations. This was in response to Apple's recent Glowtime event where, uncharacteristically I should say, Apple said very little about the availability of its most eagerly anticipated AI features, like Siri enhancements. But really, all the big players, including Amazon, Google, and OpenAI, have done the same. Another example: months back, OpenAI announced voice and video enhancements to ChatGPT, the 4o model, would be available, quote, in the coming weeks. We still haven't seen them. My commentary:
We know enterprise trust in AI is at an all-time low. Now is when vendors should be under-promising and over-delivering. Transformative technologies take decades to achieve widespread adoption.
AI adoption cycles are being compressed, but we do risk another AI winter if vendors aren't more responsible about distinguishing what's vision from what's reality. Of course, we will share the link to that full article in show notes. Now shifting to today's conversation.
Today's guest is a force of nature who needs no introduction, but I'll introduce him anyway. Ray Wang is the CEO and founder of Constellation Research, one of the most respected tech research firms. He's also an award-winning author and a past tech leader at companies like PeopleSoft, which became part of Oracle. He's a podcast host, a blogger, a venture advisor, and a tech pundit who appears almost daily on media outlets like CNBC,
Fox Business, and Bloomberg. Fun fact: this is the first episode in 312 that was actually delayed, because today's guest got pulled into an ad hoc real-time interview on Fox Business. Ray has published nearly 400 episodes of the wildly popular DisrupTV, which is live-streamed on YouTube. The show generates more than 130 million impressions per month.
One thing Ray hasn't done in that illustrious career is appear as a guest on this podcast. We're about to change that. Ray Wang, it's my pleasure to welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that background and how you got into this space.
Hey, Dan, thanks a lot for having me. And thanks a lot for having me on the podcast. Finally, I'm here. So it's the first Friday in a while we actually haven't had DisrupTV, and that's actually kind of the reason we can do this. But yeah, so, God, it's not about me, but it's really about this incredible journey in enterprise software I've been able to watch. I've been perched here for, I guess, like 20 years watching enterprise software. Maybe longer. Wait, 96? Yeah, I got in in 96. I started out...
working in consulting at Deloitte. Then I ended up at Ernst & Young doing SAP implementations, and then got into building stuff at Oracle, and then left, did a startup, almost went public. We were worth 2 billion at one point in time, and then we came crashing down to nothing, and then I went back to Oracle,
then went to PeopleSoft, and then went back to Oracle. So after doing that whipsaw, I actually ended up at Forrester Research for five years, covering ERP, covering things nobody wanted to cover that suddenly became interesting. And then, of course, I left and joined a bunch of ex-Forrester folks.
The company got sold to a company called Prophet. And then I built Constellation in 2010, and we've had a great team since then. So it's been an amazing journey. And I would just say that, if anything, when you talk about transformation and when you talk about, you know, how people are building the next generation of AI today, a lot of those terms, what we talked about in digital, apply now. It's just a little bit different, because in the internet age, in the digital age, I could easily say it was open, there were more players, it was decentralized, and things were coming down in terms of cost. In the AI world, it's a little bit different. It is closed, it is centralized, it is more expensive, and there are a few players that are going to win. So we have to change that, right? It didn't have to be that way. And I think that's where the revolution is going to happen. So Ray Wang is the guy
that people like Neil Cavuto from Fox News and Jon Fortt from CNBC go to when they want a little glimpse into the future. I'm going to first ask you a question about the past. So roll back the clock to November 2022, the launch of ChatGPT. What has surprised you most about the world's reaction to that conversational flavor of AI since that launch?
So it was the ability to capture the imagination that we could personify a machine to actually be an AI, a super intelligence that's out there. And all the dystopian sci-fi that we watched during the pandemic suddenly came to life, along with all the optimism of what the future could possibly bring us in a brand new world. And the only thing I could think about, because the geeky side of my head was like,
All right, why are these people trying to solve deterministic problems with a probabilistic model? That's ridiculous. You want one plus one to equal three sometimes? No, stop. So if you're going to solve things that are creative and probabilistic, use a probabilistic model. If one plus one should still equal two, let's make sure we do it that way. So we've come back to our senses. That's the good news. But we have gone through
I mean, it all started from I, Robot, to Turing, to John McCarthy at Dartmouth, to, hey, expert systems are here and they're actually not bad. Wait, those don't actually work that well. Hey, is that a Chihuahua or a muffin? And then, of course, now Gen AI. So we get the fits and starts. There have been lots of AI winters. For the folks that are really doing this stuff, it's very interesting. But there is a lot of effort and energy here, and there's been a lot of progress.
So we recently had a conversation on this show with Peter Voss, who coined the term AGI, artificial general intelligence. And I'm going to slightly butcher his version of what AGI is, but it's basically getting humans to be tricked into thinking they're talking to a human when they're talking to a machine. Kind of the Turing test. Is that the right vision? Obviously, I've got a bit of an opinion here, but I would love to hear Ray's vision for what we should be trying to accomplish with AI.
Yes. Right now, we look at five maturity levels. Level one is augmentation, right? Can a machine help us do something better? We were completing 15 tasks per hour; now we can do 40 tasks per hour, right? And that's great. Or we used to have to look at and find things across five different systems; the augmentation is there, so now it's a lot easier for me to get to the information I needed. Okay, great.
There's acceleration, where instead of getting to 40 things, we're getting to 500 things, right? And that's going to happen over time. And at some point, we're going to have things that are automated but still require human supervision, right? Then we get to agents, which is like the rage right now. Everything's agentic. Like, oh, wow, it's agentic. Okay, great, what did you mean by that? Okay, so we've bundled a bunch of skills into, you know, I call it a bot at best, right, to actually solve and help with tasks and accelerate that, fine. And at some point, we'll get to advisors that are actually thinking for you. The challenge with trying to think about this as a superintelligence is you're assuming that it's learning, and it's learning on its own, and there aren't multiple sets of superintelligence. We all have this view of the centralized Borg that's kind of, like, thinking and going to create all this stuff.
I actually think the best example of AI is human intelligence. It's super decentralized. People are learning at different rates. They have different skills, different powers, different abilities, right? That variability is actually what makes the collective human intelligence so powerful. To recreate that in a centralized model defeats the point. It's got to be decentralized going forward because
there are different tasks, there are different probabilities. If you really wanted to take a model out there and say every human is really a source of energy, with a probabilistic model sitting on top, superimposed, a superposition on top of everything, then you'd be like, yes, it's the randomness of all this that actually creates that collective intelligence. I know I went super deep there very, very fast. But my point is that just saying this is a superintelligence and it's going to be all-knowing is not enough. We actually have to factor in that there's going to be so much variability and so much choice. There's this human desire to make everything conform and centralize, and we have to fight that, right? The centralized notion of scarcity versus a decentralized notion of abundance is really going to be the battle, the war, over how we see intelligence.
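For readers who want the five maturity levels Ray walks through in one place, here's a minimal sketch as a Python enum. The level names paraphrase the conversation; this is not a formal Constellation taxonomy.

```python
from enum import IntEnum

class AIMaturity(IntEnum):
    """Five AI maturity levels, paraphrased from the conversation."""
    AUGMENTATION = 1  # a machine helps a human do a task better
    ACCELERATION = 2  # the same tasks, completed at far higher rates
    AUTOMATION = 3    # automated, but still under human supervision
    AGENTS = 4        # bundled skills ("a bot at best") assisting with tasks
    ADVISORS = 5      # systems that think and decide on your behalf

# Example: agents sit one level below advisors on this scale.
assert AIMaturity.AGENTS < AIMaturity.ADVISORS
```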
So you and your team are paraded into every vendor briefing, every big tech vendor's briefing. They want you in the room to talk about their vision for AI and, typically, kind of finger quotes, what could go right with the technology.
The other half of your role is talking to enterprise leaders. I'd be curious to know from kind of the catbird seat that you sit in, is there a difference between what you're hearing from the vendors about their vision for AI and maybe what the enterprise leaders are expecting from those vendors? You know, that gap is really the difference between billions of dollars of revenue. And I think that's an important point that you point out, Dan. The vendors that know what's going on have
an amazing vision saying here's the future. And they've also figured out the on ramps for their customers to be able to participate in that process. The ones that haven't are the ones that will fail in the marketplace. So there's a certain amount of inertia that's required to be successful by the vendor. But there's also the end user consumption of this that actually pays the bills. That's always been the product market fit and that's the gap.
I'll give you an example. I am going to venture, and I may regret this later since it's recorded and broadcast live, but that's what makes this fun. So I would venture that billions of dollars are about to be wasted in AI, mostly because
Organizations don't know how much data they need to get to a level of precision that their stakeholders will trust. So I'll give you an example. 85% accuracy in customer experience? I'm okay with that, right? A call might go bad. You might be upset at someone. Yeah, but you're getting it better than 50-50, which is better than what I have now.
85% accuracy in supply chain? Ooh, that's bad. That's like millions of dollars per hour that could be lost or tens of millions of dollars per hour that could be lost because your supply chain needs to be up and running at about 98, 99%.
85% accuracy in finance? Somebody's going to jail. That's not going to happen. 85% accuracy in healthcare? Okay, yeah, you've already hit the limit. And so we have this notion that the data that you have, and all the publicly available data that's out there being scraped on the internet, there's nothing new after that, right? Most of that data has already been scraped. And so the future of these large language models that are powering generative AI, which has everybody all excited, is that we've got to get to the smaller language models and the very small language models. So the next 10% is just as valuable as that first 85%, and that last 5% is going to be as valuable as the first 95% in terms of getting to a level of precision.
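As a concrete reading of that precision point, here's a minimal sketch of a human-in-the-loop gate. The threshold numbers and function names are illustrative assumptions drawn loosely from the figures Ray cites, not anything proposed in the conversation.

```python
# Hypothetical accuracy bars per domain, loosely based on Ray's examples:
# 85% is tolerable in customer experience, catastrophic in finance.
REQUIRED_ACCURACY = {
    "customer_experience": 0.85,  # a bad call is recoverable
    "supply_chain": 0.98,         # downtime costs millions per hour
    "finance": 0.999,             # "somebody's going to jail" otherwise
    "healthcare": 0.999,
}

def route_decision(domain: str, model_accuracy: float) -> str:
    """Automate only when the model clears the domain's trust bar;
    otherwise insert a human, per 'where and when to insert a human'."""
    bar = REQUIRED_ACCURACY.get(domain, 0.99)  # default to a strict bar
    return "automate" if model_accuracy >= bar else "escalate_to_human"

print(route_decision("customer_experience", 0.85))  # automate
print(route_decision("finance", 0.85))              # escalate_to_human
```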
And so that means we're going to enter a world of data collectives, where we share data, we broker data, not within an industry but across value chains. So retail, manufacturing, and distribution are going to share information on supply, inventory, stock, elasticity of pricing, demand, interest. So that's going to be one. Comms, media, entertainment, tech,
and telco are going to be the same thing. They're all going to be selling, they're all going to be figuring out, what customers and what personas are interested in a digital good distributed on a technology platform that actually has a digital monetization backbone on the other end. And so that's going to be how we share data, share information, have a give-get model that's going to drive that. And then the other interesting piece is, most organizations are going to put something out there and don't realize that
The most important question out there is who do you sue when something goes wrong? Have you thought through the liability aspects on the back end, right? And so all that's going to come into place. And of course, when we take it back to the future of work, the smart organizations are going to realize when and where do we insert a human. Okay, so many sub questions we need to unpack there. So in the last
few weeks we've seen Benioff on stage talking about Agentforce, and Carl at Workday Rising announcing their new AI agents, and Bill McDermott at ServiceNow doing something similar. And the drumbeat is, every human is going to have a colleague who's a bot; there's an AI agent for every role. And
then when I think about the enterprises trying to make sense of this, the thing that creates angst for me is the human caught in the middle, the employees who aren't the C-levels in the deep carpet. And they're thinking, what does it mean for me? Do I
Need to upskill, reskill? Is it time for me to hang up the cleats? What do I do? Because it feels like my job, it's imminent. My career, my job, my existence as a human is being threatened. How do you kind of unpack that really complicated set of conflicting data?
Well, you know, going from linoleum to two-inch pile rug is a whole different experience. And so we get one or the other. But on a serious level, I think we have to go through where and when you insert a human. And let's go through that exercise. There are seven things that we look at. First, if it's stuff where you can learn the patterns and after that put it into a machine, right? So highly repetitive tasks are going to go away. If it's something that has massive volume, right, you might add more people to it, but you're going to augment with more tech and more software over time. Most organizations are going to do that. Lots of nodes of interaction: I don't know, I might be able to multitask three things, five things; I can't do 500 things. And so that's going to be a place where machines are probably going to win out. And if you need to actually accelerate the time to completion,
that's also going to happen with more machines. Lots of volume is more machines. But when we get to complexity, this is the interesting piece. There is a lot of stuff we can't model in algos. And that's where human judgment and decision-making are still going to be very important. When we look at creativity, we're really good at making the rules, but we're even better at breaking the rules
with the right incentives. And that's what humans do really well. And of course, there's physical presence. Do you really want to have a human interaction or would you like to have a machine-based interaction? You're going to give folks choices across the board. So when we look at it in that context, you also have to take into account macro factors, right? As a futurist firm, we look at things like population dynamics and growth. In the US, it's kind of flattish, maybe
We might get some kind of isosceles triangle out of this puppy. In China, it's an inverted pyramid. In Japan, it's an inverted pyramid. In Europe, it's an inverted pyramid. Maybe in India, you have a pyramid. And maybe in South America, you have a pyramid.
And so aging populations are really going to need some level of augmentation and AI to be able to make up for all the work that cannot be done because the replacement of the population is not going to be there. We're going to go from 8 billion to 7 billion to 6 billion to wait, how do we actually build a road? Who's going to be there to take care of seniors? How are you going to actually deliver teaching?
Where do you learn your skills? So all that is going to change. So from the macro perspective, it's a great thing, because we're aging out. In the immediate future, it's a scary thing, because you don't know what jobs are going to win out. And that's the long way of answering your question. But I do believe that those organizations who thought, we're going to wipe out 100 FTEs with AI, those folks are going to find out it'll be more like 15, because of where and when you insert a human.
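Those seven factors lend themselves to a quick checklist. Here's a rough sketch; the trait names paraphrase the conversation, and the scoring rule is an invented illustration, not a Constellation framework.

```python
# Traits that favor machines vs. humans, paraphrasing Ray's seven factors.
MACHINE_FAVORING = {"highly_repetitive", "massive_volume",
                    "many_interaction_nodes", "needs_acceleration"}
HUMAN_FAVORING = {"hard_to_model_complexity", "creative_rule_breaking",
                  "physical_presence_preferred"}

def where_to_insert_a_human(task_traits: set[str]) -> str:
    """Toy scoring: any human-favoring trait keeps a human in the loop."""
    if task_traits & HUMAN_FAVORING:
        return "keep a human in the loop"
    if len(task_traits & MACHINE_FAVORING) >= 2:
        return "automate"
    return "augment the human"

print(where_to_insert_a_human({"highly_repetitive", "massive_volume"}))
# -> automate
print(where_to_insert_a_human({"massive_volume", "hard_to_model_complexity"}))
# -> keep a human in the loop
```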
Organizations, and maybe I'm on a soapbox here, like Klarna, that say we're going to replace Salesforce and Workday because we have this partnership with OpenAI, do a disservice to all the other organizations not called Klarna that are trying to figure out this complicated relationship with AI. What's your coaching to enterprise leaders about how to think about where to cut, where to add, and how to onboard AI in a responsible way?
So we have to start by asking, do you have enough data to be able to actually hit that precision? And the responsible way is, where are we going to insert a human? Because if you don't have enough information, or you have just enough to automate a small part of that task or process, then you start asking that question: oh yeah, I need human judgment here, I need human judgment here, I need a control here, here's where humans are in the loop.
I don't know how much software we're going to replace, but what I can tell you is that we're abstracting transactional systems. We're abstracting engagement and social systems. We're abstracting experiences so we can ultimately get better at decision automation.
The goal here isn't more AI. The goal is actually to make better decisions, faster decisions, more precise decisions. And that decision velocity is where we're headed. So if said vendor decides that this is going to replace two other vendors, that's great. But did you really get to better decisions? And if the answer is yes, then you've succeeded. So I was talking to the GM of People Operations for Microsoft yesterday, 208,000 employees, and he was
giving me his perspective on this question about the macro economy. And I want to ask you the same question. So one of the things he said, Phil said to me, is, 30 years ago there were 3 billion fewer people on the planet. And here we are; we don't have 3 billion unemployed people. So the global economy is pretty resilient, but it changes. When Ray and Dan are having a version of this conversation in a decade, pull out your crystal ball: what's different about the global economy, other than mass unemployment or mass extinction of the human species?
The singularity is here? I don't think we're going to head in that direction. I think it's going to be different. We're going to see a lot of human augmentation, and we're also going to see a lot of robots, autonomous robots, that are actually playing a role in our daily lives. It's not going to be like Rosie from the Jetsons, but you're going to have robots that are doing tasks that are out there. You're going to have software robots that are actually going through contracts,
your procurement contract. You might have an agent, your own agent, that's actually doing your procurement for you. So, hey, if bananas drop to 17 cents a pound, maybe we should buy, like, 100 of them. Okay, great. Or there's a sale going on over here; okay, place your bid now. Or I want tickets to a game. Or this new technology trend is out there; I looked at my learning profile, I've got a 60-year curriculum, you know what, I'll sign up for that course.
All that stuff will happen. But what has to occur first is the design. We're going to go from, and this is my friend Ari Kayumi, who's been studying this stuff with BJ Fogg, who looked at behavioral science: we've got to go from persuasive technologies, where we were before, which were sucking up all my time and attention, to consensual technologies, where we actually have some give-get, to mindful technologies, where the agent and the AI are actually working on my behalf, not the network's behalf.
How do we get there? Right now, I think, the more I talk about this, the more I have to say: the design of AI right now, in a centralized fashion, is going to hurt us more. Until we figure out how to deliver AI in a decentralized model, we will have no sense of freedom, we will not have the ability to deliver on massive innovation, and we won't be able to deliver the personalization that we're looking for. So we've got to fight this urge for centralization and actually make room for decentralization so that the innovation is there. So by 2050, I hope we've figured that out and moved to a more mindful approach. And I believe that
all this notion of energy consumption that's going to be in the way will be gone, because we'll have figured out that we can power everything with solar and batteries and small modular reactors. So the compute power issue is no longer a question. And then the question that's a little bit different is, do we enter a world of universal basic income? Because we couldn't figure out why Roddenberry didn't leave us notes on how to get to the Star Trek economy. That's a whole different problem. But
I think we still have to find purpose. And I think that notion of humanity always being restless and always trying to do more or find more or ask those questions, I think our meaning and our purpose are going to change. And I don't know what that looks like right now; I haven't thought enough about this. But I would say that our purpose goes from menial tasks and just trying to survive to, I don't know if it's enlightenment, or someone found a chakra over here. It's not going to be like
that, right? But it's going to be, you know, we're going to find new places of what that purpose or vision will look like. And that's what's going to capture our imagination. I love that answer. Yeah, I envision a purpose economy, where we shouldn't think about outsourcing meaning or the pursuit of purpose to a bot. I frequently say anything that can be predicted is better left to machines, but anything that requires empathy or rational thinking is what makes us human. And I don't think that's ever going away. You know, that quest to be Data, and Data's quest to be human, is never going to end. So a few of your answers touch on a really important theme that we talk about frequently on this show, and that's
exercising AI responsibly. So maybe just to take a thin slice of that conversation, I'd love to get your perspective on the right way to regulate AI. Super thorny issue, but starting with, who should regulate it? How should it be regulated? Are we on the right track? Or, what would Czar Ray do?
There would be no Czar Ray; I don't believe in autocracy. But I think there are five factors. A friend, David Bray, and I put out something in MIT Sloan Management Review probably about seven, eight years back, really about people-centric AI principles. It was really built off of five things that we looked at. Transparent algos, so people can actually see what's going on. Explainability: if you're discriminating against left-handed people with purple hair and you didn't really mean to, well, hey, that's a problem, let's go fix that. And
we need to be able to reverse that and then learn from it, right? We have to build that reversibility and learning into our systems. And then, of course, over time, we have to figure out how to train these models to get to that level of precision. And of course, you want to have a human-led model. Otherwise, what happens? Well, the bots take over and Skynet wins. So those are the basics to get started. But going beyond that, there's this human urge to regulate, right? Governments have purpose in regulating, but it also corrupts governments in regulating. And so it's going to be interesting to see how governments will be able to infuse their cultural values or ethics or regional differences into their AI. I just came back from Riyadh, from the Global AI Summit, and
as a person who would always be nationalistic, a proud citizen of whatever country I was in, I put that hat on. And I'm sitting here in Saudi Arabia, and I'm like, that is brilliant. Of course, you have to have an Arabic language model, right? If I were in Taiwan or Hong Kong, I would want a traditional Chinese language model, right? And
In order to preserve your culture and your values, right, it's going to come down to that. And so this notion of sovereign AI with the ethics and values of every culture, I'm going to create a new country and the day off is Wednesday and no one's allowed to eat chicken because bird flu is really bad and it's caused all these pandemics, right?
That could be the value in my country. And so I can see these AIs competing with each other for input and energy and investment and innovation and ideas. And I can't tell you what ethics are the right ethics, right? Because every set of ethics is a thorny subject. And so trying to impose your values from one country onto another is already thorny enough; it causes wars, right? It's going to happen at the cyber level with AI.
And so I hope people just realize that people have a lot of differences, but 90% of the time we are far more similar than people actually realize. I think it's so critical that we're having this dialogue. It is, as we both said, a thorny issue. On one hand, we have to acknowledge that these models
are trained on data that was produced by biased humans. So nobody should be surprised when AI perfectly replicates human bias. It's not like humans left to their own devices are so good at exercising good ethics. So we shouldn't expect anything more from AI. So one part of me
enthusiastically agrees that regulation from these central bodies is really just going to perpetuate biases that are latent in the system. And then the other side of me is equally concerned about having the AI vendors, quote, grade their own homework. Where do you sit on that spectrum, given that there's not a good answer right now? I would love to get your perspective. There isn't a good answer, but I would say, please design for serendipity.
The best ideas in life have all happened with someone in the shower, someone bumping a beaker in an experiment, someone meeting an awesome person at a bar and getting married, right? This is serendipity, right? I mean, there's this notion of probability that actually has to exist. Otherwise, the world would be just a really, really boring simulation. Oh, wait, maybe we are in one. I don't know. But anyway.
You know, Ray, we're going to pick that one up in a subsequent conversation because I feel like, you know, it's our obligation to at least force the conversation. It's not happening enough. One of the things we are doing, we have our Constellation AI Forum, and we're doing one in New York and one in the Valley. And one of the exercises in this forum is we've got, you know, about 100 really smart people, policymakers, practitioners, and pioneers in AI. And we've asked them to take
an exercise. In this case, the UN has 30 human rights. We're there during UN General Assembly week. And we want you to say, what do these 30 human rights look like in an age of AI? How would you recreate this? And one of the examples I've publicly said many times is, I want the right to be disconnected and not be seen as a terrorist. So-
Can I pay in cash? Can I not be on a device? What does that mean? Right. And so we've got to go through these issues. I'm sorry, we're going to do it on the fly, just like everything else we do in humanity; we wait till the last minute. But you know what? It's worth the discussion. So we're having this fun exercise where people break out into groups, and they're going to do a readout. It'll be fun to hear what people say. That's responsible AI to me. That's leadership: just catalyzing the conversation. Thank you.
Man, this one flew by. We're going to have to pick this up in another conversation, but I've got to get you off the hot seat, Ray. But you're not going anywhere without answering one last question for me, right? In addition to being a tech pundit, you are a foodie like myself. So I'm giving you some, like, Harry Potter Floo powder, and in the span of a day, you can be anywhere in the world that you want to be. Map out your culinary journey, one day's worth of culinary journey in the life of Ray.
I want to be in Tuscany, in Italy, for some really, really good meats and pastas. I would love to be in Thailand for just a mix of spices and seafood. I would jump into, believe it or not, Australian beef is really good if you haven't had it, so Australian Wagyu beef. Then in Japan, just coming off of the fish market, something really weird or exotic, like some kind of uni, sea urchin, or a fish egg I've never had. And then jumping back into India, because there's just so much flavor and complexity out there. India, Indonesia, some good food there. And of course, France, right? I mean, some of the best cooking techniques,
right? Head out to the south of France, see what's out there, see what a new Michelin-star chef has out there for you. And then, of course, the US is the best with... no, actually, you know what? Toronto actually may be better in some ways for, like, regional ethnic foods, hole-in-the-wall places where you just make amazing discoveries. And then, of course, I mean, you would end back either in Las Vegas or New York, at least for me. We're going to meet for lunch at Jiro in Tokyo.
Oh, yes. You're going to join me for a nightcap in Sorrento for some limoncello. How about that? Everything else we can decide. We still haven't hit South America. I got to get my... I got to go to BA for amazing steak. Shouldn't I go there for steak before I go to Toronto? That one caught me off guard.
Toronto has really good ethnic food, some of the best Persian food, some of the best Indian food. There's this place called Fishman's in Toronto that actually takes seafood, and you get stacks of crabs. Look it up. It's awesome. So, I mean, there are just all these wonderful places. And then we'll all get on those... we'll all get on Ozempic and some GLP-1 and forget about it. Thank you for being a guest on AI and the Future of Food.
Sounds like a new episode offshoot for you. You bet. Ray, where can the audience learn more about you and the work that your team's doing? Yeah, check out Constellation at www.constellationr.com. You can find me at raywang.org. And of course, you can catch up with me; I think LinkedIn seems to be the best channel these days, so try to connect with me or follow me there. And there you go, the legendary Ray Wang. Thanks for hanging out, man. It's been a lot of fun.
Hey, thanks a lot, Dan. All right, that's all the time we have for this week on AI and the Future of Work. Of course, we're back next week with another fascinating guest.