This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
There are so many reasons why AI doesn't work sometimes, right? There are so many use cases, so many seemingly easy ways to just gain more productivity, to get more things done. But why does it fail sometimes?
Today I'm excited for a conversation we're going to be having about operational muscle, what that means, and how I think it might be the missing key to every company's AI strategy. All right, so I'm excited for this conversation. And if you're new here,
Welcome. Thank you for tuning in. My name is Jordan Wilson and this is Everyday AI. We are your daily live stream podcast and free daily newsletter helping everyday people like you and me not just keep up with what's happening in AI, but how we can all actually leverage it to get ahead to grow our companies and our careers. And that starts here on this podcast and live stream, but it's literally starting here at
NVIDIA GTC. So yeah, if you're listening on the podcast, we are technically right here at GTC with NVIDIA, I think one of the most exciting tech conferences in the world. So we're bringing a lot of great NVIDIA partners to you on the podcast.
Enough about that. If you haven't already, please make sure you go subscribe to the newsletter, youreverydayai.com. We're going to be recapping today's conversation and a whole lot more. But enough chit-chat. I'm excited for today's guest. So please help me welcome Andy Lin, the VP of Strategy and CTO for Mark III Systems. Andy, thank you so much for joining the Everyday AI Show. Thanks for having me. Appreciate it. All right. So can you tell everyone a little bit, what is Mark III Systems? What is it you all do?
Absolutely. So Mark III, we're an NVIDIA Elite partner, and we specialize in working with large organizations, including Fortune 500 companies, industry, research institutions, and universities, on building their AI, Gen AI, modern HPC, and digital twin centers of excellence.
So, yeah, we're going to dive into all of those different things, but I want to start at the top. Tell me about this concept of operational muscle and how this can really be a missing piece for enterprises that maybe are still struggling with AI adoption. Yeah, so operational muscle is a term we sort of coined to talk about really the intangible aspects of operating AI at scale.
Most talk in the industry today is all around technology and models, right? It's the idea of platforms, GPUs, software, how you train models, PyTorch, et cetera. But actually, not a lot of thought is given to education, to how you build teams, how you build culture, specifically for large organizations that are looking to do this efficiently at scale. When we talk about the center of excellence,
It's the idea of being able to have a centralized platform, right, anchored obviously by our partners at NVIDIA. But to be able to enable researchers, scientists, engineers, data scientists, folks training models to be able to use that tooling in a centralized way, but be able to maintain individuality and to focus on their work first and foremost. Because everyone's working on a different type of problem. Everyone's doing their life's work.
completely separately, you really can't slow these folks down. So how do you bring these two things, you know, sort of perfectly in line with each other? It's actually a really tricky thing. And when we talk about operational muscle, just to bring it back to that term, it's all around the idea of being able to enable the people process culture part of the equation to make sure you're at equilibrium with the technology.
So I think when people are dissecting AI and how they can make it work in their organization, maybe the process part of those three things pops into their mind, but maybe not necessarily the people and the culture. Explain why those things are maybe just as important as the technical side. Absolutely. Yeah. I mean, it's funny. People is actually probably the most important part of the equation when enabling an AI strategy, right? You're talking about artificial intelligence.
But it's actually the human part of it, how you build teams and how you enable a mechanism to distribute education in a practical way, that I think is really the key that will determine success or failure. You know, when you talk about an organization, it's what I call the idea of me and us, right? The organizations that do it the best
are the ones where you have a community of researchers and data scientists and folks training models. And then on the other side, you have the technology teams who are focused on enabling platforms to serve those folks. You have an equal amount of me versus us for each of those groups. And to be able to enable that is really key.
To talk about the specific teaming aspect of people and culture: with AI and digital twins, more than ever before, in order to enable a successful strategy at scale, you have lots of different types of people working together. I read a study that said if you're trying to, for instance, build a digital twin to simulate a factory, simulate a hospital, whatever that might be, you need 10 different types of people all working together.
If you think about it, it kind of makes sense, right? You've got 3D artists, you've got machine learning folks, you've got developers, you have the subject matter expert, maybe in healthcare if you're trying to digital twin a smart hospital. You have the nurse, you have the physician. These are people who in the past would never have had anything to do with each other, right? Engineers work with engineers,
nurses work with nurses, developers work with developers. But because of the idea of enabling scalable intelligence that can frictionlessly move anywhere through these mechanisms of AI and digital twins, these groups have to work together well. And I tell everyone, from the intangibles perspective, regardless of whether you're a teenager just getting into the space, a professional, or someone looking to reinvent yourself:
You need to be comfortable working with people that have nothing to do with anything you've ever worked on in the past. And this is a dramatic change from the past. And the organizations that I've seen do this well, through programs like hackathons or getting folks together to solve problems in this way by using these mechanisms, are the ones that are ultimately successful.
Do you think that maybe one of these ongoing challenges, at least when we talk about enterprise adoption at scale, is how people view AI? You know, unless your company has been using it for many decades, when we think of generative AI and large language models, I think sometimes people just think of it as
a personal productivity tool, and they don't necessarily always think about how can this transform our department? How can this change the future of work for our sector? Is that something you see, that a lot of people maybe just look at generative AI, at least at a smaller scale, as, hey, this is about personal productivity, and maybe that's why it gets siloed?
I do. I think ChatGPT has done a lot of good and perhaps not so good things as far as sort of setting the idea of what it is, right? I don't mean ChatGPT specifically. I just mean the idea of chatbots and agents, right? They are very helpful, right? Obviously, the ability to type in what you want and then have a coherent human-like response to solve your problem or to give you an answer is actually really helpful. But
Around generative AI and LLMs, the idea is to be able to make sense out of any form of unstructured data in ways that you haven't been able to before. So conversational AI is one example, but, for instance, we do a ton of work specifically in the healthcare and life sciences space,
where the idea is you can comb through proteins, make sense of them, discover new drugs, and find new precision-based therapies in ways that you would never have been able to do before, using DNA and RNA strands, et cetera. And that's just one example of a way that it's going to be utterly transformational and affect millions of lives that has absolutely nothing to do with personal productivity.
So it is good in the sense that it's brought a lot of attention, obviously, to the space. And people understand where it's going. You see what's happening specifically with NVIDIA in the ecosystem around agentic AI, which is really the next chapter beyond generative. Generative is the idea of basically being able to create things like words or pictures
based on a lot of unstructured data, right? Agentic is really the idea of having an agent that essentially uses those as mechanisms, but is able to take action like any human would, in an automated way, depending on how you want it, and to scale frictionlessly, because after all it is AI: it's an agent anywhere in the world, anywhere you might need it within your business or your enterprise or your industry or your research. So it's pretty exciting as far as the possibilities that may lie ahead for us.
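To make the generative-versus-agentic distinction concrete, here is a minimal sketch of an agent loop in Python. It is not Mark III's or NVIDIA's implementation: the "model" is a hard-coded stub standing in for a real LLM call, and the tool names, item names, and goal are invented for illustration. The shape is the point: a generative step proposes the next action, the loop executes it, and the outcome is fed back until the model decides the goal is met.

```python
# Minimal, illustrative agent loop: a generative model proposes the next action,
# the loop executes it, and the result is fed back until the task is done.
# The "model" below is a hard-coded stub standing in for a real LLM call.

from dataclasses import dataclass, field


@dataclass
class AgentState:
    goal: str
    history: list = field(default_factory=list)  # (action, result) pairs


def check_inventory(item: str) -> str:
    return f"{item}: 42 units on hand"  # placeholder for a real system call


def draft_reorder(item: str) -> str:
    return f"reorder request drafted for {item}"  # placeholder for a real system call


TOOLS = {"check_inventory": check_inventory, "draft_reorder": draft_reorder}


def propose_action(state: AgentState) -> tuple[str, str] | None:
    # Stand-in for the generative step: a real agent would ask an LLM to pick
    # the next tool and argument based on the goal and the history so far.
    if not state.history:
        return ("check_inventory", "IV pumps")
    if len(state.history) == 1:
        return ("draft_reorder", "IV pumps")
    return None  # the model decides the goal is met


def run_agent(goal: str) -> AgentState:
    state = AgentState(goal=goal)
    while (step := propose_action(state)) is not None:
        tool_name, arg = step
        result = TOOLS[tool_name](arg)              # take the action
        state.history.append((tool_name, result))   # feed the outcome back
    return state


if __name__ == "__main__":
    final = run_agent("keep IV pump stock above the reorder threshold")
    for action, result in final.history:
        print(action, "->", result)
```

In a real system, the propose_action stub would be replaced by a call to a hosted or local model that reads the goal and history and returns a tool choice, and the tools would wrap actual business systems rather than returning canned strings.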
Sure. So you gave this great example, you know, talking about digital twins. And, you know, I think you said that a study showed you need at least, you know, five to 10 different types of people, right? So that really explains, you know, maybe how the interpersonal might change, you know, when you use AI to scale. What about
intrapersonal, right? Like, that's something I think about a lot, especially as we go into agentic AI, where we're giving these AI systems agency, right, to make decisions with our data. And a lot of times you have mid-career professionals that are like, wait,
Those are the decisions I've been making. Agency is something I enjoy. Even internally, how should business leaders really get that good fit between people, culture, and process? How do we need to be changing how we think about work? That's a really good question. I wish I had a really great answer for it. I think at the end of the day, one of the keys in this space is you need to empower the people
who are actually building these things to make them part of the solution. I think a lot of the fear in society about these agents doing work is that you're afraid somebody's going to come over the top and force an agent down. And I think, if you just think about it as a human, if you have a team,
right? How can they be part of the solution to help you create agents that amplify what they're actually doing in the marketplace, right? Make them part of the solution in actually building agents to amplify the pieces of work that they don't like, so that they can focus on the pieces of work they do like
and that they're great at. It's almost like I want to build a twin of myself, literally a twin, not a digital twin, but literally a twin. So I have Andy one here and I have Andy two here. What are the things that Andy one doesn't like? Andy one doesn't like things like doing expenses and all of those tasks. Andy one does like working with organizations to help come up with strategies and working with our team to build things. So how can I create an agent, that Andy two, to be able to do the things I don't like?
And I think you're right. You hit the nail right on the head with that word: agency. You want to give people agency to help them craft the strategy to make that happen. And I think organizations that think about that, from a good leadership and a good organizational management standpoint, are going to be the ones that are successful, just like with anything else. So that's a really good question.
You know, getting back to this concept of operational muscle, which I love, right? Building muscle usually involves, first, a little pain and being uncomfortable, right, before you can get that repetition in and actually be stronger. In your experience so far, working with different clients and customers,
What are some of those initial things that hurt clients when they're trying to fully implement it and that they really have to get through those reps and then finally they can see the gains on the other side? What is that struggle that may cause pain in the beginning?
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI.
I think the biggest thing is just an inability to explain maybe your first few experiments up the stack. I think one of the things that we help a lot with is helping the organizations that we work with set the proper expectations internally, that it's going to be a long road, right? But if you don't decide to get on it now,
you're not going to be able to catch up when your competitors are already ahead of the game in a year. That's what I love about the space. It's all about sweat equity and earned equity. The amount of work you put in is how far ahead you're going to be, even if you don't necessarily get to the end of the road right away. If you train a model that's 50% effective, you may say, oh,
man, what a waste of time, right? But obviously over the next couple of years, your model to predict pricing, to do forecasting, whatever that might be, may get up to 90, 95%, but you have to go through the reps in order to do that. So I think the pain, for organizations that perhaps don't set the right expectations, is having to explain that process. We're actually going through a similar part in the ecosystem right now, in my opinion, specifically around digital twins, right?
Because I think in the long run, what's going to happen is everyone is going to have an AI center of excellence, a twin center of excellence. They're going to talk to each other, communicate with each other. Because if you think about it, what's the goal? The goal is to build scalable agents, models, experiences, right, that simulate the performance of,
you know, some expertise in the organization, that's ultra-scalable, that can go anywhere at the drop of a hat. What is scalable expertise? Intelligence, right? Right now it's primarily been driven by LLMs and generative AI, which is the brains and the ears: can I talk, can I understand, can I listen?
The next chapter is all about the eyes, because if you think about it, people are visual. We all exist in the real world. But workforces are hybrid now by sheer nature. They're geodispersed. So how do you create a mechanism to have fruitful conversations about physical spaces when people are spread out? You have to have ways to be able to create a replica of how that actually works in the real world.
If you look at what NVIDIA is talking about, they're talking about physical AI, they're talking about robotics, they're talking about agentic AI. These items are all coming into alignment. Now, to tie it back to what you originally asked specifically about operational muscle, these things don't just happen because you want them to happen, right?
We all wish we could get to the end of the next five years and then, oh, it's working. But that's not how it works, right? You have to have people to build these pilots, to learn what you don't know, right? In the digital twin side, it's all around creating a 3D representation of your store, of your factory, of your school, of the human body, right? And then being able to iterate that over time to improve the fidelity and the quality of the digital twin.
And then you mix in AI to help you build it faster, to be able to present that digital twin to what I call a regular person, right? Someone who can just use it, right? If you think about maybe my mom or something like that, right? Can they use a digital twin to figure out how to plan their next trip? Or, on the enterprise side, can I present it to a facilities planner to be able to plan what my next store looks like, right? So you have to be able to mix in all those things, and it just doesn't happen.
Right. You know, it starts a day at a time. If you just want to create, say, a hospital, how do you start, right? And this is something we're working on pretty significantly out in the field today. You start with half a room. Right? You start with a bed.
Right? You make the bed great. You show people what the bed's like. OK, the bed's great. OK, build out the other half of the room. The other half of the room? Pretty great. OK. Then pretty soon you have a hospital, right? You don't say, hey, I'm going to create a hospital, it's going to be ready in three months. That will not work. Because by going through that process, people understand
what you're trying to do, and they have ideas and they get bought in, tying it back to agency to be part of building what that looks like. And it creates this sort of positive feedback loop that's entirely powered by people.
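The guest doesn't name a toolchain for that "start with a bed" iteration, but one common way to represent such a scene is OpenUSD, the format used across the NVIDIA Omniverse ecosystem. Here is a minimal sketch using the pxr Python bindings; the file name, the placeholder cube standing in for the bed, and the rough dimensions are all assumptions made for the example, not an actual Mark III workflow.

```python
# Iteration zero of a "start with the bed" digital twin: one room transform and
# one placeholder bed prim, written to a USD layer that teammates can open,
# comment on, and refine in later passes.

from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("hospital_room_v0.usda")   # hypothetical file name
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Half a room starts as nothing more than an empty transform to hang things under.
UsdGeom.Xform.Define(stage, "/Room")

# The "bed" starts life as a unit cube; fidelity (real dimensions, materials,
# sensor bindings) comes from later iterations, not from iteration zero.
bed = UsdGeom.Cube.Define(stage, "/Room/Bed")
bed.GetSizeAttr().Set(1.0)

xform = UsdGeom.XformCommonAPI(bed.GetPrim())
xform.SetScale(Gf.Vec3f(2.0, 0.6, 0.9))        # rough bed proportions, assumed
xform.SetTranslate(Gf.Vec3d(0.0, 0.3, 0.0))    # rest it on the floor plane

stage.GetRootLayer().Save()
print("wrote", stage.GetRootLayer().identifier)
```

Iteration one might swap the cube for a real bed asset, add the other half of the room, and start binding live data to the prims, which is where the fidelity and the shared understanding the guest describes come from.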
Yeah, Andy, I like how you just broke down the digital twin concept a little bit, because I think sometimes, even myself, when you think about digital twins, you're like, okay, it's at scale. It's massive. It's being able to simulate trillions of data points instantly. But you said, let's start with one bed. So it's really turning this concept of digital twins and scaling with AI on its head a little bit.
I'm interested, why that approach, starting with just one bed or half of a room, when it seems like the thing people are most attracted to is, like, oh yeah, now I can, like Earth-2 last year at the keynote, right? People are just thinking huge, huge, huge. So what's the benefit of a digital twin that's small, small, small? Absolutely. So it ties back to operational muscle and knowing what you don't know. I think
Earth-2 is amazing. And don't get me wrong, right? I'm the biggest fan. I was blown away by that last year. Yeah. But if you think about it, for an organization, that's the equivalent of being years out. And it gives you a great target. And NVIDIA is the ultimate visionary company in the space, right? Without them, none of us would be able to do what we do.
But being able to execute, right, to get down that road, and this, quite frankly, is part of our job, is all about baby steps. When I say digital twin to an organization, to a room full of 10 people, I'll get 10 different answers on what I mean. How do you create consensus, right? The way to create consensus is to build a micro version of what that looks like: a bed, right? A cloud formation, right? Part of the body, a small organ, right?
Show that to all the people, have them comment on it and all agree: yes, that's what I meant by digital twin. And from there, if you think about it, the rest of it is just 10,000 iterations of that small piece. And I think where it goes wrong is when somebody tries to build the whole thing without building that consensus. It only takes a few people in that organization, talking about people, process, culture, to render the entire thing not successful.
So being able to take the pilot and the iterative approach and focus on half technology, half people, process culture, it's not only a good way to do it. I think it's the only way to be able to preserve the people consensus part of this, right, in the organization. And I think, you know, I joke a lot of times that
I love the idea of the art of the possible and the what-ifs, right? But if I hear a fifth what-if in a meeting, I'm out. Because it means that they don't really understand what it's actually going to take to grind and iterate through that process. Now, if they understand the idea of starting small and running a pilot, and I can tell that they're really built to be able to sustain the road with us, we're all in with them.
And I think the cool thing about right now in this space is that the folks that are taking the first steps, of which, you know, we have lots of great examples in distribution and manufacturing and healthcare and other fields, these are going to be the leaders in three, four, five years, because they decided to take those steps now. And I think that's what particularly excites me. And in every single one of these organizations,
Specifically, you have leaders and you have people bought into this process who understand what the road's going to be. And I'm extremely excited, obviously, with some of these announcements at GTC with NVIDIA around physical AI, agentic AI. You can also tell, obviously, NVIDIA's seeing this thing come together just like we have and we've believed in the last few years. So one kind of common thread that I'm picking up on here is this concept of explainability.
Right. You know, really building that operational muscle and getting those, as an example, the ten different personas involved, is maybe starting small and starting with something that's explainable. Is that the case? Because, yeah, a lot of times, you know, companies maybe are sitting on mountains of data, and they've had data
for many decades, but haven't gone all in on AI yet. Maybe they just want to do the whole thing at once, you know, overnight, and they want to see transformation as quickly as possible. Is it maybe just as important to make it as small as possible and to really, you know, be able to kind of uncover the veil of explainability, so to speak?
Absolutely. And I think I get, quite frankly, afraid when somebody tries to go too big too soon, just like you were mentioning. But explainability is a really important part of the equation, and it's an area that's quite frankly unsolved. There are a lot of companies that do nothing but focus on the explainability of models.
And to a certain extent, I think you're going to see the explainability of digital twins and simulations also emerge as a field as that space grows going forward. It's about being able to explain to people, especially when there's an error, right? You know, you have a model that's 98% accurate and you may have a 2% error.
Like, why did that happen? Even though we all know humans maybe have a 10% error rate, right? Yes. But you can attribute it: okay, it's that person, right? It's John who made that mistake, right? I hate to put it that way. But if you think about it... It's always John. It's always John. Yeah, that John. But, you know, if you think about it, I think people are trying to come to some sort of consensus, or sense, about what happens when that happens in an AI world, or what happens in a simulated
world around digital twins. So yeah, honestly, I'm kind of fascinated to see where that goes. Obviously that ties into things like governance and regulation, which I feel like maybe none of us really have the answer to yet. But yeah, that's definitely something to take a look at. And I think for that reason also,
It's even more important to build operational muscle, to start small, to build a pilot, to get everyone on the same page, to go to iteration two, to make sure everyone's still on the same page. Because anytime you have a consistent community, it makes the idea of explainability that much easier, right? Yeah, that's a great point. You know, another thing, Andy, I'm curious about is building up this operational muscle and the people, the culture, the process.
Obviously, the buzzword in 2025 has been agentic AI. And when you couple that with digital twins and multi-agent environments,
how do you, or how can you, protect that people-culture-process side when, sometimes, the deeper we get into this AI, specifically multi-agent systems, even digital twins, it almost seems so separated
from some of those people, culture, and process elements? So how do you protect that and keep it as an integral part of growing that operational muscle? That's a great, great point. I think it really just starts with the basics
and making sure that you have the right team, empowered and in place, and that you're able to build the right mechanisms for education when you train your first model or build your first digital twin. Because if you think about it, around agentic AI, around some of these concepts, what it really just means is you have lots of models or lots of simulations, and they're all mixed together to simulate some form of intelligence
that matters for a business or an enterprise or a research institution, right? And if you think about it, it's sort of like having 50 different models or 50 different models and simulations or whatever that might be. If you have a good team and a good structure around each one of those, you'll be able to create a modular system that will allow you to scale from a people, process, culture standpoint, right?
I think with these models and these agents, we're still learning, you know, still using some of these terms, and everything's being rebranded, right, which is great. But you have a living and breathing thing, and living and breathing things require care and feeding by people. And behind every great agent,
there's typically a great person or a great team. You know, these agents don't build themselves, right? And that's it: the agent should be an amplification or personification of the best people and the best attributes that your team has, you know? And I think that's the ability to perhaps embody the best of a company, the best of a team, the best of leadership, right? That's the promise that this space has and where we are in the cycle, right?
Right. And I think it's not just a matter of doing it once. How do you maintain it? How do you iterate around that? And that really ties it back to the muscle, right? I think it's really interesting. A lot of organizations may think, hey, you know, I just bought a big tech platform, I just bought a bunch of GPUs, right? Oh, I have an AI strategy. Right.
It's funny, or on the converse side, right, you may have a lot of organizations who are like, you know, I won't have the large amount of funding for a year, right? Should I start then? And the answer is absolutely not. You need to start now, because you can build operational muscle
without the technology to make the strategy work. But on the flip side, in my opinion, you really can't build an AI strategy without the muscle if you start with technology alone. So like I said, it comes down to people. It comes down to alignment. It comes down to the balance between builders and operators. If you have that, if you have alignment, you're probably going to be successful.
All right. So, Andy, I think you've done a great job of laying out the case, so to speak, for why operational muscle can be a key missing piece of a company's AI strategy. But as we wrap up today's conversation, because I think it's been a great one, what do you think is the one most important takeaway for organizations to glean from today's conversation, right? Because there's a lot of
new movement, right? We're here at GTC. There are so many new announcements. What is the one most important thing to build that operational muscle? I think, just on a very practical note, it's to learn by doing. You know, I think that
we want to sit back and, you know, watch all these announcements and plan and hyper-analyze and worry about when we should get in, when we should do this. You're really kind of not going to know the right answer. It's very similar to running a startup: you just have to start building stuff,
because you don't know what you don't know yet. And again, that's part of the operational muscle mantra, right? It's the idea of starting with a micro example of what you're trying to do, right? So just think about what the vision is for five years, right? Am I trying to build a smart hospital? Am I trying to build a smart manufacturing plant, right, so I could simulate anything, simulate any scenario around throughput? Just start very small and have a really diverse, cross-functional team,
going back to the idea of having 5 to 10 different types of personas working together. And part of the process, part of the journey, is not just the technology making it work, but what you learn from each other. I think it's cliche, and it may seem a little bit sappy to talk about teamwork, because you're like, yeah, of course. Yeah, of course it's teamwork. But I'm shocked how often that's completely overlooked.
And it's going through the hard work every day of building, figuring out what's broken, figuring out what actually works, and iterating over a long period of time. The organizations I see as most successful in this space, their overnight success took years. And it's a very close-knit team of people who have all different types of skill sets, who have all worked together over a long period of time to make it happen.
So find your small team. You don't need to be in a large company, right? If you're at a university, find other colleagues in other majors, other disciplines, people that you would have felt very uncomfortable working with maybe 10 years ago, but who need to be part of your micro team. Because you yourself can also learn how to build your own operational muscle from a personal journey standpoint, so that when you get to that point in your organization, you know exactly what to do.
And like I said, a lot of times it's very much the same. So, like I said: number one, learn by doing; get comfortable with being uncomfortable working with people completely unlike you; and then just sort of have faith in the process. You know, I think from a personal standpoint and also from an organizational standpoint, if you put in the hard work, if you're aligned and you have a balance between me versus us, you will be successful. Yeah.
Such great insights on today's show. Andy, thank you so much for taking time out of your day to share with our audience. I really appreciate it. Thank you. I appreciate you having me on. All right, y'all. That was a lot. My gosh, if you were out there on the treadmill or walking your dog, you probably missed 90% of that. Don't worry. I'm going to be recapping it
in today's newsletter. So if you haven't already, please go to youreverydayai.com, sign up for that free daily newsletter. We're going to have a lot more from today's conversation, a lot more from GTC and everything else you need to get ahead in leveraging AI. Thank you for tuning in. Hope to see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.