
311: Holger Mueller, Constellation Research VP, On Cloud Acceleration, AI Ethics, and Enterprise Agility

2024/11/18

AI and the Future of Work

People
Holger Mueller
Topics
Holger Mueller: Concerns about AI ethics and safety come mainly from enterprises that have not yet adopted AI. Cloud computing has dramatically shortened adoption cycles for enterprise automation technology, and enterprises are adopting AI at unprecedented speed. AI is delivering real benefits, and enterprises should not miss out on them. An analyst's value lies in understanding industry trends and the pace of software development, not in an information advantage. Enterprises need to become faster, more agile, more efficient, and more effective. Large tech companies are vulnerable to the efficiency gap, while startups have the opportunity to seize it. AI regulation should focus on values and ethics rather than excessive rules. Overregulation is killing innovation in Europe; innovation has "innovation gravity," and Silicon Valley is the current center of innovation. In the coming years, quantum computing and AI applied to transaction processing will be major topics. Dan Turchin: (interview facilitation; no core viewpoints expressed)

Deep Dive

Key Insights

Why is the fear around AI regulation and ethical AI largely promoted by companies without AI capabilities?

The fear around AI regulation and ethical AI is often promoted by companies that lack AI capabilities because they aim to slow down competitors who are ahead in AI development. For example, Microsoft deployed lobbying teams for responsible AI in Washington, which pressured Google to hold back its AI advancements. This strategy creates barriers for competitors while allowing these companies to catch up.

What is the significance of cloud infrastructure in accelerating enterprise automation and AI adoption?

Cloud infrastructure is critical for accelerating enterprise automation and AI adoption because it allows faster innovation cycles. Unlike traditional on-premise systems, cloud-based solutions enable enterprises to access the latest technologies without significant delays. For instance, Oracle has invested heavily in cloud infrastructure, including NVIDIA GPUs, to support AI advancements. This shift has reduced adoption cycles from decades to months, enabling enterprises to stay competitive.

What are the key challenges enterprises face when adopting AI technologies?

Enterprises face challenges such as lack of trust, governance issues, and concerns about data leakage and IP safety when adopting AI. Additionally, generative AI tools often struggle with context and complexity, limiting their effectiveness. Despite these challenges, 76% of developers are using or planning to use AI code assistance, indicating a growing acceptance of AI's potential to augment human capabilities.

How does the concept of 'enterprise acceleration' impact modern organizations?

Enterprise acceleration emphasizes the need for organizations to become more agile, efficient, and effective. This concept is driven by the rapid pace of technological innovation, particularly in cloud and AI. Enterprises must adapt quickly to new technologies to remain competitive, as seen in the demand for AI capabilities in 2023, where companies prioritized AI over traditional roadmaps.

What role does innovation gravity play in shaping technology hubs like Silicon Valley?

Innovation gravity refers to the concentration of technological advancements in specific regions, such as Silicon Valley. This phenomenon is driven by the proximity of hardware, software, and internet companies, creating a self-reinforcing cycle of innovation. For example, Silicon Valley has been the epicenter for hardware, software, internet, and smartphone innovations, making it a leading global hub for technology development.

What are the potential unintended consequences of AI regulation?

AI regulation can have unintended consequences, such as stifling innovation and creating barriers for startups. For example, Europe's stringent regulations have hindered the development of cloud infrastructure, making it harder for European companies to compete globally. Additionally, regulations often become outdated quickly, failing to keep pace with the rapid evolution of AI technologies.

What are the future trends in AI and enterprise technology that we might see by 2026?

By 2026, quantum computing and advanced AI algorithms are expected to transform enterprise technology. Quantum computing could revolutionize areas like protein folding and chemical simulations, while AI may move beyond document-centric tasks to impact transactional systems. Additionally, enterprises will likely adopt more sophisticated 'what-if' scenario planning tools, enabling better decision-making and business simulations.

How does the regulatory environment in Europe compare to the United States in terms of AI innovation?

Europe's regulatory environment is more restrictive compared to the United States, which has stifled AI innovation. For instance, Europe lacks significant cloud infrastructure, making it harder for startups and enterprises to innovate. In contrast, the U.S. benefits from a more lenient regulatory climate, which has enabled the growth of major AI companies like Microsoft, Google, and OpenAI.

Shownotes Transcript


The interesting thing, what I found, and I've seen this in the industry, is that the fear around regulation, ethical AI, safe AI and so on has largely been promoted by the ones who didn't have AI yet. Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work, episode 311.

I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service.

Our community is growing thanks to you. I get asked all the time how you can meet other listeners, so to make that happen, we recently launched a newsletter on Beehiiv where we share weekly insights and tips that don't make it into the podcast, as well as opportunities to meet up with other community members. It's free. It's not spammy. We will never share your email address. We'll share a link to that newsletter so you can register for yourself. If you like what we do,

please tell a friend and give us a like and a rating on Apple Podcasts, Spotify, or wherever you listen. It helps others discover the podcast. If you leave a comment, I may share it in an upcoming episode like this one from Lowell in Nashville, Tennessee, who's in software sales and listens while doing his expense reports. Lowell's favorite episode is the one with Neil Mant, an awesome episode.

Neil's an Emmy award-winning Hollywood producer and is a great entrepreneur. Fun conversation. Go look that one up in the archives. Lowell, glad you enjoyed it. We learn from AI thought leaders weekly on the show. Of course, the added bonus, you get one AI fun fact each week. Today's fun fact, Rich McKeatron writes in IT Pro about what's required to separate fact from fiction as an enterprise leader investing in AI.

He writes that the percentage of companies planning to invest in generative AI is actually down to 63% from 93% a year ago. One of the biggest barriers to widespread AI adoption is a lack of trust.

Generative AI can be difficult to govern because there's no universal definition of how it can be used. And as we all know, regulation is still evolving. Of all the myths spread about generative AI since 2022, the year ChatGPT launched, those that imply LLMs can outpace humans or have already achieved AGI, or artificial general intelligence, are perhaps the most damaging when it comes to trust. Case in point,

NVIDIA boss Jensen Huang caused a stir earlier this year when he hinted at the death of coding. Matt Garman, CEO of AWS, recently suggested AI could mean developers aren't writing code within the next two years.

Yet in 2024, earlier this year in May, a survey of 1,700 developers across the Stack Overflow platform found that 76% are using AI code assistants or plan to use them in the near future. However, many admitted that code generated by AI struggles with context and complexity, and is often obscure.

AI coding tools help developers with menial tasks, but aren't likely to replace whole jobs anytime soon. My commentary:

Less than two years into this global experiment with generative AI, we're having grounded discussions about the real value, the cost, and the capabilities. It's clear AI augments without replacing humans. And it's also clear human augmentation could make humans sufficiently more productive, which could lead to a need for fewer of them to do the same quantity of work.

It's more important than ever that we have an informed, nuanced conversation about this complicated relationship that we have with machines. Expect more of this discussion ahead. And as always, we will link to the full article in the show notes. Now shifting to today's conversation.

Holger Mueller is a VP and principal analyst covering the future of work and human capital management for Constellation Research, one of the top enterprise software analyst firms, led by legendary tech provocateur Ray Wang.

Prior to joining Constellation, he was VP of Products for NorthgateArinso. He was also Chief Application Architect at SAP and VP of Products at FICO, home of great former guest Scott Zoldi, the Chief Analytics Officer there.

Holger also held leadership roles at Oracle before becoming an analyst. He's a frequent blogger about enterprise software as well as sports. And his Uber backseat previews of tech events posted on his X account are always insightful. Holger studied information science, marketing, international management, and chemical technology at the University of Mannheim. And he speaks six languages.

We're going to focus on English in this conversation. Without further ado, Holger, it's my pleasure to welcome you to AI and the Future of Work. I can't believe it took us five seasons to have you on, but let's get started by having you share a bit more about that illustrious background and how you got into the space.

Thanks for having me, Dan. It's great to be here. Better late than never, right? No question about that. I still have to get my blood pressure down, blushing from that intro. Yeah, that's what they say about me. I'm trying to be humble. I'm not one of those analysts who thinks he has to be right all the time. I have to be right more times than wrong; that's what the quality is. But I also cherish giving controversial, thought-leading, and thought-provoking comments

and advisory, which sometimes is decades ahead. Some of the things I wanted to see built by now have not been built yet. I got into this role because I used to build software for a living. No, my teams built software for a living; I was more in charge of whether they delivered what was good. And I knew Ray from my time at Oracle, where we both worked, and he kept saying, like, become an analyst.

And I gave it a try. And I like it, right? It's much more fast-paced than building software, where you work hard for three months and you really have to do a lot of marketing to show people, look, that thing moved by five inches, and it was a lot of weekends, blood, sweat, and tears to get there. So it's good to be on the analyst side, where things move even faster. So I really like it. So that's how I got here. We both go back decades in enterprise software and we've seen

Shifts that at the time have seemed cataclysmic, everything from mainframes to desktop or client server, from desktop to mobile, from on-prem to cloud. Now we're shifting from traditional applications to AI-first applications. Many, at least in and around Silicon Valley, claim that this technology shift is different than previous ones. What's your perspective?

No, I think it's absolutely right. And it's right for the reason that, for the first time, the majority of enterprise automation is in the cloud. Before, we had delays because people had to sweat the assets and write them down; there was no significant advantage from moving early. The first time we saw that for enterprise, if you want to talk a little bit of the history: we saw the rise of client-server, which helped one of my two-time employers, SAP, to become market leader in ERP, which they still are today.

And the reason was that having the same automation capability as on the mainframe made it significantly easier for an enterprise to roll out in different countries. Because they would basically say: here, France, Mexico, Canada, here's your server. You figure out your requirements; your market's different. Here's our R/3. We trust it. We can roll it out. And that's what cemented it, because SAP was significantly ahead of everybody.

But that was just in the enterprise, right? And it still ran on your own server; other people made a lot of money on the hardware and the implementation, of course. Now, with the majority of automation being in the cloud, the innovation, whatever it will be, AI, and, who knows, there will be something beyond AI, right? What's going to be the next thing? I work on quantum a lot, as an example. So maybe quantum is the next thing. It will be provided in the cloud.

And with that, it will be accessible to the core of the automation and the data that an enterprise has, which is very important for both. So we will see significantly faster adoption cycles. And if you look, mainframe to client-server was a decade. Client-server to internet architecture was more than a decade, potentially, because people were on different platforms.

Going to a cloud architecture, that's still ongoing for some enterprises who are still running things on-premises, and for some vendors who run things in captive private data centers, which is nothing else than a hosted data center. So it's the same: sticking to a paid-for asset, a server I have to pay down, being limited to that, and unable to expand my software base, my digital automation potential, as I want and as I need to.

So this is why we see the significantly faster adoption. Never in my 30-plus years have I seen enterprises tell software vendors, like last year: forget about the roadmap.

I don't care what you told me in 2022. It's 2023 and I need to know what you're doing with AI. And it's okay to make sacrifices and not deliver stuff, because the thinking last year was: if I don't get AI, I will not survive 2024. That was a little bit exaggerated, right? People do survive without AI, or with some AI, or just starting with AI, but

that has never happened before, and it shows how the adoption cycles of technology get so much faster because of the cloud. Because you're running there, and your cloud vendor wants to do the newest, greatest thing; they want to differentiate themselves. I just came back from Oracle's CloudWorld. It's a classic example, we can drill into this, where an old-guard company, which was relevant for companies 30, 40 years ago,

realized it has the cash flow to put significant amounts of money into public cloud investment. Oracle had never invested 50% of free cash flow back into CapEx, which they have been doing for eight quarters or so. They're the only cloud which has spare NVIDIA GPUs available for people who want to do something, because they invested so much. And someone who's 80? I wish...

I always jokingly say, I wish that when I'm 80 and still working, which I don't know if I want to be, I would have half as much fun as Larry Ellison had on Tuesday delivering a keynote at 80. So good for him, good for Oracle. But it shows how important the cloud is and how fast the innovation from it is. And to a certain point, Oracle, being a relative newcomer to the cloud,

does not have the investment legacy which an AWS already has, despite being around for 15, soon 20 years. AWS has to support a lot of things; their CapEx goes across a lot of different things and different services, whereas Oracle's CapEx, being new to it, goes into fewer, more focused areas. We could talk about Oracle for a long time, but just as an observation here: how interesting that is in recent events, with CloudWorld just finishing this week in Las Vegas.

So in your role, you sit at this interesting nexus where you're hearing what the vendors, Oracle, et cetera, are saying about their innovation. And then the other part of your job is listening to and educating enterprise leaders. What's the difference between what you're hearing from the vendors versus what you're hearing from the enterprises?

Well, with the vendors, our job is always to cut through the noise and see what is real and what is not real, right? Because there's a healthy portion of marketing on top of what the vendor does, right? Things get announced which may never ship, may ship next year, may not ship as they were announced, right? Because that's the reality,

and cutting through that and understanding what the relative development speed is, what is bona fide product advancement, what's really going to be delivered. That's our catalytic function as industry analysts: helping enterprises who are buying things, who are making investment decisions, who want to know if something is ready to go, and giving them the right advice on when it is time to go and when it is time to wait. Or to go somewhere else, which happens as well. Are enterprise leaders

concerned about whether or not to make the investment in AI and AI-adjacent technologies because of concerns about IP, safety dangers, data leakage, things like that? Or is it really a matter of: we have decided to make the investments, and we're looking for your guidance about which investments we should be making?

So they're both, of course; both are real situations. The interesting thing, what I found, and I've seen this in the industry, is that the fear around regulation, ethical AI, safe AI and so on has largely been promoted by the ones who didn't have AI yet. Right? I mean, the example at hand, talking about the large guys,

was Microsoft, which deployed not only one but two lobbying teams for responsible AI in Washington, and managed to scare Google so much that Google held back what they then had to catch up with, right? It's one of the biggest, if you look at history, Machiavelli, The Prince, right? How to get your aristocratic leader in place. What are the tips and tricks of the trade? I mean, that's the latest high-tech move. When I retire, I want to write the book

about the Machiavellian moves in the high-tech industry. So that's one of the most impressive ones. And just for the people who don't know, one of those teams was let go as soon as the partnership with OpenAI was unveiled. So there's a lot of fear-mongering and concern-mongering, on the one side, from the people who don't have it, saying we need to get it right. But they don't realize it. But

by saying that, they make it even harder for themselves to get in the game. Because like every new technology, and AI is no different, there will be good things and there will be bad things. And as long as we're doing what we're doing right now, where we augment the human, not replace the human, we are extremely good as humans. Let's forget about the enterprise space for a moment and ask whether something works or not. You get a new AI, let's say a voice assistant; you talk to it one time, it understands you, and you're baffled.

It understands you. Or you say, not good enough, I need to edit. And editing on a smartphone, a glass slab, is hard. Or you say, this is a piece of crap; maybe I'll try it again in half a year when they say they have a new version. Pardon my French here. So we as humans are really, really good at figuring out, at the current level of AI, where things are. And we can go into deep fakes and so on, where we're not so good anymore, because we're trained another way. We're trained to assume that what we see is real,

which in most cases is right. When we watch a movie, we know it's a movie; we put another filter on. But if we're in the real world and we see a deep fake and think it's in a trusted newspaper and so on, we might start believing it, right? So there are other aspects to that. But for the automation that we see in the enterprise, helping me with performance review writing, a job requisition, helping with my benefits enrollment, we find out really, really quickly where it is good and where it's not good. So I'm not worried about that part. So,

back to your original question; sorry for the long-winded answer here, which I normally try not to give, right? I try to be short, sweet, and succinct. There are the ones who have realized this and are doing it, and there are the ones who are still waiting and asking whether it's real. And the short answer, to keep answers short for whoever is listening here: it is real. It's delivering benefits,

with some caution, at different levels with different vendors, of course, but everybody's working on it. And there are benefits which you should not miss. I even say, provocatively: if you were to write a job description from scratch... well, nobody's doing this. The best practice before was: let me see if there's something similar; what can I copy and paste? Nobody wrote from scratch before.

But the copy-and-paste process takes significantly longer than having gen AI write the new job requisition, then reviewing it, then doing a little copy and paste, right? So we see this going down to 20, 30% of the time of the copy-and-paste best practice, which itself was 20, 30% of the time of "I write this from scratch because I'm the best person to write it." Which leads me to one of the big cautions, right?

We got fooled so much when OpenAI came out. All our lives, we are learning to write. Writing is incredibly hard. So we think that something which is written in impeccable, perfect Oxford English must be a trusted source. That must be working. That must be correct. That is, of course, not the case when a machine writes it. It's not even the case when a human writes it, because they may have evil thoughts and might want to influence us in a certain way. So this automatic

competence gap, which we run into when something is formulated well, is something we humans have to learn about in the era of AI. Something which is written in beautiful, competent, footnoted English, which looks perfect, still does not have to be right. That is true all the time, but 80% of us, including myself sometimes, get fooled by it and say, well, this is written so beautifully, it must be Shakespeare; it must be right. I'm going to go out on a limb and say that your job as an analyst is tough these days, because

there was a time when the analysts had access to information that

wasn't as readily available to the public. And given that the innovation cycles have compressed to, literally, it seems crazy, but days, I would imagine that a lot of times you're going in to brief an enterprise leader who has all the same information that you have, and yet they're looking to you to predict or discuss trends. How do you navigate that kind of new expectation, that

you have access to the same information in real time that your client does, and yet you need to synthesize it and come up with some insights that are non-obvious, because everybody's an expert these days? Excellent, a great question. So I entered the space very late, 11 years ago. So for me, it was never about an information advantage to give better advice.

Because in many cases, the information advantage is proprietary, under NDA, not available and so on. And there's a huge amount of material which is public that people just don't know about, because they don't look at different vendors and so on, haven't seen what they published and said. So I would say 80% of what's perceived as proprietary is actually by now in the public domain. So I never saw exclusive information access as the thing that makes my advice better.

It was always the understanding of the overall trends of what's happening in the industry, coupled with what can be done in a day of writing software. It was interesting to listen to your intro, like how much will be generated in software and so on. So what is realistically coming out? What is the innovation speed overall?

of a vendor? And I set up my whole research around exactly that effect, which I call enterprise acceleration, right? Enterprises have to move faster, become more agile, become more efficient, become more effective, which often gets forgotten. We can talk about efficiency versus effectiveness too, but

that is the key aspect. It's the speed, more than the production quality, that matters. And this is why I do the low-production-value videos in the back of an Uber coming from events: I simply wouldn't have the time otherwise. It's more important to get the word out, even in a grainy Uber backseat video, or like when I did the preview of Oracle CloudWorld at San Diego's Terminal 1, which is a noise hell. But hey, thanks to microphone quality, I could be understood despite my fast-speaking mumbling and a thin German accent.

So it's good enough, right? It gives people access to the information. It's also the reason, little pin here, why I put all my notes on Twitter, right? We can talk about Elon Musk and Twitter for the rest of the podcast; it's the best note-taking tool, right? And if you want to see me going to an event, if you want to see what's happening, my notes are on Twitter and available for you to use, available for you to comment on, available for other vendors to ask questions, or for the vendor who is having the event, right? Salesforce did a great job a few years back and said, you got something wrong. Yes, please tell me!

Instead of telling me in a research note, the classic analyst way, three, four weeks later. In the current world, nobody cares anymore what happened three or four weeks ago, because the world has moved on. Everything's moving faster. So we as analysts have to find ways to get our information-transport vehicles moving

up to speed with the reality of how information moves. Because otherwise we're in a funny situation like we were in during COVID, right? We knew the virus was developing every week, significantly enough to need a new vaccine, but we needed two weeks to test the efficacy of the vaccine. So we were in a hare-and-hedgehog race we could never win.

So as an analyst, or an influencer, whatever you want to call it, I have to think of ways to transport my information which are faster and timely to consume. It can't be a 100-page research report. There's a place for that, but it can't be the way to figure out what happened at Oracle OpenWorld, or CloudWorld, actually, as it's now called, or what's happening next week at Workday Rising. It cannot be the 100-page report which comes out in a few weeks. So when you go to these events,

or even when you read the headlines, whether it's, like you said, Workday, SAP, Oracle, Salesforce, ServiceNow, et cetera, gosh, it really feels like they have cornered the market on innovation. And they're so far ahead that there's no real opportunity for startups. But we know, and I'm sitting in the cradle of Silicon Valley, that innovation always wins. And

you're talking to a lot of entrepreneurs on this podcast right now who are thinking about, what's the famous Bezos line, your margin is my opportunity. From your vantage point, where are some of the opportunities where maybe big tech is vulnerable? Everything big is vulnerable.

Forget the tech. Everything big might be stronger, bigger, heavier, whatever; might have more scale. But with that comes what I call the efficiency gap. In order to get big, what does efficiency mean? Doing something right. And as you're pushing and doing the thing right on a global scale, you forget the effectiveness question, which is: are you doing the right thing?

And often, and I worked for SAP and Oracle for 20 years here, often you know you're not doing the right thing anymore. But you have to do things right, because that's how the whole company scales. And lots of the new trends which happen might go away, right? So you stick to what you're really, really good at. You stay in the efficiency game. You squeeze the price. You add more functionality to your suite. And you might get away with it. But you're not always getting away with it, right? Because you didn't do the right thing, which some startups, which grow really fast, did.

I don't see a problem from the technology side for the startup field. I see a massive problem on the regulation side with Sarbanes-Oxley, which has put a nail into the coffin of IPOs. And that's very important for any company endeavor, also in tech: you need to have access to capital.

Despite that, the capital numbers are rising, and family offices and private equity are funding companies now because they want the multiples of tech, wanting to get on the AI bandwagon, whatever. It's very interesting to see. I mean, one large ERP vendor, Infor, is owned by Koch Industries as a family-office-style investment. And they run on it. So there are interesting new investment areas. But keep in mind, enterprise acceleration has come to the startup field as well.

If you and I drop what we're doing because we come up with a great idea on this podcast, we don't have to buy any servers, data center, location, right? I mean, the proverbial two guys who met at a Starbucks, or two girls, let's not forget them, who did their startup in one day and built something immediately, that's real now. Because the infrastructure, the cloud, right? Again, we're back at the cloud, right?

It allows you to start something immediately. And every cloud is interested in startups and gives you credits. Before, we had to find the money from parents, from family and so on, to buy hardware, my first startup, right? And it would take months to get a PC. And now you get 100,000, 60,000, whatever; there's a competition over how many cloud credits you get for free before you have to buy something, right? So there are so many more flowers which can bloom. So I think there's always going to be this suite-versus-best-of-breed,

innovation-versus-scale situation, which is there. And the interesting thing is, as an enterprise, I have to weave my way through that, right? When is my suite still good enough, and when is a startup good enough for that? So that's the exciting part of it. We often wrestle with this complicated tug of war between regulation and innovation. And many would say, you know, regulation is a tax on innovation.

And yeah, as a technology community, and I've said this before, we're really good at answering the question, what could go right? And we're often not so good at answering the question, what could go wrong? Now, the EU is way ahead of where we are in the United States in terms of regulating AI. We'd love to get your perspective on the right way to regulate AI. And

if we don't rely on governments, agencies, international bodies to regulate AI, is it reasonable to think that the industry, the vendors, could self-regulate? I know that a certain level of regulation is necessary. The problem with regulation is that it goes stale so quickly, and the speed of fixing regulation is too slow. Back to enterprise acceleration: there's a speed-of-regulation problem

in areas which are moving as fast as AI is right now. I was really impressed, both by the US, California just passed one, and by Europe, how quickly they moved; it's the fastest compared to cloud, internet privacy, online privacy, bad things happening on the internet, a bleak track record where it took decades. For AI they really went fast, but still not fast enough to avoid the biggest problem of regulation: the unintended side effects.

Which is one of the reasons why Europe doesn't have, and I'm European, I'm German, as you can hear, so I'm a fan of Europe to a large extent, right? There's no cloud infrastructure in Europe.

And all that we said before about the cloud, I don't know how many times I've mentioned it; somebody could count it for us, right? Without a cloud infrastructure in Europe, it's so much harder as a startup, as an innovation company, to be legally compliant, to run things, and to participate in the innovation. It's much, much easier if I'm living in the DC area and Amazon's US East is available. All the first innovation goes there, and somehow the gravity of innovation in the cloud happens there.

I think the ethical principle is really tested with every technology. I could do something bad with payment software, like skimming from every tenth bank account on the payment side, and so on. So every technology has an ethical aspect to it.

And even so, there are bad actors and people doing wrong things, and the stakes are getting higher. I think the ethical convention of doing business with ethical companies, individuals, and enterprises is a big aspect of that, and staying away from the shady parts. So I believe much more in the strength of values and ethics

than in regulation, which gets outdated quickly, slows innovation down, has unintended side effects, and so on. The six largest companies in terms of revenue generation in and around

AI have all been built in the United States. That's probably not coincidental, given the regulatory climate in the EU. Do you feel like it's chasing innovation to the West? No, it's killing innovation. That's the problem. Europe wants to keep things safe, so safe that nobody takes risks anymore. And that's a huge problem. I don't see it getting better; it's getting worse. Nobody is stepping up and realizing that. I think

Germany is struggling with growth again, and if Germany is sick, Europe is sick, right? Maybe there will be a wake-up in thinking. But in general, you see the radicalization because people are not happy, which is unfortunate for Europe, and in many of the countries there's a lack of answers. And unfortunately, the answer has not been to ask whether maybe things are overregulated, right? The UK is, to me, a tremendous example of that.

You can say Brexit was right or wrong, but once you accept Brexit, you have the chance to go through all the regulation the EU has done, good and bad, and review it. The UK should be booming as a near-shore location with 60 million skilled people,

open for investment, from which you can reach most of Europe in a one-hour flight, in one day with a truckload, or in two or three days with a container if you put it on rail. It should be the booming wild west, or wild northwest, of the EU by now. But the UK parliament actually did the math on the back of a napkin: they would have to pass two new laws or regulations every day, which is an unheard-of speed, right? Back to regulatory speed, regulatory acceleration. The only thing they passed

was the fee EU citizens pay for staying in the UK. So that shows you the inability of legislative bodies, of the regulatory lawmaking process, to keep up with the reality of business. And that's a big, big disconnect, which puts people at risk,

which gives room for bad actors, but also has the unintended side effect that innovation happens elsewhere. And innovation happens in the U.S. not so much, I think, because of regulatory largesse or restraint, as is often said. Like you said,

Silicon Valley has a unique situation, right? And that's the blog post I need to write, and I'm putting pressure on myself to write it because I keep talking about it. If you look at the history of new technology, it was always regional because something was there, right? If you look at mining,

steel construction, proximity to coal and iron ore coming out of the ground, ports, shipping: it was always geographical location. And you see the rise and fall of places like gold-mining boom towns that went from 200,000 people down to just the post office being open now, and so on. They were always one-time wonders.

The difference with Silicon Valley is that it's a multiple-time wonder. Hardware was there; let's not forget this, right? It took hardware away from the Boston area in the US, because hardware and software at the time, which is no longer the case now, had to be so close together. Software is largely there, with the exception of Microsoft creating something else on the West Coast.

The internet is largely there, if you think of Cisco. And what was interesting to see: smartphones were everywhere. There was BlackBerry in Canada, Siemens was building phones, Nokia was building phones. It all collapsed down to Android and Apple, which again happened to be in Silicon Valley. So there is an innovation gravity. Everybody talks about data gravity, and there's an innovation gravity too. Now you can count: it's two, three, four.

The epicenters keep happening in Silicon Valley, and this is why it is still the leading innovation place in the world. And the question is, what's the next one? Quantum may not be happening there. Quantum might be the comeback of the East Coast right now; IBM is leading the way, so we'll see if they deliver what they said they'd deliver by the end of the year. So there might be some disproportional shift.

But again, the software for quantum has to be built, and the software centers are there, except for Microsoft and AWS. And Seattle has done really well. Seattle might be the new cloud center, right? Amazon and Microsoft are there, and Oracle is putting its cloud center there. But the question is, what is the next trend for Seattle to latch onto, right? Will quantum go to Seattle? Will quantum stay on the East Coast? Will quantum go to Silicon Valley? Where do you want to place your bet? And that's the reason why it's so relevant.

Sorry, I'm talking way too much. You're supposed to ask the questions here. This is fascinating. And in fact, the topic of regulation and the geopolitics of AI really deserves its own whole episode. Maybe you'll come back another time and we'll just focus on that. It's a very important topic, and we need your perspective there. I've got to get you off the hot seat, but not before you answer one last question for me. Let's say we're back here in two years, middle of 2026.

And we're having a version of this conversation. What has changed? What are the main themes that we're talking about then that maybe we aren't anticipating today?

Great question. Like I mentioned, quantum, a few times. If AI had not happened last year for whatever reason, we would all be talking about quantum, right? Because quantum has matured enough to do things. And for a second, for the people asking what quantum is: ones and zeros we all understand, right? What if you had a computing architecture which can, like the real world, live in fractions, 0.12345 and so on? You can model things significantly better. And it's

not only the fraud side, where everybody's worried that it's going to break encryption. Yes, it could break encryption, but quantum-safe encryption is already there, available if you want to be safe. The interesting thing is that in protein folding, in research, every chemical process, every process in nature can be simulated better on a quantum machine than on digital machines. So my hope is that something will happen this year in that space, and we should be talking more about quantum and the effect it will have on business and enterprise applications.
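To make the ones-and-zeros contrast concrete, here is a minimal sketch, assuming only the Python standard library, of how a single qubit's state holds the "fractions" a classical bit cannot. The amplitude pair `(a, b)` and the Hadamard gate are standard quantum-computing notions, not anything specific discussed in the episode:

```python
import math

# A classical bit is 0 or 1. A qubit is a pair of amplitudes (a, b)
# with |a|^2 + |b|^2 = 1; measuring yields 0 with probability |a|^2
# and 1 with probability |b|^2.
def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

# Start in |0> = (1, 0); the Hadamard puts it in equal superposition.
state = hadamard((1.0, 0.0))
probs = (abs(state[0]) ** 2, abs(state[1]) ** 2)
# Each outcome now has probability 0.5 -- a fractional state that
# no single classical bit can represent.
```

A real quantum machine manipulates many such amplitudes at once; this classical simulation only illustrates the state model.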

The other big thing I keep waiting to happen is that generative AI is transforming everything related to documents, right? Reading and writing text, generating and interpreting pictures, creating and analyzing video: everything which is document-centric. It has not touched what the enterprise runs on, which is transactions.

But we need the algorithm, and it might be the transformer algorithm or a new algorithm, which can tell me: how should I run my business next year? My cash coming in is down. What do I do from an investment perspective? Which locations are the better locations? How do I simulate and trend things out? The question is, is it going to be a transformer model, or are we going to use what I call the infinite computer of the cloud to do more things like Monte Carlo simulations, scenario planning, and so on? We'll have significantly more ways to do the what-if thing,

which is sorely missing today; it all runs on gut feeling and experience, run by someone with slow decision-making, and that will change. So I think we'll hopefully see some quantum, we'll see a breakthrough in AI, and we'll see significantly more of: what if, what can I do, what will happen here.
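As a hedged illustration of the Monte Carlo what-if planning mentioned above, here is a toy sketch in plain Python. The `simulate_year` helper and every figure in it are hypothetical, invented purely to show the shape of such a simulation:

```python
import random

# Hypothetical toy model: simulate next year's end-of-year cash when
# monthly revenue is uncertain, answering "what if cash coming in is
# down?" All figures below are made up for illustration.
def simulate_year(start_cash, mean_revenue, revenue_sd,
                  monthly_cost, runs=10_000, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    end_cash = []
    for _ in range(runs):
        cash = start_cash
        for _ in range(12):  # twelve months of noisy revenue
            cash += rng.gauss(mean_revenue, revenue_sd) - monthly_cost
        end_cash.append(cash)
    end_cash.sort()
    median = end_cash[runs // 2]
    p_negative = sum(c < 0 for c in end_cash) / runs
    return median, p_negative

# Scenario: costs exceed average revenue by 10,000 a month.
median, p_negative = simulate_year(
    start_cash=500_000, mean_revenue=100_000,
    revenue_sd=30_000, monthly_cost=110_000)
```

The point is the output shape: instead of one gut-feeling number, a decision-maker gets a distribution of outcomes and a probability of running out of cash.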

Well, thankfully, we're going to have a chance to have that conversation, and we'll see how you did. Those are two meta themes. I think those are really good insights. I'm happy to be wrong on some of them; don't nail me to them, right? I'm not giving conservative advice. Maybe quantum is not happening yet. We'll see. It's the best information we have today. I think those are good. There's always room for failure, right? But I like to give advice which is thought-provoking. And as long as you're right more than you're wrong, right? Exactly. Exactly.

Excellent. Well, this has been a lot of fun. I really appreciate you coming on, hanging out. Great, great work.

Same here. Thanks very much for having me. Great questions, as always. Every podcast is only as good as the questions asked. You made my synapses fire a few times, which I really enjoy, so I need no coffee anymore this Friday morning. I appreciate it. Thanks for having me, Dan. My pleasure. Where can the audience learn more about you and how you work? Anybody hearing this can reach out to me, whether you think I said something right or something wrong. I care even more about the wrong. Please don't be shy. Reach out. Let me know. More than happy to engage.

Excellent. Well, looking forward to the next version of this one already. Thanks for coming. Thank you. Brilliant. Well, gosh, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.