
Replay - Multi-Cloud is the Future with Tobi Knaup

2024/12/12

Screaming in the Cloud

People
Corey Quinn
Tobi Knaup
Topics
Corey Quinn: Explores what multi-cloud actually means, noting that a multi-cloud strategy usually involves not just the big three cloud providers but a mix of environments, including on-premises infrastructure. He also raises doubts about Kubernetes complexity and the viability of multi-cloud strategies. Tobi Knaup: Explains why Mesosphere rebranded to D2iQ, arguing that naming a company after a technology is unwise because technologies change over time, and credits the Kubernetes community as a key driver of its success. He details the respective strengths and use cases of Mesos and Kubernetes, noting that Mesos suits very large deployments while Kubernetes fits smaller clusters. He discusses the risks and opportunities of the open-source business model and how companies can compete through differentiated products and services. He sees multi-cloud as the future and analyzes why enterprises adopt it, including compliance with data privacy laws, handpicking specific services from different providers, and meeting regulatory requirements. He stresses the critical role of asynchronous data replication in successful multi-cloud strategies, and shares his path from physical data centers to cloud computing along with his predictions for where Kubernetes is headed.

Deep Dive

Key Insights

Why did Mesosphere rebrand to D2iQ?

The company name Mesosphere became a stumbling block as it focused on Apache Mesos, while the company expanded to include Kubernetes and other cloud-native technologies. The rebrand to D2iQ aimed to reflect their broader focus on day-two operations and enterprise success.

What role did the Kubernetes community play in its widespread adoption?

The Kubernetes community is credited with driving its adoption due to its large, active ecosystem. It provides resources for learning, talent recruitment, and innovation, which is faster and more extensive than any single vendor could achieve.

Why do companies still use Mesos alongside Kubernetes?

Mesos remains the platform of choice for large-scale deployments, particularly for enterprises with hundreds of thousands of nodes. It is better suited for scaling and automating data services like Kafka and Spark, which are not yet fully replicated on Kubernetes.

Is open-source a sustainable business model in the face of competition from cloud providers like AWS?

While open-source faces challenges from cloud providers offering managed services, there are opportunities to differentiate through hybrid and multi-cloud scenarios. Companies with edge computing or global regulatory needs can benefit from tools that provide a consistent experience across different infrastructures.

Why is multi-cloud considered the future?

Multi-cloud is seen as the future because it aligns with hybrid and global enterprise needs. Companies often require infrastructure in specific countries due to data privacy laws or prefer to handpick services from different providers. Multi-cloud allows for a consistent experience across various infrastructures.

What are the challenges of running Kubernetes in production?

Many companies underestimate the complexity of running Kubernetes in production. They assume Kubernetes provides all the tools needed, but additional components like monitoring, logging, networking, and load balancing are essential. The learning curve often becomes apparent when scaling or adding state to applications.

How does Tobi Knaup view the future relevance of Kubernetes?

Tobi believes Kubernetes will become a substrate, similar to how Linux is a foundational layer today. Most users will interact with higher-level APIs rather than directly with Kubernetes, just as they don't directly interact with the Linux kernel in daily operations.

Transcript


In this case, when we say multi-cloud, it's often not actually one of the big three cloud providers that they're thinking about. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Tobi Knaup, the co-founder and CTO of D2iQ, which you probably have not heard of, and what used to be called Mesosphere, which you most assuredly have. Tobi, welcome to the show. Thank you for having me. This episode is sponsored in part by my day job,

The Duck Bill Group. Do you have a horrifying AWS bill? That can mean a lot of things. Predicting what it's going to be, determining what it should be, negotiating your next long-term contract with AWS, or just figuring out why it increasingly resembles a phone number, but nobody seems to quite know why that is. To learn more, visit duckbillgroup.com.

Remember, you can't duck the duck bill bill. And my CEO informs me that is absolutely not our slogan. Of course. So let's start at the, I guess, burning question that at least is on my mind, if not a bunch of other folks. Mesosphere was a company that everyone in the infrastructure space at least had a vague awareness that there was that thing over there. And last year, I think it was last year, time is speeding up, the company rebranded. What was behind that?

Yeah. So what's behind that is, in hindsight, it wasn't a very good idea to put a technology name into our company name, to be honest.

Because, you know, technologies change over time. And we obviously started the company Mesosphere in 2013 around Apache Mesos. That was the core open source project that, you know, we had been using, my co-founders and I had been using at Airbnb and Twitter. And we wanted to start a company around that to help every enterprise out there adopt

Apache Mesos, but very quickly we actually started helping people with other technologies from the cloud native ecosystem. We help folks automate things like Kafka and Cassandra and Spark and build these data pipelines on it and very quickly actually got involved in Kubernetes as well. Actually in the first year when it was announced.

And so, over time, the name Mesosphere as a company name became sort of a stumbling block for us because we always had to explain that, yes, we are the Mesos company, but we also do all these other things, right? We help you build data pipelines and we help you with Kubernetes too. And so it kind of became this anchor and we decided it's maybe not a good idea to have a specific technology in our company name. And so we

We decided to rebrand and we wanted to pick a name that expresses what we really do, what we help our customers with. That is, we help them on day two. We help them be successful on day two and be smart about their day two operations. Day two in the sense of the DevOps concept of day two, so the ongoing operations and maintenance of production systems.

That's what's behind that. You always could have gone down the path that I did, where I started with a newsletter last week in AWS, a consulting company that had no bearing on any of it. And this podcast, Screaming in the Cloud, there were three brands instead of one, which means that whenever anyone asks me, so what do you do? My answer is always, well, it

can you contextualize that question for me a bit more? It winds up effectively having to lead to us down this weird path of branding things very differently. And then of course I started another podcast with a completely separate name on top of that called the AWS Morning Brief. And it's at this point, I just sound like I'm professionally confused.

Naming is hard, especially once you have a name that is no longer accurate in some ways, but it's something that people have a definite affinity for. You have brand recognition. I mean, we had a guest on previously from Palantir.net, which predates the terrifying Palantir in the Valley by about 10 years. And it seems like their tagline has become, we're Palantir, no, not that one. Yeah.

That was lovely. Yeah, you know, obviously, like you said, naming things is hard and renaming a company is hard too. You know, we've built up a lot of brand equity over the years. And so what was important to us is actually that we don't give that up. And so the name Mesosphere actually lives on. It's now the name of our product family around Mesos.

So the name lives on. Just the company has a different name. So are you finding that, again, obviously from the time that you started Mesosphere back when, when was that?

- 2013, so we're almost seven years old. - Okay, forever ago, internet time. - That's right. - There have been some, let's say, upheavals in the infrastructure space. Back then, I would have, frankly, bet the farm on Mesos. It seemed like the right answer. A lot of the big shops were doing that. And today, whenever you suggest that to people, they look at you a bit strangely and say, "Yeah, if we're doing anything net new, it's probably gonna be on top of Kubernetes," which I have a laundry list of complaints about. But I'm curious to get your take.

How have you seen Mesos' rise and fall through the eyes of what you do for customers? Right. So, you know, I think what we're seeing with Kubernetes is really the power of community. When I talk to folks and ask them, you know, why Kubernetes?

That's the thing that people most commonly mention. It's the community in the broadest sense, meaning there's a place online where I can go to learn about Kubernetes and related technologies. There's a place I can recruit talent from. There is people that want to have that on their resume. And obviously the community is so much bigger than any single vendor could ever be. And so that's where a lot of innovation happens and innovation happens much faster in that community. So that's really the...

the most common reason we hear. Mesos started as

an abstraction layer for large compute clusters. And while we do a lot with Kubernetes now, and we have an entire product line around it, we also still have our Mesos product line, and it is still the platform of choice for those large-scale deployments. So we have customers with hundreds of thousands of nodes in production, and they're running Mesos, and they will be running Mesos for a while.

So it's really a best tool for the job kind of situation, right? If you're

a small shop, you're getting started with cloud native, you have maybe a 10, 20, 30 node cluster. Twenty to 30 nodes is where we see most clusters out there in the industry. Mesos may not be the right choice because it is built for scale. And so what we said is, hey, let's offer our customers what they want. Let's give them Kubernetes. Developers want that. And we still keep the Mesos platform for those large scale deployments.

Are you seeing net new activity around Mesos in 2020? We do actually. So one thing that we built, that we invest a lot of time in over the years is helping customers automate data services, right? Building end-to-end data pipelines with Kafka, Spark, technologies like that. And the experience that they get around that on Mesos doesn't exist the same way yet on top of Kubernetes.

We're working on making that happen. And obviously, there's a lot of activity around building Kubernetes operators. There's various different approaches to building operators. We started an open source project about a year and a half ago called KUDO that aims to make building operators very easy. It's based on our learnings on top of Mesos.

And so, you know, the ecosystem is going to get there, but it's not quite there. And so we're actually seeing a lot of people still start new projects around these data infrastructure projects on top of Mesos. It's interesting you bring up releasing open source offerings around, I guess, anything in the infrastructure world. I mean, lately, it seems that there's been a bit of a pretty persistent narrative around

around the danger of open source as a business model because then someone like AWS comes in and launches effectively what you do as a managed service. Is that something that's currently on your threat radar? Is that something that you don't see as being particularly credible or am I missing something entirely?

So it's definitely on our radar. And I think, while this is a threat that everyone's facing, there are also opportunities to build a differentiated product, maybe for a different use case, for a different customer demographic. So what we see a lot these days is folks wanting to run

any combination of hybrid or multi-cloud scenarios, right? So they want a public cloud like experience like they can get from AWS, but they want it on the infrastructure that they choose.

So we see a lot of activity, we work with a lot of customers that have industrial IoT use cases, right? So let's say they have a manufacturing plant, a factory where they have thousands or tens of thousands of sensors that produce data in real time that they need to process and do things like predictive maintenance, finding outliers in the sensor data and things like that.

Those factories are often in areas where they don't have a high quality connection to the cloud. So it's not feasible to send all that data in real time to a public cloud. You have to kind of process it locally. And so essentially what those customers need is they need a mini edge cloud, right? They obviously don't have highly skilled cloud native engineers in every one of their manufacturing plants. And some of those people have over a hundred of these plants.

So what they need is really a public cloud like experience sort of in a box that they can deploy on the edge. Now, they also want to run a bunch of infrastructure on the public cloud and they also want to run a bunch of infrastructure on their existing data centers. So how do you do that? How do you operate, you know, pick Kafka as an example or Spark in a consistent way across all of these platforms?

That's one thing we're focusing on and where, yes, you can go to AWS and you can get a managed Kafka, but you can't get it in a manufacturing plant or in an air-gapped case where you don't have any internet connection. So there's still a lot of these use cases out there, and that's how we differentiate, or it's one way we differentiate.

I've always said that one of the most effective attack ads that you could come up with about running Kubernetes would be to send someone who's considering it to a three-day Kubernetes workshop. And by the time they come back, they will understand that here be dragons. And that has sort of continued to be the case as far as talking to anyone who's doing anything at significant scale in the Kubernetes ecosystem is just the sheer level of abstraction built upon abstraction that...

fundamentally turns into something that is incredibly difficult and opaque to understand what's going on underneath the hood. So it's not the day one experience but the day two experience, as you alluded to earlier in the recording,

that once you have something running and then you see a degradation or an intermittent failure, it becomes super challenging to figure out what's causing that issue and why. That's absolutely right. And the typical journey that we see a lot of people go through is something like they decide to do cloud native, they decide to do containers or their boss tells them to, they

They go on the internet, they go on Stack Overflow or wherever, and they find Kubernetes. They try it out, you know, they download it onto their laptop and have a great experience, right? The first touch experience with Kubernetes is really great. You can get a container up and running quickly or get your guestbook example up and running quickly. And so too many people assume that putting it in production is going to be a similar experience.

And the first common mistake we see is that people assume that Kubernetes is all they need, that Kubernetes gives them all the tools that they need to put a container stack into production at an enterprise. And that's just not the case, right? You need a bunch of other tools from the cloud native ecosystem around Kubernetes. You need a monitoring stack, you need logging, you need networking, load balancing, all of those things. And

Because people take this fairly agile approach where they try it out and then, when they hit a wall, they figure it out. Let's say I start a stateless container and that's a great experience. Now, I need to add state to it. How do I do that? How do I get volumes? They take it step by step, and that's where we see a lot of cloud native projects fail, because, like you said, at some point they face the complexity and they're like, "Oh, wow,"

there's actually a lot of things that I hadn't thought about. What we like to do there is make sure that people are educated about that, right? So that we say, hey, when you need to go to production, these are all the things you should pay attention to. Make sure you have proper monitoring, make sure you have proper logging,

you need a networking layer and so on. And that's part of what we teach in our Kubernetes trainings too. So we do these free trainings in the field in various different cities in the world to just highlight these problems because like you said, a lot of people just aren't aware of those.
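The "Kubernetes alone is not enough" point above can be sketched as a minimal pre-production checklist. This is a hypothetical illustration, not anyone's actual tooling; the component names are examples of what might cover each concern.

```python
# Illustrative production-readiness check: Kubernetes by itself does not
# cover monitoring, logging, networking, or load balancing, so each of
# those concerns must be covered by some component from the ecosystem.
# Component names below are examples, not recommendations.

REQUIRED_CONCERNS = {
    "monitoring": {"prometheus", "datadog"},
    "logging": {"fluentd", "loki"},
    "networking": {"calico", "cilium"},
    "load_balancing": {"ingress-nginx", "metallb"},
}


def missing_concerns(installed: set[str]) -> list[str]:
    """Return the production concerns that no installed component covers."""
    return [
        concern
        for concern, options in REQUIRED_CONCERNS.items()
        if not options & installed
    ]


# A cluster with only monitoring and logging still has gaps.
print(missing_concerns({"prometheus", "fluentd"}))
```

The point is simply that "it works on my laptop" exercises none of these checks; the gaps only surface on the way to production.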

Here at the Duckbill Group, one of the things we do with, you know, my day job is we help negotiate AWS contracts. We just recently crossed $5 billion of contract value negotiated.

It solves for fun problems, such as how do you know that your contract that you have with AWS is the best deal you can get? How do you know you're not leaving money on the table? How do you know that you're not doing what I do on this podcast and on Twitter constantly and sticking your foot in your mouth? To learn more, come chat at duckbillgroup.com. Optionally, I will also do podcast voice when we talk about it.

Again, that's duckbillgroup.com. One of the, I guess, arguments in favor of Kubernetes historically has been the hybrid story, which I'm sympathetic to, and the multi-cloud story, to which I'm slightly less sympathetic, in that it's, on paper, it looks fantastic. In practice, it means that you're not just dealing with one cloud provider's deficiencies, you're dealing with all of them.

And that's been a recurring subject of some debate on this show for a while now. Where do you stand on the idea of multi-cloud as a best practice? Yeah, it's one of my favorite topics. So I think multi-cloud is where everything is going to move. To me, multi-cloud, it also includes hybrid because every large enterprise has hybrid.

massive workloads that they want to keep on-prem for various reasons, whether it's they want to protect their data or whatever. And at a certain scale too, actually running your own gear becomes more cost-effective too. So I think ultimately every enterprise is going to be there.

Now, the reasons for why they want to do multi-cloud vary. And I think a lot of folks, when they hear multi-cloud, the first thing they go to is, oh, you know, I'm going to have this abstraction layer, Kubernetes, or whatever it may be, and I'm going to dynamically move my workloads around, and I'm going to look at where the costs are optimal,

you know, I'm going to optimize for other things. That's typically not the main reason why people do that. Although we are working with some customers that are fairly sophisticated, that are literally doing that. They're watching the spot instance price market on all the different cloud providers and then, you know, hour by hour decide where things should go. But that's only a handful and they're, you know, sort of ahead of the pack. They're fairly sophisticated customers. For most folks, the reasons are, you know,

something different. We work with a lot of companies that work globally, that work in a lot of different countries and jurisdictions. And so they need to take a look at data privacy laws and regulations around that. So with the infrastructure they stand up in China, the data that they process there for their Chinese customers can often not leave the country. So they need to run on a Chinese cloud provider.

they may be operating in Europe, so they need to run on European infrastructure and within Europe in each country. And so in this case, when we say multi-cloud, it's often not actually one of the big three cloud providers that they're thinking about. This may be some fairly small infrastructure as a service provider in one specific country that they need to run on top of for these data privacy reasons. And so in this scenario, multi-cloud makes a lot of sense, right? Because

You want to architect your stack once, you want to build it on top of an abstraction layer like Kubernetes, and then be able to stand that up in multiple countries on different IaaS for those reasons.
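The data-residency reasoning above, build the stack once and pin each customer's data to an in-country deployment, can be sketched roughly as a routing table. All provider and region names here are invented for illustration.

```python
# Hypothetical data-residency routing: the same application stack is
# stood up in several jurisdictions (often on small local IaaS
# providers, not the big three), and each customer's data is handled
# by the deployment their country's privacy laws allow.

RESIDENCY_RULES = {
    "CN": "cn-local-iaas",    # Chinese customer data stays on a Chinese provider
    "DE": "eu-provider-de",   # EU data stays within the relevant country
    "FR": "eu-provider-fr",
}
DEFAULT_DEPLOYMENT = "global-cloud"  # no residency constraint applies


def deployment_for(country_code: str) -> str:
    """Pick the deployment allowed to process this customer's data."""
    return RESIDENCY_RULES.get(country_code, DEFAULT_DEPLOYMENT)


print(deployment_for("CN"))  # routed to the in-country provider
print(deployment_for("US"))  # no rule, falls through to the default
```

Because the stack sits on an abstraction layer like Kubernetes, adding a jurisdiction means adding a row to the table and standing up the same stack on whatever local IaaS is required.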

That's a common one that we see. Obviously, that's with companies that act globally that are working in a lot of different jurisdictions. Another reason we see for multi-cloud often is that they want to handpick certain cloud provider services that they like, right? So they may want to go to provider A for their machine learning stack and

And they want to go to provider B because they have the better managed databases. So it's more of those reasons, I think, not so much what most people go to immediately, which is dynamically moving the workloads around.

The dynamic movement of those workloads seems to be what people put up as, oh, it would be great to be able to magically deploy our entire application anywhere we need to at any point in time. Yeah, except data gravity always makes that a bit of a challenge. That's right. The joy of trying to get even that baseline...

fundamental consistent experience working between two providers, even when one of them is on-prem and you control virtually every aspect of it, is non-trivial. An argument I've enjoyed for a while now has been, great, take your provider, your primary cloud provider, whichever one it happens to be, I don't care, you probably care, I don't care, and try and go multi-region. Be able to span to multiple regions of the same provider and see what breaks. It's a good idea

baseline story for the things you're going to have to start thinking about, and then some, when you start going multi-cloud. Now, there are workloads that justify that level of work and experience and stress, but it's certainly not, I'd say, worth an awful lot of companies' time and effort to do it. Yeah, you're absolutely right. That experience is very similar to what you're going to have to do in multi-cloud. And there's one more use case I forgot to mention earlier, and that is people in

certain industries that are regulated, they actually have to go with multiple vendors. They have to, for regulatory reasons, pick two or more cloud providers. And so they're kind of forced to do that. One of the main things you're going to have to build your own, or it's actually something we help our customers with, is replicating your data. Like you said, data has gravity. And so the people that we see that are successful at doing multi-cloud or multi-region,

they do things like using Kafka to replicate their data or using Cassandra to replicate the data asynchronously between different infrastructures. So that's not something the cloud provider offers, but we help folks manage Cassandra, manage Kafka, so it makes that a little easier. So tell me a little bit about where you came from. Most people don't decide to spring fully formed from the forehead of some ancient god in the form of a co-founder of a company in the infrastructure space everyone has heard of.
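The asynchronous replication pattern described above can be sketched in miniature: writes land in a primary region, and a log (standing in for Kafka) ships them to a replica in another cloud. This is an illustrative toy, not how you would wire up real Kafka or Cassandra replication.

```python
# Toy model of async cross-cloud replication: the key property is that
# a write returns as soon as the primary accepts it, and the replica
# catches up later by draining a replication log, so the replica can
# lag but eventually converges.

from collections import deque


class Region:
    """One deployment of the stack in one cloud or data center."""

    def __init__(self, name: str):
        self.name = name
        self.store: dict[str, str] = {}


primary = Region("cloud-a")
replica = Region("cloud-b")
replication_log: deque[tuple[str, str]] = deque()  # stand-in for Kafka


def write(key: str, value: str) -> None:
    """Synchronous write to the primary; replication is deferred."""
    primary.store[key] = value
    replication_log.append((key, value))


def replicate_once() -> None:
    """Drain the log into the replica, as an async consumer would."""
    while replication_log:
        key, value = replication_log.popleft()
        replica.store[key] = value


write("user:1", "alice")
replicate_once()  # until this runs, the replica lags the primary
print(replica.store["user:1"])
```

Data gravity is exactly why this matters: the bytes have to move through some such pipeline before a workload can meaningfully run in the other cloud.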

Where were you before Mesosphere, if there can be said to be a time before the Mesospheric Era? There is definitely a time before the Mesospheric Era, yeah.

My exposure to the internet and infrastructure and HA basically started as a teenager. So my co-founder Flo Leibert and I, we grew up in Germany in the same town. And when we were teenagers, we started building websites and we started building some adventure games and things like that. So we knew how to build websites. And we grew up in a fairly small town, 50,000 people, and this was the late 90s. And

Even in that neck of the woods, companies started to hear about the internet. And so they're basically wondering what this thing is. Someone told them, "Hey, you need to be on the internet. You need to have a website and you need to have an email address as a business."

But they had no idea how to think about this and how to approach it. And at the time, it was really hard to get a website because you basically had to work with three different companies. You had to find someone to design it for you. You had to find someone to program it and then someone to host it. Those were typically three different companies. And so what Flo and I did is we said, hey, you know, we know how to build websites and we know how to run Linux servers. We just did, you know, dabbled with that on the side. And so...

We actually convinced my mom to register a company so we could program websites for people and host them. So that was sort of our first experience with infrastructure. And even back then we did HA things. We bought two servers, not one. One would have been enough to host all of our clients, but we wanted it to be highly available. So we got some experience with that, running Linux servers, running production infrastructure.

And then, you know, when you grow up in Germany or anywhere outside of Silicon Valley and you're in tech, then you hear the stories, right? You hear about Silicon Valley. And I've always imagined it to be this super futuristic place. And I kind of wanted to check it out at some point. And so in college, I found an internship at a startup down in Redwood City

and joined them for three months and helped them build their website in PHP and Ruby on Rails at the time. So that was my first exposure to Silicon Valley. And I just loved the energy, the people that are full of ideas and the speed at which things get built. And so after I finished college, I joined that same company where I did that internship, worked full time and built the infrastructure there, built the website.

And then my next job was Airbnb. I joined them pretty early on as engineer number four, and so wore a lot of different hats there. And one of the things I did there is also design and build the infrastructure for their massive growth. Hired the engineering team and did some machine learning work there too. That's my other passion besides infrastructure.

And at Airbnb, that's when we started using Apache Mesos. We built data infrastructure there based on Apache Mesos, which our third co-founder, Ben, was working on at Berkeley at the time. I should mention Ben and Flo and I, the three founders, we've known each other for a long time. Flo and I grew up together, and Flo did a student exchange and stayed with Ben's family in high school too.

We all love computers. We talked about Mesos and that's how Twitter and Airbnb ended up using Mesos. And to us as the people running the infrastructure there and the people with the pagers that would go off at three in the morning sometimes, Mesos really felt like magic. It was a 10 times better solution because we could automate a lot more things and the pager wouldn't go off as much in the middle of the night.

And so that's when we decided, hey, this is a great opportunity to start a company because the problems we were solving there with automation, they were not unique to Twitter or Airbnb or any Silicon Valley tech company.

infrastructure challenges that every company would face at some point. And this was around a time when this idea of software is eating the world that Marc Andreessen wrote about, I think in 2009.

That was still fairly new. But we saw that every company, whether it's a bank or an insurance company or a car manufacturer, will have to run large scale cloud infrastructure at some point. And in fact, in order to stay competitive in the future, they're going to have to use some of the same technologies that the best software companies in the world are using.

And so that's where we saw the opportunities. We saw that we had this tool that automated a lot more things and made infrastructure more robust and

scalable and cost effective. The only challenge at the time was, you know, it was an open source project. There were only a few people in the world that knew how to use it. And so we decided, let's form a company around it. Let's build an enterprise product around the open source core. And that's how Mesosphere was formed in 2013. I have vague recollections back in the dawn of my version of the era of computing, where

We would configure a core switch at the office I worked at. Then we rented a van, and a few of us on the tech ops team drove it down to the data center about 30 miles away and did the installation. And we learned a few things. One, we are super crappy movers. Two, it is...

Vaguely disturbing that the company decided not to spring for professional insured bonded movers for this. And thirdly, there's something very surreal about loading a piece of computer equipment that fits in a rack, that two or three of you can lift, into the back of a van, and that van costs less than the switch does.

It was still such a strange and surreal experience. You don't get to experience that in the world of cloud in quite the same way, but it's more than made up for it with the other hilarious and sarcastically disturbing things that it has exposed for us. Yeah, absolutely. You know, I think...

I think interacting with real hardware and a real data center, it's an experience that really, really shaped how I think about stuff. One big way is that always expecting failure, right? Because I could tell so many funny and partially funny, partially painful stories of things that went wrong in the data center, in the physical world.

I think what that taught me is to just expect failure always. Everything can fail at any point in time. And then when you build software, even if you build it on top of a bunch of layers of cloud computing, you have to expect that. And even in the cloud, machines will fail and you get that email from AWS that says, "This instance is now broken."

And I think if you don't have that experience, racking and stacking gear and seeing a bunch of physical failures, you might not think about it the same way. So I wouldn't want to miss that experience. But yeah, like you said, there's all kinds of other funny behavior that happens in the cloud. It's just abstracted away by a bunch of layers.
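The "always expect failure" habit Tobi describes, carried from racking physical gear into the cloud, usually shows up in code as retries with backoff around any call that can fail. A minimal sketch, with an invented flaky operation standing in for a real API call:

```python
# Illustrative retry-with-exponential-backoff wrapper: instead of
# assuming a call succeeds, treat transient failure as the normal case
# and retry a bounded number of times, backing off between attempts.

import time


def with_retries(operation, attempts: int = 4, base_delay: float = 0.01):
    """Run `operation`, retrying transient failures with backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * (2 ** attempt))


# A stand-in for a real call that fails twice before succeeding.
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("instance unreachable")
    return "ok"


print(with_retries(flaky))
```

Real systems layer more on top (jitter, timeouts, circuit breakers), but the underlying stance is the one described here: everything can fail at any point in time.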

I've lived through some funny cloud outages there, too, where packets went in a circle and then, you know, that caused EBS to go crazy and all kinds of fun stuff. Oh, the cascade and dependencies are always the story, the stuff of legend after the fact. And it makes sense in hindsight. Every failure does to some extent. But when you're in the middle of it, you're wondering if you've lost your mind, if the old rules no longer hold, this behavior is completely inexplicable, what happened, etc.

And I guess figuring that out and living through that a few times is really, I think, the best way to learn to approach those things in a more methodical way. But oof, some of those early failures were not fun. Seeing aspects of that manifest in cloud environments is absolutely something that is definitely reminding me that old things are new again. That's true. And one thing we shouldn't forget, too, is that by using...

these cloud services, you give them a lot of control too, right? Because when things do fail, there's only so many things you can do, right? APIs may all of a sudden be read-only and you cannot,

restore your database from a backup all of a sudden, or you cannot promote your read-only database to a master instance all of a sudden. So there's definitely that aspect too, which that was much easier when we were running our old gear. You're in full control, right? You control the whole thing. And if you want to do something crazy to try and fix a problem, you can do that.

Can't do that on the cloud. So last question, I suppose, before we wrap this up. I made a prediction about a year ago. I said five years from now. So we have four years to go.

where I argued that in four years, nobody is going to care about Kubernetes. And my argument was not that it's going to dry up, blow away, and be replaced with something else, but rather that it or something like it is going to slip below the surface of awareness. Just like we don't have to worry about what kernel version we're running on an operating system anymore, we won't care what's handling orchestration in our various data centers and cloud providers. Do you think that that is an accurate prediction, or am I going to be eating some crow?

No, I think that's absolutely accurate. It is a substrate. It's becoming a substrate

And unless you're directly involved in adding features to Kubernetes or you're using it in some other way where that requires you to make changes to it directly, you're probably going to use some other higher level API. You're probably going to be interfacing with a CI/CD system as a developer. And maybe you know that it's Kubernetes under the hood, just like you know right now that it's Linux under the hood.

But you're not really interacting with it directly that much. Or if you're a data scientist or you work in the data infrastructure world, you're much more likely to use a tool like KUDO to deploy that service versus trying to piece together your Kubernetes primitives in order to stand up that service. So I absolutely agree with that. I think that's a trend.

it's sort of this wave, this abstraction layer wave that's always behind us or the rising tide of abstractions. And so I think the same thing will be true for Kubernetes. I think most folks out there, individual developers or end users of a platform,

They'll know that it's there, but they're gonna be talking to other APIs at different levels. And we're seeing a lot of activity around CI/CD right now. I think things like Argo and Tekton are super exciting. There's a lot of activity around that and people wanting to use GitOps approaches to deploy their software. So I think those are some of the signs that we're seeing of the abstraction layer

rising, and then of course, serverless too. So if people want to hear more about your thoughts on these and other topics, where can they find you? So they can find me on the usual places. I'm pretty active on Twitter. I'm on LinkedIn. I give talks at conferences sometimes. Those are some of the first places to find me.

Excellent. Thank you so much for taking the time to speak with me today. Absolutely. Thank you so much for the opportunity. Tobi Knaup, CTO and co-founder of D2iQ, formerly Mesosphere. I'm Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave it a great rating on Apple Podcasts. If you hated this podcast, please leave it an even better rating on Apple Podcasts.