That feels like it was a five-minute convo and we're a whole hour in. Dude, it was a great one, huh? Action. We did it. We're here. Cool.
It feels like sometimes we get caught up in this new Gen AI world and we forget that half the internet is still powered by predictive ML, and that ML is still making the majority of the money for folks. I would say most of the value that people get from decisions that come from machines is not coming from LLMs right now. Not yet. And it's still debatable if it ever will. For sure. You get to see...
From this spot where you're talking to customers all the time who are actually getting value from stuff. Yeah. And they have to cross almost like this chasm. So it's kind of interesting to think about right now, like all the LLM stuff, where does it fit in the...
like, adoption curve, or sophistication. You know, you see the bell curve, right, the technology adoption curve. And we've been around for a while at Tecton, and we've had to think a lot about that adoption curve. So,
Just for some background, maybe I'll give some background for everybody, kind of who I am first. So I'm Mike. I'm one of the co-founders of Tecton, and I'm our CEO. Before Tecton, I led the Michelangelo team at Uber, which is the infrastructure that powers all of the AI, machine learning, and decisioning that happens at Uber. That's real-time decisions, slow decisions, and, very importantly, things that are in production. And before that, I worked on the ads decisioning system at Google: which ads do you see when you type a search? Also very production oriented. Nothing matters if you don't actually put it in front of users. And that's how they print money. Yeah. And it's a really good example of how this stuff really matters for some businesses. There are some businesses, and we should come back to this, whose whole business model requires really smart decisions to be made automatically. In the spirit of: hey, let's really figure out how to get as much as possible to production, because that's where you unlock the value.
What was getting in the way of getting to production at Uber? Well, there were a lot of things at first. We didn't have a way to train models, we didn't have a way to serve models, stuff like that. But in my first month or so there, I built a spreadsheet that tried to catalog, across hundreds of data scientists, all the places where people wanted to use machine learning in some way, right? And this is back in the late 2010s. And what we found is that there were a lot of projects that were super, super valuable, and a lot of projects that were just random experiments, where it was like, I don't know if we should spend any time on this kind of thing. And then we started learning some other dimensions that we felt were really important to catalog and categorize, to figure out how to prioritize them. Like: does this team have its shit together? Do they even know what they're doing in the first place?
That was the official title? That was the column header, right? Dude: they have their shit together.
But also, if we help them build this thing, do they have enough people to take over ownership of it afterwards? Because we were the central platform team, right? So we had this big list of all these projects, and we went through this process of helping them out. We'd start with surge pricing, then work with the ETA team and the fraud detection team, stuff like that.
And that was good. We were knocking things out, getting things to production. When we'd find a problem, we'd fill the gap, add it to the platform, stuff like that. Along the way, we saw...
We got the obvious stuff built: the model serving, the model training. That's the stuff you hear about all the time. The thing that was surprising, that people didn't talk about much at the time but was a major blocker we spent a ton of time on in every individual project, was just the data pipelines. It's really a data engineering problem. At this time, the industry was at the stage where we'd just spent all this time and investment, coming off the big data thing, figuring out how to record all the data and bring it together so you can actually do something with it, right?
And now it was like, hey, let's do something with it. What people were doing at the time was dashboards: descriptive things, diagnostic things. There's a problem, let's figure out what happened. But then people were like, let's get more value, let's move to more predictive, prescriptive things, right? And that's more forward looking.
Doing that requires a whole new set of technology, but it also requires a strong link to the data. And so we were building those data pipelines again and again, and we realized, hey, we're doing the same thing in every single project. Let's centralize this, let's automate it, just bring it into the central ML platform. And that's what we called the feature store. That was kind of the genesis of Tecton, actually, because we built this feature store, and, you know, that wasn't really a term at the time, but it became this inflection point in AI adoption at Uber. It was kind of a Cambrian explosion. Did it make it more self-serve?
Super self-serve, because you didn't need us to build data pipelines for you. We'll give you a way to configure which data you want and how to transform it, and it'll be available to your model in real time. You can build high-quality training sets so you can build your models. Dude, you know what? There are a few different posts I've recently read on the Uber ML and data engineering blog, and it's such a breath of fresh air, because it's all about these predictive ML use cases. And it showcases how wild you can get when you have this baked into your culture. First of all, I think with Uber, it's just in there and folks are learning it. Even if you're not on the data team, you can, if you want, learn a whole ton about data, ML, AI, all of that stuff. It's all about democratization there. That was our mission on the team. And it was kind of weird, because literally my first week on the job, they were like, okay, you're going to go present
the company's machine learning strategy to the CEO. Like, no pressure. I didn't even know anything about the company yet. So that was kind of a funny meeting. It went well, but it was like, you guys didn't really set me up for success on this one.
But yeah, I bring all that up because that was the stage where it was like, how do we do ML? And Uber was definitely at the front of things, where we were trying to figure out not just how to do it, but how to have the business actually be driven by it. It wasn't a random side thing. We wanted to power pricing with this thing, we wanted to power fraud detection with this thing. There's a ton of fraud detection. And those feel like the big rocks. And then you have the sand, which is where, when I was reading these blog posts, it was so refreshing to see that
even when someone signs on to Uber now, they get different flows based on what they've been identified as. So it's very customized: just when you're onboarding, or when you're opening the app, you're seeing a different thing. And that, to me, felt like it wouldn't be possible unless you have that democratization of the ML, AI, personalization piece of it.
100%. And that democratization was all about: let's make this possible for a lot of people, right? And let's try to make it kind of easy. Not that it was trivial, that anybody could go build the world's best-quality thing in 20 minutes. But if we had a way to get value and we put some smart people on it, they could figure it out and build something pretty good, right? And you did say something else that...
You recognize, and I think I read this in another one of their blogs, like the From Predictive to Generative blog post that they did. They talk about how
Different models and different use cases have different SLAs. If it's an experimental AI project, you're not going to get the same kind of love from the team when that shit hits the fan at 3 a.m. as the surge pricing model does, which you know is driving so much value. Absolutely. We see that with our customers today. They have...
So, in going through this journey, just to finish the thought and complete it: we recognized the value of all this data, of bridging the gap between the data you have and how you use it for automated decisions. And at the time I was running an ML platform meetup here in San Francisco. It was companies like Facebook and Twitter, the obvious kind of companies. All the platform teams would get together and just show what they were working on, and everyone was working on something tangential. It was obvious that this was a thing a lot of people were going to need. So we started Tecton to basically build the best version of that and bring it to everybody.
So we feel like we've kind of created this category of feature store. It's kind of morphed into a feature platform. And we're the leader of it today, and that's what we do. I can say that nobody thinks as much about feature problems, features, data pipelines for ML as I do. You've been doing it for the last, what, seven, eight years? Yeah, something like that. Well, I started working on this stuff at Google in 2013. Over a decade. Yeah.
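To make the configure-your-data idea from a moment ago concrete, here's a minimal sketch of a feature definition in plain Python with pandas. The names and API shape are invented for illustration, not Tecton's actual SDK; the point is that one declared transformation can serve both training and real-time serving.

```python
# Minimal feature-store sketch (hypothetical names, not Tecton's API):
# declare a transformation once, materialize it for training and serving.
import pandas as pd

# Raw events a team already has (e.g. from a warehouse or a stream).
events = pd.DataFrame({
    "user_id":   [1, 1, 2, 1, 2],
    "amount":    [10.0, 25.0, 8.0, 40.0, 3.0],
    "timestamp": pd.to_datetime([
        "2024-01-01", "2024-01-03", "2024-01-04", "2024-01-06", "2024-01-07",
    ]),
})

def user_spend_7d(events: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    """Feature: each user's total spend in the 7 days before `as_of`."""
    window = events[(events.timestamp > as_of - pd.Timedelta(days=7))
                    & (events.timestamp <= as_of)]
    return (window.groupby("user_id")["amount"].sum()
                  .rename("spend_7d").reset_index())

# The platform would run the same definition offline for training data
# and keep an online copy fresh for low-latency model serving.
print(user_spend_7d(events, pd.Timestamp("2024-01-07")))
```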
Yeah, it's been a long time. So this is why we can talk about these adoption curves, right? A lot of ML projects back in the day were very experimental. And by back in the day, I'm talking about even 2018, 2019. People were like, we would love to find a way to use machine learning in our business, but we don't really know what to do with it. Can you help us?
Today, it's: we know this is possible, and we've just got to get this stuff into production. Here's exactly what we need. That means these projects have a much higher expectation of making it to production, so the perception and the ROI calculation for this stuff is pretty different. It's lower risk. It's: we're confident we can do this, and we kind of know what the value is going to be. We know we can reduce fraud, and if we reduce fraud by 1%, it's going to give us $10 million a year,
or whatever the equivalent is. We get X percent more click-throughs on the website, blah, blah, blah. In the LLM world, it's a little different. It's: we don't really know if we can do this. It's kind of a skunkworks project. We don't even know if this technology can do the thing we aspire for it to do. With the reliability that we need. Forget even about the enterprise-readiness part of it. We literally don't know if the thing we want to do is possible. And there are a lot of those kinds of projects, because people don't really understand what's fully possible. Well, it does feel like we've got a very clear understanding now of where predictive ML can add a ton of value. There are known use cases now, whereas maybe in 2018 those use cases weren't as clear; it was more, could we do stuff? When we started Tecton, I remember in one of our investor conversations, they go, hey, what use cases are you going to be good at? And I was like,
I don't even know all the things people use machine learning for, so I can't tell you which ones are going to be the big ones. I can tell you what we did at Uber. But today it's very well understood where the value of high-quality automated decisions comes from. And I'm saying it that way to convey the broader concept, because sometimes that also includes rules-based decisioning. Every one of these use cases goes through
a little bit of its own maturity journey. If you're just getting started, the most basic thing is an if statement: if this person is in this country, show them this; if this person is in that country, show them that, right? Then you make it a little more complex: well, but if it's nighttime, do this. And you grow this business logic over time. Sometimes you'll adopt a rules engine. Certain types of use cases, like in the financial world, go really heavy on rules engines, and those are basically just super fancy if statements and case statements. Then at some point they go: okay, this thing is just a really brittle mess, and there are better ways to do it. Let's train a model and put a model in there. The rules engine basically is a model you're building by hand, a hand-coded model in some sense; there's a sketch of that journey below. But the use cases where the value accumulates, there are a couple of them.
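Here's that maturity journey in miniature: an if statement, then a data-driven rules engine, then a trained model taking over the hand-coded logic. This is a hedged sketch with made-up thresholds and toy training data.

```python
# Stage 1: a hand-written if statement (illustrative threshold only).
def decide_v1(txn):
    if txn["country"] == "XX" and txn["amount"] > 500:
        return "review"
    return "approve"

# Stage 2: a "rules engine" -- really just data-driven if statements.
RULES = [
    (lambda t: t["amount"] > 500 and t["is_night"], "review"),
    (lambda t: t["new_device"] and t["amount"] > 100, "review"),
]

def decide_v2(txn):
    for rule, outcome in RULES:
        if rule(txn):
            return outcome
    return "approve"

# Stage 3: replace the hand-coded rules with a trained model.
from sklearn.linear_model import LogisticRegression

X = [[500, 1, 0], [20, 0, 0], [800, 1, 1], [15, 0, 1]]  # amount, is_night, new_device
y = [1, 0, 1, 0]                                        # 1 = fraud label (toy data)
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[600, 1, 1]])[0][1])          # fraud probability
```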
So we particularly focus on this real-time, fast-decision world. But you can look at a lot of companies, especially in the financial world, through this lens: at their core, they're basically just decisioning businesses. If you go to any fintech company, everything related to that company is about how you make, I guess, four kinds of decisions really well. We need to figure out how to acquire customers well, so let's be really good at automated marketing. We need to estimate risk for the customer: given that this person wants a loan, how much credit should we give them? How much of a credit card limit? How much should we underwrite their insurance for? Whatever the financial product is that we're estimating risk for.
The third category is fraud detection. Is this person who they say they are? Right? That's a really huge area. When I hear about the fraud use case, I'm just like the cartoon where dollar signs go into my eyes. Yeah. I mean, that's a really large-dollar-sign one, and it's one that anyone who touches money has to deal with. But it's a very large category, not just in how many resources you allocate to it, but in how large an impact this set of decisions has on your actual business performance. If you're the CEO of Coinbase, you care a lot about your fraud detection system. It's not a random thing, right? So you have the acquisition, you have the risk
estimation and underwriting for your customers, you have fraud detection, and then you have something that every company deals with but doesn't always think of as automated decisions, which is just operational things. How do we better enable our customer support team? How do we help them support our customers faster? Then there's a 4B, or maybe a category 5, which is personalization, though that bleeds into the acquisition stuff: how do we make our products something customers like and want to use? And so for a lot of companies,
really, what they have to get good at is building these decisioning systems. If you're the CEO of some neobank kind of thing, really you're thinking: I need to make my business and my team really good at delivering high-quality decisions reliably, quickly if that's what the product surface requires, and accurately, so you get the value. And accurately doesn't always mean
what you'd expect. For example, in fraud detection, you'd think: let's catch as many fraudsters as possible and reject them. But because of the decision science behind it, the ideal fraud rate is not 0%. If you're rejecting every fraudster, you're also rejecting too many good people, right? So setting the threshold differently, where you let in a couple more fraudsters but a lot more good customers, can actually make a really big impact on your bottom line.
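A toy expected-value calculation shows why the profit-maximizing threshold admits some fraud. Every number below is invented for illustration; real decision science would estimate these curves from the business's own data.

```python
# Toy economics behind "the ideal fraud rate isn't 0%" (all numbers invented).
AVG_FRAUD_LOSS   = 200.0   # cost of admitting one fraudster
AVG_CUSTOMER_LTV = 50.0    # profit from admitting one good customer

def profit(cutoff: float) -> float:
    """A looser cutoff admits more good customers AND more fraudsters."""
    customers, fraud_share = 10_000, 0.02
    # Pretend detection model: good users mostly score below the cutoff,
    # fraudsters are admitted roughly in proportion to the cutoff.
    good_admitted  = (1 - fraud_share) * customers * min(1.0, cutoff + 0.3)
    fraud_admitted = fraud_share * customers * cutoff
    return good_admitted * AVG_CUSTOMER_LTV - fraud_admitted * AVG_FRAUD_LOSS

for cutoff in (0.0, 0.3, 0.7, 1.0):
    print(cutoff, round(profit(cutoff)))
# Profit peaks at a middle cutoff, not at the strictest setting:
# rejecting every last fraudster costs you too many good customers.
```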
This is something we hear from our customers all the time. And that's, again, going back to the maturity: it's not like all of a sudden we woke up and said, we're doing fraud now. It's been a constant cat-and-mouse game. For sure. But it's become more sophisticated over time too. And doing it with ML. Doing it with ML, yes, but the fraudsters are also using ML. And so we have customers
who are confident that their opponents, the fraudsters, are actually training ML models to estimate or imitate their anti-fraud models so they can work around them. So it is a constant cat-and-mouse game. But when you talk about use-case maturity, which problems people are using machine learning for versus not, there are these categories of problems where it's getting very sophisticated, and fraud is one of those. And how sophisticated a use case gets correlates with how many dollars can be saved or gained by it, how much it can affect the business. So I would put fraud in that bucket, and the risk stuff, everything from credit decisioning and loan decisioning to insurance underwriting, that's all in there too.
Recommendations, you know, is a much broader topic. I'm sure you've talked to a bunch of people who work on recommendations in one way or another. There are a bunch of really good blog posts that give an overview of how to build a recommender; I think Eugene wrote a cool one about this. But it's really broad, and there are a lot of different ways to do it. Whereas if you go to fraud, everyone's kind of trying to do the same thing again and again; it just needs to be done really well, so you can go a lot deeper. Yeah, because there's a lot of variety you can get with a recommender system. And the ground truth with a recommender is...
I guess, fuzzier? Because you don't know: if we hadn't shown them this, would they have still bought it, or whatever the recommendation could be. With fraud, it's probably not as debatable. You can figure out if it was fraud. Yep. You find out. You don't always find out immediately, because you find out from a credit card rejection or a chargeback later on, but you get the data, because you know when you lost money. Yeah. So that's a real thing. But people get to that point of, you know, well,
every company also has to decide: which of these decisions do we need to own in-house, build a system for, have a team that knows how to build these models for, et cetera? And which of these things should we just buy some API for and outsource the decision to someone else? So there are different paths. Some companies say, we're never going to rely on anyone else, and they slowly work their way toward their thing being more sophisticated over time. The people who do this are the folks for whom their proprietary data is particularly valuable, particularly predictive of fraud. And then there are people who are like, look, I'm basically doing the same thing as the next guy. There's another company doing fraud detection as a service; I'm just going to use that as my API. It'll get me to 80% good, but it won't get me to 100% good. And what we're seeing now, as you get farther along the curve, is people mixing both: building in-house, but using the APIs, the external services, as extra signals. Like a gut check.
Well, kind of like a gut check, but really just an input to the model: what does the other model, the outsourced one, think? It gives me a sense. Think of it like this: there's a creditworthiness model somewhere that someone trained on all of the people across America. That's great. But if you're a fintech company, maybe you service a very specific socioeconomic demographic, so the national-level model isn't really tuned for you. It's an input, but that's all it is. Meanwhile, you have a bunch of data specific to those people. So you think: these external things aren't as predictive as they could be, because they're not tuned for my population, my customers, and I have all this proprietary data I've collected from my customers that I should be able to use, right? So people combine the best of both worlds. And that's what happens when you get even farther along the maturity curve. So, bringing it back to the very beginning of this:
that's what we're seeing in predictive ML. People are still moving forward, and it's a cat-and-mouse game, so you can't just chill out and do nothing. There's no fraud team that just has their fraud model and goes, cool, that's done, let's move on to the next thing. Yeah, we're going to go figure some of those out. It's the kind of thing the CEO reports on earnings calls. It's a thing a lot of people are being hired to work on, because it's very valuable, right? There's a whole category of these decisions. But on the other side of
the maturity curve, it's the less problem-driven, less immediate-dollar-sign-driven projects. That doesn't mean there's no value there, but the ROI may not be as obvious or well understood up front, and the impact may not be as immediately observable. Sometimes projects are: we want to run this, we think it'll be good, but we're not going to be able to see the impact immediately. Yeah. And sometimes the feasibility of the project is a question mark. Yeah. And that describes a lot of the newer LLM-type projects. It's new, cool technology, and it's not even super new anymore, people are getting this stuff into production, but it's not at
your bank fraud team's level of sophistication. And so you need different things for different levels of maturity through that journey, and for how much it's powering the business. Going back to what we were talking about with Uber: we've got almost these different buckets of models that we support, because we know how valuable they are. And we can say very clearly: this model gets
all the love and care possible, because this is what the business is built on. 100%. Yeah. We've had to get really good at learning which category a given use case fits into, and at talking with our customers to help them figure that out, because they often don't know. Uber, you know, they're
way out there on the maturity curve, right? So they have all this experience of what these different categories are and how to operationalize different levels of support, et cetera. But if you're a bank or an e-commerce company, you've got three models, right?
You don't really know; maybe you think of everything as mission critical. So one of the things we do at Tecton is, people come to us with these problems and they're like, look, we're just trying to figure it out. And we help them figure out what they actually need. You're building a recommender system: how important is this to your business? And how
critical is it? What are the SLAs you want to take on as a business? If the problem is one where, you know, it actually doesn't matter if this thing fails, we can just press retry, who cares? We'll press retry. We're going to use these predictions next week anyway, or the predictions are just going into a slide deck where we show some forecast. Then you don't need to build all the systems, and take on the corresponding costs and overhead, that come with treating it as a mission-critical thing for your business. Just be fine with pressing retry.
The sets of use cases we spend a lot of time on, the ones that tend to be really correlated with value, are the ones where people have decisions automated at a volume and speed that humans can't be involved in. The fraud one's a good example, but also recommendations, real-time pricing, a lot of the real-time decisions that happen in the live customer flow, where you can't actually have a person there doing the thing. Yeah, checking: is this what it says it is? Yeah, or should we be serving this to this person? Yeah. And a lot of those use cases
are, as we were talking about before, critical to the business. One of our customers is one of the biggest insurance companies in America, and they use us for all of the decisioning logic when someone's signing up for insurance. And they are super, super careful about reliability. They're one of America's favorite companies kind of thing. They can't ruin their brand, they can't go down, right? It's a super high-trust thing. And when you look at how those teams operate, they go slower than other
kind of cool-technology Bay Area companies, but they do that intentionally. It's not that they're not good at technology; it's that they're checking every single box along the way to have full reliability, disaster recovery, resilience, all of that kind of stuff. They minimize the chance that something bad happens, because they have to be there for their customers, and it's that much more valuable. It's an enterprise use case. Yes. You know what I was also going to ask you: how have you seen the best teams translate the value the data and ML teams are creating to the business? Oh, that's a good question. Do you mean with respect to how they report it up within the company? Yeah. You can always say, hey, we made or saved 10 million bucks, and that's great, but I think a lot of times it's much more nuanced than that, right? Yeah. And sometimes you
can't be that clear on it, or if you are, you're fudging the numbers. You know what, though? The use cases that get funding tend to be the ones where it's more clear, because before you come to a cool vendor who can help you do something, you have to know: is this even being prioritized in the company? Back to the fraud example briefly: that's always a priority, and it's never, we're not sure if we should work on this kind of thing. Right. Yeah. But there are a lot of projects that are like, well, we think we could do this, but we don't really know its value. Those get scary. They are actually scary, because you're one reorg away from that project not existing. Yeah. Right. Or one layoff away, or whatever it is. Right.
So there are a couple of ways we help our customers with this. One thing we do, by the way, is we have a value framework that we work through with them. We come in and help you ask the right questions, so we can write the business case down with you and you can show whoever's upstairs the value: hey, this is why we should spend this money, this is why we should work on this project in the first place, right? And it all comes down to a few things: we're going to make more money, we're going to lose less money, it's going to cost less, or we're going to reduce risk. The place something like Tecton typically helps is in helping you go faster as a team. For a lot of these use cases it's very important to be able to react to the new hacker, the new fraud vector, whatever, and go from idea to
in production much quicker. That velocity. Yeah. And that helps with the market reactivity I was just referring to, but also, we try to take something that took six months and make it possible in six hours. So how many more things can you do in that year as a team? This is the logic a data leader should be thinking through: I'm spending, just using fake numbers, a hundred thousand dollars on this person. They can do two cycles this year, or they can do one cycle every day. How much more value am I going to get out of that person? So that's the speed dimension. The second dimension is accuracy: your models will get better, right? One of the things we do is help you build types of features, different signals into your models, that you just weren't able to build before. And so
you're getting more of the good information from your data into your models, and your models become more accurate. So it's new signals, but it's also that a lot of companies go: look, we have all of this data, all of these cool signals, in different places, but the way we architected this thing, we have to choose. It's either going to use the batch data or only the real-time data; all this data doesn't come together. So how do we use it all together in one thing? We help people use all of their data in every decision.
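Here's a minimal sketch of what using all of your data in one decision can look like at request time: a slowly recomputed batch feature, a fresh streaming counter, and request-time data merged into one feature vector. The in-memory stores and names are hypothetical stand-ins for the offline and online feature stores.

```python
# Sketch: combining a slow batch feature with a fresh streaming signal
# in one feature vector at request time (hypothetical stores and names).
batch_store  = {"user_42": {"avg_order_value_90d": 61.0}}   # recomputed nightly
online_store = {"user_42": {"txns_last_10min": 4}}          # updated per event

def feature_vector(user_id: str, request: dict) -> dict:
    """Merge batch, streaming, and request-time data for one decision."""
    return {
        **batch_store.get(user_id, {}),
        **online_store.get(user_id, {}),
        "amount": request["amount"],   # known only at request time
    }

print(feature_vector("user_42", {"amount": 250.0}))
```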
So that's the accuracy dimension. And the third is reliability and scale and things like that. As people go through that curve and these use cases become more important, you move away from being okay with a hacked-together, duct-taped thing, toward a world where you actually need that level of resilience. And we have a bunch of customers who have either had major outages with their existing systems that literally cost them tens of millions of dollars, so there's a cost component to it, or who have ongoing costs because their system is implemented in a really inefficient way. Like, they recalculate every single thing every time they do a prediction, and you don't need to do that.
The right way to do that is with incremental compute, and there are nice ways to do that; we can help you. Actually, we had Rohit on here a month or two ago, and he was talking about that. You can save a lot of money just by tuning your freshness requirements a little bit and asking: do we need to checkpoint this data? How fresh do we want this data? If you're okay with it going from an hour to a day, that can potentially save you a ton of cash. Yeah, 100%.
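For illustration, here's the incremental-compute idea in miniature: maintain the aggregate as events arrive instead of rescanning all history for every prediction. This is a generic sketch, not any particular system's implementation.

```python
# Incremental compute vs. full recompute for a running aggregate.
class RunningSpend:
    """Incrementally maintained total; update cost is O(1) per event."""
    def __init__(self) -> None:
        self.total = 0.0

    def update(self, amount: float) -> None:
        self.total += amount          # process only the new event

def full_recompute(history: list[float]) -> float:
    return sum(history)               # O(n) over all history, every time

history, running = [], RunningSpend()
for amount in [10.0, 25.0, 8.0]:
    history.append(amount)
    running.update(amount)
assert running.total == full_recompute(history)  # same answer, far less work
```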
So those are some dimensions of value. If you're listening to this and you're working on some data science or machine learning project, the way it's been motivated to you probably involves one of those things. Like, we're doing this because we've got to make the model more accurate. There tends to be a primary bottleneck, but the underlying thing is that we actually care about all of these; there's just one that's most painful first. Well, and making the model more accurate is one piece of it. But then you're thinking:
What does that enable? If the model is more accurate, what does that mean? For sure. And that depends on the use case, right? And it's really hard for platform teams to do this at scale. If you're the MLOps person or the ML engineer on your company's ML platform... actually, I haven't found it, and I'd love to hear if someone has, but I've never met a platform team, and I struggled with this at Uber, that has a really good way to automatically collect and track the impact of their help to all of their internal customers: the value they're bringing to the data scientists, or whoever's using the platform. That is a great call. And it's hard, right? The way you'd like to do it is to say, well, what's the common currency across all of these? It's dollars. So why don't we just ask all of our customers, the teams that depend on our stuff:
how much money did it save you? How much money did you make? How much is the speedup worth to you? Cool, let's get a dollar value for all of these and bring them together. And, you know, that's kind of the best you can do, but it's still a shitty answer. And here's why it's a shitty answer: things change, and the people you're talking to usually aren't even good at giving you those answers. So you have a bunch of half answers, they're out of date, and you've got to aggregate them together. So it's not a very high-quality signal that you can present to your boss. Also, you may have a great product that's bringing a ton of value to the data scientist, but if the data scientist is working on the wrong project and they're not making money... Yeah, but that's actually up to
the business leader. There's someone who's the head of risk, someone who's the head of acquisitions, and they have to make sure their team is working on the right problems. And it's okay; sometimes you work on experiments and they fail, and that's fine, but you hope you get it right on average, so you're net adding value. But you need those leaders to get it, so they have the right mindset about investing the right amount, not infinitely, but the right amount, to enable the team to be successful. I do like the idea of speed of iteration and being able to shorten that, because it feels like something you can always use as an anchor and say: look, whatever we're paying this person, it's like we have two of them now because of the platform. Yeah. One of the things we ask our customers along the way, as we help them figure it out, and we have a very hey-we're-your-partner, here-to-help-you kind of approach,
is: how many people are working on this thing today? And if you didn't have to do this stuff, how many people would need to work on it? What else could they do? Let's talk about the before and after. What would this enable? Yeah, exactly. We write that down; let's be really crisp about it. And that's a thing we see people thinking about more and more now, because we've gone through this journey ourselves at Tecton, where in the early days,
all this stuff was basically impossible. You could never use streaming data or real-time data. That was the initial thing: let's help people use their fast data in their decisions. I think of Tecton as having gone through three phases. The first phase was: let's make this stuff possible for people. Our first customers were people who were like, hey, we really want to use this real-time data, we have all these streaming events and we've got to make our models better, and we don't know how to do it; it's impossible for us right now. Can we use your thing to make it possible?
And that was great. We put in the work, the platform's excellent, and we help you do stuff that was never possible before, so you unlock a lot of value. The second stage was: cool, I know this is technically possible, but I work on a team, and I'm in name-your-Fortune-100 company, right? I'm in a company with a bunch of special rules. We have compliance, we have weird politics, a lot of stakeholders. This needs to work for our organization. We need to be able to share these things across teams, I need to be able to report up to my boss how much money we're spending on this thing, have visibility, controls, all of that kind of stuff. And that was really a bottleneck for a lot of larger companies and teams that need to collaborate.
It was preventing them from being able to use modern ML tooling. So that's the second set of things we worked on as a company, and I think we're definitely coming out of that zone now, where we feel really good about it: if you're a Fortune 100 company, there's no reason you shouldn't be using Tecton for your streaming and real-time decisioning. And the third thing now is really interesting. If you're someone working on these problems at any big company, you should not be blocked technically, and you should not be blocked by organizational red tape. So what's the gap between what you're doing today and what you could be doing? Well, think about it: you're probably not the best machine learning engineer in the world. There's going to be someone who's better. What would that person be able to accomplish that you couldn't? Maybe they go faster. Maybe they're smarter and come up with better designs, better features; they can do the thing in a different way, right? And our goal is to help every one of those people
be the best ML engineer. Oh, now it makes sense, because I saw what you guys released with helping with features. This feature creation, being able to almost consult AI, which is pretty meta in a way. Yeah, it is. It's really cool. So we launched the AI co-pilot, and that's for people who are building AI. Yeah, yeah.
So you hear about: how do I use AI to help me write code, how do I use AI to help me write my essay, whatever. But a lot of the decisions in an organization are not driven by an LLM. For many reasons that make a lot of sense, they're still structured decisions driven by predictive models. And that's great, but does that mean the model needs to be hand-tuned, hand-built, stuff like that? Yeah. Why would you expect that you, in particular, are going to build the best model? So what we're building is a co-pilot to help every ML engineer and every data scientist build the best possible models for their unique circumstance. And that can be as simple as: hey, right now, the features you use in your model are things that
you just have to come up with. You've got to think about them, invent them. Well, and I remember you telling me back in the day that some of the best data scientists you knew always had this intimate understanding of the data. They would have to spend so much time with the data to get to where it was almost second nature. And now...
That is a perfect use case for AI, because it can ingest all that data and give you those kinds of, well, have you thought about this? So yeah, there are two kinds of personas here. By persona, I mean: if you go to a bank, or any big company, you're going to see two people. Person A sitting in a chair, person B sitting beside them, and they have different skill sets. The first person is really good at ML stuff but doesn't really know what's going on with the problem. They're not, hey, I've been dealing with credit card chargebacks for 10 years. The second person is like, look, I've been doing credit card chargebacks for 10 years, I'm pretty good at SQL, but I'm not an expert at machine learning. Some companies have the person who can do both, but it's really hard to build a whole team of them. They're not impossible to find, but it's really tough, and even then, they're not the best in the world at either of those things. What we're trying to do is: if you have subject matter expertise, allow that subject matter expert
to impact your production system directly, right? So they can convey their intuition to the system: you know what, new users who sign up from this channel, there's always something fishy about them. I never knew exactly how to catch it, but that's where you should go look, and the system should help you figure out the right signals. Right. A sketchy-ass channel. That's so funny. They have the intuition about which alleys are sketchy and which aren't. That's so true, and it's so in their head. They can document all they want, but nobody reads that documentation, right? And then you have the other people who, to continue the metaphor, are really good at driving a car but don't know the city. They don't know anything about the alleys, so every alley is: I don't know, I'll go down it. And that's also a waste of time. So an AI co-pilot can automatically understand what's going on in your data and guide that person in the right direction. And what this adds up to is: we're going to help
automatically come up with feature ideas, and automatically author, literally build, those features. In Tecton, when you write a small snippet of transformation code, that becomes a fully productionized feature right from the beginning. It's all about production pipelines for important decisioning systems. All the hard parts are solved; now let's get the AI to help people write the right things, and write them quickly. So you've got the ability to say: cool, I want to create some productionized features. But how are you making sure they actually do what they say they do, and what you want them to do? Is there some feedback loop where you're evaluating it? We let people evaluate on their own. A lot of use cases have pretty complex evaluation systems that they've been building for years. So we give you the feature data back, and this is where we're the best at making fast, point-in-time-correct training data, so you can figure it all out. And then you can say, hey, actually this feature sucked, let's delete it. Right. Because, you know what, I forgot that you plug into a predictive ML system, so all that evaluation
is very mature, as opposed to plugging into some chatbot where you have to figure out the evaluation from scratch. Right, exactly. The eval systems tend to be a you-already-have-it kind of thing. But one of the things we are starting to do is let you report your labels back to Tecton so we can feed them to the AI. Right.
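Point-in-time correctness is the key property of that training data: each labeled event may only see feature values computed before it happened, so no future information leaks into training. Here's a minimal sketch using pandas.merge_asof, with illustrative data; this is the general technique, not Tecton's implementation.

```python
# Point-in-time-correct join: each label row only sees the latest
# feature value computed BEFORE its event time (no leakage).
import pandas as pd

labels = pd.DataFrame({
    "user_id": [1, 1],
    "event_time": pd.to_datetime(["2024-01-05", "2024-01-09"]),
    "is_fraud": [0, 1],
}).sort_values("event_time")

features = pd.DataFrame({
    "user_id": [1, 1, 1],
    "feature_time": pd.to_datetime(["2024-01-02", "2024-01-06", "2024-01-08"]),
    "spend_7d": [35.0, 75.0, 40.0],
}).sort_values("feature_time")

training_set = pd.merge_asof(
    labels, features,
    left_on="event_time", right_on="feature_time",
    by="user_id", direction="backward",   # only past feature values
)
print(training_set[["event_time", "spend_7d", "is_fraud"]])
```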
So then the AI can go... it's not going to be perfect, because it may not have your specialized evaluation system. But if it knows, hey, these charges were fraud and these ones weren't, then it can at least guide its search for a better feature. It can find features that at least look really predictive from the data it has, and then suggest: hey, look, I found these 10 feature ideas. Check them out. They're pretty different from the other features in the system. Tell me if you think they're cool or not. Maybe you say, we don't like that one, because we never want to use that signal as a feature. Or maybe you say, cool, let me try all 10 of these, and then I'll press deploy-to-production for some of them. That goes back to the expert saying, no, I've been down those alleyways, get that out of here. Or, ah,
I actually haven't thought about that, let me try it. And it feels like that would make a very quick iteration cycle, too, because it suggests a bunch of features. But then you could get to the other side of the coin, where it's like, well, we've got a lot of noise with all these suggested features. How quickly can you run and see if a feature has value or not?
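One cheap way to triage a pile of suggested features, sketched below, is to score each candidate on its own against historical labels, for example with single-feature AUC, and discard anything near 0.5. This is a generic screening heuristic on synthetic data, not necessarily how the co-pilot does it.

```python
# Quick triage of candidate features by single-feature AUC on past labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)          # historical fraud labels

candidates = {                                   # hypothetical suggestions
    "txns_last_10min": labels * 2.0 + rng.normal(0, 1, 1000),  # has signal
    "account_age_days": rng.normal(0, 1, 1000),                # pure noise
}

for name, values in candidates.items():
    auc = roc_auc_score(labels, values)
    print(f"{name}: AUC={auc:.2f}")   # ~0.5 means no signal; drop it
```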
Yeah, so this is up to your own eval system, because you can run whatever you want there. But you do raise a good point. One of the problems we tackled in that stage-two phase I mentioned, making it work for teams, is: I've got these different teams kind of working on the same thing, and I'm sure they're doing a bunch of duplicate work. And all of these use cases are pretty data intensive, which means cost. So I'm worried that I'm running the same pipeline as this other team; this guy rebuilt this pipeline, and now I'm just paying double the cost. Yeah. We have customers where a single pipeline can cost tens of thousands of dollars a month, depending on scale. The CFO is cringing right now. Yeah. So you've got to be careful about this; it's an important thing for them. And we have stuff to help them find duplicates: hey, heads up, these things look like they're basically doing the same thing, do you want to kill that one? Wow.
Kind of thing. Yeah. And we're going to have a lot more launching here. I mean, now is also a good time to plug: we're definitely hiring in engineering, and we'd love some excellent people to come work on these problems. Basically, if you're a hard worker and you're a humble, curious person,
we'd want to talk to you. In the USA? In the USA, generally; we have a couple of people in LATAM also. We've got offices in San Francisco and New York, so if you want to work in an in-office environment, hit us up. But yeah, we're working on these really cool projects. And they're things where we don't spend our time with random teams who don't really understand. If you work at the app layer, your customer is someone who's not working with technology; you're solving their problem directly, and they don't even appreciate what you do. What we do is work with infrastructure teams, with platform teams. A lot of the coolest companies in San Francisco, their top AI teams, are our customers. We work with them every day, we have shared Slack channels with them, and when we have cool ideas, we run them by them. So we learn a lot. It's a really cool environment from that perspective, because we have
a lot of friends, and you know what all the other companies are doing in their ML and AI infra. Yeah, I love it, because you're marrying the two worlds. And you're looking at it as a production line: just because it's predictive ML powering very mission-critical products of the company doesn't mean you have to disassociate yourself from the lift you can get from generative AI. Absolutely. We see generative AI as super helpful in two ways. One is the whole thing of helping our customers do a better job building their machine learning, so it's like Gen AI for Tecton, right? But then people use Tecton for a lot of Gen AI applications too. And one really important
pattern we're seeing a lot of people adopt, especially on the marketing side, is hybrid applications. They go: in this decisioning system, we have a bunch of predictive models, and they feed into a Gen AI application to do some generation, like personalized text. We have a bunch of models predicting, what is this person's gender, what is this person's whatever, and they all have to work in one coherent data system that's making an inference at one time, right? So it's a hybrid: you use both in the same application. And what we deliver in Tecton, in any decision, is just context that's going into some decision-making system, some model, to do something. You can build embeddings to process unstructured data, or use all of those signals to drive a prompt that goes into a Gen AI model. A lot of our customers are doing that as well. And the value of doing it this way is that all of the data going into any decisioning system is governed through one central location, and you get reuse of compute across it. So you really have one platform to integrate all of your underlying data systems, one hub for that data to go through before it fans out to various decisioning systems. And that's really helpful if you have compliance requirements or something like that as well.
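For illustration, here's that hybrid pattern in miniature: predictive-model outputs and features, served as context, templated into a prompt for a generative model. All names and values are hypothetical, and the actual LLM call is left out.

```python
# Hybrid pattern sketch: predictive signals become Gen AI context.
user_context = {                       # served from the feature platform
    "predicted_churn_risk": 0.81,      # output of a predictive model
    "favorite_category": "running shoes",
    "days_since_last_order": 45,
}

prompt = (
    "Write a short win-back email for a customer.\n"
    f"Churn risk: {user_context['predicted_churn_risk']:.0%}. "
    f"Favorite category: {user_context['favorite_category']}. "
    f"Last order: {user_context['days_since_last_order']} days ago."
)
# response = llm.generate(prompt)   # whichever Gen AI model the team uses
print(prompt)
```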