
891: Conversational AI is Overhauling Data Analytics, with Martin Brunthaler

2025/5/27

Super Data Science: ML & AI Podcast with Jon Krohn

Topics
Martin Brunthaler: Initially, our goal at Adverity was to simplify the workflow agencies used to create reports for their clients, automating the reporting process to reduce the time spent on manual steps and data wrangling. As we grew, we expanded our data connectivity, supporting integrations with a wide range of advertising systems and adjacent systems to supply all the data needed for comprehensive reporting. Traditional dashboards, however, suffer from problems such as going stale and not letting you drill deeper into the data. So we introduced generative AI, letting people ask questions in natural language to better understand their data and truly democratize data access. For data conversations to be effective, data quality is essential: you need complete datasets that are well aligned across the various sources. The data quality component in our platform helps monitor all kinds of issues in the data, such as naming conventions. Through these efforts, we hope to free up data analysts, data scientists, and machine learning engineers to tackle more interesting problems and focus on business value and strategy rather than tedious data wrangling. I firmly believe the future of data analytics will be about strategy and business value, not technical detail.


This is episode number 891 with Martin Brunthaler, co-founder and CTO of Adverity. Today's episode is brought to you by Trainium 2, the latest AI chip from AWS. And by the Dell AI Factory with NVIDIA.

Welcome to the Super Data Science Podcast, the most listened to podcast in the data science industry. Each week, we bring you fun and inspiring people and ideas exploring the cutting edge of machine learning, AI, and related technologies that are transforming our world for the better. I'm your host, Jon Krohn. Thanks for joining me today. And now, let's make the complex simple.

Welcome back to the Super Data Science Podcast. Today we've got an interesting one for you on how generative and agentic AI are transforming data analytics. Our guest for this episode is Martin Brunthaler, who is CTO of Adverity, an Austrian data analytics platform he co-founded a decade ago and that has since raised over $160 million in venture capital.

Before Adverity, Martin was co-founder and CTO at two other European tech startups, giving him over 20 years of combined experience in starting, scaling and exiting companies across multiple industries, including e-commerce, media and mobile. He holds an engineering diploma from the Salzburg University of Applied Sciences in Austria.

Today's episode should be of interest to just about anyone who'd be interested in this podcast because it touches on data analytics, transforming user experiences with modern AI capabilities and growing tech businesses.

In today's episode, Martin details how a childhood fascination with computer programming evolved into founding a globally leading platform for marketing data analytics, what data democratization really means and how the traditional dashboard-based approach to data reporting is failing businesses, why data analysts are spending too much time on busywork instead of delivering business value, how conversational AI is overhauling how data insights are gleaned for hands-on data practitioners and business users alike, and how data analytics might continue to evolve in the years ahead.

And finally, he provides his no-nonsense tips for tech startup success. All right, you ready for this insightful episode? Let's go. Martin, welcome to the Super Data Science Podcast. It's a treat to have you on the show. Where are you joining us from today? - Hey, pleasure to meet you. I'm based in Vienna in Austria, in the European Alps. - I think if I remember correctly, Vienna has for several years in a row now

won The Economist magazine's number one spot for most livable city in the world. Yeah, actually, it is a very nice place. I have to agree. I'm personally coming from the countryside and I had, you know, in the early 2000s, a hard time moving into the big city, if you will. I think that's the same across the world. Like after a while, I really got settled here and I love it quite a bit.

So yeah, we're here. There's also an award for the grumpiest population or citizenry, which Vienna, I think, has also won several times in a row. Oh, really? I remember things like, I lived in Singapore for 12 months, and that is, I think, the most miserable nation in the world.

which, yeah, it's funny. Like expectations management, I guess. Yeah, exactly. It's probably good marketing too. The meanest cities? You know, I think looking at it from a city marketing perspective, it could be an asset. People are proud of it as well. Nice. So beyond moving from the countryside

to Vienna, to the big city. Martin, tell us a bit about your journey in technology. So how did you get started in it? What led you to co-found your company Adverity? Sure. So I got into computing at a, I'd say, fairly young age. I think eight or nine years old, I got an Amiga computer, that kind of era, if you will. So I'm not that old to have had a C64, but it was still a very basic computer where I got familiar with BASIC programming.

I had fun creating those mini games and typing stuff into the computer from magazines and that kind of activity. And in school, I picked up a lot of networking, managing the school network. And we did have quite some fun with the computing equipment there, some hacking and understanding how this stuff works. It was pretty old equipment at that point in time.

and no internet access. So we had to make do with, you know, some popular computing magazines and books that we had to read to get into working with this stuff. And then really, during school and afterwards, I did some, you know, integrated systems programming. School was a telecommunications degree, so there was a lot of signaling and mathematics attached to that. But at the same time, I

joined a company that was working very closely with mobile network operators. So we were basically creating apps for mobile network operators. It used to be a technology called WAP, which

were kind of micro sites that could be delivered through SMS. And, you know, from there went into creating pretty large scale messaging systems. So we, for example, created the software that powered the American Idol SMS voting. Also here in Austria, we powered a couple of radio and TV stations with our technology. And, you know, we then exited this company in 2006, spent some time in a bigger corporation, an American corporation. At that point in time, they had a lot of, you know, activity in

I think the slogan at that point in time was "being part of every transaction." That was fun for two years. But still, I wanted to do something like creating stuff, which is why we created an incubation-type company and experimented with some stuff, after which I created a price comparison engine, if you will. Like the whole browser add-on

ecosystem was sprawling at that time. So we created an extension. You could compare prices in real time and suggest alternatives. And yeah, so Alex, my co-founder, and Andreas, we knew each other for a while and we had an interesting problem to solve, which was reporting on TV commercials. So for a different project, we had a very hard time getting our hands on the data regarding the success metrics for our TV commercials.

We figured there must be something better than taking a CSV file and, you know, creating an ad hoc report and putting it into a PowerPoint presentation that we received like six weeks after the commercials aired. So this is really how we got started with automating reporting.

In 2015, we decided to actually incorporate this as a standalone company, and that's Adverity today. So it's got advertising in it, I guess reflecting your advertising roots, right?

Yeah. And then the "verity," it's interesting. I don't know if that's just something that you chose because it sounds nice, but it's also related to the Latin root for truth, right? Yeah, exactly. So that's kind of connected to that. At least it was the thinking behind that. It evolved as a brand, but yeah. Yeah.

Dot-com domains, even at that point in time, were very rare. So yeah, we had to. That's a nice one. Yeah, the truth behind, you know, data, the truth behind advertising data in particular, it makes a lot of sense to me. You were about to say something and I just interrupted you. No, that's good. That's exactly like the story in a very short, you know, summary. Okay.

Martin, how is your English so good? I mean, your English is amazing. I wonder how that happens. Like, you know, everything that you described about your story, you know, you're in the Austrian countryside tinkering with old networking components. And at what point do you become completely fluent in this incredibly technical way in English? How does that just happen?

To be honest, I don't know. And also, to be fair, my English training, if you will, is not up to speed. I spent half a year in England during my studies, abroad. I'm not sure if that's still possible, but in Europe you had a program where you could go

and join another university abroad. And we had a cooperation with Staffordshire Uni in the UK. And that's where I spent some time. I tend to read a lot of English material, obviously. I try to watch movies in original language. That's basically...

I personally wouldn't consider myself very fluent, but it works for my daily business. I would say you're extremely fluent. It's so embarrassing for so many people from North America. So many of us are in this monoculture. I studied German for 12 years on Saturdays

And like, I'm embarrassed to even, even before we started recording, I was embarrassed to try to like do a little bit, you know, I can do, I can order a schnitzel and a beer off a menu and make my way to my destination on the train, you know, when I'm traveling in a German speaking country. But yeah,

To be able to describe anything, like 1% of what you just did in terms of your career and what you're doing at your company. If I try to do that in German, there's no way. Thank you. Yeah, one fun fact is though, my German accent is very strong. So some co-workers, when I talk German, I have to talk a little slower. Yeah.

Oh, so you're easier to understand in English than in German. Probably, at least in our company. That's funny. And is that something to do with your regional accent, from where you came from? Yeah, I'd say so. And, you know, I never spoke by the book, if you will. There's kind of

the concept of High German, if you will, or proper German. Hochdeutsch. Exactly. Yeah. So I don't tend to speak that, and I don't tend to do that with our kids either, which helps them understand me in German as well. Right. So yeah. It's kind of like, in England they would call it the Queen's English.

Yeah, exactly. It's some parallel to that. Even though Austrian... let's not get into that too much. There is a conflict between the type of German spoken in Austria and in Germany. Habsburg Empire, Ottoman Empire. I don't know if I'm making stuff up. I have no idea. I'm just throwing out random nouns related to Austrian history that I know. Okay, nice. Let's get back to business and Adverity. So,

So when you founded Adverity back in 2015, the intention was to have more finesse around analyzing TV ad data, having something be, I guess, closer to real time, something more advanced than a CSV file. But now Adverity has grown into, over the past decade, congrats, it's your 10-year anniversary. Thank you. It's grown into a leading data analytics platform.

So how did the original mission evolve over the years into the broader offering that you have today? And we have...

And next in the conversation, I guess you can get into this a little bit now, but we are going to be focusing a lot on the kind of conversational and agentic AI elements. So you can get into those now or we can save those for the next question if you want. Yeah, sure. I mean, the way we started was really about the workflow and reviewing how agencies initially, you know, at that point in time created reports for their customers.

And a lot of the work behind that used to be going into all the ad systems that they took care of, copying the data into a type of spreadsheet, using this to create a visualization, embedding this in a PowerPoint and creating a PDF to send around to clients. So that was, very roughly speaking, the workflow that we attacked.

And going back to the root, so we created a solution where it became very simple from our perspective, basically a verticalized BI solution to deal with the automated reporting. And our vision was that agencies would then use this to build dashboards that they can share with their clients and get near real-time reporting. And that's also how we expanded very quickly in terms of connectivity. So initially kind of, you know,

you literally had to upload a CSV, but then we built out our data integration capabilities, which today is really the core of our business. So we do support quite a wide range of integrations with various ad systems and also adjacent systems: measurement, finance systems, CRM systems, shop systems. All those data points that you need in order to do proper reporting are what we built out over time. And like I said, I think

Over the course of the last 10 years, I think reporting also changed how it's done within many of our customers. Dashboards are not as relevant anymore as they used to be when we started.

So yeah, it probably could bring us to data conversations as well, or how this might be changing. Yeah, for sure. Really quickly before that. So it sounds like Adverity still has some kind of specialization in working with marketing data, but you mentioned other kinds of data there as well. You know, customer relationship management data, financial data. Is that all with a view to enhancing marketing?

a company's perspective on marketing? Or would you say that Adverity today could be used as a more general purpose enterprise analytics platform? Yeah, the platform itself could absolutely be used for other verticals as well. I think for us as a company, from a go-to-market perspective, and also when you look at the connectivity portfolio, there's obviously a bias towards marketing. And we, as a company, in our DNA, have a lot of marketing knowledge baked in. So all our

implementation consultants, professional services staff, they know marketing inside out. So I think as a company, even though the technology could be applied to other fields as well, you need to do more than just extending your connectivity portfolio. You need to train people and staff accordingly to be able to work in other fields. But that said, I think there are adjacent, or marketing-adjacent, teams that are very likely to adopt this solution as well.

This episode of Super Data Science is brought to you by AWS Trainium 2, the latest generation AI chip from AWS. AWS Trainium 2 instances deliver 20.8 petaflops of compute, while the new Trainium 2 UltraServers combine 64 chips to achieve over 83 petaflops in a single node. Purpose-built for today's largest AI models.

These instances offer 30 to 40% better price performance relative to GPU alternatives. That's why companies across the spectrum, from giants like Anthropic and Databricks to cutting-edge startups like Poolside, are choosing Trainium 2 to power their next generation of AI workloads. Learn how AWS Trainium 2 can transform your AI workloads through the links in our show notes. All right, now back to the show.

Now let's get into the more advanced topics that I just said that we'd put a pin in a few moments ago in terms of generative AI and agentic systems and how those fall into a platform like Adverity's. Adverity is currently launching its Data Conversations product, which lets users query data in natural language and even get proactive recommendations. It sounds to me like

The talking in natural language, this is leveraging generative algorithms, like the GPT-powered kind of experience that we increasingly expect in our tools today.

And then the proactive recommendations, that kind of sounds like an agentic system to me. So do you want to talk about these particular features and how you're thinking about generative and agentic AI at Adverity? So I think when you look at the proposition of data conversations, which we are about to launch, there is...

one overarching theme, which is data democratization. And it used to be, and this is not a new term or a new topic, but it used to be that data catalogs and BI tools and dashboards have been considered to democratize data access somehow.

Turns out that those dashboards mostly are, you know, created once, not reviewed, not up to date, and always getting challenged by the user. And they also don't give you a capability to drill down and ask more questions to understand what's going on. So usually

you would see people looking at a dashboard, figuring out what this means, comparing it in their mind with yesterday's data, challenging that, going back into the source data system to see, like, is this actually data I can trust, and all those issues. So I think there's an evolutionary step happening here with technology like generative AI that allows you to actually use human language to ask questions about the data.

And in order to get there, you need a lot of things done right before you can do that. But once you have this, you know, a solid data foundation created, you can actually take this to quite a solid use case, which we have done so far.

I think one of the strengths that we have is a deep understanding of the lineage, source of data, meaning of data. So we manage a data catalog. We call it a data dictionary, and it has a clear connection to source and also a deep description of what's going on, like what kind of meaning is behind the given attribute.
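
To make that idea concrete, here is a minimal sketch, in Python, of how data dictionary entries carrying lineage and meaning might be serialized into a system prompt to ground an LLM. The field names and source paths are purely illustrative, not Adverity's actual schema.

```python
# Hypothetical data dictionary entries: each field carries lineage and meaning.
DATA_DICTIONARY = {
    "spend": {
        "source": "google_ads.metrics.cost_micros",
        "description": "Media spend in account currency, converted from micros.",
    },
    "campaign": {
        "source": "google_ads.campaign.name",
        "description": "Human-readable campaign name.",
    },
}

def build_system_prompt(dictionary: dict) -> str:
    """Serialize field lineage and descriptions so the LLM writes grounded queries."""
    lines = [
        f"- {field}: {meta['description']} (source: {meta['source']})"
        for field, meta in dictionary.items()
    ]
    return "You answer questions about marketing data.\nKnown fields:\n" + "\n".join(lines)

print(build_system_prompt(DATA_DICTIONARY))
```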

That helps us ground our system quite a bit to give proper responses. Nice. Yeah. So let's talk a bit about that. For the listeners at home: a lot of our listeners are either hands-on data science practitioners, like machine learning engineers, AI engineers, data scientists themselves, or people who are interested in building products or companies that

leverage generative AI. What are the kinds of lessons that you've learned in implementing a product like Data Conversations at Adverity? What do you need to do? What are all the things you need to line up in advance of bringing in a large language model and having conversations work effectively with data? You talked

a moment ago about the issues that you typically see without this kind of conversation in place, where people have a dashboard and it's not exactly the information you needed, it's too fixed in its outputs, and so people end up going and digging under the covers into the raw source data to try to really find answers, which adds strain onto the data analyst team. So I get all of the advantages of being able to have a conversation with your data.

But what are the things that you at Adverity, that our listeners, if they want to be making this similar kind of transition, what do they need to get right in order for that conversational aspect to work out? So I think one really critical piece is the quality of the data underneath, for each source. And there are many aspects of data quality, if you will; from an academic perspective, you can list those out. But from a more practical perspective,

You need a complete data set that is also very well aligned with all the various sources that you have. So harmonization plays a role in this as well. And we built up actually a data quality component in our platform that helps you monitor all those issues that you can have in your data. There's specific monitors for data quality in marketing. There is a concept called naming conventions, for example, for campaign names.

that we can monitor and act on in an intelligent manner. But there's also simple things like if you onboard a generic source from a database or from a REST API, you want all the data types need to be aligned, you know, data formats need to be aligned. You want all your data to be harmonized in UTC, for example. You need to clean up some stuff. This is also why there's some transformations going on usually either by splitting up, combining various sources and all those things.

But I think it's very critical to get the quality right. You need to be alerted if something's going wrong; you want to prevent, I'm not saying dirty, but problematic data sets from hitting your production environment. And I think we can help in this discipline quite a bit. You can help in this discipline by having these kinds of data quality reports

built into the platform. Yeah, but also the multi-layer approach to this. So we always keep a raw data set that can then be used as a starting point to reiterate on transformations, for example. So you can always go back to the previous state and improve your transformations. There's also, obviously, today an AI system helping you to compose those transformations. And this is specifically very useful for those types of generic sources.

It's kind of a simplified data wrangling exercise, if you will. And then once you're satisfied with that, there's a component that helps you monitor the quality as it flows through the system. There's anomaly detection and all the things that you want to monitor. Right, right, right. Yeah, so built-in anomaly detection would be key to this working out. How about when you think about

there's a huge amount of breadth of capabilities that you could potentially get from a conversational interface. When you're designing a conversational product, how do you...

How do you figure out, okay, this is the range of things that we're going to support or not support, and then how do you select the right large language model for that breadth of features that you decide to support? Yeah, let's start there. I have more kind of follow-on questions from that, but I feel like that's kind of a good starting point. Yeah, I think that's useful.

And maybe one thing to add to the previous question, in terms of quality, like I already said, the data dictionary, descriptions, understanding of lineage is very critical as well. And this goes also into the design of our conversations interface and how people can interact with that. We iterate very quickly. So we're going through

I'd say a pretty fast-paced development cycle with this, adding features every week. And we have a dedicated team taking care of benchmarking and analyzing the quality of responses. So we're using

frameworks to monitor that. And the data science team runs continuous tests: we have a kind of predefined set of responses that we expect for our questions, and we can monitor those and improve and test models as we go. And to be fair, at the moment we've committed to one model, but the plan is also to use different models for different aspects of our capability.

So for example, we could use a different model to compile our SQL query, a different model to do the pre-flight qualification of a question, a different model to do the actual conversation. So yeah, that's also possible. Nice, nice. So I imagine something like, you know, obviously...

The questions that I asked you were kind of tricky because I'm trying to get at what are the things that people need to be doing in order to build these kinds of conversational interfaces like you did, but obviously there's proprietary things involved. Yeah, I think there's no trade secret in building this, if you will. A lot of the models out there are comparable.

And the type of APIs they offer are similar in regards to their capabilities. And you see like all models kind of reaching the same capability. And, you know, basically the leaderboards change just, you know, every other month you'd have another leader, but everyone's catching up to the same state of quality, if you will.

I think what it then boils down to is how you put the components together to create a compelling and exciting use case on top of that. And I think in terms of how this works from a technical perspective, it's pretty straightforward. You can qualify a user input into a type of question, select a model that you want to run with,

basically feed it with a system prompt and additional information about the data model, which is very critical to get the answer right. Use this to create a SQL query, verify it's actually a valid query that can be executed,

fire the query, use the data to run some basic analysis, and create a decent, nice answer for the user. And for us, the use case then circles a lot around the table that we generate from that kind of response. Because our approach to this is, first of all, in terms of democratization, we are targeting two sides of the business, one of which is IT and the other one is the business user.

And both have a requirement to access data. So rather than going through a full chain of various teams, you know, so it used to be that you had to create a ticket to get access to a data set. The data set would then be prepared within two weeks and put onto, I don't know, a Snowflake table or whatever. Today, a Snowflake table, it used to be something entirely different. Yeah.

And with this, you can actually run the query, create a table in near real time available for your further analysis. And that's kind of exciting for us.
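
To make the pipeline Martin outlined concrete (qualify the question, generate SQL, verify it, execute, summarize), here is a minimal, self-contained sketch in Python. The LLM call is stubbed out and the qualification step is deliberately trivial; a real system would call a hosted model and ground it with a data dictionary like the one described earlier.

```python
import sqlite3

def llm(prompt: str) -> str:
    """Stand-in for a hosted LLM call; a real system would hit a chat-completion API here."""
    # For the demo, pretend the model translated the question into this SQL.
    return ("SELECT campaign, SUM(spend) AS total_spend FROM campaigns "
            "GROUP BY campaign ORDER BY total_spend DESC LIMIT 3")

def answer_question(question: str, conn: sqlite3.Connection) -> str:
    # 1. Pre-flight qualification of the question (kept deliberately trivial here).
    if "campaign" not in question.lower():
        return "This sketch only answers campaign questions."
    # 2. Ask the model for SQL, grounded in schema info (the data dictionary's role).
    schema = "campaigns(campaign TEXT, spend REAL, day TEXT)"
    sql = llm(f"Schema: {schema}\nQuestion: {question}\nReturn one SQL query.")
    # 3. Verify the query parses before running it; EXPLAIN never touches the data.
    try:
        conn.execute(f"EXPLAIN {sql}")
    except sqlite3.Error as err:
        return f"Model produced invalid SQL: {err}"
    # 4. Execute, then summarize the tabular result for the user.
    rows = conn.execute(sql).fetchall()
    return f"Top campaigns by spend: {rows}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (campaign TEXT, spend REAL, day TEXT)")
conn.executemany("INSERT INTO campaigns VALUES (?, ?, ?)", [
    ("spring-sale", 120.0, "2025-05-01"),
    ("brand", 80.0, "2025-05-01"),
    ("retarget", 40.0, "2025-05-01"),
])
print(answer_question("What were our best performing campaigns?", conn))
```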

Nice. So whether a user is a hands-on technical practitioner, or whether they're a business user who probably has no experience writing code, in either case, this kind of data conversation is able to serve that user.

But something that's interesting about what you were saying is this idea of it generating a table of results. So why don't we back up. Something that I probably should have asked from the very beginning is: what is a user's experience like when they have this kind of data conversation? So maybe give us the user story of what the experience is like

getting access to data in the Adverity platform, before Data Conversations as well as after. I think, and to be honest, I can't speak to all use cases, but the predominant use case is to extract data from sources. Say, for example, you want to run marketing reporting across Google, Facebook,

you name it. So basically you extract data from those source systems, prep it in a way that is useful for reporting, and put it into a table. And here some distinctions are to be made: some put it into various tables or many tables, some already adopt a one-big-table scheme.

So you end up with a database, BigQuery, Snowflake, you name it. So we support quite a wide range of databases and have a full snapshot of the source data in your database. And I think most will then

connect this with a dbt model to do some further transformation, probably create subsets of data for different kinds of reporting. I can't speak to the models themselves, that's up to the customer. And then in turn, this is usually connected with a BI system. So mostly, you know, the industry standard would be Power BI for many organizations. There's still Looker, Looker Studio, Tableau, to name a few.

And that's basically where users would today access the data. Like the end consumer would look at a BI tool to run a query to create a dashboard or look at a pre-canned dashboard that someone else in the team has created. Can I ask a clarifying question? So all of these kinds of technologies that you just mentioned, like your data source being Google or Facebook for marketing data,

taking all those data from those various sources from a client, putting that into Google BigQuery to allow you to run SQL queries over these vast amounts of data rapidly. You could also alternatively be using Snowflake for that. Then you mentioned dbt downstream for data transformations. And then finally,

some kind of tool like Power BI, Looker, or Tableau in order to be able to see, to visualize the data, to be able to kind of get summary metrics that look nice. All of that happens within Adverity. So all that kind of

So that is very interesting to me. So confirm that that's the case, yeah? No, I'm just mentioning one of the cases. And here it becomes interesting because you would look at this from a modern data stack perspective, obviously.

as in like a highly composable or like a very modular stack of components that kind of create a data solution for you. I think what you get with our solution, you don't have to use those tools. So basically... I see, I see, I see. So you're giving the kind of... So that's what was confusing to me because I was like, so all of these different technologies are supported within Adverity? So you're saying that basically...

That is a common workflow. Google BigQuery, Snowflake, dbt, into Power BI, Looker, or Tableau. That's a common kind of thing that a client of yours would typically be doing before they start working with Adverity. But then with Adverity, you get one platform

Absolutely. And so there is...

You don't have to use dbt with us. We can execute dbt on the customer's behalf as well. So you don't have to manage an orchestrator or run some other complex machinery to run dbt. We can do this on your behalf as well. But I think one key point is we already provide a harmonized, aligned and properly structured data set for any purpose in marketing.

So if you go with the default and create a database that receives data from us, it's ready for analysis in your BI tool. So you don't have to do all the in-between work. And our transformation capabilities are pretty powerful as well, like I initially said. So if there's some need to change the shape of data before it lands in your database, that's also a possibility that our customers have.
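
As a toy illustration of what "harmonized and aligned" means in practice, here is a hedged pandas sketch that maps two sources' differing field names onto one shared schema and stacks them into the kind of one-big-table layout mentioned earlier. The column names are representative of ad-platform exports, not exact API fields.

```python
import pandas as pd

# Toy exports; real connectors would pull these from each platform's reporting API.
google = pd.DataFrame({"Campaign": ["brand"], "Cost": [100.0], "Day": ["2025-05-01"]})
facebook = pd.DataFrame({"campaign_name": ["brand"], "spend": [80.0], "date_start": ["2025-05-01"]})

# Step 1: map each source's field names onto one shared schema.
google = google.rename(columns={"Campaign": "campaign", "Cost": "spend", "Day": "date"})
facebook = facebook.rename(columns={"campaign_name": "campaign", "date_start": "date"})

# Step 2: stack everything into the "one big table", tagged by source.
big_table = pd.concat(
    [google.assign(source="google"), facebook.assign(source="facebook")],
    ignore_index=True,
)
big_table["date"] = pd.to_datetime(big_table["date"], utc=True)  # align on UTC
print(big_table)
```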

So I guess, you know, it saves quite a bit of effort to create the data set that you require. And it also hooks natively into, or you can connect it natively with, any BI solution that works through an OData interface. We have a concept called Data Shares, where customers can create a subset of data that can then be shared with an external application as well.
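
For readers unfamiliar with OData: it exposes tabular data over plain HTTP with standardized query options, so any OData-aware BI tool, or a few lines of Python, can consume a share. A minimal sketch, assuming a hypothetical share URL (Adverity's real endpoint scheme may differ):

```python
import requests

# Hypothetical Data Shares endpoint; the actual URL scheme may differ.
SHARE_URL = "https://example.adverity.com/odata/shares/marketing"

# Standard OData query options for selecting, filtering, and limiting rows.
params = {
    "$select": "campaign,spend,date",
    "$filter": "date ge 2025-05-01",
    "$top": 100,
}
resp = requests.get(SHARE_URL, params=params, timeout=30)
resp.raise_for_status()
rows = resp.json()["value"]  # OData JSON responses carry result rows under "value"
print(len(rows), "rows fetched")
```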

So a lot of what we do really is we have baked in governance and enterprise capabilities that make it very powerful to use as compared to something that you have to build on your own. Because what many people don't see is that there is actually a lot of work in creating and maintaining a full modular modern data stack.

And probably also going back to the whole topic of AI, I think people need to start concentrating on the business value and strategy on top of all of this, if you will, I'm not saying magic, but amazing machinery. You don't actually want to build a modern data stack with a lot of components, orchestrating a huge number of SaaS tools. I think that's certainly something that I'm very excited about.

This episode of Super Data Science is brought to you by the Dell AI Factory with NVIDIA, delivering a comprehensive portfolio of AI technologies, validated and turnkey solutions with expert services to help you achieve AI outcomes faster. Extend your enterprise with AI and GenAI at scale, powered by the broad Dell portfolio of AI infrastructure and services with NVIDIA industry-leading accelerated computing.

Right, exactly. So what you're saying is that instead of investing your engineering time and money in your organization...

on rebuilding and maintaining these same kinds of complex data stack interactions

that all of your other competitors are also building, it would make a lot of sense to focus on finding a solution. If you're working with marketing data, then Adverity could be a great solution for you and your data. Then your engineering time and effort doesn't need to go into reinventing the wheel yet again on a similar kind of data stack to all of your competitors.

you can instead be investing on actually saying, okay, the data are going to flow properly. We can export them in various places if we want to and use them in other tools if we need to for some reason. But we can now concentrate on using the data that we have, getting some actionable insights from them

and getting an ROI, getting a return on our investment in our engineering team. Yeah, exactly. And maybe also, what is very interesting about the marketing ecosystem: it's very dynamic. There are lots of changes happening on the source side as well, and you don't want to have to keep up with that yourself. If you go with a typical horizontal ETL solution,

keeping up with the changes on the source side, it might be very tedious and problematic. So if you have someone to rely on to get this data in without you taking care of that, that certainly already saves quite some money. And maybe going back to the question around

the data set itself and how you bridge the gap to other departments, say, for example, finance or whatever data sets you have that sit in your organization that you want to connect in terms of reporting.

I think you can see our solution as: the marketing data set kind of magically fills itself into your data warehouse environment. We take care of everything there. It's a table that holds this data. And you can then connect it with other sources that you have within your organization, and still keep it isolated as much as you like. Perfect. So now I have a better understanding of what the workflows are like

for somebody who is trying to work with marketing data, trying to get actionable insights from marketing data, whether they are a hands-on data practitioner or whether they're a business user. With the Data Conversations product that Adverity has now released, with this generative experience, how does that change everything for a user? Previously,

Previously, Adverity has already been making life easier by handling the interaction between all these different kinds of databases, data transformations, outputs, as well as, like you mentioned a few moments ago in the podcast, keeping up with all the changes that marketing data providers like Google and Facebook make. So there was clearly already this kind of value to Adverity before.

What is the added value and what is the change in the experience like with data conversations? Yeah, I think the most standout difference is that rather than looking for the correct dashboard or correct set of data that you want to look at,

you can formulate a natural-language query and we're going to figure this out for you, rather than looking for a specific data set on your own, or really asking a different team to get you to the data that you need in order to answer your business-type question.

I think that's kind of the first and most basic need that we address with Data Conversations: accessing data and putting it to good use, because, like I said, the tabular response can be immediately actioned upon,

by either continuing to use it in, say, for example, a Google Sheet, or materializing it in a Snowflake table or BigQuery table, for that matter. So you can immediately work with the data set once you're satisfied with the result. But on top of that, there are also

business analytics questions that you can ask about the data set. And we try to do our best to get to a solid response. So you can ask for the best-performing campaigns and the system will figure out how to connect the dots and create a useful response for that matter. So

all the marketing-type specialized or subject-matter expertise that we have built is also built into the system, so that's helping you to get to the response a lot faster. And I think the second huge benefit is you get a system that you can actually explore deeper. I mean, there have been basic features for that in many BI tools, where you can drill down into the data set and get more detailed data. But

If you want to branch out into other data sets and understand, for example, why a trend is happening this way, you can do this with the conversation. You cannot do this with an explore function in, say, Looker or Adverity for that matter.

Very nice. So basically you get more depth of response, you can explore data more deeply, and it's easier particularly, I guess for any kind of user, I was going to say particularly for a business user, but it's easier for anyone to be composing their questions in natural language relative to a SQL query or something like that. Yeah, absolutely. And again, going back to the enterprise capabilities, I think what's very critical as well is to give IT

a tool that they can still manage and audit, rather than having a wild mix of SaaS tools connected together

that don't necessarily give you the level of governance that you need in order to comply with a lot of the regulations that we face today. So yeah, that's another aspect of this. Nice. All right. So for people who are our core audience, people out there who are data analysts, data scientists:

Do you think that they should be worried about tools like this encroaching on their roles? Or does it actually...

Do automated conversations like this around data actually free up data analysts, data scientists, machine learning engineers to be tackling more interesting problems? I'm assuming it's the latter. I'd be surprised if you answered any other way. No, absolutely. I have the same understanding. This is going to allow a lot of analysts, also data engineers, to focus on more strategic

topics that are important for the business, and focus on what kind of value you can deliver for your company.

The technology allows you to do less busy work, if you will. Yeah, you're not just getting messages like Slack messages, emails from finance, HR, marketing, asking you for, oh, can you just do this data pull for me real quick? And you're like, well, there goes my day. Yeah, sure, let's do this. And rather than that, you can point your colleagues also to the solution that helps you focus on the real stuff. I think...

Something that certainly will matter in this role moving forward is that there's a lot about how you govern those tools and make sure that the responses are correct and right. So having tools to monitor and supervise what's going on in your business, that's critical. But yeah, I don't think that

Any jobs are at risk. On the contrary, I think the profiles will change and I don't see a major issue. You have to obviously go with the trend in a way, but that has always been the case.

We've talked now a fair bit about generative elements and how being able to have a natural language conversation with your data can allow you to have an easier time of getting insights from your data. You can get deeper into your data.

I'd like to talk about the other big buzzword in data science and AI today, which is agentic AI. And so I don't know, would you apply that term, agentic AI, to some of the features that you have in the Adverity product that are kind of autonomously, it seems to me from my understanding of the product,

that there are processes that are kind of automatically, you talked earlier about data quality, for example, doing anomaly detection. Would you consider that kind of autonomous process that is looking out for issues, maybe flagging opportunities as agentic? And if so, would you say that you have agentic features built into the Adverity platform? I have a very hard time with the term today because it's been

used across the board. Like, you know, when AI entered the scene, it basically meant technology like ChatGPT, but very quickly, many companies adopted the term for non-AI-type problems, basically statistical problems or machine learning-type capabilities that didn't make use of any generative AI. Any kind of data model.

And the same, exactly. And the same happened with agentic AI. I think there's certainly a common denominator, but many people have used the term agents left and right.

My understanding of this really is to give a ChatGPT or generative AI-type capability access to functions in a way that it can take action on behalf of the user. And there's this concept of establishing trust with AI solutions. I think we are not there yet, where people would

trust an AI solution to make decisions on their behalf fully. So there needs to be some level of control and monitoring baked in, but where they can actually call an API, use the result, or even trigger some sort of action without some means of user interaction. That's my take on this at the moment. Some sort of autonomy can be baked into this as well. But for me at the moment, it's mostly around trust,

actions, and also interestingly, retrieval of information. Nice. All right. Yeah. So this makes sense to me as a better definition of agentic AI. I was definitely, there are, I guess you could ask kind of any individual to define what agentic AI means to them. And I was using it far too broadly, just as people use AI to describe any kind of data modeling process where

I'm thinking about, okay, if there's any kind of autonomous system, but I think if this is a completely programmatic autonomous system, we probably shouldn't be calling that agentic AI. So I liked your definition there of where it involves, probably you have a large language model in the mix in an agentic AI system, and ideally, like you described, it has access to tools that

that allow it to take various kinds of actions. I think that's the key part there. Maybe we could define it, I don't know if you'd agree with this, but as an LLM that can take actions. Yeah, exactly. So I think that's a much more realistic definition of this, because on the other hand, if you define it too broadly, people have expectations for a system that is not able to behave in the way that they would expect.
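
That working definition, an LLM that can take actions, reduces to a small loop in code. Here is a minimal, self-contained sketch in Python with the model call stubbed out; a real agent would swap the stub for a hosted chat model with function calling, but the control flow stays the same.

```python
import json

def list_sources() -> str:
    """Stand-in for a real platform API that reports connected data sources."""
    return json.dumps(["google_ads", "facebook_ads", "crm_export"])

TOOLS = {"list_sources": list_sources}  # the registry of actions the model may take

def llm(messages: list) -> dict:
    """Stub for a chat model with function calling: request a tool, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_sources", "args": {}}              # model decides it needs data
    return {"answer": "You have three data sources connected."}  # model answers using the result

def agent(user_question: str) -> str:
    messages = [{"role": "user", "content": user_question}]
    while True:
        reply = llm(messages)
        if "answer" in reply:  # the model has finished acting
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # take the action on the user's behalf
        messages.append({"role": "tool", "content": result})

print(agent("Which data sources are connected?"))
```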

Yeah, and I think it's mostly also now in a phase where we establish standards. So like a couple of weeks ago, you had all this hype going on about MCP. There's likely another hype going on about A2A in the near future. So I think we are at the phase where actually companies are going to set standards in terms of how this might look like in the future. So having a capability like MCP in place and having standards

A lot of SaaS products, and we certainly are going to do this as well, offer MCP capabilities to hook the company's tool of choice, which could be Gemini or OpenAI or whatever agent that the company will go with.

into your environment, I think that's very valuable and will probably truly help to get some agentic solutions out there. Nice, yeah. So you mentioned MCP there, kind of casually. I did an episode on this recently, episode 884, if people want about seven minutes of detail on the Model Context Protocol, MCP. This is an open-source framework from Anthropic

that provides kind of a standardization for agents taking actions. And so, maybe fill us in a bit more on how you're actually integrating that into Adverity.

Yeah, so we will offer the capability to connect this. You know, it's at an early stage at the moment. You need, you know, Claude Desktop in order to connect this in a proper fashion. But this will surely develop into a very good solution where you connect your chat agent with services and let them take action. And, you know, like we discussed earlier, it might not necessarily be a GenAI-type action. It could be anything, from statistics to,

you know, triggering some sort of action in a remote system, or even consuming information from that system. So, getting a listing of, in our case, for example, getting a listing of the data sources connected and, you know, whether they are all properly connected, creating links to reauthorize those. Really interacting with the system in a natural way. That's what we are building out here.
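
For a sense of what exposing such capabilities over MCP can look like, here is a hedged sketch assuming the official `mcp` Python SDK's FastMCP interface; the tool names and the reauthorization URL are invented stand-ins, not Adverity's actual API.

```python
# pip install "mcp[cli]"  (assumes the official MCP Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("adverity-demo")

@mcp.tool()
def list_datastreams() -> list[str]:
    """Return the connected data sources (stubbed for the example)."""
    return ["google_ads", "facebook_ads", "crm_export"]

@mcp.tool()
def reauthorize_link(datastream: str) -> str:
    """Return a (made-up) URL where a user could reauthorize a source."""
    return f"https://example.adverity.com/reauthorize/{datastream}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what Claude Desktop expects
```

A client like Claude Desktop would launch this script over stdio and let the model call `list_datastreams` or `reauthorize_link` as tools.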

Build the future of multi-agent software with Agency, A-G-N-T-C-Y. The Agency is an open-source collective building the Internet of Agents. It's a collaboration layer where AI agents can discover, connect, and work across frameworks.

For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows. Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services, no strings attached. Build with other engineers who care about high-quality multi-agent software.

Visit agntcy.org and add your support. That's A-G-N-T-C-Y dot O-R-G. Very nice.

I love that. MCP is one of the hottest topics that I come across right now at any conference. It's cool that you're integrating that at Adverity as well. It can be tricky to see into the future and make predictions when things like MCP come out of nowhere and all of a sudden become a standard. We can anticipate that kind of thing, as well as AI advancements, LLMs continuing to, as you say, ratchet up the leaderboards time after time, and

all the big players at the frontier of AI capabilities having more or less fungible APIs for a lot of different tasks. So the point of me saying all that is that it's tricky to be able to see into the future, but given your role as the CTO of a company that's

taking technology and turning it into more streamlined experiences for users, what are your predictions for how things will continue to evolve five years from now? How will somebody, whether they're a hands-on data analyst, data scientist, or a business user, be extracting data from their companies'

systems, their providers? What will that experience maybe be like in five years relative to today? Like you said, it's very hard to predict five years out. I've been in this business for a while and I know that things can change very quickly. But I think overall,

A lot of focus will shift to strategy rather than having to do, we quickly talked about or touched on busy work, but there's lots of workflows that currently need some sort of manual interaction that are fully automatable in a way. And people will then have a chance to focus on the business and outcome of data and how it's going to be used rather than the mechanics behind that. So that's certainly something that will change quite a bit.

I think also, and we didn't talk about that, we focused quite a bit on AI, but there's also an interesting move in our industry

with regards to storage and disconnecting compute from storage, which is a shift that is happening underneath what we do today. So looking out into the future, the mobility of picking compute connected to data, if that makes sense, will certainly be something that will change the system quite a bit as well, or the way we work today quite a bit. So having a raw repository that sits in an accessible environment

object storage environment in a standardized manner because historically that's not been standardized. Today, the industry kind of settled on Iceberg as a format for that. And being able to connect this with various query engines depending on purpose and likely in the future also with a generative AI capability, I think that's very valuable to businesses to pick the right solution for the right case.
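
As a rough sketch of what that decoupling looks like from the analyst's side, here is a hedged example assuming the open-source PyIceberg client; the catalog endpoint and table name are hypothetical. The point is that the table lives once in object storage while the query engine is chosen per workload.

```python
# pip install "pyiceberg[pandas]"  (assumed setup; catalog details are illustrative)
from pyiceberg.catalog import load_catalog

# One Iceberg table lives in object storage; engines are chosen per workload.
catalog = load_catalog("demo", **{
    "type": "rest",
    "uri": "https://example.com/iceberg",  # hypothetical REST catalog endpoint
})
table = catalog.load_table("marketing.campaign_spend")  # hypothetical table

# A small local analysis can scan straight to pandas; a heavy job could hand
# the same table to Spark, Trino, DuckDB, etc. without moving the data.
df = table.scan(row_filter="spend > 100").to_pandas()
print(df.head())
```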

So that's certainly going to change as well. But, you know, five years, it's a long, long, long time frame. Because we are talking maybe half year, year time frame here. But yeah, I think mostly strategy and focus on

on the why rather than the how is probably something that will change quite a bit in the next five years. Right, right, right. So that's tying back to your point earlier in the conversation about tools like Adverity allowing you to have your engineering team, your IT team focused less on integrating systems and keeping everything up to date

and more on actually extracting business value. Yeah, I'd say that. Less busy work, less repetitive tasks, if you will. Less of the, find me these data real quick. Real quick, I need it now. Exactly. Nice. So we've now kind of looked into the future a little bit, but another question that just occurred to me

is given your experience now as a serial entrepreneur, you've spent a long time at tech companies in senior leadership roles like CTO, CEO,

What kinds of lessons do you have from your past experiences that we can learn from? What are the key things, maybe mistakes that have happened or things that you've learned that you now avoid based on that experience and that our listeners can avoid as well? Also a very good question because it depends. Yeah.

So I think one critical thing is: don't overthink stuff. If you want to build something and have something in mind, go build it. Because there is nothing gained from thinking about how nice it could be and, you know, overthinking architecture. I think we have been victims of this as well. But

I think you can get the ball rolling very quickly. Also today with the capabilities of AI based coding tools. I'm not saying vibe coding. I'm saying, you know, try out various solutions, use what you have. I'm in the camp of use boring tech as well.

proven tech that works and compose an exciting use case and keep talking to the customer because that's the most critical impact or most critical feedback that you need. So from the start, we've been very customer centric, taking feedback seriously and integrating this with our solution. So that's probably something that I would always encourage in any company. Sensible approaches there. Yeah. Listen to your customer,

And don't vibe code, but do take advantage of LLMs for code generation to be able to get a POC stood up more quickly, and not just spend all of your time in the planning stages on a potential product or feature. Yeah, absolutely. Cool, nice. Great advice today, Martin. I've really appreciated it. And it's great to hear the exciting things that you're doing at Adverity.

Before I let you go, I ask all of my guests for a book recommendation. Okay, yeah, sure. I recently read a book about how Vienna influenced ideas in the modern world. It's called Vienna: How the City of Ideas Created the Modern World.

It's an exciting book showing how a lot of concepts in architecture and many different fields have been created in the city. So it was an interesting read and I also learned quite a bit from this book. Fantastic. That is exactly the kind of reading that I wish I had all the time for because that kind of thing about

We really stand on the shoulders of giants in terms of what we're doing technologically, linguistically, scientifically,

And Vienna, for sure, plays a huge role in the renaissance of ideas that has led to a lot of people in the world not having to worry about shelter and nutrition, to vastly improved child mortality rates and healthcare, all these kinds of things that we enjoy today. Vienna played a big part in that and I would love to learn more about it. Yeah, absolutely. Good recommendation.

Nice. All right. And then, Martin, for people who want to hear your thoughts after today's episode, how should they follow you or Adverity? I don't really have a social profile. I do have a LinkedIn profile, but I'm not...

I'm not hanging out on any of the popular spaces. And email is always appreciated, and I respond to emails. Other than that, LinkedIn certainly works as well. For Adverity, visit us on our homepage to get more insight on our product. But personally, yeah, LinkedIn. And LinkedIn, yeah, for connecting with you. And then I guess, what's your email then, if people want to reach out? That's a very generous offer to make.

Yeah, that's martin at adverity.com. There you go. That's pretty easy. Nice. We'll have that in the show notes. All right, Martin, thank you so much for being on the show today. I hope you enjoy the rest of your day over there in Vienna. And yeah, maybe we'll check in on you and the Adverity journey at some time again in the future. Thanks for all your insights today. All right. Thanks for having me. Cheers.

Cool episode with this super successful but also super modest Martin Brunthaler. In today's episode, Martin covered his journey from programming on basic computers as a child to co-founding Adverity, a marketing data analytics platform that simplifies integrating data from multiple sources and then gleaning actionable insights from those consolidated data.

In particular, he talked about the concept of data democratization through Adverity's new Data Conversations product, allowing users to query data using natural language rather than relying on fixed dashboards or SQL expertise. He talked about the importance of data quality, anomaly detection, and proper data descriptions for effective AI-powered data conversations.

How generative and agentic AI tools are freeing data professionals from routine busywork to focus on strategic value creation and analysis. And his advice for entrepreneurs, including don't overthink solutions, use proven technology, start building quickly, and always prioritize customer feedback.

As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, the URLs for Martin's social media profiles, as well as my own at superdatascience.com slash 891.

Thanks, of course, to everyone on the Super Data Science podcast team: our podcast manager, Sonja Brajovic; media editor, Mario Pombo; our partnerships team, consisting of Natalie Ziajski and Nathan Daly; our researcher, Serg Masís; our writer, Dr. Zara Karschay; and our founder, Kirill Eremenko, who does a ton for this show behind the scenes. Thanks to all of them for producing another insightful episode for us today.

For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors. You, yes, you, can support this show by checking out our sponsors' links, which are in the show notes. And if you yourself would ever like to sponsor an episode, you can find out how to do that at jonkrohn.com/podcast. Otherwise, support the show by sharing it on social media, reviewing it on your favorite podcasting platform or on YouTube, or subscribing if you're not a subscriber already.

But most importantly, just keep on tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Until next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the Super Data Science Podcast with you very soon.