
321: Can AI Really Predict Startup Success? Data Ethics, Trust, and Insights from Megh Gautam, Crunchbase CPO

2025/2/3

AI and the Future of Work

People
Dan Turchin
Topics
Dan Turchin: I worry that a new generation will over-rely on LLMs and lack critical thinking skills. We need to teach the next generation how to critically evaluate the information LLMs provide and cultivate their ability to judge independently. This is not just an education problem; it is a complex problem with behavioral, societal, psychological, and technological dimensions. We should teach kids not to blindly trust strangers, and likewise not to blindly trust information from LLMs: question the source, and look for other evidence to verify that the information is true. Even that is not foolproof, because believability is itself a construct, not ground truth. With LLMs in widespread use, how to define "truth" and how to assign responsibility for spreading misinformation are hard questions. We need mechanisms to trace information back to its source and hold those who spread falsehoods accountable. We need to scrutinize LLM output far more rigorously rather than blindly trusting it as we do now, recognizing that LLMs can be wrong and can produce false information.

Megh Gautam: Crunchbase uses its proprietary data and AI to predict the future trajectory of startups, including their ability to fundraise and their valuations. We take the accuracy of our data and the reliability of our prediction models very seriously and invest heavily in both. Our AI prediction models draw on multiple data sources and methods, including historical data, news coverage, and partnerships, and we rely on several models rather than a single one to improve accuracy. To handle uncertainty in predictions, we take different approaches on different surfaces (API and UI): in the API we can provide richer detail, including confidence intervals, while in the UI we publish only predictions we are highly confident in. We work closely with our legal team to state clearly that predictions are probabilistic and to remind users not to base major financial decisions on them alone. We are cautious about protecting our proprietary data and about our relationships with LLM vendors; proprietary data should not be casually exposed where it can be misused. At the same time, content creators need fair compensation, so we support market-based deals between creators and LLM vendors. Consumer behavior will largely determine how LLMs access public content: if consumers prefer getting information through LLMs, creators will need to allow access, but they must be fairly paid. I am bullish on this market-based model for text, images, video, and other content; market mechanics can align the interests of creators and LLM vendors. Where LLMs can distort information and damage brands, preserving user trust is paramount; we cannot let misinformation harm our brand or our users' trust. I have learned a great deal from past leaders and colleagues and applied it to my career; one key lesson: keep learning, keep growing, and enjoy the work.


Chapters
The podcast starts by discussing the challenge of applying appropriate skepticism to information from LLMs, especially for generations growing up with this technology.
  • Difficulty in applying skepticism to LLM-generated information
  • Generational change and reliance on LLMs for research

Transcript


What I'm most worried about is when there is a generational change, like the generation that grows up on LLMs or has ChatGPT or pick any particular LLM vendor as the thing that helps them with their notes, with research. It'll be really hard to be, okay, how do I apply the appropriate amount of skepticism to it?

Good morning, good afternoon, or good evening, depending on where you're listening. Welcome to AI and the Future of Work. I'm your host, Dan Turchin, CEO of PeopleRain, the AI platform for IT and HR employee service.

As you might have noticed, I lost my voice, but you know what? We got such an amazing conversation today. I couldn't bear the thought of leaving you hanging. So here we are. We play through the pain. Let's do this. Our community is growing. If you like what we do, sign up for the newsletter. We launched it a couple months ago. There is a link to join the community in the show notes. Of course, if you like what we do,

It's a like and a rating on Apple Podcasts, Spotify, or wherever you listen. If you leave a comment, I just may share it in an upcoming episode like this one from the listener mailbag from Tabitha in Ottawa, Ontario, who is a financial analyst and listens while folding laundry.

Tabitha's favorite episode is the one with Rodrigo Liang, CEO of SambaNova Systems. That was a great one about how he launched an AI unicorn and how open source AI is making LLMs better, faster. Tabitha, glad you enjoyed it. We learn from AI thought leaders weekly on this show. Of course, the added bonus: you get one AI fun fact each week. Today's fun fact.

The Press Gazette summarized the current state of content publishers wrangling over fair use with LLM vendors. OpenAI is reportedly offering news organizations between $1 and $5 million per year to license their copyrighted content to train its models, although it has been reported that News Corp's deal is worth more than $250 million over five years. Meanwhile, Apple is reportedly exploring AI deals with the likes of Condé Nast, NBC News,

People and Daily Beast owner IAC to license their content archives. Jim Mullen, who's the CEO of Reach, the UK's largest commercial publisher, shares a different perspective. Jim says, we don't want a situation like we had with Google 10 years ago. We gave Google access and became hooked on referral traffic. We must be aligned as an industry on how LLMs use our base intelligence. My commentary,

A year ago on this podcast, we said this will be the year when the relationship between LLM vendors and content owners will define the future of AI. It has certainly played out as expected. More fireworks are ahead as disputes move from call it backroom dealmaking to courts around the world. There is hope for content owners. There has never been a better time to be a creator. Of course, we'll link to that article in today's show notes.

Now shifting to this week's amazing conversation. Megh Gautam is not only a friend, but also one of the best product minds in Silicon Valley. He's currently Chief Product Officer at Crunchbase, the definitive repository for data about startups, funding, and valuations. Prior to Crunchbase, Megh was Head of Product at Twilio and Director of Product Management at Dropbox and Hearsay Systems. He's also an active angel investor, and

Megh received his Master of Science in Management Science and Engineering from Stanford University, go Cardinal, my department as well. And his bachelor's degree in Information Technology from the National Institute of Technology, Durgapur in India. Without further ado, Megh, my friend, it's a pleasure to catch up and, of course, welcome you to AI and the Future of Work. Let's get started by having you share a bit more about that background and take us into the space.

Dan, this has been the conversation I have looked forward to the most. It has been an absolute honor sort of watching you. We got connected, we were obviously in the same department over at Stanford, and I still go to you for a lot of advice. So this is kind of bringing all of those conversations to a podcasting forum, which is great. Thank you for that intro. So I started at Crunchbase about a year and a half ago.

And it's been a journey to get us to this point with all of our proprietary data. Sort of everyone knows about Crunchbase, but the perception they have is like, hey, should I check out this company on Crunchbase? Should I look at how this company is doing? We're basically taking all of that trapped demand of, how is the company doing? Can this company fundraise in the future? What's the prediction? And then bringing that front and center. So we have a big launch coming up in Q1 of next year. It's basically the future of Crunchbase, where we look at the past, we look at our extremely proprietary data set, and then we project forward and tell you

what is likely to happen with a particular company. So it's a lot of old-school prediction models coupled with the new-school gen-AI LLM goodness that you're seeing out in the market. In past lives, I was at Twilio. I had really never been part of the dev tooling space, but looking at how the messaging, the voice, the video business grew was fascinating.

I spent more than three years at Dropbox looking after new products, looking after video as that trend completely exploded. And then before that, I was at Hearsay for about four years handling sort of new product expansion for them as well. So it's been a really fascinating journey and I'm excited to talk about it. I'm excited to talk about all the new trends that are shaping the future of work. You and me both.

So to startups and investors, of course, Crunchbase is a household name. I'm curious to know from you, what have you learned that surprised you that maybe the casual user of Crunchbase doesn't know?

Yeah, it's a great question. So every single company that I was at previously, we used Crunchbase in some capacity or the other, like to find investors, to find companies, to go target other companies that look like this particular company. This is obvious in hindsight, but the base of data will inform how good the prediction models are. Like, if your foundation isn't well curated, you're not going to have great insights. And just the amount of care

that we have put and continue to put and sort of are investing in making sure both the ingredients, the foundation, and how it shows up for the end user are trusted is a really, really core focus of my work here at Crunchbase.

So I want to unpack that use case. You said coming in next year, some predictive modeling might indicate which companies are likely to be successful, what valuation, that sort of thing. Talk us through the AI behind that. Where's the data coming from that you train on and how are you making those predictions? And then to the extent you're going beyond those two that I mentioned, what are you predicting? Totally. So when I got here, I'll kind of walk through the process where we...

It seemed to the outside person looking in where it's like, oh, you're all kind of looking at the AI train and seeing where it's going. But we have always been pretty instrumental and pretty forward thinking in terms of how we could use the latest AI technology. But AI is not as powerful if it were not to solve a core kind of user slash customer problem. So we took...

six months to just, like, have a tour of our customers, to be like, what are your burning questions? What are the answers you can't get anywhere? Because we want to use AI to be different and better. And in that framing, what we realized was, there are questions about the present, like, what's happening right now? Make me smarter. And it really becomes an ROI question of, like, hey, 30 seconds, tell me what's going on. If I want to dig deeper, I will go dig.

And then it comes to prioritization. There's so much data out there. How do you make sense of it? And then customers were coming to us repeatedly, investors saying, hey, too many inbound pitches. How do we prioritize startups? So many seed funds. Who do we go to? So many accelerators. Who do we go to? Salespeople, same question as they look at prioritizing accounts.

And so there's the insight part of it, which is basically, hey, this is the signal from the noise. And then extracting that forward into the prediction is basically, we know what's happening right now, like what the milestones are, both financial and newsworthy. Like somebody strikes a partnership,

And we have all of that. Our community helps sort of update the core sort of data points that go into a particular company. So we have seen their journey, right, from being incorporated to raising money to being on a tear in terms of their entire growth trajectory. And we are taking the macro as a coloring brush almost, which is like what's happening in this space. And in many cases, these are interesting when you redefine industries.

Like, LLMs, outside of folks who really, really paid attention, weren't really a thing about three or four years ago.

They're like, yeah, you know, it's happening. It's there. ChatGPT really gave it its moment. And then after that point, it got popularized, got operationalized. And we are seeing that in real time where we're like, hey, disproportionate attention in terms of funding, disproportionate attention in terms of newsworthy stories. Like, why is something getting picked up versus the other? And that's helping inform our models, right?

to go build the right sort of predictive capabilities to say, hey, this thing will happen over the next six months or the next 12 months. And in many cases, it is unbound because the future is just that. We're like, hey, we don't have a time bound duration for when

an event will happen. We just think it's more likely, and we track each delta in the likelihood of it. So I'll give you an example. If a company has announced a partnership with a large enterprise, they're much more likely to be gravitating upmarket, which means it's predictable revenue, high revenue

quality. And all of those are things that our customers and our users are typically expecting, slash don't get anywhere in a really consumable format. And that's the value of using predictions as well. We're using the typical sort of

not being beholden to one particular model. I don't know if you saw the Menlo Ventures survey. It came out yesterday or two days ago, which found that even large enterprises are using three-plus foundation models just to be able to train workloads separately. We're like, that's fascinating because it is quite counterintuitive. You would think that there would be one sort of winner-take-all model.

But we've kind of followed that entire trend, where we're like, hey, we're not backed into any architectural choice. We're pretty bespoke in terms of, how do we make targeted improvements in terms of the questions that our users are asking.
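For readers who want the shape of that made concrete, here is a purely illustrative sketch of the pattern being described: scoring a company from milestone signals and averaging across more than one model so no single model is decisive. Every feature name, weight, and bias below is hypothetical; Crunchbase's actual models are not public.

```python
# Hypothetical sketch only: combine milestone signals into a fundraising
# likelihood, averaging several simple models rather than trusting one.
# All feature names, weights, and the bias term are invented for illustration.
import math

def logistic(x: float) -> float:
    """Squash a raw score into a 0-1 likelihood."""
    return 1.0 / (1.0 + math.exp(-x))

# Each "model" here is just a different weighting of the same signals.
MODEL_WEIGHTS = [
    {"enterprise_partnership": 1.2, "recent_funding": 0.8, "news_mentions": 0.3},
    {"enterprise_partnership": 0.9, "recent_funding": 1.1, "news_mentions": 0.5},
]
BIAS = -1.0  # hypothetical base-rate offset

def fundraise_likelihood(signals: dict[str, float]) -> float:
    """Average the likelihood produced by each model."""
    likelihoods = []
    for weights in MODEL_WEIGHTS:
        raw = sum(w * signals.get(name, 0.0) for name, w in weights.items())
        likelihoods.append(logistic(raw + BIAS))
    return sum(likelihoods) / len(likelihoods)

# Example: a company that just announced a large enterprise partnership.
print(fundraise_likelihood({"enterprise_partnership": 1.0, "news_mentions": 0.4}))
```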

That was not a planned plug, but loyal listeners know that several months back we had Naomi and Derek from Menlo Ventures talk about the previous version of the Menlo Ventures AI survey, which was a fascinating episode. I encourage you to go check that one out in the archives. So Megh, that's a fascinating use case, specifically because it's different from the typical kind of LLM, NLP kind of language parlor tricks that

we're getting very accustomed to, I shall say. I like the fact that it's more a traditional use of AI for machine learning, predictive analytics.

Now, one of the challenges that we always have to confront as product people, as data people, when we're using probability to make predictions, is there are always going to be false positives and false negatives. So Crunchbase, let's say, makes a prediction, and it's possible that there's a confidence interval, and it's possible that it's wrong. That would be a false positive. It's also possible that the model might be,

say, not confident enough, and so it knows something that it doesn't share: a false negative. How do you think about kind of the responsibility, or how do you communicate that there is some kind of a confidence threshold when you're making these predictions?

That is such a great question. And we have wrestled with this because it affects all parts of the product. The product manager, the designer, how would you represent this? Is it an arrow? Is it something else? Then the lawyers. It's obviously a thing everyone at Crunchbase cares deeply about. So what ended up happening was...

we naturally stratify. So we have an API product, which is the predictions and insights product, where we're able to provide a lot more color because you're not constrained by real estate. So we can give confidence intervals. We can give like, hey, we are unsure or we think this is stable to growing.

And that has landed really well with our large customers, who have teams who can make sense of it, because there's a translation layer we're providing; there's a dictionary. But we have to make sure on the consumption side that they are able to sort of use all of this information. But on the UI, you just have very limited real estate.

So we have to be absolutely sure. And that threshold changes, where we're like, hey, for high precision, high recall, we are going to go publish them. And that drifts from, like, hey, it's between 0.8 and 1, or in many cases for us, as we backtest, it's 0.9-plus to 1. And that's where we have absolute surety. And we also want to make sure that those are the most interesting sort of insights and predictions that we are propagating to the world.
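As a minimal sketch of the surface split being described, assuming hypothetical field names and thresholds rather than Crunchbase's actual API: the API surface returns full probabilistic detail, while the UI surface publishes only predictions that clear a high confidence bar.

```python
# Minimal sketch, not Crunchbase's actual API: richer uncertainty detail
# on the API surface, publication only above a high bar on the UI surface.
from dataclasses import dataclass
from typing import Optional

UI_PUBLISH_THRESHOLD = 0.9  # the "absolute surety" bar for the UI

@dataclass
class Prediction:
    company: str
    label: str         # e.g. "likely to fundraise in 12 months" (hypothetical)
    confidence: float  # model confidence in [0, 1]
    interval: tuple    # backtested confidence interval, e.g. (0.85, 0.97)

def api_payload(p: Prediction) -> dict:
    """API surface: expose everything, including the uncertainty."""
    return {
        "company": p.company,
        "label": p.label,
        "confidence": p.confidence,
        "confidence_interval": p.interval,
        "disclaimer": "Probabilistic signal; not financial advice.",
    }

def ui_payload(p: Prediction) -> Optional[dict]:
    """UI surface: publish only when confidence clears the high bar."""
    if p.confidence < UI_PUBLISH_THRESHOLD:
        return None  # suppressed entirely rather than shown with caveats
    return {"company": p.company, "label": p.label}
```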

I think the uninitiated sometimes don't appreciate the liability that you have when you make these predictions. And you mentioned, you know, I'm sure there's conversations with legal. And for example, I mean, these aren't life-and-death decisions, but someone makes a financial commitment based on

a credible prediction from Crunchbase, and you would hate to be in a position where it's wrong, which it can be, it's probabilistic, and they come back to you and they have maybe a case for damages. How do you think about that trade-off, and how have you managed to get a product to market given that there's that risk?

Yeah, we have a really stellar legal team that has sort of stayed apprised of every step of our development. And we are very clear about, hey, this is probabilistic. Please don't base your financial decisions on this. It should be a signal and an input.

But the more you start disclaiming, the more value you kind of take away from it because a lot of this isn't available out in the world. So you have to get ahead and then do a lot of education like, hey, why are we saying the things which we're saying? And then for folks who really want to know and sort of make it a core part of their entire business process, we have to have explainers, which is like, what did we backtest on? What were the intervals?

And it really is, like, progressive disclosure. And that rabbit hole, as you know, Dan, is pretty deep. It is as deep as a person wants to go consume. So we have to be really careful in terms of not losing people in the details, but also being extremely comfortable sharing what we believe is true, within the bounds of liability, obviously. Yeah, good answer.

This rationale that I typically use would not hold up in a court of law, but I think it's credible, which is that if you compare the accuracy of, let's say, a Crunchbase prediction using AI probabilistic modeling, it's likely more accurate than a human doing their own research with limited data and probably some biases. So I'd say it doesn't introduce any more bias or risk than

not using Crunchbase's machine learning. But I realize that it's easier to blame a machine than a human, but I don't know if that's an argument that you've discussed internally. We have. So if we look at sales and customer prioritization use cases, it's a very simple thing for us to be like, okay, how do you do, as an example, territory math? Do you just randomly assign by geographies? Do you know what a person is good at? Do you know the features of a company that you close versus the one that you don't?

And those are just, like, very helpful conversations where, you know, companies tend to give ground grudgingly. They're like, hey, they'll start small. And then once you establish a wedge of, like, hey, we're better than what you have, and we are much, much better than the alternative, you sort of earn the right to play at a much higher level there.

So I want to flip the perspective here. In the fun fact, I mentioned the complicated relationship that content owners have with the foundation model owners. And I referenced a decade ago, they had a complicated relationship with Google. And in some respects, you wanted Google to hoover up your data because then it would drive traffic back to your site.

I'd argue this situation is a little different because some work or answer gets derived from your content. There may or may not be any attribution, and there may or may not be any traffic that gets driven back to your content. How does Crunchbase think about the proprietary data that you own and whether or not, or I should just say open-ended, what kind of relationship you'd like to have with the LLM vendors?

Yeah, so I'll caveat all of this with it is so new and emerging because even Google itself, there's like the Gemini answer box. And then it's basically like providing the answer to questions. And that in most, if not all cases, will not drive any traffic to your site.

So at that point, the posture we've taken is the proprietary signals we have, like that shouldn't be exposed out so it can be derived and then there's no like value slash sort of attribution ascribed to it or it's like gone down.

But if there is, so Crunchbase, as an example, has a news email as well as a news site. So there we profile the unicorns; like, the last one was about how to survive the sort of post-ZIRP era.

And that can definitely lead to an interesting structure of like, hey, that's available and you can read on that. But there are breadcrumbs that you can follow back to the site, which has provided the data to go make it available. But I can't stress enough, we've looked at scenarios across the board where we're like, hey, how is content being accessed? Is it inside of all of these walled gardens? If I'm a ChatGPT user, do I have references? And then do I

go to a URL. It's the same with Perplexity, or it's the same with using, like, the Gemini answer bot. And we're keeping a very close eye on how these relationships are evolving. But I do think you've absolutely hit the nail on the head. It is very, very different than 10 years ago, because there was clearly, like, aggregation of demand,

and disaggregation of the text-based information. Here, that's not the case. It's still a very, very emergent behavior. And the thing we can do is probe. We're like, hey, we'll strike a partnership here. We'll see what happens over there. But it's very premature to commit, because you just don't know where consumer behavior is going to go. To your point, the parlor tricks might become old. We haven't talked about hallucination. There was the whole sort of...

sort of article around, like, the Google answers asking people to eat rocks. Like, I don't think that goes away. It has been abated, I think, to a large extent, but it's a very fast-moving world.

That's a logical extension of what we had just talked about with the liability. An LLM hoovers up your data, hallucinates something that's not true, and maybe it's a prediction or something that is not what the Crunchbase prediction service would provide, and someone takes some action. Clearly, Crunchbase isn't liable for that false information, but it gets determined that the source was Crunchbase.

How do you think about that scenario? Yeah, it is. And I think liability, there's the strict sort of legal definition, and then there's brand. Your brand will take a hit because you'll have all of these sources, and we just can't afford that because now trust...

and attention are like the two scarcest resources. So anything that tries to take away either trust from our end users or this entire affinity towards attention, which is like, I want this one thing. And now that we have you in our interface, you can do these 10 other things, which you had no idea you could do before, are things that we will not look at super positively.

So this one's a little wonky, but I got to ask because we're both nerds. So there's this somewhat obscure protocol called robots.txt.

And every site can choose, for your public-facing content, to publish rules that restrict certain bots from crawling it. What is Crunchbase's perspective on, if it's outside of a firewall, sorry, outside of a paywall, can any LLM access it? Or do you actually use robots.txt to restrict LLMs from crawling?
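For context on the mechanism under discussion, here is a minimal sketch using Python's standard-library robots.txt parser. The crawler tokens shown (GPTBot, PerplexityBot, Google-Extended) are ones the respective vendors have publicly documented; the policy itself is a hypothetical example, not Crunchbase's actual robots.txt.

```python
# Hypothetical robots.txt policy that blocks documented LLM crawlers while
# allowing everything else, checked with Python's standard-library parser.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for bot in ("GPTBot", "PerplexityBot", "Googlebot"):
    verdict = parser.can_fetch(bot, "https://example.com/companies/acme")
    print(f"{bot}: {'allowed' if verdict else 'blocked'}")
```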

I don't know what the latest is here, but we have had extensive debates, which is, like, ChatGPT has, I think, multiple crawlers, as an example. And then you have the same exact thing applied to Perplexity. And then you don't want to get into this game of whack-a-mole where you're like, hey, I blocked this, but this other thing is available. So we

are trying to take a very principled approach of, like, hey, if you're paying for the content, and I think most crawlers respect paywalls, but this is also a thing which we are sort of evaluating as well. It's so fast-moving that I couldn't tell you right now what the disallowed crawlers are in terms of our robots.txt. But it's a great question. Outside of the wonky part, just curious to get, what does Megh think about,

You know, should content that's outside of a paywall, that is accessible to the public, just, should that be, in general, you know, Megh, as web browser, as consumer, would you expect for anything that's available publicly on the public Internet to be accessible through an LLM?

Yeah, this is me. So just my personal opinion, I think consumer behavior will drive a lot of this. So if I have pivoted entirely to using perplexity because I just want the answer, and if I want to go down the rabbit hole, I will. Or if I've chosen to use the Gemini answer box and I don't want to scroll through the sponsored links, then it becomes really important for the content creators to go allow that.

But I'm trying to reconcile this with creators also needing to be paid a fair value. And I'm also trying to reconcile it with this: I think it was the Goldman report where the creators, independent content creators, will be the largest class of profession in the next five years. So as that ramps up...

And you're available everywhere. I feel like there's got to be a protocol or a way to go make this not be as contentious as it has to be argued in court, just like you're seeing right now with a lot of the kind of leading-edge, first-gen sort of AI vendors.

We're seeing a lot of marketplaces appear, startups, to automate that conversation between the content creator and the LLM owner, where not just the big ones, but anyone who's looking to build a large language model can go and essentially bid for content. And there's kind of a market-based way of figuring out what it's worth. I like that approach, me on a soapbox, but I think that's consistent with what you were saying as well.

Got it. Are you bullish on that long-term? I am. Okay. I think, I guess as you said, I think there's value in the content. I think

Content creators shouldn't be tricked into, or learn later, that their content was hoovered up by an LLM. I think it should be transparent. And I think when we as consumers want that content, we want to celebrate the creator community. As I said in my commentary on the AI fun fact, I think there's never been a better time to be a creator, because there's so many more ways that involve less friction to get access to great content.

But I think part of the foundation, just kind of at a systematic level, I think there have to be these kinds of marketplaces so that it's easier for content creators to monetize what they create. Yeah, I think that's a totally viable path. And I think the market mechanics will help drive the incentives in the right way. So in that world, are you bullish on this for not just text? So like images, videos, artwork, all of that as well?

All of it. Anything that a creator might build that that creator wants to get into the hands of more consumers.

And it goes beyond. It could be documents, papers, books, as well as, of course, images, video, maybe emergent, maybe things involving AR, VR, virtual worlds. I think for all these artifacts you can create as a creator, LLMs just turn out to be a very efficient way to disseminate that content. And I think as a creator, you should get to choose if you can.

It's a business decision whether or not you think it's positive for you to share with an LLM. But if you make that decision, there should be a market, and the market should decide what that content is worth.

Yeah, and I think this is basically, like, taking tried and tested models, right, where you had, like, the AdWords model, where you have the bids for the stuff that there's inherent demand for, and then sort of going against the disintermediation, where you could basically be as granular as, like, one piece of content versus a subscription, where you have, like... yeah, I think that's super, super

like, viable path forward, because this makes sense, and I think it aligns incentives in a way which I don't think any other model does right now. So, you mentioned, which I agree, that trust and attention are the scarce resources, right? So with regard to the trust component, let's say in this world where there are these nice market-based patterns for creators and consumers,

What happens when the LLM vendor, whoever the technology owner is, buys some content that turns out to be false, misleading? Who is responsible for the dissemination of truth? And I mean, you can even extend that to the philosophical question. What is truth in a world where we're constantly exposed to these LLM parlor tricks? What does it even mean? Yeah. What is truth anymore? Yeah.

I can always rely on you to have the true principle-based philosophical approach on this, where there are ground truths, and then there are truths that are prevalent inside of an organization.

And then the other axis is, like, does it do harm versus not? And I think it's a very, very complicated question, especially as it leads to consumers. Because if you're making decisions based on that particular information outside of just, like, entertainment, and I think parlor tricks are just that. They're entertainment. They're a way for you to consume some things, like probably content, but

But if you're making decisions, I think the chain of custody really matters. It's like what led to this, which led to this other thing. And it goes back to like, what are the ingredients that went into the soup that you had? And if that soup did not sit well with you, then which particular ingredient was it? And that traceability,

I don't think exists today in the broader sort of consumer market. Yeah. What's your take on that? You know, back to like what I was saying about, you know, humans can make bad investment decisions and so can models or machine learning models, right?

It applies here, where we always knew that it was a bad idea to trust Dr. Google. Google will tell you anything about any symptom, you know. And so people have been making bad medical decisions, financial decisions, life-impacting decisions based on what they see from Google. I do think that this time around it's different, because, to your point, I like the term chain of custody. There's such a kind of willing suspension of disbelief, like

I feel like we haven't yet been educated about how these LLM outputs are generated. And there's just way too high, I believe at this point, at least in this global experiment, there's way too high an assumption that what you get from the LLM, it's not cited, it has no sources, it seems authoritative. I call it credible nonsense, but there's just a knee-jerk reaction to assume it's true. And

I'd encourage everyone to assume it's not true first, and then know that the burden of proof is on you before you act on anything from an LLM. I think it's harder than questioning what came from Google, because it's deriving some stuff from some sources where, to your point, you don't know what the ground truth data was; you don't know what happened

in the meat grinder that went into generating the output. So I'd encourage everyone listening to really apply more scrutiny to what you see from an LLM versus what maybe you thought might be true from Google. Yeah. Let me take the other side of that argument. I think as a society, I think we have kind of demonstrated that skepticism isn't a thing that comes naturally to us. What I'm most worried about is

when there is a generational change, like the generation that grows up on LLMs or has ChatGPT or pick any particular LLM vendor as the thing that helps them with their notes, with research, it'll be really hard to be, okay, is this, how do I apply the appropriate amount of skepticism to it? And I think it's

a fascinating problem, but it's one of those wicked problems where it's behavioral, societal, psychological, and technological. And I don't think there is one solution that solves for it well enough. And I don't think education alone will sort of do it. There has to be a solve that has to get repeated, rinsed, and tried for different kinds of populations that will lead to an increased trust. Because

these apps already have our attention. So now it's kind of working backward to the trust part. We're both parents. What should we be telling our kids about how to and how not to consume content from LLMs? Oh, it's all about how believable is this? And can you find another source to go prove this out? And

But even those are weak, because believability is a construct. It's not a ground truth. And then finding another source that gets whipped up is pretty easy. So I definitely haven't struck upon the right balance there, but I 100% agree with you. More skepticism is the right way forward. I think as parents, it's an opportunity for us to

have important conversations about ethics and morals and principles and questioning your sources in a positive way, not in a cynical way. Yeah. That we are ultimately responsible for the decisions we make, and whether it's a stranger giving you candy or an LLM feeding you misinformation, you have an obligation, and I think it's an important opportunity that we have. I'm an AI enthusiast. I believe these are opportunities

to help instill critical thinking and rational judgment and things that otherwise may... We didn't have these opportunities necessarily a generation ago. I think that's what parents should be doing. For sure. I think bringing it into a physical analogy, like don't trust strangers or don't... And I think there's a precedent for it, which can mold it into a more desirable behavior. So I like that. I'll steal that.

Steal liberally. Let's educate more parents. All right, Megh, this one's flown by. I've got to get you off the hot seat, but not without answering one last important question for me. So you've got that great track record. You've been around Hearsay Social, Hearsay Systems, a legendary group of leaders, Twilio, Dropbox, and now Crunchbase. Who are your role models, and what have you learned from them?

This is such a great question because this gives me an opportunity to sort of give back as well. Because, like, Dan, you've been foundational to me making decisions in my career as well, which is what do you learn? I think your whole framework was like, am I enjoying this? What am I learning? And then how are you growing?

And I've used that time and time again as I sort of jumped from one place to the other. But I've worked with a fantastic group of leaders, a group of folks who I'm still in touch with to this date. Clara at Hearsay, now at Meta. Mark, who was at Twilio,

who started his own company that's going really well. I think that was the longest manager-employee relationship I've had, and he was foundational to a lot of stuff that I've learned. Then at Dropbox, it was a fascinating group of people who were just amazing at their discipline and great at the people management part of it. So this is the strongest sort of group of folks

in terms of talent density, like Genevieve, Akhil, Ruchi. It was just an amazing sort of group of folks; you just want to be next to them so you can absorb all of the stuff that's kind of coming out of them. And yeah, I've been very lucky. And I hope to continue to put myself in those situations and then give back as much as I can to folks who are just starting out.

Great answer, and that really is a great list. And I expect big things from you, because you're now a mentor to, you know, all the other, the younger generation that's looking up to you like you look up to some of your role models. Thank you, Dan. Appreciate it. Yeah, hey, this one's been a blast. Thank you to everyone for tuning in to old college buds Megh and Dan hanging out. Megh, congrats on all the success, and great catching up. Please,

You've got an open invite to come back again, and I'm curious to hear about you and the adventure at Crunchbase. Thank you so much, Dan. You bet. All right, well, that's all the time we have for this week on AI and the Future of Work. As always, I'm your host, Dan Turchin from PeopleRain. And of course, we're back next week with another fascinating guest.