Welcome to Most Innovative Companies. I'm your host, Yasmin Gagne, joined by my producer, Josh Christensen. Hey, Josh. Hey, Yas. Josh, where were you when you heard about Sam Altman getting fired from OpenAI? Well, you know, this is a story I'm going to tell my grandkids one day of the time that Sam Altman left. Wait, which time are we talking about?
Him leaving, him coming back, him reportedly coming back. We're going to get into it on this episode, but I'm talking about like the first time. The Friday before Thanksgiving. Ooh, yes. Black Friday, as we now will be calling it. No longer the Friday afterwards. I was seeing a show. I was actually in a lovely reading of a play called Squeaky.
That was really great. And then I took my phone off airplane mode after the show and saw the news and was like, oh, this is the thing. This is going to be important. And it was. Where were you, Yaz? I was at the Texas Renaissance Festival. Hell yeah. And I saw the notification on my phone and I turned to my husband and I said, my
My lord, Sam Altman hath been fired from his post at OpenAI. Did all the people around you have to, like, feign the sort of, like, Ren Faire thing? What is this artificial intelligence? How can one's intelligence be artificial? Do you know the word that everybody says at a Ren Faire all the time? I don't know. Huzzah!
Oh, that makes a lot of sense. I've also heard Ren Faires are, like, super horny, but that's a... They're so horny. That's for another time. Josh, it's one of the horniest places I've ever been to. And it's proof that the American education system is broken, because there were people dressed in Gilded Age costumes. There were Juggalos. Oh, come on.
There were people in Mandalorian outfits. That's the anachronism. And I was like, guys, we had one theme and no one could keep to it. The prompt was Renaissance. You are wildly off from that. But this is, we'll talk more about the Renaissance fair afterwards. But the Sam Altman news was wild.
Later on, I'll be chatting with USAFacts President Poppy McDonald about making government data more accessible. But first: OpenAI CEO Sam Altman was ousted on November 17th by the company's board. Then he was reinstated on November 22nd. And it was like the most insane five days at Fast Company HQ. Here to help me figure out what's been going on at the company are Fast Company senior writers Ainsley Harris and Mark Sullivan. Welcome back. Thanks.
Thanks for having us. Hello. So let's just dive in. Give our listeners who maybe are not as keyed in a sort of short timeline. Well, it started on that Friday. That was one of the things that made it so weird: this all came down pretty late on Friday afternoon. Word came down, I think it was actually just a very short statement by the board, saying that Sam Altman had been dishonest with them. And that was pretty much all the information we got.
And that's what set off all this speculation about, you know, why he was fired. So that was kind of the first thing that happened. And then I suppose the second thing was, oh, OK, Brockman's quitting, too. And who's Brockman? Greg Brockman is the president and chairman of the board. That was a big one-two punch. And that kind of sent us into a fairly crazy weekend, you know, sort of speculation and kind of waiting for the next shoe to drop.
At some point, a new CEO was named. Tell me a little bit about that and who he was. Well, that was Emmett Shear, who had been CEO of the video platform Twitch. The board, or at least the way it looks, moved very quickly to get him in. And there was, like, a lot of confusion about whether he was an interim CEO or whether he was the CEO; different reports had it different ways. Ainsley, you can correct me, but yeah, they moved really fast, almost as if they were anticipating the immense pressure that would build up from the money people's side to get Altman back in.
And around that same time, Microsoft, which owns, I mean, give me the percentage, you know, how much of OpenAI does Microsoft own? 49%. Microsoft also gave Sam Altman and Greg Brockman jobs, right? It could have been optics, bargaining chips, but the word was that they were going to start a new advanced AI division within Microsoft that presumably would work on some of the things that are more like what OpenAI is doing.
And then around that same time, again, all this was sort of happening almost simultaneously. Ainsley, what can you say about how workers felt at the company? First of all, there was a lot of confusion. I mean, Sam himself only learned that he was fired moments before the news was released to the world.
And similarly, Greg, also a member of the board, did not find out until shortly before the board published this blog post. Everyone was in the dark, I think. Adding to the confusion, OpenAI had actually given all its staff the whole week of Thanksgiving off.
So a lot of people were already on planes, you know, traveling away from San Francisco. So, you know, I think there was just a lot of scrambling, a lot of people, you know, landing at an airport only to find out this bomb had gone off. So I think that, you know, there was...
Definitely kind of just a moment of complete confusion. And then I think what we saw was that the folks who were in San Francisco really rallied and came together and showed a huge amount of support for Sam. There were reports of people gathering at his home. Eventually, there was a petition that was circulated, which I think upwards of 90% of OpenAI staff signed, saying that they would quit unless Sam was reinstated as CEO. Why
was Sam Altman forced out by his company's board? Mark, let's start with you. Yeah, good question. We don't have a 100% clear answer even today. You know, there was a ton of speculation about it. I personally feel a little bit vindicated, because I wrote that I thought that there might be a research breakthrough of some kind that had real implications for AI safety and responsible AI. There was concern about the safety of that, you know, and so the board, in this company, one of the board's major functions is to hit the kill switch if something is
being developed and productized that could be harmful or risky. And that's exactly what they tried to do. I don't know if their method was great, but that's what it was. Actually, let's just take a couple of steps back. OpenAI has a really interesting company structure, because it started out as a nonprofit. I see Ainsley nodding. Can you just give us an explanation of the role of the board in this situation and kind of how the company and nonprofit interact?
And please explain it to our audience like they don't know anything, because I totally know how this company structure works, totally know how it works. But just explain it to me. I don't understand. Honestly, but seriously, I really don't understand how this not-for-profit/for-profit thing works. It doesn't make any sense to me.
Yeah, no, I think the important thing to understand before you go into the particulars of this structure is that the whole purpose of this structure is to ensure that OpenAI's pursuit of AGI, or artificial general intelligence, benefits all of humanity. That's sort of part of their core mission. You know, I think the idea with this structure is there's an assumption sort of built into that mission that AGI will be incredibly valuable, you know, valuable on the order of trillions and trillions of dollars.
And so if you believe that and you want to do good in the world, then it makes sense to try to create a corporate structure, or at least this was the theory behind OpenAI's structure, that ensures that OpenAI follows through on that mission instead of just taking all that money to the bank.
So the idea is that you have a sort of traditional corporate structure, but wrapped around it is this nonprofit structure designed to ensure that OpenAI fulfills its mission. And so there's also a sort of capped profit structure built into the corporate side where beyond a certain point, once OpenAI has paid back its investors,
it would then be, you know, in a position to sort of distribute money in a more, like, nonprofit way, in a philanthropic way, and benefit humanity. So that's sort of, I guess, the theory behind it. I think what's been so fascinating about this whole episode over the last couple of weeks is that what we've seen is that no corporate structure, it seems, is really powerful enough to rein in the economic incentives that the investors who have backed this company have in terms of their own return. So obviously there was enormous panic. Tons of major Silicon Valley investors, as well as,
Of course, Microsoft have supported OpenAI, invested in OpenAI. They are really banking on an amazing return on this investment. They don't want to see anything go south. The money really kind of stepped in here, I think, and
tried to sort of stabilize the ship. And, you know, I think it raises a lot of questions about, like, how meaningful is this corporate structure? You know, however you draw the lines of control on an org chart, if money is really the most powerful part of the org chart, I'm not sure how meaningful all those other structures and components are, I guess. Yeah, I think that's the central point of this thing: the Silicon Valley ethic has won out. And, you know, Sam Altman, in a lot of ways, is the embodiment of that mindset, you know, executing on massive amounts of investment. I do want to come back to Sam Altman in a second, but just to pause here and say, you know, it strikes me that the board actually did its job, right? Is that worth saying?
Yes. Yeah, yeah, I think you could say that. I mean, we obviously don't know. There have, you know, I think, been reports around that Sam has maybe agreed, according to, I think, The Information, to an internal investigation as part of his return. So theoretically, you know, more people, if not us, but more people will soon know a little bit more about exactly what went down in terms of the board's concerns. It does still feel, I think, despite all the reporting that we've seen, a little fuzzy exactly what got them to the point where, to your point, Yaz, they felt like they really needed to stand up and execute their responsibilities. And even a member of the board, Ilya Sutskever, eventually tweeted that he regretted his decision to oust Sam, right? Yes, even Ilya, who...
by all reports, was sort of leading this coup, if you want to call it that, eventually signed the petition himself, as well as Mira Murati, the CTO, Fast Company cover star, of course, and also the interim CEO that the board named before it moved on to Emmett.
I feel like anyone who didn't sign the petition was, like, you know, offline in Bali, because it just seemed like there were no outliers, which was completely fascinating to me, because I think, you know, folks who know the company well will tell you that it's really the COO, Brad Lightcap, as well as Mira, who really run OpenAI day to day. They're really in charge of the operations and a lot of the project, research, and product direction. So, you know, what is Sam, exactly, at OpenAI, if not, like, a very important cult leader, you know? Yeah. You know, you didn't need him to come back because you felt like your work product really depended on it. You wanted him to come back, at least that was my reading, because, you know, you felt that he was really essential to the mission and to OpenAI's sort of reason for being. Did you have a sense, you know, because as you intimated, obviously, like, we know what Mira Murati does, right? You wrote a cover story about her. Sam Altman, you know, seems to be the face of OpenAI, but what can you say about his role as a leader and what he did sort of day to day and what he means in the industry? Yeah, I mean, he has essentially...
led the charge in AI commercialization and the ability for generative AI to kind of turn this corner, where suddenly every corporate leader in America is saying, you know, what is my AI strategy? The reason people are asking that question is very much because of Sam Altman. There is a really interesting dynamic here, where I think people who work for OpenAI and sort of believe in his vision believe that there's this inevitability to AGI, that, you know, we're going to get there eventually. Better to sort of get there the right way, better to be the ones who are first. You know, how exciting would that be, to be part of that team that was first? But also that you really need someone who
is a visionary sort of thinking through the implications and able to engage with all the different stakeholders about those implications. So I think people sort of see him as that leader in the field broadly. You know, he's the person who goes to the White House. He's the person doing meetings in the EU. He's the person with all the right Silicon Valley connections. And so he can kind of bring together all these different worlds. And I think, you know, that is part of what makes him incredibly valuable.
Mark, I want to turn to you. Obviously, we don't know the full extent of the changes that are going to happen at OpenAI, but what can you say about the new board that was recently announced? Well, there are three new members, fairly well-known names. Bret Taylor is the former CTO of Facebook and co-CEO of Salesforce, a longtime Valley player, another person, like Sam Altman, who's very well connected, and a bit of an operations guy, not the visionary type, but an execution person. Then the former secretary of the Treasury, Larry Summers, got a board seat.
I don't really know what the thinking is there, other than to have possibly somebody who's connected in D.C., which he is. And then Adam D'Angelo, who is the CEO of Quora and reportedly one of the original people who raised the alarm and voted for Altman's ouster. The other thing was...
You know, the ex-board member Helen Toner is gone, but she was also part of the problem. She published a paper that was critical of OpenAI that Altman objected to. So that was part of the tension there.
So that kind of gives you an idea of what's going to change on the board. And reportedly, eventually, there are going to be nine members on the board. And as far as I know, there's still a possibility that Microsoft might get a seat on the board, too. Yeah, it seems to me like
That may have been an oversight on Microsoft's part, not having a board seat before. But Ainsley, I saw you nodding along to some stuff that Mark said about the board. What can you say about its makeup? Well, I find it fascinating that we have now, so far at least, a completely male board. It is very difficult, of course, to find women in technology, but it's not impossible. Interestingly, I think one of the reactions or a sort of set of reactions that I saw when
the initial news of Sam's ouster was unfolding was, you know, folks who are part of this camp that I would call sort of the techno-optimists, the people who do not want AI regulation, were very quick to come out and point out that people like Helen Toner, former board member now, did not know how to, you know, build things. But Sam Altman, I mean, Sam Altman doesn't have deep technical skill either. You know, he has some, but he's certainly not an Ilya, I guess you could say. But, you know, I think there is this sort of easy-to-dismiss view that, you know, yeah, folks who aren't in there sort of, like, writing the code and building the product don't really get it. However, it's interesting: I have not seen those same folks come out, you know, raising their fists in protest about the fact that Larry Summers is now on the board. Exactly what does he know how to build? Bret Taylor knows how to build product, even if he's not in there coding. A centrist economy? Yeah. Yeah.
But yeah, we haven't seen that same kind of reaction. And it just, it's a little sad that, you know, it just feels like there's this double standard. And obviously this is, like, an ongoing problem in Silicon Valley. Isn't this, like, part and parcel with, like, the whole point of going back to the not-for-profit structure and, like, pumping the brakes on generative AI and moving too fast? And part of the issue that we've seen in technology like AI, or like facial recognition programs before, is bias in technology based on the people who are the stakeholders. And if you have a homogenous room on the board, in the coding, in the development labs, I'm speaking like a neophyte now around what this is, the people who do the numbers of the things.
But if it's made by a homogenous group of people, it's a technology that's not going to work for all people. This is part and parcel with computer science and the development of the internet and computers for the past 50, 60, 70, 80 years since World War II, if not longer.
And particularly if your mission is that you want this technology to benefit all humanity, your board really needs to reflect humanity in a more comprehensive way. So I think it is a good thing that they are expanding the board.
There were, I believe, unfilled board seats. And that's part of the reason the board was so small. I believe that they've said that they want, you know, nine members on the board going forward, which is, I think, more spots than existed before. We'll see how quickly they're able to fill them. And again, yeah, who those folks are.
It's been interesting, you know, on our Slack, there's been a lot of conversation about Microsoft CEO Satya Nadella's role in all of this. We ran a cover story on him kind of recently that really didn't portray him as a sort of ruthless killer. But that's what this whole incident seems to suggest to me. What can you say about that, Ainsley? Yeah, he's a man who seizes the opportunity. I think it's pretty clear.
On the one hand, I think he deserves a lot of credit for how quickly he moved in. And I think that is a reflection of the relationship he's been able to build with OpenAI. You know, there's been various reporting, you know, over the last year or so about, you know, maybe some tension between OpenAI and Microsoft, particularly as they have, to some extent, almost competing products at this point, as much as they are sort of collaborators and partners. You know, it's not...
a perfect relationship, I wouldn't say, even if, you know, the people are working well together; just structurally, I think there's some challenges there. But I think it's clear that he has navigated that well enough that Sam was willing to look to him for a safe landing. So, you know, I think that's a huge win for him. But to your point earlier, I think, Yaz, you know, it is surprising in some ways that Microsoft was so vulnerable here. That makes a lot of sense. Mark,
What can you say about the outlook for the company now? Is Ilya going to lose control? What does the product roadmap look like? Just tell me about what we know. Yeah, I don't think...
Ilya loses total control. I think, the way you hear OpenAI people talk about him, he's kind of a revered figure and was sort of at the root of some of the really foundational research that makes the company valuable at all. The major theme here is that the research and safety people have lost power in the company, but that's been going on for a while now. And
you know, you're seeing a lot of moneyed interests gaining influence and power here, people, including OpenAI employees, a lot of whom are vested and want to see this pay off eventually. And I think that you're just seeing a continuation of the trend of money influencing this once really idealistic company.
I mean, this is like a really stupid thing to say, because I think AI is sort of world-changing technology and Facebook was a social network that became something crazy. But it sort of felt like Facebook starting to put out more ads and becoming terrible. And to some extent, it should just go back to its days of, like, very misogynistically rating women. That's what Facebook should just go back to. Ainsley, what does this mean for us, the consumers and
humanity right now? What can we learn from this whole incident? And wrap that up in 30 seconds or less. The impact of gender. Here I go. Yeah. One thing that I've been thinking about is just, yeah, it's interesting you bring up Facebook. I think there was this
real fear and almost anger for a while that it felt like people in Silicon Valley, you know, it's all these computer science majors who have never taken a literature class, who have no understanding about the humanities or political science. And here they are, you know, controlling the outcome of our democracy. And so I think there was this feeling like, you know, we need to bring the philosophers, we need to bring the humanists, the political scientists, like into these companies.
And what's sort of fascinating is it feels like it's almost like that message has been internalized. If you're a computer scientist, you're also an effective altruist and you have a whole philosophical approach to your work and your job and AI in the future. You know, I think it's really fascinating. And I think
I think we're going to start to understand a little bit better soon, you know, just that there has been this complete transformation in how people in Silicon Valley think about their work. And it is not just a bunch of, you know, like coding robots doing this work. I think people actually want to think of themselves in a different way and do work in a different way. And so there I see like a little glimmer of hope, right? I mean, I guess.
although there have always been sort of heartless libertarians coding things there. Let's not discount that philosophy. Yes, there are competing philosophies. There's not one dominant philosophy. This whole incident brought into focus for me that this change has been massive, and I think it's been kind of under-recognized. We're going to take a quick break, followed by my interview with Poppy McDonald. ♪
So first of all, congratulations on USAFacts making our 2023 Most Innovative Companies list. Thank you. Before we even start talking about your career broadly, can you give us an overview of what USAFacts does and its mission? Absolutely. So USAFacts, our mission is to empower Americans with the facts.
And we do that by taking government data from over 70 sources, bringing it all together in one place and organizing it by topics Americans care about. So health care, education, economy, jobs. And we just want to make it easy for Americans who we think are very confused right now about what is the source of truth to go and look at data. So we're nonpartisan, we're not for profit, and we're dedicated to making government data accessible and to exposing the facts.
And USAFacts was founded and funded by someone in our entrepreneurial ecosystem. Tell me about that. Yes, so it's founded and funded by Steve Ballmer. He's the former CEO of Microsoft. And when he retired from Microsoft, his wife challenged him to do something from a philanthropic perspective. And their goal is to lift children out of poverty and to give everyone the same shot at the American dream.
And being a numbers guy who used data to drive his business decisions, he said, like, wait, before we do that, let's look at what government programs exist to help lift kids out of poverty. Are they working? And then let's make sure we're filling in gaps rather than duplicating existing efforts. A few people said, hey, we'll help you out. They were financial analysts. Give us two weeks, we'll pull that data for you. And that's what it would have taken at Microsoft; it probably would have been a lot faster. But it took them six months to pull the data.
And that was a big light bulb moment for Steve, where he said, if it's this hard for me, with my resources, to get access to data about our government, how hard must it be for a voter, let alone even somebody in Congress, who, we've found, has a very hard time getting access to government data? You know, data in and of itself can be nonpartisan. I know that's a goal of USAFacts. I think it's also fair to say we sort of live in a post-truth world to some extent. How do you deal with that in your day to day?
So absolutely, we know that there is historic distrust in every major institution in this country, from academia to banks to government even. And so there is going to be some skepticism when we pull this data. The way we handle it is a couple of things. One, we only use government data,
and there might be skepticism: hey, can I trust that data? We would say that the people who produce that data are civil servants who work for decades across multiple administrations. And so we use only reported data, and we don't do any forecasting or try to make any predictions about where that data is going. We think that's where judgment comes into play. We only include data as published by the government.
So no polling, no forecast. We really keep it to just what are the facts as published by the government. I think I read that in its first year, so prior to you coming on, it launched and it was an amazing service, but maybe not enough people were sort of going on the site and accessing it. And I'm curious how you got more people to click without coming up with like a clicky headline.
The reason Steve hired me from media is he built this resource. He brought all this government data together. And then he said, not enough people are using it. I built this to be a resource to Americans, not to a small group of people. And so he said, how do we grow it? And at the time, we had a few products. We had a search box that said, what government data are you looking for? We had an annual report. And we still produce that every year. It's an annual report on behalf of government. It's 100 pages long.
And then we have a 10-K, which is the financial report that the SEC requires corporations to publish for their shareholders. And so we do that for taxpayers. And I said, I think that's a big lift for people coming for the first time to experience government data. Yeah, asking someone,
300 pages of government data is a lot, right? It is a lot. And it's a lot of visuals, and it's friendly and it's colorful, but, like, as a first entry point? And I said, I don't even know if people realize, like, what government data am I searching for. And so my thought to Steve at the time, coming from media, was we've got to make it a little bit more snackable, a little bit more bite-sized.
And so let's go in and see, based on what's going on in the news right now, based on what we know people care about or what's keeping them awake at night, what's government data that could help inform that? Certainly COVID, a huge example of where people were looking for really local data about what was happening in the pandemic, but even thinking about things like schools. I send my
daughter to high school, like, how are schools serving my kids, or how is inflation impacting my family or my livelihood? So really trying to take what's in the news and then make the data relevant, localized when possible, and make it accessible to people in a shorter form: article formats, interactive data visualizations. And it's really resulted in very nice growth for USAFacts and the audience we're reaching. So there's a presidential election next year.
And I'm curious how you all are preparing for that. So we know elections are a time when Americans probably get even more confused about what's going on. They turn on the news and, depending on what channel they're on, they're probably getting very different versions. And the incumbent may be saying everything's going great.
The challenger might be saying, "Everything's terrible." And people are wondering like, "How do I cast my vote? What is going on in this country?" And so we have done special projects before where we've done a voter center that looks at the candidates side by side with their positions. And then we also just try and say like, "What are the issues that are going to be driving this election?" And then do a big effort to bring in all the relevant data as local as possible so that we can make it available
at people's fingertips when they are hearing those issues being debated. And then certainly do fun things like during debates, we're live tweeting relevant facts, like what do the numbers say as the candidates are debating issues? And a lot of times that data doesn't come into play in those debates.
When you think of USAFacts as a resource, it's obviously a resource for individuals, but you've also published some data. I'll give you an example. You published data on, you know, 50 years of women's roles in politics, the workforce, and the domestic sphere. Some of the findings were that women earn a lower median income than men.
How do you think companies can use this information or executives when they're sort of like making policies for their workforce? Being nonpartisan, our goal is to provide the data so that people have the numbers and can make their own decisions about what to do with the information. And so as people say, like, well, things
seem a lot better for women, right? Well, what do the numbers say? Yes, there have been some great wins, right? Women's labor force participation grew from 34% to 60% over a 50-year period. But they're still earning 82 cents for every dollar earned by men. And there are certainly reasons for that in terms of, like, the occupations that they hold, but it's not
parity, right? Even when you look at Congress, it feels like you see a lot of women represented in Congress, but women are still just a quarter of Congress. And to get to parity with the population, that number would need to double. So we're just really trying to say, here's the data. And then you decide as an individual, as a business, as an elected leader, what do you want to do with
that data? And we're really hoping for a healthy debate, but we think a healthy debate about how to move our democracy forward starts with facts. And we want to move it away from, you know, adjectives, which is where we think it is right now. How are you using AI in your organization? We are a small but mighty team. We are about a 47-person team trying to bring together 90,000 sources of government data and make them available to the public. And we think AI has
so much potential and opportunity, in terms of not only helping us use things like Code Interpreter to write code faster to build those data pipelines, but also the large language models that would allow people coming to our site, allow Americans to use natural language to ask questions about things that are of concern to them, go right to the database, and get the relevant facts and information delivered to them. It certainly lets us
scale what we offer and the value we can offer and the questions we can answer beyond even what our small team tries to put out every single day to be relevant to the questions people have, and go from there. Just to talk a little bit about the funding model, it seems like it's all funded by Steve Ballmer. Is that always the case? Do you make any money as a nonprofit that you're able to plow back in? What does the financial model of the organization look like? I'll say that's
something I had to leave behind from media, where I was on the revenue side and I had to constantly think, how are we going to fund the journalism that we want to create? And I'm so fortunate, because Steve Ballmer funds USAFacts 100%.
And when I think about, hey, here are ways we might be able to monetize this, he says, stop thinking like that. That's from your old world. I want to do this as a gift that I'm giving back to the American public. He's even said, hey, maybe you'll be out of a job in a few years because I would love for the government to do this themselves. It should be that this is the
people's data, taxpayers fund the collection of this data. And we would love for the government eventually to do this as a service to their shareholders. The shareholders of this country deserve to have transparent, open access to the data. And so Steve's committed to doing it. He's had members of Congress say, please don't give this
to the government because we don't have that same sort of customer service orientation that you all do in terms of, you know, what we obsess about every day is how do we make this data relevant? How do we make this more accessible? How do we optimize it for search? Like, that's what we're obsessing about every day. Now, you have a background working in politics. And, you know, prior to Politico, I think you worked for some senators. I'm curious whether you're
ever worried that that might lead people to sort of mistrust the data you're providing? Early in my career, I did work on Capitol Hill. And I think the value of that is I understand how challenging it is to work inside a Senate office or a House of Representatives office, in terms of there's a reason that they don't go and get the government information
when they're trying to make a vote. They've got very lean staffs. They don't have people with technical skills. I know how hard it is to get the data, being at USAFacts, right? In some cases, it's in PDFs. In some cases, it's coming to us on CD-ROMs that we have to download. Members of Congress and their staff
do not have the ability to take that data and do the analysis and do the mapping that would be required to be able to inform their decisions. And so I have a lot of empathy for how hard their jobs are. And so when I'm thinking about our customers, whether it's my time on Capitol Hill or my time in media, I'm thinking about, given how quickly they have to work and how lean they're staffed, how do we take this data and make it accessible?
We were just on Capitol Hill two weeks ago, and we are meeting with Democrats and Republicans, people on both sides of the aisle. And we are really committed that this country can only move forward if people from both parties are trusting the data and are using it to move our country forward.
Okay, we are back with Ainsley and Mark, and it's time to wrap up the show with Keeping Tabs. This is where each one of us shares a story, trend, or company that is not OpenAI that we are following right now. Ainsley, let's start with you. What are you keeping tabs on? Yeah, I'm fascinated by this trend, partly influenced, obviously, by the fact that I have two small children and spend a lot of my weekends going to birthday parties. I am fascinated by the sort of
corporatization of childhood fun, you might call it. There are all these places now that, you know, are basically like birthday party factories, and there are whole franchise models that support them. I just got a pitch the other day from this trampoline park called Altitude that's expanding nationally, that I've definitely been to birthday parties at here in Chicago. There's also, you've probably heard of these stores called Camp, that are basically a toy store combined with, like, this whole experiential thing that's sort of like an indoor playground meets experience meets I don't even know what. And I mean, this is just a huge business, but I think it's a really fascinating one too, where it feels like, you know, we all complain about how, like, kids don't know how to be bored and, like, they're on their screens. It's like even when they're not on a screen, we need to somehow package their fun for them. Mm hmm.
And so, I bet these places are actually really fun in many cases, but it also makes me a little bit sad, I think, that we can't just, like, let kids loose in a park or something. I feel like the thing is, if you're a white kid, your parents might send you to the Camp store. If you're a brown kid, we all know we got sent to Kumon on weekends.
I want statistics on how many adult parents have torn ACLs at a trampoline park, because that is a menace and a hazard. I'm a millennial. I'm, like, a mid-generation millennial. When I was a kid, there were some, like, really great but also some really janky, like, kids' places. They were usually not corporatized. There was, like, obviously, like, the Chuck E. Cheese, and then Discovery Zone was a thing. But just outside of Rhode Island, in Massachusetts, where I grew up, there was this place called United Skates of America,
which was like a roller rink arcade laser tag combo type place. And it was disgusting and not regulated or corporatized at all. Everyone had their birthday party there. It was great, but also like real janky.
This sounds like the predecessor of Climb Zone, which is now, like, wall climbing plus laser tag. Yeah. Mark, what are you keeping tabs on? Football. Big weekend for football. Yeah, but specifically the whole Deion Sanders story at Colorado. Um, I'm...
Is that Coach Prime? Coach Prime. That's right. So I've got kind of an angry old man take on this. Classic. Because, you know, I kind of grew up watching college football. But the whole thing with him is that he basically imported a team.
to Colorado through what they now call the transfer portal, which allows teams to quickly get talent. And it's just becoming a lot more like that's their draft, sort of, you know. And there was all this demand for Deion Sanders to come in and make Colorado good and winning, really quickly. But they ended up going four and eight. They lost, like, their last six games. They got beat this weekend. Josh, what are you keeping tabs on? I'm keeping
tabs on... So there was a study that Harris did over the summer around, basically, different generations' thoughts on what sort of, like, money and assets you need to be happy. And, like, as you can tell, most people were like, so funny. Now, this is with all the caveats of any sort of poll: it's a snapshot in time, and it's a sample size of, this was like 2,000 people, broken up across different generations. But, like,
most generations had around, like, the same number. Gen Z was like, oh, you really need, like, $128,000 a year to be happy. Gen X was, like, $130,000. Boomers were $124,000 a year. Millennials?
$525,000 a year, according to the survey, is the annual salary millennials said they would need to be happy, which is maybe an outlier. But I also think there are good reasons behind the sentiment of why millennials are currently like, we need more money. One, our generation bears the brunt of student loan debt.
Mm-hmm.
And that completely burst the bubble, no pun intended, on my thoughts of financial security whatsoever. So me and my wife together just, like, have much bigger, inflated ideas of what actual financial security looks like, because we watched, at a formative time in our lives, the entire economy and all stability come crumbling down around us. And then we're like,
enter the job force and do all these things that your parents did. And it just wasn't a reality. So I get why there is this anomaly in this sort of a study. Well, I didn't think about it that hard, but I remember seeing those numbers right after I bought a really expensive pair of loafers that I'm returning today on Black Friday. And I was like, yeah, I could use way more money. And the other thing I thought about is that no matter what generation I'm in, I am several tens of thousands of dollars away from being happy.
Yeah, no matter what the amount. And there's something really sad about that. Yaz, what's your keeping tabs? So there's a psychic. There's a psychic that... I'm so sorry. We need to stop and just say that that is the best transition we've had on this show ever. Yeah. So there's a psychic. So there's a psychic that the Daily Mail relies on a lot named Baba Vanga, because she apparently predicted 9-11. And she has released...
I know. She has released some predictions this year that make me think that I'm a psychic too, because, honestly, of course. Did she predict Sam Altman? No, but she predicted a host of dark events, including a major economic crisis, biological attacks, and an assassination attempt on Vladimir Putin, along with droughts and floods. Oh, just some classic biblical
things. You just have to pepper that in there. That's so easy now. Climate change. Anyone can predict a drought or a flood. They happen like every day. Just a soupçon of pestilence and famine. Just add that in there. I will say the article is deeply confusing because...
Because it does say... It makes perfect sense to me, Yaz. I don't know what we're talking about. It does say there's a list of things that Baba has predicted before, including the 2004 Boxing Day tsunami, 9-11, as I said, and the COVID pandemic. But it also says Baba Vanga even predicted her own death, correctly foreseeing she would die in 1996 at the age of 85.
And now I'm like, where did they get these new predictions from? There's a lot I don't know, but Baba Vanga, I am thinking about you. And I think when we get laid off and when this magazine folds, that's going to be my backup plan. Oh, God. I don't know what else to add to that. I don't know what to say. You're going to have to have a cool name like that. Baba Vanga? That's a great name. How about Yazabanga? You don't think Yasmin Gagne is a cool name?
Gagne is good enough? Yeah, that's pretty good. I think it needs an O at the end, though. Yasmin Gagno? Yeah, that's pretty good. Yasmin Gango.
I hate this. This is a time for us to hard launch our host joint name, Jazz. Jazz. I personally voted for Yash, but... Jazz. Anyway, that's it for Most Innovative Companies this week. Ainsley and Mark, thank you so much for joining us. Thank you. Thanks. Our show is produced by Avery Miles and Blake Odom. Mix and sound design by Nicholas Torres. And our executive producer is Josh Christensen.