Hey there, and thanks for listening. We want to know more about our audience. So stick around at the end of the episode to hear about how you can provide feedback and potentially walk away with a $75 gift card.
One of the so-called godfathers of AI wants it to be less human. From American Public Media, this is Marketplace Tech. I'm Nova Safo.
We'll get to that AI story in this week's Tech Bytes Week in Review. Other developments this week that got our attention, one from Florida, where a federal judge temporarily blocked a law that would ban kids under 14 from getting social media accounts. And another from Meta, which announced an energy deal with one of the country's biggest operators of nuclear reactors.
And that's where we begin with our guest this week, Jewel Burke-Solomon, managing partner at the venture capital firm Collab Capital. I asked her why Meta would bother signing a deal to keep open a struggling nuclear power plant in Illinois.
Meta has been doing a lot in the AI space, and they need more energy to be able to continue in this AI race. And so, like some of the other major tech companies, Meta has seen an opportunity with nuclear power and has signed this deal with Constellation Energy to keep open a nuclear power plant that was potentially going to close. With this 20-year contract, it will remain open, which will hopefully save some jobs and also help Meta counterbalance all that's going on with its development of AI with clean energy.
Now, AI is extraordinarily power-hungry, right? I mean, we have not just Meta but all the big players in the AI space scrounging around for power.

That's right. It's necessary for these companies to continue to develop. They have to have the power sources to fuel what they're developing.

And why are AI companies so interested in nuclear power as opposed to, say, solar or wind or anything else?
Well, nuclear power provides carbon-free, reliable power in ways that some of the other energy sources may not. And also, it's available. There are these power plants and energy companies around the country that have a problem as well: in many cases, they were going to run out of their existing tax subsidies and were looking for a lifeline, if you will. So there's what looks like a pretty even exchange in terms of what these nuclear power companies are getting, as well as what these major tech companies are getting, by doing deals with them.
Is there another solution? Because at some point we're going to run out of nuclear power plants to keep open. What else is big tech looking at to try to solve its power problem?
Well, I mean, obviously we see these data centers going up everywhere. I live in Atlanta, and in the South there's a big story about where these centers are going up and how they're going to impact the communities they're being built in. Of course, I think folks are looking at solar as a potential option, water power, all of these energy sources. But, you know, the hot story this week has been around how nuclear can kind of solve the problem.
And I thought it was interesting that Google made a partnership with this company that's trying to put together smaller, modular nuclear reactors that perhaps can get up and running faster than the typical 11-year timeframe for a traditional nuclear power plant. So lots of different efforts to kind of find creative solutions to this problem, right?

Absolutely. Yep. And speed is the name of the game. Yeah.
I think we're going to see a lot more creative, quick, efficient solutions coming online here in the coming months and years. We'll be right back. If you're struggling to keep up with all the latest innovations in tech and what they'll mean for your life, TED Tech has you covered. Get ahead of the curve with digestible downloads on some of the biggest ideas in technology, from AI and virtual reality to clean tech. Find TED Tech wherever you get your podcasts.
You're listening to Marketplace Tech. I'm Nova Safo. We're back with Jewel Burke-Solomon, managing partner at Collab Capital.
Florida enacted a social media ban last year for kids under 14; they're not supposed to be able to open accounts. Now, that might sound good to some people who are trying to protect their kids from some of the ills of social media. But a federal judge this week blocked the state from enforcing its ban. The law is being challenged on First Amendment grounds. Can you help us understand the tech industry's stance as to why it would oppose this ban?
Yeah, I mean, the tech industry really is saying that parents should be the ones to make these decisions, and that it shouldn't be up to broad, wide-sweeping bans. They're saying that these bans, as written, are too broad; they're not specifying the specific platforms they're talking about, and, they argue, they may be infringing on freedom of speech. And so there are a lot of these tech industry groups really fighting to make sure these laws don't hold and that they're blocked at every turn.

It's interesting that this fight is happening in Florida, and here is the tech industry opposing something you'd think there would be some common ground on, in terms of personal liberties, et cetera.
Yeah, this story is pretty interesting in terms of who's on what side of it. You know, I'm reading it as a parent and thinking: it's shocking to me that there is so much of a fight around something I think we can all agree on, that young people having access to social media is having a negative impact on their mental health and on learning outcomes. It was surprising to me, reading this story, that there is so much energy around blocking laws that would, you know, hopefully help curb some of the social media addiction we're seeing in young people.

You know, it's interesting. You're in Atlanta, and there's another case there that's very similar, this one about requiring age verification for kids under 16. A judge may rule in that case in the days and weeks ahead. One of the arguments the state of Georgia is making is that this is its effort to get age verification, at least for kids under 16; it's sort of like when minors have to be checked to make sure they're not being served alcohol in bars. And yet again, we have the tech industry fighting back against this rule on similar grounds: freedom of speech and overly burdensome regulation. So, for you in Atlanta, how do you see this rule and the tech industry's opposition to it?
Yeah, I mean, it sounds like from what I'm reading here locally that it will have a similar fate to what has happened in Florida. Again, the trend seems to be that these laws are not holding; they're being blocked. And there are a lot of groups really fighting against them, including NetChoice and some of the other industry associations. So we'll continue to track the story, but it looks like it's going to be up to parents to really control what their kids are doing online, and they're not going to have a ton of support from these laws enforcing it.

All right. Let's switch to our final story here. This also has to do a little bit with the give and take of tech, its benefits versus its costs.
Now, one of the men known as the godfathers of AI, Yoshua Bengio, is starting a nonprofit lab to train artificial intelligence in a different way than other companies have been training it. The idea is to make AI less like humans, not more. That would be a complete paradigm shift from what AI companies are doing now, and he warns that they need to stop what they're doing; he says it could literally endanger human survival itself. Which, you know, isn't too scary. Here's a clip of Bengio at a TED Talk a couple of months ago:

Recent studies in the last few months show that these most advanced AIs have tendencies for deception, cheating and, maybe the worst, self-preservation behavior. If they really wanted to make sure we would never shut them down, they would have an incentive to get rid of us. And it might be just a few years away, or a decade away.
Now, you work with a lot of startups and there are lots of newcomers in the space looking at different ways of using AI, developing AI. Is there a safer way to develop this technology? Are there people working on that?
Certainly. Yeah, I mean, every pitch we get these days has AI in it; everyone starting a startup today is thinking about how they can leverage AI. And there are certainly safer ways to go about that. I think it's all about being mindful from the start in terms of how folks are developing models and integrating AI and agentic capabilities into their solutions. So it's actually really encouraging to hear one of the AI godfathers talking about what we should be concerned with, the realities of what's really happening.

You know, the mission of LawZero, his organization, is really about prioritizing safety over commercialization. And unfortunately, we haven't seen that prioritization of safety from some of the larger companies. So I think he's onto something, and hopefully the industry heeds the warnings he's sharing with us, given his deep knowledge, and hopefully it helps everyone be a bit more mindful in terms of how they're developing these companies.

And the smaller players, the startups you work with, do you think they're taking his warning seriously?

Absolutely.
Some are. I think those that are mission-driven, depending on what their mission is, and also thinking about the outcomes of what they're building, are considering some of these warnings, and this idea of developing agents that are more of a scientific AI, really on the side of observation and explanation versus mimicking human behavior. That's something some companies are considering, not all. I think a lot of companies certainly want to be in this race and are finding the fastest path to AGI. But
I'm hopeful, and certainly for our part, as we're evaluating companies, we are looking into what their frameworks are for ensuring safety and privacy, and making sure that the data they're using to train their models is owned and, you know, that they're doing all the right checks to ensure a safe outcome for both the users and their partners and stakeholders as well. So that's something we're evaluating. I can't say the same for all venture capitalists, but I do think people are certainly considering these issues more deeply than before.
That was Jewel Burke-Solomon of Collab Capital. You can find the full video of this episode of Marketplace Tech Bytes Week in Review on our YouTube channel, Marketplace APM. And subscribe, if you haven't already, to watch us every Friday. Jesús Alvarado produced this episode. Daniel Shin also produces our show. Gary O'Keefe is our engineer. Daisy Palacios is the supervising producer. Nancy Farghalli is the executive producer. I'm Nova Safo, and that's Marketplace Tech.
This is APM. Real quick before you go: we'd love it if you'd please complete a short, anonymous survey by going to marketplace.org/survey. It would only take about 10 minutes, and as a token of our appreciation, you can enter your name to win a $75 gift card once you've completed the survey. You'd do all of us at Marketplace a huge favor by filling it out.