The Biggest Breakthrough Technologies Coming This Year
KQED's Forum, 2025/1/23
Speakers: Allison Arieff, Casey Crownhart, James O'Donnell, Matt Honan, Sam Liang

Transcript
Hi, I'm Bianca Taylor. I'm the host of KQED's daily news podcast, The Latest. Powered by our award-winning newsroom, The Latest keeps you in the know because it updates all day long. It's trusted local news in real time on your schedule. Look for The Latest from KQED wherever you get your podcasts and stay connected to all things Bay Area in 20 minutes or less.

Hey, have you heard of On Air Fest? It's a premier festival for sound and storytelling taking place in Brooklyn from February 19th through 21st. I'm Morgan Sung, host of KQED's new tech and culture show, Close All Taps, and I'll be there at the fest to give a sneak preview of the show, along with an IRL deep dive all about how to sniff out AI.

From KQED.

From KQED in San Francisco, I'm Alexis Madrigal. Despite some recent events, there's progress happening in this world. Every year, MIT Technology Review creates a list of their 10 breakthrough technologies.

The list spans the whole of the science and technology enterprise from AI to climate to biomedicine. So today, the magazine's editors and writers will give us a tour through the near future from longer lasting HIV prevention drugs to small language models and a bunch of robo taxis and climate solutions, too. It's all coming up next after this news.

Welcome to Forum. I'm Alexis Madrigal. There are so many dimensions to technological change. Many of them fall into the realm of what we call artificial intelligence, of course, but there's so much more to the world of science and technology that's been a bit overshadowed. There's continuing progress on climate solutions across the world, as well as remarkable advances in biomedicine and robotics.

On Forum, we often talk about the problems that have resulted from the rollout of new technologies. And we're going to keep doing that, of course. But today, as we go through MIT Technology Review's list of 10 breakthrough technologies for the year, we get to focus mostly on what might go right in these realms. We're joined first by Matt Honan. He's the editor-in-chief of MIT Technology Review. Welcome, Matt.

Hey, Alexis. Thanks for having me. Yeah. We're also joined by Allison Arieff, Editorial Director of Print at MIT Technology Review. Welcome. Thank you. So, Matt, I did say we're going to talk about the other stuff for the rest of the hour. But first, I do want to ask, we're talking about a tech industry where the captains of that industry were sitting in a line behind Donald Trump at his inauguration.

Does the increasing political entanglement of the tech industry change what you're doing or have to do at Tech Review? That's a good question. I don't think that it immediately changes what we have to do. I mean, I think it changes a little bit how we cover some of the news. We're not a fast twitch news cycle publication, but...

I think that part of what you're seeing is a lot of people lining up to generate a lot of hype for their companies to get in close to the administration, but also to generate a lot of hype. And one of the things that I think we really try and do is cut through hype and BS and try and be a voice of reason.

Having said that, I will admit I'm pretty surprised by some of the folks on stage. Mark Zuckerberg, whatever. And Elon Musk, I think, has gone through a true political journey. For example, Sundar Pichai, in 2015, he wrote a blog post. I believe the title was...

With something like we won't let fear change our values. And, you know, it's pretty remarkable that, you know, you go from writing that blog post to nine years later being on, you know, sitting up there and sort of paying homage to the new administration. Well, we're going to talk about that stuff for the next four years. So for the rest of this hour, let's talk about one of the breakthrough technologies that for you really stands out.

So I wrote the story on generative search, generative AI search. So if you go to Google now and you type in a query, a lot of the time, I think probably now most of the time for me at least, you'll get an answer that uses a language model to basically go through search results.

collect a lot of the information from them, and then put it together for you in a summary. And those summaries, ChatGPT is doing the same thing. There's another startup called Perplexity that's going even further in terms of it uses multiple language models and is pulling from all kinds of sources.

And these are really, I feel like, a complete evolution of search. It's very different. I think it's as different as when Google launched and had page rank technology that instead of just looking at keywords, looked at links. I think it's fundamentally different. I think it's going to fundamentally change the way that we do information retrieval. For good or for ill.
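
The mechanism described here, retrieving web results and having a language model condense them into a cited summary, is commonly called retrieval-augmented generation. A minimal sketch of that pattern follows, assuming a hypothetical web_search() helper and the OpenAI Python client; the model name and prompt are illustrative, not how Google, Perplexity, or ChatGPT actually implement their products.

```python
# Minimal retrieval-augmented "generative search" sketch, not any product's
# real pipeline. web_search() is a hypothetical helper; the model name is
# an illustrative placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def web_search(query: str) -> list[dict]:
    """Hypothetical search backend returning [{'title', 'url', 'snippet'}, ...]."""
    raise NotImplementedError("plug in a real search API here")


def generative_search(query: str) -> str:
    results = web_search(query)[:5]  # keep the top few results
    # Pack snippets into a numbered context block the model can cite from.
    context = "\n\n".join(
        f"[{i + 1}] {r['title']} ({r['url']})\n{r['snippet']}"
        for i, r in enumerate(results)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of model
        messages=[
            {
                "role": "system",
                "content": "Answer the question using only the sources below. "
                           "Cite sources by their [number].",
            },
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content


# Example use: generative_search("why does quartz form hexagonal crystals?")
```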

That's a good question. I mean, I think mostly for good. This is one of the issues where I'm maybe a little bit more of an optimist about it than some folks are. I don't necessarily always hold that view, but I tend to like it. I tend now to go and ask Google queries where...

By asking a question, or especially a longer question, you can make it give you one of these generated answers, or it's more likely to get one. And I think that it's very valuable in trying to get an answer to a complex query where you don't have to, like, load it up with keywords, which often then just gets you garbage results anyway.

I mean, in my view, Google search and Bing and just about, you know, whatever, DuckDuckGo were already pretty hard to navigate. You know, I mean, you have like maybe a couple of good results, or maybe not even, and then after that you've got to really dig around. Well, you had Wikipedia. You had whatever else was there. No, I mean, to your point, you know, someone pointed this out to me, like,

If you try searching minerals like quartz or something, right, and you look at the AI summary versus what you would get if you just, you know, click down the first 10 links, the first 10 links are all going to be non-scientific answers to what quartz is or does, whereas the summary is actually more likely to get that right. Let's set aside, you know, some of the accuracy issues around particular queries.

And just the idea that AI shouldn't be doing that. Last thing on that question is the effect on the media. I think it's the guys over at The Verge who started talking about Google Zero, where essentially no Google traffic is coming in. Many media organizations have survived on that traffic, and if it gets summarized by an AI instead of people going to their page, do you see that as being...

worrisome, even just for your own? Oh, it's incredibly worrisome. And to be clear, I don't think that this is just a purely beneficial technology, right? It's going to have serious downsides. The accuracy thing, I don't think you can undersell that, right? It's a big issue. But the zero-click searches, which basically have been going up for years,

I talked to the company SparkToro that coined that term for my story. And this guy Rand Fishkin had just a quote that's really stuck with me, which was, if your business is dependent on search for traffic, you are both in long and short-term trouble. And it's true. It's not just going to be the media. It's going to be all kinds of – although it's certainly going to affect the media. It's going to affect all kinds of businesses that do depend on that traffic.

And I think, like, something that I've been looking at is, I mean, you know, I've spent the past, like, I don't know, I mean, it's almost embarrassing to say, but like 20-something years staring at various, you know, analytics dashboards of, you know, varying sophistication.

And there used to be, like, if you looked at one of those dashboards, say, five, six, you know, certainly seven years ago, and you see the slices of the pie where the traffic comes from, like, a third of the pie would have been Facebook, right? And today, like, that slice of pie, like, it's not even a snack, right? It's not going to sustain you. And I think Google is going to go in the same direction, or search traffic is going to go in the same direction. Wow.

More for another day. More for another day. Alison, let's talk about the construction of the list itself. What are the criteria that you're looking at to determine, okay, this thing on, that thing not on? Sure. From the beginning, I think we look at this as not this kind of slim window into tech anymore.

Google, Meta, whatever. But what else is going on? So it's a really, really broad idea of what might constitute technology. So I think we try and begin by casting as wide as possible. Like, what are some really amazing technological solutions that we've seen

happen. I'll give you an example of a few of them to give you a sense of kind of... The scope of it? Yeah, we just like toss stuff out and this is all of our beat reporters and editors and everything. Actually, our whole staff jumps in and sometimes, you know...

Someone from marketing might have a great idea, whatever. But those would include things like biotech houseplants, which are these plants last year that kind of glowed in the dark. You know, more fun than anything, but, you know, an interesting change. EVTOLs, which are...

Electric vertical takeoff and landing vehicles. Flying cars. Flying cars, basically. AI companions, which of course we're seeing a lot of. Non-addictive painkillers. Not something you might assume is technology, but it's a medical breakthrough that would rise to the occasion on our list. Wearable ultrasound machines, IVs.

hyper-realistic deep fakes. So we go through like this massive list and we decide, is this just like a cool thing that happened? Or did something happen in '24 that represented a real shift

in what the thing was able to do. You know, we had autonomous vehicles on the list last year until the last minute, and then we took them off because there were still too many problems, you know, lots of crashes and weird things happening. Whereas this year they made it, which I know we'll talk about later because...

They were deployed to a level that seemed like it was a real thing. And it does, Matt, I'll bring you back in here. I mean, it does seem like that you're trying to hit these technologies at the point at which sort of like regular people are using or encountering most of these things rather than, you know, the first point in the time of a technology when it becomes sort of visible to those maybe on the cutting edge.

Yeah, I think that's right. I mean, it's not always that regular people will encounter them, but that we think it's poised to hit a degree of scale, right? Like sometimes it might be, like, for example, we had Climeworks, which is doing carbon removal, on the list a few years ago. We read a story about that a few years ago. And, you know, most people

aren't going to encounter that, but because they had a large facility set up, we felt like it was at a tipping point. And also, to point something out, we sometimes, although not usually, put things on the list that we don't necessarily endorse. So two years ago we had cheap military drones on the list, because it was pretty clear from the war in Ukraine that those had taken off.

I don't mean that as an endorsement, you know, but those were becoming a much bigger deal. And, you know, most people aren't going to encounter those, hopefully, but they're something that seemed like it was changing the world. And I think, you know, we really look for things that

not just have the potential to change the world, but that we feel are on the cusp. Are already on that trajectory. Yeah. That makes a ton of sense to me. Sometimes the science and technology studies people out there, the scholars, are like, ah, technology in use. Stuff that's, like, actually, you know, in deployment as opposed to just, you know, the frontier research. Yeah.

We're talking about the MIT Technology Review's list of 10 breakthrough technologies for 2025 with Editor-in-Chief Matt Honan and Editorial Director of Print Allison Arieff. Of course, we also want to hear from you. We know we have a lot of technologists in the listening audience, possibly stuck in traffic today, which is terrible out there. How has a technology breakthrough made a difference in your life, in your company? What kinds of

tech breakthroughs might you like to see in the coming year? You can give us a call. The number is 866-733-6786. That's 866-733-6786. The email is forum at kqed.org. And you can find us on Blue Sky or Instagram, we're KQED Forum. Of course, you can always join the Discord community. I'm Alexis Madrigal. Stay tuned for more right after the break.

Hey, have you heard of On Air Fest? It's a premier festival for sound and storytelling taking place in Brooklyn from February 19th through 21st. I'm Morgan Sung, host of KQED's new tech and culture show, Close All Taps, and I'll be there at the fest to give a sneak preview of the show, along with an

IRL deep dive all about how to sniff out AI. You'll also hear from podcast icons like Radiolab's Jad Abumrad, Anna Sale from Death, Sex, and Money, and over 200 more storytellers. So come level up your own craft or connect with other audio creatives. Grab your tickets now at onairfest.com.

Welcome back to Forum. I'm Alexis Madrigal. We're talking about and through the MIT Technology Review's list of 10 breakthrough technologies for 2025. We've got Editor-in-Chief Matt Honan, Editorial Director of Print Allison Arieff. Going to add a couple of other voices into the conversation as well. Want to bring Casey Crownhart, Climate Reporter with MIT Technology Review in. Welcome, Casey. Thanks so much.

And also James O'Donnell, artificial intelligence reporter at MIT Technology Review. Welcome, James. Thank you. And of course, I want to invite listeners into the conversation as well. The breakthrough technology they think should be on the list. Numbers 866-733-6786 or forum at kqed.org.

James, let's talk a little bit of AI really quickly here. One of the breakthroughs that you anticipate involving AI is what you're calling small language models. Large language models are things that people have talked about, LLMs. So what is the difference between the small and the large?

Yeah, it's a good point. So for a while, you know, for years, large language models were getting better by basically training on more and more data, right? So instead of training on one book, you could train on a catalog of books. And then instead of training on a catalog of books, you could train on the entire internet, right?

And that has taken us through the last few years with companies like OpenAI and Google and others who have built really, really massive models. And the idea is that by training on enough text data and then even spreading out to image and video data, by training on enough of that, you can get a model that's generally pretty good at writing things, at understanding things. It's not perfect and it's not necessarily creative, but it's pretty good at working with text

The downside is doing that is really, really expensive. It's also a little bit legally precarious because these companies have been sued for training on data that is copyrighted and the companies may say, well, it's publicly available and that has to still sort of work through the courts of how that's going to shake out. But in the meantime...

You know, there's an effort to say, well, maybe you don't need a model that knows everything that's ever been posted on the Internet. Maybe we can build a model that's specifically tailored for a certain task. So you see examples like this in all sorts of industries. I just got an email the other day from a law firm, for example, that's trying to build a legal AI model that knows case law, essentially, and can interpret things.

And so for that, you might be less interested in training on every, you know, word ever published in any language ever. Correct. Yeah. So you can sort of narrow the scope of that. So that's what we mean by small language models. And would that also address some people's concerns? It probably should be everyone's concerns about the energy intensiveness of some of these AI applications.
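
In practice, "small language model" often means a compact open-weights model that is prompted or fine-tuned for one narrow job, rather than a frontier model trained on the whole internet. Here is a hedged sketch using the Hugging Face transformers library, with microsoft/phi-2 purely as an example of a small model; the legal-summary prompt is illustrative and is not any firm's actual system.

```python
# Sketch: running a small, narrowly prompted language model locally.
# microsoft/phi-2 (~2.7B parameters) is just one example of a "small" model;
# the legal-summary task is illustrative, not any firm's real product.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

prompt = (
    "You are a narrow legal assistant. Summarize the holding of the case "
    "below in two sentences.\n\nCASE TEXT:\n[paste case text here]\n\nSUMMARY:"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=120, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

A production system would usually go further and fine-tune such a model on domain text (case law, support tickets, and so on), but even prompting a small model this way captures the narrowing of scope described above.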

Yeah, so you could think of the energy intensiveness in AI as sort of falling into two different camps, right? Like there's training the model, which is what we were just talking about, where you feed a bunch of stuff into it. And that can take anywhere from hours to weeks to months for servers to sort of build that model. That's really, really intensive.

And then there's what you call inference, which is where you actually use the model. So smaller ones are helping out a little bit with both, particularly if you have to train AI models on fewer pieces of data. They are going to require less energy to do that, but maybe we can talk about this later. There's also the question of how much transparency these companies have of whether or not we actually know how much energy they're using to build these. Yeah.
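
A rough way to see why smaller models and less training data cut energy use is the common back-of-envelope heuristic that training costs about 6 x parameters x training tokens in floating-point operations, while generating each token at inference costs about 2 x parameters. The model sizes below are hypothetical, chosen only to illustrate the ratio.

```python
# Back-of-envelope compute comparison. Heuristics: training ~ 6*N*D FLOPs,
# inference ~ 2*N FLOPs per generated token. All numbers are hypothetical.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens


def infer_flops_per_token(params: float) -> float:
    return 2 * params


models = {
    "large (hypothetical)": dict(params=70e9, tokens=2e12),
    "small (hypothetical)": dict(params=3e9, tokens=300e9),
}
for name, m in models.items():
    print(f"{name}: train ~{train_flops(m['params'], m['tokens']):.1e} FLOPs, "
          f"inference ~{infer_flops_per_token(m['params']):.1e} FLOPs/token")
# Under these toy numbers the large model needs ~155x the training compute
# and ~23x the compute per generated token.
```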

Last thing before we move to some climate stuff. One of the things that has gotten people that I know in this world very excited and also maybe not just excited, but a little scared is this Chinese model DeepSeek, which appears to have been trained for a tiny, tiny fraction of the cost of the big American models. Do you want to talk a little bit about that? Does that fit under that rubric of small language models?

Yeah, you know, I'll be honest, I haven't covered that model in particular. And I know we have a China reporter, Si Wei, who's been working on that and is maybe publishing some work on it this week. But I do think it falls under that general theme we're seeing in the industry, which is you've got to do more with less, essentially. You've got to figure out how do you build models that don't require the entire Internet and don't require all of this copyrighted data and don't require a bunch of money to

spend on servers to train these models. And there's definitely an international competitiveness there that's going on with models built in the US versus models built in China. That's playing out in a big way in this administration. Cool.

Casey Crownheart, climate reporter. You know, I think most people perhaps thinking about climate breakthroughs might be expecting a new kind of solar cell or a way of controlling wind farms or the grid or something. Are those the kind of focuses for you in this list or are there other things?

Oh, that's a great question. Yeah, pretty much no. The climate items, we have featured a ton of energy technologies in the past. This year, though, we have a couple of different picks. So we were kind of focused on tech in transportation, with cleaner jet fuel, and heavy industry. We put greener steel on the list.

And then perhaps my personal favorite, we put cattle burping supplements on the list as well. Tell me more about those. I'm aware of them as a problem. Large methane generators. There's a lot of cows in the world. Methane, very potent greenhouse gas. How does this help them? Is it like probiotics? They're just like out there, like taking stuff from an Instagram influencer? Yeah.

Basically, I mean, it's sort of, you know, kind of in that ballpark. Yeah. So like you said, livestock is a huge part of global greenhouse gas emissions, depending on the estimate, somewhere between 10 and 20 percent of all climate pollution comes from livestock.

Cows are a huge chunk of that. They tend to burp up a lot of methane. They have kind of a funky digestive process; they have some little microbes in their digestive tract that produce methane. And so there's a huge range of different kinds of drugs that you can give to cows to help them

not do that quite so much. You can add this to their feed or to their water or give it to them in sort of like a capsule to basically tweak the microbes in their guts, which is really interesting. And what kind of cuts are we talking about in the amount of methane they produce?

It depends on the drug. So right now, there's one that's approved by the FDA in the US. And the company says that it can cut methane emissions by about 30% in dairy cattle and more in beef cattle. But we're also seeing a lot of startups kind of working to bring their own products to the market. One is called Ruminate, and they say that they can get their emissions cuts even higher. And it kind of depends, again, on whether it's dairy cattle or beef. Cool.
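
To put those figures in rough perspective, here is a back-of-envelope calculation. Only the 10 to 20 percent livestock share and the roughly 30 percent methane cut come from the conversation; the cattle-burp share and the adoption rate are loudly hypothetical placeholders.

```python
# Back-of-envelope: potential global impact of a methane-cutting feed additive.
# livestock_share and methane_cut come from figures cited in the conversation;
# cattle_burp_share and adoption are HYPOTHETICAL assumptions for illustration.
livestock_share = 0.15    # midpoint of the 10-20% of climate pollution cited
cattle_burp_share = 0.5   # HYPOTHETICAL: share of livestock emissions from cattle burps
adoption = 0.3            # HYPOTHETICAL: fraction of cattle given the additive
methane_cut = 0.30        # ~30% reduction cited for the FDA-approved product

global_cut = livestock_share * cattle_burp_share * adoption * methane_cut
print(f"~{global_cut:.1%} of global climate pollution")  # ~0.7% under these assumptions
```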

We're going to come back to cleaner jet fuel and greener steel, but I want to bring Allison back in to talk a little bit about the biomedicine slash healthcare breakthroughs. One that's on this list is these sort of longer-term preventative HIV drugs. So if you are someone who's familiar with sort of PrEP, a thing that you can take, where does it fit in that kind of health space? Well, this is...

It's a drug that was in a trial in June of last year called, I'm probably pronouncing this incorrectly, sorry, lenacapavir. And unlike PrEP, which I believe is taken every day, this is injected once every six months. And in the trial, it protected over 5,000 girls and women in Uganda and South Africa from getting HIV and was 100% effective, which is amazing.

A very large breakthrough. Yeah. The other one, you know, I was interested in is, you know, since Matt and I were baby technology reporters, you know, we've heard about stem cells, right? And I think people have been very excited about them. They're able to, you know, stem cells are able to turn into all the other types of cells and tissues in your body, to a rough approximation. But the way you all describe it in the list is that stem cell therapies, quote, that actually work.

Do you want to say anything more about those? Yes. There was some great progress, again, last June, which was clearly a big month for healthcare stuff, when there was a stem cell study for type 1 diabetes, formerly called juvenile diabetes, where a person's body attacks the beta islet cells in the pancreas.

In this trial, which was carried out by Vertex Pharmaceuticals, some patients who got transfusions of the lab-made beta cells have been able to stop taking insulin. And instead, their new cells make it when it's needed. So no more seizures, no more insulin injections. And these are, of course, the words that anyone with this condition always wanted to hear. So again, I'm excited.

Yeah. Doesn't maybe rise to the level of a conventional technological breakthrough, but absolutely is in our book. Matt, another one of those kind of more unusual... Well, yeah, I mean, people think about this as science and technology, I think. It's a powerful new telescope in Chile. You want to tell us more about that?

Sure. I mean, the Vera Rubin Observatory, one of our writers described this as being able to peer into the fourth dimension, which I think is pretty cool. It's going to have the largest digital camera ever built. And basically what this is built to do is to study space-time. And so it's going to

give astronomers basically this time-lapse view of the sky. And it's going to give us, you know, the ability to study things like dark matter. It's really going to, for the first time, help us create, you know, a super detailed three-dimensional map of the Milky Way. You know, I kind of, like, one of the things that I love about this type of thing is that

You know, as human beings, we've been peering at the sky for many thousands of years since, you know, before we had civilization where, you know, we're lying on our backs looking up at the sky. And,

This is one of those technologies that, yes, it's going to dramatically advance our knowledge, but it's also just, like, awe-inspiring. It's such a giant, new, big project, and it's going to help us, I think, tap into that same sense of awe and wonder that we've had with the sky ever since we were first people. It's also interesting, too, at least for me, I've been on quite a trajectory of space stuff where I...

I used to find it really interesting. Then I kind of lost interest during a lot of the, well, I guess we just don't have a space program in the U.S. anymore. But it feels like astronomy and these kind of sky surveys and these things are maybe, like we're not actually leaving the solar system, right? So now we have to kind of be okay with like, well, we're just going to learn more about the sky via these amazing observatories.

Yeah. I mean, you know, when I was a kid, I remember watching the shuttle launches and thinking that by the time I was my age, like, I would have been to space. Right. You know, like, that just seems inevitable. We're going to have a colony on the moon. And my expectations have been tempered, but that doesn't mean that we can't still do these really cool things, you know, from building space stations to building these incredible telescopes to help us learn more about what's out there. Yeah.

Let's bring in caller Harry in Lafayette. Welcome. Hello. Hey, Harry. Go ahead. Yeah, my technology revolution is a very simple one. It's speech recognition. It came on board, I would say, about 30 years ago. And we've got, you know, it's everywhere now. But I was fired for not being able to type fast enough as a doctor, psychiatrist. Wow.

Harry, appreciate that. You know, it's interesting in my mind. James, I'm going to bring you in on this one.

I have actually... Basic speech recognition, it's true. I remember, like, was it Dragon? I think that was the software company that made early speech recognition technology. You know, I kind of expected it to get better a lot faster. Now I'm still surprised that Siri makes mistakes in transcription. And, like, quite howlers of mistakes. James, is there anything that you're monitoring in your world where you feel like, okay, we're going to have essentially...

at least as good as human speech recognition with fewer of those kinds of mistakes. Yeah, I think Siri was definitely, as you said, a lot of people's exposure to this, you know, speech recognition technology. But honestly, it stayed sort of the same quality for a long time. And you see

that changing now, especially as Apple has announced that it's working with OpenAI on various technologies. And I think if you try some of the speech recognition that's in ChatGPT, so people might think of that as just text, but it also has a voice mode and a video mode even, I think you'd be pretty blown away by how real-time and accurate the conversations can be, where it can respond to you almost immediately. You can interrupt it, and it will adjust its course based on

what you're saying. So I think the frontier now forward is making that even faster, but also making it adapt to different dialects, accents, language, things like that, making it more universal. But I think it's come a long way. It's just not necessarily in everybody's products yet. Yeah, such a good... Alexis, can I... Yeah.

Can I jump in? Like, I'm curious if you've used Otter. Like, Otter is a transcription software. I use it now for recording interviews and stuff like that. It's like one of those products that's amazed me. And yeah, you have to go in and clean it up. Like, you for sure have to go in and clean it up. But, you know, I think of that, and I think back to when I started...

you know, in this industry, and I would record interviews, and then I would use a tape recorder with, like, foot pedals to slow down... Did you use foot pedals? Oh man, yeah, yeah, yeah, yeah. I'm sure you weren't using a loom, like... Yeah, no, I know the thing you're talking about. I know the thing you're talking about. Yeah.

Wow. Yeah, no, no, it's true. I guess what is interesting for me on things like Otter or Trint or, like, these sorts of competing services is that you still do have to do some cleanup, you know? Like you would think that we would have gotten beyond that, especially given just the explosion of language-using technologies that we now have, you know?

Yeah, point taken. And, like, I also still find Otter, not to pick on it, it's just the one I use, pretty bad with accents as well. Yep, totally. I will also say it's very aggressive, because if you use it to record something, it then begins to record everything. Oh, yeah. Yes. Yeah. Got it. Got it. Yeah. Which maybe you don't want. Yeah.

Steve over on the Discord writes in to say,

The only specific that I know of being close at hand is new battery chemistry to reduce problematic use of lithium. Casey, do you want to take that one? I guess I'm always, whenever somebody mentions battery technologies, I'm reminded of one of these science and technology scholars, people who called it the better battery bugaboo. Like everyone's always like, no, the batteries are almost here. The best battery is almost here. Do you see it like that? Or do you see that there is a like kind of looming demand

breakthrough in battery chemistry? That's a great question. I love batteries. I think that...

Number one, I think we've seen that there have been a lot of ideas in battery technology for a long time, a lot of research. What's been really difficult is that lithium-ion batteries have gotten better very quickly. So it's been a moving target. You know, they're 90% cheaper than they were a decade ago. That's great for making EVs cheaper. It's made it very difficult for new technologies to break in to the market and to have new options.

I think that the next few years, you know, if companies are able to scale these technologies, I think that we'll see maybe two different kind of areas of potential battery breakthroughs. I think we'll see, you know, better high performance batteries. Solid state is always getting a little bit closer, just always kind of tricky with manufacturing and scale up.

And then on the flip side, kind of really cheap batteries using abundant materials. You know, we're seeing, for example, sodium ion batteries are starting to really come online and China in particular is starting to really run away and make those. Does that mean I might eventually have like a different AA? Yes, absolutely. I think battery chemistry is always changing. And I think it's just a matter of kind of what catches on and what companies start to really make it scale.

I'm also interested, I mean, are lithium ion batteries going to be with us essentially for a very long time? Well, it's always hard to predict the future. But like I said, I think now it's the industry standard. There's billions and billions of dollars of factories now that are set up to make lithium ion batteries. So, you know,

Seeing all of that, you know, go to waste, I don't really foresee happening. There are some plants that could be retooled to make something else. But I think because of all of the investment that's gone on into lithium ion, I do think that we'll see that around for a very long time. Some path dependence here.

We're talking about the MIT Technology Review's list of 10 breakthrough technologies for 2025 with Casey Crownhart, climate reporter with MIT Tech Review, James O'Donnell, AI reporter there, Allison Arieff, editorial director of print and editor-in-chief, Matt Honan. We want to hear from you about technologies you're interested in, 866-733-6786 or forum at kqed.org. We'll be back with more right after the break.

Welcome back to Forum. Alexis Madrigal here. We're talking about MIT Technology Review's list of 10 breakthrough technologies for 2025. You can check them out, techreview.com.

We've got a bunch of folks from the publication here. Matt Honan, Editor-in-Chief. Allison Arieff, Editorial Director of Print. Casey Crownhart, reporting on climate. And AI reporter James O'Donnell. James, I'm going to send this one to you first. The magazine also published a list of the eight biggest tech flops of 2024, which included something known as

AI slop. I know what it is, but please explain for those who have not heard this excellent term. Yes. AI slop is the term that many people have attributed to or given to all the stuff on the internet that's probably or almost definitely made by AI, but nobody really wants

to see or nobody really gets anything out of. So, you know, in the beginning of last year, there were a lot of, like, AI-generated images; shrimp Jesus is the specific one. So Jesus Christ in the sort of image of a shrimp. Feel free to Google that, everyone. Yeah, exactly. So other things like that sort of spread through the Internet. And so,

The reason that's sort of bizarre to follow is because AI models are making it just cheaper and cheaper to create massive amounts of content. But there's not a whole lot of like taste or curation or artistic creativity in a lot of that stuff. So it just spreads over the internet. Images are one thing. And I think this year we're going to see the equivalent of videos for that. So as generating videos with AI becomes cheaper and more accessible, that will be added to the slop pile as well. Yeah.

Matt, do you want to mention any of the other tech flops? Um, actually I kind of want to come back to AI slop for one second. The thing that's so remarkable to me is, like, when you think about all these photos of shrimp Jesus, we're boiling the ocean to make that stuff, right? AI is a pretty energy-intensive technology, and it's just like, oh my God, I can't believe this is what we're using it for. Um,

Yeah, I don't know. Point taken. Good point, Matt. So I would go back to one of the other things, which is we had it as a flop, and I think it's...

I think it's super interesting, which is this list that Antonio puts together every year. That's Antonio Regalado, who is one of our senior editors. He had what's called Woke Gemini on there. And when Google rolled out Gemini, it had taken a bunch of steps to sort of make sure

that it wasn't, you know, perpetuating all kinds of biases. Like basically if you did something, if you search CEO, you don't get all white men. All white guys, exactly. Yeah, yeah, yeah. Precisely. And it went a little overboard. So, you know, if you asked it to generate images of German soldiers in World War II, it would generate people of like multiple races. It was...

you know, it was sort of refusing to basically be historically accurate in service of trying to combat the biases. So what had really happened is it had gone way too far in the other direction.

And to me, yeah, it's a fail, but it also highlights what a tightrope a lot of these companies are having to walk. Towards the end of the year, when we were looking at it, to your point, Alexis, we asked it to generate an image of pharma companies or biotech CEOs, and it generated an image of 12 probably 60-something-year-old white guys.

Then it turned out it was actually just reflecting the real world. Right, yeah. This stuff is tricky, and then you can go to Grok, and it'll do whatever you want on X. Yeah, yeah. I'm not going to defend AI slop, and then I'm going to go back to the phones. But I will say this: I think, in particular, the mistakes that large language models make are actually more interesting to me than when they get it right,

Because it kind of reveals the innards of this, like, insane system that we've really built here to, like, generate the kind of text that people want. And we sort of have built this little smiley face on it. Like, oh, it's just a nice little chat bot. But actually, it's like a compression of the entire Internet. Right. So there's a lot of weird stuff in there. And the slop

is when you get to peer into that, in the language model sort of part of it. And I can't help but find that, like, totally fascinating. I'm glad that boiling the ocean interests you. Let's bring in Sam Liang, who actually is the Otter founder, in Mountain View. Welcome.

Thank you. Thank you. I just learned that you were discussing speech recognition, and I think you mentioned Otter. I just wanted to call in and see if you have any questions I can help with. Yeah. I guess when do you see – here would be my question. So obviously there's some kind of accuracy that's approaching like –

100%, but it's not quite at 100%, right? How long and how much work has to go in to get from, say, 98% to get to 100? What do you actually need to do to get there? Yeah, it's really hard to say a precise number. We're definitely continuously improving. The problem with accuracy is that if everyone speaks as clearly as you guys, the accuracy can be high, but

the issue is that a lot of people don't speak perfect English. You know, just like myself, I speak with accent. There are people who are, you know, coming from all over the world. I mean, they don't speak perfect English. That's one issue. The other issue is that...

The common English words can be recognized pretty accurately, but, you know, when you're in a meeting, there are a lot of people's names that are hard to recognize, and also acronyms and jargon. There are all kinds of, you know, new companies being created all the time, and they don't have common names. How do you recognize them?

So those are challenges. And also when you're in a noisy environment, sometimes when you're doing a Zoom, you hear background noise in that room. There could be a fan, there could be something generating noise, or some rooms have different reverberation, different acoustic conditions.

All those are challenges. So the accuracy will continue to increase. And also with the help of the large language model, even if sometimes we don't hear a word clearly, based on the context we can guess what that word is. Which is what human brains do anyway, right? I mean, that is actually one of the things that's so interesting about how humans process language. A lot of the time, we don't actually hear everything exactly correctly,

but we kind of know what to expect in that place, in whatever someone's saying, and so we're able to kind of fill it in. Absolutely. The context plays a big role, but most of the speech recognition systems you are using today are actually not using that. That's actually a system we have built into Otter. For example, if you're in a meeting and there are 10 people invited,

On that meeting, we actually use that as the context and say, hey, when someone mentions a name, we'll prioritize those names in the guest list. Right. That makes so much sense. And if we know you are, for example, a doctor, we'll prioritize medical terms. Yeah.
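
The context trick described here, boosting the words you expect to hear, is often implemented as biased rescoring of a recognizer's candidate transcripts. Below is a toy sketch, not Otter's actual pipeline: it simply re-ranks n-best hypotheses by adding a bonus for expected terms such as meeting guests or domain vocabulary.

```python
# Toy contextual rescoring of speech recognition output. Not any product's
# real pipeline: it re-ranks n-best hypotheses, adding a bonus for expected
# terms (guest names, domain jargon) on top of the acoustic score.
def rescore(hypotheses, expected_terms, bonus=2.0):
    """hypotheses: list of (text, score) pairs; higher score is better."""
    expected = {t.lower() for t in expected_terms}

    def total(hyp):
        text, score = hyp
        hits = sum(1 for w in text.lower().split() if w.strip(".,") in expected)
        return score + bonus * hits

    return max(hypotheses, key=total)


nbest = [
    ("let's loop in doctor hairy on the ct results", -11.2),
    ("let's loop in doctor harry on the ct results", -11.9),
]
guest_list = ["Harry", "Priya", "Sam"]
print(rescore(nbest, guest_list))  # the guest-list bonus picks the "harry" hypothesis
```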

Sam, thank you so much. The additional context, so interesting. Thanks so much for calling in. You know, when I was listening to, and I guess James, this comes to you, but you can send it somewhere else if you want. As I was listening to Sam talk about

the little difficult things, how they can get most of it right, it was really reminding me of listening to people who work on autonomous vehicles talk about edge cases, as they call them, right? Like the unusual circumstances that occur, you know, once every million miles that these cars drive, right? That once every million miles, there's someone walking down the street carrying a broom and a chicken, and they've got to figure out, what does the Waymo or what does the, you know, Zoox do with that?

Robotaxis are on the list. Why now, and how do you think this will continue to play out?

Yeah, I think the why now is just that more and more people are getting exposed to these. The companies that operate them are getting contracts and getting approval to take them to more cities. So I think more people than ever have taken one of these. Unfortunately, that doesn't include me. I live in Boston and we don't have them yet and I haven't been able to do them on my travels to San Francisco yet.

I will say, to your point about the edge cases, I think the way that these companies tend to handle that is by building those edge cases in simulation, right? So, like, you're not going to find a lot of training data about a runaway shopping cart racing down the street and crashing into your car, but you can build that in simulation models. And so they've been steadily doing that, exposing these models to more and more scenarios and proving that they can be safe. And again, they're not

perfect and they make mistakes in ways that are somewhat different than human drivers but they are getting spread to more cities and I think that's why this year we felt like putting them on the list. Yeah.
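
The simulation approach mentioned here is, at its simplest, procedural generation of rare scenarios that real driving logs barely contain. The sketch below is a toy illustration only, not how Waymo or Zoox actually build their simulators: it randomizes the parameters of a runaway-obstacle scene so a planner can be tested against thousands of variants.

```python
# Toy edge-case scenario generator for driving simulation. Purely illustrative;
# real AV simulators model far more (sensors, physics, other agents, maps).
import random
from dataclasses import dataclass


@dataclass
class Scenario:
    obstacle: str         # what unexpectedly enters the road
    speed_mps: float      # obstacle speed, meters per second
    heading_deg: float    # obstacle heading relative to the road
    distance_m: float     # initial distance from the ego vehicle
    friction: float       # road surface friction coefficient


def sample_scenario(rng: random.Random) -> Scenario:
    return Scenario(
        obstacle=rng.choice(["runaway shopping cart", "pedestrian with a broom", "loose dog"]),
        speed_mps=rng.uniform(0.5, 4.0),
        heading_deg=rng.uniform(0.0, 360.0),
        distance_m=rng.uniform(5.0, 60.0),
        friction=rng.uniform(0.3, 1.0),  # wet to dry pavement
    )


rng = random.Random(42)
batch = [sample_scenario(rng) for _ in range(1000)]  # feed these to the planner under test
print(batch[0])
```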

You know, Alison, I wanted to come to you on this one too. Listener Barbara writes, "As a San Francisco resident who lives near Union Square, the driverless Waymo Google-owned taxis are frightening to many of us pedestrians. Not only did they take away many taxi driver jobs, but have actually made living downtown and maneuvering the streets more frightening on top of the electric scooters, electric skateboards, and other wheel-like devices." Before you were at Technology Review, you were also at Spur, which people know works on urban design issues.

Did you have any qualms about putting robo taxis on the list or did it feel like, no, we need to recognize that we're at this inflection moment with this particular technology in our cities?

I didn't have qualms about putting them on the list, but I do have qualms about them in general, as anyone who has followed my work over the years would understand. I think that the focus of AVs continues to be on the experience of the person inside the car with, frankly, a blatant disregard for the person outside of the car.

I even see papers and simulations about how amazing it would be, how efficient cars could be if they didn't have to stop at all within cities, that they would go faster. And you can quite easily find these simulations online. And there's this...

kind of attitude of treating pedestrians as something that's in the way because it would just be so much easier. Like these cars could run so much more efficiently if they didn't have to worry about pedestrians. I was crossing the street downtown the other day in a crosswalk and the Waymo cut out and told me to back up. Wait, how did it tell you that? It said back up, please. It said back up, please.

Wow, I haven't had that happen. And I told it to back up. You said, no, you. And it did. So that should not, I mean. I mean, Alison, wait, hold on, hold on. Did you say, did you or did you not say, I'm walking here? Did you say it like that? That's what you should have said. I just want you to know. I guess my point is that,

A call out to all the people working on this technology. I know that's here and that's inevitable. But please, let's remember that our urban experience is very much about the experience of walking down the street and looking at things and safety for pedestrians. And there's all this focus on use cases and everything, but I don't think there's all that much understanding of how important

Us just walking down the street, looking at things and passing other people, is very much part of the urban fabric that AVs need to pay more attention to. We'll move away from this topic in one second. I have to say, I see them every single day walking across the Mission into the station. And I think they've gotten a little bit better at the nonverbal communication, like the edging into things. I feel like I can read Waymos better now than when they first

came out. But, you know, maybe not. I know they've never talked to me; they've never chastised me for my pedestrian behavior. Um,

Casey, I want to bring you back in on green steel. It's kind of an interesting one. I mean, I think of people working on cement a lot in climate. And I think of people working on, you know, like as Holly writes in to say, alternatives to fossil fuel based plastics. But steel, what's going on there? Yeah. So we use steel constantly.

all over the place. Buildings require steel; so do cars. And most people probably don't think about it all that much. But making steel accounts for about 8% of global greenhouse gas emissions. So is that just the heat? Is that what it is? You just, like, need to get very high temperatures, which means you burn a lot of fossil fuels.

Yep, high temperatures, but also the chemical reactions required actually kick off carbon dioxide as well. Because when you have iron ore and you turn that into steel, you're basically pulling oxygen off, and you usually do that with a fossil fuel. So carbon plus oxygen makes CO2.
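
In simplified overall terms, the two routes being contrasted look like this: conventional reduction of iron ore with carbon monoxide emits CO2, while the hydrogen route's main byproduct is water. These are textbook net reactions, not the exact process chemistry of any one plant.

```latex
% Conventional carbon-based reduction of iron ore (emits CO2):
\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \;\longrightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
% Hydrogen-based direct reduction (byproduct is water):
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
```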

And so we're seeing companies, a lot of different approaches to try and basically make steel in a different way that either reduces emissions or basically cuts out greenhouse gas emissions altogether. Oh, wow. And are they succeeding in that? How much can you really cut it down?

Yeah, it depends. One of the leaders in this space is a company called Stegra, and they're using a process that will take hydrogen that is made with renewable electricity and use that hydrogen to do those chemical reactions.

And they say that their process can cut over 90% of the emissions needed to make steel. It's sort of a claim at this point. The company is currently in the process of finishing up their first kind of demonstration-scale facility, which should be opening in the next year or so. They should start production in 2026 to make a couple million metric tons of steel. So we'll see if that holds up. But it could be, yeah, a really big, significant cut.

A couple listener comments on technologies that they're interested in. Josh writes, and it's a really good point, "One thing I've not seen in the media is the rise of engineers using LLMs for non-language tasks as a plain English probabilistic programming language. Why write complicated code to say orchestrate the interaction of multiple systems when you can ask a small model not to write that code but to actually perform the task?"

For people skeptical of the usefulness of yet another chatbot, there are huge uses for LLMs behind the scenes that consumers or workers will never see. Reorganizing data with an LLM can be amazing. You've got a big mixed list of people's names and email addresses, but they're not formatted so that you could actually send it. You can just dump it in there and say, give me a thing that I can stick into Gmail, and it pops out. That part of it sometimes, as Matt says, is worth boiling the ocean.
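
That "dump a messy list in, get clean structure out" workflow is easy to sketch. Below is a hedged example assuming the OpenAI Python client; the model name is illustrative, and a production version would validate the output rather than trust it blindly.

```python
# Sketch: using an LLM to normalize a messy contact list into CSV.
# The model name is an illustrative placeholder; validate the output
# before actually mailing anything.
from openai import OpenAI

client = OpenAI()

messy = """Jane Doe jane.doe@example.com
doe, john - JOHN@EXAMPLE.COM
Priya (priya.k@example.org), goes by P.K."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[
        {
            "role": "system",
            "content": "Reformat the contact list as CSV with the header name,email. "
                       "Lowercase the email addresses. Output only the CSV.",
        },
        {"role": "user", "content": messy},
    ],
)
print(response.choices[0].message.content)
# Expected shape (exact output may vary):
# name,email
# Jane Doe,jane.doe@example.com
# John Doe,john@example.com
# Priya K.,priya.k@example.org
```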

Matt, this one's also coming to you. A tool on Discord writes, I'm excited about brain-computer interfaces enabling people with various kinds of disabilities to perform a variety of natural interactions, including motion control, controlling objects in the household, operating robotic wheelchairs, and more. Also, BCIs combined with robotic exoskeletons could be used for, say, carrying heavier objects in warehouses, etc. What do you think? Anything to note on brain-computer interfaces? Yeah.

Yeah, I mean, I think they're super exciting as well. We actually talked about that this year. I think we even talked about it last year as well, and didn't feel they were quite to the point where we were comfortable putting them on the list yet. But there's so much happening in that space. I mean, probably the best known is Neuralink, which is an Elon Musk company, but there are many others.

And I think they're absolutely correct. I mean, the ability for that technology to transform people's lives is just astounding. And then, of course, there are the potential downsides as well, with not just the health risks of it, but the question of what it will mean to have an always-on internet connection in your brain

is out there, but I think that's kind of dwarfed by the potential upsides. And also this morning, we're thinking about what could go right as opposed to what could go wrong. Last comment from listener Catherine who writes in to say, I can't think of one technology that hasn't been advanced by government-funded research. Vaccines, the internet, space exploration. These days, we're not just cutting government programs. We're demonizing science and the people who do publicly funded work.

I'm concerned that all we'd be left with is technology that lines the pockets of corporations, not the advances that serve humanity. Thanks for that, Catherine. Also, people may want to check out what's happening over at the NIH. Check in with your local bioscientist right now. There's a lot of stuff going on there in the news. We have been talking about the MIT Technology Review's list of 10 breakthrough technologies for 2025 with Allison Arieff, Editorial Director of Print. Thanks for joining us. Thank you.

Matt Honan, Editor-in-Chief. Thanks, Matt. Thank you. Casey Crownhart, Climate Reporter and Battery Lover at MIT Technology Review. Thank you, Casey. Thanks so much. And James O'Donnell, Artificial Intelligence Reporter at MIT Tech Review. Thanks, James. Thank you. I'm Alexis Madrigal. Thanks, everyone, for letting me go back to my tech reporter days. Stay tuned for another hour of Forum ahead with Mina Kim.

Funds for the production of Forum are provided by the John S. and James L. Knight Foundation, the Generosity Foundation, and the Corporation for Public Broadcasting.

Hey, have you heard of On Air Fest? It's a premier festival for sound and storytelling taking place in Brooklyn from February 19th through 21st. I'm Morgan Sung, host of KQED's new tech and culture show, Close All Taps, and I'll be there at the fest to give a sneak preview of the show, along with an IRL deep dive all about how to sniff out AI.

You'll also hear from podcast icons like Radiolab's Jad Abumrad, Anna Sale from Death, Sex, and Money, and over 200 more storytellers. So come level up your own craft or connect with other audio creatives. Grab your tickets now at onairfest.com.