This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life. One of the most powerful AI tools available to literally everyone and for free has been updated in a couple of really big ways.
I'm talking about Notebook LM, none other than the winner of our 2024 AI Tool of the Year. So this model or this feature, I guess, from Google is extremely powerful. And I think it's worth revisiting mainly because of two big updates.
That is, Google has updated kind of the Gemini model that now runs Notebook LM and is making the extremely popular and sometimes viral audio overviews multilingual. So we're going to go over those updates and a little more today on Everyday AI.
What's going on, y'all? My name is Jordan Wilson, and welcome to Everyday AI. This is your daily live stream podcast and free daily newsletter, helping us all not just learn AI, but how we can actually leverage it and use it to grow our companies and our careers. So if that sounds like kind of like, yo, that's kind of what I'm trying to do.
you're in the right place. Maybe you listen all the time, maybe it's your first time. If it's your first time, we do this literally every day, Monday through Friday at least, live at 7:30 a.m. Central Standard Time. So shout out to our live stream audience.
One thing I like to say: this is kind of the realest thing in artificial intelligence. A lot of the things that you maybe see or read, or if you watch certain tutorials on YouTube, a lot of it's a little prefabricated, right? It's very polished, very edited. So on today's show, I'm going to be doing something live.
I needed to use Notebook LM anyways for a project that I'm working on here in Boston, so I said, what better way to explain some of these new updates than to just walk you through my process as I do it? All right, so thank you for tuning in. And if you haven't already, please go to youreverydayai.com. There, we're going to give you the daily news for today. We're just going to put it on there, and I don't want this episode to run
too long, but also you can get the highlights from today's podcast slash live stream. So if you're hearing something, and maybe you're out on the elliptical or doing three things in your house, just listening to me in the background, and you're like, wait, what did Jordan say about that new feature? Well, it's all going to be in our newsletter. So make sure you just go check it out at youreverydayai.com.
All right. It's funny. I was actually talking with someone today. So I am in Boston here in case you're watching the live stream and podcast audience. This is one of those. You might want to check out your show notes and just watch the video for this one. It is a little more visual, but I'm going to be walking through this. But anyways, I was having a conversation with someone today here in Boston. Like, hey, what's your show going to be tomorrow? And I'm like,
I have no clue. It's always fun, yet sometimes frightening. But I kind of handed the reins over to our newsletter audience and said, what do you all want to hear more of? There were a lot of new things announced this past week that I thought could make a great episode. So I asked you all in our newsletter. So that's another reason you should be signing up and reading it.
But I said, hey, you know, OpenAI and ChatGPT released their new shopping feature. Claude had a bunch of new, very powerful enterprise integrations like Zapier. And then I said, Notebook LM has some really cool updates that we haven't covered yet. And I said, what do you want to hear? And you all decided Notebook LM. So nothing like me not having a clue what I'm going to be working on hours each day and handing it all over to you. But, you know, kind of my joke is I work for you all, except it's for free. So, all right.
All right, enough chitchat. Let's dive in and talk about what's new. Actually, I'm going to go out of order, if you don't mind. Can we just get a little wild right now? Maybe. All right. So what I'm actually going to do right now
is, and hopefully y'all can see my screen here, I'm going to start doing a little bit of deep research. All right. So let me first, before we even go into what's new, because I don't want this to go too long, I'm going to be doing some deep research in the background. And like I told you all, I'm here at IBM Think, IBM's conference. So I'm very excited to be partnering with IBM for this. And
Let me say this. I obviously follow everything that all the big tech companies do, but I'm a small business, so I'm not using IBM's products and services on an ongoing basis. Right. We consult with some clients who are, and I've had a lot of great IBM guests on this show throughout the years, but I always need to do my research. Right. So this is something, you know, whether it's for a conference,
Podcast episodes, I use Notebook LM all the time. And where I normally start is by doing multiple deep researches first. So I'm gonna be running these in the background, but I want to show you all, this is live, this is the work.
I would have been doing anyways. So I said, hey, I'm glad that you all voted to see Notebook LM because I needed to do this work anyways for the conference tomorrow. I'm excited. Or sorry, the keynote is today. So I'm excited for this and I needed to be doing this anyways.
So what I'm going to do, I have a very simple prompt. I'm going to throw this into multiple deep research products right away. Nothing crazy here. So I'm jumping around. I'm using Perplexity's Deep Research. I'm using Google Gemini's Deep Research. I'm using ChatGPT's. Let me do that.
I think that's all of them. So I know for Google Gemini's Deep Research, I have to click start research. For ChatGPT's version, which is really good,
oh, but I didn't do it correctly. For their version of deep research, I'm going to have to answer some questions. So I'll at least walk you through this. Then we're going to talk about what's new inside Notebook LM. And then we're going to come back and use it and show you these new features. Because I could just show you these bullet points, but you might as well see, hopefully, some of the benefits in action. So first,
ChatGPT is the only one that asked me questions. So essentially, what I said in this prompt is: please give me a month-by-month breakdown of IBM's watsonx and watsonx.ai updates, starting in January 2024 and ending in May 2025.
Please research slowly, go step by step, all that good stuff that I normally do. So I have to answer these questions from ChatGPT. It's asking: only official product updates and feature announcements from IBM, or also third party? So I'm just going to say both. Normally I'd go through a process, but I'm doing this live. So
I'm just going to go quickly. Question two says: should I include watsonx.governance and watsonx.data updates, or only watsonx.ai? So I'm just going to say all. And then three: do you want the updates to include technical details, model changes, API improvements, or only high-level summaries? For this, I'm going to say mainly summaries, but some technical details. So let's do that. All right. Perfect.
Now that we did that and we have our deep researchers researching, let's talk about what is new in Notebook LM. So I told you the two things. Number one is we have the new Gemini 2.5 Flash model, a thinking and reasoning model, now powering Notebook LM. And then we also have
50-plus new languages that the audio overviews can work in. All right, so let's first go over the audio overview updates. So now there are, like I said, more than 50 supported languages, letting users hear AI-generated document summaries in many tongues besides just English, which was previously the only option.
And this is a pretty big technical step considering Google's user base, right? They have users all over the world. So it's really, I think, broadening who can actually use this tool, right? Because I think a lot of people...
were initially drawn to Notebook LM, right? It's been out for a long time, but I think people really didn't start paying too much attention to it, which is sad because even before the AI audio overviews, which are a fantastic feature, by the way, even before that, it was a revolutionary AI product. But I think a lot of people didn't start paying attention to Notebook LM until the audio overviews, which are these kind of AI deep dive podcasts where two AI hosts have conversations about
just the documents you upload. So, you know, many people from all over the world are like, hey, what about my language? So Notebook LM and the Google team have been rolling out a lot of great quality-of-life updates, but they said that this was one of the biggest ones, as well as iOS and Android apps, which I believe are both rolling out. Those aren't out yet, but there is a signup for them. But the audio overviews in 50 new languages, that's out now. So
it's very easy to pick your preferred language, and there are so many options. You know, some popular, widely spoken languages across the globe, like Spanish, Mandarin, Hindi, German, and a lot more. Also, it's important to know that Notebook LM is still experimental, but now it's going to appeal to a lot more people. So
You know, I have been following the conversation along on Twitter and on Google's blog. So, you know, Google says, yeah, there are bugs, we're getting this worked out, and they've had a lot more time to work it out in the English language. But I think this move right now puts Google ahead of many of their rivals who haven't offered such wide
language support, not even just with the audio summaries, but just in general, right? When we talk about the future of large language models being multilingual, a lot of the big players aren't supporting 50 languages right now.
So Google is also signaling that multilingual AI isn't just nice to have, it's kind of essential. So pretty exciting updates there. And when we go in and do this live, I will show you how to select a different output language, and we will test it as well. I haven't tested this yet.
So we're going to be doing it live. Sometimes I like doing these things live and, you know, I get to figure out or sorry, find out and learn alongside you all at the same time. So, yeah, none of this has been edited or scripted or anything like that. All right. Next.
And how busy has Google been that this wasn't even in a blog post? Their other big update was updating the actual model running Notebook LM, which is a really big deal. The tweet from Notebook LM said: it's been a busy week for us. So busy that we forgot to mention that Notebook LM is officially powered by Gemini 2.5 Flash.
The 2.5 models are thinking models, so you should start to see more comprehensive answers, particularly to complex multi-step reasoning questions. And this is huge.
Okay. This is huge. And let's start with why. Well, if you don't follow large language model updates day to day like myself, maybe you're more of a casual listener to this podcast: this is big. The gap between the quote-unquote old-school transformer models and the quote-unquote new-school reasoning or thinking models is wide. These newer models think,
reason, and plan ahead. They use this chain-of-thought reasoning. Normally, a quote-unquote experienced prompt engineer could still squeeze that kind of juice out of a large language model, but you had to be extremely experienced. You had to know what you were doing and really put in the time to get the most out of large language models. But these thinking models are much different, right? They plan ahead. They think.
They reason. You know, it's fascinating reading the raw chain of thought or the summarized chain of thought to see how these models are thinking. It's really interesting, sometimes scary, because you'll see a model on its own start to go down path A,
and then realize path A might have a dead end, and oh, I actually need a fork, and I need to create a path B, a path C, and I might step back a couple of steps. So you can learn a lot if you're a dork like me and read the chain of thought or summarized chain of thought. It helps you write better prompts. It helps you use these models better. But it's pretty big now that Notebook LM is powered by a thinking model in Gemini 2.5 Flash.
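That fork-and-backtrack behavior is essentially classic backtracking search. Here's a toy sketch, with an invented graph, where path A dead-ends and the search steps back and succeeds down path B. It's only an analogy for what the chain of thought shows on screen, not anything from a model's actual internals.

```python
# Toy depth-first search with backtracking: go down path A, hit a
# dead end, step back, then try path B.

def find_path(graph: dict, node: str, goal: str, path=None):
    """Return a node path from `node` to `goal`, or None if none exists."""
    path = (path or []) + [node]
    if node == goal:
        return path
    for nxt in graph.get(node, []):
        if nxt not in path:                      # avoid revisiting nodes
            found = find_path(graph, nxt, goal, path)
            if found:                            # this branch paid off
                return found
    return None                                  # dead end: backtrack

graph = {
    "start": ["A", "B"],
    "A": ["A1"],           # path A dead-ends at A1
    "B": ["B1"],
    "B1": ["goal"],
}

print(find_path(graph, "start", "goal"))  # ['start', 'B', 'B1', 'goal']
```

The search tries A first, discovers A1 leads nowhere, unwinds, and only then commits to B, which is exactly the "step back a couple of steps" pattern you see in a reasoning trace.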
And don't let that Flash moniker fool you, right? Because, you know, I would say when the Flash series first came out, people really thought of it as, oh, this is Google's cheap and fast model. And yes, it is. But Gemini 2.5 Flash, if you look at different benchmarks, in some benchmarks it is a top-five
model. The quote-unquote Flash, the one that's supposed to be, oh, this is the small and cheap model, right? If you're using it on the back end, on the API, it is extremely powerful. I would say it is one of the more impressive models in the world, just because of, number one,
how fast it is. And if you are using it on the back end as a developer, it's extremely affordable in terms of price per performance. If you are using it inside Notebook LM, or inside Google Gemini, or inside AI Studio, you're not paying for the actual usage, right? But previously, Notebook LM was running on Gemini 2.0 Flash, which was not a thinking model. So now we get answers that show much more nuance. And hopefully in this example,
we'll have something that can maybe flex or show its thinking capabilities. I mean, we'll see. We're doing this all live.
So those are two of the things that are new, and I'm going to be demoing a couple of the other advancements, not as new. I think the audio overviews in new languages came out just over a week ago, and Gemini 2.5 Flash, which again, they didn't even put out a blog post about, because it came out
on a Friday afternoon, right? Google doesn't stop shipping anymore. It's like, I'm looking out of my hotel room
here at the harbor. And you know, Google, it's like a shipyard. I'm looking at all these ships and I'm like, that's Google. They haven't stopped shipping, I don't think, since December, even on the weekends. I mean, they're squashing bugs, adding new updates. It's pretty impressive. So let's jump in. Let's do this live. This is always fun. What could go wrong, doing this live on this
absolutely terrible hotel Wi-Fi? Nothing could go wrong, right? Okay. So as a quick reminder, here's essentially what I told these different models. So I said:
Please give me a month-by-month breakdown of IBM's watsonx and watsonx.ai updates, starting in January 2024 and ending with May 2025. All right, so what I'm going to do is just copy and paste all of this information into Notebook LM. So first, I am here in
Perplexity. I probably just should have scrolled down to the bottom and clicked the copy button; that would have been a little better, right? All right. So I'm going to copy this information, and I'm going to go into Notebook LM. So I am on Notebook LM Plus. Notebook LM is free to use, but if you want better limits and better data protection, then you should probably be on Notebook LM Plus.
So I'm just going to, actually, let me just give a 30-second primer on how this works and why I think it's extremely special. It's a grounded model. So what that means is it uses the Gemini 2.5 Flash model, but it is only going to work on
the information that you enter. So think of all the different ways that you can use Notebook LM, right? You could put in all of your meeting notes, long email threads if you're working on a project, right? You can do all of these things in ChatGPT, and in Gemini, and in Claude's Projects, right? There are so many different ways to do this, but there's a downside, right?
There's a con to that as well. You know, great pros. You can do this in a lot of different fashions or there's a lot of different ways to pet the cat. I'm not going to say skin the cat. I don't, I don't like that saying I like cats. I'm never going to say there's different ways to skin a cat. There's different ways to pet a cat, right? You can pet a cat with your, your elbow, your hand, you know, if the cat, you know, rubs up on you, that's a different way to pet the cat. So you could, you, you could do this in a variety of ways. But,
But let me just, let's just jump straight in. All right. So first I'm going to paste all my information. All right.
This will probably make a little bit more sense when I do this live. All right. So for our podcast audience, all I did is go to notebooklm.google.com. Like I said, I have an account, but it's grounded. So what that means now is, I pasted in the results from Perplexity's deep research, and now this model is grounded. So the quick primer is, I can now go into
this Notebook LM, where I only have information about watsonx, and I can say, you know,
what's Chicago known for, right? I hit enter. The response I get back is going to take a second, because it is using a thinking model. And it doesn't give an answer, right? It says: based on the sources provided and our conversation history, there is no information about what the city of Chicago is known for. So as a comparison, I can go into Gemini and use 2.5 Flash. Actually, I can't in here. Oh yeah.
Oh, yeah, there we go. And say, what is Chicago known for? As an example, it's obviously going to give me an answer on what Chicago is known for. Right. So there we go. And you can see the thinking inside. If you are in Gemini or Google's AI studio, you can see the thinking. Unfortunately, you can't see the thinking there.
in Notebook LM, even though you're using the same model. So if you do want to see like, oh, what's the difference? You might want to go into Google Gemini, but you'll see here when I'm using Google Gemini, the same model, it's giving me a response and saying, here's what Chicago is known for because it's still using its own internal knowledge base. It's still accessing the internet when it needs to. So that's the big difference with using Notebook LM. It is grounded only in the information that you put in.
All right, so now that we got that out of the way, and you can probably then see and understand why it might be extremely impressive to use a model that can think.
A model that can reason only with your data. That is huge, y'all. Yes, obviously, we have a lot of thinking models, right? A lot of great thinking models that we can use. But you can't necessarily control where they think, at least not easily, without a lot of iteration and some basic-to-advanced prompt engineering skills.
Right. I mean, you can say, hey, only use the files in this project, only the information in this project, right? You can try to control its thinking, but very often it will go outside of those bounds anyway. It might use its own internal data. It might go out and use information from the web. So there are many instances where you
only want a model to use the information that you've given it and absolutely nothing else, which is why I am personally extremely excited for this.
All right, so I'm clearing out this chat. I'm going to go ahead and label the source here inside Notebook LM; it's good practice. So I'm just going to say Perplexity Deep Research and save that. I'm going to jump over. Here's Grok; I'm going to scroll to, I think it's at the bottom there, to copy. There we go. I'm going to go add a source, paste in text.
Click insert. If you are brand new, there are different ways that you can add sources inside Notebook LM. You can connect directly to your Google Drive, obviously Google Slides, different links to websites, YouTube videos, or just copied text. And I am on
Notebook LM Plus, which is part of the Google One AI Premium plan. You get access to this, so it's not a separate subscription. That's another good thing to know. So as an example, if you already have access to Gemini Advanced in your organization, then you have access to Notebook LM Plus, and you can have 300 sources, which is a ton of information. I'm going to go ahead. Oh, I already pasted the second one. I'm going to go up and label that. I'm going to label it Grok Deep Research.
There we go. I'm going to go into Google Gemini now and their Deep Research on 2.5. So good. Their new version is extremely impressive. I will say, early on, OpenAI was winning the deep research game. Now I'm not so sure. All right. So we're going to go in, we're going to paste text here. All right. And then I'm going to label that here in a second, once it's done, as Gemini Deep Research.
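That copy, paste, and label loop, one prompt fanned out to several deep research tools with each result saved as a named source, can be sketched roughly like this. The backend functions below are stand-ins for the manual copy-paste (or whatever API a given product exposes); their names and outputs are invented for illustration.

```python
# Sketch of the fan-out workflow: send one research prompt to several
# "deep research" backends and collect labeled results for pasting
# into Notebook LM as separately named sources.

PROMPT = (
    "Please give me a month-by-month breakdown of IBM's watsonx and "
    "watsonx.ai updates, starting in January 2024 and ending in May 2025."
)

# Stand-in backends; real use would be manual copy-paste or an API call.
def perplexity_research(prompt: str) -> str:
    return f"[Perplexity report for: {prompt[:40]}...]"

def gemini_research(prompt: str) -> str:
    return f"[Gemini 2.5 Pro report for: {prompt[:40]}...]"

def chatgpt_research(prompt: str) -> str:
    return f"[ChatGPT report for: {prompt[:40]}...]"

# Label each result by backend, mirroring the source names in the notebook.
backends = {
    "Perplexity Deep Research": perplexity_research,
    "Gemini Deep Research": gemini_research,
    "ChatGPT Deep Research": chatgpt_research,
}

results = {label: run(PROMPT) for label, run in backends.items()}
for label, report in results.items():
    print(label, "->", report)
```

Labeling each source by its origin matters later: when the grounded chat cites a passage, you can tell at a glance which research tool produced it.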
Okay, I'm going to save that. And then we're going to see if our last one is done yet. It's not quite done: OpenAI's Deep Research. It usually had been the best, until Google updated their Deep Research to 2.5 Pro.
Not the Flash version. Google Gemini's Deep Research uses 2.5 Pro, which is the big brother of 2.5 Flash, which is what Notebook LM now uses. So ChatGPT's version of deep research is still going. But let's go ahead while we wait, and I'm going to show some of the other new features. So, aside from audio overviews now having 50 languages,
There's more. And actually, for the sake of timing and doing things in order, right? We got to get our PEMDAS correct. I always joke about that with my wife. Like, right. There's so many things to do. I'm like, all right, what's the PEMDAS on this one? What's our order of operations?
All right, so order of operations, actually, because it might take a minute, we actually need to look at the languages and the outputs. So right now, it's very easy to use these different languages. It's actually as easy as clicking. So I'm going to go to settings and I'm going to go to output language.
And then you are going to get something that says configure settings, and then there are all of these different new options. I mean, there are so many here. So, as an example, I wish I was actually bilingual. I'm not; it's embarrassing to say. So I'm scrolling through here, trying to find Spanish. I know Spanish
is one of... there we go. Español. All right. I'm going to go Español, Latin America, and click save there. All right. So FYI, I haven't done this yet. I hope it works. If not, I'll reach out to the Google Gemini team, but I'm sure they're already on it. So I'm going to go ahead and also click customize. All right. So this deep dive conversation, the audio overview on the right-hand side, I'm sure many of you have heard it. If not, essentially there's a male and a female,
two AI-generated podcast hosts, you know, they banter around a little bit, but they essentially have a conversation about just the documents you upload. So, very useful. I'm actually just going to click generate, nothing else. You can customize the instructions, but in this case, I'm not going to, mainly because I'm probably not going to be able to understand 90% of it, since it's going to be in Spanish and I am not fluent in Spanish.
All right. But as we wait for that, we can also talk about a couple of the other updates as well, you know, new-ish ones.
So two other ones. So like I said, we have the new Gemini 2.5 Flash, which we're going to show off as we ask it. Hopefully a tough question here in a minute. We have the audio overview now available in 50 plus languages, and we're letting that run now. A couple other new ones that I don't think we've talked about on the podcast, at least. Maybe we did a YouTube tutorial on some of these, but one is Mind Maps, which I really like.
So essentially, when you're using Notebook LM, there are three different panes, right? On the left-hand side, you have your sources, and you can add a source. Then you have a chat pane.
And then you have a Studio pane on the right-hand side, which is essentially where you have your audio overview, and where you can create different notes, different preset notes, or notes manually. So Notebook LM works a little bit differently from some of the other large language models or AI chatbots that you're used to working with. But in the middle
pane, you know, you can also click overview there, and here's where the mind map is. All right. And that's one of these new features. A lot of people struggle to find it, because, especially if you're not zoomed in, or if you're too zoomed in, right, like on my screen right here, you can't really see the mind maps.
And once you start chatting, because you can obviously chat with all of your documents and sources, just like you would inside Google Gemini or ChatGPT. But then that kind of mind map piece disappears. It's really just in the summary. So I'm going to go ahead and click mind map. And then you'll see on the right hand side, it says generating mind map.
And I'm not actually sure if it's going to generate the mind map second. And we might have to completely wait for the audio overview to go. I've actually never tested that before, trying to generate both of them at the same time. Usually the mind map just takes a couple of seconds to generate, but maybe it just put it in queue. So, oh no, it didn't. Okay, there we go. So the mind map is now done. So we can at least look at this.
So what you will notice here, when I switched the output language, let me just double-check here. Okay, it didn't. I didn't know if it was going to change the actual language of the text updates. It did not. All right, so let's just walk through it. And the reason I said that is because
the name of the mind map is in Spanish now. So I'm like, oh, is the content of the mind map going to be in Spanish? And it's not; it's in English. So it looks like even when you change the output language, it does not impact the mind map. But here's what's
amazing, right? And I'm going to be studying up on this for the IBM work that I'm going to be doing this week. So it automatically started breaking this down into four categories, right? And if you've ever used an interactive mind map, they're very cool. I love them, especially if you're a visual learner.
Honestly, Notebook LM has so many use cases. I think so many people should be using it: dumping all your meeting transcripts in there, long email threads, you know, all your files, your Google Docs, whatever. But, you know, another great use is when you're trying to learn a new topic. And I think, both with the audio overviews and with the mind map, I don't know any better tool to learn something new than Notebook LM. So, you know, now
you know, it gave it a title. It said IBM watsonx and watsonx.ai updates, January 2024 to May 2025. And then it broke it down into four major categories. It says: platform and ecosystem updates, watsonx.ai updates, watsonx.governance updates, and Watson component updates. So I personally follow the governance category.
Actually, I probably follow both of these, but I'm kind of curious, because I haven't followed the watsonx.ai updates as closely as some of the others. So I can break that down. And now it pops out: foundation models and lifecycle, feature and capability updates, AutoAI and RAG updates, and pricing adjustments.
So I actually want to learn more about the AutoAI and RAG updates on the watsonx.ai platform. And then I click it again. So if I zoom out here, right, now we're already four tiers deep in my interactive mind map, which is really cool. And I see, at least for two of these sub-points, there's even more. So it looks like there were some updates here in April 2025.
So I can click that. So when you click on an actual element, what it does is it also sends it back into the chat. So essentially, if you just want to know more about something, you can click kind of the middle of that little element and it's going to break it out into the chat interface, which is what it's doing right now. But I can also see that there's, you know, some other elements.
As I bring the mind map back up, I mean, for our live stream audience, this is pretty cool if you're a visual learner, right? And you can expand all of these, right? So for our live stream audience, I'm going to zoom out here, and you'll see just how impressive this actually is. I'm not going to go through and read all of these, but
I mean, y'all, this is so zoomed out. This looks like, you know, in all of those crime shows, when the crazy person who can't sleep, I feel like it's usually Liam Neeson or Mel Gibson, right, has all these pictures on the wall and all these notes, and it looks wild. And it's like, whoa, this is visual chaos. So it's kind of like that, except instead of chaos,
It's clarity, right? Because now we have this great mind map overview that I can dive into a lot more. Extremely impressive. And then you'll see, obviously, because I clicked the output language to Spanish. Now the text that I entered in here is in Spanish as well.
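One way to picture what that drill-down is doing: the mind map is a tree, and clicking a node expands its children. Here's a rough sketch, using category names similar to the ones generated above; the exact structure is invented for illustration, not Notebook LM's internal representation.

```python
# Sketch of a mind map's drill-down as a nested dict: each key is a
# node label, each value holds that node's children.

mind_map = {
    "IBM watsonx and watsonx.ai updates, Jan 2024 - May 2025": {
        "Platform and ecosystem updates": {},
        "watsonx.ai updates": {
            "Foundation models and lifecycle": {},
            "Feature and capability updates": {},
            "AutoAI and RAG updates": {"April 2025 updates": {}},
            "Pricing adjustments": {},
        },
        "watsonx.governance updates": {},
        "Watson component updates": {},
    }
}

def expand(tree: dict, *path: str) -> list:
    """Return the child labels found by following `path` from the root."""
    node = tree
    for label in path:
        node = node[label]   # descend one tier per clicked label
    return list(node)

root = next(iter(mind_map))
print(expand(mind_map, root))                        # the four top-level categories
print(expand(mind_map, root, "watsonx.ai updates"))  # one tier deeper
```

Each click in the UI corresponds to one more label in the path, which is why you can end up "four tiers deep" so quickly: every expansion is just another level of the tree.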
All right. So we have our audio overview. Another thing I haven't tested: we'll see, if I change the output language back, number one, I'm guessing the audio overview will be gone, but we'll also see if all of our current notes in the middle chat are reset or not. So first, I actually have to remove this and re-add it to the stage here
as a tab. So hopefully you all can hear this audio overview in Spanish. Let's go ahead and take a quick listen.
I started it without actually sharing my tab. Here we go.
All right. Hey, live stream audience, anyone speak Spanish? Let me know. So again, I don't speak it; I can understand a little bit. It sounds like things are going correctly here. Estos últimos, ¿qué? 17 meses, hasta mayo de 2025. La idea es destilar lo más importante de toda esta información, ¿no? Pensemos que la IA avanza rapidísimo. Así que estos cambios son un buen reflejo de esa carrera. (Roughly: these last, what, 17 months, up to May 2025. The idea is to distill the most important parts of all this information, right? Keep in mind that AI is advancing incredibly fast, so these changes are a good reflection of that race.)
And so one thing I noticed so far sounds
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI.
Right. Again, not fluent. My Spanish is extremely bad. But it sounds great,
pretty on par. So, hey, Spanish-speaking audience, let me know: did that sound pretty on par to you? A couple of things I noticed. It doesn't look like the capability to join live is there when you're using a different language, so maybe that's only available right now in English. Normally you can actually talk to the AI hosts and ask them questions, and they will respond to you and listen to you, but maybe that's not available in other languages yet. It doesn't look like it is. The other thing is, normally there are two hosts that kind of banter with each other. So I'm going to kind of click around here and see if we get the other host. 30,000 tokens. There we go. Una barbaridad [an outrageous amount].
Okay, so we do get both hosts there; it just took a while for our second host to come in. So there you go: right away I was able to create a customized podcast for myself in Spanish. All right, so I'm going to go up to settings here and change the output language back to English.
I'm going to refresh this page, and I'm curious. So now I'm just going to type in the middle and see if we're back in English in the chat pane, which I do believe we should be. I'm just going to say: explain what Watson X is in one sentence.
There we go. We should now be back in the default language. I just didn't know, if you started something in Spanish, whether it was going to stay in Spanish. It does not, so that part's working as it probably should. And it says: based on the sources provided, Watson X is IBM's overarching enterprise-focused artificial intelligence and data platform. And that is correct. There we go. Perfect. All right. So that is, we just saw...
at least one of the new updates. Let me actually go over two more: one of the main ones of the show and one of the kind of side ones. So another cool one to look at here is this new Discover Sources. In the left-hand tab, under sources, you can manually add sources one by one, or you can click Discover Sources, which is kind of like a traditional Google search, and then you can just choose. So let's say I'm going to type in IBM Watson X, let's see, and then I'm going to type in AI. I'm going to click submit, and then it's going to bring in what it deems to be good sources, which I can then automatically add, versus, you know, manually searching and bringing them in. So people, I think, have mixed opinions on this, but,
you know, as I scroll down here, one thing I wish is that this was labeled and I could see the actual URL, right? In some instances, I can kind of make sense of what's here. So this second one says, you know, IBM Watson X, Wikipedia. The first one doesn't say anything; it's just pulling in what I believe would be a title and maybe the first part of a meta description. I would assume this is from IBM's website, but I don't know. So I would have to actually click on it, and then I can see, yes, it is from developer.ibm.com. All right. So then you can import those sources either all at once or one by one. Let me just do that as an example. I'm going to bring in the IBM Watson X one from Wikipedia. There we go. And then last but not least,
I'm going to go back over to our ChatGPT deep research, the one that ran for 12 minutes. All right. So I'm going to grab our research here, copy it, jump back into our NotebookLM, add this as a source, and paste the text. Bam. All right. So we've kind of...
done a little bit of everything except the one big thing: testing out the new model, which is Gemini 2.5 Flash, a thinking model. So hopefully this will make a little bit of sense here. I'm going to ask it something maybe a little tricky. All right, so I'm zooming in here.
So I'm saying, please carefully analyze all of the source material. Actually, I'm going to do two kind of quick prompts here. So first I'm saying: please analyze all of the source materials and give me a factual month-by-month breakdown of IBM's Watson X and Watson X AI updates, starting in January 2024 and ending with May 2025. So yeah, unfortunately you kind of just get these three dots
right now, right? So if you've ever texted someone and waited for them to text back, that's what you kind of get. Maybe in the future we'll get the chain of thought, because I would really be interested to see how Gemini 2.5 is thinking, but thinking only within the confines of your data, which would be extremely fascinating for dorks like me, right? I spend so much time reading either raw chain of thought or summarized chain of thought, because I think it's really a cheat code if you want to be better at large language models. So you'll see, it's still going, it's been about 30 or 40 seconds. All right, it's done now. And here's a great breakdown, okay?
The good thing about using notebook LM, as you'll see on my screen for our live stream audience, it always sources things as well. Right. So I can click on these different sources. So as an example, let me go to something.
So it says, some Watson X AI updates for, what month is this, January 2024: the AutoAI feature was enhanced to support ordered data for all experiment types. And I can hover over that and click it, and then it's going to take me back to that source. So that one is from the Grok deep research, and it finds that exact piece, and then I can go and read more about it if I want to.
All right. So as you'll see, I mean, for our live stream audience, that's a lot there. That's a lot. Let me get back to our chat interface. A lot of information, month by month. So I'm going to be reading this tonight, right? And probably creating an audio overview based on it, and I'll probably have a conversation with it to help me better understand all of these things. But, you know, one other thing: I wanted to test this out a little bit more. So I'm going to say: please identify underlying trends based solely on IBM's product roadmap and the updates they made to the Watson X and Watson X AI platforms. All right.
So this is interesting, because something like this would not have worked well on Gemini 2.0, where you're kind of calling on the model to do something that a traditional large language model could not do very well. I didn't try this exact thing on Gemini 2.0, so maybe it would have worked, but it's going to work much better on a thinking model. And it actually spit it out pretty quickly. It said: based on the updates and information provided in the sources for the IBM Watson X and Watson X AI platforms,
several underlying trends are evident in IBM's product roadmap. So it's kind of thinking between the lines here. It's saying, okay: rapid and diverse foundation model evolution and expansion; strong emphasis on enterprise governance, trust, and responsible AI; commitment to hybrid cloud, multi-cloud, and global availability; et cetera. That's good. So now I'm going to say: please identify
any change in course, whether overt or under the radar, that the IBM Watson X platform went through over the course of this period. And I'm going to say something like: please try to unearth information between the lines, but keep it factual, right?
Right. So I'm kind of testing here whether Gemini 2.5 Flash is able to really use this ability to reason and think about the information, right? I'm not just asking for factual recall. All right. And as we wait here, think of how something like this could be extremely useful. Let's say you have a daily meeting with
your team, you know, maybe you're remote or hybrid, and it's recorded every single day, and you've been doing it for years. You could literally upload all of those transcripts, or at least run a little automation to batch-convert them all, throw them into NotebookLM (you'd probably want the Plus version for that), and then run a similar prompt. Say: hey, I'm the manager of this department; here are our transcripts of this 10-person meeting; give me a performance report on John in marketing. What are some things I'm missing in terms of his performance? You have our daily meeting transcripts. Where is John excelling? Where is he struggling? What are projects he commonly drops? What are projects he really knocks out very quickly? So even just having something like this that connects all of your data, but can very quickly use a little bit of reasoning and a little bit of logic, extremely powerful. So let's quickly look at the, all right, here we go, a great one right here.
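As an aside, that batch-conversion automation can be a very small script. Here's a minimal sketch, assuming your meeting tool exports WebVTT (.vtt) caption files; the folder names are illustrative, and actually adding the resulting text files to NotebookLM would still be a manual upload step (there's no official upload API assumed here):

```python
# Hypothetical sketch: batch-convert a folder of WebVTT meeting
# transcripts (.vtt) into plain-text files you could then upload
# to NotebookLM as sources. Folder names are illustrative.
import re
from pathlib import Path

# WebVTT cue timing lines look like "00:01:02.000 --> 00:01:05.000"
TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} --> ")

def vtt_to_text(vtt: str) -> str:
    """Keep only the spoken lines: drop the WEBVTT header, bare cue
    numbers, timing lines, and blank lines."""
    lines = []
    for line in vtt.splitlines():
        stripped = line.strip()
        if (not stripped
                or stripped == "WEBVTT"
                or stripped.isdigit()          # cue numbers
                or TIMESTAMP.match(stripped)): # timing lines
            continue
        lines.append(stripped)
    return "\n".join(lines)

def convert_folder(src_dir: str, out_dir: str) -> int:
    """Convert every .vtt in src_dir to a .txt in out_dir;
    return how many files were written."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for vtt_path in sorted(Path(src_dir).glob("*.vtt")):
        text = vtt_to_text(vtt_path.read_text(encoding="utf-8"))
        (out / (vtt_path.stem + ".txt")).write_text(text, encoding="utf-8")
        count += 1
    return count
```

Point `convert_folder("meeting_exports", "notebooklm_sources")` at wherever your recordings land and you get one clean text file per meeting, ready to add as sources.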
So the first thing that it found is that IBM shifted from a primarily IBM-centric model offering to a broad, diverse, and open model ecosystem. It said initially IBM prominently featured its own Granite models; however, now there is a wide array of third-party and open-source models, including Meta's Llama, Mistral models, and some others. So I obviously knew that, right? But if you didn't follow something like this very closely and you're just looking at the information companies put out, you know, sometimes they might not say, hey, we're shifting our strategy, right? They just might put out new updates. Obviously, I've been following that, but it's a pretty good example. And that's actually what I was hoping for, because I know that it started with just Granite models, and then more recently in 2025 they've shifted to include more access to open-weight models like the ones listed there. So
I know this was kind of a longer version, but I wanted to do a couple of things. Number one, you all asked for this episode; you wanted to see what was new inside of NotebookLM. But I also wanted to give you a practical example, because, you know, people are always asking me, hey,
Jordan, how are you using AI or how can you stay up to date on all of these things? Well, I just gave you a little look into how I work, how I operate. Right. I use notebook LM all the time. So generally I do start with multiple deep research tools. I'll throw them into notebook LM. Sometimes I'll continue chatting with those individual deep researches, but I'm probably going to go in and have a conversation with this notebook.
that I just made. I'm probably going to listen to an audio overview and then ask questions; it's a great way to learn. And now that this is powered by Gemini 2.5 Flash, a thinking model, huge. It also opens up access to 50 new output languages, both for text and for the audio overview as well. Plus those two not-as-new but newish features: Discover Sources and Mind Maps. Again, I think NotebookLM is a tool you can't afford not to use. All right. That is a wrap, y'all. If you want to know more on NotebookLM, I've done a couple of episodes.
They're a little old, but if you want the basics, go listen to episode 383 or 370, where I covered NotebookLM in a lot more depth. We did some live demos there as well; just know those are going to be a little outdated by now, so keep that in mind. So I hope this was helpful. Let me know in the comments if it was. Do you like these live ones? Are they distracting? Right, like I said, one of the requests I get a lot is people just want to know, hey, Jordan, how are you using AI? Can you do more demos? I want to practically see it. But again,
I encourage you to think of all the different ways you can use this, right? Whether you're using your company's own publicly available information, or uploading transcripts, which I think is a great use case, or learning something new, or if you just want to talk to an AI expert
in conversation, based only on the data that you provide, this is great. So again, I think you can't afford not to use NotebookLM, if I'm being honest. All right. So if this was helpful and you're listening on the podcast, please, please, please leave us a review. Spotify kind of changed their algorithm recently, so fewer people are hearing the Everyday AI Show, and unfortunately all the big tech conglomerate podcasts are getting a little more shine. So if you're finding value in the podcast, or even on the live stream, and you could leave us a review, especially on Spotify, that would be great. We'd appreciate it. And share this on social media if it was helpful. And,
more importantly, go to youreverydayai.com, sign up for the free daily newsletter, and make sure to join us later today. I'm probably going to throw a post out on LinkedIn after the keynote here at IBM Think; I'm excited about this partnership with IBM, so make sure to tune in for that. Thank you for tuning in. Now make sure to join us tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.