This is a new episode of the podcast AI Unraveled, created and produced by Etienne Newman, a senior software engineer and passionate soccer dad from Canada.
If you're finding these deep dives into the world of AI valuable, please take a moment to like and subscribe to the podcast on Apple. Welcome to this deep dive. We're aiming to cut through the noise and bring you the really crucial and fascinating developments in artificial intelligence happening right now. We've got a really interesting mix today, everything from massive funding for a new
AI safety venture to incredible breakthroughs in medical AI and even how it's being used for business outreach and understanding dolphins. It's a lot to cover. The field is moving so fast, it's hard to keep up sometimes. Exactly. So let's get right into what people need to know. Sounds good.
It really is striking just how wide the impact of AI is becoming across, well, pretty much every domain you can think of. OK, let's unpack this first big piece. Investment news. Ilya Sutskever, you know, a major figure from OpenAI's beginnings. Right. I saw this. He's launched a new company, Safe Superintelligence Inc., or SSI. And get this. They just pulled in $2 billion in funding. $2 billion.
And the valuation? $32 billion. Right out of the gate. That's staggering. And the focus is explicit, right? Safe superintelligence. Exactly. AI that goes beyond human smarts, but with safety as like the number one priority baked in from the start. And the investors are pretty heavyweight too, I gather. Oh, yeah. Greenoaks Capital led it, but Alphabet, Google's parent company, and NVIDIA are in there too. Significant investors.
Google and NVIDIA. That's interesting. It makes you wonder, doesn't it? Are they hedging bets, trying to influence the safety direction or just getting in early on something potentially huge? That's the question, isn't it? What does their involvement signal about their own strategies, maybe even their own concerns about where AGI is heading? Yeah. Putting serious money into a company so focused on safety. Yeah. Definitely raises questions about their motivations. OK, so shifting gears a bit.
From the money side to tangible progress in health care, this is pretty amazing. Ah, the diagnostic AI. Yes. At the ESCMID Global 2025 conference, they presented results for an AI-guided ultrasound system. It's called ULTR-AI. Point-of-care ultrasound, right, POCUS. Exactly. And here's the kicker.
It diagnosed pulmonary tuberculosis, TB, 9% better than human experts. 9% better. That's a significant margin in diagnostics. It is. Specifically, 93% sensitivity and 81% specificity. Sensitivity is catching the disease if it's there. Specificity is correctly saying it's not there if it isn't. And those numbers are good. They're better than good. They actually beat the World Health Organization's targets for the non-sputum tests they use for TB triage.
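To make those two metrics concrete, here's a minimal sketch in Python. The confusion-matrix counts are made up purely for illustration; they are not figures from the ULTR-AI study.

```python
# Toy confusion-matrix counts, purely illustrative (not from the ULTR-AI study).
true_positives = 93   # people with TB the system correctly flagged
false_negatives = 7   # people with TB the system missed
true_negatives = 81   # people without TB correctly cleared
false_positives = 19  # people without TB incorrectly flagged

# Sensitivity: of everyone who actually has the disease, what fraction did we catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of everyone who doesn't have the disease, what fraction did we correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 93%
print(f"Specificity: {specificity:.0%}")  # 81%
```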
That's a really high bar to clear. That really is noteworthy. I mean, think about the potential impact. Faster, more accurate TB diagnosis, especially in places with limited resources, that could save a lot of lives. Totally. And the tech itself is fascinating. It looks at lung ultrasound images, even from devices you can hook up to a smartphone. Okay. And it uses three different AI models working together to interpret the images, spot patterns. And it sees things humans might miss. Apparently, yes. Really subtle stuff like tiny lesions on the pleura.
the lung lining, things that are just hard for the human eye to pick up consistently. It opens doors for much earlier detection. That's where AI often shines in imaging, right? Picking up those subtle patterns from vast amounts of data that maybe aren't obvious to us. It could,
Well, revolutionize diagnostics for lots of conditions, not just TB. Absolutely. But, you know, with all this rapid progress, the ethical side always comes up. Inevitably. And there's some noise from former OpenAI employees. They're raising concerns, pretty serious ones, about the company restructuring as a for-profit. Ah, yes. The shift away from the original nonprofit mission. Their argument basically is that chasing profits could, you know, undermine that whole benefit humanity mission. And...
potentially compromise safety standards along the way. It's that classic tension, isn't it? Powerful tech versus commercial drive. They're worried the bottom line might start outweighing the greater good. It's a crucial debate about priorities and who's accountable. And they're not mincing words. One person, Todor Markov, who's now at Anthropic. Another major AI player. Right. He apparently called Sam Altman a person of
low integrity and said the original charter was just like a smokescreen. Strong words. Yeah. The core point seems to be that the nonprofit structure was essential to make sure AGI helps everyone, not just shareholders. They feel the shift is sort of a breach of trust. It really highlights the immense responsibility involved here. Who controls this tech? What are their real goals? How do we keep it aligned with human values? These aren't just philosophical questions anymore. Definitely not.
OK, let's pivot to something more immediately practical. How AI is being used in business right now. Right. The automation side of things. Yeah. Particularly lead outreach. Developers, marketers, they're hooking up AI models with tools like Zapier. Zapier, yeah, for connecting different apps. So they can automate things like figuring out which leads are good, personalizing emails, basically streamlining the whole sales pipeline. Makes sense. A lot of that early outreach stuff can be pretty repetitive.
AI handling that frees up people for the more human parts of selling. They gave this example using something called a Lindy AI agent. You basically set up an AI assistant for outreach, give it access to your lead lists, tell it to research them, draft personalized emails. Then you build a workflow in Zapier. Like step-by-step instructions. Kind of. Starts with a trigger, like new lead received. The AI agent processes the info.
Then it can loop through steps like using Perplexity to search for more info. Perplexity, the AI search tool. Right. And then maybe another AI node drafts the email based on that research. Then it exits the loop. Maybe another AI summarizes what happened. It sounds super efficient. The personalization at scale is the key advantage there, I think. Moving beyond generic blasts to something tailored.
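For anyone curious what that trigger-then-loop structure looks like, here's a rough sketch in Python. Every function name is a hypothetical placeholder standing in for one of the steps described above; none of this is an actual Zapier or Lindy API.

```python
# Hypothetical sketch of the lead-outreach flow described above.
# These functions are placeholders, not real Zapier or Lindy SDK calls.

def research_lead(name: str, company: str) -> str:
    # Placeholder: a real workflow would call an AI search service here.
    return f"Notes on {name} at {company}."


def draft_personalized_email(lead: dict, notes: str) -> str:
    # Placeholder: a real workflow would call an LLM with the research notes.
    return f"Hi {lead['name']},\n\nI noticed... ({notes})"


def summarize_outreach(lead: dict, draft: str) -> str:
    # Placeholder: a final AI step summarizing what was done.
    return f"Drafted {len(draft)} characters of outreach for {lead['name']}."


def handle_new_lead(lead: dict) -> dict:
    """Runs when the 'new lead received' trigger fires."""
    notes = research_lead(lead["name"], lead["company"])   # research step
    draft = draft_personalized_email(lead, notes)          # drafting step
    summary = summarize_outreach(lead, draft)              # summary step
    return {"draft_email": draft, "summary": summary}


if __name__ == "__main__":
    print(handle_new_lead({"name": "Alex", "company": "Example Co"}))
```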
That usually gets much better results. Definitely seems like it. Okay, shifting again. Hardware, a huge strategic move from NVIDIA. Ah, the U.S. manufacturing plant. Yeah, they're planning to manufacture AI supercomputers in the United States. And the scale is, wow, up to $500 billion over the next four years. Half a trillion dollars. Good grief.
Where are they doing this? Producing their Blackwell chips in Arizona and setting up the actual supercomputer factories in Texas. They're partnering with the big names, TSMC, Foxconn, Wistron. Right, the major contract manufacturers. The goal seems clear. Strengthen the supply chain, make it more resilient, and just meet the insane demand for AI infrastructure.
That makes a lot of sense, especially given, well, everything going on globally with tech supply chains. Bringing critical manufacturing onshore is a big strategic play, positions the U.S. strongly. NVIDIA's CEO, Jensen Huang, basically said it's about meeting demand, creating jobs and boosting economic security, that kind of investment.
It really tells you where they think AI is going. Oh, absolutely. It shows they believe the demand for massive compute power for AI is only going to grow. And it's strategically critical. It's a massive bet on the future of AI. Okay, now for something completely different, but also really cool, AI and animal communication.
Oh, is this the Dolphin Project? It is. Google developed an AI called DolphinGemma. It's designed specifically to analyze dolphin sounds, their vocalizations. Using data from? Decades of recordings from the Wild Dolphin Project. And apparently this AI can already spot patterns, even generate sounds that mimic dolphin sequences. Generate dolphin sounds. Yeah. Okay, that's fascinating. And get this, it's designed to run on a
Pixel phone so researchers can use it for real-time analysis out in the field. That portability is key. It tackles this hugely complex data set. Dolphin sounds are incredibly varied. Clicks, whistles.
Trying to find meaning in that is a massive task for humans alone. For sure. Google's using their SoundStream tech, working with the Wild Dolphin Project to kind of digitize and break down the sounds. They already know about things like signature whistles, like names. Right, the individual identifiers. And the squawk patterns they use when socializing. But the big hope, the really exciting part, is whether AI like this can finally tell us if dolphins have a true structured language. Imagine if they do.
The implications for understanding animal intelligence, communication itself, it would be revolutionary. A whole new way of seeing the natural world. Totally mind-blowing potential. Okay, slightly lighter topic now, but interesting culturally, social media trends. Uh-oh.
What now? Remember those AI-generated action figure portraits that were everywhere for a bit? Vaguely, yeah. People making themselves look like superheroes or something. Sort of, yeah. Stylized action figures. It got really popular. But then artists started pushing back, creating hand-drawn versions as a kind of counter-movement.
Ah, the human touch versus the algorithm. Exactly. It kind of illustrates that ongoing dialogue, doesn't it? AI creativity versus human craftsmanship. What's authentic? What's the value of something made by hand in the digital age? It's a great example. While AI can make cool images fast, the artist's reaction shows we still value that human skill, the intent, the personal expression.
It's not just about the final picture. It definitely sparks that conversation about what art is and what we value in it, the process or just the result. And those questions will only get more complex as AI gets more capable creatively.
It's not necessarily AI versus human, but it's definitely a changing relationship. Okay, just circling back quickly to Ilya Sutskever's SSI, that new safety-focused company. Right, the one with the big funding. The fact that Google and NVIDIA are backing it so heavily, it really positions SSI as a potentially major player in the AGI race, doesn't it? Especially with that explicit safety angle. Absolutely. You've got Sutskever's expertise plus serious resources from tech giants.
Their focus on safety from the start could really influence how the whole field approaches these challenges. Definitely one to watch. And just a quick note on the open source side, DeepSeek-V3 got deprecated on GitHub Models. Ah, yeah, models move fast. Developers are being told to move to newer alternatives. Just shows how quickly things evolve, right?
What's cutting edge today is old news tomorrow in AI. It's constant learning and adapting if you're working in this space. No doubt about it. Okay, and finally, just one last example that blew my mind. Democratization through AI. Okay. A high school student, using an AI algorithm and public astronomical data, identified over one and a half million previously unclassified space objects. A high school student?
1.5 million objects. Seriously? Seriously. Apparently one of the biggest amateur contributions ever to astronomy. It just shows how AI can give individuals, even without huge resources, the power to make major scientific discoveries. That is incredible. It perfectly illustrates how AI can unlock insights from massive data sets, empowering almost anyone to contribute to science, democratizing knowledge, accelerating discovery.
That's powerful stuff. It really is. Okay, so that wraps up our main points for this deep dive. A lot going on, as always. And just a reminder, if you're looking to deepen your own tech skills, maybe master some in-demand certifications, definitely check out Etienne's AI-powered Djamgatech app. Right, the one for certifications. Yeah, it covers over 50 certs in cloud, finance, cybersecurity, healthcare, business, all sorts of fields, designed to help you ace them. We'll put the links in the show notes.
You know, looking back at everything we covered, the safety focus with SSI, the medical AI, the business tools, even the dolphins, it seems disconnected. But it's all part of the same wave, isn't it? That's a great point. It's all interconnected. Each piece informs the next. Exactly. From foundational safety to very specific applications, it's all building this, well, this rapidly evolving AI ecosystem.
So the final thought for everyone listening is maybe to reflect on that interconnectedness. How are these advances collectively changing how we learn, how we solve problems, how we interact with intelligent machines? Yeah. What kind of future is all this technology actually shaping? And maybe more importantly, what role do you want to play in that future? Something to definitely ponder. Yeah. Until our next deep dive. And yeah, don't forget to check out the Djamgatech app if you're certification hunting. Links are right there in the show notes.