
AI Daily News June 20 2025 ⚠️OpenAI prepares for bioweapon risks ⚕️AI for Good: Catching prescription errors in the Amazon 🎥Midjourney launches video model amid Hollywood lawsuit 🤝Meta in talks to hire former GitHub CEO Nat Friedman to join AI team

2025/6/20

AI Unraveled: Latest AI News & Trends, GPT, ChatGPT, Gemini, Generative AI, LLMs, Prompting

Deep Dive
OpenAI is actively building internal protocols to address the risk that its advanced models could be misused to design biological weapons. They are training models to refuse harmful requests, deploying always-on systems to detect suspicious activity, and planning a biodefense summit with government and non-governmental organizations. Meanwhile, the OpenAI Files project is pushing for independent review and oversight of OpenAI's governance and leadership, to ensure AGI development is responsible and accountable. I believe these measures are crucial for preventing catastrophic misuse in areas such as national security and biotechnology.


Chapters
OpenAI is developing internal protocols to mitigate the risks of its advanced models being misused to design biological weapons. They are implementing refusal mechanisms, always-on detection systems, and advanced red teaming to address potential threats.
  • OpenAI's proactive steps to prevent misuse of its models for biological weapons design
  • Dual-use capability of AI models
  • Internal protocols including refusal mechanisms, detection systems, and red teaming

Transcript


This is a new deep dive from AI Unraveled, created and produced by Etienne Newman, who's a senior engineer and a passionate soccer dad up in Canada. Please remember to like and subscribe. Welcome back to the deep dive. This is where we take, you know, a whole stack of information: articles, research papers, our own notes, and we really try to distill it down to the key insights just for you.

Today, we're diving into, well, a really fascinating collection of recent AI innovations. It's a snapshot of what's happening right now, in June 2025. Our mission: to get you quickly up to speed on the real cutting edge of AI, from the big, profound implications down to some honestly pretty surprising real-world applications. That's right. And the pace. I mean, it's just incredible, isn't it? It feels like things are changing daily. We're going to touch on quite a bit today. Everything from these really critical discussions around AI safety protocols,

the intense talent wars shaping the industry, all the way to very practical stuff, like how AI is transforming healthcare and even what's next for smart eyewear. It's a really diverse landscape. It really is. And actually, before we jump right into that first topic, I wanted to quickly mention some great resources from Etienne Newman. He's really committed to helping you get ahead in the AI space. He's put together some

excellent AI certification prep books. We're talking Azure AI Engineer Associate, Google Cloud Generative AI Leader Certification, AWS Certified AI Practitioner Study Guide, Azure AI Fundamentals, and the Google Machine Learning Certification too. You can find all of those over at djamgatech.com. And don't worry, we'll put all the links right there in the show notes for you. It makes it easy.

Okay, so let's kick things off with something that's probably keeping a lot of AI folks up at night. Safety, governance. You see headlines like OpenAI preparing for AI-enabled bioweapon risks, and it really hammers home how urgent these conversations are, doesn't it? It absolutely does. And it shows they're taking proactive steps, right? It's not just theoretical anymore. Essentially, OpenAI is building out these internal protocols because there are very real concerns that

their advanced models could potentially be misused, specifically for designing biological weapons. They call it dual-use capability. Dual use? Yeah. And they're actually anticipating that the models coming after their o3 reasoning model, that's their next big one, will likely trigger a high-risk status under their own Preparedness Framework, specifically for biological threats. Wow. Okay. So...

How do you even start to mitigate something that sounds so serious? What can they actually do? Well, they're putting a few things in place. One big part is training the models themselves to refuse harmful requests.

Just build that refusal right in. They're also deploying what they call always-on systems designed to detect suspicious activity. And they're doing a lot of advanced red teaming. Red teaming, like trying to break it themselves. Exactly. Stress testing it to find vulnerabilities before someone else does. And they're not doing it in a vacuum. They've got a biodefense summit planned for July with government researchers, NGOs.
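To make that layered idea concrete, here's a minimal, purely illustrative sketch of what a refusal-plus-detection layer could look like. This is not OpenAI's actual system: the keyword patterns, the flag_for_review hook, and the moderated_reply wrapper are all invented for illustration, and a real deployment would use trained classifiers rather than regex rules.

```python
# Hypothetical sketch of a refusal + always-on detection layer.
# NOT OpenAI's pipeline; every name and pattern here is made up.

import logging
import re

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("safety-monitor")

# Toy keyword patterns standing in for a real, trained harm classifier.
BLOCKED_PATTERNS = [
    re.compile(r"\b(synthesi[sz]e|weaponi[sz]e)\b.*\b(pathogen|toxin)\b", re.IGNORECASE),
]

REFUSAL = "I can't help with that request."

def flag_for_review(prompt: str) -> None:
    """Stand-in for an always-on monitoring hook that alerts human reviewers."""
    logger.warning("Suspicious request flagged for review: %r", prompt[:80])

def moderated_reply(prompt: str, model_fn) -> str:
    """Refuse and log if the prompt matches a blocked pattern; otherwise call the model."""
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        flag_for_review(prompt)  # detection: record the attempt even though we refuse
        return REFUSAL           # refusal: the prompt never reaches the model
    return model_fn(prompt)

if __name__ == "__main__":
    echo_model = lambda p: f"(model answer to: {p})"
    print(moderated_reply("How do I bake sourdough bread?", echo_model))
    print(moderated_reply("How would I synthesize a dangerous pathogen?", echo_model))
```

The shape is the point: refusal and detection are separate concerns, so a blocked request is both denied and logged for humans, which is roughly the division of labor the hosts are describing.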

And it's not just OpenAI, you know. Anthropic activated similar, quite strict protocols when they released their Claude 4 family. So what's the bigger picture here? What should we take away from this? I think the key takeaway is that these AI leaders are moving past just talking about risks. They're actively building guardrails. They see AGI, or something close to it, on the horizon, and they understand it's a potential double-edged sword.

It's really about trying to prevent catastrophic misuse, especially in areas like national security or biotech, before the technology gets potentially too powerful to easily control, setting a standard, hopefully.

And related to that oversight piece, it's not just internal protocols. There's also this push for transparency from the outside, right? Leading to these OpenAI Files. What's that about? Yeah, the OpenAI Files. It's basically an archival project run by some tech watchdog groups. They're documenting concerns around OpenAI's governance, its leadership,

especially as it relates to AGI development. They're pushing for more independent review, more scrutiny on things like structural changes, remember, when they removed the profit caps for investors, or instances where safety evaluations might have felt rushed, even potential conflicts of interest at the leadership level. So it's about accountability, really, keeping pace with the tech development.

Exactly. It's trying to shift the conversation around AGI from just when will it happen to how do we ensure it happens responsibly and accountably? The argument is that transparency and strong regulatory frameworks are critical because this tech is often moving way faster than public understanding or even government awareness.

Okay, let's shift gears a bit. This next one is maybe a little surprising, maybe even slightly unsettling. It's about how interacting with AI might actually be affecting our brains. An MIT study looked into this. Yeah, this MIT study is pretty eye-opening. They had students write essays, some with help from LLM chatbots, some without, and they used EEG headsets to measure brain activity. What they found was, well, pretty significant. The students using the LLMs showed markedly reduced brain activity, specifically brain connectivity.

They measured it using something called dDTF, and it scaled down the more external help the students got. Scaled down how much? The group using the LLMs showed the weakest brain coupling, and they saw up to a 55% reduction in the relevant EEG signal.

That strongly suggests lower cognitive engagement during the task. Wow. Fifty-five percent. That's a huge drop. It almost makes you, I don't know, hesitant to rely on these tools too much. Right. Is there a tradeoff here between efficiency and actual thinking? That's the million-dollar question, isn't it? I mean, these AI tools are undeniably great for efficiency. They speed things up.

But this study really does suggest that over-reliance might come at a cost. It could potentially hinder critical thinking skills, maybe even long-term cognitive development. It's definitely something we need to be mindful of, particularly in education. Absolutely. Food for thought there. OK, moving from our brains to the workplace. What do people actually want from AI at work? Stanford did a study that surveyed about 1,500 workers. What did they find?

So the Stanford study has some really interesting findings, and it highlighted a bit of a potential disconnect, actually. The main takeaway was that workers overwhelmingly want AI tools to assist them, not replace them. They really value transparency, understanding how the AI works, and opportunities to learn new skills alongside the AI.

Okay. Assist, not replace. That makes sense. What was the disconnect you mentioned? Well, they found that something like 41% of startups coming out of Y Combinator, for instance, are focusing on automating areas that workers themselves consider pretty low priority.

Huh. So what do workers want automated then, if not the high-priority stuff? Mostly the low-value repetitive tasks. Things like scheduling meetings, data entry, you know, the tedious stuff that eats up time. The goal for them is to free up that time to focus on more important, more engaging work.

The researchers even developed this thing called the human agency scale, and they found that for nearly half the occupations they looked at, people preferred an equal partnership between humans and AI. Interesting. Any fields particularly resistant today? Yeah, maybe not surprisingly, arts and media professionals showed the strongest resistance to automation. Only about 17% of creative tasks got positive ratings for AI assistance. So the big message for companies building these AI tools seems pretty clear then.

I think so. It really underlines that ethical AI design in the workplace has to be about augmenting human potential, right? Making people better at their jobs, more productive, maybe even more fulfilled. It shouldn't just be about automating tasks purely for efficiency gains, especially if it bypasses what workers actually want and need.

Right. Focusing on the human side. OK, let's pivot now from the human impact to the high-stakes world of AI business and production. We've seen some wild deals lately, like that story about a solo-owned startup selling for $80 million. Yeah, that was incredible. A real testament to finding the right niche. It's the story of Maor Shlomo, an Israeli developer.

He created Base44, which he called a vibe-based AI coding interface. Vibe-based? Okay. (Laughs.) Yeah, basically it allowed non-programmers to build apps using natural language prompts. And it just took off. Got like 10,000 users in three weeks. Mostly just word of mouth. And he owned the whole thing himself. Pretty much bootstrapped it. Yeah, he was the only shareholder.

Which makes the next part even better. When Wix acquired them, he gave his eight employees $25 million in bonuses out of the deal. Wow, that's fantastic. So Wix bought them. Yep. Wix plans to integrate Base44 into their website building platform. Shlomo apparently felt they were the best partner to help scale it up. He'd only started it as a side project back in January. That's amazing.

So what does a story like this tell us about the current AI market? Well, it really highlights that these smaller AI startups, especially if they have a really unique user experience or target a specific niche function, they're becoming super valuable acquisition targets. Big enterprise companies are snapping them up to quickly integrate that specialized capability.

shows innovation can still come from, well, anywhere. Definitely. Now, speaking of big companies making moves, Meta seems to be really aggressive in the AI talent war. They apparently tried to buy Ilya Sutskever's new company for, like...

$32 billion. That's the report. Yeah. Meta apparently made a serious attempt to acquire Safe Superintelligence, which is the new AI venture co-founded by Ilya Sutskever, who was OpenAI's chief scientist, of course. And yeah, the valuation being thrown around was over $32 billion. And Sutskever said no. Apparently so. Rebuffed the acquisition attempt and also reportedly turned down a separate effort by Meta to just recruit him directly onto their team. So what did Meta do then? Just walk away? Not exactly.

Seems like Plan B was put into action.

They've since hired Daniel Gross, Safe Superintelligence's other co-founder, along with former GitHub CEO Nat Friedman. And they've also apparently acquired a stake in the pair's venture firm, NFDG. Ah, OK. So if you can't buy the company, hire the key people and invest in their network. Pretty much. It just shows how determined these big players are to get top AI talent under their roof one way or another. It really underscores that fierce competition, doesn't it? The talent war isn't just a buzzword. Not at all.

We're seeing these multi-billion dollar plays for companies or just for key individuals, especially if they come from those top tier labs like OpenAI or DeepMind. The fight for the best minds is absolutely intense right now. And speaking of foundational players, NVIDIA. We all know they make the chips, the sort of picks and shovels for the AI gold rush. But they're also building this quiet investment empire, right? And now...

humanoid robots in their own manufacturing. NVIDIA has been quite busy on the investment side, backing dozens of startups across the whole AI ecosystem. Chips, robotics, foundation models, you name it. But this latest news is definitely a shift. They are seriously exploring using humanoid robots to help manufacture their own high-demand AI hardware. Humanoid robots building NVIDIA servers. That feels very meta, like AI building AI. It kind of is.

The reports are that Foxconn, who manufactures a lot of electronics, and NVIDIA are in talks to deploy these humanoid robots at a new factory in Houston. This factory is specifically for NVIDIA's AI servers. When might this happen?

The deployment could be finalized pretty soon, actually. And the robots could start working by Q1 of next year, right when the factory starts churning out NVIDIA's new GB300 AI servers. If this goes ahead, it would be the first time an NVIDIA product is made with humanoid robot help. And it would be Foxconn's first AI server factory using them on the line, too. So what's the potential impact here? Is this the future of making complex tech? It could be a really significant shift. If it works, if it's efficient and scalable...

it might just kick off a new era in high-tech manufacturing. AI-designed hardware being built by AI-powered machines. It could totally change supply chains. Okay, shifting back to accessibility for a moment. Adobe,

They've just launched a mobile app for Firefly, their generative AI tool. Yeah, that's a nice step forward for creators. Adobe is making its Firefly tools available directly on your phone, iOS and Android. So you can generate images, text effects, that kind of thing while you're on the go. So making it more integrated into daily workflows, basically. Exactly. It makes generative AI much more accessible, much more mobile first. Adobe seems to be positioning it as a sort of everyday creative companion you can carry in your pocket. Right.

Okay, let's move into our final section, AI in action. Looking at the diverse, sometimes really surprising ways AI is showing up in the real world. And I want to start with a fantastic AI-for-good story. Catching prescription errors way out in the Amazon. Oh, this is a great one. It really shows the positive impact AI can have. It's about Samuel Andrade. He's a pharmacist in this remote place in Brazil called Caracaraí. It's huge, like bigger than the Netherlands, but only about 22,000 people.

And patients there, they often travel for days by boat just to get to the pharmacy. Samuel used to be totally overwhelmed, spending hours just trying to cross-check drug databases for potential interactions or errors for maybe just a few patients, while dozens more were waiting. This sounds incredibly stressful and potentially dangerous for patients. What changed? Well, now he has help from an AI assistant. It was developed by a Brazilian nonprofit called NoHarm.

This software basically flags potentially problematic prescriptions for him, helps him verify everything much faster. Quite a story behind it, too. It was built by a brother and sister, Ana Helena, a pharmacist, and Henrique Dias, who's a computer scientist and the CEO of NoHarm. They trained their AI model on thousands and thousands of real-world drug interactions, dosage errors, adverse effects. That's amazing. So how much difference has it actually made for Samuel and his patients? A huge difference.

The software can process hundreds of prescriptions really quickly, identifying those red flags, interactions, potential overdoses, and it gives him links to medical sources to back up every warning. Samuel said it's quadrupled his capacity. And crucially, it's caught over 50 errors since he started using it, things that might have slipped through otherwise. And NoHarm? They're supported by grants from Google, Amazon, NVIDIA, the Gates Foundation, and others.

And they offered this software for free to public health facilities in Brazil's poorest regions. It's in about 20 cities now. It even helped another rural doctor avoid dangerous dosages for patients who traveled really far to see him.
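For a sense of the kind of cross-checking NoHarm automates, here's a toy sketch of a pairwise drug-interaction flagger. To be clear, the interaction table, severity labels, and source URLs below are all made up for illustration; the real system is a trained model built on curated clinical data, not a hand-written lookup like this.

```python
# Toy sketch of prescription cross-checking; NOT NoHarm's actual method.
from itertools import combinations

# Invented interaction table: pair of drug names -> (severity, reference URL).
INTERACTIONS = {
    frozenset({"warfarin", "ibuprofen"}): ("major", "https://example.org/warfarin-nsaids"),
    frozenset({"lisinopril", "spironolactone"}): ("moderate", "https://example.org/ace-k-sparing"),
}

def check_prescription(drugs: list[str]) -> list[str]:
    """Return a human-readable warning for every known pairwise interaction."""
    names = sorted({d.strip().lower() for d in drugs})
    warnings = []
    for a, b in combinations(names, 2):
        hit = INTERACTIONS.get(frozenset({a, b}))
        if hit:
            severity, source = hit
            warnings.append(f"{severity.upper()}: {a} + {b} (see {source})")
    return warnings

# Usage: one prescription with a known red flag.
print(check_prescription(["Warfarin", "Ibuprofen", "Metformin"]))
# -> ['MAJOR: ibuprofen + warfarin (see https://example.org/warfarin-nsaids)']
```

Even this crude version shows why each warning carries a source link: the tool's job is to surface the flag and the evidence quickly, while the pharmacist still makes the call.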

As Samuel put it nicely, many things slip past our eyes; the system lets us cross-check information much faster. That is just such a powerful example of AI making a real difference, isn't it? It absolutely is. It shows how AI could be a genuine lifeline in places with limited resources. It's not replacing the pharmacist's expertise. It's amplifying it, creating a vital safety net, improving patient safety where it's needed most. OK, let's switch gears again.

To the creative side, but maybe a more contentious one, Midjourney.

They've jumped into the AI video generation race, but they're doing it right in the middle of a big legal battle. Yeah, talk about timing. Midjourney just launched V1, their first AI video model. It lets users take an image, generated or uploaded, and animate it into a short clip, about five seconds, extendable maybe to 20. You can use automated motion or give it prompts like low or high motion. Okay, standard AI video stuff. So what's the legal firestorm part? Is it about the data they used to train it?

Exactly. The launch comes while Midjourney is already facing a major copyright lawsuit from artists. And that lawsuit specifically names this new video service as a potential future infringement concern. So the legal challenges are kind of evolving alongside their tech. So what's the big takeaway there for the creative world? I think it's a really stark reminder that as this generative video tech gets better and more widely used, those fundamental questions about fair use, copyright, creative ownership,

They're just going to get louder and more complex. These legal fights are probably going to shape how this whole field develops. Makes sense. Okay, looking ahead now, wearables and consumer AI. Meta is teaming up with Oakley for new smart glasses. Seems like the race for our faces is heating up. Agreed. Yeah, Meta is definitely pushing hard into smart eyewear. They've unveiled this new line with Oakley aiming for that intersection of fashion and AI-powered augmented reality.

There's a limited edition model called HSTN for about $500 and other styles coming later this summer starting around $400. What makes these different? Are they targeting anyone specific? They seem to be targeting athletes, at least partly.

They boast IPX4 water resistance, which is good for sweat and light rain. They say double the battery life of the current Meta Ray-Bans, and they can record video in 3K resolution. Plus, you get the built-in voice assistant, camera features, integration with Meta's whole AI ecosystem, and EssilorLuxottica, the parent company, is offering different frames, lenses, even prescription options. So this isn't just a gadget. It's a serious platform play. Absolutely.

It shows that smart eyewear is becoming a major battleground for consumer AI. You've got Meta, Apple, Google,

They're all vying to put intelligent, connected experiences right in front of our eyes. Expect to see a lot more of this. Okay, here's a wild statistic from China. A couple of AI avatars, virtual influencers, generated over $7 million in sales on a live stream in less than a day. It's pretty staggering, isn't it? $7 million in just seven hours of live streaming. That performance significantly outpaced many, many human influencers. It really shows the commercial power these digital personas can wield.

What does that even mean for marketing and for human influencers? It feels a bit dystopian. It definitely raises some big questions, questions about authenticity in advertising, for one.

Is it OK if people don't know they're interacting with an AI? Yeah. Then there's the obvious potential for labor displacement. What happens to human influencers? And it forces us to think about digital consumer psychology, how people connect with these non-human entities. It's clear AI influencers aren't just a gimmick. They're seriously disrupting marketing and entertainment. Yeah. Fascinating and a bit unnerving.

Okay, another medical breakthrough. A company from Taiwan, Surglasses, has unveiled the world's first AI anatomy table. That's right. Surglasses has launched this tool that uses AI to visualize anatomy. It blends augmented reality with real-time diagnostic data. It's designed primarily for surgical training and planning.

Imagine being able to interact with a detailed 3D anatomical model, almost like a hologram, integrated with real data. It's a whole new way to learn and prepare for procedures. So this could really change medical education and maybe surgery itself. Potentially, yes. It suggests that medical education and even surgical practice are heading towards a major digital transformation.

Using spatial computing, AR, AI, tools like this can drastically improve precision, enhance understanding, and give trainees unprecedented access to complex anatomical information.

Okay, we're almost out of time, but let's squeeze in a few quick hits, other interesting bits of recent AI news. Sure. Quickly then. Perplexity now lets you generate short Veo 3 videos, complete with audio, just by tagging it on social media, like X, formerly Twitter. Pretty neat. OpenAI rolled out a record feature for ChatGPT. It can capture audio from meetings or brainstorms, then summarize and transcribe it for you. Could be a big time-saver. Oh, that sounds useful. Yeah.

And in drug discovery, SandboxAQ, which is backed by NVIDIA, released a huge dataset, SAIR, with like 5.2 million synthetic protein-drug molecule structures to help train AI for discovering new medicines. And finally, researchers at Mass General Brigham developed an AI tool called AI-CAC. It reads chest CT scans very quickly to look for calcium deposits, which are indicators of heart disease risk, helping doctors spot issues faster.

Wow. Okay, so that was quite the deep dive. We've covered a huge amount of ground today from AI safety and those really serious bioweapon protocols through the business side with talent wars and robotic manufacturing, all the way to AI saving lives in the Amazon, creating video, powering smart glasses, and even outperforming human influencers. It really shows how AI is touching almost everything. The pace is just relentless. It really is.

AI isn't just some abstract tech concept anymore, is it? It's this incredibly dynamic force that's actively reshaping national security.

industries worldwide and yeah, our everyday lives in ways that are just incredibly diverse and impactful. Staying informed feels more important than ever. Absolutely. And thinking about all this AI potentially catching catastrophic risks, but also maybe changing how we think AI assisting doctors, but also creating virtual celebrities. It makes you wonder, doesn't it? As these advancements accelerate, how might they reshape not just our jobs or industries, but our very perception of what it means to be human and what it means to learn in this rapidly evolving landscape? Something to definitely mull over.

And remember, this deep dive into AI innovations is part of the AI Unraveled series, all brought to you by Etienne Newman, dedicated to helping you navigate this space. If you were thinking about getting certified in AI, boosting your career, definitely check out Etienne's prep books. Again, that's Azure AI Engineer Associate, Google Cloud Generative AI Leader Certification, the AWS Certified AI Practitioner Study Guide, Azure AI Fundamentals,

and the Google Machine Learning Certification. They're all available over at djamgatech.com. Right.

And also, don't forget Etienne Newman's toolkit. It's called the AI Unraveled Builders Toolkit. It's packed with resources, AI tutorial PDFs, more certification guides, even audio and video tutorials to really help you get hands-on and start building with AI. All the links for the books and the toolkit are right there in the show notes. Easy to find at djamgatech.com. Thanks so much for joining us on the Deep Dive today. Yeah, thank you. And please be sure to like and subscribe to AI Unraveled so you don't miss our next deep dive into the fascinating world of artificial intelligence.