
AI Daily News 20250318: 🇨🇳 Baidu Launches Ultra-Cheap AI Models ⚖️Judge Rejects Musk’s Bid to Block OpenAI’s Evolution 🧬Harvard Team Creates AI Agent for Personalized Medicine

2025/3/18

AI Unraveled: Latest AI News & Trends, GPT, ChatGPT, Gemini, Generative AI, LLMs, Prompting

People
Etienne Newman
Topics
I observed that Baidu has released the cost-effective AI models Ernie 4.5 and Ernie X1, which challenge the existing market leaders on both performance and price. Ernie X1 costs only half as much as its competitor and is free for individual users, which will greatly expand access to advanced AI. Ernie 4.5 has also made notable progress in emotional understanding, logical reasoning, and coding, which will allow it to be used in a much wider range of real-world applications. Baidu's pricing strategy may intensify market competition and make powerful AI tools easier to obtain, changing how people build and use AI. I also noted that the court rejected Musk's request to block OpenAI's development, even though his concerns about the pace and potential risks of AI are reasonable. OpenAI's nonprofit status has also been questioned, sparking discussion about the direction and ethics of AI development. Subsequent litigation will continue to examine whether OpenAI has strayed from its original mission. Encouragingly, researchers at Harvard and MIT have developed the TxAgent AI agent, which can provide personalized treatment recommendations based on a patient's specific situation, taking into account drug metabolism, existing conditions, drug interactions, and other factors. This marks major progress in personalized medicine and promises to improve patient outcomes.

Chapters
Baidu launched Ernie 4.5 and Ernie X1, AI models that are claimed to be cheaper and more efficient than existing models like DeepSeek's R1 and GPT-4. This could increase accessibility to advanced AI and further democratize the field.
  • Ernie X1 is positioned as a budget-friendly competitor to DeepSeek's R1, offering the same performance at half the price.
  • Ernie 4.5 claims enhanced EQ, improved language skills, and advancements in logical reasoning and coding, surpassing GPT-4 in multiple benchmarks.
  • Baidu's pricing strategy could significantly impact the AI market, leading to increased competition and accessibility of powerful AI tools.

Transcript


This is AI Unraveled, created and produced by Etienne Newman, senior software engineer and also, well, passionate soccer dad from Canada. If you're finding these deep dives valuable, please take a second to like and subscribe on Apple. It really helps us reach more people just like you. Yeah, it does. People who are ready to understand what's happening in this wild world of AI.

You've sent us a ton of fascinating AI news, actually, for March 18th, 2025. So we're going to try to break down what it all means. Think of it as a personalized analysis just for you, highlighting those big changes and maybe some surprises along the way. Yes. There are definitely some surprises in here, a real snapshot of this dynamic period in AI evolution. You've got all the bases covered. So today we're going to be getting into Baidu, launching these really cheap AI models. Yeah.

And what's the latest with Elon Musk versus OpenAI? Big breakthroughs in personalized medicine. Yeah. AI and image watermarks. That's a tricky one. Yeah. AI nurses. A really interesting AI system for finding brand new chemical reactions. A prediction that might completely change how we do software development.

And some really promising custom cancer vaccines. Yeah, we've got a lot to cover. We do. And our goal, like always, is to cut through the noise and give you the clear takeaways from these stories, the aha moments. Exactly. That'll leave you feeling informed and ready for what's next. Well said. Okay, let's jump in.

Starting with Baidu, they just unveiled Ernie 4.5 and Ernie X1. Yes. It's like a pretty bold move, wouldn't you say? It really is. Baidu is definitely trying to shake things up. They're positioning this Ernie X1 as like a budget friendly competitor to DeepSeek's R1. They claim it performs just as well, but it's 50 percent cheaper.

And get this, it's free for individual users like you through Baidu's chatbot platforms. That's huge in terms of who gets to use advanced AI. Yeah, and Ernie 4.5 itself, they're talking about enhanced EQ, enhanced language skills. And when they say EQ, they mean a more subtle, almost empathetic

understanding of language. Right. But also they're talking about improvements in preventing those AI hallucinations, the made up stuff. Yes. And advancements in logical reasoning and coding. Those are big deals, right? Absolutely. Huge. These are core areas that directly affect how reliable and useful these big language models are.

If Ernie 4.5 is genuinely better at these things, that means we can use it for way more real-world applications. And they're also claiming it beats GPT-4 on multiple benchmarks while costing a fraction of the price. We're talking roughly 75 cents and $2.20 per million tokens, which works out to around 1% of GPT-4's cost.

That's a massive difference. It is. That kind of cost advantage could be a game changer for businesses and individual developers out there. People who want to use powerful AI but can't afford the really expensive stuff. And then you have Ernie X1, which you said is their first reasoning model. What exactly does that mean? So Ernie X1 is built for more complex multi-step thinking, like deeper logic and understanding.

And the fact that it matches DeepSeek's R1 in capability, but again, at half the price, that's really significant. It's using a step-by-step thinking process for stuff like tough calculations and really understanding documents. It's not just about getting the answer, but showing you the logic behind it.
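For anyone who wants to sanity-check the cost claims above, here is a minimal back-of-the-envelope sketch. The Ernie 4.5 rates are simply the figures as quoted in the episode; the GPT-4 rate is a placeholder assumption rather than an official price, and the workload sizes are purely illustrative.

```python
# Hypothetical per-million-token cost comparison.
# Ernie 4.5 rates below are the figures quoted in the episode;
# the GPT-4 rate is an illustrative placeholder, not an official price.
ERNIE_45_INPUT_PER_M = 0.75    # USD per 1M input tokens (as quoted)
ERNIE_45_OUTPUT_PER_M = 2.20   # USD per 1M output tokens (as quoted)
GPT4_RATE_PER_M = 45.00        # USD per 1M tokens -- assumed placeholder

def workload_cost(input_tokens_m: float, output_tokens_m: float,
                  in_rate: float, out_rate: float) -> float:
    """Cost in USD for a workload measured in millions of tokens."""
    return input_tokens_m * in_rate + output_tokens_m * out_rate

# Example workload: 100M input tokens and 20M output tokens.
ernie = workload_cost(100, 20, ERNIE_45_INPUT_PER_M, ERNIE_45_OUTPUT_PER_M)
gpt4 = workload_cost(100, 20, GPT4_RATE_PER_M, GPT4_RATE_PER_M)
print(f"Ernie 4.5: ${ernie:,.2f}   GPT-4 (assumed rate): ${gpt4:,.2f}")
print(f"Ernie 4.5 comes to {ernie / gpt4:.1%} of the assumed GPT-4 cost")
```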

So big picture, what does all this mean for AI going forward? It feels like Baidu is, you know, really turning up the heat. Yeah, definitely. Their pricing strategy could lead to way more competition. We might see the big players rethinking their pricing, making AI more accessible to everybody. That could democratize access to powerful AI tools and really change what we see people building with it. Okay, let's shift gears. A judge just rejected Elon Musk's attempt to halt OpenAI's development. This is a big story, right?

Huge. And this ruling is really important. Now, Musk's concerns about AI going too fast and the potential risks, those are valid.

But the court basically said there's not enough legal ground to stop OpenAI right now. However, other parts of his lawsuit are still going forward, and there's going to be a fast-track trial in the fall. So this isn't over. No, it's not. It's interesting that OpenAI brought up these internal emails. Apparently, Musk himself was interested in merging OpenAI with Tesla as a for-profit. That's pretty different from what he's saying now. It is interesting to see that shifting perspective.

The heart of Musk's lawsuit is that OpenAI and Sam Altman, you know, they've strayed from their original mission. They were supposed to be a nonprofit focused on benefiting humanity. But now he says they're chasing profits.

OpenAI and Altman deny this, of course. They say their nonprofit structure is still there and any for-profit stuff supports that core mission. So the court's saying we're not going to stop AI development based on these claims right now, but we are going to look into whether OpenAI has stayed true to its original purpose. Exactly. And if we zoom out a bit, this ruling shows us how the legal system is trying to deal with regulating AI. There's this tension between encouraging innovation and dealing with the ethical concerns and risks.

it's really hard to regulate something that's changing so fast. Okay, let's talk about something positive. Researchers from Harvard and MIT made an AI agent called TxAgent specifically for personalized medicine. That sounds amazing. It is. TxAgent gives you super personalized treatment recommendations.

It uses multi-step reasoning and pulls from a massive, always-updating library of biomedical knowledge called ToolUniverse. This toolkit has 211 different tools and trusted data sources like OpenFDA, which has info on approved drugs, and Open Targets, which focuses on finding, you know, drug targets. 211 tools. That's incredible. What can this system actually do? It can analyze how different drugs might interact in your system.

It can flag any reason why a treatment might not be right for you based on your specific conditions. And it can recommend treatments in real time that are tailored to you. And it goes deep. It evaluates drugs not just on the outcome, but on the molecular level. Wow. How your body processes the drug. Pharmacokinetics, they call it. It takes into account your existing conditions, other meds you're on, your age, even your genes. So it's not just looking at the illness. It's looking at the whole patient. Exactly. It...

synthesizes evidence from all these sources and it keeps refining its recommendations as it learns more. Think of it as a super smart AI assistant for doctors, helping them fine-tune treatment plans for each individual patient. And the potential for personalized medicine is huge. Huge. TxAgent could really accelerate precision medicine. Doctors could make better treatment choices. Patients could have better outcomes and fewer side effects because the recommendations are so personalized.
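To make the tool-using agent pattern described here concrete, below is a minimal, hypothetical sketch of an agent that consults a small registry of tools (loosely analogous to the ToolUniverse idea) before recommending a treatment. None of the tool names, signatures, or data come from the actual TxAgent codebase; the OpenFDA-style lookups are stubs for illustration only.

```python
# Hypothetical sketch of a tool-using treatment-recommendation agent.
# Tool names and signatures are invented for illustration; a real system
# would query curated sources (e.g. drug-interaction databases) instead.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Patient:
    age: int
    conditions: List[str]
    current_meds: List[str]

def check_interactions(patient: Patient, drug: str) -> str:
    # Stub for a lookup against a drug-interaction source.
    return f"{drug} vs {patient.current_meds}: no interactions found (stub)"

def check_contraindications(patient: Patient, drug: str) -> str:
    # Stub for screening the drug against the patient's existing conditions.
    return f"{drug} vs {patient.conditions}: no contraindications found (stub)"

# Registry of callable tools the agent can draw on.
TOOLS: Dict[str, Callable[[Patient, str], str]] = {
    "interactions": check_interactions,
    "contraindications": check_contraindications,
}

def recommend(patient: Patient, candidate_drug: str) -> List[str]:
    """Gather evidence from every registered tool for a candidate treatment."""
    evidence = [tool(patient, candidate_drug) for tool in TOOLS.values()]
    # A real agent would let the language model choose which tools to call,
    # in what order, and keep iterating until the evidence is sufficient.
    return evidence

if __name__ == "__main__":
    patient = Patient(age=67, conditions=["chronic kidney disease"],
                      current_meds=["warfarin"])
    for line in recommend(patient, "ibuprofen"):
        print(line)
```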

Okay, now for something a bit concerning. It seems Google has developed a new AI model that's really good at removing watermarks from images. Doesn't this raise some big copyright issues? It definitely does. Reports say Google's Gemini AI model, the Gemini 2.0 Flash version, can erase watermarks.

even the tough ones from Getty Images. And it's not just removing the watermark, it's actually generating and editing the image underneath so well you can't even tell it was there. It fills in the gaps, everything. And this model is experimental with no restrictions on how it's used.

That sounds like a recipe for trouble in terms of copyright. That's the problem. The tech itself might be useful for things like image restoration, but being able to remove watermarks so easily when they're meant to protect ownership and prevent unauthorized use, it could lead to a lot of copyright infringement. So what does this mean for creators and the platforms that host their work?

It seems like this could really undermine how we protect intellectual property online. It challenges the whole idea of watermarks as the main protection. Content creators and platforms might need to find better, maybe more technologically complex ways to protect their work. And we might need new laws or guidelines to address how these powerful AI tools are used. Let's talk about AI in health care again.

We're seeing more and more AI nurses being used in hospitals, but it seems like human nurses aren't too happy about this. Yeah, there's definitely pushback. Hospitals are using AI for things like patient monitoring, managing meds, even some administrative tasks.

For hospitals, it makes sense. AI can give them constant data and automate routines, making things more efficient. But nurses are worried about losing that human touch, right? Exactly. They're concerned about the patient experience becoming less personal, less empathy, less nuanced judgment, all those things that are so important in healthcare. And of course, there are concerns about jobs and the ethics of letting machines take over parts of patient care. It seems like a tough balance to strike.

AI can help hospitals run better, but we can't replace the human aspects of care.

especially in a field as sensitive as healthcare. That's the key. The future of AI in healthcare is probably about finding ways for AI to support human healthcare professionals, not replace them. It's about finding that balance between efficiency and making sure patients are still getting the best possible care with that human connection. Okay, on to something completely different: an AI-driven system for finding new organic chemical reactions.

That's fascinating. Could have huge implications for science. Huge. Scientists have built an AI that can decode massive amounts of mass spectrometry data. We're talking terabytes of data to find brand new organic chemical reactions that we didn't even know existed.

Humans just can't process that much information manually. And this could impact so many fields. Absolutely. It could speed up drug discovery by helping us find new ways to create complex drug molecules. It could revolutionize material science, helping us discover new materials with amazing properties. And it could just generally speed up research in synthetic chemistry.

So we could see new medicines, new materials, better industrial processes, all thanks to AI analyzing these huge datasets. That's incredible. It shows how AI is changing how we do science. We can now process and understand information in ways we never could before. This could lead to breakthroughs in so many areas, from medicine to energy to industrial chemistry. Imagine how much faster research could go. Now, for a prediction that could completely change how we make software.

Kevin Weil, the chief product officer at OpenAI, thinks AI models will be better than human coders by the end of 2025. Faster, more accurate, even more innovative. That's a big claim. It is. He's basing this on the fact that AI coding assistants are already as good as or even better than human developers in certain areas, like optimizing code, debugging and automating repetitive tasks.

He thinks this trend is just gonna keep getting stronger. - If he's right, what happens to software developers? Will their jobs change completely? - Probably. They might have to focus on higher level stuff, like designing the overall architecture of software, managing the AI coding processes, and doing more strategic planning. Less writing code from scratch, more overseeing the AI that does it. And this could change who companies hire too. They'll want engineers who know how to work with and manage AI coding tools.

And we saw that report from the CEO of Y Combinator saying that a lot of their startups are already using AI to write most of their code. Yeah, almost a quarter of them say that 95 percent of their code is written by AI. And these are companies making real money with small teams. Weil calls it a democratizing effect.

AI could make it possible for anyone to create software, even if they don't have years of coding experience. That's an interesting thought. AI making software development accessible to everyone. What does that mean for the future of software? Who gets to build it? That's the big question. On a more hopeful note, let's talk about the progress being made with custom cancer vaccines.

This sounds like a major step forward for personalized medicine. It is. We're seeing incredible progress with mRNA-based cancer vaccines that are tailored to each patient's specific cancer. They train the immune system to recognize and attack the unique cancer cells in that person's tumor. And the clinical trials are showing some really promising results. They are. We're seeing tumors shrink and survival rates increase in patients who've gotten these personalized vaccines.

Big companies like Moderna and BioNTech are leading the charge and they're targeting

melanoma, lung cancer, and pancreatic cancer with this approach. It's almost hard to imagine. Could personalized cancer vaccines become the standard treatment for many types of cancer? That's the hope. If these trials keep showing success, it could mean much higher survival rates for cancer patients and maybe less need for chemo, which can be really tough on people. And there's hope that similar personalized approaches could work for other diseases too, not just cancer.

Before we wrap up, there were a few other interesting AI developments around March 18th. Oh yeah, definitely. Figure announced their new factory, BotQ, for building humanoid robots. They're planning to make a lot of them. That's a big step for robotics and automation. Patronus AI released a new multimodal large language model that acts like a judge, making sure multimodal AI apps are reliable. That's a cool idea.

Pika Labs added new effects to their AI video platform, making video creation easier for everyone. Sesame open-sourced their realistic AI voice tech demo, CSM-1B, for text-to-speech. That's a big contribution to voice synthesis. And Vogen AI made voice agents that can design and improve themselves, learning from real failures without any human input. So as you can see, the world of AI is exploding with innovation and disruption.

Cheaper, more powerful models, huge advances in medicine, and maybe even a complete change in how we make software. That's a lot to take in. It is. What does it all mean for you? What new tools will you be using? How will healthcare change? What skills will be valuable in the future? What are you most curious about? We've only scratched the surface today. Every one of these stories is worth exploring further. We encourage you to do just that. Yeah, definitely. And if you found this deep dive valuable, please consider making a donation.

You can find links in the show notes. Your support keeps AI Unraveled free for everyone. Absolutely. Also, if you want to reach a really engaged audience of professionals interested in technology and innovation, consider advertising here on AI Unraveled.

You can reach thousands of smart listeners. Check out the contact info in the show notes for more details. Sounds good. You know, thinking about that Google AI that can remove watermarks makes you wonder, what does it mean for the future of authenticity online? How can we trust what we see when AI can manipulate things so easily? Something to think about. It is. A lot to think about. It is. Until next time, this is AI Unraveled. Signing off.