
AI Daily News March 14: 🤖OpenAI Pushes for Federal Shield in AI Action Plan 🧠Cohere’s New Efficient Enterprise AI Model 🌐Gemini Taps into Google History with Personalization 📚OpenAI Urges U.S. to Allow AI Models to Train on Copyrighted Material

2025/3/14

AI Unraveled: Latest AI News & Trends, GPT, ChatGPT, Gemini, Generative AI, LLMs, Prompting

People
Etienne Newman
Topics
Etienne Newman: OpenAI is calling on the U.S. government to create a federal AI action plan to address the fragmentation of state-level regulation and competition with China. They argue that a unified national AI strategy is essential to maintaining American competitiveness in AI, with particular emphasis on data access and intellectual property protection. They also recommend banning AI models like DeepSeek that are considered state-controlled and a security risk, in order to protect national security. OpenAI: We need a federal AI action plan to address the patchwork of state-level regulations, which is holding back American AI development and putting us behind China. We need a unified national strategy that includes national AI infrastructure, guaranteed access to AI for everyone, and a consortium focused on AI ethics and security. In addition, we need to strike a balance between data access and intellectual property protection, and ban state-controlled AI models like DeepSeek, which pose a significant security risk. OpenAI: We believe a comprehensive federal AI action plan is urgently needed to keep the United States competitive in artificial intelligence and to meet the challenge from China. The plan should include building national AI infrastructure, ensuring equitable access to AI technology, and establishing a body dedicated to AI ethics and security. We also recommend regulating models like DeepSeek that pose national security risks.

Chapters
OpenAI advocates for a federal AI action plan in the US, emphasizing the need for a national AI infrastructure, equitable access, and a consortium focused on AI ethics and security. The statement highlights the global competition with China and the complexities of balancing AI development with data protection and intellectual property rights.
  • OpenAI's proposal for a US federal AI action plan
  • Focus on national AI infrastructure and equitable access
  • Concerns about state-level regulations hindering progress
  • Mention of China's data advantage and the need for a unified national strategy
  • Debate around data usage and copyright laws

Transcript


Welcome to AI Unraveled, the podcast that, well, unravels AI, I guess. It's brought to you by Etienne Newman, senior software engineer and a busy soccer dad from Canada. And if you like these deep dives into the world of AI, be sure to hit that like button and subscribe on Apple Podcasts.

All right. So today we're not tackling a huge topic like usual. We're focusing on just one day, March 14th, 2025. That's right. We're going to see just how quickly things are moving in AI. We've got a whole bunch of news and developments from that day. And we want to figure out what they all mean together. You know, the big picture.

But most importantly, what it means for you. Think of it like we're looking at a single frame from a movie. It's a movie that's moving super fast. But this one frame gives us a glimpse of what's going on. And on March 14th, there was a lot going on. We're talking about government policy, international competition, new tools for businesses, personalization getting even more personal, security risks. Oh, and AI is changing how the whole world does business, too. So, yeah, a lot to unpack.

Where do we even start with all of that? How about something big? OpenAI made a public statement asking for a federal AI action plan in the U.S. They want the government to get more involved.

Interesting. It is, isn't it? And they're not just asking for anything. They're being very specific. They want a national AI infrastructure. So basically the computing power you need for advanced AI. They also want to make sure everyone has access to AI. It can't just benefit a select few. Right. Makes sense. And get this. They want a special group, a consortium focused on the ethics and security of AI. So they know this stuff is a big deal and needs careful thought.

Oh, and they brought up China too, didn't they? They said all these different state-level AI laws could actually slow down progress in the U.S. and put us behind China. Yep, 781 different state laws. It's a lot to keep track of. OpenAI basically said we need a unified national strategy if we want to stay competitive. And they brought up a really good point about data.

You see, China has unfettered access to data and that's a huge advantage for them. That's a pretty big concern. It is.

Especially if you look at how U.S. copyright laws are applied. There's a real question there. How do we protect intellectual property while still encouraging AI development? It's a tough balance. Speaking of tough, OpenAI also said we should ban models like DeepSeek. They said it's state-controlled and a security risk. Yeah, that got people's attention. It basically says that national security is tied up with how AI develops and who controls it.

So to sum it all up, it sounds like we could see the government getting a lot more involved in AI. That's what it looks like. More funding, clearer rules, a national strategy, the whole nine yards. It also shows just how important AI is becoming on a global scale and how technology and national security are, like, intertwined.

So from big policy stuff to something businesses can use, Cohere launched their new AI model, Command R+. Right. This seems like a step towards AI tools that are more specialized. What do you mean by specialized?

Well, instead of trying to make an AI that can do everything, Command R+ is designed for specific things businesses need to do, like summarizing documents, handling information in multiple languages, answering questions, and processing data securely. It's like having a team of AI experts, each really good at one thing. And this shows that the AI market is getting more mature. Companies are focused on solving specific problems and making AI fit seamlessly into their workflows.

So for businesses, this means more tailored AI solutions to help them work better. Exactly. Imagine AI that can handle customer service, manage knowledge within a company, and analyze data way better than before. It makes me wonder if we'll see even more of these specialized AI models. It wouldn't surprise me. By focusing on one thing, these models can get really good at it. I mean, much more accurate and efficient, and that's what businesses are looking for. All right, let's move on to something that affects everyone, even if they're not into AI: personalization.

Google's Gemini is using your past activity to personalize things even more. It's pretty wild, right? Google's taking all that data they have about you and putting it to work. They have this new feature they're testing called, get this, Gemini 2.0 Flash Thinking.

Basically, it figures out when using your personal data would give you a better result. So it's like Google knows what you want before you even ask. Kind of. They're starting with your search history, but they plan to use photos and YouTube too. It's getting very, very personal. Sounds cool, but also a little creepy. What about privacy? That's the big question, isn't it? But Google's added some controls. You can choose to opt in, disconnect your history, and there's an age restriction so only people over 18 can use it.

It shows they know they're dealing with sensitive stuff. They're even giving free users access to some of these new features, like Gems and Deep Research. It seems like they want to make this kind of personalization available to everyone. So this could mean more tailored AI for you, but it also means you need to think about your privacy settings. It's great that AI is advancing so quickly.

But there are some downsides, too. Remember that story about the Google engineer who was accused of stealing trade secrets? Yeah, that was a big deal. It shows how valuable AI knowledge is these days. We're talking about the designs of data centers, blueprints for AI chips, stuff like that. And it wasn't just any theft. It was for companies in other countries. Exactly. Which adds another layer of complexity. It's not just about companies competing. It's also about national security and global economic leadership.

So what happened with that engineer shows just how high the stakes are in the world of AI. It's like countries are fighting over the blueprints for the future, right? That's a good way to put it. And this links back to what we were talking about earlier with OpenAI and China.

Losing a technological edge can hurt not just one company, but a whole country. So we need strong security measures to protect these AI advancements. Then there was that report about foreign hacking groups using Google's Gemini to plan cyber attacks. That's unsettling. It shows that AI can be used for good or bad. The same technology that can help us can also be used by people with bad intentions.

These hackers are using Gemini to write malicious code, find weaknesses in systems, and even research their targets. It's a serious problem. So what do we do? We need to get better at cybersecurity.

It needs to be as sophisticated as the AI it's trying to protect us from. This means we all need to be careful and make sure our systems are secure. But hey, it's not all doom and gloom. AI is also showing up in our everyday tools, like the updates Microsoft made to Notepad and the snipping tool in Windows 11. Those are small changes, but they can make a big difference for people who use those tools every day. What kind of changes?

Well, Notepad now has AI that can summarize text for you, which is a huge time saver. And the snipping tool can straighten out shapes you draw, which is great for people who need to annotate screenshots. So AI is not just for futuristic stuff.

It's already making the tools we use every day better. Exactly. It's happening gradually, but it's making things more efficient for everyone. And that's a good thing. Okay, let's zoom out again. March 14th also had some news about robotics, drones, and AI being used in global supply chains. That's right. This is a huge area where AI can make a big impact. We're talking about a world where goods are moved around the world much faster and more efficiently, all thanks to AI. Wow.

So less human error and faster deliveries. That's the idea. Imagine AI predicting problems, finding new routes automatically and managing inventory in real time. That would change everything. What about the people who work in those industries? That's a valid concern. There will be some disruption to the workforce, but it also means new opportunities. People will need to learn new skills and adapt to this new way of working. So AI is going to reshape global commerce. Absolutely.

It's going to change how we buy and sell things, potentially even making things cheaper and more readily available. And it'll definitely have an impact on the future of work. All of these changes bring up another topic, regulation. Lawmakers in Illinois are concerned about how AI is being used in health care. Health care is super important, so it makes sense that they're taking a close look at how AI is being used.

Patient safety, data privacy, the accuracy of diagnoses, and the potential for bias in algorithms are all huge concerns. They want more transparency and accountability. They do, and rightfully so. This is about building trust and making sure AI is used responsibly in healthcare. We might even see new regulations coming out of all this. Meanwhile, OpenAI is asking the U.S. government to let them train AI models on copyrighted material.

Yeah, that's a tricky one. OpenAI says they need access to this stuff to make AI better, especially when it comes to creative writing and understanding language. But what about the people who own that copyrighted material? That's the problem. Creators and copyright holders are worried about their rights and whether they'll be compensated fairly.

This is a debate that's not going away anytime soon. It's a clash between innovation and protecting what already exists. Exactly. And it's a tough one to solve. We can't forget just how fast AI is moving. March 14th was a busy day for announcements about new models and applications. It was like every hour there was something new.

Ai2 released their OLMo 2 32B model, which is supposedly better than some of the big ones out there but uses less computing power. That's a big deal for open-source AI. And Microsoft and Xbox introduced Copilot for Gaming. So now AI is helping you play video games. It's not just about work anymore. Alibaba launched their AI assistant called New Quark. Google added YouTube support to their Gemini API. And there was even a prediction from Andrej Karpathy that websites will have to be redesigned for LLMs. And let's not forget about Insilico Medicine getting tons of funding to develop AI-powered drug discovery.

OpenAI even said they have an AI that's really good at creative writing. It seems like there's no limit to what AI can do. And to top it all off, Google DeepMind is working on using AI to control robots. I mean, come on. The future is now. It really is. And it's not just happening in labs. It's everywhere. Over half of American adults have already used an AI chatbot. And in China, AI is booming with everything from chatbots to AI toys. And even with all this progress, OpenAI is still worried about security.

They're still saying that models like DeepSeek are state-controlled and pose a risk. It goes to show this is a global issue with serious implications. For you, it means new AI-powered tools and applications are coming out all the time, and they're going to change every part of our lives, from healthcare to how we shop to how we play games. Whew, that was a lot. And remember, all of this happened in a single day.

If you're finding these deep dives helpful, please consider supporting the podcast. You can find donation links in the show notes. Every little bit helps us keep AI unraveled free for everyone. We really appreciate your support. So today we looked at a snapshot of AI on March 14th, 2025.

We saw policy debates, new business tools, more personalized AI, security risks, AI in everyday software, global supply chains changing, new regulations being discussed, and a ton of new technology. It was a whirlwind. And this was just one day. The world of AI is constantly changing. It is. Now here's something for you to think about. With AI becoming so important globally and being used in more and more parts of our lives,

How do you think society will change in the next few years? It's a big question. A big question with a lot of possible answers. And if you're looking to reach a large audience of professionals who are interested in the future of technology, consider advertising on our podcast. You can reach thousands of people who want to understand the innovations that are shaping our world. Thanks for listening to our deep dive. We'll be back soon to explore more of the fascinating world of AI.