
AI Daily News June 12th 2025: 🏭Nvidia to Build First Industrial AI Cloud in Germany 🧠 TBC Goes All-In on AI with Dia Browser 🎬Disney, Universal Sue Midjourney Over Copyright 📊NY Requires Disclosure of AI‑Related Layoffs in WARN Notices & more

2025/6/13

AI Unraveled: Latest AI News & Trends, GPT, ChatGPT, Gemini, Generative AI, LLMs, Prompting

People
Etienne Newman
Host
Podcast host and content creator focused on electric vehicles and energy.
Topics
Host: As the host, I note that New York State is implementing a new rule requiring companies to disclose whether layoffs are related to artificial intelligence or automation. The rule works by adding a simple checkbox to the mandatory WARN notices; while procedurally minor, it is symbolically significant, marking an important step toward greater technological transparency. The governor hopes to use it to better gauge AI's impact on the job market. If a company cuts jobs because of technological innovation or automation, it must specify whether AI, robotics, or another automation technology was responsible. Although no company has yet publicly attributed layoffs to AI under the rule, its very existence matters: it sets an example for other states and could pave the way for stricter regulations, such as requiring companies to fund retraining for displaced workers. In my view, the measure aims to increase transparency around AI's employment impact and to encourage discussion of workforce adaptation and retraining, which is essential to helping people adjust to a changing work environment.

Deep Dive

Transcript


Welcome to AI Unraveled. This is a new episode of the show, created and produced by Etienne Newman. He's a senior engineer and, I believe, a passionate soccer dad up in Canada. That's right. And before we really jump into this deep dive, just a quick reminder: please do like and subscribe wherever you're listening. It genuinely helps us out a lot. It really does. Okay. And welcome to the deep dive. So this is the part of the show where we take, you know, your sources, could be articles, research papers, notes you've gathered, and we try to cut through all the noise. Yeah. It gets straight to the important stuff. Exactly. Find those key insights, those nuggets of knowledge.

Think of it as, uh, well, maybe a shortcut to getting up to speed on some pretty complex topics. Saves you the reading time, hopefully. We try. So today we're diving into a whole collection of AI news items, all from one specific day: June 12, 2025. Right. A listener sent these in. A really interesting snapshot. Yeah, it covers a lot of ground. You've got policy changes, new products launching,

some pretty intense legal battles heating up. It's quite a mix. And looking at just one day like this, it's actually incredibly revealing. It gives you a real sense of the pace, how fast AI is moving, and just how broadly it's touching everything right now. Absolutely. So our mission today for this deep dive is to unpack all these different pieces, see what they tell us together about where AI is at and where it might be heading. Let's do it. Where do we start?

Okay, let's unpack this stack. First up, something hitting the workforce, specifically coming out of New York,

This source points to a new rule about job cuts. Ah, yes. The New York WARN Act change. It's pretty notable, actually. New York is the first U.S. state, I believe, requiring companies to specifically say if layoffs are tied to AI or automation. And how are they doing that? The source mentioned something pretty low-key. It is, yeah. It's basically just a little checkbox being added to those mandatory WARN notices, you know, the ones companies have to file 90 days before big layoffs. Just a checkbox.

Seems minor. Well, yes and no. I mean, procedurally, it's small, but symbolically, it's actually a pretty big step towards transparency, right? OK. It's part of Governor Kathy Hochul's broader strategy to try and get a handle on AI's impact. Companies literally have to tick a box now if technological innovation or automation was a factor. And they have to name the tech too. They do. They have to specify if it was AI or robotics or some other kind of automation. So it forces a bit more honesty, potentially. What are the sort of ripple effects people are thinking about? Well, legal experts are definitely watching this one. They see this kind of soft measure, it's just disclosure, it doesn't ban anything, but they see it as a potential first step.

Towards what? Stronger rules. Exactly. It could pave the way for regulations down the line that have more teeth. Maybe things like requiring companies to actually chip in for retraining workers whose jobs get automated. It's like the first signal flare for policy trends. Interesting. Has anyone actually ticked the box yet? Has any company admitted to AI driven layoffs under this rule?

That's the interesting part. According to the source, as of June 12th, no, not a single company had publicly blamed AI using this new checkbox. Really? Huh. What does that tell us?
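To make the mechanics concrete, here is a minimal, purely hypothetical sketch of the kind of record the new checkbox adds to a filing. The field names below are invented for illustration; they are not New York's actual WARN form schema.

```python
from dataclasses import dataclass

# Hypothetical illustration only: these field names are invented,
# not the actual New York State WARN form schema.
@dataclass
class WarnNotice:
    company: str
    employees_affected: int
    automation_related: bool = False   # the new checkbox
    technology: str = ""               # e.g. "AI" or "robotics", if checked

def count_ai_disclosures(notices: list[WarnNotice]) -> int:
    """Tally filings that attribute the layoff to AI specifically."""
    return sum(1 for n in notices if n.automation_related and n.technology == "AI")

notices = [
    WarnNotice("Acme Corp", 500),  # box left unchecked
    WarnNotice("Widget Co", 120, automation_related=True, technology="robotics"),
]
print(count_ai_disclosures(notices))  # 0, matching the "no one has ticked it yet" point
```

The point of the sketch is just how little the rule asks for, one boolean plus a label, and how easily a state could aggregate the answers.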

Are companies maybe, like, hesitant to admit it? Worried about the PR hit? That's a really good question. And it's what this source makes you wonder. I mean, there's definitely a strong possibility of underreporting. You could imagine the headlines. Totally. "Company X lays off 500, blames AI." That's not great press. So, yeah, they might look for other ways to frame those workforce reductions, you know.

restructuring or something vaguer. It makes sense from their perspective, I guess. It does. But look, even if companies aren't using it much yet, the fact that this requirement now exists in New York is still really significant. It sets a precedent. Ah, so other states might copy it. Could happen. Other states facing similar shifts in their workforce might look at New York and think, okay, that's one way to start tracking this. So for you listening, why this matters? Well,

It's about finally getting some transparency on AI's impact on jobs, even if it's imperfect. It puts the issue out there. Right. And it could spark more conversations, maybe more action around how we help the workforce adapt, how we retrain people as AI keeps changing the game. And that adaptation piece is so critical. Learning new skills is just essential now.

Absolutely. It's not just about knowing about AI. It's about knowing how to work with it or even build with it. Definitely. And, you know, for anyone listening who's thinking about boosting their own skills in AI, there are actually some great resources out there. Etienne Newman, who produces this show. Right. He's put together quite a bit. Yeah. He has a whole series of AI certification prep books. Things like, let me see, Azure AI Engineer Associate, Google Cloud Generative AI Leader Certification.

AWS Certified AI Practitioner Study Guide, Azure AI Fundamentals, Google Machine Learning Certification. Quite a list. Covers the major platforms. And there's also something called the AI Unraveled Builders Toolkit.

which sounds like it's more hands-on tutorials, guides, even audio and video stuff to help you actually start building things with AI. Yeah, practical tools. We can put links in the show notes for those. They're all available on his site, djamgatech.com. Perfect. So yeah, resources are out there at djamgatech.com if you want to empower yourself, adapt, thrive in this changing job market. Okay, let's pivot a bit. Okay. From policy and jobs to how AI is, well, causing some friction in places we usually trust for information.

First one that caught my eye involved Wikipedia. Ah, the AI summaries experiment. Yeah, that didn't last long. They paused it pretty quickly. What happened there? Why the quick stop? It was basically immediate pushback from the core community, the volunteer editors who actually build Wikipedia. Ah, the humans. The humans, exactly. The Wikimedia Foundation hit

pause because the editors raised some really serious concerns. They were worried about factual accuracy, first and foremost. Can the AI get it right? Right. And beyond that, could these summaries damage Wikipedia's reputation for reliability? Does AI content sort of

devalue the human effort that goes into curation? So it wasn't just, is the tech good enough? It was deeper than that, about the whole Wikipedia model. Precisely. The editors were worried about bias creeping in, violating their whole neutral point of view principle, the NPOV. And fundamentally, does this kind of automation undermine the collaborative editing process that, you know...

is Wikipedia. That's fascinating. It really throws that tension into sharp relief, doesn't it? Automation versus like trusted human process. It really does. Even as AI gets better, that human oversight, that judgment, especially for something built on trust like Wikipedia, is still seen as absolutely crucial. Quality and integrity, yeah. And look, this tension isn't just on Wikipedia. It's hitting the broader media landscape too, especially news organizations. Right. Our source mentioned news sites are getting, quote, crushed.

by Google's AI tools. Yeah. Specifically the AI overviews in search. Yeah, that's a huge deal for online publishing. The economics are getting seriously shaken up. How so? Because people get the summary and don't click the link. That's exactly it. If Google's AI gives you a nice summary right there on the search results page, why click through to the original article? And fewer clicks mean less traffic. Which means less ad revenue, fewer potential subscribers,

It hits their bottom line directly. Ouch. Yeah. So it's really disrupting their business model and it's making those calls for compensation much louder. Publishers are saying, hey, you're training your AI on our content, our reporting, and then using it to bypass us. They want a framework for getting paid for that value. So something for you, the listener, to maybe think about next time you use one of those AI overviews.

Handy as they are, consider where that info came from and the impact on the original source. Definitely raises big questions about, you know, how sustainable online journalism and content creation will be if this trend continues. Okay, let's shift gears again. Let's talk legal battles. Seems like some big names in entertainment are taking on AI image generators now. That's right. Disney and Universal are going after Midjourney. It's a copyright infringement lawsuit. Okay, what's the main argument? What are they claiming? They're alleging that Midjourney basically vacuumed up vast amounts of their copyrighted materials, specifically mentioning their famous characters, to train its AI image model.

Without permission, obviously. Right. The source notes that Midjourney's founder has apparently admitted, maybe somewhat casually, to scraping images from across the Internet without getting licenses or permission from copyright holders. That training data is the absolute core of the lawsuit. You're saying the AI learned by infringing. Essentially, yes. That Midjourney built its capability on their protected intellectual property. And what do the studios want out of this? Just money?

Well, money, yes, damages, but they also want injunctions. That means legally forcing Midjourney to stop using their copyrighted stuff for training and maybe even stop generating images based on it. And the lawsuit says Midjourney isn't stopping. According to the suit, yeah. They allege Midjourney is basically refusing to change its practices and is even releasing newer AI models that are getting better at recreating their characters in really high detail. Wow.

This feels like it could be a really important case. Oh, absolutely. A landmark case potentially. How this shakes out could set huge precedents for copyright law in the age of AI. How do you handle training data? What are the rights around AI generated outputs based on existing IP? Big, big questions for media, art, everything. Definitely one to watch.

Okay, let's move from the courtroom to new tools actually hitting the market. Our sources mention a new web browser that's going big on AI. Yeah, TBC, The Browser Company, launched the Dia browser, and they're describing it as going all-in on AI. All-in how? What does it actually do differently? Well, instead of AI being like a separate chatbot you open, Dia weaves it right into

the browser itself, like into the URL bar. So you type prompts there instead of just web addresses. Exactly. You can chat with your open tabs, ask for summaries of pages instantly, even draft emails or documents right there in your browsing flow using the AI. So the AI knows what you're looking at. It has context. Precisely. It's context-aware. It can apparently analyze stuff across multiple tabs, help you synthesize information. It can even learn your writing style, supposedly, to help you draft things. Hmm.

And it has these things called skills, basically specialized AI agents for tasks like shopping or coding, maybe research. And these agents remember context from your history and tabs to be more helpful. OK, that sounds very much like this idea of agentic AI we keep hearing about. AI that doesn't just answer questions, but actually does things for you. It really leans into that. Yeah. Acting more like an assistant that understands your goals and can take multiple steps.

Beta access just opened for existing Arc users on Mac, by the way. And they mentioned privacy. They did. Emphasizing local data encryption and quick deletion of data from their servers after it's processed. Trying to address those concerns up front. Interesting. It really shows how...

that core tool, the browser, might be fundamentally changing, becoming less of a passive window and more of an interactive AI-powered workspace. Totally. And speaking of making AI work for you, the sources also had a piece on connecting existing AIs like Claude to other software you use. That ties back to the agentic idea again, building systems. So for you listening, what's the deal here? How can you make Claude talk to, say, Google Docs?

So the main point is that these AIs, like Claude, are increasingly being designed not just to chat, but to integrate, to connect to your other tools, Google Workspace, Slack, Notion, whatever you use. So Claude becomes like a central hub. Kind of. It moves beyond just being a place you ask questions to being a component in a larger workflow. You can potentially tell Claude to do something in another app.

Like, "Hey, Claude, create a new Google Doc summarizing this webpage." Exactly like that. Or draft a Slack message based on this email. It acts within those other apps based on your instructions by connecting through tools like Zapier, which the source mentioned. Zapier acts as the bridge. Got it. So you don't have to copy and paste between everything. The AI can orchestrate tasks across different platforms. That's the promise.

It makes the AI potentially much more powerful as a productivity tool. It's not just about talking to the AI. It's about the AI working across your digital environment.
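One common way to wire this up, and the one the hosts allude to with Zapier, is a webhook bridge: the assistant produces a structured result, your code posts it to a Zapier "Catch Hook" URL, and a Zap routes it on to Google Docs, Slack, or wherever. Here is a minimal sketch under those assumptions; the hook URL and the payload field names are placeholders for illustration, not a real API contract.

```python
import json
import urllib.request

# Hypothetical sketch: forward an assistant's output to a Zapier webhook.
# The URL below is a placeholder; Zapier issues the real one when you
# create a "Webhooks by Zapier" catch-hook trigger.
ZAPIER_HOOK_URL = "https://hooks.zapier.com/hooks/catch/YOUR_HOOK_ID/"

def build_payload(action: str, title: str, body: str) -> dict:
    """Package an AI-generated result so a downstream Zap can act on it.
    The field names ("action", "title", "body") are our own convention."""
    return {"action": action, "title": title, "body": body}

def send_to_zapier(payload: dict) -> int:
    """POST the payload as JSON to the webhook; returns the HTTP status code."""
    req = urllib.request.Request(
        ZAPIER_HOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    summary = "Key points from the page..."  # imagine the assistant produced this
    payload = build_payload("create_google_doc", "Page summary", summary)
    print(json.dumps(payload))
    # send_to_zapier(payload)  # uncomment once you have a real hook URL
```

The design point is that the AI never touches Google Docs directly: it only emits a structured payload, and the automation platform owns the credentials and the app-specific steps.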

If you're interested in the nitty-gritty, the how-to of setting that up, we'll make sure to drop some links and resources in the show notes. Yeah. Learning how to actually build these connections, these workflows, that's becoming a really valuable skill, leveraging these tools effectively. It really underscores that need for practical AI skills again.

Just to reiterate, if you are looking to build those skills, maybe get certified, those resources from Etienne Newman, we mentioned the AI Unraveled Builder's Toolkit with tutorials and guides, or his certification prep books. Right, covering Azure, Google Cloud, AWS AI. They're designed for exactly that, helping you get hands-on or get certified. You can find them at djamgatech.com. Definitely worth checking out if you want to level up your AI game. Okay, let's shift focus one more time.

Let's look at the really foundational stuff the infrastructure AI runs on and the cutting edge research pushing the boundaries. First, some infrastructure news from NVIDIA. Right. NVIDIA announced they're building what they're calling the first industrial AI cloud, and they're building it in Germany. Industrial AI cloud. What's the significance of that location and focus? Well, it's a huge investment in computing power, but specifically tailored for European manufacturing and research.

It's seen as a big strategic play. How so? It helps boost Europe's AI sovereignty, meaning less reliance on, say, U.S.-based cloud infrastructure. And it's aimed squarely at accelerating the digital transformation of Europe's industrial base, think factories, supply chains, R&D.

Having that kind of powerful domestic compute is really critical for large-scale AI in the region. Got it. And on the research side, Meta made some noise with something called a world model. Yeah. Meta launched this really comprehensive world model. The explicit goal here is to push forward robotics and self-driving car technology.

OK, unpack "world model" for us. What is that? Think of it as trying to teach an AI basic physics. Like, how does the world actually work? How do objects move, fall, bump into each other, interact? Like intuitive physics for AI. Exactly. Meta's model, it's called V-JEPA 2, is massive.

It's got 1.2 billion parameters, and they trained it on over a million hours of video, just watching things happen, to learn about gravity, momentum, object permanence, all that stuff. A million hours of video. Wow. And did it work? Can it, like, use this knowledge? They reported pretty solid results. They tested it on complex robot tasks, like picking up and placing objects it hadn't seen before in new settings. It achieved success rates in the 65 to 80 percent range. And it does this using visual goals, planning out multiple steps. OK, now the source mentioned a surprising detail about how fast it runs. Yeah, this was pretty eye-catching. Meta claims V-JEPA 2 runs 30 times faster than Nvidia's main competing model,

which is called Cosmos, while apparently still achieving state of the art results on video understanding benchmarks. 30 times faster. That's huge if true. It's a very big claim. If it holds up, it makes these complex models much more practical for real time applications like robots or cars that need to react quickly. And there was one other interesting comparison they made. Something about humans. Right. They also put out new benchmarks showing how well humans do on similar physical reasoning tasks.

And no surprise, humans score much higher, like 85 to 95%. Okay, so AI is getting better.

but still has a long way to go to match our intuition about the physical world. Precisely. It highlights the current gap. While AI is making strides, it's still nowhere near human-level common sense when it comes to just, you know, understanding how stuff works physically. So for you listening, why this research matters, well, models like this are key for safer, more capable robots and autonomous systems. But that human benchmark...

It's a good reality check on where AI still needs fundamental breakthroughs. Lots of work still to do. Okay, let's wrap things up with just a few quick hits. These are other bits of news from our June 12th sources that really show the sheer volume and variety of what's happening. Yeah, it was a busy day, just rapid fire. Sam Altman talked about OpenAI's first open weight model being delayed, but promised it would be very, very worth the wait. Okay.

Apple execs were defending their cautious AI rollout, saying they held back AI features for Siri to ensure quality first. Meta's AI app. It's getting new AI video editing tools. Apparently you can change outfits, locations, lighting in videos just using text prompts. Pretty wild. Yeah. Mistral, the European AI company, launched Mistral Compute, basically their own AI infrastructure offering, an alternative to the big cloud players. Right.

Windsurf gave its coding assistant Cascade web awareness, so it can now pull real-time info from the internet to better help developers. Starbucks is testing an AI tool called Green Dot Assist to help out their baristas. And Midjourney, back to them, launched a video ranking feature. They're asking users to rate

AI-generated video clips to help train their upcoming video model. Man, just look at that list. Yeah. From coffee shops and coding tools to creative video effects, fundamental infrastructure, big tech updates. It really hits home how AI is just...

everywhere advancing on all these fronts at once. It really does. All these different developments just from one day's worth of news, they underscore how deeply AI is weaving itself into policy, into creative work, into our daily tools and into basic science all simultaneously. There is just so much happening.

Keeping up, understanding these shifts, it feels more important than ever, right? Whether it's for your career, the tools you use, or just navigating the world. Absolutely. And like we've touched on, building your own understanding, your own skills in this space is key. And again, if that's something you're focused on,

Those resources from Etienne Newman, the certification prep books covering Azure AI Engineer, Google Cloud Generative AI Leader, AWS Certified AI Practitioner, Azure AI Fundamentals, Google Machine Learning. And the AI Unraveled Builders Toolkit with the tutorials and guides. Right. They're there to help you deepen that understanding or get

practical skills. Check them out at djamgatech.com. We'll have the links in the show notes, designed to help you certify or start building. Good resources to have. So as we close out this deep dive on the AI news of June 12th, 2025, maybe here's a final thought for you to mull over. Looking at everything we discussed, the job transparency rules, the copyright lawsuits, these new integrated AI tools like Dia, the agentic systems, the foundational research like Meta's world model,

Which one of those threads do you think will actually have the biggest, most direct impact on your own life or your work over the next year or so? It's pretty amazing to think how much future change is packed into just one day's news cycle, isn't it? It really is. And that wraps up this deep dive into the AI news from June 12th, 2025.