People
Host
A podcast host and content creator focused on the electric vehicle and energy sectors.
Topics
I've observed that the use of ChatGPT by American teens to help with schoolwork continues to rise. Pew Research Center data shows the share of students using ChatGPT for homework has grown from 13% last year to 26% this year, and that figure is probably an underestimate. Teens' attitudes toward AI in learning are also shifting: acceptance of AI for researching topics and solving math problems has grown, while acceptance of using it for writing remains low. Notably, in 2024, Black and Hispanic students used AI for homework at higher rates than White students, and the influence of household income has weakened.

In addition, a study conducted in Nigeria shows that using AI as a supplemental tutor under teacher guidance can significantly improve students' English scores, with benefits reaching all students and the largest gains going to girls who had been lagging behind. The findings have sparked broad discussion about whether AI will completely transform teaching and learning, though views differ.

On the TikTok ban, I think this is not just a data privacy issue; the more important concern is the national security risk posed by its advanced AI recommendation algorithm. TikTok's powerful recommendation engine can precisely serve users content that interests them, which is effective at capturing attention but can also intensify filter bubbles and even be used for political propaganda. If AI technology advances further and the cost of generating personalized content falls, a state like the Chinese government could use the TikTok platform to push tailored content to millions of users for political ends. That raises the risk of a data privacy issue turning into a national security issue, and could prompt an internet bill of rights or a shift in how society views privacy.

For now, the U.S. government has upheld the TikTok ban, though the exact timing of enforcement remains unclear. Meanwhile, speculation has swirled about potential buyers, including Elon Musk, Kevin O'Leary, and MrBeast. TikTok itself has not yet commented.

Shownotes Transcript

Today on the AI Daily Brief, is the TikTok ban actually about AI? Before that on the headlines, educational use of ChatGPT is on the rise. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. To join the conversation, follow the Discord link in our show notes. Welcome back to the AI Daily Brief Headlines Edition, all the daily AI news you need in around five minutes.

Today, we have a really interesting topic to kick us off. It has to do with a bunch of research that's come out recently. And the TLDR of it is that AI use in schools shows absolutely no sign of slowing down, according to new research from the Pew Research Center. In a follow-up to a survey conducted at the end of 2023, Pew asked 1,400 American teens whether they were using ChatGPT for homework or school assignments. 26% said they had, twice the number from the previous year.

I find it very hard to believe that only 26% of teens are using ChatGPT for homework, regardless of what the average school policy is. But here we are. I think if you're just looking for trend lines, the Pew Research Center found twice as many people admitting to using it as last year. Now, the 2023 survey had asked teens whether they had heard of ChatGPT, with about two-thirds, 67%, saying they had. Now, 78% of students have heard of ChatGPT, and the number who have heard a lot about it is up by 50%.

A big portion of the Pew survey is attempting to track attitudes towards AI use in schools as it changes. 69% of teens believe it's acceptable to use AI to research new topics, which is up from 54% last year. 39% believe it's okay to ask AI to solve math problems,

up from 29% last year. Unless those folks have access to o1 or o3, which no one really has access to, I might be a little concerned for them. Of course, essay writing remains the biggest taboo, with only 20% believing it's acceptable this year, up from 18% in 2023. To me, that's a real "the kids are all right" statistic. There are obviously questions with just handing the entirety of your writing over to ChatGPT, so I think that's a positive number. There is definitely still room for clearer guidance on AI use.

The fact that 15% of students are not sure whether it's okay to use AI to research new topics suggests that we should be doing better in schools at clarifying which use cases are and are not okay. One of the other interesting changes between 2023 and 2024 was that in the 2023 survey, there was a stark demographic divide in AI use. Household income was the main determinant of whether students had heard of ChatGPT, and Black students were less likely to be using the technology than White and Hispanic students. Now, in 2024, Black and Hispanic students are far more likely to say that they have used AI for their schoolwork than White students are, and household income is a smaller factor.

One person who's been spending a lot of time following the use of AI in schools is Wharton professor Ethan Mollick. This week, he shared a study conducted by the World Bank on the use of AI as a supplemental tutor in Nigeria. Students were given access to GPT-4 for a six-week intervention focused on English skills. Teachers provided guidance and initial prompting assistance throughout.

The big headliner statistic is that the students who worked with an AI tutor saw improved test results equivalent to two years of typical learning. The World Bank compared these results to their entire database of educational intervention studies and found that it was in the top 20% for effectiveness, even though it was only a six-week program. What's more, the program benefited all students, not just the high achievers. Girls who were initially lagging behind boys in test performance benefited disproportionately, allowing them to catch up.

What's more, benefits scaled with days of attendance and didn't taper off, suggesting that an even longer program would yield further benefits.

Now, Ethan's post on this went wildly viral, but not for all the good reasons. As Ethan pointed out, the fact that this is teacher-led is likely very important. We know that independent use of AI as a tutor can harm learning in some circumstances because it gives the illusion of learning. That said, by and large, the conclusion was similar. John Domingue writes, it's now obvious that Gen AI will completely transform teaching and learning. Grimes tweeted and said, I think we might be about to enter an educational renaissance.

OpenAI's Roon said, I have a very different opinion on that. Maybe we'll get into it at some point. But in any case, it does feel like a big deal and a really interesting study. And I expect that we're going to hear a lot more about AI in education in 2025.

Now, moving over to the industry side of things, after a fresh round of controversy, beleaguered Apple, at least when it comes to AI, is pulling the plug on AI notification summaries for news services. The AI notification summaries were one of the main features of the lackluster rollout of Apple Intelligence last year.

The idea was that Apple Intelligence could condense multiple text messages or app notifications into a brief overview. Users quickly began mocking the feature, claiming it was stiff and wooden at best and horribly inaccurate at worst. These issues hit a whole new level when applied to news stories. A few weeks ago, the BBC complained their reporting was being completely mangled by the AI, leading to misleading headlines. In one prime example, Apple Intelligence misrepresented the news by stating that accused murderer Luigi Mangione had shot himself.

Apple said they were aware of the issue and would make changes to clarify when AI summaries were being used. The company now, though, has gone one step further and completely disabled the feature for news sources and some other apps. AI summaries will also now be displayed in italics, and Apple will inform users when enabling the feature that it's still in beta and could be misleading. Apple is truly adrift at sea right now when it comes to AI. Meanwhile, ex-OpenAI CTO Mira Murati has been making moves. Her mystery AI startup has made its first hires.

When she left the company back in September, her only comment about what would come next was that she wanted to, quote, create the time and space to do my own exploration. A month later, we started to hear rumors that she was courting VCs to raise a $100 million seed round for a new startup. Reporting said the company would build AI products based on proprietary models, which tells us exactly nothing. Now, we still don't know much about what Murati's startup will do, but rumors are emerging that they're staffing it up with high-profile AI researchers.

Wired reports that Jonathan Lachman, previously head of special projects at OpenAI, has joined the company. Their sources say that around 10 researchers and engineers have now been poached from labs including OpenAI, Character AI, and Google DeepMind. However, those sources also noted that the company still doesn't have a name nor a clear product direction.

We should start a community poll, maybe a bingo board like Cabin in the Woods. My vote, it's got to be something to do with agents, right? Anyways, guys, that is going to do it for today's AI Daily Brief Headlines Edition. Next up, the main episode. Today's episode is brought to you by Vanta. Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex.

That's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up to 5x faster, and proactively manage vendor risk. Vanta can help you start or scale up your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly.

Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time.

For a limited time, this audience gets $1,000 off Vanta at vanta.com slash nlw. That's v-a-n-t-a dot com slash nlw for $1,000 off.

If there is one thing that's clear about AI in 2025, it's that the agents are coming. Vertical agents by industry, horizontal agent platforms, agents per function. If you are running a large enterprise, you will be experimenting with agents next year. And given how new this is, all of us are going to be back in pilot mode.

That's why Superintelligent is offering a new product for the beginning of this year. It's an agent readiness and opportunity audit. Over the course of a couple quick weeks, we dig in with your team to understand what type of agents make sense for you to test, what type of infrastructure support you need to be ready, and to ultimately come away with a set of actionable recommendations that get you prepared to figure out how agents can transform your business.

If you are interested in the agent readiness and opportunity audit, reach out directly to me, nlw at bsuper.ai. Put the word agent in the subject line so I know what you're talking about. And let's have you be a leader in the most dynamic part of the AI market. Hello, AI Daily Brief listeners. Taking a quick break to share some very interesting findings from KPMG's latest AI Quarterly Pulse Survey.

Did you know that 67% of business leaders expect AI to fundamentally transform their businesses within the next two years? And yet, it's not all smooth sailing. The biggest challenges that they face include things like data quality, risk management, and employee adoption. KPMG is at the forefront of helping organizations navigate these hurdles. They're not just talking about AI, they're leading the charge with practical solutions and real-world applications.

For instance, over half of the organizations surveyed are exploring AI agents to handle tasks like administrative duties and call center operations. So if you're looking to stay ahead in the AI game, keep an eye on KPMG. They're not just a part of the conversation, they're helping shape it. Learn more about how KPMG is driving AI innovation at kpmg.com slash US. Welcome back to the AI Daily Brief.

Today, we are talking about something that has been a huge point of conversation in the tech world and really just everywhere, which is, of course, the looming TikTok ban by the U.S. government.

I haven't really covered it on this show because, of course, we're not a general tech show, even though it's been lurking around the horizon and has been such a dominant force. However, I thought today, as the ban officially looms, that it would be worth discussing at least this one specific dimension of it, which was encapsulated in a piece by Fast Company yesterday: "One reason the U.S. government is so spooked by TikTok: AI."

So here's the TLDR for those who haven't been paying attention, although I would be very surprised if you guys weren't pretty up to speed on this. There has been a long-term conversation around whether TikTok should be banned in the United States. Many in Washington consider the app a national security risk because it could be used to spy on millions of Americans and spread propaganda. The issue here is, of course, that the app's parent company, ByteDance, is based in China.

Now, ByteDance claims that the user data is stored in the U.S., but has been noncommittal about whether they share data with the Chinese government. Lawmakers haven't been all that clear around exactly why they consider TikTok's user data such a serious national security concern, which for what it's worth has caused a ton of consternation among TikTok's users. But Mark Sullivan of Fast Company has put together a theory that it all has to do with AI-powered content. He presented the view from a Washington insider that China is playing the long game with TikTok.

The key innovation with TikTok, while initially a type of short-form video that effectively creates video or audio memes, was really its recommendation algorithm. TikTok is unbelievably powerful at serving people content that's going to interest them, even if they're not sure or couldn't articulate why they would be interested. If you've ever been flipping through TikTok, scrolling through your For You page, their algorithm takes into account not just things like likes and comments, but micro signals, like how much longer you watch one video compared to your average with every other video. TikTok will then test aggressively, serving you a barrage of somewhat related content, all of which is on the one hand extraordinarily effective at keeping attention, and positively, people would say, at discovering things that are going to be of interest to you. But on the more negative side, nothing that we've ever seen more aggressively accentuates the problem of filter bubbles.
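
To make the "micro signals" idea concrete, here is a minimal, hypothetical sketch in Python of how relative watch time could be blended with likes and comments into a per-video interest score. Every name, weight, and formula here is an illustrative assumption for this discussion, not TikTok's actual system.

```python
# Hypothetical sketch only: illustrative names and weights, not TikTok's real algorithm.
from dataclasses import dataclass

@dataclass
class WatchEvent:
    video_id: str
    watch_seconds: float
    liked: bool
    commented: bool

def relative_watch_score(event: WatchEvent, user_avg_watch_seconds: float) -> float:
    """How much longer (or shorter) the user watched this video versus their own average."""
    if user_avg_watch_seconds <= 0:
        return 0.0
    return event.watch_seconds / user_avg_watch_seconds - 1.0  # > 0 means above-average dwell time

def engagement_score(event: WatchEvent, user_avg_watch_seconds: float) -> float:
    """Blend explicit feedback with the implicit dwell-time signal (weights are made up)."""
    score = 1.0 if event.liked else 0.0
    score += 0.5 if event.commented else 0.0
    score += 2.0 * relative_watch_score(event, user_avg_watch_seconds)
    return score

# Example: the user averages 10 seconds per video but lingered 25 seconds on one clip.
history = [
    WatchEvent("lingered_clip", 25.0, liked=False, commented=False),
    WatchEvent("liked_clip", 6.0, liked=True, commented=False),
]
for event in history:
    print(event.video_id, round(engagement_score(event, user_avg_watch_seconds=10.0), 2))
# The un-liked clip the user lingered on scores higher (3.0 vs 0.2), which is the point
# made above: dwell time can say more about interest than explicit likes or comments.
```

A real ranking system would feed thousands of such signals into learned models rather than hand-tuned weights, but the outsized role of implicit signals like dwell time is the core idea being described here.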

I'm a weird one for something like TikTok, because I genuinely like checking out news from all political perspectives to see how different people are responding to the same event. And I can watch TikTok try to lead me down either conservative rabbit holes on the one hand or liberal rabbit holes on the other in ways that are subtle, powerful, and in some cases, insidious. Anyways, the point is that, again, the key innovation with TikTok is its algorithm. And the concern is that that capability only increases as AI models improve. Sullivan's source says that the key concern is that, quote,

because AI makes generating content cheaper, the Chinese government might be able to leverage what it knows about each TikTok user and generate content for them that's specially tailored to persuade them. In fact, one thing the author of this piece muses on, which is totally true, is that the mobilization of young people trying to stop the ban from going into force

is in fact a demonstration that the platform can coalesce political power. One of the funnier and frankly more quintessentially American behaviors in this story is that rather than simply accepting the concern around China stealing our data, US TikTok users have instead en masse decided to download a similar app that is explicitly owned by the Chinese Communist Party called RedNote. It has been, through most of this week, the number one most downloaded app on Android and iPhone, and shows the true magnitude of not only our rebelliousness but our petulance.

Anyway, let's talk about this idea of being able to mass generate personally tailored content for tens of millions of people.

Right now, it's not totally possible, but it's also not far away. In fact, one of the reasons that I think we're not going to see massive job loss is that things like personally tailored content are just going to radically expand how much stuff has to get produced. Part of why this reality is getting closer is just the rate of improvement when it comes to generative video. Then again, assuming individually tailored content does arrive much faster than most people think, it opens up an entirely new can of worms.

TikTok is far from the only platform that could be leveraged in the way the Washington Insider from this story is concerned about. Data on user preferences isn't particularly difficult to come by. And the question is, will the advent of customized content mean that data privacy shifts to becoming a generalized national security issue?

Could it lead finally to some form of internet bill of rights or a change in the way that privacy is viewed on a societal level? I'm pretty pessimistic, but David Greene, the civil liberties director at the Electronic Frontier Foundation, thinks it's well past time for that kind of policy. He said, because of Congress's failure to enact comprehensive consumer privacy legislation, corporations from around the world are free to harvest Americans' data, store it forever, and then monetize it through ever-expanding uses and sales.

The ban or forced sale of one social media app will do virtually nothing to protect Americans' data privacy from another country. Ultimately, the Biden administration has made it extremely clear, and this seems to be something that they will have in common with the Trump administration, that AI development is an arms race that could be just as important as something like nuclear weapons. And yet it's becoming clearer and clearer that that arms race is not just about AI-enhanced weaponry. The AI wars are going to be information wars, and TikTok may be the first battleground in that conflict.

Now, in terms of what happens next, this morning, just before I started recording, the U.S. Supreme Court upheld the ban. At the same time, however, the Biden administration has said that it doesn't plan to enforce the law ahead of inauguration, which should theoretically keep the app online for the time being rather than it being shut down on Sunday. Meanwhile, behind the scenes, there is a ton of scuttlebutt around who might be in a position to buy the app should the U.S. divestiture go through.

Elon Musk has been bandied around. Shark Tank's Kevin O'Leary, aka Mr. Wonderful, has apparently spent the weekend at Mar-a-Lago trying to convince President Trump that he's the one to take over TikTok. And then more recently, MrBeast has been using his huge following to advocate that he should have a seat at this table as well. Now, at this stage, TikTok has given absolutely no indication that they're actually interested in selling. So that's a whole different question. And the X factor, as so often is the case, is President-elect Trump.

At this stage, the CEO of TikTok is planning to attend Trump's inauguration. So what the heck this all means is anybody's guess. We'll have to wait and see until at least next week. For now, that is going to do it for today's AI Daily Brief. Until next time, peace.