As companies create AI-powered solutions, how can they ensure they're effective and trustworthy? Join IBM at the break to hear how companies can build trust in their AI with Ritika Gunnar, IBM's General Manager for Data and AI.
Welcome to Tech News Briefing. It's Wednesday, June 11th. I'm Victoria Craig for The Wall Street Journal. President Trump's signature budget bill could throw a wrench in big tech's plans to power massive data centers needed to fuel the AI boom. But the Journal exclusively reports those companies aren't sitting back quietly.
Then so-called sextortion scams have ended in tragedy for teenaged social media users across the country. Now scammers have found a new way to build trust faster. Our family and tech columnist explains what parents need to know.
But first, a coalition of tech giants, including Microsoft, Google, Amazon, and Meta, is lobbying members of Congress not to slash clean energy tax credits in the so-called Big Beautiful Bill awaiting approval from the Senate. The data center coalition, as the group calls itself, is concerned that rising power prices and power shortages could disrupt their investments, and it has sent a letter to the Senate Majority Leader expressing those worries.
Amrith Ramkumar is the Journal's tech and crypto policy reporter. Amrith, you've gotten an exclusive look at this letter. What does it say? The letter to Senate Majority Leader John Thune basically says that tech companies are really worried about electricity shortages and price increases if
Congress goes ahead with the plan to gut clean energy tax credits and aggressively phase them out. The letter asks the majority leader to reconsider and also goes into other things like loan programs, office funding in the Energy Department, and some other changes they want to keep money flowing into clean power projects. And let's dig into that a little bit, because if these tax credits are suddenly taken away, what is the risk for some of these companies that have invested lots of money and time into building out these huge data centers to power AI?
Yeah, there are a lot of risks here. So for the tech companies, the thing they're most worried about is that there won't be enough power to run their data centers at the levels needed to train AI models. So data centers are getting bigger and bigger. The biggest ones now can consume as much power as an entire city, and they have to basically run around the clock, and they often need short bursts of power. So all of that put together means you need a lot of electricity from a lot of different sources. And
renewables are the fastest source to bring online in the next couple of years. They're expected to account for most of the new power generation in this country. So people are really worried that if you get rid of these tax credits, a lot of the planned projects that were in the pipeline will suddenly go away. There are also, again, a lot of concerns about higher prices. Tech companies running these data centers can probably stomach higher prices, but consumers can't. And that's the flip side, which is,
clean energy companies and the energy industry as a whole are also lobbying hard to save the tax credits, for obvious reasons. So the tech industry getting involved is just new and a bit interesting, because they hadn't really waded into this fight until this point. So clean energy is the biggest discussion point here, but also some tech executives, you write, are thinking that this could pose a big challenge for U.S. companies keeping up in the AI race, especially with rivals in China. So my question to you is, how is the lobbying from these tech companies going on the Hill? Are these arguments playing to both sides of lawmakers' political persuasions? It's really interesting. The biggest tech companies have been very careful not to be too out there on a lot of these issues just because they don't want to upset the Trump administration. So a lot of the lobbying on the energy tax credits is being done by industry groups or in private conversations.
But yes, this is definitely something they're pounding the table on in all their meetings with lawmakers and administration officials. Zoom out and look at what the U.S. needs to beat China in the AI race in terms of chips, talent, and other areas, and the U.S. is really well positioned.
But the thing they're most worried about by far is power generation. That's an area where China eats our lunch and their grid is growing very consistently year after year while ours is struggling to basically inch ahead.
So does it look like this coalition is going to be successful in getting these clean energy tax credits saved? The tech industry and the data center coalition face an uphill battle still to get what they want out of the clean energy tax credit discussion, which is to preserve the credits for longer. The Republicans have a 53-47 edge in the Senate. And members like Thom Tillis, Lisa Murkowski, and John Curtis have expressed their desire to prolong the credits and side with the tech companies and businesses on this issue. But at the same time, you can see a world where the Trump administration and others pound the table and say, we can't afford this. There are already concerns about spending and the deficit in this big, beautiful bill. And fiscal hawks in the party aren't going to like anything that increases spending on stuff like these clean energy subsidies, especially in the House.
And the House and Senate eventually have to agree on this. That was Amrith Ramkumar, a reporter covering tech and crypto policy for The Wall Street Journal. Coming up, giving teenagers cell phones can help improve communication. But criminals have discovered they can exploit the trust teens have in their iPhone's messaging platform to make relentless demands for money. We'll have that story after the break.
Enterprise AI is an unstructured data problem at scale. How does generative AI address it? Ritika Gunnar, General Manager for Data and AI at IBM, explains. Think of this as the emails, PDFs, and PowerPoint decks that sit in an organization. Generative AI has allowed us to unlock the opportunity to take the 90% of data that is buried in unstructured formats, which really unlocks a new level of driving that data, and insights from that data, into your workflows and your applications, which is essential for organizations as we go forward.
As technology evolves, so too do the ways scammers try to extort users. A long-running scam that preys on teenage boys has in the past relied on social media apps like Snapchat. Fraudsters pose as teen girls, share nude photos, and ask for some in return. Once boys reciprocate, the scammer demands money and threatens to share the boys' photos with their social media followers if they don't pay a ransom.
But this scam has moved outside the social media realm and onto Apple's built-in text message platform, where criminals are able to more quickly build trust. And it's had catastrophic consequences, as WSJ family and tech columnist Julie Jargon reports. Julie, you have talked to several families that have dealt with the tragic impact of this scam, which played out at least in part for their sons on the iPhone's Messages app.
Why do young users trust that platform more than social media apps? Yeah, so for one thing, iMessages have become kind of a central place for teens to converse with one another in group chats.
and one-on-one. And there's the blue text bubble, which has become kind of iconic among teens. They don't like texting with someone when they see a green bubble, which suggests that it's not coming from an Apple device. About 88% of teenagers, according to a recent survey, own iPhones. And so when they see that blue text bubble, they tend to believe that it is a teenager. That's according to some police detectives who've interviewed teens about this. So how does this scam work? Because it seems like building that trust on Apple's platform is really crucial to this. But then you also note in your story that AI seems to be a growing component of this, because it doesn't rely on the users to send inappropriate pictures to the scammer. It seems like AI takes that whole process out of this scam. Yes, AI can be part of these scams, but teenage boys are still sharing their own nude images as well. So both things are happening. And so what these extortionists are trying to do is, yes, they are trying to gain the trust of teenage boys. And they do that by initially befriending boys on social media. They pose as a teenage girl or a young adult woman. And then they tend to migrate the conversation to another, more one-on-one app, for example Snapchat, where boys can sometimes feel more comfortable exchanging messages, video, or images because of the disappearing nature of those images. But of course, the criminals can take screenshots. And then oftentimes, these criminals are also asking boys for their phone numbers, because once they have that, they have a way to continue reaching out to people. So unlike a social media app that you can close out of or delete off your phone if you feel you're being harassed, when the text messages are coming directly to your phone, in the Messages app on your iPhone, for example, it's really hard to ignore that. And even when you block a caller from reaching out to you, the criminals can just call you again from a different phone number or iCloud account. And then, in addition,
when criminals know that phone number, that local area code, it helps them home in on someone's location and makes the threats feel all the more palpable. So what has Apple said about this scam? And is it taking steps to protect younger iPhone users? Apple declined to provide a statement on this, but did provide information on some of its child safety tools. They have what's called a Communication Safety feature,
which is on by default for the accounts of kids under the age of 13. And then parents can turn that on if they have family sharing where they link accounts for older kids. And what that does is it automatically blurs nude images that are sent to a child in the messages app.
And if a child attempts to send a photo or video of themselves containing nudity, they receive a message warning them, asking them if they're sure they want to proceed. They receive a message urging them to talk to someone they trust if they're feeling pressured to either view or send nude photos. So that is a feature available, but something that parents have to be aware of and turn on.
And you talked to a number of parents whose children took their own lives as a result of being victims of this scam. Are the parents that you spoke to calling for any other action, or do they have any other tools that could be used, things that they wish they had known or that could help other parents identify that their kids may be targets? Yeah, the parents I spoke to did say that they wish they had received some sort of notification, that they wish there had been something that tipped them off that this was happening. One of the things that the Messages app doesn't have is a way to report a dangerous conversation to Apple. There are features in some of the other social media apps, including Snapchat and Instagram, that allow users to flag a sender who is threatening to share their nude images. But there's no specific way other than blocking and reporting it as junk on the Messages app on Apple devices. Because there's no way to self-report some of these things on the iPhone itself, do we have a good idea of how prolific this scam is? You can't report what you don't know, right? So tech companies are required by law to
report to the National Center for Missing and Exploited Children any type of suspected child sexual exploitation on their platforms, and that includes sextortion. And because there are specific reporting mechanisms for this type of situation on Instagram, WhatsApp,
which is part of Meta, and Snapchat, those tech companies have reported huge numbers of instances of suspected child exploitation. So for example, Instagram last year reported 3.3 million instances of suspected child sexual exploitation. And again, that includes sextortion. Apple reported just 250. The main thing is if you have children or grandchildren or students or any teenage boys that you care about in your life,
please talk to them about this scam, so that when someone reaches out to them asking for pictures, they understand that this could be part of a scam, and they know not to send those photos, not to send money, and to go find a trusted adult to talk to.
That was Julie Jargon, the Journal's family and tech columnist. And that's it for Tech News Briefing. Today's show was produced by Julie Chang with supervising producer Melanie Roy. I'm Victoria Craig for The Wall Street Journal. We'll be back this afternoon with TNB Tech Minute. Thanks for listening.
How can companies build AI they can trust? Here again is Ritika Gunnar, General Manager for Data and AI at IBM. A lot of organizations have thousands of flowers of generative AI projects blooming. Understanding what is being used, and how, is the first step. Then it is about really understanding what kind of policy enforcement you want to have and the right guardrails around privacy.
The third piece is continually modifying and updating so that you have robust guardrails for safety and security. So as organizations have not only a process, but the technology to be able to handle AI governance, we end up seeing a flywheel effect of
more AI that is actually built and infused into applications, which then yields a better, more engaging, innovative set of capabilities within these companies. Visit IBM.com to learn how to define your AI data strategy. Custom content from WSJ is a unit of the Wall Street Journal Advertising Department. The Wall Street Journal News Organization was not involved in the creation of this content.