By late 2023, DanaBot had been active for more than five years, making it one of the better-established, longer-running malware platforms in circulation. It was modular and professional, sold as a service to cybercriminals around the world.
The result, by this point, was a network of what FBI estimates suggest is around 300,000 infected machines across dozens of countries. People were running DanaBot campaigns targeting banks and crypto wallets and government portals. And all of those attacks were feeding data back into a command and control system run quietly by a small crew of developers who made and distributed DanaBot. That meant spending a lot of time in this backend dashboard that they built for it.
And the log lines in that dashboard tell a story of all of the stuff that people are getting up to using DanaBot. Someone clicks the wrong link, a session gets hijacked, a password gets stolen, all appearing in this dashboard, this list of devices being compromised. And one morning, someone inside of the operation spots a log entry. At first glance, it looked like any other infected machine that had come online on the network. But eventually, they start to clock that there's something weird going on with this machine.
The data trickles in, keystrokes, browser sessions, screenshots, and they go: boy, does this one infected machine look a lot like one of our machines. Which would suggest that their malware had infected a machine inside of their own infrastructure. At which point, DanaBot worked exactly as designed.
The infostealer that was a core element of it grabbed and then, importantly, stored saved passwords and cookies for Gmail, iCloud, Facebook, and a bunch of Russian social media services. The developers' own real, identifying info was now being stored in the DanaBot database. A fact that would prove to be one of the toeholds that led to some of the developers of DanaBot getting unmasked.
Just this past week, the U.S. Department of Justice unsealed charges against 16 of DanaBot's alleged developers and affiliates. Names, faces, real-world identities, some exposed because of this initial infection. We are talking about a professional malware operation infecting its own operators, and the fallout. Let's start by talking about the story of DanaBot, here on Hacked.
Hey. Hey. How's it going? It's going pretty good. How are you? I'm doing good.
Jordan and I are just laughing at the fact that every time we do the cutaway for the intro sound, we actually play it and make it up as we go. At a certain point, it got longer, and you don't hear all of it, but it got longer than the actual theme song, which is relatively short. And then behind the scenes, there's this mad-libbed acid jazz nonsense shit going on. It is a good warm-up before you pause. Yeah, it's the way we get the energy going to make this show. Yeah.
It's like that little jazzy ad-lib kicks it off in our mind and we're like, yeah, we're making a show. Oh, you got it. You should hear the improv, like the scattered drums that's happening at NPR every day. It's like a jazz bar. How you doing?
Good. Summer's here. Smoke is here. Smoke being atmospheric smoke from forest fires, which has become the standard and norm where I live. But other than that, good. It was beautiful in the evening last night. Went down to the river, went fishing as a fisherman does.
Just, yeah, try to enjoy the summer, the little amount of summer that I get per year. The brief window. I will say that every year when it comes, I feel like this year is different, but the sun isn't going down until essentially midnight now. Okay. And it's so strange. I feel like this year it's staying up longer than it ever has. Maybe I go to bed earlier than I used to, or something's physiologically changing with me, but like...
Now when I walk into my bedroom and I like had to pull my like shades closed and it's still bright in my bedroom and I'm going to bed, I'm like, what is wrong with the world? Like, has it always been like this?
I remember, I definitely have those moments where I feel like I'm a little kid again, especially kind of growing up where we're from. It's like the bedtime of a small child and the moment when the sun actually goes down are like three hours apart. Yeah, at least. So you're just like laying there in bed with the sheets up and there's like birds chirping outside and like older kids playing and you're like, this sucks. Yeah, totally. And now as an adult, it kind of is just happening again. Yeah.
Exactly. I went through the stay-up-way-past-the-darkness phase. Exactly. Now I'm in the decline, the youthful decline. So now you're back in bed at 7:30, like, why is it so bright out? I'm fiercely protective of my eight hours of sleep every single night. And so the sun is just fully up certain nights. And I do it again. Do it again. So what else? What do we got to cover before we get into it? I think we should...
say that this episode is brought to you by Push Security. Sure is. More on that later. Anything else we should cover? We got some other fun stories I want to talk about after the ad break, but up until that point, I want to talk about this DanaBot story that came across our desk this past week because of this kind of recent
unmasking that happened. It's a fascinating story. It's got that fun little turn in the middle of it. I want to dig into it. That malware as a service, like to create an enterprise like this, I don't know, just great, great, great tale. Can't wait to get into it. It's always fascinating the moment when a like small underground cybercrime project
just sort of grows and grows and grows and whoopsie doodle. You got yourself a pretty real business on your hands here with like clients around the world. Exactly. Like, like you've got account managers, business development people, you've got an entire engineering team. Yeah. And then the, uh,
I mean, the thing that's interesting about this is that, based on where this is, we're probably a ways out from seeing any kinds of arrests. But what we do have is the alleged identities of the actual people. And it does seem to have to do with this self-infection that took place, which adds an extra interesting layer to this one.
The US government unsealed charges against 16 of DanaBot's alleged operators. They'd been running this global malware service for years until this self-infection, and these real names, real locations, and credentials were all kind of unmasked. It's unfolding right now, which is why I think we should talk about it. Yeah, I'm down. So I'll kick things off back in 2015, 2016, with the quiet rise of a user known as Pupkin.
In the weird, murky world of Russian-language cybercrime forums, places like Exploit.in and Verified, this new vendor appeared, Pupkin. Starts with smaller products, nothing quite like what DanaBot would become. He was selling account checkers, brute forcers, credential stuffing kits, stuff to test stolen credentials against real-world login portals, but at larger scale than doing it manually. Lightweight tools, but effective stuff targeting those poorly protected logins.
basically making it easier for people getting large-scale username and password dumps to test out what works. Identity theft. Identity theft. Push Security. Sorry, I couldn't help myself. Pupkin is developing this stuff themselves, but based on what I could read, they started to build more of a reputation, not just as a coder, but as a reliable service operator. We were talking about the business side of all of this and,
So, Pupkin's thing was like, they will answer your questions. They're actually updating the tools. They are keeping customers happy.
Research suggests that, if at any point Pupkin was truly an individual, quite rapidly it probably came to represent a small team of developers and infrastructure maintainers operating in Russia and Eastern Europe. I can't help but draw the parallels in this to some of the episodes we've done about video game cheating. Because I'm sure it starts as one person being like, I wonder if I could build this.
And it's like, okay, I built it. And then it's like, okay, I sold it to a few people. And now I have a Discord and like a Telegram. And now I have 3,000 clients. And I have a customer service representative. And I have... And it just starts to snowball. Like you would hope a for-profit company would. Yeah. Which this...
is very rapidly a for-profit business. It's the kind of thing that happens where you make something by yourself, you put it out into a community. The people you know in that community, maybe one of them reaches out saying, this is dope. Have you thought about doing this? And now you have something of a collaborator that maybe becomes a business partner and it just grows naturally the way those things do. Customer requirements expand. The scope of your product expands. The scope of your engineering team expands.
The scope of your revenue expands, and now you live on some island with a bunch of money from malware.
May 2018, Proofpoint, the research firm, first identifies DanaBot in a phishing campaign targeting a bunch of Australian banks. Victims had been getting emails with this malicious Microsoft Office document containing some macros. The macros get enabled, the document downloads this DanaBot library, boom, they're infected. It was modular, built from these discrete, kind of off-the-shelf components. It had a couple of key functionalities that persisted through DanaBot's whole history.
Keylogging, credential dumping, remote access, information stealing. The good stuff. The good stuff. Importantly, analysts at this time start to spot what you might expect, which is some geofencing logic that prevents the malware from executing inside of countries like...
Shocking, shocking: you got Russia, you got Belarus, you got Kazakhstan. This is likely to avoid drawing the ire of local law enforcement. This is a hallmark of these types of cybercrime gangs. Putin's cool with it as long as you're not doing it to him. A hundred percent. Yeah. Don't, and I'm watching my language here, don't
poop where you eat, I believe would be the... Which is somehow worse than just saying it the way you... It sounds more disturbed. Yeah, yeah. It's not just an expression, it's like advice and it's too literal. Don't eat with your left hand. There you go. I don't know if you get the reference. I do, I do. Yeah, I know, you've been into it. So...
2018 to 2021, this is the rise, I would say, of DanaBot. It's evolving as a platform. The affiliate side of this, the buyers and renters of the malware, start getting their own unique versions with their own unique campaign ideas. They're white-labeled. They're white-labeling it. Yeah, white-labeled business, gotcha. And now the central operators, through this dashboard we talked about in that intro story, are able to track usage, bill the users accordingly. They're able to manage support.
This is modular and scalable. It is software as a service. Multi-tenancy. They probably have all of the same. I don't know why I'm shocked by it. I'm not shocked by it. It is just a SaaS business model applied to an illegal business. It makes complete sense.
Yeah, when you read DanaBot coverage from this window of time, because this was, I would say, four or five years ago, malware as a service had existed, but it was getting a lot more coverage of, like, you should understand that the way this works isn't
an intrepid coder goes out, builds a custom tool set for themselves, and then goes out into the world and tries to do crime. It's that the affiliates of this do not need to be coders. They can pay money. They're being sold to by salespeople. It's a sales pipeline. It's literally just like, yeah, we have a business development model. We have an affiliate sales channel. We have a vice president of affiliate sales who supports them. It's like, if you called a telco, they would have the exact same structure. Yeah.
The idea is that like you're working with them as an affiliate. You're working with them as a contractor. You are paying them to deploy campaigns and to collect this stolen data via this panel, this platform that they're then delivering back to you in the form of a report. It's very, very corporate.
Except it's crimes out of Belarus. It's fascinating. I would typically at this moment love to use the term that I love, wasted utility, but I actually don't know if this is wasted because it's obviously...
a very successful enterprise. Like, is it furthering the good of humanity and, you know, positive utility? No, but it's still providing utility. A hundred percent. Yeah. So, there's the self-infection that
I talked about in the intro, but there were other little sort of hiccups along the way, too. In 2019, there was an admin panel leak. It is unclear if this was a disgruntled affiliate or just an operational security slip-up, but screenshots of that all-important DanaBot backend admin panel get kind of leaked out. And researchers start to figure out how this is all working. They get a sense of the scale of the bots that are operating in different countries and the number of different users.
It's basically like if you were familiar with Google Analytics, but make it crime. Like people start to see what this actually looks like on the back end.
And we also learned, importantly, that Pupkin's group, this dev group, is enforcing rules. They're vetting affiliates. They're still imposing that geofencing that kicked off in 2015. That has persisted to this day. You are not going to be targeting a .ru domain or a government institution. The rules are clear and they're quite well enforced because Pupkin seems to know how to run a business. Yeah, the...
The executive subcommittee for risk assessment has identified that an infiltration at the affiliate level could pose big, big, big risks for them in the future. But in the meantime, it's going off like gangbusters. 2020 to 2022, it's expanding. You're getting campaigns running in Poland, in Italy. The Italy one was interesting. They took down the tax website and replaced a bunch of banking forms with phishing fields. That just...
seems to have just worked. Yeah, of course. Yeah. In the US, it was a lot of crypto, like exchange redirection-y type stuff. You think of 2021, it was just a great time to be in that world. Yeah.
Yeah, exactly. COVID's kind of hitting, chaos is going. Everybody's talking about Dogecoin and how it's going to pay for their life. And for some it did. And for some it didn't. The other thing was that DanaBot started collaborating more with other people in this ecosystem. It was being used as basically a secondary payload deployment thing. And again, you had just...
A great platform by which to get crypto scams and banking scams and all manner of stuff onto people's systems. It's now becoming not just its own service, but a front door for other people's stuff to get onto people's systems. Yeah.
Mid-2022, the FBI, working with international partners, quietly seizes a bunch of these command and control servers used to operate DanaBot. These were the backend infrastructure, the servers that received all of that stolen data, managed the plugins, stored all of the logs.
And crucially, for this unmasking that we're building to, the data on those servers doesn't seem to have been encrypted in the same way as some other stuff. So you have basically just like full stack, like we got bot logs, we got the configurations of the campaigns. We've got just like a really, really good document of everyone that was infected there.
Well, you see encryption takes up extra space, adds additional system latency. And when you're like causing crime, like...
If you get hacked, what's the worst that's going to happen? You know, what's the worst that's going to happen? Well, you're not pooping where it's bad. We need a kid-friendly version of that expression. You're not doing crimes in your own backyard where the cops are going to get mad at you. Let's just put it that way. Correct. That seems to be the trick. Probably have a friendly relationship with the cops in your own backyard. You probably help make their annual bonuses with your additional tax revenues. Exactly.
2023, this is all starting to get a little bit sloppy. Analysts from Check Point and MalwareHunterTeam start to see some inconsistencies. Suffice it to say, the obfuscation starts to get a little bit poorer as the network of affiliates starts to grow. This is largely coming from lower-tier affiliates, cracked versions of the malware. I think at certain points, people were pirating DanaBot. That's so funny. It's like your product's so good that there's now, like, you know,
stolen versions of it circulating. Right. There's an unconfirmed theory that came up in some of the intel research that one of those self-infected machines might have revealed some internal chat logs, potentially suggesting that in this period of time there were some disputes between those affiliates and Pupkin. There were, I guess, what you might call customer service issues at this stage in the operation's history.
Knowing that they wrote geofences to prevent their malware from infecting regional systems, it's surprising that they didn't go to greater lengths to make sure the malware couldn't end up on any of their actual operational systems. Yeah. Yeah. Yeah.
For as tightly run a ship as it was on the customer-facing side, there were spreadsheets being manually updated. There wasn't necessarily a rock-solid operational underbelly to this whole thing. Some of it was extremely well done, and some of it was an off-the-shelf, malware-as-a-service Russian cybercrime operation. And you're going to get a mixed bag with something like that. Okay. Okay. So.
DanaBot, it's thriving. It's a little messy at this point. It's gotten quite big. The self-infection has occurred. The dashboard leak has occurred. Early 2024, we start to get a little bit of the unraveling. DanaBot, these campaigns are still continuing, but it's becoming less popular. There's newer malware starting to make its way onto the scene. Really purpose-built competitors. Competitors.
Competitors. New competitors have entered into the market. I like this. It's good. A bunch of the accounts on the different underground forums where Pupkin and the rest of the DanaBot admin team were really prominent start to go dark. There wasn't really a big public takedown or anything or a public doxing. They're just sort of quietly turning the lights down and loading out all of their stuff in the cardboard boxes. May 2025...
We get this big U.S. Department of Justice unsealing of criminal charges against the 16 individuals accused of developing and operating DanaBot, this whole malware service. They go after these two ringleaders, and they cite more than 300,000 infections globally and $50 million in losses, with DanaBot sold to affiliates at about $3,000 to $4,000 per month, we found out from this indictment.
What really sealed the case, it would seem, was this initial accidental infection that we kicked off the episode talking about: these developer machines actively running the payload, phoning home to their own servers with their private credentials like any other one of their victims. You got their credentials, their panel sessions, their messages. And investigators were able to use this data to correlate the hackers' aliases to real names,
confirming the identities through subpoenaed subscriber information from the tech providers. Despite this indictment, none of the 16 defendants have been arrested. All are believed to be residing in Russia beyond the reach of U.S. law enforcement. In spite of that, though, I think it's worth talking about because this kind of exposure is still quite rare.
It didn't collapse because of a whistleblower. It didn't go down because of a rival crew or anything. They self-infected themselves. They slowly just kind of started to try and turn the lights down so no one would notice. And in spite of all that, still kind of managed to bring themselves down a little bit. Yeah, kind of took itself down with its own telemetry. Good way of putting it. Thanks. I find it...
I wonder, so this is like when I initially read the story, the thing that jumped out at me is like, I wonder if this is giving law enforcement an idea. Say more. Well, just like malware is used for so much bad.
And in this situation, the malware was part of what brought them down. So essentially getting access, like, you know, for so long there's been confidential informants and there's been, you know, people going undercover. There's all these ways to try and penetrate these organizations. I wonder if the justice departments of the world are not sitting there being like, like they're doing it. Why don't we do it to them? Cause it's like, it's showing how effective it is.
It is to attack these groups with their own products, essentially. To pose as a customer of one of these malware-as-a-service things and then inadvertently, not inadvertently, very intentionally try and get the tool back onto the developer system. Yeah. I don't know about posing as a customer, I don't know what the attack vector for getting it onto their system would be, but I wonder if you don't have law enforcement sitting back being like,
Maybe we need to fight them on the same battlefield. Sure. So like if they're going to be running in this malware as a service, malware space, like we know that we can get to them digitally if we had malware on their computers. We could do better identification. We could see what changes are coming to further prevention mechanisms. We could...
The same way that people are penetrating software packages, like we talked about the other day, and things like this, and introducing malware and backdoors and rats and all the rest of this stuff. If you're in law enforcement, if I'm reading this, like if I was reading this from like a white hat perspective, I go, wow, look at how valuable that was to our investigation. Imagine we just had some of those tools of our own.
Which I guess brings up a whole conversation about the US government, NSA, and people that have moved to Russia to get away from persecution. But...
Yeah, I have to... So I guess a few things. I would assume at this point that cybercrime law enforcement must be in the business of developing their own... Custom malware? Yeah, I was going to say tools, but it's like the tool being malware. Yeah, yeah. Because you don't need to get a self... Like the self-infection of this is a great hook and a cool reason to talk about it because it's...
It's interesting, but it's not necessary, and it's not even that effective because, again, because it was a self-infection, the doxed information of the developers was stored on their command and control servers, which meant that you still need to seize their servers. But if you deployed something that you controlled and you built, you don't need to seize anything. It's going to come right back to you. Sure. Yeah.
I looked into the technical specs on this, and it was all written in Delphi. Delphi, D-E-L-P-H-I. Most people say Delphi one way, but I think it's properly pronounced the other. Okay. That's coming from my deep knowledge of the Greek language, thanks to my wife. The...
Which is weird. It's a weird language. It's an old language. It's not a common language. It's a language where I know hundreds of software engineers and I might know one that knows Delphi. Interesting. Yeah. I wonder what that suggests. I don't know. Probably my initial reaction when I heard that was that
Pupkin was probably older. And that's only because people that I know that know Delphi are typically older, university professors, people like that. It's not a language that many people learn nowadays. Like, every software engineer knows Java or TypeScript, but none of them know Delphi. Interesting. Yeah. Huh. Yeah, it seemed it was very well established. And I'm struck by the fact that
Its collapse, and kind of them just sort of turning the lights off on themselves, lined up with this much larger rise of malware and stealer logs and session-hijacking-as-a-service type products. They got cheaper and cheaper and cheaper and
these big bulk infostealers, a bunch of these tools, flooded the market around this time, and you got just a glut of credentials flooding into the market, pennies per victim. It got really, really cheap. There was, in market terms, sort of a race to the bottom, a little bit, that didn't necessarily lead to the self-infection,
but probably did lead to the breakdown of the operation, which may have contributed to the breakdown of the operational security, which may have led to that self-infection. Like, there was a race to the bottom in this marketplace, and this pretty well-built, thoughtful piece of software suddenly was struggling to compete. And I found that part of it pretty interesting. Yeah.
Let's hang and talk about the self-infection momentarily. I'm just thinking about it. If you've geofenced it off, saying, our entire engineering team and our executives are in Russia, all of our computers are immune to it. Yeah.
What do you think the chances are that somebody got it when they went on vacation? No, I know what you mean. It's like, if theoretically one of the like special rules of this service was you don't go after people in your own backyard, then,
And one of their own people got caught. It's like, okay, well, was a member of the team outside of that geofence? In which case the geofence was rendered poorly. Were they typically outside of that geofence? Or did they connect to the network while they were traveling, to your point, while they were on vacation? It's like, I don't know what the story is, but if you look at what happened and you looked at the rules of this service, something happened that allowed that to sneak on through. And it's unclear what it was.
Yeah, apparently they had 150 daily active command and control servers, which is a lot. And they were running approximately 1,000 daily victims. That's a big operation. Yeah. We've talked about bit lockers and encryption malwares that lock you down and you have to pay for the key.
So I'm just wondering, like, it would be fascinating to know the revenue numbers for something like this. Like, whenever we talk about these businesses, I always run some dumb calculation, but like, a thousand victims a day, and obviously they were charging three or four thousand dollars apiece, but as the ecosystem goes,
Like, what the actual financial cost was, it would be fascinating to know how much money they were bringing in. I saw, so I think that the charges, and this would all be negotiated in a court environment, but I think the charges estimated $50 million in damages. Yeah, that's actually not crazy high.
No, because I think a lot of it's like, they were doing huge scale, but I don't think any of them were massive. I'm sure that some of them were very large, but the vast majority of them were zero. Like, I think a lot of the time you're not getting the hospital that will pay anything to get out of a ransomware situation, or you're not breaking into the crypto wallet with, you know, $5 million in it. A lot of the time it's small. We're picking up pennies here. Yeah. Yeah. But still a pretty real amount of money.
So one of the things I thought was interesting is, aside from not being prosecuted by Russian authorities even though they've been identified, they had spikes in activity that aligned with Russian geopolitical interests. So when Russia invaded Ukraine, Ukraine got hit, blasted with a DanaBot attack at the same time. Huh. Shows you there's a little bit of...
I'm motioning my hands side by side, but yeah, sure. Alliance, maybe some collaboration. Collaboration, yeah. Big companies, maybe a sponsorship for a presidential campaign, donate some money, buy some political leeway. If you're not doing it, what are you doing? It would make sense that you don't mess around inside of the geofence.
Because you don't want to bug law enforcement. And maybe you curry a little favor with law enforcement. Yeah, that all seems very plausible to me. You'd also talked a bit about how there were modified versions of it and, you know, the white labeling aspect of it. There was also a version of it created that explicitly targeted military and diplomatic systems. Well, there you have it. Well, there you have it. Well, there you have it.
Maybe not so not state run after all. No, I think you might have connected the dot there. I think that's DanaBot. I think we're going to kick it over to some commercials, however briefly, a little ad water slide. And when we come back, boy, am I excited for us to talk about a big old AI-powered software engineering platform that wasn't.
Jordan, somebody that the two of us know, forwarded me an email and said, hey, I can't log into this Microsoft platform. Do you have any? Can you try and see if it works for you? And I said, sure. I immediately looked at the URL that the link was going to and it was deployed on some Indian engineering company's server in some non-exposed directory and immediately knew what was happening.
and it was adversary-in-the-middle. It had a full clone of the login page, so that your password manager would fill in your password manager passwords. It looked like it was coming from Microsoft,
but it was definitely not Microsoft. Yet it was using the password manager. That's spooky because I feel like a lot of people rely on the password manager to correctly identify that the site that they're logging into is the real one. We talked about this with Adam in the episode.
And I immediately identified it, noted it, messaged them back and was like, hey, you know, this is a phishing attack. You've been phished. Change your login creds immediately. Scary. And yeah, that happened in our circle quite recently. Which brings us to the sponsor of the show, Push Security.
Because those kinds of things, phishing, credential stuffing, session hijacking, and account takeover, are the number one cause of breaches right now. Yeah. And with the ability to trick password managers into still delivering the usernames and passwords...
Why wouldn't you? Exactly. And meanwhile, most of the security tools people use are still focused on endpoints, networks, and infrastructure. And meanwhile, the browser, where all that gnarly crap went down, the actual place where people work has been mostly ignored. And Push, they're trying to change that.
They built a lightweight browser extension that observes identity activity in real time. It gives you visibility into how identities are being used across your organization, when logins skip multi-factor, when passwords get reused, and when somebody unknowingly enters credentials into a spoofed login page. Then, when something kind of sketchy or risky is detected, Push can go ahead and enforce protections right there in the browser. There's no waiting, there's no tickets, it's just visibility and control directly at that identity layer.
And it's not just about prevention. Push also monitors for real-time threats like adversary-in-the-middle attacks, stolen session tokens, and even newer techniques like cross-IdP impersonation, where attackers bypass SSO and MFA by registering their own identity provider. It's kind of like endpoint detection and response, but just for the browser. Honestly, very, very relevant to your case study.
Yeah, it was someone's client's email got hacked and they drafted a perfect response email and sent it out to a bunch of people that looked exactly like one of their emails. Like seeing the power of AI in the scripting, like it was...
"Hey, we have a request for proposals. Please download it at this link. Thank you, blah, blah, blah. Here's the timeline." Like, it was nailed. It looked and was a perfect email clone because it came from a hacked email account. Yeah, of course. And then it just had an adversary in the middle link to get to those RFP documents. Boom. Anyway, back to Push.
The tea behind it's great. If you want to know more, listen to the episode we shot with Adam. Amazing. There's literally no better way to understand what this company does than to listen to that episode. Identity is the new endpoint. Push is treating it that way. Go check them out, pushsecurity.com, and listen to that episode with Adam if you haven't because it is awesome. Pushsecurity.com. So we made an episode a long time ago called The Problems with Passwords, and I was pretty critical about password managers. And funny enough, years ago...
The company that I work for and run started using 1Password teams, and it's been amazing. I now gift 1Password subscriptions to people for birthday presents and Christmas presents because it's made such a profound impact on my life, my cybersecurity, even just my organization.
of access to accounts. And accounts that I forgot about, when there's hacks, it notifies me, I change passwords. It's been amazing. And we're happy to have them now on as a sponsor. 1Password Extended Access Management is the first security solution that brings all these unmanaged devices, apps, and identities, gets them all under your control, ensures that every user's credential is strong and protected, every device is known and healthy, and every app is visible.
1Password Extended Access Management solves the problems traditional IAM and MDM can't. It's security for the way we work today, and it's now generally available to companies with Okta and Microsoft Entra and in beta for Google Workspace customers.
1Password's award-winning password manager is trusted by millions of users and over 150,000 businesses from IBM to Slack. Now they're securing more than just passwords with 1Password Extended Access Management. Secure every app, device, and identity, even the unmanaged ones, at 1Password.com slash hacked. That's all lowercase. That's 1, the number one, password.com slash hacked. 1Password.com slash hacked.
The all-new 2025 Arctic Wolf Threat Report is now available.
Developed by top experts from the leader in security operations, the 2025 Arctic Wolf Threat Report is an unparalleled deep dive into the world of evolving cyber threats. Based on insights from its own IR engagements, Arctic Wolf's experts explain all the latest change agents on the cyber threat landscape. Discover how cyber criminals are stealing data, refining BEC scams, and exploiting vulnerabilities in dangerous new ways. Plus, you'll learn about a critical shift in cyber criminal behavior.
and get real-world advice to help your organization improve its cybersecurity resilience against ransomware and many other threats. Download the 2025 Arctic Wolf Threat Report today. Visit arcticwolf.com slash hacked. That's arcticwolf.com slash hacked. Hey, Jordan, when we started this podcast, did we ever think we'd make merch? Heck no. I hate pants.
Good thing we don't make pants. That's true. Guess what we make now? Hats? Hoodies and t-shirts and hats. Heck yeah. Visors, lots of visors. Everybody needs a hacked visor. I'm starting a trend. If you're at DEF CON this summer and you're not in a hacked visor. Anyway, we use Shopify. Shopify is great.
It's a global commerce platform that helps you sell at every stage of your business. From the launch-your-own-online-shop stage, to the first-real-life-store stage, all the way to the did-we-just-do-a-million-orders stage, Shopify is there to help you grow, just like they're helping us. Whether you're selling scented soap or offering outdoor outfits, Shopify helps you sell everywhere.
They really do mean that. From their all-in-one e-commerce platform to their in-person point-of-sale system, wherever you are and whatever it is you are selling, Shopify has got you covered. It is such a comprehensive platform. You can do pretty much anything on it. It integrates with so many other platforms. It's great. When we did our analysis to figure out which...
online sales platform we wanted to use, Shopify was the automatic winner. Yeah. It powers 10% of all e-commerce in the US, and Shopify is the big global force behind Allbirds, Rothy's, Brooklinen, and millions of other entrepreneurs of every size across 175 countries. If you want your name to be on that list, you should probably go check out Shopify, and their award-winning help is there to support your success every single step of the way. Because businesses that grow,
Grow with Shopify. Dang straight. Sign up for a $1 per month trial period at shopify.com slash hacked, all lowercase. You go to shopify.com slash hacked right now to grow your business no matter what stage you are in. That URL one more time, Scott? Shopify.com slash hacked. Ka-ching. Ka-ching.
Jordan, I'm excited about this one because we get to talk about fraud that's not crypto related. No coins in this one. If you're not up on it, which I hope some of you aren't because it's a good story. This is a story about an AI company that wasn't and about a bunch of money that was invested and a bunch of things that were supposed to be happening that just turned out to be
a room full of Indian software engineers. There's just so much buzz around no-code platforms and vibe coding, and you can make anything just by sort of winking at your computer. Like, that's the moment that we are living in. And admittedly, it's a very exciting moment. So many of these tools are so extraordinary. And these, these
brave people ask the question, what if we just lied about that? I have so much to talk about with this, but here's the trigger. This company came out in 2016. We're talking way before the AI revolution started. These people came out and said, we have the ability to do this. We're doing...
They said back then that they were doing what we're doing now. Yeah. So like, and it's not, I think they're trying to move away from the no code world and they're calling it natural language coded or like it's all based on NLP, but whatever, same thing. You're not writing any actual source code. So no code works for me. But like these people were like, I think, I think the AI revolution and how good it got is what killed these guys. Yeah.
Because they were running this scam. So, okay, I should go back and tell the story of what this is. So Builder.ai was a no-code platform. Originally started as Engineer.ai. It was founded out of London, England. And they essentially made the same promises that you're seeing by things like Replit and Lovable and stuff today.
You go in, you type a natural language prompt, and instead of getting instantaneous code back, you eventually get code back because humans were writing it rather than robots. Anyway, so they raised boatloads of money, securing valuations as high as $1.5 billion, I think, on their last raise. They raised $250 million from Microsoft in 2023. I think that was their last big raise.
Yeah, $450 million total before the whole house of cards came tumbling down? Correct. Oof. And here's what I think is funny. It worked for so long and convinced so many smart people. And they tricked so many people. And I think the thing that killed them was the fact that AI actually showed up. You're right. They could have just branded their company as engineer.ai and then just claimed that...
I don't know, the AI was a room full of Indian software engineers. And really what they were writing was like an outsourced channel model, affiliate sales, vector pipelines, the whole nine. And I think what really killed them is they probably, when people showed up with real AI that could do this, they were like, why does yours take so long? It's like, well, there's actually people doing the work. We should talk through the timeline of this because I find it interesting. But I wonder if...
Maybe what happened was that for that 2016 to 2020 pre-chat GPT era of time, they were able to hide behind a story of like, well, this is proprietary. We don't want to show off exactly how this is all working. What you need to understand is the user experience, you submit this prompt and our AI coding assistant, Natasha, will automatically do the software development and deliver the code back to you. 2020 pops off and everyone goes, oh, LLMs,
Tokenized natural language. Got it, got it, got it. So that's what you've been doing. Can you show us that now? It's no longer proprietary. You could show us how your LLM works. You must have one of those, right? Yeah, show us your agentic system. Sure would be cool if you did because the other company that has one is now worth a gajillion dollars. Yeah.
And I would imagine that's the moment where the lie gets really, really, really hard to keep telling. Yeah. So they had apparently upwards of 700 full-time engineers manually coding projects in the background. So really what this is, is like labor cost arbitrage. It's like we're selling this expensive service to...
first-world countries with high GDP per capita, and then leveraging cheaper, smart labor. Just literally labor cost arbitrage. And it worked for them for so long. They probably did exceptionally well. But the problem is that they were hiding behind this veil of, it's an AI product. And there was no AI product. Like, had they...
I don't know how, maybe they needed the risk assessment committee that we heard about in the first one. Sure. They needed the Russian cybercrime risk assessment committee. Knowledge and expertise. Because, like, they would have had the jump to become the Replit and to become the Lovable. Like, they were already in that world. If they saw this stuff coming and were keeping up on it, plus the fact that you have 700 full-time engineers, like,
Like, if you allocate a portion of those engineers over to actually building the AI tool that you're supposed to be building, they probably could have done it and nobody would have noticed. Yeah, sure. And the timeline of it is just, from their perspective, such a bummer. The initial, so there'd been this, like,
years-long period of time where there was a bunch of skepticism that this was real. The Wall Street Journal investigation exposed the claims that this was, I think their phrase was, human-assisted AI. And this Wall Street Journal report comes out saying, like,
even that seems to be a wild overstatement of what is really occurring here, which is, as you said, software developer salary arbitrage. That happened in August 2019.
Like, five seconds before all of these LLM tools would have come out and they would have had a path to genuinely becoming the thing they were pretending they were, which was human-assisted AI. A couple more months, and they could have gotten all those developers using AI, and they could have begun that process of becoming the thing they were saying they had been since 2015, by the skin of their teeth. See, but...
Like, if I'm the CEO of this company, and I'm committing fraud, I'm just trying to think of a nicer way to say it, but there isn't. That if is so important in that sentence, Scott. If...
I'm committing fraud. If I'm the CEO of this company and I've been selling a lie, the second that that lie starts to become reality in the market, I would be adopting it as quick as humanly possible. That's what I'm saying.
Had they adopted some of it, like, even in 2016, 2017, to say that you're a human-assisted AI, it would have still been revolutionary. Like, the no-code platforms and stuff back then were kind of garbage, but you'd be moving in the right direction. As long as you were adopting and implementing those technical innovations as they came out,
By the time that everybody else was sitting around saying, hey, we could build something like this thing called Replit and just build this agentic system, you'd already have it. They could have made the pivot so cleanly. Yeah, sure. And they just didn't. Yeah. So you've got two different things in this one. You've got the sort of, let's call it misrepresentation of what AI was doing versus what humans, paid less than they were charging, were doing.
There's also just some really good old-fashioned misrepresentation of revenue. Yeah, classic. As this has all been collapsing, it's looking like Builder.ai overstated its revenues by like 300 to 400%. They claimed $220 million in 2024 when the real figures were closer to like 50. Still a lot of money, but it just sort of speaks to maybe a...
A board that lacked some independent oversight, not much of an auditing committee or even really a CFO, and just unchecked founder control for a very long time with a very large amount of money at stake. Well, the other thing, too, is that obviously there were whistleblowers that led to the exposé in 2019. In 2023, the CEO was given the Ernst & Young UK Entrepreneur of the Year Award.
Yeah. Like, four years later, he's still being celebrated in the tech and business community. And it's worth maybe talking about, because so much of this... we've covered a few stories that get into the world of VC culture, and you realize just how big of a thing reputation is and how far reputation can carry you.
I think he described himself as the Chief Wizard, but really, the founder of Builder.ai was a guy named Sachin Dev Duggal. And he's a very celebrated entrepreneur, as you said, celebrated at the World Economic Forum in like 2023.
He was the CEO until 2025, like five years after this whistleblowing, like, till now. Yeah, exactly. A very well-respected person, and it was a very legitimate-seeming company. It had very real, serious people. And I've been watching the Theranos show, and I'm not drawing a parallel between those things right now, for legal reasons.
It's funny. The CEO steps down February 2025, goes on the board. They hire in and bring in a new CEO, Manpreet Ratia. Sorry. Previously held senior roles at Amazon, Citibank, Flipkart, a bunch of them; a senior business and tech leader. And they come in and they just go, oh my God.
Like they see behind the veil and they're just like, this is not... This is... This is fraud. Sure. This is AI washing. Yeah, yeah. It's a strange new... But a real concept. It's like you are...
You would think it would go the other way, that it's like, oh, this is pretending to be human labor and human creativity and human effort. And it's like, oh, it was actually just an LLM. It's like, weirdly, in this investment ecosystem, you're better off going the other way. Yeah.
Yeah. I haven't heard that. That's good. Yeah. Well, and we talked about the investment, $450 million: Microsoft, the Qatar Investment Authority, SoftBank, large-scale institutional investors that you would think, you would think, to be frank, the due diligence process might have revealed something.
Like, I think it might have revealed this. So you already made reference to Theranos and. Sure. Yeah. I don't know. I don't know how we can not talk about the. Well, and that again brings me back to the story that you can tell, which is like, oh, this is proprietary. We're not going to let that auditor into this room. He used to work for our competitors. We're not going to let that person come take a look at the lab. They used to work over here. You can thread that needle for years.
Like I'd read the Theranos book right after it happened. And I know that, I think there's a movie coming out, isn't it? Or a miniseries or something? There was a show that I've been watching. It was quite good. So it's already out. Yeah. I haven't seen the show, but I did read the book way back and it's, it's this, like they were claiming that they had this intelligent blood testing solution. And then they were just actually mailing blood samples back and testing them in a lab the same way everybody else was. And it's like,
Here, it's like we've got this intelligent software development platform, and instead they're just mailing software requirements documents back to India and having people build them. It's like, same, same. Yeah, it's a story like our first one, where you have an actually surprisingly well-run Russian malware-as-a-service operation. Yeah, exactly. That still kind of unraveled a little bit because of a lack of thorough checks and balances inside of the operation.
To see something similar happening here where you have 400, the better part of a half a billion dollars from Microsoft and SoftBank invested in a platform, it just hits really, really differently in a story like this.
Yeah. It's also funny because they talk about it, and I'm just going to keep talking about Theranos in relation to this, but same thing happened there where it's like you're getting all these marquee investors, you're getting these big VCs, you're getting all this real money, you're getting these board members that are- Walgreens is on board. Massive prestige. All these little signifiers of legitimacy keep coming out, keep coming out.
And nobody wants to miss out on the technological revolution. But all of a sudden you get this halo. You're like an angel and you're protected. And all of a sudden nobody can scrutinize you. There's whistleblowers that are calling the Wall Street Journal and they're writing exposés about how you're a fraud. But nobody listens to it because Microsoft just gave you $200 million. Yeah.
You've got a bunch of money and this journalist just has a grudge and you're going to fight every single point and you're going to sue the newspaper and, and, and, and, and it's all sort of secondary to the larger point that the accusations are maybe true.
But there is a bit of irony here that there's so much discussion about is software engineering going to die? Are AI going to take all those jobs? And these are the people that were doing the socially just thing and they were taking the AI money and giving it to the people. Giving it to the people. In this TED Talk, I will argue that what I did, what you call fraud, was actually the most moral choice of all.
Before we move on, how is the Theranos show? I'm intrigued. It's quite good. I'd say the show has a little bit of padding. It's like if it was one episode shorter, probably all of the episodes would have been better. I will say Amanda Seyfried is the actor who plays Elizabeth Holmes.
Tour de Force performance as far as I'm concerned. I think she crushes it. Does she do the voice change and everything? And you watch it happen and you watch her test it and then someone calls her out and she waffles on it and then she tries it again. The introduction of the...
The Elizabeth Holmes voice. Elizabeth Holmes. It's almost a plot point. It should be a plot point. It tells you so much about who they were as a person. It emerged with, yeah. How they understood perception. If I'm remembering right, accompanied the emergence of the Steve Jobs turtleneck, which sounds like I'm making a joke, but I'm not. You're not, no. The sort of Steve Jobsification of her.
It became a bigger part of her identity the more flak she was getting and the harder the hustle was getting.
I feel like we're ruining plot points for listeners, but I think we should cover this just in the tiniest thing. So Elizabeth Holmes, founder of Theranos, fraud, blood testing, making these mobile blood testing units. You could just go into Walgreens and you'd get a blood test done in a short period of time. Anyway, it turned out it was the same as this. They were taking blood samples, mailing them back.
They had a machine called Edison, whatever. But Elizabeth Holmes apparently was obsessed with Steve Jobs to the point that she adopted and manifested and projected Steve Jobs' energy. Black turtleneck, glasses, the whole nine. And she changed her voice to be lower and more manly because she thought it commanded more presence and more authority. Yeah.
So fascinating, fascinating character to have a show based on. Also in recent news, her partner is now founding a blood testing company based on technology and AI. So TBD on that thing. Of all the businesses you can start, my guy, like what are you doing? Like this company, Builder AI, I think hit one and a half billion peak valuation on fundraising. Theranos hit nine billion.
Yeah, it was being set up to be the Apple of medical technology. The way people talked about it, she is Steve Jobs,
not initially, but now, reincarnate. She is here to take this giant, slow-moving colossus of an industry and make it digital and modern and sleek and move fast and break things. And the fact of it was that they just couldn't crack the technology, and you keep raising more money and you keep making more deals, and you keep raising more money and making more deals, but you just don't have a machine that does the job you're selling. At a certain point,
You run out of track to lay in front of the train that's already moving, and it crashes. Fascinating story. Does remind me of Builder.ai. Totally. Yeah. Complete parallel for me. Completely. Except Builder.ai got run over by the thing that they were actually supposed to be doing. This is true. Theranos just got caught lying
about doing something that they couldn't do. Yeah, it would be as though someone else had come along with small... the whole point of their thing was that you don't need to take a bunch of blood. People that are constantly getting blood tests have to have blood drawn all the time, and it's apparently a really, really traumatizing experience for people that are going through some kind of long-term medical
care, to constantly have so much blood drawn. So it was tiny little blood samples. And it would be as though someone else invented micro-blood-sample, thorough, full-panel blood testing in the middle of them lying, saying they had invented it. It's like, oh, how did you do it? Show us yours and we'll show you ours. Show us yours. You want to just, like, speed-run some little news stories before the end? It's been a minute since we've chatted. Yeah, let's do it. Let's do it.
I guess first and foremost, dub dub, Apple WWDC, just happened. Crazy. You got any thoughts on that one? God, it took them long enough would be my main thought, as somebody that's been waiting for an iPad version of the Mac forever.
Like, I recently bought an iPad. Jordan knows this. Because I wanted something to write notes on. That's literally the only reason I spent a grotesque amount of money. Because they are so expensive now. I could have bought another MacBook for less money than an iPad. Yeah, they're not cheap. I don't know if that's actually true, so don't hold me to it. But it was so much money that I feel like I could have bought another laptop. The...
Yeah. It is just like, why did they not... iOS is based on macOS. They're the same core, essentially. Different UI kits. But the new iPad UI kit is so similar to macOS. And I imagine Liquid Glass, the new UI template... Unified, sure. Yeah, is going to come to macOS. They're just going to be the same. Can we just make them the same?
Never. No, they will literally never do that. I love seeing window management. I love that they just gave us a menu bar and the stoplights in the upper left-hand corner. All great. I love that. It will make things a lot more efficient. Finder and the horrible Files app converging towards Finder is good. The old ism,
which is that you buy a Mac, you're buying a computer; you buy an iPad, you're buying a list of things you're allowed to do. Totally. Still remains unfortunately true. Like, this episode will be edited inside of Logic, and I will use plugins that I am not able to use in the iPad version of Logic. It's like, well, until my core functionality of a computer is added to the list of things I'm allowed to do on an iPad...
It can't become my daily driver. But we inch ever closer. My new iPad Air has an – like I can buy a MacBook Pro with the same like logical infrastructure as my iPad has in it. It's the same chips. Same chipset, same everything. So like why can't I just run – like why can't I just choose to run macOS on it?
Like just let me. No, but what if instead, what if instead we used all that processing power to run the bougiest animations on everything you've ever seen in your entire life? Yeah. We call it liquid glass, which I don't hate as much as some people, but I am assuming is going to have to change so significantly before the actual launch in September, because it is quite often completely unreadable.
Yeah. Yeah. I... if we want to talk about UI design. Sure. Yeah. I think it's cool. I think that usability-wise it's going to be tricky, especially for accessibility. Yeah.
Is it so groundbreaking? Like, do Gaussian blurs and lens effects impress me? Like, they were in Photoshop, too. It's like... Yeah, Windows Vista is the thing everyone's acknowledging. Yeah. You've done this, like, refractive light glass thing. Totally. Like, that's...
It's nice. It's fine. It's cool. The one thing, and you brought it up and touched on it, is the amount of processing and rendering power. People that I've seen running the dev version of iOS 26, I think is what it's called. Yeah. Because they jumped the version numbers ahead. They're doing the car naming thing. Yeah.
Anybody that I've seen running it talks about how much detail is being rendered into every UI piece. Like the new Finder app has drop shadows and shadings and renderings. And yeah, knowing that, as a gamer, the first thing I do on Windows is turn all of that off so that my computer runs faster. Certainly.
When I'm talking about a mobile device with a mobile device battery in it, running non-mobile device chipsets, I'm in no rush to care that I'm going to have beautiful Gaussian drop shadow blurs and blah, blah, blah. I'm going to care more about the fact that my iPad gets more than two hours of use before the battery dies. Yeah, exactly. But don't worry, we're making a thinner phone at the expense of the size of the battery. It's going to be a lot of fun. Yeah, that's interesting information.
I was sitting there with a clicker. I didn't actually do this, but I was metaphorically sitting there with a clicker trying to count out every time they said the word Siri in the talk. Oh, really? Like just in the back of my head. And I didn't watch every second of it, but I would guess maybe one time they brought it up. There was like a very fascinating...
talk about Apple Intelligence and Siri without talking about Apple Intelligence and Siri, because we are in this little window of time where they have made a lot of promises and clearly haven't quite caught up with them. Which is funny, because there was stuff inside of this that is what the Apple Intelligence announcement probably should have actually been: really, really good translations. Yeah. Using onboard LLMs.
That's great. Table stakes for a mobile operating system in 2025. Keep it coming. There were a lot of little quality-of-life things depending on nice locally run LLMs, where you can tell the Apple story about privacy and on-device and it's your thing. All of that's great. But now it's in the shadow of this like,
Siri will be God, Apple Intelligence will run your life for you story they told like nine months ago. And they're sort of just... they painted themselves into a corner. Let's hang here for a sec, because I, and a lot of other people I know, am really into AI. And I will now say that I'm, like, an AI guy. Apple is shockingly behind. Oh, yes. For a company that has...
Yeah. Yeah. No, it's like a problem. It's like the building is on fire. It looks a little toasty in there. Like, yeah, I don't know. It's really bad. Siri has been around for so long. They have made... and this is not a knock on the Siri team, I'm sure they're doing things. But as a user, it doesn't feel like Siri has improved since the first time I used it. No.
I use Siri to turn on and off smart lights in my house, and that is it. Oh, and it can barely do it, by the skin of its teeth. Like, it's really rickety. And that alone is a nightmarish scenario that it doesn't understand 90% of the time. And nowadays, we have the Johnny Ives, which is not the right pronunciation of his name, moving to OpenAI, and they're talking about making a screenless AI device.
And it's like, there's good stuff out there. Like, I can have a conversation with Grok. Who else has voice mode? OpenAI, Gemini has voice mode. Gemini's great. Yeah. Yeah. Like, I can have conversations with these AIs that are doing deep research and, you know, retrieval-augmented generation and all kinds of stuff in the background. And then I ask Siri to turn off the lights in my office and she's just like,
Playing "Lights." And you're like, what? No, I don't want to listen to some electro-pop. It's really not good. No. Here's my theory. So at dub dub this year, they also spent a bunch of time on Spotlight, the Mac tool where you hit Command-Space and can search for files.
And it's always been very useful, but a little hacky, half-baked. There were a bunch of secondary pieces of software, like Raycast, that gave you a bunch of functionality. Shortcuts. Exactly. Yeah. Um, and I would say some of them got Sherlocked a little bit, which is to say Apple built their functionality in. Exactly. Um, if anybody doesn't know, that term comes from functionality being taken from a third party... not stolen. God, I'm going to get myself in trouble in this episode. Um,
a lot of the functionality that's in Spotlight now wasn't originally Apple's. Apple's old Sherlock search tool duplicated the features of a third-party app called Watson, which is how "Sherlocked" became the shorthand for this kind of thing happening. Anyway, there's all this functionality now built into Spotlight. You can string together shortcuts. You can tell it to do pretty complex things. And it's a tag-based system. You need to activate the little "I'm sending a message" part of it.
But in Spotlight, you're like, oh, this is all of the hooks into this system in a little text box. Not quite a natural language text box, but you're dangerously close
to having a thing that is closer to what Siri should be in Spotlight than what Siri currently is. And so it's this question of if you build all of these hooks into the operating system, you get Spotlight to the point where it can almost use the computer for you. You've built a lot of the scaffolding of saying, now we're just going to run a large language model on top of it that can connect through to those. And I would bet
there will be an API that they can expose to other large language models, that's like, here are the couple hundred hooks that we use to get into macOS. If you're approved, you can hook into these too. And people can say, you know what? Same as I use Google search as my default in Safari, I would like Gemini to be the default voice assistant. And yes, I would like to give it permission to use my system for me. Yeah, well...
Couple things to that. That API hook for other systems already exists... it's called MCP, the Model Context Protocol, which Anthropic built and the rest of the industry has since adopted. And essentially all of the feature set that has been exposed to Apple Shortcuts and all the rest of those, all those application-level functional exposures,
will all be bundled up in MCPs eventually. And not only will Apple's OS provide an MCP, but each of those apps will have one. I can't remember who said it, but somebody recently said, if you're a SaaS company
and you're not exposing your stuff as MCPs for agentic use, then you will be replaced by a system that does. That does, because that's how people are gonna be querying these systems. Correct.
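For anyone who wants to see what that looks like in practice, here is a minimal sketch of an MCP tool server, assuming the official Python MCP SDK and its FastMCP helper; the to-do app and its add_task tool are made up purely for illustration.

from mcp.server.fastmcp import FastMCP

# Hypothetical example: a to-do app exposing one action for agents to call.
server = FastMCP("todo-app")

@server.tool()
def add_task(title: str, due: str = "") -> str:
    """Create a task on the user's behalf."""
    # A real app would call into its own storage or API here.
    return f"Created task: {title} (due: {due or 'unscheduled'})"

if __name__ == "__main__":
    server.run()  # serves the tool over stdio so an MCP-capable agent or host can connect

Once an app ships something like that, any agent that speaks MCP can discover add_task and call it, which is the "exposing your stuff for agentic use" idea in one file.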
The next thing I'll tell you is... I'm not sure what you use for web browsers, but one of the things that I found helpful is, there's this thing, I can't remember exactly what it's called, let me just pull up my settings here. I'm using Firefox, and there's the ability to do custom search engines. So you use bang GPT or bang perp. So when you open up a new tab and the search bar comes up, I start it with exclamation point GPT, and then anything I type gets sent to ChatGPT 4.1.
Anything that I type after bang perp goes to Perplexity. Anything after bang grok goes to Grok. And so, like, 90% of the time when I'm Googling stuff now, and I'm using Googling as a verb, I'm not even sending my questions to Google anymore. I'm going to one of the AI assistants that brings me back a summarized, cited answer with exactly what I'm looking for, rather than me having to spend 10 minutes looking through pages for it.
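If you want to set that up yourself: in Firefox you can add a custom search engine (or a bookmark with a keyword) whose URL has %s standing in for whatever you type. Something like the pairs below is the idea; the exact endpoints are assumptions worth double-checking against each service.

!gpt    https://chatgpt.com/?q=%s
!perp   https://www.perplexity.ai/search?q=%s

Type the keyword in the address bar, then your question, and the query goes straight to that service instead of Google.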
Which, I think, what you're bringing up connects through to my biggest argument for why Apple might not want to be territorial about the LLM natural language layer, the thing that lets people interact with the computer, being theirs. Like, there's an argument of: yes, we have a lightweight LLM right on the phone, privacy, great. That's cool.
People hate Siri and they've been getting bruised up by that for years. So how much they want to own that conversation layer is undecided, probably dependent on the quality of the model. But the bigger argument is that right now, Apple and Google are in this like tango of antitrust cases with the US government, European regulators around the world, dependent on these questions, very old questions of how many billions of dollars is Google allowed to pay Apple in order to be the default search engine?
How much money is a company allowed to pay before it becomes an antitrust issue? And going, you know what? You're right. We should never have been letting them pay all that money for search queries. Now, the large language model layer, which no one can say there isn't a lot of competition for. Totally. You can say that about Google search, but you can't say it about the LLM layer.
That? We let tons of people give us billions of dollars to put that on the iPhone. And I'm like, that to me is a really, really good argument to build your system in such a way that if people like talking to Gemini, you can let them control the iPhone with Gemini. It sounds very un-Apple-y, but it seems like all of the signals are pointing in that direction. Yeah. I'm not an Android user, but I know a lot of people that are. And my brother got a new phone at Christmas, and it came with Gemini Pro. And it was still...
1.5 pro at that point, I think. I don't think 2.5 was out. And it's no longer pro. They've dropped the pro, just so you know. It's now just 2.5. Google changed a name and a thing and made it confusing in the process. Yeah, exactly. Shocker. But he'd never really used it. And I was just like, we would be sitting having a conversation about something and I'd be like, just ask Gemini. And he just got into the habit of being like,
What percentage of vote, blah, blah, blah. Like any question you have and boom. Talk to your computer. Yeah, talk to it. It's just puking out cited answers and you're just like, there's the answer. Like we could have sat here and argued about it for 45 minutes. Now we have the answer. Yours can do that now. Yeah, I think that's going to be...
That Spotlight thing... I'm very excited, as an iPad user, for window management in 2025 on my thousand-dollar computer. Mind-blowing concept. But I think the actual future of all this stuff was hidden in that little Spotlight demo, where it's like, oh, you can kind of talk to it a little bit. Oh, you can have it string together shortcuts and do things recurrently, almost like a little... like, there's a lot of functionality hidden in that five minutes of the demo. Yeah.
So I'm just going to go back in time here. I'm not even sure if this was on the episode or just a passing conversation, maybe between you and myself, or me and friend of the pod Matthew Satchel, or Matthias, the art director who did our art for the show. But I made that same argument when Siri came out originally. I was like, they have all these hooks. They just need to expose it to Siri. They need to do all this stuff. They're building the ecosystem. It's going to be good. That was 10 years ago. It's still not good.
Like, it has those hooks. Siri has those hooks. I'm just hoping they get a reasonable voice model to run Siri. I think Spotlight shows that they're panic-building the hooks in the background, and it's just how they go about exposing them to those LLMs, because Siri was always a non-LLM-based conversation. It's like, okay, well, that sucks. We know that sucks. We know this rips.
Just put it all together, guys. Like, you have all the parts for this to be good. I didn't actually watch dub dub, the whole thing, but I watched the specific pieces that I cared about. The highlight reel. Yeah, yeah. But I did watch the Apple AI, MLX... MLX? Yeah, the MLX presentation of their entire Apple LLM kit. Yeah.
And they've built an entire infrastructure. Like... anyway, for anyone that's not into AI: running DeepSeek R1, which is a free, open-source model provided to us by our friends in China, requires basically a supercomputer to run efficiently. You need something like 470 gigabytes of VRAM.
So if you were to buy Nvidia chips and Nvidia cards to do that, not even the consumer ones like we have in our PCs, you'd be close to six figures. Or you can buy a $10,000 US Mac Studio with an M3 Ultra that has 512 gigs of unified memory running at an insanely high memory bandwidth, around 819 gigabytes a second.
And you can put DeepSeek R1 on there and run it at an essentially functional speed. So for 10 grand... Apple has built hardware, that M3 Ultra Studio with 512 gigs of memory, that is built for nobody besides AI people.
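For the curious, running a model like that locally with Apple's MLX stack looks roughly like the sketch below. It assumes the mlx-lm Python package and a quantized community conversion of the model; the exact model name is illustrative rather than confirmed.

# pip install mlx-lm  (Apple Silicon only)
from mlx_lm import load, generate

# Assumes a 4-bit MLX conversion of the model is available; swap in whatever quantized build you actually have.
model, tokenizer = load("mlx-community/DeepSeek-R1-4bit")

prompt = "Explain why unified memory matters for running big models locally."
print(generate(model, tokenizer, prompt=prompt, max_tokens=200))

The whole trick is that the weights sit in that one big pool of unified memory, so the GPU can reach all 512 gigs without shuffling data across a PCIe bus.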
Right. Nobody else needs 512 gigs. It doesn't matter what kind of rendering you're doing or anything. You don't need that kind of VRAM. You haven't seen these Liquid Glass textures yet, Scott. That's true. We all need an M3 Ultra Studio to run our OS. No, I take your point, though. Yeah. But anyway, the MLX, the Apple AI stuff, is actually pretty cool, and they're actually doing some pretty cool stuff.
And they're exposing a lot of abilities to make it really easy for people to fine-tune models. So, like, low-rank adapters... creating low-rank adapters, which, if you don't know what that is, is like a small custom-trained piece that gets attached to the base model to change some of the weightings. Take DeepSeek R1 and feed it, like,
10,000 examples of our customer service tickets, and it'll create a little adapter, a little model augmentation, that we glue onto R1. And then all of a sudden we have this customer service model that's been custom-trained to deal with our customer service. And they're really starting to build for that future where enterprises are looking to internally leverage AI, I think more so than you're seeing with the Googles and the OpenAIs, where they're building to the market, the generalist market, rather than building siloed custom solutions internally. Which is a thing Apple's kind of carved off and said, this could be big. And I kind of agree with them.
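To make the adapter idea concrete, here is the low-rank trick itself in plain NumPy. This is just the math, not Apple's MLX API: the big weight matrix stays frozen, and two skinny matrices get trained to nudge it.

import numpy as np

d, r = 4096, 8                      # hidden size of the base layer, and the small adapter rank
W = np.random.randn(d, d)           # frozen base-model weight, never touched during fine-tuning
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # trainable low-rank factor, zero-initialized so the adapter starts as a no-op

def adapted_layer(x):
    # Original projection plus the low-rank correction (the "glued-on" adapter).
    return x @ W.T + x @ (B @ A).T

# Fine-tuning on, say, 10,000 support tickets only updates A and B,
# which is 2 * d * r numbers instead of the d * d numbers in W.

That's why the adapter is cheap to train and tiny to ship: it rides along with the base model instead of replacing it.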
Yeah, I mean, they're in an interesting position. Like, Google's operating system is a web browser. Like, they do developer conferences and they have platforms and, hey, maybe AR will be the future. I don't know. XR, Jordan. My apologies, my apologies. Yeah.
But it's just a fundamentally different company. Like their hardware, like what is their hardware for developers? Like it's just not this, they're not analogous to one another, which is what makes the fact that they're in some ways ahead so fascinating. But it does speak to what Apple could kind of rush in from the rear on, which is that kind of stuff. Totally. And again, like the couple of things I want to talk about there, but the first is Apple.
Change takes time, especially the bigger an organization, right? Like if you're a one-person shop, if you're an independent consultant to small business, it's just one person. You can change and you can pivot quickly. If you're a 60,000-person logistics company, like the change... Big ships move slowly. Exactly. And so it's like there's going to be two different models working there and they're going to be competing a bit like...
You're seeing that with AI startups. There's so many of them coming online so fast. A, because AI is facilitating their development, their research, their planning, their everything. As a small team, you can just move at a pace that's unprecedented. Whereas next to that, big, big companies
are just looking at marginal gains, marginal shifts. Like if you're delivering natural gas to the households of North America to keep their houses warm in the winter, it's like, you know, you're, you can't move as quick because of security, risk, liability, et cetera, et cetera. So it's like, yeah, it's going to be, it's going to be a fascinating decade. Like by 2030, it's going to be fascinating. The second thing I want to talk about there.
was how badly I want a pair of AR glasses, or XR glasses. Really? You're in. I'm in. Do you want the...
Now to clarify: XR as in the current generation of functionally VR glasses with video pass-through, or do you want to go over to the other side, which is the glasses with the teeny tiny non-mapped HUD? Which side of that do you think is useful in 2025? I'm talking the Google XR prototypes that they've been demoing at their AI conventions. They are essentially Meta Ray-Bans, but with screens in your eyes.
They are, they are literally the contact lenses from my dystopian graphic novel. They have cameras. They have microphones. We should clarify that, because we got an email... it's unpublished. Cause I think some people went looking for that. Oh, really? Yeah. Unpublished. I just wrote it as a hobby project. I should publish it now, but it'd be maybe a little too real to publish now. The, uh,
Anyway, they have microphones, they have cameras, they have everything. Like if you haven't seen the demo of them, you should watch it. Like somebody wearing them will like look at a bookcase and be like, like just briefly glance their eyes past a bookcase and then be like,
Hey, Gemini, do I have a book on UI design? And they'll be like, yeah, it's the third book on the second shelf. Yeah. It's like, OK. Yeah, it reminded me-- I saw that tech demo. It's very similar to the Orion AR tech demo that Facebook did eight or nine months ago, where it was like you managed to get actual-- Heads up. Heads up display tracked into a pair of what kind of look like normal glasses, chunky.
but like a pair of glasses with a heads-up display mapping content into the real world. And, you know, pick your poison a little bit, but I'm more likely to use one created by Google, and more likely, I would say, to get value from the software provided by Google. That's just my personal experience. I'm not a big Meta product person. I don't love having it in my pocket. I'm very disinclined to put it on my face. And for however much that's still...
And for however much that's still true about Google, it is, I would say just for me personally, less true. Like, I'm more likely to want to pop a pair of those on. Yeah. I'm intrigued by that. I'm curious when that's going to get to consumers, because I've heard that that's a mass-production issue. It's like, yes, we can do it, we can make 11 of these and they're incredible. And it's like, amazing, can you make 7,000? They're like, no, no.
It's like, okay, we'll come back when you can do that because I'll give you money. I would genuinely like a pair of cool glasses that can talk to me and see the world. That sounds kind of neat. As somebody who liked the first iPad, I remember Steve Jobs, his classic speech sitting on stage, and he's like, it's really powerful to hold the power of the internet in your hands. I can't remember his exact words, but he was like, this is like a, it feels monumental. And I feel like
Glasses like that, if built very well, functioning very well, will feel like that. It will be like, oh my God, like I am, the technology and life are interwoven now rather than like two separate silos. Like I go use technology in my life, but like now they're together. And that's going to be, it's going to be a cool thing. It might be scary and it might lead us to seeing ads on literally everything, but. Sure.
Which is another thing that's cool to talk about. I know we should probably wrap this up because we're just shooting it now. There's been a lot of conversations about what a post-Google search world looks like.
for advertising. Sure. Like if Perplexity, Gemini... I'm just literally looking at all of the bookmarks on my screen. Of course. Gemini, OpenAI, Grok, Claude, Perplexity are all feeding me the answers. I'm never going to web pages to find them. No. And Google search... like, so much of Google's revenue is associated with their ad side. And like, what does the world look like when
it's not serving those ads anymore. There's no value to them. Nobody's looking at those ads. And I would agree. And I'd say it brings up an even larger question of like,
Well, ads are the financial engine of the internet. And you could just scale that question up like, well, what even really happens to the internet at that point? Like for the last 15 years, we've been living in an economic situation where Google makes, and I'm going to round some numbers here, a buck for every penny that Conde Nast or the New York Times or any one of the actual creators of this content tend to make off of advertising. They monetized the new internet, which is now the old internet, better than anybody else. And they became one of the largest companies on the planet as a result.
If you were no longer driving any traffic anywhere and you're just querying information from a database that was previously barely financially viable due to advertising, and there's now no eyeballs to see the advertising, what is the economic model that makes any of the content produced on the internet viable? That to me is completely unclear. If you were in the text business, I'm like, oh, I just don't know how that's going to work for you. It barely works now.
This won't improve that at all. My gut response to that is... there's an interesting thing I heard on a podcast the other day, where they were talking about how AI is creating two types of people: hyper-consumers of content and hyper-producers of content. And I think that's only going to get bigger and bigger as
I'm trying to figure out the nice way to say this... crossing my fingers and hoping that AI leads to a life where we're not as busy, for lack of better words. Where we see economic efficiency grow substantially while not requiring human output to go up equivalently, if that makes sense. Like, you know, so much... and I talk about this somewhere that will become public at some point.
The technological revolution obviously grew our economic efficiency as it facilitated us to work better, faster, and harder. But it came at the cost of me having a pager, a Blackberry, an email. There's an expectation that every time something gets done, I get a message, I get a notification, I have to respond to it.
I'm hoping that AI is the disconnect for that, where now all of a sudden it's like we can grow our economic efficiency, but we can steal a bit more of our life back. I can disconnect and still be a productive member of society, et cetera, et cetera. I forgot where I was going with this. Yeah, we were talking about the scale of production of information in the system versus the consumption of it. So what do people do in their spare time? They consume or produce content. Mm-hmm.
And it's like, I think that the monetization of content is only going to get bigger and bigger. My worry is that when it's a commodity, like literally a commodity at the scale that it's at... it's like, what were the deals that Reddit did? And I'm just thinking, an individual creator could theoretically make enough money to live by putting ads on a really popular blog. Reddit can command a deal from Anthropic or Perplexity or any of these companies, but they need to be producing, like,
like what, like 3 million tokens of AI-parsable information a day? It's like, no, it's literally a commodity. Think of bales, think of giant shipping containers worth of human output. That's what's valuable to these systems. Unless right now, unless we start going like, no, if you have the actual answer to a human query,
That's valued in a different way. It's not just another little bit of language being fed into the system. It's like, no, you can't monetize that the same way. Or no one will produce those answers, and the internet will stop being useful. It'll have to start making stuff up. You need to find a way to make the answer-creation process monetizable on the internet or you won't have answers. Yes. But I guess the,
To loop back, I think you're going to see, to speak to the hyper-production of content that I'm talking about... let's talk more about the Spotify deal with Joe Rogan. Sure. I think we're going to see more of that stuff. Big influencers, big content creators. Sure. A different kind of content and a different relationship to it. And the monetization of that is going to be, I think, maybe one of the big byproducts of that. I would agree with that.
How to put ads and content and product in front of people who are consuming content. And I think that that's, I think truthfully, and this is going to sound weird and dystopian, but like the Twitches of the world, the YouTubes, like those are the things that are going to get more and more valuable as humans have less to do due to AI and
They will consume more because idle hands are the devil's playground. I think you're getting to maybe the heart of it, which is that how many websites haven't I clicked on because the Google automated response was serviceable? Probably quite a few. Just human nature. The answer's right there. I read it. I don't need to continue on. How many songs written by AI have I listened to on Spotify? None. None.
How many books written by AI have I read recreationally? A book takes six hours to read? None. Information is a commodity, the value of that will be driven down, but the value of authorship and our relation to authors, creators, video, audio, whatever it is. Perspective, country. That remains valuable. The economics of how it will be created will change, and the labor behind that will change.
But as of right now, the thing I keep coming back to when it's like, humans will need to do truly nothing, there's nothing AI can't do better... it's like, what's your favorite song written by one? Yeah. Because I can list 50 songs that have mattered to me so much, and not one of them was written by AI. And I would guess that that list will remain entirely human-authored, because what's valuable about it is my relationship to the author. Totally. I think that AI can't do better.
Doesn't understand human emotion. I think it understands it from a clinical perspective. But yeah, it doesn't understand it. So it will never have that same connection. But the thing for me is that the...
We're already living in a world where major influencers are essentially full grade A celebrities. It used to be micro celebrity. We joked about it for decades. It's macro. Now it's full blown macro. If you're a top 10 streamer, other celebrities want to meet you.
You know what I'm saying? They want to come on the stream. They want to come on the pod. They want to come on the show. So my perspective is that entertainment is going to become the new platform for marketing. And it's always been a platform for marketing, but I think that it's going to become the biggest platform for marketing. I'm curious what people are going to feel about the...
30-minute thing appended to the end, because we've tried doing chatty-chat episodes. We've done very structured stuff. This had, like, a story, then another story, and then what I thought was going to be five minutes of talking about dub dub. And you just got, like, another pod added to the end. So I'm curious. I'm curious. I hope, I hope y'all like it. It was quite fun to do. We used to make this thing called... well, we used to make this thing called Hacked After Dark. And this essentially felt like a Hacked After Dark, because it's like,
Jordan and I usually sit on these calls after we make the episode and before we make the episode and talk about this stuff. So maybe we just leave the mics on and the cameras on. Honestly, if you like this, let us know because we could just keep doing this and keep all the chatty chat as like a nice little vestigial thing hanging off the end of the episode, which was, of course, brought to you by Push Security. Absolutely. As always. Very fun.
I think without any further ado, we'll catch you in the next one. Take care.