Back in 2018, a global power management company called Eaton Corporation demoted a man named Davis Lu. He'd been a senior software developer, but during a corporate reshuffle, his role was allegedly downsized pretty significantly. He had a bunch of responsibility taken away and his system access was limited, though apparently not that thoroughly.
We can't know what happened in his head at this point. What happened at all was debated in court for years. And Lu's defense still argues his innocence in spite of the recent guilty verdict. But here's the story that was told in court. Davis Lu, downsized, demoted, and slighted by this company, Eaton Corporation, embarks on a project.
The development of a kill switch script called IsDLEnabledinAD, which stood for Is Davis Lu Enabled in Active Directory. The only reason he wouldn't be is if he had quit or been fired. The script was a kill switch set to go off in the event of his termination. Now that the court case is finally finished, Lu faces up to 10 years in prison for what happened when that kill switch went off.
We've never really dug into this kind of story before, of internal sabotage, of tripwires and kill switches left behind in a network. But there's a fascinating history of this kind of thing that I wanted to learn more about. So we've got a few stories this episode, but we're going to start here with the tale of Davis Lu and the kill switch script, here on Hacked. So
How's it going, Jordan? It's good. How are you doing, man? I'm doing pretty good. I'm doing pretty good. I like the game of chicken that we now play of imitating the theme music, and then it's who's going to ask how the other one's doing first?
That's right. That's right. It is a game we play literally every time we make one of these episodes. And I don't know who's winning, but probably not me. I think I had a good year-and-a-half run of doing that every episode. And now the sort of tennis match has started. So are you behind? Yes. Are you catching up? Yes. Yeah.
I think we should open by acknowledging a lot of the positive feedback we've gotten from the last episode. Our little observation. Actually, I would say our audience's observation, and their action in telling us about the issue that we created with the ads. We've had a ton of positive feedback, so I just want to say thank you for that. Warms my heart. And then I've got to give a shout out to Joseph De La Cruz, a listener on Spotify, who
reached out to let me know that the version of Discord before Discord that I was trying to think of is called TeamSpeak and that is 100% correct and I completely forgot about it because it is no longer relevant to me. So thank you, Joseph, for reminding me.
It's wild that someone caught that, because we, like, vaguely alluded to a thing that was kind of like Discord before there was Discord, you know, chatting within a game, and someone heard that and was like, they're referring to TeamSpeak. Love that. Love that attention to detail.
We also got another comment that was literally just a, hey, can you call me? It's probably not a good idea, but here's my phone number. To which I offered Jordan $50 to actually call this person, record it, and find out what it was. Did you do it? I...
I haven't decided not to do that yet. That comment came in and then the weekend happened and it was a full weekend, but I'm trying to figure out a way to call that number and record it. Yeah. I'm super intrigued. I want to know. I want to know, especially because the comment flagged, it's not the best idea, but call me. I was like, oh, that's provocative. It's like, hey, you shouldn't call this number, but here it is.
I feel like that's one of those literary writing prompts. The prompt that begins: a man calls a phone number he's told not to call, but invited to. This is a terrible idea, but you should call this number. You should really reach out to me. I think I'm going to. After I tell the audience that the show is brought to them by Push Security. You'll hear more about them later in the show.
It's been a while since we've done a little newsy, chatty, multi-story update. And I was pretty stoked to dig into this one because I found it fascinating. Like I said in the intro, we haven't really ever... I kind of went digging through the back catalog and I don't know that we've ever talked about this kind of internal sabotage-y type story. And as we'll get to later in talking about it...
It's not the first of its kind. No. There's a really fascinating history of these kinds of tripwires being left behind in networks by folks that previously had legitimate access to that network. I feel like this speaks to a part of our
origin, our reptile brain: no, I'm essential, and if anybody does anything bad to me, they must pay. There's a reason why revenge movies are a massive section of Hollywood, like John Wick. They fulfill some primal urge inside of people to be like, yes, I need revenge for things done to me.
Good analog. These cases are like the IT version of that. Before we get to the story: there's a genre of internet meme content where it's comparing different fictional characters and being like, could this one take on this one? Could this one take on these five? It's just a thought puzzle. And the thing that you always see whenever John Wick is invoked in one of those is, did someone kill John Wick's dog?
Because if not, he's just a guy who's good at shooting. But if you killed his dog, he seems to take on like a supernatural kind of quality. And I like that we all sort of just know because of those films what that tripwire, that kill switch seems to be. It's your dog. It's your dog. Don't kill John Wick's dog. Okay.
So, Texas software developer. The reason we're talking about this is because, on March 7th, he was found guilty of this. Yeah.
When he was fired from the multinational power management company Eaton Corporation, it caused a system outage. Big company. Big one. Caused a system outage that locked out. This all gets into alleged language, but allegedly thousands of users worldwide. Convicted on March 7th, he's now looking at potentially up to a 10-year prison sentence for causing intentional damage to a protected computer network.
Big story, big old fallout. He'd been working at Eaton Corporation since 2007. They're a big global power management company. They're based out of Ohio. They have offices all around the world. They do electricity and hydraulics. Well, Eaton does, I think, a lot of, they build a lot of componentry for electrical implementation, be it commercial side, industrial side, infrastructure side. They build so much. They're massive. They might be a Fortune 500, it wouldn't surprise me if they were, but they're a massive publicly traded company.
I don't know if I need to say it, but I own stock in Eaton. Oh, interesting. I need to disclose that. I don't know if I need to, but I need to disclose that. I think that's good. We very rarely need to do disclosures, but I own a portion of this company that we are covering on the show. Seems like a pretty good one. And I'm expecting you to come down with an iron fist as a result. I don't. I think that's maybe the big headline here: Eaton's fine. Mm-hmm.
They're good. Like, we're going to get some competing stories about the scale of the fallout from this, the amount of damage done, both in sort of human cost and in dollars and cents. That number was quite fiercely debated over the course of this trial. But suffice it to say, Eaton Corporation, global power management company, will persist forever.
Back in 2018, the company underwent what they called, and I'm going to borrow their very corporate jargon here, a corporate realignment, which resulted in a downsizing of Lu's role. His responsibilities, his access to the network, it was all kind of shrunk down. Lu had been there since 2007, and he was reportedly unhappy about this. He starts to become disgruntled.
This is all as outlined in the court case. These are the prosecution's allegations, which, because they won, are in effect the story we now have. And the story that they tell is that Lu begins quietly planting malicious code on Eaton servers after this demotion. He goes on a little bit of a tour of the system. All of this is what ends up being triggered by the IsDLEnabledinAD script. But the stuff that's underneath that banner of, let this all march forth in the event that I'm no longer employed here, is as follows.
We've got a script that's going to go ahead and just delete the profiles of a whole bunch of people that work at the company. All the user-specific configurations that let you log into your system, your settings, your files: that's gone. We've got a bunch of CPU-gobbling infinite loops. It's surprising how vulnerable...
server infrastructure is to infinite loops. This was something that I triggered in a production server when I was 13. Completely unintentionally. I just wrote a script that forked and called itself to recurse through things, but I missed an exit clause on one of the conditions, and it crashed a production server. And I was like,
there's no safeguards against this? There are safeguards against it, and things that you could do to prevent it. But it is surprising how effective just putting an infinite loop in a piece of software is at killing things. It just turns them off. They say running, but they're running doing nothing. I was trying to understand this part of it, this element of what occurred when that kill switch went off.
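A minimal, safe sketch of the bug class being described here: a recursive call with a missing exit clause. Purely illustrative, the job structure is invented, and Python's recursion limit keeps it harmless instead of letting it peg a CPU the way the real thing would:

    import sys

    # Keep the demo's blast radius tiny; a real runaway recursion or
    # infinite loop just eats resources until something dies.
    sys.setrecursionlimit(50)

    def process(job):
        # Intended behavior: recurse into sub-jobs, stop when there are none.
        # BUG: when there are no sub-jobs, we "retry" the same job instead
        # of returning -- the missing exit clause.
        subjobs = job.get("subjobs", [])
        if not subjobs:
            process(job)  # should simply be: return
        for sub in subjobs:
            process(sub)

    try:
        process({"name": "nightly-batch"})
    except RecursionError:
        print("no exit condition: it only stops when the runtime kills it")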
And I was fascinated by that concept: a loop is an extraordinarily useful thing, but if you don't create exit conditions for the loop, it's a very dangerous thing, because it can not just keep going and create a denial-of-service type event,
but it can also spiral off and create other things that start happening when the loop hits certain conditions. Totally. That was my sense of how this can go wrong, especially when someone does it intentionally. You're describing a whoopsie-doodle, and this is, oh, I can use the mechanics of that whoopsie as an attack vector. Well, if you think about the interconnectedness of all the systems these days, if you can essentially take one of them offline by putting it into an infinite loop,
then everything that depends on that in the interconnected, woven network is just hanging, waiting for this thing to give them back the information they need. So then, all of a sudden, all of the knock-on effects go out to the external systems that are around it, and so on and so on. It just shuts everything down. That's the scariness of the dependency we have on network connections these days. If we just lost the internet for a day...
It's happened. I remember the cell network went down on one provider in Canada for a day, and it was mayhem, because all of the Moneris Visa machines broke. The knock-on effects hit so many things. The same thing would happen here. I was looting and pillaging that day. I remember that. I threw a garbage can through a Best Buy window. It was the whole thing. I went nuts, man. Yeah.
That's not true, for the record. So, you've got the infinite loops, you've got the deleted profiles, and you've got another piece, I can't tell if this had to do with the profile deletion or was a dedicated task, that just blocked login attempts. But the effect of this was basically: hey, a whole bunch of people at Eaton Corporation are not going to be able to access the network.
It's an attack on the infrastructure, allegedly. I don't think it's alleged anymore. He's been convicted. No, I think now I can just say he did it. Yeah, that's true. I think the reason I keep wanting to temper it is because, as of the time of recording, he hasn't been sentenced yet.
And they're pushing for a 10-year prison sentence. And while I don't need to say alleged, because he has been convicted, that's a really big prison sentence, given that this has been going through the courts for six years. Yeah, I don't know. It's like, this is a bad thing. Don't do it. He did a bad thing. And it's like... It's not good. It had a lot of financial, social, organizational impacts. Yeah.
I don't know. I'm by no means an expert on prison sentences, but it's like, this stuff is like modern warfare. A lot of what we talk about, cybercrime hacking,
you know, we depend on these systems now. They're not nice-to-haves on the side. These are things this organization needs to run. And if he had actually managed to destroy it all, it could have been billions of dollars in losses. So it's like, yeah, I don't know. And is something being ineffective? Yeah.
Is an attack being ineffective any kind of insulation against moral culpability? Exactly. No, that's a valid answer. And is there any chance Davis Lu is ever going to do this again, such that removing him from his community for a decade of his life is going to prevent harm in the future? Probably no. And as such, we are left with a really weird moral conundrum. Society faces that moral conundrum all the time. Yeah.
I can see a courthouse from my house. Damn near. It's behind a building. But like, yes, I'm with you. And I'm sure that that question is being turned over in those halls right now.
The thing that flipped all this off, as we mentioned, is IsDLEnabledinAD. Active Directory, in this context, is Microsoft's identity management platform. A lot of companies use it for the basics of who gets access to which system or not. It's the front gate to the whole operation. Lu's script was built to constantly check his status in AD: is his account still active or not? If it was still active, this whole pot of code does nothing.
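As a rough, defanged sketch of that pattern: a loop that polls an account-status check and only acts when the answer flips. The username, poll interval, and stubbed-out check are illustrative assumptions, not details from the case record; a real version would query Active Directory, for example by reading the account's userAccountControl flags over LDAP and testing the ACCOUNTDISABLE bit (0x2):

    import time

    _poll_results = iter([True, True, False])  # pretend the account dies on poll 3

    def account_enabled(username):
        # Hypothetical stand-in for a real Active Directory lookup.
        return next(_poll_results)

    def tripwire(username, interval_seconds=1.0):
        while account_enabled(username):
            # While the account is alive, this whole pot of code does nothing.
            time.sleep(interval_seconds)
        # Account disabled: in the story told in court, this is where the
        # destructive payloads fired. Here, we just print.
        print(username + " is disabled in AD -- payload would trigger here")

    tripwire("dlu")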
But the moment his account is disabled, like it would be after he was fired, the kill switch goes off. And when it did, it locked out pretty much all users company-wide, which is exactly what happened on September 9th, 2019, the day that Lu was officially terminated. Thousands of Eaton employees in multiple offices across the globe were instantly locked out of their systems when the tripwire went off. So here's the real question. I'm shocked that that script didn't accidentally trigger at some other point. Yeah.
That's kind of what I thought. Like, what if someone was doing something, managing the Active Directory and like moved something to-
What if Lu had gotten a promotion that day? Yeah. It raises questions. And maybe that speaks to how well IsDLEnabledinAD was written. Maybe there were conditions. Maybe there was a time sensitivity. We don't know. We do know a little bit about what he was Googling during this, which was part of the prosecution's case that this was not a coincidence and that he did it. But we don't know. Maybe that happened. Well, he also named a bunch of his, I think, methods and procedures with
words that were aggressive, like, malicious intent. It was very obvious that the code was written with malicious intent.
Yeah, I think you might be talking about a different thing. One of the programs in that bundle of stuff activated by IsDLEnabledinAD was a piece of software called Hakai, which is a Japanese word meaning destruction. That is honestly the fluffiest part of all of this. When Lu turned his company-issued laptop back over to them, he deleted and encrypted all the files that were on it,
which is not in and of itself evidence of having done anything wrong. But investigators later found his web browsing history had searches like how to escalate privileges in a network, how to hide processes, how to rapidly delete files. Again, none of this on its own is a smoking-gun story.
But looking up hacker forum tutorials for how to escalate your privileges in a network, so you can do something like this, is not a good look when you are being charged with having done something like this. Yeah, taken in context, definitely a bad look. In context. Definitely a bad look. So the case goes to trial in Cleveland, where they're based. Ohio. We say that as Alberta boys. Yeah.
The evidence showed that the malicious code came from a development server that only Lu had access to, which I think was the smoking gun in this case. The code was also run from a machine using Lu's user ID. There was a pretty big back and forth about how many dollars and cents of damage this actually did.
The defense argued it was quite small, that not that many people were locked out. Only five grand of damage. The prosecution, obviously, was arguing it was in the hundreds of thousands of dollars. The truth probably falls somewhere in the middle. But the point is that the jury found Lu guilty of one count of intentionally damaging a protected computer, which is a federal offense under the Computer Fraud and Abuse Act, and which is why we're looking at a potential decade-long prison sentence in this case.
FBI Special Agent Greg Nelson said, quote, Davis Lu used his education, experience, and skill to purposely harm and hinder not only his employer, but thousands of users worldwide. There are plans to appeal the conviction. I think this is probably more common than is reported. Like, I think this is interesting. It's happening. Yes, I think it happens more than we think.
It probably isn't usually as sophisticated, because there aren't that many senior programmers with advanced privileges who can code real tripwire kill switches into stuff. But even in our company, this has happened before, when somebody's been let go and you maybe don't know about it. We're going to go ahead and talk about this while we're recording, as we continue.
Yeah, we let somebody go a long time ago, and they had, even just, social media access to certain pages and client pages, and they removed our access to them and took them. True story. I have so many follow-up questions. Okay. Well, I mean, that transitions us. I wish we filmed and broadcast this, because you could see on my face that I'm having a moment of genuine...
We've worked together a long time. I know this person. Okay. So this transitions us really, really nicely to other instances of this happening. The question I had when I read about this, because I was like, this is a really fascinating story, is does this happen often? And to your point about thinking it probably happens all the time, boy, let me tell you, it happens all the time. Yeah.
The big one I found, and there's a bunch, UBS PaineWebber, one that happened in San Francisco, Cisco, like the company Cisco, it's happened to. But the big one was from the 90s, and I found this fascinating. In 1996, a company called Omega Engineering, a precision instrument manufacturer based out of New Jersey, was blindsided by a much larger internal cyberattack,
perpetrated by a guy named Timothy Lloyd, an 11-year employee and trusted network administrator. There had been tensions brewing behind the scenes. This is not our story, I won't dig into it too much, but he was under disciplinary review. On July 10th, 1996, Lloyd was fired for these ongoing behavioral issues. And unbeknownst to the company, Omega, he'd been laying a similar tripwire-type trap. Three weeks later, on July 31st, a logic bomb
went off inside of the computer network, and it wiped out about a thousand critical manufacturing programs. Not turned them off, wiped them out. The fallout of this was $10 million in losses. Yeah. 80 people were laid off. Wow. Their operations were brought to a standstill.
The U.S. Secret Service were the ones who looked into this. They traced it back to Lloyd. They searched his home, and they uncovered stolen backup tapes, like he had been archiving the things that were then going to be destroyed. He was indicted in 1998 and convicted in 2000. The conviction was briefly overturned, then reinstated in 2001, and he was sentenced to 41 months in federal prison and $2 million in restitution. It's one of the most damaging cases of
internal corporate cyberattack in U.S. history. 41 months. And he essentially killed a company.
I could see how Lu's defense attorneys would have some grounds for comparison there. Exactly. But even then, for that level of destruction, 41 months doesn't seem like enough punishment. The $2 million in restitution is the actual, okay, well, you're devastated financially for the rest of your life. Get out of prison and go try and pay that off. And I guess the...
We got called out on Twitter for my PSAs, but the PSA on that one is have a good backup structure. It's like if somebody can manage to blow away all the files in your network, then make sure you have a copy of them somewhere. Yeah.
Not the tape that the guy's storing in his apartment. It's like, were you planning on holding it hostage? I have so many follow-up questions. Okay. In my mind, I was already thinking, should I talk about this or should I not? But in my mind, the best kill switch to go out with is ransomware. And that guy essentially did the 1996 version of ransomware. So it's like...
That would be the modern equivalent of that style: on your way out the door, you encrypt the network. But one of the best things is that, from 1996 to 2025, organizations have learned that their data is under attack, and organizations have so many options now to protect it.
They have good backup infrastructures. They have immutable, uneditable files. They have all those things. Even we have those things, and we're not a huge company. So I'm sure Eaton could go, oh yeah, this is gone? Hit a few buttons, everything's down for a few hours, and now we're back to business as usual. Yeah, totally.
It's a fascinating question when you specifically look at those types of roles, where it's like, no, you're a trusted network administrator. That level of trust is so betrayable if someone really was a bad actor and wanted to. It's like, no, we've tasked you with securing this whole operation. You have the janitor's keys jangling at your hip. You can get into any door. And it's like, I burned all the doors down. It's like, oh. Oh, no. Yeah, I don't know. I found it...
I'm still following it. I want to see what ends up happening in terms of the sort of fallout for him legally of where this all goes next. Yeah, like the scale, like the scale of this one was large. The scale of the one that you mentioned in the 90s was huge. It's like, I think that the amount that this happens at a smaller scale...
Oh, yeah.
Nowadays, the best practice is you pull somebody into a boardroom to let them go. And by the time they leave that boardroom, their access is turned off. Their physical access to the site's turned off. And somebody escorts them to the door and sends them on their way. And I think that's probably a change in practice due to the fact that so many people...
There's a movie, we'll end here, because I'm talking about a movie that's referencing a reference of a reference, but Margin Call, if you haven't seen it, it's a great film. The whole first act of that film, if you really follow it, is just following the minutiae of a large corporate layoff,
the mechanics of, we need to get you in this room while we talk about this thing. The second that people come in from the external consultancy that fires people, everyone knows that someone's going to be fired. So we have to start locking certain systems down. We have to lock them over here while you get brought in here. It's all about the practical reality of trying to do something like this. It's a really interesting intersection of very technical stuff and very, very human emotional stuff:
blind rage in the face of a perceived injustice. It's so human and so technical. It makes for good storytelling. Totally. Okay. Okay. Well, where do we go from here? Where do we go from retribution? A little content warning for this next one. It concerns sensitive subject matter. I don't think there's any kids listening to this, but if there are, my God, don't let them listen to this next part.
And if you don't feel like listening to something that alludes to harm against children, maybe stop listening. It's more important that you take care of yourself than that you hear this next story. So, this concerns an AI image generation tool called GenNomis. GenNomis.
There's two different elements to this story. One concerns a massive data breach, and one concerns what this sort of smaller, more obscure, popped-up-and-torn-down AI image generation tool was being used for, which we learned about as a result of the leak. A massive unsecured database belonging to South Korean AI image generation company GenNomis was discovered by a security researcher named Jeremiah Fowler. The story was broken by Wired.
The exposed database, and this is one of the first times we're getting to see inside one of these things, contained over 95,000 records, including explicit AI-generated images and prompts.
So, for one of the first times, we're getting to see all of the images produced by one of these tools, as well as all of the prompts that went into it. One of the first leaks of its kind, from what I was able to find. The database, which was online, was neither password protected nor encrypted. Shocking. Accessible to anyone on the internet. It was discovered in early March 2025 by researcher Jeremiah Fowler, who immediately reported the issue to GenNomis, as well as to its parent company, AI-Nomis.
After that report, the entire website quickly got turned off. They never responded to requests for comment, but I think it speaks to how this product and tool was spun up for a brief window of time, used by a lot of people, 95,000 records, and then torn down at the first sign of trouble. Both websites were deleted after Wired contacted them in light of Jeremiah Fowler's initial findings.
Where this gets dark is that, as a result of the prompt data being leaked, people were able to see what folks were using this for. GenNomis would have been subject to South Korean laws regarding content moderation, which are not dissimilar to those in the West. There's just stuff you can't generate with these tools, stuff the tools aren't supposed to generate. But what we learned from the 47.8 gigabytes of data
was that this tool was being used to generate a lot of sexually explicit content, some of it not depicting adults. There were rules in place about what could be generated, but prompts were discovered that used terms, and I won't dig into it, that were
sort of designed to get around some of those restrictions: you couldn't ask it for A, but you could sure ask it for B. We're all familiar with jailbreaking these prompt restrictions, and it seemed that the bar for breaking these was quite low with this image generation tool. The company's website had previously promoted the ability to create, quote, uncensored images, and featured a marketplace for explicit AI-generated images, which makes the types of material that people were producing with this
particularly egregious. Jeremiah Fowler called the findings, quote, terrifying, and described the ease with which people were able to create these kinds of immoral and illegal content. It seems like these tools now, given how easily these models can be run locally,
can be sort of spun up and torn down. It's like a pop-up shop, the market stall selling the knockoff stuff. It's a thing that you can spin up, promote, make some money with, and then get out of Dodge. It reminds me of other stories we've talked about, with certain types of spousal monitoring software, where it's like, oh, you can just spin one of these bad boys up.
And I think we're starting to see this with AI image generation. So this one's pretty dark, but we learned a lot about that world from this one. It is a very fascinating world. Speaking of which, oddly enough, I spent the entire weekend looking into how to set up my own local LLMs. Not for image generation, but for code generation. Yeah.
So I spent a lot of time this weekend looking into it, looking into the hardware requirements, stuff like that. And it's not egregious. And the other thing, too, is that you can take any of the publicly available models, DeepSeek's are notable, and you can actually retrain them. So Perplexity took DeepSeek's open source models and retrained them to not be
censored by the Chinese government. Sure, not subject to the laws that DeepSeek's parent corporation is subject to. Yeah. Yeah, yeah. So they made a list of about 300 topics that they knew the model would not respond properly to, and they reconditioned it. They didn't retrain the entire model. They just reconditioned it to allow for that stuff to come out of it now. So if there's a publicly available LLM
that allows for image generation, like in this case, but has bumper rails on it about what it allows, it is possible to recondition them. The thing that stood out to me was, so, there were conditions on what you can and can't produce, because, again, this was a Korean corporation, and Korea has laws, including against being able to do foul things.
And what we were seeing was a lot of de-aging. So it was a lot of people prompting very explicit content regarding very real adult celebrities, which is not good either. But put that aside: it was people then taking those outputs and using the system to start de-aging those people. So you've essentially created a way around these rules preventing child sexual abuse material. That's a very short distance to doing something very, very evil. Yeah.
I spend a lot of my time these days thinking and reading and learning and coding and building stuff with AI. I find it fascinating. Yeah. You're cooking with it. Like, you're making a lot of stuff, and it's cool to see how quickly you're able to do it. And just even looking at better ways to integrate it and utilize the agentic systems, and figuring out what I can do to automate things that I don't want to do, is the reality of it.
So my brain never actually crosses into this. I never even think about the negative parts of it, because I'm spending so much time with the positive parts of it. But it is a scary thought. And especially since local model execution is so easy, there's going to be a shift in policing around this stuff for CSAM, because all of a sudden it's no longer going to be distribution of
libraries of content and stuff. It's going to be distribution of models that are good at creating this stuff. Especially once they get to the point of creating models that are effective at creating video content and things like that. It's going to be a whole different game. I'm struck by how people repurposing models is becoming...
more common. And I'll clarify what I mean here. Huawei, the big Chinese mobile company, in a similar kind of vein to Apple falling back on OpenAI whenever a query is more complex than what they can do locally with Siri, is doing the same thing with DeepSeek. You have Huawei AI, powered by DeepSeek. And when those models are open source, the ability to retrain and recondition them...
This is just going to become a more common practice: oh, I'm running my own version of DeepSeek here locally, and it's this fork of this version that can do X and Y to get around this and get around that. How many layers deep do you get before you notice the insidious part? I remember when the Mac Studio,
the recent version of the Mac Studio, came out, one of the first things was people going, this is an extraordinary computer in terms of dollar value for running models locally, if you look at the processing power and the cost, and you imagine a pile of these things. And I'm curious to see where that goes next. I am chasing that dragon, not in any way, shape, or form related to this story. But I am at the point where I think I am going to set up
a dedicated system in my house to run a model. And the reality is, I was talking about this with my wife last night: we've been living in this technological revolution pretty much my entire life. You know, we've had
PCs, personal computers, interconnected personal computers, mobile computing, mobile communication. We're still inside of the revolution, and AI is the next big thing in that revolution. The amount of stuff that you can make these things do now
I've been playing with different ways of engaging with the models. I don't think chat is the best interface for so many different things. So I've been creating my own AI clients, specific to the context that I want to use them in. To me, it is another huge milestone in this revolution, and I just want to make sure that I'm fully in on it. I want to make sure that I fully understand it.
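For flavor, a minimal version of that kind of hand-rolled local client, assuming you're running something like Ollama on its default port with a model already pulled. The model name and the prompt are placeholders, not Scott's actual setup:

    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask_local_model(prompt, model="llama3"):
        # One-shot, non-streaming request to a locally hosted model.
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
        req = urllib.request.Request(
            OLLAMA_URL,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model("Explain what a logic bomb is in one sentence."))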
And to me, the next step of that is having my own models at home, having my own LLM, playing with reconditioning, playing with different model varieties. It's a natural step for me. And it's not hard. Yeah. So, I don't know. I'm intrigued by it. The open source DIY part of my brain just gobbles it up. I'm so fascinated by this.
If for no other reason than to make sure that it isn't walled behind four or five companies that get to control it and just sort of decide in concert with one another how much it costs. Like, I don't think that's good for this type of thing, especially when we consider what could potentially be done with it and the impact that it could have on the economy. I like the idea that you can homebrew this stuff. You can do it yourself. You are not...
You're a little less contained by what that small handful of companies wants you to be able to do with it. And as with all things, like this exact same structure can be applied to every wave of computing and the internet. There is a dark side to that of what then can be done when you remove the restrictions of a big company that is subject to laws in a country. Yeah. And it's just in a person's basement. It will unlock...
things, and terrifying things. Some man-made horrors beyond our comprehension.
But in the meantime, we should probably tell folks who this show is brought to them by. Who is it brought to them by, Jordan? Brought to them by Push Security. We talk about a lot of different tools off of air. Some are very, very clever. Some feel like solutions in search of a problem. But every now and then something comes along that just makes a lot of sense for like a big company. Push Security is that kind of tool. You know, identity attacks, you know, phishing, credential stuffing, session hijacking, account takeover, whatever.
These are some of the top causes of breaches right now. And most security tools are still focused on endpoints, infrastructure, networking. Meanwhile, the browser, the place where we are right now and where we spend most of our days, has been largely ignored. Push changes that. They built a lightweight browser extension that observes identity activity in real time and gives you visibility into how identities are being used across your organization. Like when logins skip multi-factor authentication, when passwords are reused, or when someone unknowingly enters their credentials into a spoofed login page.
Then when something risky is detected, Push can enforce protections right there in the browser, no waiting, no tickets. It's visibility and control directly at the identity layer. And it's not just about prevention. Push also monitors for real-time threats like adversary in the middle attacks, stolen session tokens, and even new techniques like cross-IDP impersonation, where attackers bypass single sign-ons and multi-factor authentication by registering their own identity provider for your organization. Think about it.
It's kind of like endpoint detection response, but all right there in the browser. The team behind it all, they're all offensive security pros. They publish some of the most interesting identity attack research out there, like the software as a service attack matrix, which breaks down exactly how these kinds of threats bypass traditional controls. Identity is the new endpoint and Push is treating it that way. Check them out at pushsecurity.com. Pushsecurity.com.
I think we are retiring the ad oasis. An oasis is a leisurely experience. You really take your time in it. And I think we're inventing now the like ad water slide where you get in and you're out before you even realize it. It's the water park. A lot of quick rides. It's the water park. Exactly. It's a lot of fast rides. Thrills and chills.
Every once in a while, a new security tool comes along and just makes you think, this makes so much sense. Why has nobody done this already?
And why didn't I think of it? Well, Push Security is one of those tools. I'm in a browser right now. Most of us do pretty much all of our work in a browser nowadays. It's where we access our tools and apps using our digital identities. Push turns your employees' browsers into a telemetry source for detecting identity attack techniques and risky user behaviors that create the vulnerabilities that identity attacks exploit.
It then blocks those attacks or behaviors directly in the browser, in effect, making the browser a control point for security. Push uses a browser agent like Endpoint Detection Response uses an endpoint agent. Only this time, it's so you can monitor your workforce identities and stop identity attacks like credential stuffing, adversary in the middle attacks, session token theft.
Think back to the attacks against Snowflake customers earlier this year. These are the kind of identity attacks that Push helps you stop today. You deploy Push into your employees' existing browsers: Chrome, Arc, Edge, all the main ones. Push then starts monitoring your employees' logins so you can see their identities, apps, accounts, and the authentication methods that they're using.
If an employee gets phished, Push detects it and blocks it in the browser so those credentials don't get stolen. Like we said before, it's one of those products where you ask yourself, why isn't everyone already doing this? The team at Push all come from an offensive security background. They do interesting research.
into identity SaaS attack techniques and ways of detecting them. You might know of the SaaS attack matrix. Well, it was the folks at Push that helped develop it. And those are the kinds of attacks that they're now stopping at the browser. A lot of security teams are already using Push to get better visibility across their identity attack surfaces and detect attacks that they couldn't previously see with endpoint detection or their app and network logs.
I think this is an area that's blowing up, and not just identity threat detection response, but also doing threat hunting at the browser level. It just makes sense. Push Security is leading the charge here. It's a very cool product, a very cool team, and it's well worth checking them out at pushsecurity.com slash
hacked. That's pushsecurity.com slash hacked. We are beginning to lose some of the hackers and visionaries who laid the foundation of the cybersecurity industry.
Enter Where Warlocks Stay Up Late, an interview series dedicated to documenting the history of cybersecurity. Inspired by the seminal book Where Wizards Stay Up Late: The Origins of the Internet, this interview series aims to capture the stories, insights, and legacies of the pioneering figures who shaped the field of cybersecurity from its inception to the present day.
Each month, two long-form video interviews will be released on the Warlocks Project's YouTube and Spotify channels, featuring candid conversations in which cybersecurity pioneers share their technical achievements, as well as their personal journeys, challenges, and the ethical dilemmas they faced along the way. This project has a huge supporting cast, including an Emmy-winning producer, a Harvard anthropologist and historian, the former editor of Phrack Magazine, and more.
Guests were members of such groups as Cult of the Dead Cow, w00w00, and r00t. Check out their anthropological map on wherewarlocksstayuplate.com to see just how large this project is. Where Warlocks Stay Up Late is now available to stream on YouTube and Spotify, and soon it will be available wherever you get your podcast fix. Scott, what do you like best about Shopify? Oh, Shopify. Well, the cha-ching sound, you know, I adore. But actually... You mean this cha-ching sound? Yeah.
Yes, Jordan, that's the cha-ching sound. But truthfully, I love Shopify just because it is a well-thought-out, well-designed, well-conceived, well-executed service that makes my life easier. And what more can you ask for in today's world than paying for a service that you don't hate, that you actually love?
I like Shopify in the same way that I like a lot of creative software. For a lot of people, you've got an idea in your head, you want to put it out into the world, but you don't have the right tool to do it. Selling stuff on the internet is one of those things that seems like it should be really trivial and simple, because Lord knows everyone is doing it. And then you try and figure out how, and it's complicated.
Not with Shopify. Shopify lets you plug all the different stuff you want into one place, gives you a really nice, clean, easy front end for people to shop from, lets you receive payment, lets you run your product through it. It's how we got the Hacked store running, far easier than with a bunch of other tools that exist. We genuinely really appreciate that. That's what I love about Shopify.
Yeah, yeah, I completely agree. It is as complicated as you want it to be, or you can use it at a pretty high level like we do. And it's very easy. So upgrade your business and get the same checkout we use with Shopify.
Sign up for your $1 per month trial period at shopify.com slash hacked, all lowercase. Go to shopify.com slash hacked, H-A-C-K-E-D, to upgrade your selling today. Scott, one more time. That's shopify.com slash hacked. Now that we're back from the ad water park, should we...
I think we talk about Bluetooth microcontrollers that are in everything and may or may not have a thing that may or may not be a vulnerability. That may or may not be a problem. So there's this thing called the ESP32.
It's this tiny little microcontroller chip you've maybe never heard of, maybe you have. If you're a big old nerd, which I appreciate. But even if you haven't heard of it, you've definitely used it, if you have a Bluetooth speaker or a smart thermostat or a security camera, any internet
of things gadgets. The ESP32 is kind of an anchor of that whole product category. There's a billion devices worldwide currently using the chip. You know what one of those devices is, Jordan? What's that? The Flipper Zero. I saw this.
Anyway, the ESP32 is like the most used Bluetooth and Wi-Fi controller chip, and it's in everything. And the default Wi-Fi board for the Flipper Zero, a hacking tool, has this chip in it. So anyway, just a small touch-in before you get into the deeper end of the story. No, it's a good thing to bring up. What we're looking at here, and the reason this is kind of fascinating, is that when this was initially published and reported on, it was kind of described as a bit of a backdoor issue.
And as the manufacturer has since clarified a little bit, it is and it isn't. What we're fundamentally talking about is a debug feature that can potentially be used and compromised in some sketchy ways. But because of the scale of this chip and how much stuff it's in, including a literal hacking tool, it's worth talking about.
Two security researchers from Tarlogic Security in Spain had started digging around in the ESP32's Bluetooth features, curious if there was anything going on under the surface. They built a USB Bluetooth driver themselves, called BluetoothUSB, that was able to bypass the standard OS-level APIs and get them raw, direct access to the Bluetooth traffic at the chip's hardware level.
And what they found was 29 hidden commands in the chip's Bluetooth firmware. These were not documented anywhere by the manufacturer of the chip, Espressif. What the commands theoretically allow someone to do is read and write directly to the memory of the chip, RAM and flash, meaning that someone could potentially rewrite the device's software or inject persistent malware. What this means practically, there's good news.
These commands can't be activated remotely. This is not a "the remote hacker somewhere in the world is compromising my device." You would have to compromise the device physically. The bad news is that if someone has physical or root access to a device with this chip,
These commands could help them embed malware on the device that could not be gotten rid of by a hardware reset or factory reset. Yeah, to me, it looks like a debug toolkit. When I look at the calls, read memory, write memory, erase flash memory, write flash memory, set MAC address, a lot of these...
functions, I can see how you could use them maliciously, 100%. And I can also see why they exist for development and debugging purposes, 100%. Those two things are very similar. Like often debugging is like... Sure, that's a good way of putting it, yeah. Debugging is like, how do I make it easiest on me to understand what's happening in the chip? And hacking is like, how do I make it so I can make the chip do what I want?
And those use the same toolbox. So I could see how this became a big story, just given the scale of them. There's probably, like, 15 of these chips in my house. Yeah. I'm looking around the room I'm in right now and going, three, four, just counting stuff that might have this chip in it.
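To make the shared-toolbox point concrete, here's a toy model of the pattern the researchers describe: a firmware-style command dispatcher where documented commands and leftover debug commands live in the same table. The opcodes and handlers are invented for illustration; they are not Espressif's actual values:

    # Stand-in for the chip's RAM.
    memory = bytearray(256)

    def read_mem(addr, length):
        return bytes(memory[addr:addr + length])

    def write_mem(addr, data):
        memory[addr:addr + len(data)] = data

    HANDLERS = {
        0x01: lambda payload: b"pong",                            # documented: liveness check
        0x7E: lambda payload: read_mem(payload[0], payload[1]),   # undocumented debug: read memory
        0x7F: lambda payload: write_mem(payload[0], payload[1:]), # undocumented debug: write memory
    }

    def handle_command(opcode, payload=b""):
        # Nothing distinguishes "debug" from "documented" at dispatch time,
        # which is exactly why shipping these handlers is a risk.
        return HANDLERS[opcode](payload)

    handle_command(0x7F, bytes([0, 0xDE, 0xAD]))      # write 0xDEAD at address 0
    print(handle_command(0x7E, bytes([0, 2])).hex())  # read it back: "dead"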
Like you said, debug stuff, debug commands like this aren't new. Other chip makers have their versions; Broadcom has one, Texas Instruments has them. This type of thing isn't that rare.
And this speaks to a really common tension, I think, in Internet of Things security, which is that developers need debugging tools. But if you leave them accessible after the thing ships, it can create a vulnerability. Espressif, the people who manufacture the ESP32, acknowledge this is kind of an issue. And they promised an update soon to remove these hidden debug commands from future firmware releases.
They have reiterated that these commands are part of a pretty standard host controller interface that is used in a bunch of different products. Basically, the takeaway is: if you have any Internet of Things stuff that you think someone could get physical access to, update it, because this theoretically constitutes a vulnerability. Well, I'd say yes and no, because you wouldn't need physical access,
you would just need access to the host running the chip. That's a good distinction. Yeah, when we talk about IoT vulnerabilities, your washing machine talks to the Wi-Fi through an ESP32 chip. So if somebody hacks LG washing machines and figures out a backdoor into them, they can then use this chip to do other things. It's an attack vector.
Now all of a sudden you've got a malicious Bluetooth device on your network or a malicious Wi-Fi device inside of an LG washing machine. And a compromise that can't be fixed, again, with a factory reset. We've talked a lot about it. I know you and I have discussed it. Apple devices are...
A reset, just turning it on and off, is going to fix a lot of problems in an iPhone, and a factory reset is going to fix even more. A compromise that can't be fixed with a reset of any sort is interesting. That's just a different kind of thing. But the other thing I'd say is maintainability. How many companies that make IoT devices, and then, narrowing down from that, how many people that own them,
actually maintain them? That funnel gets very small. If a firmware update comes out for your fridge, A, do you know it exists? B, are you going to run it? Yeah, my fridge is farming crypto right now, I'm sure. It's fine. It's helping compute my new model. Yeah, it's helping find new star clusters or something. Exactly, exactly.
Wouldn't it be great if one of those SETI, let-us-use-your-extra-compute-to-process-the-cosmos things was at the heart of some giant malware scheme? I know. We met some people at DEF CON that specialized in security for...
And it's really good that that's becoming a priority, because I think when IoT devices started coming out, they weren't prioritized. Security was not a priority, so many of them were vulnerable. It's like Wi-Fi routers back in the day, where they all had default passwords. And now we're at a situation where our fridges all have default firmware. Yeah.
So I know this is actively changing as it's been identified. And we've talked about this in DDoS things where people have created armies of IoT devices to become DDoS endpoints. It's just a... I don't know. So I'm glad this wasn't a real big problem, but I can see how it could have been made into a bigger problem. I'm glad that they're fixing it. Yeah, I remember...
When we first started making this show in earnest, there were a handful of stories that had to do with exactly this.
For me back then, getting up to speed with the world of cybersecurity, it was like: learn the basics, learn what a DDoS attack is, and then immediately internalize the fact that a toaster can be implicated. It was just this weird thing of, there's something very technical going on, but then also that cool light bulb you own that changes color and connects to your phone may be being used by Russian cybercriminals. It's the surreal part of it, because it's just some stuff in your house. I got rid of that toaster.
And meanwhile, I'm just sitting here thinking, if I could take an army of IoT devices, how long would it take them to compute a model for an LLM for me? To go back to that thing you were talking about earlier, I could see a point in the future where people can lease out unused compute, or, I don't know, volunteer it, to say, if you want to use some portion of this to train the thing that you're training, give'er. Yeah. Yeah.
I'm fascinated by it now. I have to Google this after the show. I have to see if there is a distributed model trainer. It would make total sense that there would be. I love that idea. If anybody out there knows, or if you're a part of a project, add us on X or send us an email or something. I'd love to hear more about it. Are there any big security stories from the last couple weeks that we haven't touched on yet?
And specifically, have you added any editors-in-chief to Signal chats planning strikes of a military nature? Because I haven't. We didn't talk about that, did we? We didn't talk about that. It's been talked about so much. I'm like, am I going to tell you that that happened? Totally. I wasn't added to any Signal chats
planning strikes in Yemen, so I have nothing to contribute to that story, other than: maybe don't do that. And I don't think it's Signal's fault, I'll chip in on that. There's maybe even a little bit of a clamor to be like, seems like Signal's not very good. I'm 100% sure it's not Signal's fault. Yeah.
Yeah, I quite like Signal. Yeah. Don't use this as an excuse to outlaw encryption or some dumb shit. That would be the worst thing that could come of all this. It's like, you did a dumb thing. Own it. Own it. Just own it. Yeah, nothing's really jumping to mind. Again, I'm just talking, we're wrapping up here, but it's like, I've just been living in the AI bubble. Yeah, you're in it. I'm in it. I...
I see it. I see what's happening. And the thing is, too, I remember eight months ago we were on a show, and I was like, somebody needs to figure out AI HR. It's like...
It's happening. There's a framework that I'm implementing an application in right now that's essentially that. When you say HR, you're not talking about replacing human HRs with AIs. You're talking about managing agentic resources to do stuff the same way a project manager or an HR person does with human beings. I see what you're saying. I'm building an organization of agents.
each with their own subject matter specializations. Maybe it's: you're the researcher, and then this is the agent that evaluates the quality of the research, and they can tell you to go get more research if they feel like it's not enough. Building, essentially, an agent organization. Yeah, and it brings you into the room when it needs to let the other agent know that it's firing them, while this other agent locks down their computer system so they don't write a kill switch code. I feel you. Yeah.
Exactly. That's called bringing it full circle right there. That's what we pros do. That's what we pros do, on the Hacked podcast. Brought to you by Push Security. Pushsecurity.com. Thank you again for listening. Appreciate you taking the time to hang out with us as we tell weird tech tales. And we'll catch you in the next one. Take care.