I have a degree in software engineering. But can you remember a time in your life when there wasn't such a thing as software engineers? I can't. All my life, it's been a thing. But I bet my great-grandparents went their whole lives without ever hearing about software engineering. So let's take a quick look back to find when software engineering popped into existence.
In the 1950s and '60s, NASA was doing some pretty amazing things, working toward flying spacecraft to the moon and beyond. These spaceships were loaded with lots of technology: antennas, radios, computers, cameras, software, and hardware. And that's just on board the spaceship. You've seen those giant command centers where mission control is. There are computers on everyone's desk and giant screens at the front of the room. And there are dozens of scientists and engineers in the room.
Yet not a single one of them was a software engineer, because the term hadn't been used at any point in the 1950s.
In the 1960s, NASA developed the Mariner program. The goal was to send unmanned spacecraft to Mercury, Mars, and Venus to take photos of them. In 1962, the first Mariner spacecraft was launched, headed for Venus. It didn't have anyone on board. It was controlled remotely, and on board were just electronics, antennas, computers, rocket fuel, and cameras.
But only a few minutes after launch, things started to go wrong. The computer on board that was in charge of controlling the ship was acting erratically, issuing all kinds of wild commands for the ship to follow. The folks at mission control tried to correct the computer gone wild, but they couldn't do anything about it. Then they started to realize this rocket's not going to make it to Venus. It's not even going to make it out of the atmosphere. And it might even crash into Earth and hurt someone.
So the people at Mission Control decided there was no choice but to push the self-destruct button and blow up Mariner 1 over the Atlantic Ocean. That was the end of the Mariner 1 spacecraft, an $18.5 million ship blown up.
So what happened? Well, scientists and engineers spent days replaying the events and logs that they captured after launch. A piece of hardware failed, which caused an onboard computer to kick in and try to control the craft.
But the way it was trying to control the craft wasn't right. Something was wrong with that computer. So they examined the code that had been put on that computer, and that's when they saw the problem: a missing dash in the algorithm. A single missing dash. It's not like the dash you're thinking of. It's more like a bar that was supposed to be above the letter R, which stands for radius. And that bar meant it should have been a smoothed value for radius.
Without this bar, it was taking the current value for R. And since this rocket was trying to recover from some bad hardware, the values for R were bouncing all over, so the output of the program was bouncing all over too. It should have been taking an averaged reading for R, not the wildly fluctuating raw values. So the computer was telling the rocket to fly all crazy and out of control. The logic and algorithm that the scientists gave the programmer was correct.
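To make that concrete, here's a minimal Python sketch of the difference. This is purely illustrative, not NASA's actual guidance code, and the readings are made up, but it shows why steering on a smoothed value behaves so differently from steering on the raw, fluctuating one.

```python
# Illustrative sketch only -- not NASA's actual guidance logic.
# It contrasts steering on the latest raw radar reading (the buggy
# behavior, missing the bar over R) with steering on a smoothed
# average (what the algorithm actually specified).

def smoothed(readings, window=5):
    """Average the last `window` readings -- the R-with-a-bar value."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

# Made-up, wildly fluctuating readings from failing hardware.
noisy_r = [100, 480, 12, 530, 95, 610, 40]

raw_command = noisy_r[-1]             # bug: act on the latest raw value
smoothed_command = smoothed(noisy_r)  # intended: act on the average

print(f"raw R = {raw_command}, smoothed R = {smoothed_command:.1f}")
# The raw value swings hundreds of units between readings, while the
# smoothed value stays steady -- the difference a single bar makes.
```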
But whoever programmed that algorithm into the computer missed this little dash above the R. And that tiny little bug in the code resulted in the whole rocket being destroyed. When NASA makes a mistake like this, they try to find ways to prevent anything like it from happening in the future. They realized they were putting software on a lot of systems, but had no way to test the reliability of that software.
This is when it became clear that software engineering should be a discipline. And shortly after that, it started getting developed and became a thing. This software bug didn't just crash a spaceship; it launched a whole new field of study and new principles for designing, developing, and testing computer software. These are true stories from the dark side of the internet. I'm Jack Rhysider.
This is Darknet Diaries.
So that's why I gifted my friend a subscription to DeleteMe. They were always kind of aware of cybersecurity, but never took it super seriously. But then they received the first report on what DeleteMe found and deleted, and they were amazed. That's when they understood how helpful it is to have someone on their team when it comes to their privacy.
Take control of your data and keep your private life private by signing up for DeleteMe, now at a special discount for Darknet Diaries listeners and your loved ones. Today, get 20% off your DeleteMe plan when you go to joindeleteme.com slash darknetdiaries and use promo code DD20 at checkout. The only way to get 20% off is to go to joindeleteme.com slash darknetdiaries and enter code DD20 at checkout.
That's joindeleteme.com slash darknetdiaries, code DD20. This episode is sponsored by Vanta. Trust isn't earned. It's demanded.
Whether you're a startup founder navigating your first audit or a seasoned security professional scaling your GRC program, proving your commitment to security has never been more critical or more complex. And that's where Vanta comes in. Businesses use Vanta to establish trust by automating compliance needs across over 35 frameworks like SOC 2 and ISO 27001, centralize security workflows, complete questionnaires up to five times faster, and proactively manage vendor risk.
Vanta can help you start or scale your security program by connecting you with auditors and experts to conduct your audit and set up your security program quickly. Plus, with automation and AI throughout the platform, Vanta gives you time back so you can focus on building your company. Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk and prove security in real time. For a limited time, listeners get $1,000 off Vanta at vanta.com slash darknet.
That's Vanta, V-A-N-T-A, vanta.com slash darknet for $1,000 off. Are you ready? Yep, sounds good to me. So what got you started? Hold on, let's start with your name and what you do.
My name is Maddie Stone, and I am a security researcher focused on studying zero days that are actively exploited in the wild at Google Project Zero. We're going to get into what she does at Google, but I find that the path to get there is interesting. So when she was a teenager, she developed an interest in computers and after high school, went to college at Johns Hopkins University in Maryland. Yeah, so I actually double majored in computer science and Russian language and literature.
because I wasn't fully committed to this whole engineering thing. I didn't know if I would be bored doing that. So I was like, let's learn a new language and ended up really enjoying it.
and sort of just a very different way of using your brain in classes and everything like that. And it allowed me to study abroad too, which I've always loved to travel. Whoa, this is crazy. So you know Russian? Well, I used to. I used to be good. But then you moved, like you studied abroad to where? So I did two months in St. Petersburg and four months in Moscow. Wow.
And after graduating, she got a job at the Applied Physics Lab at Johns Hopkins, which is a government research laboratory. And that's where I ended up for the first four and a half years, studying or working on reverse engineering of, like, firmware and hardware. It looks like a really cool place, actually. There are about 8,000 employees at this Applied Physics Lab, and they take on research projects for the Department of Defense and NASA. So they get hands-on experience while doing advanced research.
So I was also working with literal rocket scientists, if that doesn't, you know, keep your ego in check. And while working there, she was simultaneously able to get a master's degree in computer science, too. I was super fascinated by, like,
the hacking portion. And, you know, when you see all these things but have never actually done it, it sounds really sexy and everything like that. And I had really, really loved assembly. I had actually listed that as my favorite language when, like, you know, they went around and did profiles of folks, and in interviews with different companies they ask you, and they're like, you love assembly? I was like, yeah.
I became the teaching assistant for that course and then as an independent study created all new projects.
That's very interesting to me. I too have an IT degree, and I learned Java and C and C++ and Visual Basic and all these programming languages, all of which I could understand no problem. But when I took the assembly language class, I was so lost. It was the only IT class that I actually struggled with. And that's because it's so different from everything else.
Assembly language is very low-level. In a high-level language, you can see things like variables, if statements, for loops, and functions. But with assembly, you have commands like move, push, pop, add, subtract. Real basic and rudimentary stuff. A program that is just a few lines of code in Python can become ten times longer in assembly.
But assembly has some superpowers. It can interact with memory and the CPU in ways that other languages can't. And it can be incredibly efficient, too. You get much better control over the computer's resources. And you know what? You can go even deeper, too, to an even lower level and look to see what's going on in the hardware.
You could open up the case of the computer, get out some probes, jam them into the circuit board, and watch what electrical signals are moving through the circuitry. This is even harder to read, because all you see at that level is whether the voltage is high or low. But having this kind of read-write access
gives you really the ultimate power over your computer. And it was this low-level stuff that fascinated Maddie. It was like doing brain surgery to teach someone something or to see how they think. A computer can't hide its thoughts when you're this deep into it.
Another big reason she liked it was because she could break down any program into assembly. It doesn't matter what language a program is created in; you can run any compiled program through a disassembler and see the whole program in assembly language. A lot of applications and programs are compiled into a sort of bytecode that's not human-readable, and you certainly can't see the original code that was used to create it.
So you can't tell what so many programs actually do. But at the end of the day, the computer has to know what to do, and that bytecode can be converted into assembly so you can kind of read what's happening. So if you get good with assembly, you can get a much deeper understanding of how computers handle memory and processes, and you can decipher any program.
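For a small taste of what that looks like, here's a hedged sketch using Python's built-in dis module. Python bytecode isn't the same thing as CPU assembly, and this isn't the tooling a professional reverse engineer would reach for (that's more like IDA or Ghidra), but the flavor is the same: a couple of lines of high-level code expand into a longer list of primitive instructions.

```python
# Disassembling Python bytecode with the standard library's dis module.
import dis

def add_numbers(a, b):
    total = a + b
    return total

dis.dis(add_numbers)
# Prints one primitive instruction per line, roughly:
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    + (add)
#   STORE_FAST   total
#   LOAD_FAST    total
#   RETURN_VALUE
```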
It's just really hard to read at that level. It's kind of like reading a book, but you only get to look at one letter at a time, and the book only has ten usable letters that make up all the words. Anyway, getting better at assembly and learning more about hardware is what she spent four years doing at the Applied Physics Lab. And then one day Google calls and says, hey, are you interested in interviewing with us? Which I was pretty shocked about because, as a
student, I tried really hard to get even any interviews or calls with all of the big tech companies, but I was not someone of interest to them. So I was very surprised to get the call, and I ended up going through the interview process and getting the offer to join the Android security team as a reverse engineer. A reverse engineer is someone who takes a program and tries to figure out what it does,
sometimes by converting it to assembly language and trying to make sense of it. And I mean, Google is where Android is made. So why would someone need to reverse engineer Android when they could just look at the source code written right there in the same building? I was focused on all of the malware, you know, in the Android ecosystem. Oh, duh, that makes sense. The malware that's targeting Android is often compiled, so you can't see the code that was used to make it.
And Maddie's job was to reverse engineer and decompile some of this code and examine it for malware. And if it was malware, figure out what it's doing and then tell the Android developers how to fix it. And more specifically, I started leading a team that was focused on finding any sort of malware or
bad apps that were, one, potentially pre-installed on different OEM or manufacturer devices, you know, because there are, like, thousands of different manufacturers of Android devices, as well as looking at,
can we find malware in all the apps that are off of the Google Play Store? So, you know, in lots of parts of the world, there are apps that are passed around through different stores other than Google Play, or they're passed peer-to-peer, or things like that. So are there ways that we can still protect Android users from those apps as well, figuring out what's malware and what's not?
Okay, so I just got curious what kind of malware we're talking about here when it comes to Android, and I started looking some things up. One really popular virus going around is GinMaster. Apparently there are millions of Android devices infected by it. And don't forget, Android is an operating system that's used on both phones and tablets. But this GinMaster malware, once it gets into a device, will capture private data from the device and send it to an external server.
It can also give attackers access to that device. GinMaster is clearly something you'd never want on your phone or tablet. So why does it exist on millions of devices?
Well, the way it often gets onto a device is that it gets tacked onto another app, typically a bad app that a user is tricked into installing. A common strategy is to make a lookalike of a popular game. This tricks people into thinking that they're getting the app they want, but it's not the real one. And then when someone downloads and installs it, not only do they not get the app they want, but they get infected with this GinMaster malware.
So at the end of the day, it's actually a user who downloads and installs the virus. They just don't know it's a virus. And when a device is infected with it, it can steal user data, take control of the device, or install more malicious stuff.
So it's malware like this that gets sent to Maddie for analysis. And she can flag apps like that to warn Android users that the app contains malware. And specifically, the way Android apps are packaged is in something called an APK file, which stands for Android Package. Yes. So we find an APK file, which is basically just a zip file with all of the different
components of an Android app. Not all Android apps are written in Java, but I think it is the most common language they're typically written in. And what's nice for Maddie is Java apps can be decompiled pretty easily, and you can see a pretty close picture of how the original program looked. So she doesn't need to break it down into assembly. She can read through what it's doing in something close to its original format, making it a lot easier to understand.
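And an APK really is just a zip archive; you can prove it to yourself with Python's standard zipfile module. A hedged sketch, where example.apk is a placeholder path rather than a real sample:

```python
# List what's inside an APK, which is just a zip file under the hood.
import zipfile

with zipfile.ZipFile("example.apk") as apk:  # placeholder path
    for name in apk.namelist():
        print(name)

# Typical entries include:
#   AndroidManifest.xml  - the app's permissions and components
#   classes.dex          - compiled Dalvik bytecode from the Java code
#   lib/arm64-v8a/*.so   - native libraries (compiled C/C++), which is
#                          where sophisticated malware likes to hide
```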
But it's not always this easy. Sometimes hidden in the Java are additional compiled programs. Yes. So that's where some of the more sophisticated malware authors would try to hide some of their behaviors in native libraries within the APK file. So these are
compiled C or C++, which, once it's compiled, is in machine code, which we can disassemble to assembly code. And of course, this is where Maddie shines. She can read this assembly language to understand how the malware does what it does, then report it to the Android team to see if there's anything they can do to protect users from malware like this. Yeah, so
The first thing is we had to put flags into the Google Play Protect system, because the number one thing is you want users to be alerted, to give them the option to remove or disable the application from their device.
The next step is really writing automated solutions, because especially when you're on a malware team, there are always more apps or samples to look at than there are humans to analyze them. So the goal is always that a sample only ever has to be reverse engineered once, and that after you've reverse engineered it once,
there are automated software solutions that can find all the other copies that may come out of that. So that's really the process: analyze it, figure it out, flag it so users are protected, and then figure out automated solutions. So tell me a story about maybe some interesting malware that you found or that landed on your desk. You're like, all right, I'll take a look. Whoa, this is crazy, this stuff. So one of the biggest
sort of malware families that I was not expecting, and that ended up being like a year-plus investigation, was what we called Chamois. It took a lot of practice to learn how to pronounce that correctly, but it was a large botnet. And what was really interesting, and how I got into it, is that this application, which is usually written in Java, had this native library, so C or C++ compiled code in it.
And as I kept trying to dig into this native library, it became obvious that it was heavily, heavily obfuscated as well as doing an incredible amount of anti-analysis and anti-debugging checks. So it was very sophisticated in sort of trying to monitor, like, am I being monitored and analyzed by a security engineer or am I running on a real device that I can infect?
And I ended up diving into it. I think it took, like, over a month or a month and a half to really dive into all the aspects of that native library. And then when I started looking for other apps with similar native libraries, it became clear that it was this botnet, this family of malware, that was doing some pretty sophisticated stuff.
One of the funniest anecdotes to me is that I actually presented on that native library at Black Hat. Yeah. So in 2018, Maddie came on stage at the Black Hat security conference and showed everyone in the audience the exact techniques that this malware was using. So what are all these different techniques that we're going to talk about? What makes it so interesting? First, we're going to start with some of the JNI, or Java Native Interface, manipulations.
Then we're going to go into some places where they've used anti-reversing techniques, in-place decryption, and finally about 40 different runtime environment checks that they use. And I think it was less than 24, definitely less than 72, hours later, we saw the malware authors changing different aspects and characteristics of
this library that I just presented on. And they only changed the characteristics and techniques I had discussed in the Black Hat presentation. And that presentation hadn't been streamed or anything like that. So that was very fascinating to see.
Whoa, yeah, that is interesting. This means either the malware authors or someone who knows the malware authors were at her talk, watching her, taking notes on how she's able to detect their malware and then rushing back to their computers to update their malware to make it harder for the Google team to detect it.
And see, this is the thing about Maddie. She seems to be on this mission to make it harder for malware makers to do what they do. She gets in their heads and learns where and how they're hiding so she can shine a big old spotlight on it and make them scatter. Her goal is to make it easier for people to find malware and at the same time make it harder for someone to make malware.
So one day I had a new calendar invite in my inbox from Ben Hawkes, who was the longtime lead of Project Zero. And we had never met before. And he said, "Hey, I just wanted to chat about this potential new role and sort of experiment for Project Zero."
Oh, wow. Project Zero was trying to steal her. That's pretty cool. This is a very talented team within Google which focuses on finding zero-day vulnerabilities. Yeah, so Google Project Zero is a team doing sort of applied security research with a mission of "make zero-day hard." But the key thing here is this team will look for bugs in any software, not just Google's products.
I think the idea here is that Google users don't just exclusively use Google products. Yeah, so if you think about it, to protect, say, Google Chrome users or Gmail users or things like that, a lot of Google users can be attacked through vectors other than just the Google products.
So whatever operating system you're running Chrome on, for example, if that has vulnerabilities, then that could be a way to hack those users. Or back in 2014, Flash was one of the biggest ways to attack people via the web. So doing a lot of research and vulnerability research into Flash would ultimately help protect Chrome users.
So the team at Project Zero looks for zero-day vulnerabilities anywhere. Oh, and zero-day vulnerabilities are bugs that the software maker doesn't yet know about, which also means the defenders don't know about them either and can't defend against that kind of bug. Now, if the Project Zero team finds a bug, they tell the vendor to fix it and then start the timer. If 90 days go by and the vendor doesn't fix it, Google will publish the bug publicly.
Anyway, this was the team who approached Maddie. So this hybrid role would be for me to not just be a vulnerability researcher, but to sort of combine the threat intel and malware analyst side of it. And I would use as my starting point zero days that are actively exploited in the wild. So not just hunting any
zero days that attackers could theoretically be finding, but instead having my starting point be the exploits that are actually used.
Yeah, I get it. If the goal of Project Zero is to make zero days hard, adding a reverse engineer to the mix really boosts the potential research that can be done. Now, instead of just looking for unknown malware out there, you can feed known malware to Maddie, and she can digest that and come up with patterns to look for more malware that's out there. It's approaching finding malware in a totally different way. And combining these forces makes them more effective. So she took the job and joined Google Project Zero.
So I really came into this team with not a lot of knowledge and just this basic idea from Ben that he told me, take it and run with it and figure out what makes sense.
So I did not really have any Windows, iOS, browser, et cetera, vulnerability research experience. My experience prior to Android had been on hardware and embedded devices, which don't tend to be the biggest targets of interest for Project Zero. And so it was a lot of learning, but we started off sort of big in that
I joined the team in July of 2019 and Google received information that the commercial surveillance company NSO had this Android exploit that they were using to target Android users in their delivery of Pegasus, the piece of spyware that has been all over the news lately.
And we actually got sort of some like marketing details about this capability. And so my first job was taking all of those details and seeing if I could figure out what the bug was so that we could patch it and, you know, break the capability.
And so I was digging through all the different Android source code and Linux kernel source code, trying to figure out, what is this bug? And somehow I managed to figure out exactly which bug it was, because the details we were given happened to line up such that there was only one vulnerability that potentially matched every single detail we were given.
So that was a pretty wild first bug to report and put into the Project Zero issue tracker. We reported it to Android under a seven-day deadline instead of the 90 due to a high probability that it was being actively exploited in the wild.
And then, wanting to show that it could be exploited, I partnered up with Jann Horn to write a proof of concept, not just triggering the vulnerability, but actually showing a way to exploit it and how it would be useful in, say, the Pegasus chain. So that was quite the wild week.
For Maddie to identify how Pegasus software is used on Android and then come up with a working proof-of-concept exploit, all in a week, that's amazing. That's like finding and squashing a million-dollar bug. Seriously, there are companies out there who are willing to pay a million dollars for a bug like this because it's so valuable to certain people.
Pegasus is the spyware made by NSO, which is a company based in Israel that sells this spyware to different countries around the world. And it's quite expensive to buy this Pegasus software. So when Maddie discovers how it's used and makes it no longer usable, it must make NSO angry. Now they have to rip out their existing way of exploiting phones and find a new way to do it, which isn't so easy.
But this is Project Zero's goal, to make it harder for exploits to be out there. And if a company has a whole business model of selling malware and exploits to countries, then yeah, they'll be impacted by this, and it'll mean the price of Pegasus will go up since it's harder to find these vulnerabilities.
Generally, it is nation-state actors who are using zero-day exploits. And they're generally using these zero days against human rights defenders, journalists, minoritized populations, politicians. And so while every human, you know, doesn't necessarily need to be worried about being attacked with zero-day exploits,
All of us are generally impacted when they're used: when journalists become scared or unable to write the truth that they find, when human rights defenders are being targeted so that fewer people are willing to stand up and speak out, or when minoritized populations or critical infrastructure companies and things like that are being targeted. That does ultimately impact us all.
If you want to know more about this, I did a whole episode on NSO. That's episode 100. You'll hear how they sell software to countries and then those countries turn around and use it to attack civil society. And of course, nation-state actors aren't always abusing their power. They do use their abilities to stop terrorist attacks and criminal activity. But at the end of the day, the measure of any technology is how it winds up getting used against vulnerable people, not just how it helps.
So if there are zero-day vulnerabilities out there that are being used to target innocent people, then finding those and fixing them will help civil society be more secure. And it's kind of wild to me to think that Maddie here is trying to disarm nation-state actors by finding what weapons and exploits they have, and then, once discovering them, getting them fixed so they can't be used to exploit people anymore.
Has there been any threatening reaction to this? Like, I can imagine NSO Group being pretty upset after your first project there and being like, okay, Maddie is now on our list. Like, do you ever get any weird stuff? Well, it was actually very strange. In January of 2020, I was invited to the conference BlueHat Israel.
And so I went, and there were actually two people who came up to me whose badges said they worked for NSO. And they asked me questions about why I chose the techniques I did. So that was a very strange interaction overall.
But one of the more anxiety-producing ones was back in, I believe it was 2021. Google's Threat Analysis Group discovered that North Korean hackers were targeting security researchers, including, you know, security researchers from Project Zero, in the hopes of trying to steal zero-day exploits
from security researchers to use in their campaigns. So being personally, you know, in the population of folks targeted is a rather frightening aspect of it, but it also just gave me a lot of empathy for the people doing the real hard work who are often targets of, you know,
the nation-state attackers using zero days. Yeah. So some other philosophy here is, like, the NSA is in the business of finding zero days and using them as weapons. And sometimes, you know, one of the nation states that you're going up against is your own nation. Do you get, like, conflicted there, or how does that feel to you? I don't think so, because the vast majority of the time we have no idea who is
behind a bug. Also because you're just working so quickly that, like, people don't usually have attribution, you know, immediately. If attribution even comes out, the threat intel experts are usually, you know, three to six months behind. So there's never sort of that conflict, because all we get is, here's an exploit sample, or here's a patch diff and the bug was labeled in release notes. But I, yeah,
So I've never really felt conflicted in that way because there's no way to know. All you know is that people are being harmed. So that would sit even worse with me to not try and get it fixed. Yeah. We're going to take a quick break here, but stay with us because we're going to hear more from Maddie when we get back.
Earlier this year, in 2022, Maddie saw that Apple patched a bug in their WebKit product. This is the browser engine that Apple's Safari browser uses. And there was a pretty big vulnerability discovered in it.
But the patch notes were a little vague, so Maddie started to try to learn more. And when I started digging into it, one of the ways that I analyze it when it's just a patch diff and I don't have any other information is, for open source software such as WebKit,
I will look at sort of the history of that file in the areas that they patched, or the git blame of it, which sort of tells you, when did this line appear, or when was this source code line last changed?
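For anyone who hasn't done this kind of git archaeology, here's a minimal sketch of the idea in Python. The file path and line range are placeholders, not the actual WebKit source, but these are the standard git commands for tracing a patched region's history.

```python
# Tracing the history of a patched region with git blame and git log.
# The path and line range below are placeholders for illustration.
import subprocess

def run(*args):
    return subprocess.run(args, capture_output=True, text=True, check=True).stdout

# "git blame -L" shows which commit last touched each line in a range.
print(run("git", "blame", "-L", "100,120", "Source/WebCore/Example.cpp"))

# "git log -L" replays the full history of that line range, which is how
# you can spot a fix being applied, reverted in a refactor, and then
# having to be re-fixed years later.
print(run("git", "log", "-L", "100,120:Source/WebCore/Example.cpp"))
```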
And what I ended up figuring out was that this was sort of a zombie bug and that it had actually been originally fixed back in 2013. But then the bug was reintroduced because that patch was regressed and undone in 2016. And then here we were in 2022 with the bug exploited in the wild and patched again.
Why do you think it regressed? So I did a deep blog post on this, really trying to understand. And it actually became sort of a team effort, because all of us were really interested in trying to understand, how did this happen? There was also a very interesting sort of overlap in that my teammate, Sergei Glazunov, was actually the original reporter of the bug back in
January 2013, and it was actually reported to Chrome, because at that time, Chrome was still built on top of WebKit as its browser engine. They didn't split off until 2014, I believe. And so he was jumping in and looking at it with me, as were some of my other teammates, like Mark Brand. And what it looks like overall is that
they were trying to do a refactoring to, one, make it more performant. That meant there were some really huge patch changes. And just based on the structure of security teams and reviewing, a lot of times folks aren't really given a huge amount of resources and time to scroll through and look at, line by line, what are all these changes that are being made, and things like that.
It's got to be quite the embarrassing feeling to find that your code had been vulnerable for seven years and you're just now discovering it. It makes you stop and wonder, who all knew about this? Is it possible some advanced hacking group or nation state actor had known about this and was using it to take over people's browsers when they needed to? It's hard to tell and we'll never know.
Back in the fall of 2020, we discovered some exploit servers, and just happened to discover that they were delivering exploits to us on different devices and different browsers. And in that case, you know, you're generally first just getting
the first-stage exploit and then some sort of fingerprinting script, maybe, or something like that. And so we were like, oh my goodness, like, this is giving us exploits, and our devices are fully patched. Like, what the heck is going on? This must have been a very
exciting day, to find that there's a server out there in the world that is able to remotely attack a device and exploit it in ways that are just not stoppable. For a security research team like this, it's a big moment. You want to quickly try to capture as many exploits as you can from their server and then analyze them and see exactly how they're infecting devices so you can get them fixed.
So in this case, it was a watering hole attack, where a watering hole attack is when you go to a website and it just tries to infect anyone who visits. So that was sort of the case here of, oh, this is weird. Suddenly there's this very weird traffic, and oh, that's an exploit, and that's a fingerprinting script.
What did we stumble upon here? And this website had active traffic and users coming to it. So Maddie and the team at Project Zero knew that people were actively being hacked right now when they visited the site and wanted to move as quickly as possible to stop any more people from being infected. And so that was where we all really came together and were working through weekends and long hours to first
get as many of the exploits as we could, and then teaming up, tearing them apart, getting around the obfuscation, trying to figure out what exactly is the bug that is being exploited here, and getting those reported and working with the vendors to get those patches out as soon as possible.
So they were able to squash any bugs that Google was responsible for and then get all the other vendors who had bugs to squash them too, which made this website no longer effective at being able to exploit updated devices that had come to visit the site. And this is why I'm always telling you to patch your software. Always update your operating system and any apps you have if there's an update available because it makes it harder for someone to hack into your stuff.
So, I mean, did you ever figure out who was doing this? Like, was it a nation-state actor, or who do you think would want to, you know, run this kind of attack?
So we assume that it is a nation-state actor, just because of the sheer volume of zero days and the sophistication behind them; it seems rather unlikely that anyone other than a nation-state actor would want to have access and be willing to use that number. When we looked at it, it was approximately, I believe, 11
zero days that the actor had used over the course of a year. So that definitely would make me think nation state, but no, I do not know who was behind it. And I am not an expert in attribution, but I have not seen or heard any definitive answers on who the threat researchers and threat intel experts believe was behind it.
Whoa, 11 zero days? That's amazing. To make a zero-day vulnerability takes quite a bit of time and skill. This isn't some simple social engineering attack or some off-the-shelf malware. Each of these 11 zero-day vulnerabilities was something that took a lot of resources to find and to turn into a usable exploit.
On top of that, the way these exploits were chained together was incredibly sophisticated. And because it takes so many resources to develop and weaponize that many bugs, that's why Maddie thinks it was likely some kind of nation-state actor. This is beyond the capabilities of a cybercrime group or hacktivist group. If you can use a less sophisticated form of
attack to get access to whatever you need, then that will always be the choice. If your targets are insecure and, say, you know, they'll fall for phishing, then that's the easiest route, and that's what you'll take. If your targets don't keep their devices up to date, then you can use an n-day exploit, and that's what you'll take. So these zero days are for when,
one, you really don't want to leave a trace, because people don't know what this bug and exploit will look like, and you're targeting entities or individuals who probably have some pretty good security hygiene and posture. And those are often going to be people who know they're targets, such as human rights defenders and journalists, et cetera.
Hmm. So the way I understand it, nation-state actors typically have a few different objectives. It could be intelligence gathering, like hacking into another nation and stealing information. And it could be disrupting the enemy, like deleting the servers that a terrorist organization uses.
But we've also seen nation states participate in cybercrime and hacktivism. North Korea has been hacking into banks and stealing money from them. And China has been hacking into U.S. companies to steal their intellectual property. But we've also seen China hack into the Gmail accounts of human rights activists to try to stop them or figure out what they're up to. And we've seen the UAE hack into human rights activists' phones to track them and arrest them.
And of course, Russia is meddling with elections and even sabotaging the Olympics in some weird ways. So there's a big spectrum of what governments are doing out there in the mean streets of cyberspace. And I don't know about you, but to me, trying to figure out this space, it gets blurry fast. What's good? What's evil?
Some things are clear, but others not so much. Like when a country hacks into and spies on another ally country. Why? Because they don't trust their ally? Because they want more information than what their ally is willing to give them? And what happens when they do find out that their ally has some nefarious plan? Do the ends justify the means?
It gets tricky. And I imagine the weight of who you may be helping and who you may be hurting must weigh on Maddie as she does her work. Of course, I don't think anyone who's in this industry or business can help but think about sort of the philosophy of it. And so for me, it feels pretty easy, and I hope I'm on the good side of it.
I want people to have safe and secure access to the internet, whether it's, you know, just their data, their device, and everything like that. And the part of that safety and security where I am currently able to hopefully make the biggest difference is in the zero-day and zero-day exploit space. But, you know, previously I was trying to accomplish that
by making sure every Android phone didn't have malware on it. So that's sort of my guiding principle: I think the world would be a pretty amazing place if everyone could access and connect to all this information and education and everything like that with safety and security, and know that their privacy is protected. So, yeah.
It's nice that Maddie has a good ethical mindset to all this and is helping us all become more secure. But just keep this in mind. There are people just like Maddie who work for the bad guys, doing exactly what she's doing, looking through patch notes and trying to figure out what exploit just got fixed to see if there's anything the vendor missed or some sort of related bug. And then once they find a bug, they'll develop it into an exploit and weaponize it instead of getting it fixed.
And that just makes me think, okay, if there are like enemies and allies out there where countries are hacking into each other, then what does that make Maddie? An enemy or an ally? Or is there some kind of third faction out there?
Also, NSA stands for National Security Agency. Their job is to ensure the US is secure and is able to send secure communications without our data getting into the enemy's hands. So you'd think that if the NSA has found a way to bypass the security of something, they'd want to find a way to get that fixed right away to ensure that the software used by hundreds of millions of Americans is secure, right?
But despite the fact that the NSA spends millions of dollars on finding and developing vulnerabilities, they don't report that many to vendors.
We have seen them report some things sometimes, but it's often for suspicious reasons. Like when the Shadow Brokers claimed they had NSA exploits, the NSA told Microsoft to patch a certain bug right away. And there were other bugs that the NSA reported, which made me think that they might have had intelligence that some other enemy nation might be actively using those exploits to hack into our stuff.
And it becomes even more difficult to navigate all this when so many of the tech giants are also U.S.-based. I'm not saying there's any sort of collaboration between the NSA and the U.S. tech giants, but it makes sense to me that there is a closer relationship than other nations might have with U.S. tech companies.
I kind of see it as sort of an arms race. While nation states around the world want more exploits and zero-day vulnerabilities to carry out their objectives, Maddie is over here trying to neutralize those and build up the defenses for everyone to be able to defend against nation states better.
I don't really think of it as a race unless we're talking maybe about a single-vulnerability case, like, oh, we know this bug is being exploited, it needs to be fixed as fast as possible. That's really the only area that I sort of view as a race. Maybe also around the, this was just patched and we want to make sure that the patch is sufficient, that we complete variant analysis before the attackers are able to.
But over the longer haul, I don't think of it as a race as much as making smarter decisions. Because ultimately, what we want is that it is so difficult, so expensive, and requires so much expertise that they
really hold on to their zero days close to the vest, and they're so valuable to them that they only use them in really, really special cases. I think we're still at the point now that, yes, while it tends to be a smaller population of people targeted globally, I think we're still seeing too broad a
usage of these zero days to believe that attackers find them as valuable as we would hope. And so that looks like making it that much harder for them to find vulnerabilities. So, say they cannot use variants of a previously public vulnerability; they instead have to come up with their own. They have to come up with a whole new bug class that we've never seen before, not using these use-after-frees and buffer overflows.
They're not able to use a public exploit technique that we've seen before, or that they've used before, and just plug and play a new vulnerability. And because we as an industry are not only fixing the vulnerability, we're mitigating the exploit technique.
They don't need three zero days. They need six now to maintain the same capability they had before. That's really the way I think about it. And what makes me hopeful, where I know a lot of people can feel down, in that zero days are this sort of impossible problem to solve, is that the exciting part is iterative progress that
we will see the return on investment from. So it's not that you have to do steps A through J, and only then will you begin to see the return on investment. Every little step we take forward makes it just that much harder: they just can't use the variant on this bug, we fixed this exploit technique. Every single one of those actions makes it harder. So that's sort of the way that I view this whole problem.
So with all this effort, is it working? Is Project Zero actually making it harder for people to find and use zero-day vulnerabilities? I think on a long scale, like, definitely since 2014, zero days have become harder. But I think what's hard is that, to me at least, it's pretty obvious that it's not hard yet. Like, for example, for the first six months of 2022,
what was it? A huge percentage of the in-the-wild zero days were variants of previously patched bugs. Okay: 50% of the in-the-wild zero days from 2022, as of mid-June, were variants of previously patched bugs. That makes it
really hard for me to look at, because we had chances to block one in two of these zero days that we as an industry didn't take. And 27% or 22%, somewhere in that range,
of the in-the-wild zero days from 2022 were even variants of in-the-wild zero days from 2021. So the attackers could come back, you know, less than 12 months later and just use a variant of the bug again. So I'm more focused on what we can do and the opportunities we have rather than smirking at the news as much. But of course, we've got to take the wins when we can get them. Mm-hmm.
You used this term before, private state-of-the-art versus public state-of-the-art. What does this mean, and how does it apply to you? So in vulnerability research, we publish, like, what's the new attack surface, what's the new bug class or exploitation technique that we consider state-of-the-art in terms of novel, a great way to bypass
new exploit mitigations, et cetera. And so offensive security researchers, like my team, publish a lot to show, oh, we found this new way to bypass X, to help show this is why it has to get fixed and where its weaknesses lie. And so that would be the public state of the art, because it is offensive security researchers talking about it publicly: this is where
these techniques stand right now. The private state of the art is, what techniques do the attackers actually have?
And so part of the reason why I focus on zero days that are actually exploited in the wild is because it can help us close that gap between the public state of the art and the private state of the art. Because a lot of the time, we use the public state of the art to help inform what the next area of research is that we should focus on. But if that's diverging too far from what the attackers are actually doing,
then this research is not as useful to us, because we're not having what we call those collisions with attackers when trying to fix bugs and vulnerabilities. We're not putting our resources in areas that are super useful. So that's what we mean when we say, or I say, public state of the art versus private state of the art.
That's a really interesting concept to me. We know what's out there when it becomes seen, but we don't know what hasn't been discovered yet. And what hasn't been discovered yet could be a hugely overlooked use of technology or capability that we just haven't been creative enough to imagine. So it becomes almost a theoretical question: what theoretically could attackers do today?
How can we look into those areas to try to figure out what they are working on, to stop them, to make us all more secure? Well, one of the things I think is most promising is that in 2021, there were the most in-the-wild zero days detected and disclosed as in the wild ever since we've been tracking, since mid-2014. That might
not make sense as something I'd call promising to some people, but I think it is, because notice I didn't say the number of in-the-wild zero days used; we can't track that. We can only track the number of zero days in the wild that are first detected by someone and then disclosed as used.
Because if folks are finding them and reporting them to vendors but never saying, hey, this is in the wild as well, it's not just another vulnerability, then there's no way for us to know about it. So I do think in the last
three or so years, there have been huge improvements across the industry of people working on detection and trying to find zero-day exploits, not just brushing it off and saying this is an unsolvable problem. And I'm also really hopeful about the trends in transparency around these. I think there's still plenty of progress
to make in the transparency space around these zero-day vulnerabilities and exploits. But I'm hopeful that we're having more and more vendors transparently disclose when something is being actively exploited, and that some vendors are making it easier to
figure out which patch in open source software goes with a CVE, and giving more robust descriptions of it. And my hope is that we then get to where they're doing and publishing these detailed root cause analyses and doing more variant analysis on their own, rather than third parties like myself and my team and some other security researchers coming in and doing that work.
I think I would like to see on my phone whether or not I was
exploited. If there's some sort of Play Protect feature that says, oh, we've updated this. Oh, wow, somebody was actively exploiting you. Big notice there. I want you to know. Yeah, I think that would be super interesting. And that is one area that's been growing, with lots of different researchers trying to figure out, what type of forensics do we look for? These are sophisticated actors, so they're also pretty good at cleaning up
traces, and zero-day exploits don't always leave a lot of traces. So how do we figure out if someone had spyware running on their phone, or if they had an exploit delivered to their computer or device? And Citizen Lab and Amnesty International are also doing some really awesome work in this space, as they also work closely with the targeted populations.
I saw a really big cell tower the other day, and I just walked up to it, and I looked up all the way at the top, and I was like, whoa, that's really high tech. This is Darknet Diaries.