
Inside the Biggest Heist

2025/3/8

web3 with a16z crypto

People
Robert Hackett
Topics
Robert Hackett: This episode discusses the $1.5 billion cryptocurrency theft suffered by Bybit, possibly the largest heist in history. The incident highlights vulnerabilities in cryptocurrency security, particularly around multisignature wallets and hardware wallets. We also discuss the details of the attack, the state of crypto security across different types of wallets and organizations, and how to protect yourself from similar attacks. Matt Gleason: The attack exploited the Safe multisig wallet's web app: by modifying the JavaScript code of the Safe web app, the hackers replaced the target address just before the transaction was signed, stealing Bybit's funds. The attackers likely had inside information about Bybit and targeted them specifically. The attack also exposed that hardware wallet security is not absolute, as well as the risk of social engineering attacks. To improve security, companies should perform detailed threat modeling, spread funds across multiple wallets, and verify transaction hashes. Individual users should avoid downloading executable files and avoid running suspicious software on their work machines. Software vendors should strengthen security measures across the software development lifecycle to prevent supply chain attacks.


Transcript


Hey everybody, I'm Kai Ryssdal, the host of Marketplace, your daily download on the economy. Money influences so much of what we do and how we live. That's why it's essential to understand how this economy works. At Marketplace, we break down everything from inflation and student loans to the future of AI so that you can understand what it all means for you. Marketplace is your secret weapon for understanding this economy. Listen wherever you get your podcasts.

Are hardware wallets by and large safer? Probably. But like, safer than what? Are they safe enough to be storing, say, $1.5 billion? Like, apparently, maybe not. Welcome to Web3 with A16Z. I'm Robert Hackett. Today we're talking about what is potentially the biggest heist of all time.

a hack of the Dubai-based crypto exchange Bybit, which took place last month for a total of $1.5 billion, and which the Federal Bureau of Investigation has attributed to a North Korean state-sponsored hacking group. In this episode, we cover the details of how the attack went down, the state of crypto security across different types of wallets and organizations, and what you can do to help protect yourself from similar attacks.

We're joined by Matt Gleason, a security expert at A16Z Crypto, whose excellent write-up of the incident you can find in this episode's show notes. We've also included an FBI PSA about the hack and other useful links as well. As a reminder, none of the content should be taken as investment, business, legal, or tax advice. Please see a16z.com slash disclosures for more important information, including a link to a list of our investments. ♪

So Bybit, a big crypto exchange, was hacked and it's the biggest crypto hack of all time? Potentially the biggest heist of all time. Biggest heist. Okay, so this includes all known bank robberies. Yeah, the second biggest one is supposed to be something that Saddam did. Really? What did he do? Seizing property, yeah.

At what point does it become like a heist versus just government appropriation? That would be hard to tell because like you could argue that the shift to communism that happened in the former Soviet Union had some of that, that everything capitalistic was converted to the government. And so, yeah.

That would be much larger, by the way. Anyway, $1.5 billion is just enormous. And it was taken in the form of mostly Ethereum, but also some Ethereum-related crypto. What happened and how did they get away with it? Well, how did they do it? How did they do it? Let's go through it at a very high level first, and then we can dig in as we go. Great. What essentially happened is Bybit's multisig signers signed a transaction that basically

effectively relinquished ownership of their multisig. So what it did is it upgraded the multisig to code belonging to the presumed attacker,

And then they were able to leverage the code that now was running the multisig to drain it completely. When you say that it upgraded the multisig code, it basically created new owners for the multisig? Is that what happened? It's a little bit more nuanced. So in Ethereum smart contracts, you have the concept of a proxy. Every smart contract's code is immutable, but the storage in the contract can be altered. And so the way a proxy works is you create an immutable contract that points at another contract and says, hey, go run the code in that contract as yourself. It's a mechanism called delegatecall. And so what they essentially did is they were able to override the address that the delegatecall points to, so it pointed at a smart contract they had deployed. And so it

didn't just give ownership, it gave, like, full everything. It's no longer a Gnosis Safe wallet. It was one, and it is now something else. How does that work in practice? So here I am, I'm working at Bybit, and me and my crew of a few people are the head honchos who get to make the signatures that allow transactions to happen. We go in,

We sign this thing thinking that it is just moving some money between our own wallets, just a standard part of regular business operations. We look at it, we say, okay, looks good enough, click confirm. And so what happened exactly there? So the smart contract itself was fine. None of this leveraged anything on-chain specifically. The signers...

gave them authorization to do what they did. And the way they got them to do that is a little bit nuanced. But let's go through the process they would have seen normally.

So if I'm a multisig signer in Gnosis and I'm going through this process and I'm like, okay, I want to send, let's say, 30,000 Ether from my Bybit multisig to my hot wallet. What I do is it's a three of six, I believe. I go through, I am sent the details of what the transaction is.

I click sign transaction. It sends it to my wallet. In this case, very likely something like a Ledger, a hardware wallet. That wallet displays to me what's essentially a hex key. It's a hash of what the transaction is. And so they see this hash and they would say, okay,

It looks like a random string of bits to me. So either they would have made a choice to re-verify that hash using an auxiliary program. They would go through, they would enter the stuff into the program, and then it would display a hash. They'd match the two. Or they would just look at the hash and go, well, I can't understand this. I'll sign it. And so either of those two things could have happened, presumably not the former, just because had they done that, they would have seen something was altered.
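For the curious, an auxiliary verification program like that is not exotic. Here is a minimal sketch, assuming ethers v6 and Safe's published EIP-712 SafeTx schema; every address and value below is a hypothetical placeholder, not data from the incident.

```typescript
// Hedged sketch: independently recompute the hash a hardware wallet displays
// for a Safe transaction, so you can compare it digit by digit.
// Assumes ethers v6; all addresses/values are hypothetical placeholders.
import { TypedDataEncoder, ZeroAddress } from "ethers";

// EIP-712 domain: the chain and the multisig contract doing the verifying.
const domain = {
  chainId: 1,
  verifyingContract: "0x0000000000000000000000000000000000000001", // your Safe
};

// Safe's SafeTx struct layout (per the Safe contracts' EIP-712 schema).
const types = {
  SafeTx: [
    { name: "to", type: "address" },
    { name: "value", type: "uint256" },
    { name: "data", type: "bytes" },
    { name: "operation", type: "uint8" }, // 0 = CALL, 1 = DELEGATECALL
    { name: "safeTxGas", type: "uint256" },
    { name: "baseGas", type: "uint256" },
    { name: "gasPrice", type: "uint256" },
    { name: "gasToken", type: "address" },
    { name: "refundReceiver", type: "address" },
    { name: "nonce", type: "uint256" },
  ],
};

// The transaction you *think* you're signing: a plain 30,000 ETH transfer.
const safeTx = {
  to: "0x0000000000000000000000000000000000000002", // hot wallet (placeholder)
  value: 30_000n * 10n ** 18n,
  data: "0x",
  operation: 0, // plain CALL; a 1 here would be a red flag
  safeTxGas: 0n,
  baseGas: 0n,
  gasPrice: 0n,
  gasToken: ZeroAddress,
  refundReceiver: ZeroAddress,
  nonce: 0n,
};

// If this doesn't match the hex string on the device, don't sign.
console.log(TypedDataEncoder.hash(domain, types, safeTx));
```

If the hash printed here matches the string on the device digit for digit, you are signing what you think you are signing; any mismatch means something between you and the device altered the transaction.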

If you're signing using like a MetaMask, the protocol uses something called EIP-712 signatures. And so MetaMask or other more graphical wallets can display exactly what it's doing. So display here's where it's going to, here's how much you're sending, here's what the call data is, and a few other bits of information.

And then you would presumably verify that reading it. And then you'd hit sign and it would construct the hash itself using the data it had and then sign it and then send it forward. And so that's what a signing process for a single signer looks like. You repeat that step three times or potentially two times. And then you have someone just look at it and go, OK, I'm going to execute. But in this case...

They sign it three times. And then once you have those three signatures, you can then execute it on chain and you execute what is in the transaction. In this case, what the actual transaction was is it went from...

Presumably something like sending 30,000 ETH to a hot wallet to performing a delegate call against a different contract the attacker controlled. And what this delegate call did is it overwrote a specific slot in storage. And this slot happens to be the implementation address. So it overwrote where the wallet would point when it was looking for how should I execute my functionality.

And so this effectively upgrades the wallet through a non-standard path and then gives them full control. And so once they had that, they called methods named sweepETH and sweepERC20, or sweepToken, and took everything from it.
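To make that storage-slot detail concrete: a Safe proxy keeps the address it delegates to in storage slot 0, and anyone can read that slot. A hedged sketch of such a check, again assuming ethers v6, with a hypothetical RPC URL and addresses:

```typescript
// Hedged sketch: a Safe proxy keeps the address it delegatecalls to
// (the "masterCopy"/singleton) in storage slot 0. Reading that slot is one
// way to check whether a wallet still points where you expect.
// Assumes ethers v6; the RPC URL and address are hypothetical placeholders.
import { JsonRpcProvider, dataSlice, getAddress } from "ethers";

const provider = new JsonRpcProvider("https://rpc.example.com");

async function currentImplementation(proxy: string): Promise<string> {
  const slot0 = await provider.getStorage(proxy, 0); // 32-byte storage word
  return getAddress(dataSlice(slot0, 12)); // last 20 bytes are the address
}

// After an attack like this one, a check of this kind would return the
// attacker's contract instead of the canonical Safe singleton.
currentImplementation("0x0000000000000000000000000000000000000003")
  .then((impl) => console.log("delegates to:", impl));
```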

So how were the attackers able to get into the guts of this wallet to begin with, to make that change, to make that delegate call? This is where it gets a little crazy. This is where it gets crazy. It sounded crazy beforehand. Yeah, so what they did is what you would do if you were attacking this kind of system. But how they got to that point is like, honestly, far beyond what my initial expectations were on how far they got.

So what ended up happening is they actually changed the deployed JavaScript code of the Safe web app. And so essentially they went to an S3 bucket, likely. They took the source code, they modified it, and they put it back. And through that process, they inserted very specific code. And this code, you can read it because it's on archive.org. We've seen at least the snippet we think was executed there.

And essentially what it does is it looks for, hey, is this being signed on behalf of Bybit, this specific multisig? If so, right before they execute the transaction, replace it with my own and then put it back and send it back to the app. So it looks like nothing happened.
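As a rough illustration of that bait and switch, not the actual exploit source (which is preserved on archive.org), the injected logic had roughly this shape; all addresses here are hypothetical placeholders:

```typescript
// Conceptual reconstruction only -- NOT the actual exploit code. It just
// illustrates the shape of the bait and switch described above.
interface SafeTxFields {
  to: string;
  value: bigint;
  data: string;
  operation: number; // 0 = CALL, 1 = DELEGATECALL
}

// The check: only one specific multisig gets attacked.
const TARGET_SAFES = new Set([
  "0x0000000000000000000000000000000000000004", // the victim's Safe (placeholder)
]);

function maybeSwapBeforeSigning(tx: SafeTxFields, safeAddress: string): SafeTxFields {
  // Everyone else using the compromised app gets their transaction untouched.
  if (!TARGET_SAFES.has(safeAddress)) return tx;

  // The victim actually signs this instead: a delegatecall into attacker code
  // that overwrites the proxy's implementation slot.
  return {
    to: "0x0000000000000000000000000000000000000005", // attacker contract (placeholder)
    value: 0n,
    data: "0xdeadbeef", // placeholder calldata
    operation: 1, // DELEGATECALL
  };
}
// After signing, the app is handed back the original tx object, so the UI
// "looks like nothing happened."
```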

There are actually a few other things they did in there that are shrewd, but literally, it was a bait and switch right inside there. The code they had would have been run by anyone using the Safe app at this point in time. It was just designed only to attack Bybit. Man, it's almost like reading a sci-fi story where somebody engineers a virus that is carried by everybody else but lethal to just one person. That's what this is.

Yeah, it is a very highly targeted way to wield what was ostensibly the ability to try to get almost anyone's multisig.

Because they controlled the entirety of the Safe web app. Anyone using that Safe web app, that specific web app, could have been targeted. They just weren't. How long was this change live within Safe? So I believe it was active for about three days. They tested it against their own wallet. They tested it live on-chain on the real chain. You can actually...

find that address inside that source, go to that address, and you can see them testing it. And you can see how they do it. The code actually says if you're Bybit, or if you're our test multisig; it was designed to hit both, and they used it to test against themselves. And so they tested it themselves to make sure, okay, this works, this doesn't do anything weird interface-wise.

And then supposedly they would have put back the original source. They go, they execute the attack, they get Bybit. And then I think it's speculated that they themselves replaced the code with the original code.

And so the original code was there after the attack had been done. So if you were investigating, you could look at the source code and you'd be like, well, this looks fine. They actually went through the Internet Archive, which had snapshotted one of these versions. Wow. To find out what had happened there. And hopefully, honestly, Safe would have had logs and other things. You said this code was hosted in Amazon, in AWS. This isn't like GitHub open source code.

No, this is after it's been compiled and minified. When you deploy JavaScript, if you deploy it with like all the variables, that's a lot of ASCII text to be sending, especially given like

The amount of JavaScript we run on our modern web browsers is immense. And so what they essentially do is they take every variable name and all the white space and everything, and they just crunch it out. And so it looks like one line of JavaScript code that has single-letter variable names everywhere it can, sometimes two letters.

That just allows us to use less space when we're sending it from place to place. You can actually look at how the exploit code fits into the minified code itself. And you can see they manually wrote it. It has carriage returns. It has more proper stuff in it. It's more readable, versus the rest of the code, which is just this minified craziness. And so it's after everything has happened, after it's been packaged for deployment, and then it's put up for everyone to use.
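A toy example of what minification does, using a made-up function, shows why hand-written code stands out:

```typescript
// Toy illustration of minification (a made-up function, not Safe's code).
// Readable source, as a developer writes it:
function computeOrderTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, price) => sum + price, 0);
  return subtotal * (1 + taxRate);
}

// Roughly what ships to the browser after minification -- names crunched to
// single letters, whitespace stripped, everything on one line:
//
//   function c(a,b){return a.reduce((d,e)=>d+e,0)*(1+b)}
//
// Hand-written code pasted into a file of that stuff keeps its line breaks
// and descriptive names, which is how an injected snippet can be spotted
// amid the minified output.
console.log(computeOrderTotal([10, 20], 0.1)); // ~33
```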

Okay, you're saying like this is in production. So the code's open source, but in production, it's running and exists on an S3 bucket in AWS. Yeah, it exists to be served to end users. You wouldn't put minified source as source code because no one can read it. It's not very useful. The code would be massive and everyone would be like, why is this here?

It's essentially a release artifact. Okay, so this was a supply chain attack. The attackers went after this critical piece of software in this multisig flow. They managed to put something out there that would target just one entity. There are obviously a lot of people that use Safe. Why go after Bybit? Clearly it's a big target, but there are other big targets too. Yeah, so the amount of money they have...

And then potentially other intel they may have had, either through reconnaissance within Safe's infrastructure or just generalized reconnaissance. This brings me back to another thing that they did within the exploit itself.

Inside the exploit, they actually go out of their way to take one of the Bybit signers and essentially disallow them from signing anything. So one of the people you would have expected to basically always go through and sign, if that person was trying to sign, it would just refresh the page on them. Now, are you saying that this basically changed the multisig from being a three of six to a three of five or that it changed to like a two of six now?

No, it would still technically be a three of six, right? Okay. It's just that one of those six is no longer able to perform their duties. Through their interface. Yes. So through... Through Safe's interface. Through Safe's interface, one of the wallets very likely could not have signed the transaction, even if they wanted to, which kind of points to them having some extra intel on the specifics of how Bybit was doing signing.

And so if they had reason to believe that Bybit was performing signing using specific devices with specific processes, then this would have been the impetus for like, why target Bybit? Like, well, they have a ton of money.

And we may actually know what they're using to sign. Because of that, we think the attack would be more likely to be successful. Because remember, as I said earlier, if you're using MetaMask, it is going to tell you what every variable is. And the way they changed it, they didn't hide that information. Had they had a MetaMask wallet or another graphical wallet that handled the EIP-712 signature as well,

They would have seen, hey, this is going to a strange address. It went from transferring 30,000 Ether to zero. And it injected all this data into the call itself. Like, this is super weird. Maybe we shouldn't sign this.

So they ran the chance of getting detected, unless they were able to verify that the wallets themselves couldn't have displayed some of this, or that the wallets had some configuration that ignored the user verification flow or something. But they seem to have known something about who they were targeting, because they would have had reason to believe this attack would work, versus, like...

I honestly don't know if certain other entities could have been hit just because of other verification processes they have and other types of things. Because there are tools out there for verifying hashes with your hardware wallets. There are graphical wallets you can use and there are steps you can have in place to know like, oh, hey, I just look at the transaction you guys are signing and it is clearly malicious. We should stop signing.

Within the security sphere, we're often told that hardware wallets are generally considered a better practice, or safer than digital ones. Hardware wallets are mostly thought of as secure because they won't give away the key. You have a backup of that key in the form of a seed phrase, hopefully backed up in a very safe place. But otherwise,

Communication with that hardware wallet, it's never going to tell anyone what the key is. Like you have to ask it, hey, sign this thing for me. And then it has some interface that allows you to go through it and hit OK. But in the case of these generalized signatures, it's just displaying a hash for you. And like, how do you verify the hash? Presumably, you might have a command line utility on your computer where you can enter something and it displays the hash for you. But that presumes your computer is safe. So you have this weird catch-22 where it's like, well, OK.

It'd be nice if the hardware wallet had a bunch of software on it that then can display all this information to me and it can do all this stuff. Yeah, but then maybe it's vulnerable to something, right? Like you've introduced more interfaces, you've introduced more stuff. It's harder to manage. Like this is all a very big muddled thing. Are hardware wallets by and large safer? Probably. But like safer than what? And up to how much?

It's like an honest question because the fact of the matter is, are they safe enough to be storing, say, $1.5 billion? Like, apparently, maybe not. So let's recap. We've gone in depth on two areas where somebody could have noticed something going wrong. One was the code being changed on the Safe side. And another one is the actual initiated transaction via multisig.

But let's go further back, because what enabled anybody to make changes in the code in the first place that triggered this whole cascade of events that followed? So exactly how they would have gotten in, exactly how they would have gotten access, this is unknown currently. If we are to extrapolate from the other attacks of this kind we've seen,

this is the typical way a campaign like this would have been performed by the Democratic People's Republic of Korea. And by the way, we should talk about that too. Like, why do we have high confidence in the attribution of this attack to DPRK, the North Korean actors? That suspicion has not been completely verified, but I think the people most familiar with that stuff suspect that

This is potentially them. Which, by the way, they don't necessarily have to be inside of North Korea. They could be working in some like building in China. They could be mercenaries. They could be anywhere all over the world. But the idea is that they are affiliated with and state sponsored by North Korea. The way they would do their standard campaigns is they would typically target an individual working at a company. They would send them messages with some script that

explaining why they should either join a meeting using a specific meeting client or open this PDF using a special PDF opener because it's encrypted to be secure. And this could be maybe somebody is going for a job interview or they think they're getting headhunted for some sweet gig. And hey, hop on this weird custom video conferencing software. And by the way, I've got to say this as a PSA, there are...

people out there impersonating people affiliated with this podcast.

and trying to get people to think that they've been invited on as a guest, and then trying to get them to download this sort of Trojan horse video conferencing software. So PSA to anybody listening: you should always check with somebody at the firm itself through other channels to make sure that you're indeed talking to somebody who is a part of the organization you think you're talking to. And check with two people, because this is a problem that's gotten rampant. We're seeing lots of incidents of this.

Yeah, I mean, it's a fairly standard play. And so essentially, they have a fairly innocuous pretext. In Radiant Capital's case, it was a reach-out from a supposed former contractor asking for a critique of their resume.

When did the Radiant hack happen? So Radiant Capital was actually hacked twice. The second hack was for $50 million and it happened in October of last year. And so they downloaded a PDF and they downloaded the PDF launcher and it owned their machine. That's what you'd expect an initial hook to be, that they would have gone through some individual and they would have gotten access through that means. I think the speculated thing is they stole the keys.

At the end of the day, all these big hacks always seem to come down to some sort of social engineering at the start of it. And then leveraging a mistake. They would have just sat there for potentially weeks or months, or maybe just days, to do reconnaissance, to find out how systems worked, what they did, and then how they might be able to leverage the intricacies or the quirks

that they have identified in the systems. They're looking for gaps, they're looking for specific things they can target, and then they're customizing the exploit to target and to hit that exact thing. So I want to dig in in two places. One is at the start of this whole thing, when you have people getting owned,

by social engineering attacks. We know that humans are fallible, they make errors, they are usually the weakest link in the chain. What can be done to better protect against that and all of the terrible things that happen afterward? So from an individual perspective, typically what you would do is you just...

Don't download executable software. That would be the biggest one. And like, it honestly is very difficult as a person to discern whether or not you should trust it. And so being vigilant about it can help. Perhaps just don't run stuff that is sent to you on a machine that has access to a bunch of stuff. Don't run it on your work machine. Don't run it on other stuff. All this is very difficult for the individual to navigate. We're not all vigilant all the time. Like,

You can try, and honestly, you might succeed most of the time, but you can also plan for failure and make it so the blast radius of your failure isn't too far. The other thing to realize from a corporate perspective is that if you're a corporation or if you're a company and you're trying to combat this kind of stuff, you actually can't assume that the employee didn't do it on purpose. There is the case where the employee just gets $20,000 to do it.

When you talk about like those use cases, now it gets much more difficult. And then as a corporation...

Being able to make sure that no single individual is able to negatively impact your software or supply chain in a meaningful way is always good. Having a bunch of monitoring to tell whether or not anything of that nature is happening and then having the ability to see if they seem to be running anything that may have backdoors on it. That's the type of thing companies would typically do to avoid specifically like the entry point thing.

Obviously, there's more they can do to, again, limit blast radius. If you're a crypto company and you have a crypto wallet, like perhaps don't have $1.4 billion in one wallet,

That way, if the transactions change from underneath you, you lose the amount that's in the wallet, not the entirety of the amount. Are there any rules of thumb that an organization like a Bybit, like a crypto exchange should keep in mind when it comes to allocating funds across wallets? I mean, it all reduces to performing pretty detailed threat models, right?

Newsflash, the threat model is North Korea is coming after you and they have many billions of dollars and infinite time. Yes. Well, now they do. But the other big thing to consider here is like if I split it into 10 wallets, but there's a way to get after all 10 wallets at once, splitting it into 10 wallets doesn't do anything. And that's the threat model thing.

splitting isn't always going to be the be-all end-all. You have to be able to split it in a meaningful way where the threat now has to do double, triple the work to get to the second thing and then to the third thing. And so knowing all of, like, here's how it would go wrong, here's how this would happen. And then honestly, to some extent, you can hire people to help you with that. And if you have, again, billions of dollars at risk in this kind of arena, well, you should do it.

that will give you some semblance of like, okay, we kind of know what will happen. We know, and we can address this. And then after you've modeled it out, you should also verify that system exists as you think it does. Because like the second most common thing you're going to see is that they're like, yeah, we had a really good way of reducing this risk. And we made a mistake here. Then everything got stolen. But the first step would be understand what the threat is. And in Bybit's case, there are a few threats, right? Yeah.

If Safe itself gets owned, if all of the developer or all of the multisig holder laptops get owned, if the connection gets misrepresented in a way that allows them to imitate Safe, if any of the extensions that are installed on their browsers are able to modify the JavaScript inside Safe, there are, like, four layers down of all the things that could happen to a web interface. And

Reducing some of that risk, some of it honestly is just reduced by how well can we verify what we are signing? When I'm presented that hash on Ledger, how do I verify that hash on Ledger? Do I regenerate it on another computer and then verify it one for one and literally go through every single digit and make sure it's exactly the same?

Like in this case, the signing is so infrequent, the answer is probably yes. And what that ends up doing is it reduces that risk. Now that whole chain is no longer as much of a risk, because now you're checking exactly what you're signing. You're not even trusting the web interface, right? You are trusting the computer you're computing the hash on and you are trusting the hardware wallet you have, but you are already trusting those two things.

And then if you're doing complex transactions, like having someone sit there and simulate them and go through and run through them, that also gives you even more ability to reduce the number of places anything can go wrong. And so that's a big part of it: I have this threat model, I have this thing, I could split up my wallets, that would be great. But I could also

insert places where, if certain things get attacked, it doesn't matter. If this happens, it doesn't matter. I have this wallet, I verify it. And once I've verified it, I know what it's signing. Can North Korea, or the actor, the hackers in this case, can they even access this money now? I mean, this is a lot of money. There are a lot of people looking at this transaction and the wallet that's now holding it. Are they going to be able to actually move that anywhere?

They're going to try to launder the money, almost certainly. How they're going to try to launder it is anyone's guess. People are going to try to stop them from laundering the money. How much of the money they can launder, again, who knows? So will they get access to $1.5 billion? Probably not.

How much of it will they get access to? Nobody knows. This is now just a money laundering exercise for them. It's a money laundering exercise, but unlike maybe in the cash economy where you can like secret money away in bags and hide it away, all this stuff is in public. Like it's going to be hard for them to do this without some extremely intense and serious scrutiny on anywhere that the funds are flowing to.

Yeah. And everyone is currently actively monitoring where everything goes. There is a lot of effort on the ground to try to keep as much from being laundered as possible. And how much of it they're able to do, we'll see. They're transferring it around a lot. People are watching them constantly. Humor me, but like, how would somebody even conceive of or think about this challenge of moving such an enormous amount of money?

I mean, like anyone, you're disguising the source of what the money is. You need to disguise where it came from, because obviously, if you say it's stolen, people are going to be like, we don't want to do business with you. You have to have some way to obfuscate where the money came from.

And who you are. Obfuscating who you are means stealing identities, creating false identities, and knowing which governments might let you do that, or which governments might have weak enough controls to let you do that. There are black markets for a lot of this stuff.

Even so, you'd have to be able to move the money in a way that you couldn't track the source of it, like you said. How traceable they are, who knows? But how traceable they are to specific exchanges is another question to ask. Because some of this is like, are they going to try to withdraw from specific exchanges inside the United States? Like, maybe not. Are they going to try to withdraw from small exchanges scattered throughout the world? Like...

Probably. It's harder for them to do identity verification well. It's harder for them to do source identification well. The laws are different in those places. They'll just find ways to do it. Like, let's be clear. The test transactions they performed were performed using money withdrawn directly from Binance. They are able to use exchanges and they have. They made four withdrawals from Binance for a total of basically one Ether.

And so the question is, how effective are some of their laundering operations? I think governments know, but I don't think they're going to tell us. It is an eye-popping figure to say a billion and a half in dollar amount of crypto. But at least one thing that might be encouraging is the fact that maybe at the end of the day, the attackers won't really realize much of that. And hopefully, the players in this industry are going to shore themselves up and make it so that the same sort of attack wouldn't

work again. You know, never say never, but I think generally most people are going to be so aware of it that anyone with a sizable sum who's using this kind of multisig transaction stuff and hardware wallets is going to put a lot of effort into verifying the transactions, and the specific play that took place here likely will not happen again at anywhere near the sums that it happened at.

I would expect anyone with 10-plus million dollars in a wallet to be sitting there and really going through it. Because you have to realize, if I'm a person on Gnosis and I have 50 million dollars in a Gnosis Safe wallet, it's just a happenstance coincidence that they didn't choose to steal that money from me. Now there's that big looming: you saw what happened to the other guy when they didn't do it. That's why we do it now. And we do it 100 percent of the time.

There's actually, if you explore YouTube, a content creator named Patrick Collins who went over literally how to do this. He was like, hey, here's how you verify what you're doing on Gnosis, or on Safe. And he did this in response to the Radiant hack. This was done weeks ago. Wow. And so there are videos out there explaining, step by step, here's what you need to do, here's a tool to do it.

If you have an old-school Ledger, here's what it will display. If you have this newer Trezor wallet, here's what it will display. If it's on MetaMask, here's what it will display. Going through all of this detail, blow by blow, on the specifics of how to avoid exactly this case. Oh man. Well, we'll put that in the show notes. I'd love for people to see that video and any other information that would be helpful.

Okay, well, let's say you're a company in the crypto industry right now. You've seen some of these big hacks. What can companies do? What should they be doing to protect themselves? So if you're a company and you're performing large transactions on anything...

you need the ability to double-check what the transaction is doing prior to signing, usually from as close to the wallet level as possible. So if you're presented a hash on your Ledger or your Trezor, that involves recreating the hash somewhere and verifying byte for byte that the hash matches.

If it's any sizable sum, you need to make sure all of this works. And you need to make sure it works to a T. And this also includes if you're not using multisig. If you're sent a transaction to your wallet and it has a bunch of data and you have a sizable amount in that wallet, be able to make sure what that transaction is going to do. Be able to understand some of that information within that transaction. Or if you're unable to do that, then you can transfer to a wallet that just has less money in it.

And that way, like, hey, I have my wallet with a few thousand dollars or a few hundred dollars that I'm willing to lose is a much better case than I have my wallet with a million plus in it. In other words, don't put all your eggs in one basket. Have many baskets, given how expensive eggs have gotten today. Oh, yeah, exactly. The other things are, if you're becoming especially paranoid, you can basically recreate the interface yourself.

Safe is open source software, so you could deploy your own version of the interface on your own computer that can only be accessed by your own computer. And then what you know is that the only way that's getting modified is if your computer gets modified, presuming that it doesn't get code from somewhere else. You have the capability of getting a wallet where that wallet is fully self-contained on your own machine and is non-changeable.

And if you can get that, that does a lot for you. You're still trusting, though, that the open source code is okay. You're trusting that at a point in time it was okay. And it is open source, so you can build it and you can look at it. It's certainly not bulletproof, but...

It's a lot more assurance than, hey, I'm trusting this thing on someone else's server that they can change out from under me at any point in time. So you've been laying out a few preventative, proactive measures that people can take, including checking hashes and transaction details.
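As a sketch of that self-hosting idea, here is one way to serve a locally built copy of an open source interface so that only your own machine can reach it. This assumes Node 18+ and a hypothetical dist/ directory containing a build you produced yourself; it is an illustration, not a hardened server:

```typescript
// Hedged sketch: serve your own locally built copy of an open-source wallet UI,
// reachable only from this machine. Assumes Node 18+; "dist/" is a hypothetical
// path to a build you produced yourself from the audited source.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { extname, join, normalize } from "node:path";

const MIME: Record<string, string> = {
  ".html": "text/html",
  ".js": "text/javascript",
  ".css": "text/css",
  ".json": "application/json",
  ".svg": "image/svg+xml",
};

createServer(async (req, res) => {
  // Map "/" to the app shell; resolve everything inside dist/ only.
  const rel = req.url === "/" ? "index.html" : (req.url ?? "").slice(1);
  const path = normalize(join("dist", rel));
  if (!path.startsWith("dist")) {
    // Reject attempts to escape the build directory (e.g. "../../etc/passwd").
    res.writeHead(403);
    res.end("forbidden");
    return;
  }
  try {
    const body = await readFile(path);
    res.writeHead(200, { "content-type": MIME[extname(path)] ?? "application/octet-stream" });
    res.end(body);
  } catch {
    res.writeHead(404);
    res.end("not found");
  }
  // Binding to 127.0.0.1 means nothing off this machine can connect; the UI
  // you load can now only change if your machine (or your own build) changes.
}).listen(8080, "127.0.0.1");
```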

spreading out your assets, and also running instances of software that you download yourself, host locally, and run on multiple devices so that you can have redundancy and check how one is executing against how another is executing. There are other things that companies can do too. If we talk about this less from the lens of, how do I not be Bybit, and more in the lens of,

Like, how do I not be Safe in this situation? And when you say how to not be Safe, you mean how to not be the organization within the software supply chain. Oh, yeah. Sorry. The company name is Safe. So that may not have been clear. So if you're the software vendor and you don't want people to be getting hacked because of you, you have to really lock down the entirety of what is called the software development lifecycle.

So typically you'll program software. It'll be in a source control repo. It'll go through a build system that builds and compiles it. That build system will usually use an artifact system that pulls down software artifacts that are used during the build process and packaging process. And then that build system will deploy it. And so hardening that whole system as much as possible, so that a single person is not capable of just swapping things in and out,

is great. In this case, that would be making sure that the deployments themselves, so the areas that things are deployed, no one has access to them, but the build server, that the build server, or whatever is doing your build system stuff is the only thing that's able to deploy this code, making sure the build server is mostly not accessible to people. So hardened build server, hardened deployment area, or production cloud, basically, and hardened artifact repo, making sure that it

has everything really well locked down. So it's pulling artifacts, it gets the artifacts, it stores them, but it absolutely can't be swapped out by random people in your company. And then all the way back to the source code. Source code should not be unilaterally modifiable by a single developer, meaning

if a developer modifies it, they put in a PR and then that PR is reviewed by someone. And then that review tries to make sure, oh, hey, this looks good, it's doing what it says. It won't keep it from happening, but it makes the whole process more difficult for anyone to inject stuff into. And it makes it so it becomes a lot more clear when unauthorized versions of that have happened, because you have a clear expectation of, oh, this is...

happening because of this event, and that can be cross-correlated and then looked at. And then typically in source control, you'll have stuff like commit signing. So you'll make sure that, okay, this commit is done by this person. And I can verify that because not only are there credentials in GitLab or GitHub, but it's also signed by the key on their development machine.

Any of these hardening procedures help ensure that you don't become the one leveraged to hit someone else. And let's be honest, it takes two to tango here. You have to have a compromised vendor and someone using the systems. A lot of times we tend to flame the person who got hit, not the person who helped make it happen. And it is really on both of them to be able to do these things. And so, yeah.

If you're in the vendor seat, make sure that the access that your developers have is pretty minimized, that you're not going to have a situation where one of them gets compromised in some way, either through phishing or intentionally, and then creates problems for your company. It's worth calling out, especially because I think in this podcast, we've probably been a little bit hard on Bybit themselves and commented a little bit less on the other side. Bybit ultimately was the one who lost the money.

But Safe was part of that process. Right. It's a supply chain. It is a chain. There are multiple links in the equation here. You know, we put out a State of Crypto report every year in the fall.

And in our most recent one, we looked into the amount lost to hacking across centralized exchanges, centralized finance, and also DeFi. And an interesting trend we noted was that it seemed like DeFi protocols are getting hacked a lot less now. At least since 2021, it seems like attackers are beginning to target centralized exchanges more often.

Is this something that you're noticing as well? Yeah. So it's valuable to examine this as a kind of, why do we think these things get attacked, and what are the measures to secure these systems? Ultimately, I think it's twofold here. One, smart contract hacks are hard to do. They're completely individualized every single time and the payoff can oftentimes be low. And the audit process and the security process around smart contracts is just very good.

The amount of hackable smart contracts in the ecosystem, by percentage, is going to be significantly less than it was in 2020 and 2021. So if I'm a hacker and I'm trying to make money hacking this kind of stuff, what do I do? Well, I go after the systems that are harder to secure and the systems that are much more complex. And so that's networked systems run by people who hold crypto. And so that's kind of what you see.

Also, consider the fact that a lot of their operationalization and tools will have been developed when they were running the same play against banks and the SWIFT system. So they have experience in the area; they know what they're doing. And so they're just running the same playbook they always have been. And so I think that's a good way to think about it: smart contracts themselves are probably more secure than they've ever been, and

enterprises are still as insecure as you would expect them to be. Does this fundamentally come down to the fact that there are more humans in the loop in a centralized organization? It's not even just the humans in the loop. It's just the amount of stuff in the loop, right? As you have anything in a loop that you just don't understand or that not everyone in the loop understands, you just inherently have more risk of making a mistake.

It's very easy as you're deploying something to just set it up in a slightly weird way that allows someone to do something and to set it up and to be like, well, like, do we have to verify this strange hash that hits this hardware wallet? Like, nah, that doesn't make any sense. Let's just skip that. It's an honest mistake. It's an innocuous mistake. But those pile up over time as systems get more complex. And then eventually you find enough of these mistakes somewhere to take advantage of them.

Complexity breeds mistakes, and mistakes breed the ability to exploit those mistakes. And so for enterprises, as you get bigger, you just become more susceptible, because you have more moving parts and more complexity in your systems.

So this is a trend that you would expect to continue, that more value gets lost to hacking from centralized organizations rather than from DeFi as we move forward? Up to the point that the centralized exchanges start to simplify very specific areas within their risk profile.

Even as a complex organization, you do have the ability to take the risky thing you're doing and compartmentalize it in such a way that is vastly simpler. That's using a simpler stack, that's using less stuff, that has less complex features. It's highly verifiable and it goes through that process. I would expect both things to happen simultaneously. That exchanges will continue to lose money, but my hope would be they start losing less money.

because they start to have more of these systems in place that are more securable, that are harder to hack. And I expect a lot more investment from those exchanges. If you're an exchange right now and you're like, hey,

Should we cut this team that helps us design these storage systems for all the money we're holding? The answer is no. Do you need to make them bigger? Not necessarily, but you need to be able to make sure they're good. You need to be able to make sure they're working. In the little time that we have left, I want to ask you just about the big picture here.

We're entering this world where automated systems and AI are going to be able to do a lot more than what people are doing. And that's both from an attack and defense perspective. But we're also developing interesting new tools like formal verification and other sorts of means of securing oneself or assuring yourself that things are running properly.

What hope is there for defenders moving forward into this future? Are we always going to be at a disadvantage to the attackers where you slip up once and you're just going to get your comeuppance? Or is there hope for people who are trying to lock things down and be secure? For the people who want to be secure, there is hope and there's going to be the ability to do that.

So even as we introduce fuzzy systems like AI, as long as we have more rigorous verification techniques, then we will be able to say, oh, hey, this system will not do this thing. Or with X degree of certainty, we know this system won't do this thing.

And that is very powerful. There will still be this give and take between, hey, we want to do crazy stuff and push this stuff forward, versus, hey, we need to know exactly what is happening, we need to know exactly how it works, and we need to know what these fuzzy systems are able to do and what they're not able to do. And so you're always going to see some gung-ho organizations get hit. But

for the organizations that are being a lot more careful, they're going to slowly get more features, more nice things to have that also fit within this risk model. My hope would be that we'll have more organizations focus on helping de-risk

And that once you have successfully de-risked things that you as a company can just go as hard as you want because there's someone else helping you look out for the risks that you're taking on. And so that'd be my hope. But like ultimately, how much of that will happen? Like, who knows?

Got to have hope, or else it is a bleak outlook. I hope everybody takes a step back after this conversation and is able to evaluate their own postures, their own threat models, and ways that they might secure themselves against all the bad actors out there who want nothing more than to own you. Absolutely. If we get anything out of this, my hope would be

that we don't see a hack originate from the same type of play. That even if a supply chain is hacked, people have decoupled enough from their trust in the supply chain to make themselves more resilient. And that these sorts of supply chain attacks just seem to happen less and less. Matt, thank you so much for coming on and sharing your expertise and advice for everybody. Thank you very much. I didn't mean to be too much of a doomer.

It's all good. You need a healthy dose of doom in order to get the wake-up call.