Fortune
David Schwed

The Ledger hack could have been much worse. But it also could have been easily prevented

(Credit: dem10—Getty Images)

The Web3 and crypto industries clearly have a lot to learn about cybersecurity.

Last week saw one of the more terrifying crypto industry hacks in recent memory, threatening not just a single protocol or application, but an untold number of apps that depended on one piece of infrastructure. And it could have been prevented with security practices that are second nature in more mature industries.

It happened in the dead of night U.S. time on Dec. 14. That’s when an attacker injected malicious “drainer” code into Ledger’s Connect Kit, a widely used software component maintained by the hardware wallet maker. For a few hours before it was patched, the malicious code snatched digital assets right out of wallets connected to services through Connect Kit. One commentator, only slightly hyperbolically, described the hack as compromising “all web3 websites in the world.”

Luckily, the damage to crypto users hasn’t been as catastrophic as it easily could have been. But the hack has devastating implications for Ledger itself, above all because it was 100% preventable—if only a painfully simple code-update-monitoring process had been in place. The fact that the compromised code was first detected by the third-party firm Blockaid, using a version of that update-monitoring process, rather than by Ledger itself, makes the failure even more damaging.

But similar failures are common across cryptocurrency and blockchain projects, and for similar reasons: many crypto projects have immature or underfunded security postures, focused overwhelmingly on searching specific pieces of code for vulnerabilities.

The Ledger hack shows just how limited this approach is, since the vulnerability was not in the code at all. Instead, it was in the process of managing the code. To prevent such internal process failures, crypto projects need to reorient their security standards around more robust security reviews common in—to pick a particularly ironic example—the banking sector.

Plumbing problem

Connect Kit acts as a kind of plumbing for an extended universe of distributed apps. In theory, Connect Kit allows Ledger wallet users to carefully control third-party apps’ access to cryptocurrency stored using Ledger’s hardware dongles. Compromising Connect Kit amounted to compromising all of those connected services. 

It was a new iteration of a classic “supply-chain attack,” a category that gained notoriety with the Russian-backed SolarWinds hack of 2020, which similarly compromised behind-the-scenes infrastructure software and may have caused as much as $100 billion in damage to a broad array of businesses and entities. The Ledger Connect Kit hack was caught and fixed within hours, and now seems to have cost users less than half a million dollars in crypto.

But autopsies of the attack have exposed deep problems with how Ledger managed its software, a product whose overriding pitch to users is that it is hyper-secure.

Here’s what happened, at least as far as we know right now. According to Ledger, the initial compromise was a phishing attack that gained access to the accounts of a former Ledger employee. While it’s impossible to say for sure, it seems that offering better anti-phishing training might have prevented this first apparent process failure.

But far worse, the former employee still had access to a Ledger JavaScript package distributed through npm, a third-party package registry. That’s the second process failure: all of a former employee’s access to code should, obviously, be revoked immediately upon departure.

But even that wasn’t the truly cardinal sin. It was apparently routine for changes to that NPM-hosted JavaScript package to be used to update the Connect Kit code in real time, with seemingly no human review or sign-off. That’s the third process failure, and it’s particularly dire.

Automatic updating from a live database of code is often referred to as “load from CDN [content delivery network]”. It allows an application to be updated rapidly, frequently, and without needing a user’s interaction. But the method also, at least as implemented for Connect Kit, created a major vulnerability, because there was no human check to make sure changes were intended and official. 

Once the hacker was inside the JavaScript package on NPM, there was effectively nothing at all between them and the code controlling users' wallets. Ethereum developer Lefteris Karapetsas of Rotki pulled no punches, describing the use of this live update method as “insane.” 

(Notably, however, some observers have laid blame at the feet of NPM itself for its failure to implement better version control natively.)
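Consumers of a package can also reduce this exposure on their own side by pinning exact dependency versions rather than ranges, so a newly published (and possibly malicious) release is never pulled in automatically. A minimal sketch of a pinned dependency entry, with an illustrative package name:

```json
{
  "dependencies": {
    "@example/connect-kit": "1.1.4"
  }
}
```

Combined with a committed lockfile and `npm ci` in build pipelines, this keeps installs on exactly the audited release until a human deliberately upgrades it.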

These are precisely the kinds of failures that a security review focused exclusively on code would not catch—because they’re not in the code.

Auditing audits

That’s why the language of security “audits,” so frequently invoked by blockchain firms, can sometimes be misleading.

A formal financial audit is not just a matter of making sure all of a firm’s money is where it’s supposed to be at one particular moment. Rather, an accounting audit is a complete, end-to-end review of a firm’s overall money-handling practices. A CPA performing a financial audit doesn’t just look at bank statements and revenue numbers: They are also required, as laid out by the AICPA, to evaluate “a business’s internal controls, and assess fraud risk.”

But an audit in cybersecurity doesn’t have the same comprehensive, formal meaning as it does in accounting. Many security audits amount mostly to point-in-time code reviews—the equivalent of a financial audit that merely reviewed current bank balances. Code reviews are obviously crucial, but they are only the beginning of real security, not the end.

To truly match the rigor of a financial audit, a cybersecurity review needs to assess a firm’s entire development lifecycle through a formal, structured process that makes sure nothing falls through the cracks. That includes reviewing the various phases of the development lifecycle, including quality assurance, and it means developing a threat analysis that identifies likely risks. It includes internal security reviews, on things like phishing prevention. And it includes a review of change-management processes—particularly relevant in the Ledger case.

If there’s a silver lining here, it’s that this episode doesn’t mean crypto is inherently or fundamentally impossible to secure. It can certainly seem that way, with the constant drumbeat of hacks, vulnerabilities, and collapses. But the problem wasn’t blockchain’s unusual architecture; it was a series of compromises on rigorous, standardized security.

As the crypto industry matures, the companies that invest in meeting those standards will reap the benefits in trust and longevity. The rest will be left behind, stained by avoidable failures.

David Schwed, a foremost expert on digital asset security, is COO of the blockchain security firm Halborn and the former global head of digital asset technology at BNY Mellon. The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
