The Math Is Not the Problem
AES-256, the symmetric encryption standard used to protect everything from military communications to your iPhone backup, would take longer than the current age of the universe to brute-force with all the computing power on Earth. RSA-2048, the asymmetric algorithm behind most of the certificates that secure web connections, rests on the difficulty of factoring large integers, a problem with no known efficient classical algorithm. These are not marketing claims. They are statements about computational complexity that the cryptographic community has stress-tested for decades.
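The scale of that first claim is worth making concrete. Here is a back-of-envelope sketch, assuming a deliberately absurd aggregate guessing rate of 10^18 keys per second across every machine on Earth; the rate is a made-up assumption for illustration.

```python
# Back-of-envelope: expected time to brute-force a single AES-256 key.
# Assumptions (hypothetical, generous to the attacker):
#   - 10**18 key guesses per second, sustained, across all hardware on Earth
#   - success expected after searching half the keyspace on average

keyspace = 2 ** 256                  # number of possible AES-256 keys
guesses_per_second = 10 ** 18        # assumed aggregate guessing rate
seconds_per_year = 60 * 60 * 24 * 365

expected_years = (keyspace / 2) / guesses_per_second / seconds_per_year
age_of_universe_years = 13.8e9       # roughly 13.8 billion years

print(f"Expected brute-force time: {expected_years:.1e} years")
print(f"That is {expected_years / age_of_universe_years:.1e} times the age of the universe")
```

The exact guessing rate barely matters; grant the attacker nine more orders of magnitude and the answer is still astronomically out of reach.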
So when a hospital gets ransomwared, when a breach exposes millions of encrypted passwords, when a government agency finds its communications compromised, the cipher is almost never what broke. Something else did. And the gap between “encryption is unbreakable” and “encrypted data got stolen anyway” is where most of the interesting security failures actually live.
The mistake most people make is imagining an attacker sitting at a keyboard, trying to crack the cipher directly. That attacker doesn’t exist in practice. The real attacker is far less theatrical and far more effective.
The Key Is Not the Cipher
Encryption protects data by transforming it with a key. The mathematical operation itself can be bulletproof. But keys have to live somewhere, and wherever they live, they can be stolen.
The 2014 iCloud celebrity photo breach, widely misreported as a hack of iCloud’s encryption, was actually a credential-stuffing and phishing campaign. Attackers obtained usernames and passwords, not cryptographic keys, but the outcome was identical: access to supposedly protected data. Apple’s encryption did exactly what it was supposed to do. The attack went around it entirely.
This pattern repeats constantly. The 2021 Microsoft Exchange Server vulnerabilities allowed attackers to bypass authentication, not break encryption. Once you’re authenticated to a system, the encryption obligingly decrypts everything for you. You are, from the system’s perspective, the legitimate user.
Key management is an unglamorous discipline. Where does the private key live? Who has access to the key management service? What happens when a key-management employee gets phished? These questions don’t appear in cryptography textbooks, but they determine the real-world security of encrypted systems far more than the choice of cipher.
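To make these questions concrete, here is a minimal, hypothetical sketch of envelope encryption in Python using the pyca/cryptography library: each record gets its own data key, and every data key is wrapped under one master key. Where that master key lives (a KMS, an HSM, a config file, an environment variable) and who can ask for it are exactly the questions above; the MASTER_KEY handling here is an invented stand-in, not any particular product's API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Hypothetical master key. In real deployments it might sit in a KMS, an HSM,
# a config file, or an environment variable; that location, not the cipher,
# is what an attacker goes after.
MASTER_KEY = os.urandom(32)

def encrypt_record(plaintext: bytes):
    """Envelope encryption: a fresh data key per record, wrapped by the master key."""
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)

    # Wrap the data key under the master key so it can be stored next to the
    # ciphertext. Anyone who holds the master key, or the right to call
    # whatever service holds it, can unwrap every data key in the system.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(MASTER_KEY).encrypt(wrap_nonce, data_key, None)
    return ciphertext, nonce, wrapped_key, wrap_nonce

def decrypt_record(ciphertext: bytes, nonce: bytes, wrapped_key: bytes, wrap_nonce: bytes) -> bytes:
    data_key = AESGCM(MASTER_KEY).decrypt(wrap_nonce, wrapped_key, None)
    return AESGCM(data_key).decrypt(nonce, ciphertext, None)

ct, n, wk, wn = encrypt_record(b"patient record #1184")
print(decrypt_record(ct, n, wk, wn))
```

Nothing in this sketch ever attacks AES. Phish an administrator who can call decrypt_record, or read the master key from wherever it is stored, and the cipher's strength never comes into play.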
Endpoints Are Where Encryption Goes to Die
End-to-end encryption, the kind Signal uses and WhatsApp claims to use, protects data in transit. The message travels from device to device in encrypted form, unreadable to anyone intercepting it in the middle. This is genuinely valuable.
But the encryption ends at the endpoint. And endpoints are compromised regularly.
Pegasus, the surveillance software developed by NSO Group, didn’t crack Signal’s encryption. It didn’t need to. It compromised the phone itself, reading messages after they were decrypted and displayed on screen. The encryption was intact. The device wasn’t. From a cryptographic standpoint, the system worked perfectly. From a security standpoint, it failed completely.
This is not a flaw in encryption. It’s a definitional boundary. Encryption protects data in transit and at rest. It makes no promises about the security of the environment where data gets used. When people say their data is “encrypted,” they usually mean it’s encrypted somewhere in the pipeline, not that it’s inaccessible to anyone who compromises the right machine.
The practical implication is that endpoint hardening (restricting process privileges, patching aggressively, monitoring for anomalous behavior) matters at least as much as the cryptographic primitives underneath.
The Human Layer Doesn’t Have a Patch Tuesday
Verizon’s annual Data Breach Investigations Report has, for many years running, found that a significant portion of breaches involve phishing or credential theft. The specific numbers shift year to year, but the directional finding is stable: attackers prefer targeting people over algorithms, because people are easier.
This isn’t because users are uniquely foolish. It’s because social engineering exploits cognitive shortcuts that are features in most contexts and bugs in security contexts. A help-desk employee who resets a password quickly is doing their job well. An attacker who knows this will call the help desk, claim to be a locked-out executive, and get the reset done before the employee thinks twice.
The RSA SecurID breach in 2011 started with a phishing email containing a spreadsheet titled “2011 Recruitment Plan.” An employee opened it. The malware that followed eventually compromised the seeds used to generate SecurID tokens, undermining a two-factor authentication product used by defense contractors and financial institutions worldwide. The underlying cryptography of the tokens was not the attack vector. A single employee’s click was.
This is the attacker the encryption discourse ignores. They’re not solving differential equations. They’re sending emails.
Implementation Bugs Are More Common Than Weak Ciphers
Even when the cryptographic algorithm is sound and the keys are well-managed and the endpoints are hardened, the implementation can undermine everything. The history of deployed cryptography is littered with examples.
Heartbleed, disclosed in 2014, was a buffer over-read bug in OpenSSL’s implementation of the TLS heartbeat extension. It had nothing to do with the strength of TLS’s underlying cryptography. It allowed attackers to read memory from servers, including private keys. The cipher suite was fine. The code was not.
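To see the bug class rather than the specific CVE, here is a deliberately toy Python model of the pattern: a service echoes back a payload whose length comes from the request itself, never checked against what was actually sent. The buffer contents and secrets are invented; this is an illustration of the mistake, not OpenSSL's code.

```python
# Toy model of the Heartbleed pattern: trusting an attacker-supplied length.
# "Adjacent memory" here is just other data sharing the same buffer.

process_memory = bytearray(
    b"HEARTBEAT:hello" + b"\x00" * 5 +
    b"session=9f2c11a7; private_key=MIIEvQIBADANBg..."   # secrets nearby
)

def handle_heartbeat(claimed_length: int, payload_offset: int = 10) -> bytes:
    # BUG: the response length is taken from the request rather than from
    # the actual payload, so the slice runs past the real payload into
    # whatever else happens to sit in memory.
    return bytes(process_memory[payload_offset:payload_offset + claimed_length])

print(handle_heartbeat(5))    # honest request: b'hello'
print(handle_heartbeat(64))   # malicious request: b'hello\x00...private_key=...'
```

The fix is a single bounds check. The lesson is broader: the attack never touches the cipher suite, only the code around it.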
The KRACK attack in 2017 exploited a flaw in WPA2's four-way handshake: by replaying handshake messages, an attacker could force a device to reinstall an already-used key, resetting its nonces and making traffic replayable and, in many configurations, decryptable. Again, the cryptographic primitives were sound. The protocol's use of them was not.
Writing correct cryptographic code is genuinely hard. Timing side channels, padding-oracle vulnerabilities, and nonce reuse in AES-GCM (which catastrophically breaks both confidentiality and message authenticity) are all ways that correct algorithms become insecure systems. This is why the standard advice is to use high-level cryptographic libraries rather than rolling your own, and why even that advice requires trusting that the library was implemented correctly.
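Nonce reuse makes the point vividly because the failure requires no attack on AES at all. The sketch below, using the pyca/cryptography library, incorrectly encrypts two equal-length messages under the same key and nonce; since GCM is counter mode underneath, XORing the ciphertexts cancels the keystream and exposes the XOR of the plaintexts. The messages and the mistake are contrived for illustration.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                    # fine once; catastrophic if reused

m1 = b"transfer $100 to account 4471"
m2 = b"transfer $999 to account 8052"     # same length as m1

# BUG: the same (key, nonce) pair is used for both messages.
aes = AESGCM(key)
c1 = aes.encrypt(nonce, m1, None)
c2 = aes.encrypt(nonce, m2, None)

# Drop the 16-byte GCM tag; what remains is CTR-mode output, i.e.
# plaintext XOR keystream. Same nonce means the same keystream twice.
body1, body2 = c1[:-16], c2[:-16]
xor_of_plaintexts = bytes(a ^ b for a, b in zip(body1, body2))
assert xor_of_plaintexts == bytes(a ^ b for a, b in zip(m1, m2))

# Knowing or guessing either message now reveals the other.
print(bytes(x ^ a for x, a in zip(xor_of_plaintexts, m1)))   # b'transfer $999 to account 8052'
```

AES did exactly what it was asked to do. The system around it asked wrong.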
The Metadata Escapes
Even when encryption works perfectly, it typically doesn’t hide everything. Metadata, the information about communications rather than their content, often leaks significant intelligence.
Encrypted traffic still reveals source and destination IP addresses, packet timing, and volume. Website-fingerprinting research, including traffic-analysis studies of HTTPS and the Tor anonymity network, has repeatedly shown that an observer can identify which encrypted pages a user is visiting with high accuracy, purely from packet sizes and timing. The content is encrypted. The pattern is not.
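A toy sketch shows how little the eavesdropper needs. The page names and packet sizes below are invented; real fingerprinting systems use far richer features and classifiers, but the principle (match observed size-and-timing patterns against known ones, without decrypting anything) is the same.

```python
# Toy website-fingerprinting sketch: match an observed sequence of encrypted
# packet sizes against previously recorded per-page fingerprints.
# All names and numbers are made up for illustration.

fingerprints = {
    "news-frontpage": [1500, 1500, 1500, 620, 1500, 980, 310],
    "webmail-inbox":  [580, 1500, 440, 440, 1500, 1500, 1500],
    "clinic-booking": [720, 310, 1500, 1500, 260, 260, 880],
}

def distance(observed: list[int], reference: list[int]) -> int:
    # Sum of absolute size differences, padding the shorter trace with zeros.
    length = max(len(observed), len(reference))
    o = observed + [0] * (length - len(observed))
    r = reference + [0] * (length - len(reference))
    return sum(abs(a - b) for a, b in zip(o, r))

def classify(observed: list[int]) -> str:
    # The observer never decrypts a byte: sizes (and, in practice, timing)
    # are enough to pick the closest known page.
    return min(fingerprints, key=lambda page: distance(observed, fingerprints[page]))

captured = [710, 305, 1500, 1480, 270, 255, 900]   # sniffed encrypted traffic
print(classify(captured))                           # "clinic-booking"
```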
The NSA’s metadata collection programs, revealed through the Snowden disclosures, were predicated on exactly this point. Call records showing who called whom, when, and for how long can be more revealing than call content. Knowing that a journalist called a specific government office twice before a story ran, then called a lawyer afterward, tells a story even without a transcript.
This doesn’t mean encryption is useless. It means encryption is not a complete privacy solution, and treating it as one creates a false sense of protection.
What This Means
Modern encryption algorithms are, for practical purposes, unbreakable by direct attack. That claim is accurate and largely irrelevant to most real-world breaches.
The actual threat model looks like this: attackers target credentials and authentication systems; they compromise endpoints where data gets decrypted; they exploit implementation bugs in cryptographic code; they manipulate the people who manage keys; and they extract intelligence from metadata even when content is protected.
Security decisions should be made against the likely attacker, not the theoretical one. The likely attacker is not factoring large numbers. They're sending a believable phishing email, buying stolen credentials from an access broker, or exploiting an unpatched server. None of these attacks are stopped by a stronger cipher.
The useful question isn’t “is my data encrypted?” It’s “who can decrypt it, under what circumstances, and what would it take for an attacker to become one of those people?” That question surfaces the actual vulnerabilities: the shared credentials, the over-privileged service accounts, the key stored in the environment variable, the employee who clicks links.
Encryption is load-bearing infrastructure. It’s doing real work. But it operates inside a system, and systems fail in ways that bypass their strongest components entirely.