
Cryptographic Failures (OWASP A02): Examples & Fixes

April 25, 2026 · 22 min read · SecureCodingHub Team

Cryptography is the part of an application's security model that almost every developer ships and almost no developer enjoys debugging. OWASP Cryptographic Failures — A02 in the current Top 10, formerly called Sensitive Data Exposure — covers the entire surface where data confidentiality and integrity depend on cryptographic primitives that turn out to be the wrong primitives, configured the wrong way, with keys managed the wrong way, or implemented by code that handrolled what a vetted library would have handled correctly. The 2021 rename from "Sensitive Data Exposure" was deliberate: the old name described the impact (data was exposed), the new name describes the cause (cryptography was chosen, configured, or implemented wrong). This guide walks through the three layers where crypto failures hide, the algorithms developers reach for that should not exist in 2026 code, the TLS and AEAD patterns that prevent the most common findings, the key-management discipline that determines whether any of it actually protects data, and the implementation pitfalls that trip up otherwise careful teams. For the broader category context, see the OWASP Top 10 2025 changes overview.

Why A02 Renamed From "Sensitive Data Exposure" to "Cryptographic Failures"

The 2021 OWASP revision moved this category from A03 to A02 and renamed it from Sensitive Data Exposure to Cryptographic Failures. The rename was not cosmetic. The old name described what an attacker observed at the end of the incident — credit card numbers, health records, session tokens visible in plaintext. The new name describes the engineering failure that produced the outcome. Data was exposed because cryptography was missing, was the wrong choice for the threat model, was correctly chosen but configured insecurely, or was correctly configured but implemented around a key-management failure that made the encryption irrelevant. The 2025 list keeps A02 as Cryptographic Failures, and the framing has stuck because it directs developer attention to the cause rather than the symptom.

The rename matters operationally because the remediation pathway depends on which layer failed. A team that ships unencrypted database fields and a team that ships AES-256-GCM with a hardcoded key both produce the same headline ("customer data exposed"), but the engineering work to fix each is different. The first needs encryption added; the second needs the encryption it already has to actually protect anything, which is a key-management problem masquerading as a crypto problem. Renaming the category to reflect the cause makes those two situations visibly different in the bug tracker, the threat model, and the remediation plan.

The 2026 reality is that almost no production application is missing encryption entirely. Browsers refuse plaintext HTTP, cloud providers default storage to encrypted-at-rest, managed databases encrypt by default, and most languages ship secure defaults in their standard libraries. The findings that produce real incidents are not "encryption was missing" but "encryption was present and did not protect what the team thought it protected" — because the cipher was broken, the protocol version was deprecated, the key was in source control, the IV was reused across messages, or the integrity check that should have accompanied the confidentiality check was absent. Cryptographic Failures captures this whole class.

The Three Layers Where Cryptographic Failures Hide

Crypto failures cluster on three layers, and recognizing which layer a particular failure lives on is the first step toward fixing it. Most production failures live on layers 2 and 3.

Layer 1: algorithm choice. The application reaches for a primitive that should not exist in 2026 code — MD5 for any non-checksum purpose, SHA-1 for any signing context, DES or 3DES for symmetric encryption, RC4 for any context, RSA-1024 for any new key, ECB mode for any block cipher use. The fix is to pick a modern primitive: SHA-256 or SHA-3 for general hashing, Argon2id or bcrypt for password hashing, AES-256-GCM or ChaCha20-Poly1305 for symmetric encryption, RSA-3072+ or Ed25519 / X25519 for asymmetric. Layer 1 failures are the easiest to detect (SAST tools catch most of them) and the easiest to fix (the replacement is almost always a one-import change).

Layer 2: implementation. The algorithm is correct, but the way the application uses it is broken. AES-CBC without an HMAC (no integrity protection — vulnerable to padding-oracle attacks). AES-GCM with a reused IV (catastrophic — a single IV reuse with the same key leaks the XOR of the two plaintexts and the authentication key). Math.random() used for token generation (predictable). PKCS#1 v1.5 RSA padding without constant-time error handling (Bleichenbacher attack). Constant-time comparison missing on a MAC verification (timing oracle). The algorithm in each case is fine; the surrounding code makes it useless. Layer 2 is where most A02 findings actually live in 2026 audits.

Layer 3: key management. The algorithm is modern, the implementation is correct, and the key is in config.py committed to GitHub. Or in an .env file checked into the repository three years ago and now in dozens of forks. Or in a Kubernetes secret printed in a CI log. Or in a KMS but with an IAM policy that grants decrypt access to every role in the account. Or never rotated. Or generated by a developer's os.urandom(16) and stored alongside the ciphertext it protects. Cryptography is exactly as strong as the weakest control over its keys, and key management is where mature programs invest the most engineering effort because layer 1 is solved-by-default and layer 2 is solved-by-library.

The diagnostic discipline at code review and pentest time is to ask which layer a finding lives on. A SHA-1 password hash is layer 1. An AES-CBC implementation without integrity is layer 2. A correctly used AES-GCM whose key is in source control is layer 3. The remediation each requires is different, and conflating them produces incident postmortems that fix the wrong thing.

Weak and Broken Algorithms

The list of cryptographic primitives that should not appear in 2026 code is not long, but every entry on it still ships in production code that reaches modern audits. Each algorithm has a story about why developers reach for it and a clear modern replacement.

MD5. Cryptographically broken since 2004, with practical collision attacks demonstrated repeatedly through the 2010s. MD5 is acceptable as a non-cryptographic checksum (file deduplication, cache keys) but unacceptable for any security purpose. The recurring 2026 finding is MD5 used for password hashing in legacy code that has been migrated through several rewrites without the hashing function being upgraded. Replacement: SHA-256 for general digest, Argon2id or bcrypt for password hashing.

SHA-1. Theoretically broken since 2005, practically broken since the SHAttered attack in 2017. SHA-1 still appears in code because Git uses it (a non-cryptographic context), because legacy TLS configurations supported it, and because many older HMAC implementations defaulted to HMAC-SHA1. The HMAC-SHA1 case is the persistent one — HMAC-SHA1 is still considered safe because HMAC's security depends on the underlying hash's pseudorandomness rather than its collision resistance, but mature programs have moved to HMAC-SHA256 anyway because the algorithm-allowlist conversation is easier when SHA-1 does not appear at all.

DES, 3DES. DES has been broken by exhaustive key search since the 1990s. 3DES (Triple-DES) is theoretically stronger but vulnerable to Sweet32 birthday attacks against its 64-bit block size, deprecated by NIST since 2017, and disallowed for any new use after 2023. The recurring 2026 finding is 3DES in financial systems that historically required it for regulatory reasons and never finished the AES migration. Replacement: AES-256-GCM.

RC4. Broken in TLS since 2013 (the AlFardan-Bernstein attacks), prohibited in TLS by RFC 7465. Still appears in code that interacts with legacy systems or in handrolled stream-cipher constructions. Replacement: ChaCha20-Poly1305 for stream-cipher use cases, AES-256-GCM for general symmetric encryption.

ECB mode. Not an algorithm but a mode of operation, and the one that should never be used. ECB encrypts each block independently, so identical plaintext blocks produce identical ciphertext blocks — the famous "ECB penguin" image demonstrates this visually. ECB is the default in many language libraries' encrypt() functions, which is why it ships unintentionally. Replacement: GCM mode (authenticated encryption) for almost all cases; CBC mode with HMAC if AEAD is unavailable for some reason.
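The structure leak is easy to demonstrate without a real cipher. The sketch below uses HMAC as a stand-in for a per-block cipher — the names and construction are purely illustrative — because ECB's weakness is a property of the mode, not of AES itself:

```python
import hashlib
import hmac

def toy_ecb_encrypt(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    """Illustrative only: encrypt each block independently, ECB-style."""
    assert len(plaintext) % block == 0
    out = b""
    for i in range(0, len(plaintext), block):
        # Each output block depends only on the key and that one input block.
        out += hmac.new(key, plaintext[i:i + block], hashlib.sha256).digest()[:block]
    return out

key = b"0" * 32
pt = b"A" * 16 + b"B" * 16 + b"A" * 16      # blocks 1 and 3 are identical
ct = toy_ecb_encrypt(key, pt)
assert ct[0:16] == ct[32:48]                 # ...and so are their ciphertexts
assert ct[0:16] != ct[16:32]
```

Any repeated 16-byte pattern in the plaintext survives as a repeated pattern in the ciphertext, which is exactly what the ECB penguin makes visible.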

# BAD: MD5 for password hashing — fast, collision-broken, GPU-friendly
import hashlib
def store_password(plain):
    return hashlib.md5(plain.encode()).hexdigest()

# GOOD: Argon2id with appropriate parameters for 2026
from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError
ph = PasswordHasher(
    time_cost=3,
    memory_cost=65536,   # 64 MiB
    parallelism=4,
)
def store_password(plain):
    return ph.hash(plain)
def verify_password(stored, plain):
    try:
        return ph.verify(stored, plain)
    except VerifyMismatchError:
        return False  # argon2-cffi raises on mismatch rather than returning False

Why developers still reach for broken algorithms. The pattern is rarely "the developer thought MD5 was secure." It is more often "the developer copied a Stack Overflow snippet from 2009," "the developer needed a fingerprint and reached for the most common-named function," or "the developer is interoperating with a legacy system that requires the old algorithm." Each pattern is addressable: SAST tools flag the algorithm names reliably, secure-by-default libraries make the modern primitive the easiest reach, and interop requirements get a separate threat model that documents the legacy boundary explicitly rather than leaking the weak algorithm into new code.

TLS Configuration Failures

TLS is the single most universally deployed cryptographic protocol on the planet, and the configuration surface is large enough that misconfiguration is a persistent finding category. The defaults shipped by modern web servers and load balancers are usually safe, but every team that touches the configuration has the opportunity to weaken it, and many do.

Protocol version. TLS 1.0 and TLS 1.1 were deprecated by the IETF in RFC 8996 in 2021. TLS 1.2 is the minimum acceptable protocol in 2026; TLS 1.3 is the recommended default. The recurring finding is a load balancer or CDN edge that still terminates TLS 1.0 / 1.1 because a legacy mobile client requires it. The mitigation is either to upgrade the client (almost always possible) or to terminate the legacy protocol on a separate, scoped endpoint that does not share a TLS context with modern traffic.
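On the client side the same floor can be enforced in code. A minimal sketch with Python's standard-library ssl module (recent Python versions already default the minimum to TLS 1.2, but stating it explicitly documents the intent):

```python
import ssl

# Default context: system trust store, hostname checking enabled.
ctx = ssl.create_default_context()

# Refuse TLS 1.0/1.1 explicitly; TLS 1.3 is negotiated when both ends support it.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```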

Cipher suite ordering and weak suites. A TLS 1.2 endpoint that supports modern AEAD ciphers (AES-GCM, ChaCha20-Poly1305) but also supports CBC ciphers, RC4, or export-grade suites is vulnerable to downgrade attacks against the weakest suite the server accepts. The remediation is a strict cipher allowlist — typically the Mozilla "intermediate" or "modern" profile — and ECDHE (forward-secret) key exchange only. Static-RSA key exchange is deprecated; any cipher suite that does not provide forward secrecy should be removed from the allowlist.

Weak DH parameters. Servers using Diffie-Hellman key exchange must use parameters of at least 2048 bits; 1024-bit DH parameters were broken at nation-state scale by the Logjam attack (2015). Most modern servers default to ECDHE with safe curves (P-256, X25519), which avoids the DH parameter question entirely.

Missing HSTS. HTTP Strict Transport Security tells the browser that the site must always be loaded over HTTPS for a specified duration, preventing SSL-stripping attacks where an attacker intercepts the initial plaintext request and downgrades the connection. HSTS is one HTTP response header (Strict-Transport-Security: max-age=31536000; includeSubDomains; preload) and should be present on every production HTTPS endpoint. The HSTS preload list (browsers ship a list of always-HTTPS sites) provides protection on the very first visit.

Mixed content. An HTTPS page that loads scripts, styles, or iframes over HTTP defeats the protection of HTTPS for the resources loaded plaintext. Modern browsers block mixed-content requests by default, but the underlying code paths still exist and are findings until removed. The fix is to load every subresource over HTTPS or via protocol-relative URLs.

Certificate validation disabled. The single most damaging TLS misconfiguration in client code is verify=False in a Python requests call, rejectUnauthorized: false in a Node.js https call, or the equivalent in any other client library. Each disables certificate validation and reduces the connection to plaintext in security terms — an attacker on the network path can present any certificate and the client accepts it. The pattern is usually copy-pasted from a "fix self-signed cert error" Stack Overflow post during local development and never removed. SAST tooling catches the syntactic pattern reliably; the discipline is to never disable validation in code that ships, ever.
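The difference between the two client postures is visible directly in Python's standard-library ssl module — a sketch for illustration, not a suggestion to ever use the first context in shipping code:

```python
import ssl

# BAD: no chain validation, no hostname check — any MITM certificate passes.
# This is the posture that verify=False / rejectUnauthorized: false put you in.
# (_create_unverified_context is an internal helper; shown only to expose the setting.)
unverified = ssl._create_unverified_context()
assert unverified.verify_mode == ssl.CERT_NONE

# GOOD: the default context validates the chain against the system trust
# store and checks that the certificate matches the hostname.
verified = ssl.create_default_context()
assert verified.verify_mode == ssl.CERT_REQUIRED
assert verified.check_hostname
```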

Certificate pinning trade-offs. Pinning the expected certificate (or its public key) in mobile or desktop clients prevents an attacker who compromises a CA from issuing a fraudulent certificate the client will accept. Pinning is appropriate for high-value mobile applications (banking, messaging) and inappropriate for general web applications because rotating the pinned certificate requires a client update. The 2026 default is to pin in mobile-first applications with controlled client distribution and to rely on Certificate Transparency monitoring for everything else. For the broader configuration discipline that surrounds TLS defaults, see the security misconfiguration deep-dive.

Data at Rest: AES-GCM, ChaCha20-Poly1305, and the AEAD Pattern

Data at rest — files in object storage, fields in a database, backups on tape — is the second canonical context for cryptographic failures. The right answer for symmetric encryption in 2026 is authenticated encryption with associated data (AEAD), and the question is which AEAD primitive to use.

Why AEAD is non-negotiable. A confidentiality-only cipher (AES-CBC, AES-CTR without an integrity tag) protects the data from being read but does not protect it from being modified. An attacker who cannot decrypt the ciphertext can still flip bits in it, and the plaintext after decryption will be a corrupted version of the original — sometimes in ways the application cannot detect. The CBC padding-oracle family of attacks (Vaudenay 2002, demonstrated against many real systems) extracts plaintext from a confidentiality-only cipher by observing how the application reacts to deliberately corrupted ciphertexts. AEAD primitives include an integrity tag in the ciphertext, and decryption fails closed if the tag does not validate. This eliminates the entire padding-oracle class by construction.

AES-GCM. The default AEAD for most contexts. AES-256-GCM provides 256-bit security on the confidentiality side and a 128-bit authentication tag. The GCM mode requires a unique nonce (also called IV) per message under the same key — and "unique" means actually unique, not "almost certainly unique with a 96-bit random nonce up to a few billion messages." For high-volume systems, deterministic counter-based nonces or the AES-GCM-SIV variant (RFC 8452, nonce-misuse resistant) are appropriate.
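For the deterministic-nonce option, the sketch below derives a 96-bit GCM nonce from a strictly increasing per-key counter. It assumes the counter is persisted atomically and that no other process generates nonces under the same key:

```python
def nonce_from_counter(counter: int) -> bytes:
    """96-bit GCM nonce from a per-key message counter: unique by construction."""
    if not 0 <= counter < 2**96:
        raise ValueError("counter exhausted for this key; rotate the key")
    return counter.to_bytes(12, "big")

assert nonce_from_counter(0) != nonce_from_counter(1)
assert len(nonce_from_counter(2**40)) == 12
```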

ChaCha20-Poly1305. An AEAD construction that pairs the ChaCha20 stream cipher with the Poly1305 MAC. Faster than AES-GCM on hardware without AES-NI (most mobile and embedded devices), constant-time by construction, and equally secure. The preferred AEAD for mobile and embedded contexts; equivalent to AES-GCM for server-side use on modern x86 hardware.

The "associated data" part. AEAD primitives let the caller bind associated data (additional authenticated data, AAD) to the ciphertext. The AAD is not encrypted but is authenticated — modifying it invalidates the tag the same as modifying the ciphertext. AAD is the right place for context that should be cryptographically bound to the ciphertext: the user ID for whom the data was encrypted, the table and column the field belongs to, the version of the encryption scheme. Binding context as AAD prevents ciphertext-confusion attacks where an attacker copies ciphertext from one user to another and the decryption succeeds because the cipher does not know it shouldn't.

// BAD: AES-CBC with manual padding, no integrity check
const crypto = require('crypto');
function encrypt(key, plain) {
  const iv = crypto.randomBytes(16);
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  return Buffer.concat([iv, cipher.update(plain), cipher.final()]);
}
// Attacker can flip bits in ciphertext; decryption either succeeds with
// corrupted plaintext, or fails with a padding error that becomes an oracle.

// GOOD: AES-256-GCM with bound AAD and unique nonce
function encrypt(key, plain, aad) {
  const iv = crypto.randomBytes(12);                  // 96-bit nonce
  const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
  cipher.setAAD(Buffer.from(aad));
  const ct = Buffer.concat([cipher.update(plain), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([iv, tag, ct]);                // iv || tag || ciphertext
}
// Decryption fails closed if iv, tag, ct, or aad are tampered with.

Library traps to avoid. Many language libraries expose low-level encrypt(plaintext) functions whose default mode is ECB or whose default mode silently changes between library versions. Read the documentation, never accept the default without checking what it is, and prefer high-level libraries (libsodium / NaCl, Tink, Web Crypto API's recommended primitives) that expose only safe constructions and make vulnerable usage hard to express in the first place. The libsodium philosophy — one safe construction per task, no algorithm pickers — is the model to follow when integrating crypto into application code.

Hashing vs Encryption — The Most Common Conceptual Error

The single most common cryptographic conceptual error in code review is conflating hashing and encryption. The two operations have different threat models, different inputs and outputs, and different appropriate use cases. The mix-up produces predictable failure patterns that a few minutes of clear thinking would prevent.

Hashing is one-way: input goes in, a fixed-size digest comes out, and there is no key and no inverse function. The digest is used to verify integrity (does this input match what was hashed before?), to fingerprint content (do these two inputs produce the same hash?), or — with a slow, salted, memory-hard hash — to store passwords (a password verifier without storing the password itself). The right primitives differ by use case: SHA-256 / SHA-3 for general digests, HMAC-SHA256 for keyed integrity, Argon2id / bcrypt / scrypt for password storage. Passwords are hashed, never encrypted. Encrypting passwords means the decryption key exists somewhere on the same servers as the ciphertext, which means an attacker with server access has plaintext passwords.

Encryption is two-way: plaintext goes in, ciphertext comes out, and a key holder can recover the plaintext. Encryption is for confidentiality of data the application later needs to use as plaintext — credit card numbers it needs to charge, document contents it needs to render, fields it needs to display in user-facing surfaces. Encryption is wrong for passwords (because the application never needs the plaintext password — only to verify the user-provided one), wrong for integrity-only purposes (an HMAC is the right primitive), and wrong any time the goal is "I never want to recover this value, only to compare it to something." Hashes are for one-way comparison; encryption is for two-way recovery.

HMAC for integrity, not encryption. A frequent mistake is to encrypt a value purely to "make it tamper-proof" — the application does not actually need confidentiality, only integrity, but reaches for encryption anyway. The correct primitive for integrity-only is HMAC: compute HMAC(key, value), store the HMAC alongside the value, and verify on read. HMAC is faster than encryption, simpler to use correctly, and — for the integrity-only goal — sufficient. Encryption is appropriate when both confidentiality and integrity are required, and an AEAD primitive provides both at once.
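A minimal sketch of the integrity-only pattern in Python — protect and check are illustrative names, and in practice the key would come from a KMS or secret store rather than from source:

```python
import hashlib
import hmac

def protect(key: bytes, value: bytes) -> bytes:
    # Store this tag alongside the (plaintext) value.
    return hmac.new(key, value, hashlib.sha256).digest()

def check(key: bytes, value: bytes, tag: bytes) -> bool:
    # Constant-time comparison: never compare MACs with ==.
    return hmac.compare_digest(hmac.new(key, value, hashlib.sha256).digest(), tag)

key = b"key-from-secret-store"
tag = protect(key, b"plan=pro")
assert check(key, b"plan=pro", tag)
assert not check(key, b"plan=enterprise", tag)   # tampering invalidates the tag
```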

The hashing-vs-encryption rule for password storage in particular has its own deep dive in the OWASP A07 authentication failures guide, where Argon2id parameters, bcrypt cost factors, peppering, and migration patterns are covered in detail. The point for A02 is that a code review that finds encrypt(password) anywhere should treat it as a finding regardless of which encryption is used.

Key Management — Where Most Failures Actually Live

Cryptography is exactly as strong as the weakest control over its keys. A modern AEAD primitive correctly used with a key that is hardcoded in source, committed to a public repository, or stored in a Kubernetes config map readable by every pod is providing zero confidentiality protection. Key management is the layer where mature programs invest the most engineering effort, because layer 1 (algorithm choice) is solved by SAST tooling and layer 2 (implementation) is solved by libraries — but layer 3 (key management) is solved only by the discipline the team builds around it.

Hardcoded keys in source. The recurring 2026 finding. A developer needed an encryption key during local development, generated one, and pasted it into config.py as a default. The default never got removed; production overrides it via environment variable in some deploys but not others; the value in the repository is the production key for at least some environments. SAST tools and secret-detection tools (TruffleHog, Gitleaks, GitHub secret scanning) catch high-entropy strings reliably; the discipline is to run the scanning in CI on every commit and to treat any flagged string as an incident regardless of whether it is "actually" a key.

Keys in .env files committed to git. A close second. The .env pattern is widely promoted as "don't put secrets in code; put them in environment files" without the second half of the sentence — "and never commit the environment file." Once a secret has been committed to a git history, even a force-push and rewrite does not remove it from local clones, GitHub forks, or backup mirrors. The secret must be considered compromised and rotated. A repository-wide history rewrite removes the secret from the canonical history but does not retroactively secure clones taken before the rewrite.

KMS / HSM as the default. Modern programs store keys in a managed key service — AWS KMS, Google Cloud KMS, Azure Key Vault, HashiCorp Vault, or a hardware security module for the highest-value contexts. The application has IAM permission to perform encrypt and decrypt operations against the KMS but never has direct access to the raw key material. A compromise of the application server yields the ability to decrypt data the application could already decrypt, but does not yield the key itself, which limits the blast radius of the compromise. The KMS provides audit logs of every key use, which support detection and forensics.

Envelope encryption. The pattern that scales KMS use to high-volume data encryption. The KMS holds a key-encrypting key (KEK); for each piece of data, the application generates a random data-encrypting key (DEK), encrypts the data with the DEK, encrypts the DEK with the KEK via KMS, stores the encrypted DEK alongside the ciphertext, and discards the plaintext DEK. Decryption asks the KMS to decrypt the DEK, then uses the DEK locally to decrypt the data. The KMS sees only DEK encrypt/decrypt requests (small, fast operations); the bulk encryption of data is done locally with no KMS round-trip per byte. Envelope encryption is the standard pattern for any system that encrypts more than a few hundred kilobytes per request.

# BAD: hardcoded key in source, no rotation, no audit
SECRET_KEY = b'this_is_my_super_secret_key_32by'  # don't do this
def encrypt_field(plain):
    return aes_gcm_encrypt(SECRET_KEY, plain)

# GOOD: KMS-derived data key with envelope encryption
import boto3
kms = boto3.client('kms')

def encrypt_field(plain, context):
    # Generate a fresh data key; KMS returns plaintext + ciphertext copies.
    resp = kms.generate_data_key(
        KeyId='alias/app-data-key',
        KeySpec='AES_256',
        EncryptionContext=context,   # bound to KMS audit log
    )
    dek_plain = resp['Plaintext']
    dek_cipher = resp['CiphertextBlob']
    ct = aes_gcm_encrypt(dek_plain, plain, aad=context)
    # Discard plaintext DEK; store only ciphertext DEK + ciphertext.
    return {'dek': dek_cipher, 'ct': ct}

Key rotation. Keys must rotate. A key in continuous use for years has had time to leak through configuration mistakes, log captures, insider exposure, or compromised backup tapes. KMS-managed keys typically support automatic rotation (annual is the AWS KMS default); application code that uses the key must not assume the same key material across calls. For envelope encryption this is essentially free — each piece of data is encrypted with a fresh DEK, and the KEK rotation only affects DEKs going forward. For directly-used keys, rotation requires re-encrypting existing data under the new key, which is expensive but necessary.
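One pattern that makes directly-used-key rotation tractable is prefixing every ciphertext with the id of the key generation that produced it, so old data stays decryptable while new writes use the new key. A toy sketch with illustrative names:

```python
# Illustrative key registry; real key material lives in a KMS, not a dict.
KEYS = {
    1: b"old-key-material-000000000000000",
    2: b"new-key-material-000000000000000",
}
CURRENT_KEY_ID = 2

def wrap(ciphertext: bytes) -> bytes:
    # 4-byte key-id prefix tells decryption which key generation to fetch.
    return CURRENT_KEY_ID.to_bytes(4, "big") + ciphertext

def unwrap(blob: bytes) -> tuple[bytes, bytes]:
    key_id = int.from_bytes(blob[:4], "big")
    return KEYS[key_id], blob[4:]
```

Background re-encryption then walks the old key-ids at leisure, and a key generation can be retired once no stored blob references it.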

BYOK and customer-managed keys. Bring-Your-Own-Key (BYOK) and customer-managed encryption keys (CMEK) let enterprise customers control the keys their data is encrypted with — the SaaS provider's KMS uses keys imported by the customer or held in the customer's KMS. BYOK is a compliance and trust feature for high-value B2B contexts; the engineering discipline is to design the encryption layer for key-per-tenant operation from the start rather than retrofitting it later.

IAM scope on KMS. A KMS does not protect keys if every IAM principal in the account has kms:Decrypt on every key. The principle of least privilege applies to key access the same as any other resource — only the application services that need to encrypt or decrypt for a particular workload should have permission to do so, and the audit log should be reviewed for unexpected access patterns. A common misconfiguration is granting kms:* at the account root role and never narrowing it.

Cryptographic Implementation Pitfalls

Even with modern algorithms and managed keys, implementation details produce vulnerabilities. The list of pitfalls below is non-exhaustive but covers the recurring findings in 2026 audits.

Handrolled crypto. The single highest-impact discipline for individual application teams is "don't write crypto from primitives." Writing AES from the S-box up, implementing RSA from BigInt operations, building HMAC from raw hash calls — every example of handrolled crypto in production application code is a finding waiting to happen. The correct level of abstraction for application code is a high-level library that exposes named operations (encrypt, sign, verify, hash) and hides the primitive choice. Libsodium, Google Tink, and the Web Crypto API are the canonical examples; using them rather than the language's low-level crypto module eliminates entire categories of implementation bugs by construction.

Padding oracle attacks. The classic CBC mode vulnerability. The decryption library distinguishes between "the ciphertext decrypted to invalid padding" and "the ciphertext decrypted to invalid plaintext content," and an attacker who can observe the difference (through different error messages, different response times, or different HTTP status codes) can recover the plaintext byte-by-byte. The mitigation is AEAD — GCM mode does not have a padding step and the integrity tag fails uniformly regardless of where the corruption is — and uniform error handling: the application returns the same error, with the same timing, regardless of which decryption failure occurred.

Timing attacks. A comparison operation that returns early on the first mismatched byte leaks the position of the mismatch through the time taken — and a sufficiently patient attacker (or one with sufficiently many requests) can use the timing signal to recover a secret byte-by-byte. The mitigation is constant-time comparison: hmac.compare_digest() in Python, crypto.timingSafeEqual() in Node.js, the equivalent in every modern crypto library. Any comparison of MACs, signatures, OTPs, or any other secret-equivalent value must use the constant-time function, never ==.
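In Python the fix is one function call; the function names below are illustrative:

```python
import hmac

def verify_token_bad(expected: str, supplied: str) -> bool:
    # BAD: == short-circuits at the first mismatching byte, so response
    # time leaks how much of the secret the attacker has guessed so far.
    return expected == supplied

def verify_token_good(expected: str, supplied: str) -> bool:
    # GOOD: runtime is independent of where (or whether) the inputs differ.
    return hmac.compare_digest(expected, supplied)
```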

IV / nonce reuse. AES-GCM with a reused IV under the same key is catastrophic — the XOR of the two ciphertexts equals the XOR of the two plaintexts, and the authentication key is recoverable from any two ciphertexts that share an IV. The 96-bit random IV is statistically unique up to a few billion messages per key, after which the birthday bound makes collision likely. For high-volume systems, deterministic IV generation (a per-message counter) or AES-GCM-SIV (RFC 8452, nonce-misuse resistant) eliminates the risk. The discipline is to never share a key across processes that generate IVs independently — each process runs the risk of generating the same random IV.
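The keystream-reuse failure can be shown with a toy stream cipher — SHA-256 of key||nonce used as the keystream, illustrative only; GCM's CTR core fails the same way when a (key, nonce) pair repeats:

```python
import hashlib

def toy_stream_encrypt(key: bytes, nonce: bytes, pt: bytes) -> bytes:
    keystream = hashlib.sha256(key + nonce).digest()[: len(pt)]
    return bytes(a ^ b for a, b in zip(pt, keystream))

key, nonce = b"k" * 32, b"n" * 12            # the bug: nonce reused under one key
p1, p2 = b"attack at dawn!", b"retreat at dusk"
c1 = toy_stream_encrypt(key, nonce, p1)
c2 = toy_stream_encrypt(key, nonce, p2)

# The key cancels out: XOR of the ciphertexts equals XOR of the plaintexts,
# and classic two-time-pad analysis recovers both messages from there.
xor_ct = bytes(a ^ b for a, b in zip(c1, c2))
xor_pt = bytes(a ^ b for a, b in zip(p1, p2))
assert xor_ct == xor_pt
```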

Predictable randomness. The Math.random / random.random pattern. Most language standard libraries expose two random functions: a fast pseudo-random one for non-security purposes (game logic, sampling, jitter) and a cryptographically secure one for security purposes (token generation, key derivation, nonce generation). The fast PRNG is not suitable for security; its output is predictable to an attacker who observes a few values. Use crypto.randomBytes() in Node.js, secrets.token_bytes() in Python, java.security.SecureRandom in Java, System.Security.Cryptography.RandomNumberGenerator in .NET. The discipline is to make the secure function the obvious reach — wrap it in a helper with a name that tells callers it is for secrets.

// BAD: Math.random for token generation — predictable, not crypto-secure
function generateResetToken() {
  return Math.random().toString(36).substring(2, 18);
}

// GOOD: crypto.randomBytes for any security-relevant token
const crypto = require('crypto');
function generateResetToken() {
  return crypto.randomBytes(32).toString('base64url');  // 256-bit random
}

Insufficient entropy at boot. Embedded systems and freshly booted virtual machines may have insufficient entropy in the OS random pool when the first cryptographic operations run. Keys generated in this state may be predictable across the fleet — the 2012 Lenstra-Heninger study found tens of thousands of TLS certificates sharing prime factors due to this exact pattern. Modern systems handle this correctly (Linux's getrandom() blocks until the entropy pool is initialized), but custom embedded firmware or constrained devices may not. The mitigation is to use the OS-provided blocking random source and to delay key generation until sufficient entropy is available.

Bleichenbacher and PKCS#1 v1.5. RSA encryption with PKCS#1 v1.5 padding has a structural vulnerability — the padding-validation step leaks information about the plaintext through error responses, and a sufficiently patient attacker can decrypt arbitrary ciphertexts. The mitigation is to use OAEP padding for RSA encryption (or ideally to not use RSA encryption at all — use ECDH key agreement + AEAD for session establishment) and to handle padding-validation errors with constant-time, uniform responses.

Detection: SAST, Dependency Scanning, Secrets Detection

Detection of cryptographic failures runs across multiple complementary tools. Each catches a different slice of the category; layered together, they produce strong coverage.

SAST. Source-code scanners catch the layer-1 algorithm-choice failures reliably — MD5 calls, SHA-1 calls, DES/3DES uses, RC4 references, ECB mode strings, verify=False patterns, hardcoded high-entropy strings. Modern SAST tools (CodeQL, Semgrep, Snyk Code, SonarQube, Bandit) ship cryptography-specific rule packs that flag the syntactic patterns. SAST also catches some layer-2 implementation issues — Math.random for security purposes, missing constant-time comparison, padding-mode flags — though the coverage on layer 2 is less complete than on layer 1.

Dependency scanning. A surprisingly large fraction of cryptographic vulnerabilities come from outdated cryptography libraries — OpenSSL versions with known CVEs, BouncyCastle versions with patched padding-oracle bugs, JWT libraries that accepted alg:none. Software composition analysis (SCA) tools — Snyk, Dependabot, Renovate, GitHub Dependency Graph — flag the outdated versions and surface the patched releases. The discipline is to keep dependencies current, especially the cryptography ones.

Secrets detection. Tools that scan repositories, CI logs, and runtime environments for committed secrets — TruffleHog, Gitleaks, GitHub secret scanning, GitGuardian. The detection catches the layer-3 finding (key in source) at the point of commit, before the secret reaches the repository's history. Repository-level secret scanning runs continuously and alerts on any new high-entropy string or known-format token (AWS keys, Stripe keys, JWT bearer tokens, etc.). Mature programs run secret scanning in CI as a blocking check on every pull request.
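
The entropy heuristic these scanners use alongside known-format regexes can be sketched in a few lines. This is a simplified illustration — the function names and the 4.5 bits-per-character threshold are illustrative (real tools tune thresholds per string class and combine them with format detection):

```javascript
// Shannon entropy in bits per character — a common heuristic for
// spotting candidate secrets among string literals.
function shannonEntropy(s) {
  const counts = new Map();
  for (const ch of s) counts.set(ch, (counts.get(ch) || 0) + 1);
  let h = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    h -= p * Math.log2(p);
  }
  return h;
}

// English identifiers score low; base64/hex key material scores high.
// A scanner might flag long literals above ~4.5 bits/char for review.
function looksLikeSecret(literal) {
  return literal.length >= 20 && shannonEntropy(literal) > 4.5;
}
```

Entropy alone produces false positives (hashes, UUIDs, minified strings), which is why mature scanners pair it with known-token patterns and allowlists.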

DAST and TLS scanners. External scanners that test the deployed TLS configuration — Qualys SSL Labs, testssl.sh, ssllabs-scan — produce comprehensive reports on protocol versions, cipher suites, certificate validity, HSTS configuration, and known-attack vulnerabilities (Heartbleed, ROBOT, Lucky Thirteen, etc.). The scanner provides a grade and a list of weaknesses; the remediation is usually one configuration change per finding. The scan should run as part of CI/CD on every staging deployment and on production after every infrastructure change.

IAST and runtime monitoring. Instrumentation that observes cryptographic library calls at runtime catches some patterns SAST misses — ciphertexts that fail to authenticate (possible attack indicator), keys that change unexpectedly (possible misconfiguration), randomness that comes from the wrong source (a security-relevant code path that called the fast PRNG). For more on the SAST/DAST/IAST coverage trade-offs and where each fits in a development pipeline, see the IAST vs DAST vs SAST comparison.

Code review with a crypto lens. Every pull request that touches code calling a cryptographic primitive — encrypt, decrypt, sign, hash, random, TLS configuration, JWT handling — gets explicit review for the algorithm choice, the implementation pattern, and the key handling. Secure code review with a crypto-specific checklist consistently catches what tooling misses, particularly the layer-2 and layer-3 patterns that depend on context the tools cannot infer from source alone.

· OWASP A02 · DEVELOPER ENABLEMENT ·

Crypto Failures Hide in Code Review. Train Developers to See Them.

A SAST tool that flags MD5 in CI is better than discovering it in a pentest report — but most cryptographic findings live below the syntactic layer SAST sees well. AES-CBC without integrity, IV reuse in a high-volume service, a KMS key with overly broad IAM, an HMAC verified with == instead of constant-time comparison, encryption used where hashing was the right primitive — each is a pattern a developer should recognize at code-review time. SecureCodingHub builds the crypto-aware fluency that turns A02 from a recurring scanner finding into something developers catch themselves at authoring time. If your team is tired of every pentest producing another weak-cipher or hardcoded-key report, we'd be glad to show you how our program changes the input side of that pipeline.

See the Platform

Closing: Use Modern Primitives From a Modern Library

The cryptographic failures category persists on the OWASP Top 10 not because the cryptography is unsolved — modern primitives are well-specified, mature libraries implement them correctly, and managed key services handle the operational layer that used to require dedicated infrastructure teams. The category persists because the implementation surface keeps expanding (new languages, new platforms, new cloud services), the threat landscape keeps shifting (post-quantum considerations, new side-channel classes), and the gap between "the right answer is well-known" and "the right answer is consistently applied" remains wide.

The teams that have largely closed their A02 surface in 2026 share a small set of practices. They use a single high-level cryptography library across the codebase rather than reaching for low-level primitives — libsodium, Tink, or the language's recommended high-level wrapper. They hash passwords with Argon2id and accept no other answer. They encrypt with AEAD and never handroll a confidentiality-only mode. They store keys in a managed KMS and use envelope encryption for any non-trivial data volume. They run SAST, dependency scanning, and secret detection on every pull request. They review TLS configuration externally with each deployment. They treat verify=False and encrypt(password) as findings regardless of context. They maintain a single document that says which primitive is used for which task, and they review it annually.

None of those practices is novel; all of them require sustained engineering investment. The category will keep producing incidents in organizations that have not made the investment, and will produce vanishingly few in the ones that have. The discipline that distinguishes the two is not knowledge of cryptography — it is the institutional commitment to apply known answers consistently at every place the code touches a key, a primitive, or a protocol. That fluency is what secure-coding training is for, and it is the difference between a program that detects MD5 in CI and a program that no longer ships cryptographic failures in the first place.