DAST Explained: Dynamic Application Security Testing Pillar Guide
TL;DR
Dynamic Application Security Testing (DAST) tests a running application from the outside, the way a remote attacker would. It catches what static analysis cannot — server misconfigurations, missing security headers, weak TLS, runtime authentication flaws, and deployment-specific issues — and it misses what static analysis catches well, including logic in dead code paths, second-order injection, blind vulnerabilities, and code paths the crawler never reaches. The DAST landscape is decades old and includes both open-source scanners (OWASP ZAP, Burp Suite Community) and commercial platforms (Burp Suite Professional and Enterprise, Invicti, Acunetix, Veracode DAST, Checkmarx One DAST, StackHawk, Detectify, Probely, Rapid7 InsightAppSec). Mature application security programs run DAST alongside SAST and SCA so each layer catches what the other two cannot.
Dynamic Application Security Testing has been a recognized category for nearly two decades. The first generation of automated web application vulnerability scanners appeared in the early 2000s, evolved into the modern DAST market through the 2010s, and now coexists with newer categories like IAST and ASPM. Despite that history, the boundary between automated DAST and human-driven penetration testing remains a recurring source of confusion for newcomers, and the boundary between DAST and SAST is where most program design conversations actually live.
This guide is the pillar reference for the DAST category. It walks through what DAST actually means, how the underlying scanners work, the vulnerability classes they catch and miss, the major tools in the market, and how DAST fits alongside source-level analysis and software composition analysis in a defense-in-depth program. The goal is education first: by the end you should be able to evaluate any DAST tool on its merits, scope a deployment in your own pipeline, and articulate to a non-security stakeholder why DAST alone is never sufficient and why an AppSec program without DAST has predictable blind spots.
What DAST Actually Means
If we had to define DAST in a single sentence: DAST (Dynamic Application Security Testing) is automated black-box probing of a running application to find vulnerabilities visible to an external attacker. Every other piece of nuance below — crawl, attack, validation, the categories DAST catches and misses, the tool landscape — flows from that one definition.
Dynamic Application Security Testing is the practice of probing a running application for security vulnerabilities by interacting with it the way an external attacker would. The scanner has no access to source code, no knowledge of internal architecture, and no awareness of business intent. It speaks the application's external protocol — almost always HTTP for web applications, sometimes including WebSocket, GraphQL, gRPC, or REST API conventions — and it infers the existence of vulnerabilities from the responses the application returns. The "dynamic" half of the name distinguishes it from static analysis, which inspects code without executing it. The "application" half distinguishes it from network vulnerability scanners that probe ports and services rather than application logic.
The category originated as a class of automated web application vulnerability scanners in the late 1990s and early 2000s, when web applications became the dominant attack surface and manual penetration testing alone could not scale to the volume of new applications shipping. Tools like Sanctum AppScan (later acquired by IBM and rebranded as IBM Security AppScan, then Rational AppScan, then HCL AppScan after the HCL acquisition), SPI Dynamics WebInspect (acquired by HP, now Micro Focus / OpenText Fortify WebInspect), and Acunetix established the early commercial DAST market. The OWASP Zed Attack Proxy (OWASP ZAP) and PortSwigger's Burp Suite established the open-source and prosumer ends of the market and remain dominant today. Over the past decade the category has expanded to include API-first scanners (StackHawk, 42Crunch), continuous external attack surface scanners (Detectify), and SaaS-delivered platforms (Invicti, Probely, Rapid7 InsightAppSec).
The core mechanic is consistent across vendors. The scanner sends an HTTP request constructed to test for a specific vulnerability class — a SQL fragment in a query parameter, a script tag in a form field, a path traversal sequence in a file argument, an XML payload designed to trigger an external entity expansion. It then analyzes the HTTP response, looking for evidence that the payload was processed in an unsafe way: a database error message, the script reflected back into the rendered HTML, the contents of a system file in the response body, or a verifiable side-effect such as an outbound DNS lookup to a scanner-controlled domain. The list of payloads, the response patterns, and the heuristics for distinguishing real findings from background noise are what differentiate one scanner from another.
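The response-analysis half of that loop can be sketched as a signature match. The snippet below is an illustration, not any vendor's detection logic: the four error patterns are a tiny placeholder for the much larger, database- and framework-specific signature libraries real scanners ship.

```python
import re

# Illustrative signatures only -- real scanners maintain far larger,
# vendor-tuned pattern libraries per database engine and framework.
SQL_ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",                # MySQL
    r"unclosed quotation mark after the character string",  # SQL Server
    r"pg::syntaxerror",                                     # PostgreSQL via Rails
    r"ora-\d{5}",                                           # Oracle error codes
]

def looks_like_sql_error(response_body: str) -> bool:
    """Return True if the body contains a known database error signature,
    suggesting the injected payload reached the query layer unsanitized."""
    body = response_body.lower()
    return any(re.search(sig, body) for sig in SQL_ERROR_SIGNATURES)
```

A raw match like this is only a candidate finding; as described below, validation against a baseline response is what turns it into something reportable.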
How DAST Tools Work
A typical DAST scan progresses through three phases: crawl, attack, and validation. Each phase has its own tradeoffs and its own contribution to false positive and false negative rates.
Crawl phase. The scanner begins with one or more seed URLs and follows every link, form, and reachable endpoint to enumerate the application's attack surface. Modern crawlers handle JavaScript-heavy single-page applications by running a headless browser (typically a headless Chromium) and observing the DOM as the application loads, rather than parsing static HTML alone. Where a sitemap or OpenAPI specification is available, the scanner can be seeded with it directly to avoid relying on inference. Authenticated scanning is configured here: the scanner is given credentials, a recorded login sequence, or session tokens so it can continue past the login wall. The completeness of the crawl is the upper bound on the completeness of the scan; anything the crawler does not discover, the attack phase cannot test.
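The static-HTML half of a crawl is straightforward to sketch with the standard library. This is a minimal illustration of link and form enumeration only; a production crawler layers a headless browser on top of this to observe JavaScript-driven navigation, which no HTML parser can see.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href/action/src targets from one page: the static-HTML
    portion of crawl-phase attack-surface enumeration."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.found = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for attr in ("href", "action", "src"):
            if attrs.get(attr):
                # Resolve relative targets against the page URL.
                self.found.add(urljoin(self.base_url, attrs[attr]))

def extract_links(base_url, html):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.found
```

A crawler repeats this over every fetched page, queuing newly discovered in-scope URLs until the frontier is exhausted.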
Attack phase. For every parameter on every discovered endpoint — query string parameters, form fields, JSON body fields, headers, cookies, path components — the scanner iterates over a payload library designed to trigger specific vulnerability classes. A serious modern scanner ships thousands of payloads spanning injection (SQL, NoSQL, LDAP, command, XPath, template), cross-site scripting (reflected, stored, DOM-based), server-side request forgery, XML external entity, server-side template injection, deserialization, open redirect, file inclusion, and cryptographic weaknesses. The scanner is also responsible for rate-limiting itself so that a 50,000-request scan does not overwhelm the application or trip rate-limit defenses.
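The fan-out of payloads across parameters can be sketched as a simple cross product. The three payloads here are well-known placeholder strings standing in for the thousands a real scanner ships; the point is the shape of the loop, one mutated request per parameter per payload, with every other parameter held at its baseline value.

```python
from itertools import product
from urllib.parse import urlencode

# Placeholder payload library -- one representative string per class.
PAYLOADS = {
    "sqli": "' OR '1'='1",
    "xss": "<script>alert(1)</script>",
    "traversal": "../../etc/passwd",
}

def mutate_params(base_params):
    """For each parameter, produce one request variant per payload class,
    leaving every other parameter at its baseline value."""
    variants = []
    for target, (cls, payload) in product(base_params, PAYLOADS.items()):
        mutated = dict(base_params, **{target: payload})
        variants.append((cls, target, urlencode(mutated)))
    return variants
```

Two parameters and three payload classes already yield six requests; multiply by hundreds of endpoints and a full payload library and the need for self-imposed rate limiting becomes obvious.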
Validation phase. A raw match between a payload and a response pattern is only a candidate finding. The scanner's job is to validate the candidate before reporting it, ideally with evidence the operator can reproduce manually. Validation strategies include differential analysis (comparing the response with a baseline request), out-of-band confirmation (using a scanner-controlled DNS or HTTP collector to confirm payloads that reach a server-side execution context), and time-based confirmation (for blind injection, where the payload causes a measurable execution delay). The black-box assumption — no knowledge of source — is what makes validation hard. Without source visibility the scanner cannot prove that a finding will be exploitable in every deployment, only that it was exploitable in this scan against this configuration.
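The time-based strategy can be sketched as a differential timing check. This is an illustration of the idea, not any vendor's algorithm; the delay and tolerance values are arbitrary assumptions, and using the median of several samples reflects the fact that one slow response proves nothing against network jitter.

```python
import statistics

def confirm_time_based(baseline_ms, delayed_ms,
                       injected_delay_ms=5000, tolerance_ms=1000):
    """Compare response times for baseline requests against requests
    carrying a sleep-injection payload. The delayed set should be slower
    than baseline by roughly the injected delay, across repeated samples."""
    base = statistics.median(baseline_ms)
    slow = statistics.median(delayed_ms)
    return (slow - base) >= (injected_delay_ms - tolerance_ms)
```

Out-of-band confirmation follows the same logic with a different signal: instead of a timing delta, the evidence is an inbound connection to infrastructure the scanner controls.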
What DAST Catches Well
DAST is at its strongest on issues that manifest in the deployed application's response to external requests. Because the scanner sees what an attacker would see, the findings are inherently runtime-confirmed — when DAST reports a vulnerability, an external actor could reproduce it. The categories where DAST is the most reliable detection mechanism include:
- Server and framework misconfiguration: Verbose error pages, directory listing enabled, default install pages still reachable, debug endpoints exposed in production, framework administrative consoles left accessible. These are configuration failures, not source-level bugs, and they are visible only to a tool that interacts with the running deployment.
- Missing or misconfigured security headers: Absent or weak Content-Security-Policy, missing Strict-Transport-Security, weak Referrer-Policy, missing X-Content-Type-Options, missing or permissive CORS configuration. A scanner enumerates these in seconds; a code reviewer reading source rarely finds the same issues.
- TLS configuration weaknesses: Deprecated TLS versions enabled, weak cipher suites, expired or self-signed certificates in production, missing HSTS preload, insecure renegotiation. These live in the web server configuration, not in application code.
- Insecure cookie attributes: Session cookies issued without Secure, HttpOnly, or SameSite attributes; session tokens transmitted in URL parameters; session fixation patterns visible only when a real session is exercised.
- Exposed administrative endpoints: /admin, /actuator, /phpinfo.php, /.git/, /.env, framework health endpoints with unfiltered output. Reachable in production, often invisible in source review because they were intended for staging only.
- Default credentials and weak authentication: Login forms accepting admin/admin, default credentials on third-party admin panels embedded in the deployment, missing brute-force protection on authentication endpoints.
- Simple injection on reachable parameters: Reflected XSS, error-based SQL injection, command injection, and open redirect on parameters the crawler reached and the fuzzer exercised.
The common thread is runtime context. None of these issues live purely in source code; they emerge from the interaction between source, configuration, deployment, and the network stack underneath. That is the territory only DAST occupies.
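The security-header checks described above are the simplest of these categories to mechanize, which is why scanners enumerate them in seconds. The sketch below audits a response's headers against a baseline policy; the header names are real, but the expected values are common-practice assumptions, not a complete or authoritative policy.

```python
# Baseline policy: None means presence alone is checked; a string
# means the value must match. These baselines are illustrative.
REQUIRED_HEADERS = {
    "strict-transport-security": None,
    "x-content-type-options": "nosniff",
    "content-security-policy": None,
    "referrer-policy": None,
}

def audit_headers(response_headers):
    """Return findings for absent or mismatched security headers.
    HTTP header names are case-insensitive, so normalize first."""
    headers = {k.lower(): v for k, v in response_headers.items()}
    findings = []
    for name, expected in REQUIRED_HEADERS.items():
        if name not in headers:
            findings.append(f"missing: {name}")
        elif expected is not None and headers[name].lower() != expected:
            findings.append(f"weak: {name}={headers[name]}")
    return findings
```

A real scanner runs a check like this passively against every response it observes, which is why header findings cost essentially nothing to collect.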
What DAST Misses
DAST has structural blind spots, and being honest about them is what lets you design a program that compensates. The categories DAST consistently misses are not vendor-specific — they are properties of black-box automated testing.
- Second-order injection. An attacker submits a malicious payload through endpoint A, the application stores it, and the payload executes when endpoint B (often hours or days later) reads the stored value back into a sensitive sink. The DAST scanner observes endpoint A returning a normal HTTP 200, sees no immediate evidence of injection, and moves on. The vulnerability is invisible without correlating writes to reads — something black-box scanning is structurally unable to do.
- Blind SSRF without out-of-band detection. The application makes an outbound request to an attacker-controlled URL, but the response to the client contains no observable difference. Without an out-of-band collector configured (a scanner-owned DNS or HTTP server that records inbound connections), DAST cannot distinguish "the request was blocked" from "the request was made to an internal target and the response was discarded." Many deployments do not configure out-of-band detection.
- Business logic flaws. A coupon that should apply once but accepts replays. A pricing endpoint that accepts negative quantities. A cross-tenant authorization bypass that requires guessing valid object identifiers. A scanner sees two HTTP 200 responses and reports nothing; a human tester or a SAST data flow that traces an authorization decision recognizes the issue.
- Race conditions and time-of-check-to-time-of-use bugs. Concurrency issues that require timing two requests precisely to land in the same critical section. Reproducible by a determined attacker, but not by a serial fuzzer that sends one request and waits for the response.
- Dead code paths with latent vulnerabilities. A deprecated endpoint that is no longer linked from the UI but remains routed and exploitable. The crawler never finds the URL; the scanner never tests it. Source-level analysis sees every route regardless of reachability.
- Vulnerabilities behind authentication walls. Unless authenticated scanning is configured carefully — and the scanner's session is renewed when it expires, and protected endpoints are not skipped because they returned 302 to a login page — anything past the login screen is invisible. Many production DAST deployments only scan unauthenticated surface area for operational simplicity.
- Hardcoded credentials in compiled assets. A secret embedded in a JavaScript bundle is reachable in principle through DAST (the scanner downloads the bundle), but most DAST scanners do not analyze bundled assets for entropy patterns. Source-level secrets scanning catches these reliably.
None of these gaps are arguments against running DAST. They are arguments for running DAST in combination with other layers that have complementary blind spots. Every category in application security has a shape, and the program is built by stacking shapes that cover each other's gaps.
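The blind-SSRF gap described above is worth making concrete. Out-of-band detection works by planting a unique token in each payload and correlating collector hits back to the request that planted them; without that correlation, a server-side fetch with no visible response difference is undetectable. The sketch below illustrates the bookkeeping. The collector domain is hypothetical, and a real implementation would also run the DNS/HTTP collector itself.

```python
import secrets

class OOBCorrelator:
    """Tie out-of-band callbacks back to the request that planted them.
    Each payload embeds a unique token as a subdomain of a scanner-owned
    collector domain (hypothetical: oob.example-collector.net). A hit on
    that hostname proves the payload reached a server-side execution
    context even when the HTTP response showed nothing."""
    def __init__(self, collector_domain):
        self.collector_domain = collector_domain
        self.pending = {}   # token -> (endpoint, parameter)

    def plant(self, endpoint, parameter):
        """Mint a payload URL for one endpoint/parameter pair."""
        token = secrets.token_hex(8)
        self.pending[token] = (endpoint, parameter)
        return f"http://{token}.{self.collector_domain}/"

    def callback(self, hostname):
        """Called when the collector logs an inbound DNS or HTTP hit.
        Returns the (endpoint, parameter) that triggered it, or None."""
        token = hostname.split(".", 1)[0]
        return self.pending.pop(token, None)
```

Deployments that skip this infrastructure lose the entire blind-SSRF and blind-injection detection class, which is the point of the bullet above.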
SAST vs DAST vs IAST
The three main automated application security testing categories — SAST, DAST, and IAST — analyze the same application from three different vantage points and find three largely non-overlapping bug populations. Treating them as alternatives is the single most expensive mistake newcomers make. The table summarizes the practical contrasts.
| Dimension | SAST | DAST | IAST |
|---|---|---|---|
| Vantage point | Source code (white-box) | Running app from outside (black-box) | Instrumented runtime (grey-box) |
| When in lifecycle | Code, Build (CI), Pull request | Test, Pre-prod, Continuous | Test, QA |
| Strongest at | Injection, taint flows, dead-code bugs | Runtime config, headers, TLS, deployment issues | Confirmed runtime issues with code-level location |
| Weakest at | Runtime config, deployment context | Logic, dead code, second-order, blind issues | Code paths test traffic does not exercise |
| Output type | File and line, with data flow trace | HTTP request and response evidence | Code-level finding with triggering request |
For a deeper walk through why SAST and DAST find different categories of issue and why a serious program runs both, see SAST vs DAST: Complementary Approaches to Application Security. For where DAST sits relative to human-driven penetration testing — a related but distinct activity — see DAST vs Penetration Testing: When to Use Each Approach.
The DAST Tool Landscape
The DAST market spans free open-source scanners maintained by the security community, prosumer tools sold per-seat to penetration testers, and enterprise platforms sold by AppSec budget. The table below covers the most commonly evaluated tools with the kind of summary an AppSec leader needs to compile a shortlist. Capability claims are limited to what each vendor publicly documents; coverage of any specific vulnerability class will vary by version and by configuration.
| Tool | Type | Open source / Commercial | Best for |
|---|---|---|---|
| OWASP ZAP | Active and passive scanner, intercepting proxy | Open source (Apache 2.0) | Free baseline scanning, CI automation, exploratory testing |
| Burp Suite Professional | Intercepting proxy and scanner for individual testers | Commercial (per seat) | Manual penetration testing, targeted application probing |
| Burp Suite Enterprise | Scheduled, distributed automated scanning | Commercial | Continuous DAST across many applications |
| Invicti (formerly Netsparker) | Enterprise DAST with proof-based scanning | Commercial | Large-scale automated scanning with reduced false positives |
| Acunetix | Web vulnerability scanner (now part of Invicti group) | Commercial | Web app and network scanning, mid-market deployments |
| Rapid7 InsightAppSec | Cloud-delivered DAST (successor to AppSpider) | Commercial | Customers in the broader Rapid7 Insight platform |
| Veracode DAST | Cloud-delivered DAST in the Veracode platform | Commercial | Organizations standardized on Veracode for SAST and SCA |
| Checkmarx One DAST | DAST module in the Checkmarx One platform | Commercial | Organizations standardized on Checkmarx for SAST and SCA |
| StackHawk | Developer-focused, CI-integrated API and web DAST | Commercial (free tier) | Pipeline-integrated scanning, API-first applications |
| Detectify | External attack surface monitoring with crowdsourced research | Commercial | Continuous external surface monitoring |
| Probely | Cloud DAST with API and developer integrations | Commercial | Mid-market and product engineering teams |
Choosing between these tools is rarely about which one finds more vulnerabilities in an isolated benchmark. It is about how well the tool fits into the way you actually deploy and ship — whether it integrates with your CI, whether it speaks the protocols your application uses (REST, GraphQL, gRPC, WebSocket), how it handles authentication for your specific identity provider, how its findings flow into your bug tracker, and how the false positive rate looks against your applications, not someone else's. Always run a proof of concept against a representative application before signing.
How to Run DAST in Your Pipeline
Adopting DAST well is mostly a sequence of operational decisions, not tool choice. The setup that catches issues without breaking applications, exhausting engineering goodwill, or producing reports nobody acts on shares the same handful of properties.
Use a dedicated staging environment. DAST against production carries real risk: payloads can corrupt data, trigger costly third-party API calls, or exhaust rate limits. The default deployment target is a staging environment that mirrors production architecture but is isolated from production data and downstream integrations. Where production scans are valuable (for catching configuration drift), they should be run with a constrained payload set, careful rate limiting, and explicit operational approval.
Configure authenticated scanning carefully. Anything past the login screen is invisible to an unauthenticated scan. Configure the scanner with a service account, a recorded login sequence, or pre-issued session tokens, and verify that the session does not expire mid-scan and silently drop coverage. For SaaS applications, scan with multiple roles to surface authorization issues between privilege levels.
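The silent-coverage-loss failure mode is detectable in-scan. A minimal heuristic, sketched below under the assumption of conventional login routes (the paths listed are hypothetical), is to treat a protected endpoint that answers with a redirect to the login page, or serves the login form itself, as evidence the session expired and must be renewed before continuing.

```python
# Hypothetical login routes -- configure these per application.
LOGIN_PATHS = ("/login", "/signin", "/auth")

def session_dropped(status, location, body):
    """Heuristic: a protected endpoint redirecting to the login page,
    or serving a login form as a 200, means the scan session expired
    and subsequent 'coverage' is really just the login screen."""
    if status in (301, 302, 303, 307) and location:
        return any(location.rstrip("/").endswith(p) for p in LOGIN_PATHS)
    return status == 200 and 'name="password"' in body
```

A scanner that runs this check on every response can re-authenticate and retry instead of silently recording the login page as the content of every protected URL.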
Scope with a sitemap or OpenAPI spec. Crawler-only enumeration always misses endpoints. Where the application exposes an OpenAPI specification, point the scanner at it directly so every documented endpoint is scanned regardless of whether the crawler reached it. For traditional applications, an explicit URL list or sitemap.xml supplements the crawler.
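Seeding from a spec is mechanically simple, which is part of why it is so effective. The sketch below enumerates every documented operation from a parsed OpenAPI document (loaded however you like, e.g. from JSON) so each one can be scanned regardless of whether the crawler found a link to it.

```python
def endpoints_from_openapi(spec):
    """Enumerate (METHOD, path) pairs from a parsed OpenAPI document.
    Every documented operation becomes a scan target, independent of
    crawler reachability."""
    http_methods = {"get", "post", "put", "patch", "delete", "head", "options"}
    ops = []
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method.lower() in http_methods:
                ops.append((method.upper(), path))
    return sorted(ops)
```

Diffing this list against what the crawler actually discovered is also a cheap coverage report: any documented endpoint the crawler missed is a gap you now know about.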
Rate-limit the scanner. A DAST scan that fires thousands of requests per second can break the application, exhaust connection pools, or trigger WAF blocking that masks real findings. Configure per-scan request rates that the application can tolerate, and stagger scans across multiple targets so concurrent load is bounded.
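A standard way to enforce a per-scan request rate is a token bucket, sketched below: requests spend tokens, tokens refill at the configured sustained rate, and a bounded burst is allowed. This is a generic illustration of the mechanism, not any particular scanner's throttle.

```python
import time

class TokenBucket:
    """Cap scan throughput: refill `rate` tokens per second up to
    `burst`; each request spends one token or waits for a refill."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def acquire(self):
        # Refill proportionally to elapsed time, capped at burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            # Sleep exactly long enough for one token to accrue.
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1
        self.tokens -= 1
```

Calling `acquire()` before every scan request bounds sustained load at `rate` requests per second regardless of how fast the attack engine generates them.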
Integrate with the bug tracker. DAST findings that live only in the scanner's dashboard get ignored. Wire the scanner to auto-create tickets in Jira, GitHub Issues, or whichever tracker the engineering team actually uses, with severity-based routing rules and remediation SLAs. The metric that matters is mean-time-to-remediate, not number-of-findings-in-the-dashboard.
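The severity-based routing rules can be as simple as a lookup table that turns a scanner finding into a tracker-ready ticket payload. The queue names and SLA windows below are placeholder assumptions; real values come from your own tracker configuration, and the payload shape would be adapted to whichever tracker API you wire up.

```python
# Hypothetical routing table: severity -> (queue, remediation SLA in days).
ROUTING = {
    "critical": ("APPSEC", 7),
    "high":     ("APPSEC", 30),
    "medium":   ("BACKLOG", 90),
    "low":      ("BACKLOG", None),   # tracked, no SLA
}

def route_finding(finding):
    """Turn a scanner finding into a tracker-ready ticket payload with a
    severity-based queue and remediation SLA. The HTTP evidence rides
    along so the assignee can reproduce the finding manually."""
    queue, sla_days = ROUTING.get(finding["severity"], ("BACKLOG", None))
    return {
        "queue": queue,
        "summary": f'[DAST] {finding["title"]} at {finding["url"]}',
        "sla_days": sla_days,
        "evidence": finding.get("request", ""),
    }
```

Fingerprint-based deduplication belongs in the same layer, so a nightly scan re-finding the same issue updates the existing ticket instead of opening a new one.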
Decide on pipeline gating. Critical-severity new findings can fail a deployment; existing critical findings should be visible but not block (gating on accumulated debt freezes the program). Calibrate severity thresholds against the false positive rate you actually observe, not the rate the vendor markets.
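The fail-on-new-only policy reduces to a set difference against a stored baseline. In the sketch below, a finding is represented as a (severity, fingerprint) pair; the fingerprint scheme shown in the test is an assumption, and any stable identifier works.

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(current, baseline, fail_at="critical"):
    """Fail the build only on *new* findings at or above `fail_at`.
    Findings already in the baseline are reported but never block,
    so accumulated debt cannot freeze deployments.
    Findings are (severity, fingerprint) pairs; returns (passed, blocking)."""
    threshold = SEVERITY_RANK[fail_at]
    new = current - baseline
    blocking = sorted(f for f in new if SEVERITY_RANK[f[0]] >= threshold)
    return (len(blocking) == 0, blocking)
```

After each accepted deployment, the current finding set becomes the next baseline, so the gate always measures the delta introduced by the change under review.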
DAST + SAST + SCA: Defense in Depth
The defensible AppSec triangle is SAST, SCA, and DAST. SAST analyzes source code at the moment it is written and catches the categories that are visible in the code path: injection, broken authentication patterns, insecure cryptography, dangerous deserialization, hardcoded credentials. SCA analyzes the open-source dependencies the application pulls in and catches the categories that are visible in the dependency tree: known CVEs in third-party libraries, license obligations, version drift. DAST analyzes the running application and catches the categories that are visible only at runtime: server misconfiguration, missing security headers, weak TLS, deployment-specific authentication flaws. Each category has a shape; the program is built by stacking shapes that cover each other's gaps.
GraphNode focuses on the SAST and SCA layers — the two categories that catch vulnerabilities before deployment. GraphNode SAST performs interprocedural data flow analysis across 13+ languages and 780+ security rules, surfacing taint-style vulnerabilities to the engineer who introduced them in the minutes after they push. GraphNode SCA walks the full transitive dependency tree and matches every component-version pair against vulnerability advisory databases, surfacing CVEs and license obligations at the pull request rather than at production discovery. Neither product is a DAST tool; they are the lifecycle-earlier counterparts that complement whichever DAST scanner you choose. For the broader picture of how these categories fit into a complete program, see the pillar guide on application security.
Frequently Asked Questions
What is DAST?
DAST stands for Dynamic Application Security Testing. It is a category of automated security testing that probes a running application from the outside, the way a remote attacker would, by sending crafted HTTP requests and analyzing the responses for evidence of vulnerabilities. DAST has no access to source code and no knowledge of internal architecture; it interacts only with the deployed application's external surface. It is strongest at finding runtime issues like server misconfiguration, missing security headers, weak TLS, and exploitable injection on reachable endpoints, and it is structurally weaker at finding business logic flaws, second-order injection, and vulnerabilities in code paths the crawler never reaches.
What is the difference between DAST and pen testing?
DAST is an automated tool that runs continuously and finds known vulnerability patterns at scale. Penetration testing is a human-driven, time-boxed engagement in which a qualified tester attempts to compromise the target using a mix of automated tools, manual probing, and creative exploit chains. DAST handles the broad-and-frequent layer; pen testing handles the deep-and-creative layer. DAST runs nightly or per-deployment and costs marginal compute; a pen test runs quarterly or annually and costs tester-days. They are complementary, not substitutable: a mature program runs both. For more detail, see the deep-dive comparison on DAST vs penetration testing.
Is DAST enough on its own?
No. DAST has structural blind spots that no amount of tuning will close: second-order injection, blind SSRF without out-of-band detection, business logic flaws, race conditions, dead code paths, and any vulnerability behind an authentication wall the scanner is not configured to authenticate past. A program built only on DAST will reliably miss entire categories of issue that source-level analysis (SAST) catches at the moment the code is written, and entire categories that dependency analysis (SCA) catches at the moment a vulnerable library is added. Defense in depth requires layering DAST with SAST and SCA so each layer catches what the other two cannot.
Does DAST replace SAST?
No. SAST and DAST analyze the same application from fundamentally different vantage points and find largely non-overlapping bug populations. SAST sees the code regardless of whether it is reachable from the outside; DAST sees only what the crawler reaches. SAST runs the moment a developer commits a change; DAST runs after a deployment exists to scan. SAST traces taint flows through dead code paths, error handlers, and authenticated endpoints the scanner never touches; DAST confirms exploitability against the deployed configuration including web server, framework middleware, and TLS stack. A mature program runs both, with SAST in CI and DAST against staging, and treats correlated findings as high-priority true positives.
What is the best free DAST tool?
OWASP ZAP is the most widely deployed open-source DAST scanner. It is maintained by the OWASP community, licensed under Apache 2.0, ships with both passive and active scanning, includes an intercepting proxy for manual exploration, and has well-documented automation modes for CI integration. Burp Suite also has a free Community Edition that is excellent as an intercepting proxy for manual testing, but the automated active scanner is restricted to the paid Professional tier. For most teams looking to start with DAST without procurement, OWASP ZAP is the standard choice; teams that want a more polished commercial experience typically evaluate Burp Suite Enterprise, Invicti, StackHawk, or one of the SaaS platforms in the table above.