
OWASP Top 10 (2021): The Complete AppSec Risk Guide

18 min read | GraphNode Research

TL;DR

The OWASP Top 10 is the de facto standard reference for web application security risk, maintained by the Open Web Application Security Project. The current published edition is OWASP Top 10 (2021): A01 Broken Access Control, A02 Cryptographic Failures, A03 Injection, A04 Insecure Design, A05 Security Misconfiguration, A06 Vulnerable and Outdated Components, A07 Identification and Authentication Failures, A08 Software and Data Integrity Failures, A09 Security Logging and Monitoring Failures, and A10 Server-Side Request Forgery (SSRF). No single tool catches all 10 — a complete program layers SAST (catches A01 logic, A02, A03, A07, A08, A10), SCA (catches A06), DAST (catches A01, A05, A07), and secure design review (catches A04). This guide walks through every category, explains how OWASP builds the list, and maps each risk to the AppSec testing layer that detects it.

The OWASP Top 10 has been the de facto standard reference for application security risk since 2003. Every two to four years, the Open Web Application Security Project — a vendor-neutral non-profit — publishes a refreshed list of the most critical web application security risks. The list shapes secure-coding training programs, compliance frameworks, vulnerability scanner rule packs, and security team prioritization across the industry. PCI-DSS, the Payment Card Industry Data Security Standard, references it explicitly. Most procurement RFPs for AppSec tools require coverage. If you have ever worked anywhere near application security, you have been touched by the OWASP Top 10.

The current published edition is the OWASP Top 10 (2021), available at owasp.org/Top10. It is the authoritative version as of 2026 — OWASP has signalled that the next refresh will land on its usual cadence, but until that publication ships, the 2021 list remains canonical. This guide walks through all ten 2021 categories in order, explains the methodology OWASP uses to build the list, and maps each risk to the AppSec testing category that actually catches it. The mapping matters: no single scanner catches all ten, and treating the OWASP Top 10 as a checklist that one tool can satisfy is the most common mistake teams make.

How OWASP Builds the Top 10

A short note on phrasing before we get into methodology: practitioners interchangeably write "owasp top 10," "owasp 10 top," and "top 10 owasp vulnerabilities" to refer to the same list. OWASP itself uses "OWASP Top 10" as the canonical title; the variant orderings are simply how the term shows up in search queries and procurement notes. Whichever phrasing you encounter, the underlying document is the one published at owasp.org/Top10 and walked through category by category below.

The Top 10 is not a popularity poll. OWASP combines two streams of evidence: a community survey that gathers practitioner-perceived risks (intended to capture rising threats that have not yet shown up in scan data) and a quantitative analysis of vulnerability data contributed by tooling vendors, bug bounty platforms, and security consultancies. For the 2021 edition, the data analysis covered hundreds of thousands of applications and roughly 400 distinct CWEs. Each Top 10 category is itself an aggregation of related CWE (Common Weakness Enumeration) entries — A01 Broken Access Control, for instance, maps to 34 underlying CWEs, while A03 Injection consolidates a broader set including SQL injection, NoSQL injection, command injection, LDAP injection, and cross-site scripting.

Four factors determine whether a CWE category makes the list and how it ranks. Incidence rate measures the percentage of tested applications in which at least one instance of the weakness was found. Exploitability measures how technically easy a finding is to weaponize. Detectability measures how readily the weakness can be discovered (lower detectability is more dangerous, because it stays hidden). Technical impact measures the worst-case consequence of a successful exploit. The 2021 edition shifted methodology slightly compared to 2017: incidence rate became the primary ranking input, with exploitability and impact factored in but no longer the dominant signal. That shift is why categories like Insecure Design joined the list — the underlying weaknesses were always there, but the rebalanced methodology made room for them.

Categories also evolve between editions. A02:2021 Cryptographic Failures was previously A03:2017 Sensitive Data Exposure; OWASP renamed it to focus on the root cause (broken crypto) rather than the symptom (data leaking). XSS, which was its own category in 2017 (A07:2017), was folded into A03:2021 Injection because it shares the same underlying mechanism: untrusted input reaching a dangerous sink without sanitization. The list is meant to be read as a living taxonomy, not a fixed sequence. When you compare a 2017 finding to a 2021 mapping, expect movement.

A01:2021 Broken Access Control

Definition. Access control enforces what authenticated users are allowed to do. Broken access control means that authorization rules — who can read which record, who can perform which action — are missing, inconsistent, or bypassable. This category jumped from fifth place in 2017 to first place in 2021 because OWASP's data showed it had the highest incidence rate across tested applications: 94 percent of applications were tested for some form of broken access control, and the average incidence rate was around 3.81 percent.

Real-world example. An Insecure Direct Object Reference (IDOR) bug is the canonical case: an API endpoint at /api/orders/12345 returns the order if the authenticated user owns it — but a developer forgets to verify ownership, so any authenticated user can change the ID to 12346 and read someone else's order. Another classic: an admin endpoint protected only by a hidden link in the navigation menu, with no server-side role check.
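
The missing ownership check fits in a few lines. The sketch below uses an invented in-memory store and handler names purely for illustration — the point is the presence or absence of the server-side ownership test:

```python
# Hypothetical in-memory order store illustrating the IDOR pattern.
ORDERS = {
    12345: {"owner": "alice", "total": 99.00},
    12346: {"owner": "bob", "total": 42.50},
}

def get_order_vulnerable(order_id: int, current_user: str):
    # BUG: returns any order by ID -- never checks that current_user owns it.
    return ORDERS.get(order_id)

def get_order_fixed(order_id: int, current_user: str):
    # Deny by default: fetch, then verify ownership server-side.
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != current_user:
        return None  # treat "not yours" the same as "not found"
    return order
```

Returning the same result for "not found" and "not yours" also avoids leaking which IDs exist.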

Prevention. Deny by default. Centralize access control checks in a shared module rather than re-implementing them at each endpoint. Use server-side session state, never client-supplied claims, to determine authorization. Log access control failures and alert on repeated denials.

Which AppSec category catches it. Broken access control is hard for any single tool to catch fully because the rules live in business logic. DAST finds it best when configured with multiple authenticated user roles and trained to swap session tokens between requests. SAST can flag IDOR-shaped patterns (database queries by user-supplied ID without an ownership clause) and missing role-check decorators. Manual code review and threat modeling remain irreplaceable for the rest.

A02:2021 Cryptographic Failures

Definition. Previously titled "Sensitive Data Exposure" (A03:2017), this category was renamed in 2021 to focus on the root cause: failures in the cryptographic controls that should protect data in transit and at rest. It covers weak ciphers, missing encryption, hardcoded keys, broken random number generation, and protocol misuse. The rename matters because "sensitive data exposure" described the symptom — data getting leaked — while "cryptographic failures" points at the actual defect in the code.

Real-world example. A backend uses MD5 or SHA-1 to hash passwords, allowing offline cracking after a database breach. An application transmits credentials over HTTP rather than HTTPS. A developer hardcodes an AES key directly in source so the encryption is reversible by anyone with read access to the repository. Or a custom token uses Math.random() instead of a cryptographic RNG, so session identifiers become predictable.

Prevention. Use vetted cryptographic libraries; do not roll your own. Choose modern algorithms (Argon2 or bcrypt for passwords, AES-256-GCM for symmetric, TLS 1.2+ for transport). Store keys in a secrets manager or HSM, never in source control. Disable legacy protocols and ciphers at the load balancer.
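
A minimal sketch of the "use vetted primitives" advice, using only the standard library: PBKDF2-HMAC-SHA256 stands in here for Argon2/bcrypt (which require third-party packages), and the secrets module replaces predictable RNGs for token generation:

```python
import hashlib
import hmac
import os
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Stdlib PBKDF2-HMAC-SHA256 as a stand-in for Argon2/bcrypt;
    # a unique random salt per password defeats precomputed tables.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

# Session tokens must come from a CSPRNG -- secrets, never Math.random()
# equivalents like Python's random module.
session_token = secrets.token_urlsafe(32)
```

The slow, salted KDF is the point: after a database breach, offline cracking against MD5 or SHA-1 hashes is cheap, while 600,000 PBKDF2 iterations per guess is not.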

Which AppSec category catches it. SAST is the primary detection layer. Static analyzers — including GraphNode SAST — flag use of weak hash functions, deprecated ciphers, hardcoded keys, and broken random number generation directly from source. Configuration scanning catches TLS misconfiguration at the platform layer. Secret scanning catches hardcoded keys committed to repositories.

A03:2021 Injection

Definition. Injection occurs when untrusted input is interpreted as code or commands by a downstream interpreter. The category covers SQL injection, NoSQL injection, OS command injection, LDAP injection, XPath injection, expression-language injection, and — new in 2021 — cross-site scripting (XSS), which was previously its own category but was folded in because the underlying mechanism is identical: tainted data reaching a sink that interprets it.

Real-world example. A login endpoint builds SQL by string concatenation: "SELECT * FROM users WHERE email='" + email + "'". An attacker submits ' OR '1'='1 as the email and bypasses authentication. A reflected XSS bug echoes a query parameter into the page without escaping, allowing an attacker to inject a script that hijacks the victim's session. A command injection bug builds a shell command with user-supplied input and runs it.

Prevention. Use parameterized queries and prepared statements for all database access. Use ORMs that build queries safely. Apply context-appropriate output encoding for any user input rendered into HTML, JavaScript, CSS, or URL contexts. Validate input against an allow-list, not a deny-list. Avoid passing user input to shell commands at all; if unavoidable, use a vetted argument-array API rather than string concatenation.
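
The concatenation-versus-parameterization difference is worth seeing side by side. A minimal sketch using the stdlib sqlite3 driver, with an invented users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'Alice')")

def find_user_vulnerable(email: str):
    # BUG: string concatenation -- the input "' OR '1'='1" rewrites
    # the WHERE clause and returns every row in the table.
    return conn.execute(
        "SELECT name FROM users WHERE email='" + email + "'").fetchall()

def find_user_safe(email: str):
    # Parameterized: the driver binds email as data, never as SQL.
    return conn.execute(
        "SELECT name FROM users WHERE email=?", (email,)).fetchall()
```

The same discipline applies to every interpreter sink: argument arrays instead of shell strings, context-aware encoders instead of raw HTML interpolation.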

Which AppSec category catches it. Injection is the canonical use case for SAST with deep data flow analysis. A serious static analyzer traces tainted input from source (HTTP parameter, file upload, database read) through method boundaries to sink (SQL query, shell command, HTML render) and flags the path. Pattern-only linters miss the multi-hop cases; interprocedural data flow catches them. GraphNode SAST is built on context-aware taint propagation across 13+ languages with 780+ rules covering injection-class vulnerabilities. DAST confirms exploitability by sending crafted payloads at runtime, but is bounded by what the crawler reaches.

A04:2021 Insecure Design

Definition. Insecure Design is the new entry in the 2021 edition. It captures risks that originate before any code is written — flaws in the architecture, threat model, or business-logic design rather than implementation defects. A perfectly implemented authentication flow that lacks rate limiting on password reset is an insecure-design problem; the code does what it was designed to do, but the design itself is weak.

Real-world example. A movie-ticket booking flow that allows unlimited reservations without payment leads to inventory abuse. A password recovery flow that reveals whether an email is registered enables user enumeration. A multi-step transaction that does not verify state continuity allows replay or step-skipping attacks.

Prevention. Threat-model new features at the design phase. Use secure design patterns and reference architectures. Establish abuse-case requirements alongside use-case requirements. Apply security user stories to the backlog. Document trust boundaries explicitly.

Which AppSec category catches it. No automated tool fully catches insecure design. The discipline that addresses it is threat modeling, performed by humans during the planning phase. SAST and DAST may surface symptoms downstream, but the root cause is upstream of any scanner.

A05:2021 Security Misconfiguration

Definition. Security misconfiguration covers any case where a system is deployed with insecure default settings, incomplete hardening, exposed administrative interfaces, verbose error messages that leak internals, or missing security headers. With cloud platforms now the default deployment target, this category has expanded to include cloud service misconfigurations: public S3 buckets, overly permissive IAM roles, exposed Kubernetes dashboards.

Real-world example. An admin panel left at the default username and password after deployment. A staging environment exposed to the internet with a debug endpoint that returns full stack traces. Missing X-Frame-Options and Content-Security-Policy headers that would have blocked clickjacking and XSS. An S3 bucket holding customer data left publicly readable because the team never explicitly set the ACL.

Prevention. Establish hardened baseline configurations for every environment. Automate environment provisioning so configuration drift is impossible. Disable unused features and default accounts. Set security headers explicitly. Review error messages for information leakage before production deployment.
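
Setting headers explicitly and checking for drift can be sketched in a few lines. The header set below is illustrative, not exhaustive, and the audit helper is an invented name:

```python
# A hypothetical hardened baseline of response headers.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Referrer-Policy": "no-referrer",
}

def audit_headers(response_headers: dict) -> list[str]:
    # Return the baseline security headers a response is missing --
    # the kind of gap a DAST scan reports against a staging deploy.
    return [h for h in SECURITY_HEADERS if h not in response_headers]
```

A check like this belongs in an integration test, so a framework upgrade that silently drops a header fails the build instead of reaching production.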

Which AppSec category catches it. A combination layer. DAST catches missing security headers, exposed admin endpoints, and verbose error messages. IaC scanning (Terraform, CloudFormation, Helm) catches misconfigured cloud resources before deployment. Container scanning catches Dockerfile misconfigurations like running as root. CSPM tools catch drift in deployed cloud resources at runtime.

A06:2021 Vulnerable and Outdated Components

Definition. Modern applications are between 70 and 90 percent third-party code: open-source libraries, frameworks, runtime engines, and OS packages. When any of those components has a publicly disclosed vulnerability and your application uses an affected version, you inherit the vulnerability. This category covers both direct dependencies (libraries you explicitly import) and transitive dependencies (libraries pulled in by other libraries, often four or five layers deep).

Real-world example. The Log4Shell incident in December 2021 was the canonical case. A critical remote code execution vulnerability in Log4j affected millions of Java applications, the vast majority of which never declared Log4j directly — it came in transitively through Spring Boot, Apache Solr, Elasticsearch, and dozens of other widely used frameworks. Teams that audited only their direct dependencies missed it entirely. Equifax's 2017 breach, which exposed 147 million records, was traced to a known Apache Struts vulnerability that had a patch available for months.

Prevention. Maintain a continuously updated inventory of every component-version pair in your application, including the full transitive tree. Subscribe to vulnerability advisory feeds (NVD, GitHub Advisory Database, OSV). Patch promptly when fixes are available. Generate and ship an SBOM (Software Bill of Materials) with every release.

Which AppSec category catches it. This is exactly what SCA (Software Composition Analysis) exists to solve. GraphNode SCA walks the full transitive dependency tree and matches every component-version pair against vulnerability advisory databases. For a deeper walkthrough of how SCA works, see SCA scanning explained; for why transitive dependencies dominate the attack surface, see transitive dependencies as attack surface.
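
The core matching step SCA performs can be sketched in a few lines. The advisory table here is a toy stand-in for real feeds like OSV and NVD (the CVE identifiers are real; the version-range handling a production SCA engine needs is omitted):

```python
# Toy advisory database keyed by exact component-version pair.
# Real feeds express affected ranges, not single versions.
ADVISORIES = {
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],   # Log4Shell
    ("struts2-core", "2.3.31"): ["CVE-2017-5638"],  # Equifax-era Struts RCE
}

def match_advisories(dependency_tree):
    # dependency_tree: (name, version, path) triples, including
    # transitive entries -- the path shows how the component got in.
    findings = []
    for name, version, path in dependency_tree:
        for cve in ADVISORIES.get((name, version), []):
            findings.append({"component": name, "version": version,
                             "cve": cve, "path": path})
    return findings
```

The dependency path in each finding is what makes transitive hits actionable: it tells you which direct dependency to bump.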

A07:2021 Identification and Authentication Failures

Definition. Previously titled "Broken Authentication" (A02:2017), this category was renamed and slightly broadened in 2021. It covers any flaw in how users prove who they are or in how their authenticated session is managed: weak password policies, credential stuffing exposure, missing multi-factor authentication, session fixation, JWT misuse, missing session expiration, and so on.

Real-world example. An application accepts any password, including dictionary words like "password123" and previously breached credentials. A JWT implementation accepts the none algorithm, allowing forged tokens. Session IDs are predictable. A password reset link does not expire and remains valid indefinitely. A login form has no rate limiting, enabling automated credential-stuffing attacks against breached password databases.

Prevention. Enforce strong password policies and check candidate passwords against known-breached lists (Have I Been Pwned API). Require multi-factor authentication for sensitive operations. Use vetted libraries for session management; do not roll your own. Rate-limit authentication endpoints. Rotate session identifiers on privilege change.

Which AppSec category catches it. A mixed layer. SAST catches code-level patterns like accepting none JWTs, hardcoded secrets in token signing, or insecure cookie flags. DAST catches missing rate limiting, weak password acceptance, and predictable session IDs at runtime. Design review catches missing MFA requirements and broken account-recovery flows.

A08:2021 Software and Data Integrity Failures

Definition. A new category in 2021, this covers cases where code, infrastructure, or data is updated or loaded without verifying integrity. It includes insecure deserialization (a category in its own right in 2017), use of unsigned packages or auto-updates from untrusted sources, and CI/CD pipelines that lack integrity controls. The rise of supply-chain attacks like SolarWinds drove the inclusion of this category.

Real-world example. An application deserializes user-supplied data into native objects without type validation, enabling remote code execution via crafted gadget chains (the classic Java deserialization attack). A CI pipeline pulls Docker base images by mutable tag (:latest) rather than immutable digest, so a compromised registry can swap in a malicious layer. An application auto-updates from a server that has no signature verification, so a compromised update server can push backdoored binaries to every customer.

Prevention. Use digital signatures or attestations to verify software provenance. Pin dependency versions and verify integrity hashes. Pin container images by digest, not tag. Avoid deserializing untrusted data; if unavoidable, use a strict allow-list of expected types. Use SLSA-style supply chain integrity frameworks for release pipelines.
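
Verifying an integrity hash before loading data looks roughly like this sketch — JSON rather than pickle on purpose, since deserializing untrusted bytes with pickle is itself the vulnerability:

```python
import hashlib
import json

def verify_and_load(data: bytes, expected_sha256: str) -> dict:
    # Check the artifact's digest before parsing anything from it.
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError("integrity check failed: digest mismatch")
    # Parse as data-only JSON. Never pickle.loads() untrusted bytes:
    # crafted gadget chains turn deserialization into code execution.
    return json.loads(data)
```

The same pin-by-digest idea applies to container images: an immutable sha256 reference cannot be silently swapped the way a mutable :latest tag can.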

Which AppSec category catches it. SAST catches insecure deserialization patterns directly in source. SCA can flag dependencies pulled from unverified sources. Supply chain integrity tooling (signing, attestation, SLSA frameworks) addresses the build-and-release layer. For background on supply chain integrity, see what is SLSA.

A09:2021 Security Logging and Monitoring Failures

Definition. An attacker who cannot be detected has effectively unlimited time to operate. This category covers missing audit trails, insufficient logging of security-relevant events, logs that are not centralized or alerted on, and missing detection capability for active attacks. It is the only Top 10 category that is more about reducing incident dwell time than preventing the initial compromise.

Real-world example. An application does not log failed login attempts, so a credential-stuffing attack runs for weeks undetected. Logs are written but never aggregated to a SIEM, so an attacker pivots through the network without any single team noticing. A breach is discovered months later when stolen data appears for sale on a dark-web forum, rather than being detected at the time of exfiltration.

Prevention. Log security-relevant events: authentication, authorization, input validation failures, configuration changes. Centralize logs to a SIEM with retention long enough to support investigation. Define alerts for the patterns that matter. Practice incident response with tabletop exercises so the alerts are actually actionable.
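
The "log security-relevant events" item can be as small as one structured line per event. A sketch with illustrative field names, not a standard schema:

```python
import json
import logging

security_log = logging.getLogger("security")

def log_security_event(event: str, user: str, success: bool, **extra) -> dict:
    # One JSON line per security-relevant event, so a downstream SIEM
    # can parse, aggregate, and alert on it without regex guesswork.
    record = {"event": event, "user": user, "success": success, **extra}
    security_log.warning(json.dumps(record))
    return record
```

The structure is what matters: an alert like "more than 20 login failures for one account in five minutes" is a one-line SIEM rule against JSON logs and nearly impossible against free-text ones.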

Which AppSec category catches it. This is a SIEM and observability category, not a SAST or SCA category. SAST can flag missing log calls in some patterns, but the substantive work belongs to the security operations side of the program: log pipeline design, SIEM rules, alert tuning, and incident response readiness.

A10:2021 Server-Side Request Forgery (SSRF)

Definition. SSRF occurs when an application fetches a remote resource based on a user-supplied URL without sufficient validation, allowing an attacker to coerce the server into making requests to internal systems the attacker cannot reach directly. SSRF was promoted to the Top 10 for the first time in 2021, in large part because of its role in the 2019 Capital One breach, which exposed 100 million records by abusing an SSRF vulnerability to pivot into AWS metadata.

Real-world example. An image-fetching feature accepts a URL parameter and downloads the resource server-side. An attacker passes http://169.254.169.254/latest/meta-data/iam/security-credentials/ — the AWS instance metadata endpoint — and the application returns the cloud credentials of the host instance, which the attacker then uses to access S3, RDS, or anything else the role allows. Internal services bound to localhost-only ports are equally exposed.

Prevention. Validate user-supplied URLs against an allow-list of permitted hosts. Disable HTTP redirects in server-side fetchers. Block internal IP ranges (RFC 1918, link-local 169.254/16, loopback). Use IMDSv2 on AWS, which requires a session token and defeats simple SSRF. Network-segment outbound traffic from application tiers.
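
Allow-list validation that also rejects internal IP literals can be sketched as follows. ALLOWED_HOSTS is a hypothetical example, and a real implementation must additionally resolve hostnames and disable redirects, since attackers register DNS names that point at internal addresses:

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com"}  # hypothetical allow-list

def is_safe_fetch_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # blocks file://, gopher://, and other schemes
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Reject IP literals in internal ranges: RFC 1918, loopback,
        # and link-local 169.254/16 (the cloud metadata endpoint).
        ip = ipaddress.ip_address(host)
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    except ValueError:
        pass  # not an IP literal; fall through to the hostname allow-list
    return host in ALLOWED_HOSTS
```

Layering IMDSv2 and egress network rules underneath means a bypass of this check still hits a second wall.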

Which AppSec category catches it. SAST with proper data flow analysis catches SSRF directly at the source — a tainted URL flowing into an HTTP-fetch sink without an allow-list check is exactly the pattern interprocedural taint propagation is designed to find. DAST can confirm exploitability with crafted payloads but only against reachable endpoints. SAST plus IaC review of egress controls is the strongest layered defense.

How GraphNode Maps to the OWASP Top 10

The honest answer to "does GraphNode cover the OWASP Top 10?" is layered. GraphNode SAST and SCA together address the categories that source-code and dependency analysis can address, which is six of the ten. The remaining four — broken access control logic, security misconfiguration, insecure design, and logging gaps — require additional layers (DAST, IaC scanning, threat modeling, SIEM) that sit outside the static-analysis scope.

| OWASP 2021 | Risk | GraphNode Coverage | Primary CWE Family |
|---|---|---|---|
| A01 | Broken Access Control | SAST partial (IDOR patterns); needs DAST | CWE-285, CWE-639 |
| A02 | Cryptographic Failures | SAST (weak crypto, hardcoded keys) | CWE-327, CWE-798 |
| A03 | Injection (incl. XSS) | SAST (data flow taint analysis) | CWE-79, CWE-89, CWE-78 |
| A04 | Insecure Design | Out of scope; needs threat modeling | CWE-209, CWE-256 |
| A05 | Security Misconfiguration | Limited; needs DAST + IaC scan | CWE-16, CWE-732 |
| A06 | Vulnerable Components | SCA (transitive dependency CVEs) | CWE-1104, CWE-937 |
| A07 | Auth Failures | SAST partial (JWT, session patterns) | CWE-287, CWE-384 |
| A08 | Integrity Failures | SAST (deserialization); SCA (sources) | CWE-502, CWE-829 |
| A09 | Logging Failures | Out of scope; needs SIEM | CWE-778, CWE-117 |
| A10 | SSRF | SAST (data flow to HTTP sinks) | CWE-918 |

Six of the ten 2021 categories — A02, A03, A06, A07, A08, A10 — fall squarely within what GraphNode SAST and GraphNode SCA are built to detect. A03 Injection is the strongest area, where deep interprocedural data flow analysis distinguishes a serious engine from pattern-matching alternatives. For the remaining four categories, GraphNode is intentionally not the answer; a complete program layers in DAST, IaC scanning, threat modeling, and SIEM. Anyone marketing a single tool as "complete OWASP Top 10 coverage" is overselling — and there is no such thing as OWASP certification for any product, despite occasional vendor language to the contrary.

OWASP Top 10 Sub-Pages — Detailed Coverage

Each of the ten 2021 categories has its own deep-dive page with extended examples, prevention checklists, and detection mapping. Use the cards below as a jump table.

OWASP Top 10 in Your Pipeline

A practical OWASP-aligned program layers detection across the SDLC. Each layer addresses different categories, and the cost-effectiveness of each comes from running the right tool at the right phase.

SAST in CI runs on every pull request and catches A03 Injection (the strongest area for static taint analysis), A02 Cryptographic Failures, A07 patterns like broken JWT handling, A08 insecure deserialization, and A10 SSRF directly at the source. Findings appear as inline pull request comments with file and line, so developers fix them while the context is still fresh. Critical-severity new findings should fail the build; existing findings remain visible without blocking. For a deeper take on where SAST fits in the AppSec stack, see SAST tools.

SCA in CI walks the full transitive dependency tree on every build and catches A06 Vulnerable and Outdated Components. The output is a list of vulnerable dependencies with affected component-version pairs, fixed versions, and dependency paths. Continuous monitoring is equally important: a vulnerability disclosed today may affect a release you shipped three months ago. See SCA scanning explained for the full mechanics.

DAST in staging probes the running application from the outside and catches the categories that depend on runtime behavior: A01 Broken Access Control (with role-aware testing), A05 Security Misconfiguration (missing headers, exposed admin endpoints, verbose errors), and parts of A07 Authentication Failures. See DAST explained for how dynamic testing complements static analysis.

Security-aware code review and threat modeling catch A04 Insecure Design and the business-logic portion of A01 that no scanner finds. Quarterly threat modeling for new services, security champions embedded in product teams, and tabletop incident-response exercises are the human-layer practices that close the remaining gaps. For the broader picture, see application security.

Frequently Asked Questions

What is the OWASP Top 10?

The OWASP Top 10 is a standard awareness document published by the Open Web Application Security Project that ranks the most critical security risks to web applications. It has been the de facto reference for application security since 2003 and is updated every two to four years. The list is based on a combination of community survey data and quantitative vulnerability data contributed by tooling vendors and security consultancies. Each Top 10 entry is itself an aggregation of related CWE (Common Weakness Enumeration) categories.

What is the latest OWASP Top 10 edition?

The current published edition is the OWASP Top 10 (2021), available at owasp.org/Top10. It is the authoritative version as of 2026. OWASP refreshes the list every two to four years; until a new edition is officially published, the 2021 list remains canonical. Previous editions include 2017, 2013, 2010, 2007, 2004, and the original 2003 release.

Is the OWASP Top 10 a compliance requirement?

Not directly, but it is referenced by several compliance frameworks. PCI-DSS requirement 6.2.4 explicitly references OWASP Top 10 categories as the minimum threats secure-coding training and review must address. NIST SP 800-218 (the SSDF) and ISO 27034 do not name OWASP directly but cover the same vulnerability classes. Most AppSec procurement RFPs also require coverage. So while no regulator audits "OWASP compliance" per se, the list functions as an industry-wide minimum bar.

How does the OWASP Top 10 differ from CWE Top 25?

The OWASP Top 10 ranks broad risk categories specific to web applications and is updated every two to four years. The CWE Top 25, maintained by MITRE, ranks individual software weakness types across all software (not just web apps) and is updated annually. The two overlap heavily but are organized differently: a single OWASP category like A03 Injection aggregates multiple CWE Top 25 entries (CWE-79 XSS, CWE-89 SQL injection, CWE-78 OS command injection). Both are useful — OWASP for awareness and program design, CWE for fine-grained finding classification.

Can SAST detect all of OWASP Top 10?

No single tool, including SAST, detects all ten categories fully. SAST is strongest on injection (A03), cryptographic failures (A02), insecure deserialization in A08, SSRF (A10), and code-level patterns within A07 authentication failures. SAST contributes partially to A01 broken access control by flagging IDOR-shaped patterns. The remaining categories — A04 insecure design, A05 security misconfiguration, A06 vulnerable components, A09 logging failures — require other layers: SCA for A06, DAST for A05, threat modeling for A04, SIEM and observability for A09. A complete OWASP-aligned program layers SAST, SCA, DAST, IaC scanning, and human review.

Is there an OWASP API Top 10?

Yes. OWASP maintains a separate OWASP API Security Top 10, specifically for API-only services, with its own list of risk categories (Broken Object Level Authorization, Broken Authentication, Broken Object Property Level Authorization, Unrestricted Resource Consumption, Broken Function Level Authorization, and others). The current edition is the API Security Top 10 (2023). It overlaps with the main Top 10 but rebalances toward authorization, rate limiting, and API-specific design risks. For organizations whose attack surface is dominated by APIs rather than browser-facing applications, the API Top 10 is the more relevant reference.

Detect 6 of the 10 OWASP Categories with GraphNode SAST + SCA

Deep data flow SAST catches A02, A03, A07, A08, and A10. SCA catches A06 across the full transitive dependency tree. One engine, deployable on-premise, with 780+ rules covering the OWASP Top 10 and CWE Top 25.

Request Demo