NIST 800-218 SSDF: A Practitioner's Guide to the Secure Software Development Framework
TL;DR
NIST SP 800-218 is the Secure Software Development Framework (SSDF) v1.1, published in February 2022. After Executive Order 14028 and OMB Memorandum M-22-18, federal software suppliers must self-attest to conformance with SSDF practices. Beyond government, banks, healthcare, and large commercial buyers now reference SSDF in vendor risk questionnaires. The framework groups practices into four families: Prepare the Organization (PO), Protect the Software (PS), Produce Well-Secured Software (PW), and Respond to Vulnerabilities (RV). Most AppSec evidence lives in the PW and RV groups, where SAST and SCA do the heavy lifting.
Executive Order 14028 changed the rules for organizations that sell software to the United States federal government. The order tasked NIST with publishing secure software development guidance, and OMB Memorandum M-22-18 (later reinforced by M-23-16) made conformance with that guidance mandatory for federal software suppliers through a self-attestation form. The guidance NIST produced is NIST Special Publication 800-218, also known as the Secure Software Development Framework, or SSDF v1.1. In the years since, SSDF has spread well beyond federal contracting. Banks, healthcare providers, insurers, and large enterprise buyers increasingly ask their vendors to demonstrate SSDF alignment as part of standard procurement. This guide explains what SSDF actually requires, which AppSec capabilities satisfy each practice, and how to prepare an attestation package that holds up to scrutiny.
What NIST SP 800-218 Is
Whether you write it as NIST 800-218 or NIST 800 218, the framework being referenced is the same publication. The hyphenated form is the official NIST citation style; the unhyphenated spelling is what most procurement teams type into a search bar. Both point at the Secure Software Development Framework described below.
NIST SP 800-218 was released in draft form in September 2021 and finalized as version 1.1 in February 2022. It defines a set of high-level secure software development practices that can be integrated into any existing software development lifecycle, from waterfall to continuous deployment. The framework is intentionally process-oriented rather than tool-prescriptive: it describes what should be done, gives examples of how it can be done, and points to other standards and frameworks (BSIMM, OWASP SAMM, ISO 27034, the SAFECode practices) for additional detail.
Two policy documents anchor SSDF in the federal procurement process. Executive Order 14028, "Improving the Nation's Cybersecurity," signed in May 2021, directed federal agencies to enhance software supply chain security. OMB Memorandum M-22-18, issued in September 2022 and amended by M-23-16 in June 2023, required that any agency using third-party software obtain a self-attestation from the producer that the software was developed in conformance with secure development practices articulated in SSDF. CISA published the Secure Software Development Attestation Form to standardize that attestation, with phased compliance deadlines that ran through 2024.
The current version of the framework is SSDF v1.1. NIST has signaled future updates aligned with AI-specific software development concerns, but as of this writing, v1.1 remains the authoritative version referenced by federal attestation requirements.
The Four Practice Groups
SSDF organizes its practices into four groups. Each group answers a different question about your software development program.
- Prepare the Organization (PO): Have you put the people, processes, and technology in place to develop secure software? PO covers role definitions, secure development training, a documented secure SDLC, toolchain management, and security requirements for all software the organization produces.
- Protect the Software (PS): Are you protecting the integrity of your code and the artifacts you ship? PS covers source code repository security, code signing, archival of released software, and protecting all forms of code from unauthorized access or modification.
- Produce Well-Secured Software (PW): Are you actually building software that resists attack? PW is the heart of SSDF for application security teams. It covers threat modeling, secure design, reusing well-secured software, code review, build configuration, automated analysis, testing, and secure default configurations.
- Respond to Vulnerabilities (RV): When vulnerabilities are discovered after release, do you find them, prioritize them, and fix them in a disciplined way? RV covers continuous vulnerability identification, assessment and remediation workflows, and root cause analysis to prevent recurrence.
Each practice has a unique identifier (for example, PW.5 or RV.1) and a list of suggested tasks and notional implementation examples. The framework also references other standards next to each practice, which is useful when you already have an ISO 27001 or SOC 2 program and want to map existing controls to SSDF.
PW Practices: The Heart of SSDF for AppSec
The Produce Well-Secured Software group is where AppSec teams spend most of their evidence-gathering effort. PW spans eight practices (PW.3 was merged into PW.4 in the v1.1 revision, so the numbering runs through PW.9), and the bulk of automated security tooling (SAST, SCA, IaC scanning, secret detection, hardening checks) maps to PW.4 through PW.9. The table below summarizes each PW practice and the tooling category that most directly satisfies it.
| Practice | Name | What It Asks For | Primary Tooling Category |
|---|---|---|---|
| PW.1 | Design Software to Meet Security Requirements and Mitigate Security Risks | Threat modeling, security architecture review, documented design requirements. | Threat modeling tools, architecture review |
| PW.2 | Review the Software Design to Verify Compliance with Security Requirements and Risk Information | Documented design reviews against security requirements before implementation. | Manual review process, design review templates |
| PW.4 | Reuse Existing, Well-Secured Software When Feasible Instead of Duplicating Functionality | Inventory, vetting, and tracking of reused components and dependencies. | SCA, SBOM management |
| PW.5 | Create Source Code by Adhering to Secure Coding Practices | Secure coding standards, training, and enforcement at code authoring time. | SAST in IDE, secure coding guidelines |
| PW.6 | Configure the Compilation, Interpreter, and Build Processes to Improve Executable Security | Compiler hardening flags, sandboxing, reproducible builds, signed artifacts. | Build configuration, compiler hardening |
| PW.7 | Review and/or Analyze Human-Readable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements | Manual code review and automated static analysis of source code for vulnerabilities. | SAST, peer review |
| PW.8 | Test Executable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements | Dynamic and interactive testing of running software for vulnerabilities. | DAST, IAST, fuzzing, penetration testing |
| PW.9 | Configure Software to Have Secure Settings by Default | Hardened default configurations, secure-by-default deployment templates. | Hardening guides, IaC scanning, configuration baselines |
PW.5, PW.6, PW.7, PW.8, and PW.9 are the practices most reviewers focus on when they evaluate an AppSec program against SSDF. PW.5 and PW.7 together cover the spectrum of static analysis, from IDE-time feedback to scheduled full-codebase scans. PW.7 specifically calls out automated tools as a recommended implementation example, which is the language reviewers look for when they map your SAST scanner to evidence. PW.8 is where DAST, IAST, fuzzing, and penetration testing live. PW.9 ties to hardened defaults, which connects to infrastructure-as-code scanning and secure deployment templates.
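To make PW.5 and PW.7 concrete, the sketch below shows the classic defect class a tainted-data-flow SAST rule is expected to catch: untrusted input concatenated into a SQL query, next to the parameterized fix. It is purely illustrative and not tied to any specific scanner or rule set.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # PW.7 target: untrusted input concatenated into SQL (CWE-89).
    # A tainted-data-flow rule flags this string as a dangerous sink.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # PW.5 fix: parameterized query; the driver handles escaping.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic injection payload returns every row through the unsafe
# path but no rows through the parameterized one.
payload = "' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 1: the whole table leaks
print(len(find_user_safe(conn, payload)))    # 0: no user has that name
```

The difference between the two functions is exactly the difference between evidence of a PW.7 finding and evidence of a PW.5 control working at authoring time.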
RV Practices: Vulnerability Response
The Respond to Vulnerabilities group covers what happens after software is built and released. Vulnerabilities continue to be discovered in code, in dependencies, and in the runtime environment, and SSDF expects a documented program for finding and fixing them on an ongoing basis. RV has three practices.
| Practice | Name | What It Asks For | Primary Tooling Category |
|---|---|---|---|
| RV.1 | Identify and Confirm Vulnerabilities on an Ongoing Basis | Continuous monitoring for new vulnerabilities in released code and dependencies, and intake of external vulnerability reports. | SCA monitoring, vulnerability disclosure program, scheduled SAST |
| RV.2 | Assess, Prioritize, and Remediate Vulnerabilities | Documented severity scoring, prioritization, remediation SLAs, and tracking through closure. | Vulnerability management, ticketing integration |
| RV.3 | Analyze Vulnerabilities to Identify Their Root Causes | Root cause analysis on each high-severity vulnerability and feedback into secure coding standards and tooling rules. | Post-incident review, lessons learned process |
RV.1 is where SCA earns most of its keep. Open-source dependencies in shipped software accumulate new CVEs every week, and SSDF expects continuous monitoring rather than point-in-time scans. RV.2 is where remediation discipline matters: reviewers want to see severity criteria, target remediation timelines, and tracking that proves vulnerabilities actually close. RV.3 asks for honest root cause work — when a critical vulnerability slips through, what changed in your process or tooling so the same class of issue is caught next time?
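The remediation discipline RV.2 asks for reduces to a simple calculation per finding: severity, open date, close date, and a documented SLA decide whether the finding is in or out of tolerance. A minimal sketch, with SLA day counts that are illustrative assumptions rather than anything SSDF prescribes:

```python
from datetime import datetime

# Hypothetical remediation SLAs by severity, in days. SSDF does not
# mandate timelines; your documented policy supplies these numbers.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def sla_status(severity, opened, closed=None, now=None):
    """Return (days_open, breached) for a single finding."""
    end = closed or now or datetime.utcnow()
    days_open = (end - opened).days
    return days_open, days_open > SLA_DAYS[severity]

opened = datetime(2024, 3, 1)
# Closed within the critical SLA window.
print(sla_status("critical", opened, closed=datetime(2024, 3, 5)))  # (4, False)
# Still open 61 days later: a high-severity SLA breach.
print(sla_status("high", opened, now=datetime(2024, 5, 1)))  # (61, True)
```

Aggregating these per-finding results by severity is what produces the time-to-remediate metrics reviewers ask to see.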
How GraphNode Maps to SSDF Practices
The mapping below shows where the GraphNode platform produces evidence directly applicable to specific SSDF practices. GraphNode is not a NIST-certified tool — there is no certification program for SAST or SCA against SP 800-218 — but the platform is designed to generate the artifacts (scan reports, SBOMs, audit logs, remediation tracking) that an SSDF attestation package typically needs.
| SSDF Practice | GraphNode Capability | Evidence Produced |
|---|---|---|
| PW.4 | SCA dependency inventory across direct and transitive packages | Component inventory, license records, SBOM export |
| PW.5 | IDE plugins for IntelliJ IDEA, Eclipse, and Visual Studio with shift-left feedback as code is written | In-IDE scan logs, developer-time finding history |
| PW.7 | SAST data flow analysis across 13+ languages with 780+ rules covering the OWASP Top 10 and CWE Top 25 vulnerability categories | Per-build SAST reports with finding metadata, rule mapping, and trend data |
| PW.7 / PW.8 | CI/CD integrations with Jenkins, GitHub Actions, GitLab CI, Azure DevOps that gate on policy violations | Pipeline scan history, policy gate enforcement records |
| RV.1 | Continuous SCA re-scoring as new CVEs are published against components already in your dependency graph | Vulnerability watch list, alert history, scheduled scan logs |
| RV.2 | Severity scoring, ticketing integration, and remediation status tracking through closure | Triage workflow records, time-to-remediate metrics |
| PO / PS | Role-based access control, audit logs, and tamper-evident record of who changed what and when | Audit log export, RBAC configuration export |
For the static analysis side of the mapping, see the GraphNode SAST product page. For the open-source dependency side, including SBOM generation and continuous CVE monitoring, see the GraphNode SCA product page. The audit log, RBAC, and reporting capabilities apply to both modules through the unified platform.
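The policy-gate pattern in the CI/CD row works the same way regardless of vendor: the pipeline parses the scan report and fails the build when findings exceed policy thresholds. A generic sketch, assuming a hypothetical JSON report shape (not GraphNode's actual schema) and hypothetical thresholds:

```python
import json

# Hypothetical policy: no criticals, at most 5 highs. Real gates are
# usually configurable per repository or per branch.
POLICY = {"critical": 0, "high": 5}

def gate(report_json):
    """Return the list of policy violations for one scan report."""
    findings = json.loads(report_json)["findings"]
    counts = {}
    for f in findings:
        counts[f["severity"]] = counts.get(f["severity"], 0) + 1
    return [
        f"{sev}: {counts.get(sev, 0)} found, {limit} allowed"
        for sev, limit in POLICY.items()
        if counts.get(sev, 0) > limit
    ]

report = json.dumps({"findings": [
    {"id": "SQLI-1", "severity": "critical"},
    {"id": "XSS-9", "severity": "high"},
]})
print(gate(report))  # ['critical: 1 found, 0 allowed']
# In CI, the wrapper would exit nonzero when this list is non-empty,
# which fails the pipeline stage; that exit record is PW.7/PW.8 evidence.
```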
Building an SSDF Attestation Package
Federal attestation under M-22-18 is a self-attestation, but the supplier signing the form is expected to retain evidence sufficient to defend the attestation if challenged. Commercial buyers asking SSDF questions in vendor risk assessments expect the same body of evidence. A defensible attestation package usually includes the following:
- Written secure SDLC policy: A documented policy that maps the organization's SDLC to SSDF practice groups, names accountable owners, and is reviewed at least annually.
- SAST scan reports: Evidence that source code is automatically analyzed for vulnerabilities (PW.7), with retention covering the attestation period and trend data showing the program is active over time.
- SCA scan reports and SBOMs: Evidence that third-party components are inventoried (PW.4) and continuously monitored for new vulnerabilities (RV.1), including export of CycloneDX or SPDX format SBOMs.
- Vulnerability remediation tracking: Records that show vulnerabilities are triaged, assigned, and closed within documented SLAs (RV.2), with metrics on time-to-remediate by severity.
- Build provenance and artifact integrity: Signed build artifacts and ideally SLSA-aligned provenance metadata that proves shipped artifacts come from the source you scanned. See our explainer on what SLSA is and why it matters for the supply chain side of this evidence.
- Secure development training records: Evidence that developers receive role-appropriate secure development training (PO.2), with completion tracking.
- Toolchain access and audit logs: Records showing only authorized people changed scanning rules, suppressed findings, or modified policy gates.
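Before an SBOM goes into the evidence folder, it is worth a mechanical check that every component carries the fields a reviewer will look for. A minimal sketch against a CycloneDX-shaped document, trimmed to a handful of fields (real CycloneDX documents carry many more):

```python
REQUIRED = ("name", "version", "purl")

def check_sbom(doc):
    """Return a list of problems found in a CycloneDX-style SBOM dict."""
    problems = []
    if doc.get("bomFormat") != "CycloneDX":
        problems.append("bomFormat is not CycloneDX")
    for i, comp in enumerate(doc.get("components", [])):
        for field in REQUIRED:
            if not comp.get(field):
                problems.append(f"component {i} missing {field}")
    return problems

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
        # Missing purl: CVE matching against this entry will be unreliable.
        {"type": "library", "name": "left-pad", "version": "1.3.0"},
    ],
}
print(check_sbom(sbom))  # ['component 1 missing purl']
```

A component without a package URL (purl) or equivalent identifier cannot be reliably matched against CVE feeds, which undermines the RV.1 monitoring the SBOM is supposed to enable.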
Reviewers do not expect perfection. They expect a documented program, evidence that the program operates as documented, and an honest record of the gaps you are working to close.
SSDF for Federal Contractors vs Commercial Buyers
For federal contractors, SSDF is no longer optional. Under OMB M-22-18 and M-23-16, agencies are required to obtain a CISA Secure Software Development Attestation Form from any third-party software producer whose software the agency uses. The form is signed by an officer of the producer and certifies conformance with specified SSDF practices. If the producer cannot fully attest, a Plan of Action and Milestones (POA&M) is required. For organizations selling into federal markets, see our government solutions overview for how GraphNode supports the attestation process.
Commercial buyers have no equivalent regulatory mandate, but SSDF has rapidly become a vendor risk reference point. Banks, insurers, and large healthcare buyers regularly include SSDF-aligned questions in third-party security questionnaires: Do you perform automated static analysis on your codebase? Do you maintain a software bill of materials? Do you have documented vulnerability remediation SLAs? Demonstrating SSDF practice maturity often shortens the procurement cycle even when SSDF is not explicitly required. For financial services context, see our financial services solutions page.
Frequently Asked Questions
What is the difference between NIST SSDF and NIST CSF?
The NIST Cybersecurity Framework (CSF) is an enterprise-wide cybersecurity risk management framework that covers identification, protection, detection, response, and recovery across an entire organization. SSDF (NIST SP 800-218) is narrower and more specific: it focuses exclusively on practices for developing secure software. Many organizations use CSF as the umbrella program and SSDF as the secure SDLC chapter inside it.
Is SSDF mandatory?
For organizations selling software to United States federal agencies, conformance with SSDF practices is required by OMB Memorandum M-22-18, evidenced through the CISA Secure Software Development Attestation Form. For commercial software providers, SSDF is not legally mandatory but is increasingly included in vendor risk assessments by banks, insurers, healthcare organizations, and large enterprise buyers.
What is OMB M-22-18?
OMB Memorandum M-22-18, "Enhancing the Security of the Software Supply Chain through Secure Software Development Practices," was issued in September 2022. It directed federal agencies to require self-attestation from third-party software producers that their software was developed in conformance with the practices in NIST SP 800-218. M-23-16, issued in June 2023, amended the timeline and clarified scope.
Does SSDF require SAST and SCA specifically?
SSDF is intentionally tool-agnostic. It does not require any specific product category by name. However, the suggested implementation examples for PW.7 (review and analyze human-readable code) explicitly reference automated static analysis, and the suggested examples for PW.4 and RV.1 reference component inventory and continuous vulnerability monitoring of third-party software. In practice, organizations satisfy these practices with SAST and SCA tools because no other tooling category produces equivalent evidence at the scale a modern codebase requires.
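As one illustration of what continuous monitoring for RV.1 looks like at the API level, the public OSV.dev service accepts a package-and-version query and returns known vulnerabilities. SSDF does not mandate any particular data source, so treat this as one example among several:

```python
import json
from urllib import request

def osv_query(name, version, ecosystem="PyPI"):
    """Build the JSON body for a POST to https://api.osv.dev/v1/query."""
    return {
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }

payload = osv_query("jinja2", "2.4.1")
print(json.dumps(payload))

# Network call, defined but not run here: a scheduled job would POST
# this for every component in the SBOM and diff the returned "vulns"
# list against the previous run to surface newly published CVEs.
def fetch(payload):
    req = request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```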
How long does SSDF compliance take?
The timeline depends on how mature your existing AppSec program is. Organizations with established SAST, SCA, and a documented secure SDLC can typically prepare a defensible attestation package in two to three months, mostly spent gathering evidence and writing the policy document that maps existing controls to SSDF practices. Organizations starting from scratch should plan for six to twelve months to deploy tooling, develop policy, train developers, and accumulate enough evidence to support an honest attestation.
What's the difference between SSDF v1.0 and v1.1?
SSDF v1.0 was published in April 2020 as a NIST cybersecurity white paper. SSDF v1.1, issued as SP 800-218 in February 2022, expanded the framework with additional implementation examples, refined practice descriptions, and added explicit references to other NIST publications, BSIMM, and OWASP SAMM. The four practice groups (PO, PS, PW, RV) and the structure of the framework are the same. Federal attestation requirements reference v1.1, which remains the current authoritative version.
Closing
SSDF is becoming the lingua franca of secure software development, not because of any single regulatory mandate but because it gives buyers, sellers, and reviewers a shared vocabulary for what a credible secure SDLC looks like. The work to comply usually clarifies your AppSec program in ways that pay off beyond the attestation itself: documented policy reveals gaps that ad hoc practice hides, vulnerability tracking forces honest conversations about remediation timelines, and SBOM generation surfaces dependencies you did not know you had. Whether your driver is a federal procurement deadline or a vendor questionnaire from a major bank, SSDF is worth treating as the operating model for your AppSec program rather than a one-time compliance exercise.