How the Secure Software Factory Reference Architecture protects the software supply chain

Feature
Jun 22, 2022 | 7 mins
Application Security | DevSecOps

This breakdown of the Cloud Native Computing Foundation's secure software factory guidance focuses on software provenance and build activities.

Credit: SPainter VFX / Getty Images

The term “factory” in connection with software production might seem bizarre. Most people still associate factories with the collection, manipulation and manufacturing of hard materials such as steel, automobiles or consumer electronics. However, software is produced in a factory construct as well: “software factory” generally refers to the collection of tools, assets and processes required to produce software in an efficient, repeatable and secure manner.

The software factory concept has taken hold in both the public and private sectors, having been recognized by organizations such as MITRE and VMware. The U.S. Department of Defense (DoD) has a robust ecosystem of at least 29 software factories, most notably Kessel Run and Platform One. Given the concern over software vulnerabilities, particularly in the software supply chain, it’s important to execute the software factory approach in a secure manner.

The Cloud Native Computing Foundation (CNCF) has provided guidance on this with its Secure Software Factory Reference Architecture. Here’s a breakdown of what it covers.

What is the Secure Software Factory Reference Architecture?

CNCF defines a software supply chain as “a series of steps performed when writing, testing, packaging and distributing application software to end consumers.” The software factory is the aggregate logical construct that facilitates that delivery of software. Done correctly, it makes security a key component of the application delivery process.

The CNCF Secure Software Factory (SSF) guidance builds on previous CNCF publications such as the Cloud-native Security Best Practices and Software Supply Chain Best Practices. The reference architecture leans on existing open-source tooling with an emphasis on security. It also rallies around four overarching principles from the Software Supply Chain whitepaper, each of which is required to ensure secure software delivery from inception to code to production (a minimal sketch of the signing-and-verification principle follows the list):

  • Defense in depth
  • Signing and verification
  • Artifact metadata analytics
  • Automation
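
To make the signing-and-verification principle concrete, here is a minimal Go sketch: it hashes an artifact, signs the digest with an ECDSA key, then verifies the signature the way a downstream consumer would. In a real factory this would be delegated to tooling such as Sigstore rather than hand-rolled key handling; everything here is illustrative.

```go
// Minimal sketch of the signing-and-verification principle, using only
// the Go standard library. Key handling is illustrative; a real SSF
// would use tooling such as Sigstore and keep keys in a KMS/HSM.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in for a build artifact (e.g., a container image layer).
	artifact := []byte("example artifact bytes")
	digest := sha256.Sum256(artifact)

	// The factory's signing key; in practice this never lives in code.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Sign the digest, producing what a consumer will later check.
	sig, err := ecdsa.SignASN1(rand.Reader, key, digest[:])
	if err != nil {
		panic(err)
	}

	// A downstream consumer verifies the signature against the
	// factory's trusted public key before using the artifact.
	ok := ecdsa.VerifyASN1(&key.PublicKey, digest[:], sig)
	fmt.Printf("signature valid: %v\n", ok)
}
```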

The SSF Reference Architecture isn’t focused on areas such as code scanning and signing, but instead takes a deeper focus on code provenance and build activities. The rationale for this focus is that downstream activities such as SAST/DAST rely on validating the provenance of what you receive and the identity of the party you are receiving it from, confirming it is a trusted entity. These may be identities tied to a human user or a machine identity. The combination of a signature and validation that it comes from a trusted source is key to assurance of provenance.

Each entity in an SSF has dependencies, whether on broader organizational IAM systems or on source code management. Downstream, the SSF itself is depended on for the attestations and signatures of the artifacts that consumers use.
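
As a rough illustration of what such an attestation can look like, the following Go sketch emits a minimal in-toto-style statement binding an artifact digest to a provenance predicate. The registry name, digest placeholder and predicate contents are hypothetical.

```go
// Sketch of a minimal in-toto-style attestation statement: the SSF
// binds an artifact digest (the subject) to metadata about how it was
// produced (the predicate). All concrete values are placeholders.
package main

import (
	"encoding/json"
	"fmt"
)

type Subject struct {
	Name   string            `json:"name"`
	Digest map[string]string `json:"digest"`
}

type Statement struct {
	Type          string         `json:"_type"`
	Subject       []Subject      `json:"subject"`
	PredicateType string         `json:"predicateType"`
	Predicate     map[string]any `json:"predicate"`
}

func main() {
	st := Statement{
		Type: "https://in-toto.io/Statement/v0.1",
		Subject: []Subject{{
			Name:   "registry.example.com/app",
			Digest: map[string]string{"sha256": "<artifact digest>"},
		}},
		PredicateType: "https://slsa.dev/provenance/v0.2",
		Predicate: map[string]any{
			// Hypothetical builder identity for the factory's pipeline.
			"builder": map[string]string{"id": "https://factory.example.com/pipeline"},
		},
	}
	out, _ := json.MarshalIndent(st, "", "  ")
	// In a real factory this statement would be signed before publication.
	fmt.Println(string(out))
}
```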

Secure software factory components

The SSF Reference Architecture has several “core” components plus management and distribution components. The core components are responsible for taking inputs and using them to create output artifacts. Management components focus on ensuring the SSF runs in alignment with your policies, while distribution components safely move the products of the factory for downstream consumption.

SSF Reference Architecture core components

Core components include the scheduling and orchestration platform, the pipeline framework and tooling, and build environments. All SSF components use the platform and its associated orchestration to conduct their activities.

The pipeline and associated tooling facilitate the workflow to build software artifacts. The guidance emphasizes that the pipeline itself should be subject to the same requirements as your workloads. This acknowledges that the pipeline is part of your attack surface and can be exploited to impact downstream consumers, much as it was in the SolarWinds attack. This point is echoed by emerging frameworks such as Supply Chain Levels for Software Artifacts (SLSA).

Lastly, the build environment is where your source code is converted into machine-readable software products, referred to as artifacts. Mature build environments are working to provide automated attestations regarding the inputs, actions and tools used during the build in order to validate the integrity of the build process and its associated outputs/artifacts. Organizations such as TestifySec are innovating to ensure organizations can detect process tampering or build compromises.
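
A simplified picture of that idea: the sketch below (standard library only, with hypothetical input file names) records the digest of each build input so the digests can later be embedded in a signed attestation and checked against what the build actually consumed.

```go
// Sketch: record the digest of each build input before the build runs,
// so an attestation can later prove which inputs produced the artifact.
// File paths here are hypothetical.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

func digestFile(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	inputs := []string{"go.mod", "go.sum", "main.go"} // hypothetical build inputs
	for _, in := range inputs {
		d, err := digestFile(in)
		if err != nil {
			fmt.Fprintf(os.Stderr, "skipping %s: %v\n", in, err)
			continue
		}
		// In a real factory these digests would be signed and attached
		// to the build's provenance attestation.
		fmt.Printf("%s  sha256:%s\n", in, d)
	}
}
```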

SSF Reference Architecture management components

Management components include the policy management framework along with attestors and observers. In the SSF context, the policy management framework is what helps codify organizational and security requirements such as IAM, assigned worker nodes and authorized container images. These policies will look different for each organization due to differing risk tolerances and applicable regulatory frameworks.

The policy management framework is crucial as the push for zero trust unfolds. Determining who is allowed to do what, and in what context, is key to enforcing zero-trust tenets such as least-permissive access control. You don’t want to deploy containers that were pushed by unauthorized individuals, that come from sources you don’t trust, or that aren’t signed by a source you trust.
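
A toy version of such a policy check, assuming a hypothetical allowlist of trusted registries and a signature check performed elsewhere, might look like this:

```go
// Sketch of a least-permissive deployment policy check: a container
// image is admitted only if it comes from an allowlisted registry and
// carries a verified signature. Registry names are hypothetical.
package main

import (
	"fmt"
	"strings"
)

var trustedRegistries = []string{"registry.example.com/"} // hypothetical allowlist

func allowed(image string, signatureVerified bool) bool {
	if !signatureVerified {
		return false // unsigned or unverifiable images are rejected outright
	}
	for _, prefix := range trustedRegistries {
		if strings.HasPrefix(image, prefix) {
			return true
		}
	}
	return false // unknown registry: deny by default
}

func main() {
	fmt.Println(allowed("registry.example.com/app:1.0", true))  // true
	fmt.Println(allowed("docker.io/someuser/app:latest", true)) // false
	fmt.Println(allowed("registry.example.com/app:1.0", false)) // false
}
```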

Given that the cloud-native context typically implies containers and an orchestrator such as Kubernetes, you have entities such as node attestors, workload attestors and pipeline observers. These verify the identity and authenticity of your nodes and workloads, as well as the verifiable metadata associated with pipeline processes.

SSF Reference Architecture distribution components

Rounding out the key components identified in the SSF Reference Architecture are your distribution components, which include an artifact repository and an admission controller. Your pipeline process produces artifacts that are stored in the artifact repository. These can include items such as container images, Kubernetes manifests, software bills of materials (SBOMs) and associated signatures. We see a push to use solutions such as Sigstore to sign not just code but SBOMs and attestations as well, an approach emphasized in the Linux Foundation/OpenSSF OSS Security Mobilization Plan.

Admission controllers are responsible for ensuring only authorized workloads can be run by your scheduling and orchestration components. These controllers can enforce policies governing which sources are allowed into a build and which components are allowed onto a node host, and can verify that the components used are trusted and verifiable.
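
To sketch the idea in a Kubernetes context, the following minimal validating-webhook handler rejects pods whose images don’t come from a hypothetical trusted registry. The AdmissionReview structs are pared down to only the fields this check needs; a production controller would use the official k8s.io API types and verify signatures rather than registry prefixes alone.

```go
// Sketch of a Kubernetes validating admission webhook that denies pods
// whose container images are not from a trusted registry. Struct
// definitions are deliberately minimal; registry name is hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

type admissionRequest struct {
	UID    string          `json:"uid"`
	Object json.RawMessage `json:"object"`
}

type admissionResponse struct {
	UID     string `json:"uid"`
	Allowed bool   `json:"allowed"`
}

type admissionReview struct {
	APIVersion string             `json:"apiVersion"`
	Kind       string             `json:"kind"`
	Request    *admissionRequest  `json:"request,omitempty"`
	Response   *admissionResponse `json:"response,omitempty"`
}

type pod struct {
	Spec struct {
		Containers []struct {
			Image string `json:"image"`
		} `json:"containers"`
	} `json:"spec"`
}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
		return
	}
	var p pod
	_ = json.Unmarshal(review.Request.Object, &p)

	allowed := true
	for _, c := range p.Spec.Containers {
		// Policy: only images from the (hypothetical) trusted registry run.
		if !strings.HasPrefix(c.Image, "registry.example.com/") {
			allowed = false
		}
	}
	review.Response = &admissionResponse{UID: review.Request.UID, Allowed: allowed}
	review.Request = nil
	_ = json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate", validate)
	// Real webhooks must serve TLS; plain HTTP keeps the sketch short.
	log.Fatal(http.ListenAndServe(":8443", nil))
}
```

In a real cluster, a handler like this would be registered through a ValidatingWebhookConfiguration and served over TLS.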

SSF Reference Architecture variables and functionality

The SSF guidance recognizes that inputs to and outputs from the SSF will vary. Inputs include items such as source code, software dependencies, user credentials, cryptographic material and pipeline definitions. Outputs include items such as software artifacts, public signing keys and metadata documents.

The whitepaper also walks through SSF functionality: how a project moves through the SSF and ultimately yields secure outputs and artifacts that are attested to and carry a level of assurance sufficient to establish trust with downstream consumers.
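
One way to picture that contract is as a typed function from the inputs above to the outputs above. The sketch below is a stub that mirrors the whitepaper’s lists, not a real pipeline; every concrete value is a placeholder.

```go
// Sketch modeling the SSF's contract as a function from inputs to
// outputs. The fields mirror the whitepaper's lists; Run is a stub.
package main

import "fmt"

type Inputs struct {
	SourceCode         string   // e.g., a git URL
	Dependencies       []string // resolved package references
	UserCredentials    string   // a reference into a secret store, never a raw secret
	SigningKeyRef      string   // e.g., a KMS key ID (cryptographic material)
	PipelineDefinition string   // path to the pipeline spec
}

type Outputs struct {
	Artifacts         []string // e.g., container image digests
	PublicSigningKeys []string
	Metadata          []string // SBOMs, attestations, provenance documents
}

// Run stands in for the whole factory: build, attest, sign, publish.
func Run(in Inputs) (Outputs, error) {
	// A real implementation would orchestrate the core, management and
	// distribution components described above.
	return Outputs{
		Artifacts:         []string{"registry.example.com/app@sha256:<digest>"},
		PublicSigningKeys: []string{"<factory public key>"},
		Metadata:          []string{"sbom.spdx.json", "provenance.json"},
	}, nil
}

func main() {
	out, err := Run(Inputs{SourceCode: "https://github.com/example/app.git"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out)
}
```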

SSF guidance complex out of necessity

At first glance the SSF Reference Architecture will seem complex, and that’s because it is. Delivering software in modern cloud-native environments involves many moving parts and accompanying processes to ensure that what is consumed and produced carries a level of assurance that aligns with an organization’s risk tolerance.

The complexity also underscores both how challenging it is to tie everything together and how much opportunity there is for missteps and misconfigurations, which could have a cascading downstream impact on consumers across the software-powered ecosystem.

It is often said that defenders have to be right all the time and malicious actors have to be right just once. Implementing best practices and guidance from organizations such as CNCF is a great place to start on a journey towards delivering secure software at the speed of relevance for the business.

Chris Hughes
Contributing Writer

Chris Hughes currently serves as the co-founder and CISO of Aquia. Chris has nearly 20 years of IT/cybersecurity experience, ranging from active duty with the U.S. Air Force to civil service with the U.S. Navy and General Services Administration (GSA)/FedRAMP, as well as time as a consultant in the private sector. He is also an adjunct professor for M.S. cybersecurity programs at Capitol Technology University and University of Maryland Global Campus. Chris participates in industry working groups such as the Cloud Security Alliance’s Incident Response Working Group and serves as the membership chair for Cloud Security Alliance D.C. He also co-hosts the Resilient Cyber Podcast. He holds various industry certifications, such as the CISSP/CCSP from ISC2, as well as the AWS and Azure security certifications. He regularly consults with IT and cybersecurity leaders from various industries to assist their organizations with their cloud migration journeys while keeping security a core component of that transformation.