Anchore (https://anchore.com/): Software supply chain security solutions

Navigate SSDF Attestation with this Practical Guide
https://anchore.com/blog/navigate-ssdf-attestation-with-this-practical-guide/
Tue, 07 May 2024

The clock is ticking again for software producers selling to federal agencies. In the second half of 2024, CEOs or their designees must begin providing an SSDF attestation that their organization adheres to the secure software development practices documented in NIST SSDF 800-218.

Download our latest ebook to navigate SSDF attestation quickly and meet the deadlines.

SSDF attestation covers four main areas from NIST SSDF:

  1. Securing development environments;
  2. Using automated tools to maintain trusted source code supply chains;
  3. Maintaining provenance (e.g., via SBOMs) for internal code and third-party components; and
  4. Using automated tools to check for security vulnerabilities.

This new requirement is not to be taken lightly. It applies to all software producers selling to any federal agency, regardless of whether they deliver their software end product as SaaS or on-prem. The SSDF attestation deadline is June 11, 2024, for critical software and September 11, 2024, for all other software. However, on-prem software developed before September 14, 2022, will only require SSDF attestation when a new major version is released. The bottom line is that most organizations will need to comply in 2024.

Companies will make their SSDF attestation through an online Common Form that covers the minimum secure software development requirements that software producers must meet. Individual agencies can add agency-specific instructions outside of the Common Form. 

Organizations that want to ensure they meet all relevant requirements can submit a third-party assessment instead of a CEO attestation. You must use a Third-Party Assessment Organization (3PAO) that is FedRAMP-certified or approved by an agency official. This option is a no-brainer for cloud software producers who already use a 3PAO for FedRAMP.

That is a lot of detail to absorb, so we put together a practical guide to the SSDF attestation requirements and how to meet them: "SSDF Attestation 101: A Practical Guide for Software Producers". We also cover how Anchore Enterprise automates SSDF attestation compliance by integrating directly into your software development pipeline and using continuous policy scanning to detect issues before they hit production.

Zero Trust Webinar with Security Boulevard
https://anchore.com/events/zero-trust-webinar-with-security-boulevard/
Mon, 06 May 2024

Anchore Enterprise 5.5: Vulnerability Feed Service Improvements
https://anchore.com/blog/enterprise-5-5-release-vulnerability-feed-improvements/
Thu, 02 May 2024

Today, we are announcing the release of Anchore Enterprise 5.5, the fifth of our regular minor monthly releases. It includes a number of improvements to GUI performance, multi-tenancy support, and AnchoreCTL, but with the uncertainty at NVD continuing, the main news is a set of updates to our feed drivers that help customers adapt to the current situation at NIST.

The NIST NVD interruption and backlog 

A few weeks ago, Anchore alerted the security community to the changes at the National Vulnerability Database (NVD) run by the US National Institute of Standards and Technology (NIST). As of February 18th, there has been a massive decline in the number of records published with metadata such as severity and CVSS scores. Less publicized, but no less problematic, the availability of the API that provides access to NVD records has also been erratic over the same period.


While the uncertainty around NVD continues, and recognizing that it remains a federally mandated data source (e.g., within FedRAMP), Anchore has updated its feed drivers to give customers flexibility in how they consume NVD data.

In a typical Anchore Enterprise deployment, NVD data has served two functions. The first is as a catalog of CVEs that can be correlated with advisories from other vendors to provide more context around an issue. The second is as a matching source of last resort for surfacing vulnerabilities where no other vulnerability data exists. The latter comes with a caveat: the sheer breadth of the NVD database means data quality varies, which can lead to false positives.

See it in action!

To see the latest features in action, please join our webinar "Adapting to the new normal at NVD with Anchore Vulnerability Feed" on May 7th at 10am PT/1pm ET. Register Now.

Improvements to the Anchore Vulnerability Feed Service

The Exclusion feed

Anchore released its own Vulnerability Feed with v4.1, which provided an ‘Exclusion Feed’ to avoid NVD false positives by preventing matches against vulnerability records known to be inaccurate.

As of v5.5, we have extended the Anchore Vulnerability Feed service to provide two additional data features. 

Simplify network management with 3rd party vulnerability feed 

The first is the ability to download a copy of 3rd party vulnerability feed data sets, including NVD, directly from Anchore, so that transient service availability issues don’t generate alerts. This simplifies network management: only one firewall rule needs to be created to enable the retrieval of all vulnerability data for use with Anchore Enterprise. This proxy-mode is the default in 5.5 for the majority of feeds, but customers who want to keep the autonomous operation that doesn’t rely on Anchore, contacting NVD and other vendor endpoints directly, can continue to use the direct-mode as before.

Enriched data

The second change is that Anchore is continuing to add new CVE records from the NVD database while enriching those records with additional data, specifically CPE information, which helps map affected versions. While Anchore can’t provide NVD severity or CVSS scores, which by definition have to come from NVD itself, these records and their metadata will continue to allow customers to reference CVEs. This data is available by default in the proxy-mode mentioned above, or as a configuration option in the direct-mode.

How it works

For the 3rd party vulnerability data, Anchore runs the same feed service software that customers would run in their local site; it periodically retrieves the data from all of its available sources and structures it into a format ready to be downloaded by customers. Using the existing feed drivers in their local feed service (NVD, GitHub, etc.), customers download the entire database in one atomic operation. This contrasts with the API-based approach of the direct-mode, where individual records are retrieved one at a time, which can be slow. Proxy-mode can be enabled on a driver-by-driver basis, as in the sketch below.
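The following is purely illustrative and not the shipping Anchore Enterprise 5.5 configuration schema; the key names here are hypothetical, so check the release documentation for the actual options. Conceptually, per-driver mode selection looks something like this:

# Hypothetical configuration sketch -- key names are illustrative only,
# not the actual Anchore Enterprise 5.5 schema.
feeds:
  drivers:
    nvd:
      mode: proxy    # download Anchore's pre-built dataset in one atomic operation
    github:
      mode: proxy
    ubuntu:
      mode: direct   # keep contacting the vendor endpoint directly, as before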

For the enriched data, Anchore runs a service that looks for new CVE records from other upstream sources, for example the CVE5 database hosted by MITRE, plus custom data added by Anchore engineers. The database tables that host the NVD records are then populated with these CVE updates. CPE data is sourced from vendor records (e.g., Red Hat Security Advisories) and added to the NVD records to enable matching logic.

Looking forward

Throughout the year, we will be making additional improvements to the Anchore Vulnerability Feed to help customers not just navigate the uncertainty at NVD but also reduce their false positive and false negative counts in general.

See it in action!

Join our webinar "Adapting to the new normal at NVD with Anchore Vulnerability Feed" on May 7th at 10am PT/1pm ET.

Modeling Software Security as Unit Tests: A Mental Model for Developers
https://anchore.com/blog/modeling-software-security-as-unit-tests-a-mental-model-for-developers/
Tue, 30 Apr 2024

Modern software development is complex, to say the least. Vulnerabilities often lurk within the vast networks of dependencies that underpin applications. A typical scenario involves a simple app.go source file that is underpinned by a sprawling tree of external libraries and frameworks (check the go.mod file for the receipts). As developers incorporate these dependencies into their applications, the security risks escalate, often surpassing the complexity of the original source code. This real-world challenge highlights a critical concern: hidden vulnerabilities that are magnified by the dependencies themselves, making the task of securing software increasingly daunting.

Addressing this challenge requires reimagining software supply chain security through a different lens. In a recent webinar, the famed Kelsey Hightower provided an apt analogy to bring the sometimes opaque world of security into focus for developers: software security can be thought of as just another test in the software testing suite, and the system that manages the tests and the associated metadata is a data pipeline. We'll explore this analogy in more depth in this post, and by the end we will have built a bridge between developers and security.

The Problem: Modern software is built on a tower

Modern software is built from a tower of libraries and dependencies that increase the productivity of developers, but with these boosts come the risks of increased complexity. Below is a simple ‘ping-pong’ (i.e., request-response) application written in Go that imports a single HTTP web framework:

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.GET("/ping", func(c *gin.Context) {
		c.JSON(http.StatusOK, gin.H{
			"message": "pong",
		})
	})
	r.Run()
}

With this single framework comes a laundry list of dependencies that are needed in order for it to work. This is the go.mod file that accompanies the application:

module app

go 1.20

require github.com/gin-gonic/gin v1.7.2

require (
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-playground/locales v0.13.0 // indirect
	github.com/go-playground/universal-translator v0.17.0 // indirect
	github.com/go-playground/validator/v10 v10.4.1 // indirect
	github.com/golang/protobuf v1.3.3 // indirect
	github.com/json-iterator/go v1.1.9 // indirect
	github.com/leodido/go-urn v1.2.0 // indirect
	github.com/mattn/go-isatty v0.0.12 // indirect
	github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421 // indirect
	github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742 // indirect
	github.com/ugorji/go/codec v1.1.7 // indirect
	golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9 // indirect
	golang.org/x/sys v0.0.0-20200116001909-b77594299b42 // indirect
	gopkg.in/yaml.v2 v2.2.8 // indirect
)

The dependencies for the application end up being larger than the application source code. And in each of these dependencies is the potential for a vulnerability that could be exploited by a determined adversary. Kelsey Hightower summed this up well: "this is software security in the real world". Below is an example of a Java app that hides vulnerable dependencies inside the frameworks that the application is built on.

As much as we might want to put the genie back in the bottle, the productivity boost of building on top of frameworks is too good to reverse this trend. Instead, we have to look for different ways to manage security in this more complex world of software development.

If you’re looking for a solution to the complexity of modern software vulnerability management, be sure to take a look at the Anchore Enterprise platform and the included container vulnerability scanner.

The Solution: Modeling software supply chain security as a data pipeline

Software supply chain security is a meta problem of software development. The solution to most meta problems in software development is data pipeline management. 

Developers have learned this lesson before: when they first build an application and something goes wrong, they create a log of the error to solve the problem. This is a great solution until you’ve written your first hundred logging statements. Suddenly your solution has become its own problem, and the developer is buried under a mountain of logging data. This is where a logging (read: data) pipeline steps in. The pipeline manages the mountain of log data and helps developers sift the signal from the noise.

The same pattern emerges in software supply chain security. From the first run of a vulnerability scanner on almost any modern software, a developer will find themselves buried under a mountain of security metadata. 

$ grype dir:~/webinar-demo/examples/app:v2.0.0

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

NAME                      INSTALLED                           FIXED-IN                           TYPE          VULNERABILITY        SEVERITY 
github.com/gin-gonic/gin  v1.7.2                              1.7.7                              go-module     GHSA-h395-qcrw-5vmq  High      
github.com/gin-gonic/gin  v1.7.2                              1.9.0                              go-module     GHSA-3vp4-m3rf-835h  Medium    
github.com/gin-gonic/gin  v1.7.2                              1.9.1                              go-module     GHSA-2c4m-59x9-fr2g  Medium    
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20211202192323-5770296d904e  go-module     GHSA-gwc9-m7rh-j2ww  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20220314234659-1baeb1ce4c0b  go-module     GHSA-8c26-wmh5-6g9v  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.0.0-20201216223049-8b5274cf687f  go-module     GHSA-3vm4-22fp-5rfm  High      
golang.org/x/crypto       v0.0.0-20200622213623-75b288015ac9  0.17.0                             go-module     GHSA-45x7-px36-x8w8  Medium    
golang.org/x/sys          v0.0.0-20200116001909-b77594299b42  0.0.0-20220412211240-33da011f77ad  go-module     GHSA-p782-xgp4-8hr8  Medium    
log4j-core                2.15.0                              2.16.0                             java-archive  GHSA-7rjr-3q55-vv33  Critical  
log4j-core                2.15.0                              2.17.0                             java-archive  GHSA-p6xc-xr62-6r2g  High      
log4j-core                2.15.0                              2.17.1                             java-archive  GHSA-8489-44mv-ggj8  Medium

All of this from a single innocuous include statement for your favorite application framework.

Again, the data pipeline comes to the rescue and helps manage the flood of security metadata. In the rest of this post, we’ll step through the major functions of a data pipeline customized for solving the problem of software supply chain security.

Modeling SBOMs and vulnerability scans as unit tests

I like to think of security tools as just another test. A unit test might test the behavior of my code. I think this falls in the same quality bucket as linters to make sure you are following your company’s style guide. This is a way to make sure you are following your company’s security guide.

Kelsey Hightower

This idea from renowned developer Kelsey Hightower is apt, particularly for software supply chain security. Tests are a mental model that developers use on a daily basis. Security tools are functions that run against your application to produce security data about it, rather than behavioral information like a unit test. The first two foundational functions of software supply chain security are identifying all of the software dependencies and scanning those dependencies for known vulnerabilities (i.e., ‘testing’ for vulnerabilities in an application).

This is typically accomplished by running an SBOM generation tool like Syft to create an inventory of all dependencies, followed by running a vulnerability scanner like Grype to compare the inventory of software components in the SBOM against a database of vulnerabilities. Going back to the data pipeline model, the SBOM and the vulnerability database are the data sources, and the vulnerability report is the transformed security metadata that feeds the rest of the pipeline.
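As a minimal sketch of that two-step workflow on the command line (the SBOM filename is arbitrary, and the demo directory matches the scan output below):

$ syft dir:~/webinar-demo/examples/app:v2.0.0 -o json > sbom.json
$ grype sbom:./sbom.json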

$ grype dir:~/webinar-demo/examples/app:v2.0.0 -o json

 ✔ Vulnerability DB                [no update available]  
 ✔ Indexed file system                                                                            ~/webinar-demo/examples/app:v2.0.0
 ✔ Cataloged contents                                                         889d95358bbb68b88fb72e07ba33267b314b6da8c6be84d164d2ed425c80b9c3
   ├── ✔ Packages                        [16 packages]  
   └── ✔ Executables                     [0 executables]  
 ✔ Scanned for vulnerabilities     [11 vulnerability matches]  
   ├── by severity: 1 critical, 5 high, 5 medium, 0 low, 0 negligible
   └── by status:   11 fixed, 0 not-fixed, 0 ignored 

{
 "matches": [
  {
   "vulnerability": {
    "id": "GHSA-h395-qcrw-5vmq",
    "dataSource": "https://github.com/advisories/GHSA-h395-qcrw-5vmq",
    "namespace": "github:language:go",
    "severity": "High",
    "urls": [
     "https://github.com/advisories/GHSA-h395-qcrw-5vmq"
    ],
    "description": "Inconsistent Interpretation of HTTP Requests in github.com/gin-gonic/gin",
    "cvss": [
     {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:L/A:N",
      "metrics": {
       "baseScore": 7.1,
       "exploitabilityScore": 2.8,
       "impactScore": 4.2
      },
      "vendorMetadata": {
       "base_severity": "High",
       "status": "N/A"
      }
     }
    ],
    . . . 

This was previously done just prior to pushing an application to production, as a release gate that had to be passed before software could be shipped. Just as unit tests have moved earlier in the software development lifecycle as DevOps principles won the mindshare of the industry, security testing has "shifted left" in the development cycle. With self-contained, open source CLI tooling like Syft and Grype, developers can now incorporate security testing into their development environment and test for vulnerabilities before even pushing a commit to a continuous integration (CI) server.
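For example, a developer could wire Grype into a Git pre-push hook as a minimal local gate; this is just a sketch, assuming Grype is installed on the workstation. The --fail-on flag returns a non-zero exit code when a vulnerability at or above the given severity is found, which aborts the push:

#!/bin/sh
# .git/hooks/pre-push -- block the push if any high or critical vulnerability is found
grype dir:. --fail-on high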

From a security perspective, this shift is a huge win. Security vulnerabilities are caught earlier in the development process and fixed before they come up against a delivery due date. But with all of this new data being created, data overload has led to a different set of problems.

Vulnerability overload: Uncovering the signal in the noise

As in the world of application logs that came before it, at some point an automated process generates so much information that finding the signal in the noise becomes its own problem.

How Anchore Enterprise manages SBOMs and vulnerability scans

Centralized management of SBOMs and vulnerability scans can end up being a massive headache. No need to come up with your own storage and data management solution. Just configure the AnchoreCTL CLI tool to automatically submit SBOMs and vulnerability scans as you run them locally. Anchore Enterprise stores all of this data for you.

On top of this, Anchore Enterprise offers data analysis tools so that you can search and filter SBOMs and vulnerability scans by version, build stage, vulnerability type, etc.

Combining local developer tooling with centralized data management creates a best-of-both-worlds environment where engineers can still get their tasks done locally with ease while offloading the arduous management tasks to a server.
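As a sketch of what submitting a locally built image to that central store can look like with AnchoreCTL (the registry path is a placeholder, and the exact subcommands and flags can vary by version, so treat these as illustrative):

$ anchorectl image add registry.example.com/app:v2.0.0 --wait
$ anchorectl image vulnerabilities registry.example.com/app:v2.0.0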

Added benefit: SBOM drift detection

Another benefit of distributed SBOM generation and vulnerability scanning is that this security check can be run at each stage of the build process. It would be nice to believe that the software written in a developer’s local environment always makes it through to production in an untouched, pristine state, but this is rarely the case.

Running SBOM generation and vulnerability scanning at development, on the build server, in the artifact registry, at deploy, and during runtime creates a full picture of where and when software is modified in the development process. This simplifies post-incident investigations or, even better, catches issues well before they make it to a production environment.

This historical record is a feature provided by Anchore Enterprise called Drift Detection. In the same way that an HTTP cookie creates state between individual HTTP requests, Drift Detection is security metadata about security metadata (recursion, much?) that allows state to be created between each stage of the build pipeline. Being the central store for all of the associated security metadata makes the Anchore Enterprise platform the ideal location to aggregate and scan for these particular anomalies.

Policy as a lever

Being able to filter through all of the noise created by integrating security checks across the software development process creates massive leverage when searching for a particular issue, but it is still a manual process, and being a full-time investigator isn’t part of the software engineer’s job description. Wouldn’t it be great if we could automate some, if not the majority, of these investigations?

I’m glad we’re of like minds, because this is where policy comes into the picture. Returning to Kelsey Hightower’s comparison of security tools to linters, policy is the security guide, codified by your security team, that allows you to quickly check whether the commit you put together meets the standards for secure software.

By running these checks and automating the results, developers quickly receive feedback on any potential security issues discovered in their commit. This allows developers to polish their code before it is flagged by the security check in the CI server and potentially failed. No more waiting on the security team to review your commit before it can proceed to the next stage. Developers are empowered to solve the security risks and feel confident that their code won’t be held up downstream.

Policies-as-code supports existing developer workflows

Anchore Enterprise designed its policy engine to ingest the individual policies as JSON objects that can be integrated directly into the existing software development tooling. Create a policy in the UI or CLI, generate the JSON and commit it directly to the repo.

This prevents the painful context switching of moving between different interfaces and allows engineering and security teams to easily reap the rewards of version control and rollbacks that come pre-baked into tools like Git. Anchore Enterprise was designed by engineers for engineers, which made policy-as-code the obvious choice when designing the platform.
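To make this concrete, here is an abbreviated sketch of the shape such a policy JSON object can take: a single rule that stops the build when a fixable vulnerability of high severity or worse is found. The field names follow the Anchore policy bundle format as we understand it, but a real bundle also carries mappings and allowlists, so treat this as an illustration rather than a drop-in file.

{
  "id": "corp-security-guide",
  "name": "Corporate security guide",
  "version": "1_0",
  "policies": [
    {
      "id": "default-policy",
      "name": "Default policy",
      "version": "1_0",
      "rules": [
        {
          "id": "rule-high-severity-fixable",
          "gate": "vulnerabilities",
          "trigger": "package",
          "action": "STOP",
          "params": [
            { "name": "package_type", "value": "all" },
            { "name": "severity", "value": "high" },
            { "name": "severity_comparison", "value": ">=" },
            { "name": "fix_available", "value": "true" }
          ]
        }
      ]
    }
  ]
}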

Remediation automation integrated into the development workflow

Being alerted when a commit violates your company’s security guidelines is better than pushing insecure code and finding out from the breach that you forgot to sanitize the user input. But even after you get alerted to a problem, you still need to understand what is insecure and how to fix it. You can try Googling the issue or starting a conversation with your security team, but that just creates more work for you before you can get your commit into the build pipeline. What if you could get the answer for how to make your commit secure directly in your normal workflow?

Anchore Enterprise provides remediation recommendations to help create actionable advice on how to resolve security alerts that are flagged by a policy. This helps point developers in the right direction so that they can resolve their vulnerabilities quickly and easily without the manual back and forth of opening a ticket with the security team or Googling aimlessly to find the correct solution. The recommendations can be integrated directly into GitHub Issues or Jira tickets in order to blend seamlessly into the workflows that teams depend on to coordinate work across the organization.

Wrap-Up

From the perspective of a developer, it can sometimes feel like the security team is primarily a frustration that only slows down your ability to ship code. Anchore has internalized this feedback and built a platform that allows developers to still move at DevOps speed while producing high-quality, secure code. By integrating directly into developer workflows (e.g., CLI tooling, CI/CD integrations, source code repository integrations, etc.) and providing actionable feedback, Anchore Enterprise removes the traditional roadblock mentality that has typically described the relationship between development and security.

If you’re interested to see all of the features described in this blog post via a hands-on demo, check out the webinar and the accompanying workshop hosted on GitHub.

If you’re looking to go further in-depth with how to build and secure containers in the software supply chain, be sure to read our white paper: The Fundamentals of Container Security.

Upstream – a Tidelift expedition
https://anchore.com/events/upstream-a-tidelift-expedition/
Mon, 29 Apr 2024

Adapting to the new normal at NVD with Anchore Vulnerability Feed
https://anchore.com/webinars/adapting-to-the-new-normal-at-nvd-with-anchore-vulnerability-feed/
Wed, 24 Apr 2024

SSDF Attestation 101: A Practical Guide for Software Producers
https://anchore.com/white-papers/ssdf-attestation-101-a-practical-guide-for-software-producers/
Wed, 24 Apr 2024

This ebook sheds light on the recently increased security requirements by the US government and helps companies understand the 4 main requirements for SSDF attestation.

AWS Summit
https://anchore.com/events/aws-summit/
Wed, 24 Apr 2024

Streamlining FedRAMP Compliance: How Anchore Enterprise Simplifies the Process
https://anchore.com/blog/streamlining-fedramp-compliance-how-anchore-enterprise-simplifies-the-process/
Tue, 23 Apr 2024

FedRAMP compliance is hard, and not only because there are hundreds of controls that need to be reviewed and verified. On top of this, the controls can be interpreted and satisfied in multiple different ways. It is admirable to see an enterprise achieve FedRAMP compliance from scratch, but most of us want to achieve compliance without spending more than a year debating the interpretation of specific controls. This is where turnkey solutions like Anchore Enterprise come in.

Anchore Enterprise is a cloud-native software composition analysis platform that integrates SBOM generation, vulnerability scanning and policy enforcement into a single platform to provide a comprehensive solution for software supply chain security.

Overview of FedRAMP, who it applies to and the challenges of compliance

FedRAMP, or the Federal Risk and Authorization Management Program, is a federal compliance program that standardizes security assessment, authorization, and continuous monitoring for cloud products and services. As with any compliance standard, FedRAMP is modeled on the "Trust but Verify" security principle. FedRAMP standardizes how security is verified for Cloud Service Providers (CSPs).

One of the biggest challenges with achieving FedRAMP compliance comes from sorting through the vast volumes of data that make up the standard. Depending on the level of FedRAMP compliance you are attempting to meet, this could mean complying with 125 controls in the case of a FedRAMP low certification or up to 425 for FedRAMP high compliance.

While we aren’t going to go through the entire FedRAMP standard in this blog post, we will be focusing on the container security controls that are interleaved into FedRAMP.

FedRAMP container security requirements

1) Hardened Images

FedRAMP requires CSPs to adhere to strict security standards for hardened images used by government agencies. The standard mandates that CSPs:

  • Include only essential services and software in the images
  • Update images with the latest security patches
  • Ensure configuration settings meet secure baselines
  • Disable unnecessary ports and services
  • Manage user accounts securely
  • Implement encryption
  • Maintain logging and monitoring practices
  • Scan regularly for vulnerabilities and remediate promptly

If you want to go in-depth with how to create hardened images that meet FedRAMP compliance, download our white papers:

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images

Complete Guide to Hardening Containers with STIG

2) Container Build, Test, and Orchestration Pipelines

FedRAMP sets stringent requirements for container build, test, and orchestration pipelines to protect federal agencies. These include:

  • Hardened base images (see above) 
  • Automated build processes with integrity checks
  • Strict configuration management
  • Immutable containers
  • Secure artifact management
  • Container security testing
  • Comprehensive logging and monitoring

3) Vulnerability Scanning for Container Images

FedRAMP mandates rigorous vulnerability scanning protocols for container images to ensure their security within federal cloud deployments. This includes: 

  • Comprehensive scans integrated into CI/CD pipelines 
  • Remediation prioritized based on severity
  • Re-scanning policy post-remediation 
  • Detailed audit and compliance reports 
  • Checks against secure baselines (i.e., CIS or STIG)

4) Secure Sensors

FedRAMP requires continuous management of the security of machines, applications, and systems by identifying vulnerabilities. Key requirements include:

  • Authorized scanning tools
  • Authenticated security scans to simulate threats
  • Reporting and remediation
  • Scanning independent of developers
  • Direct integration with configuration management to track vulnerabilities

5) Registry Monitoring

While not explicitly called out in FedRAMP as either a control or a control family, there is still a requirement that images stored in a container registry be scanned at least every 30 days if they are deployed to production.

6) Asset Management and Inventory Reporting for Deployed Containers

FedRAMP mandates thorough asset management and inventory reporting for deployed containers to ensure security and compliance. Organizations must maintain detailed inventories including:

  • Container images
  • Source code
  • Versions
  • Configurations 
  • Continuous monitoring of container state 

7) Encryption

FedRAMP mandates robust encryption standards to secure federal information, requiring the use of NIST-approved cryptographic methods for both data at rest and data in transit. It is important that any containers that store data or move data through the system meet FIPS standards.

How Anchore helps organizations comply with these requirements

Anchore is the leading software supply chain security platform for meeting FedRAMP compliance. We have helped hundreds of organizations achieve it by deploying Anchore Enterprise as the solution for container security compliance. Anchore Enterprise integrates at multiple points in a FedRAMP-compliant environment; for more details on how each of these integrations meets FedRAMP requirements, keep reading.

1) Hardened Images

Anchore Enterprise integrates multiple tools in order to meet the FedRAMP requirements for hardened container images. We provide compliance policies that scan specifically for compliance with container hardening standards such as STIG and CIS. These tools were custom built to perform the checks necessary to meet either standard, or both.

2) Container Build, Test, and Orchestration Pipelines

Anchore integrates directly into your CI/CD pipelines through either the Anchore Enterprise API or pre-built plug-ins. This tight integration meets the FedRAMP standards requiring that all container images are hardened, all security checks are automated within the build process, and all actions are logged and audited. Anchore’s FedRAMP policy specifically checks that any container, at any stage of the pipeline, is checked for compliance.

3) Vulnerability Scanning for Container Images

Anchore Enterprise can be integrated into each stage of the development pipeline, offer remediation recommendations based on severity (e.g., CISA KEV vulnerabilities can be flagged and prioritized for immediate action), enforce re-scanning of containers after remediation, and produce compliance artifacts to automate reporting. This is accomplished with Anchore’s container scanner, direct pipeline integration, and FedRAMP policy.

4) Secure Sensors

Anchore Enterprise’s container vulnerability scanner and Kubernetes inventory agent are both authorized scanning tools. The container vulnerability scanner is integrated directly into the build pipeline whereas the k8s agent is run in production and scans for non-compliant containers at runtime.

5) Registry Monitoring

Anchore Enterprise is able to scan an artifact registry continuously for potentially non-compliant containers. It is configured to watch each unique image in image registries. It will automatically scan images that get pushed to these registries.

6) Asset Management and Inventory Reporting for Deployed Containers

Anchore Enterprise includes a full software component inventory workflow. It can scan all software components, generate Software Bills of Materials (SBOMs) to keep track of those components, and centrally store all SBOMs for analysis. Anchore Enterprise’s Kubernetes inventory agent can perform the same service for the runtime environment.

7) Encryption

Anchore Enterprise’s Static STIG tool can ensure that all containers maintain NIST and FIPS encryption standards. Verifying that data at rest and in transit is encrypted across thousands of containers is a difficult chore, but one easily automated via Anchore Enterprise.

The benefits of Anchore Enterprise’s shift-left approach

Shift compliance left and prevent violations

Detect and remediate FedRAMP compliance violations early in the development lifecycle to prevent production/high-side violations that would threaten your hard-earned compliance. Use Anchore’s “developer-bundle” in the integration phase to take immediate action on potential compliance violations. This ensures vulnerabilities with fixes available, as well as CISA KEV vulnerabilities, are addressed before they make it to the registry and become non-compliance issues you have to report.

Below is an example of a workflow in GitLab of how Anchore Enterprise’s SBOM generation, vulnerability scanning and policy enforcement can catch issues early and keep your compliance record clean.
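A minimal sketch of such a job in .gitlab-ci.yml might look like the following. The AnchoreCTL subcommands and flags shown are illustrative and may differ by version; the CI_REGISTRY_IMAGE and CI_COMMIT_SHA variables are GitLab built-ins, and the stage name is a placeholder:

anchore-scan:
  stage: test
  script:
    # Generate an SBOM for the freshly built image and submit it to Anchore Enterprise
    - anchorectl image add "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --wait
    # Evaluate the image against the active policy bundle (e.g., the FedRAMP policy);
    # a failing evaluation fails the job before the image is promoted
    - anchorectl image check "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --detail --fail-based-on-results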

Automate Compliance Reporting

Automate monthly and annual reporting using Anchore Enterprise’s reporting capabilities. Set these reports up to generate automatically based on FedRAMP’s compliance reporting requirements.

Manage POA&Ms

Given that Anchore Enterprise centrally stores and manages vulnerability information for an organization, it can also be used to manage Plans of Action & Milestones (POA&Ms) for any portions of the system that aren’t yet FedRAMP compliant but have a planned due date. Use Allowlists in Anchore Enterprise to centrally manage POA&Ms and assessed/justifiable findings.

Prevent Production Compliance Violations

Practice good production registry hygiene by utilizing Anchore Enterprise to scan stored images regularly. Anchore Enterprise’s Kubernetes runtime inventory will identify images that do not meet FedRAMP compliance or have not been used in the last ~7 days (company defined) so they can be removed from your production registry.

Conclusion

Achieving FedRAMP compliance from scratch is an arduous process and not a key differentiator for many organizations. In order to maintain organizational priority on the aspects of the business that differentiate an organization from its competitors, strategic outsourcing of non-core competencies is always a recommended strategy. Anchore Enterprise aims to be that turnkey solution for organizations that want the benefits of FedRAMP compliance without developing the internal expertise, specifically for the container security aspect.

By integrating SBOM generation, vulnerability scanning, and policy enforcement into a single platform, Anchore Enterprise not only simplifies the path to compliance but also enhances overall software supply chain security. Through the deployment of Anchore Enterprise, companies can achieve and maintain compliance more quickly and with greater assurance. If you’re looking for an even deeper look at how to achieve all 7 of the container security requirements of FedRAMP with Anchore Enterprise, read our playbook: FedRAMP Pre-Assessment Playbook For Containers.

DevSecOps for a DoD Software Factory: 6 Best Practices for Container Images
https://anchore.com/white-papers/devsecops-best-practices-for-dod-software-factories/
Fri, 19 Apr 2024