Implementing the Auto-refreshing Official Kubernetes CVE Feed
Author: Pushkar Joglekar (VMware)
Accompanying the release of Kubernetes v1.25, we announced the
availability of an official CVE feed as an
alpha feature. This blog covers how we implemented that feature.
An auto-refreshing CVE feed allows users and implementers to programmatically fetch the list of CVEs announced by the Kubernetes SRC (Security Response Committee).
To ensure freshness and minimal maintainer overhead, the feed updates automatically by fetching the CVE-related information from the CVE announcement GitHub issues. Creating these issues is already part of the existing SRC workflow.
Until December 2021, it was not possible to filter for issues or PRs
tied to CVEs announced by the Kubernetes SRC. We added a new
official-cve-feed label to address that, and SIG Security applied it to the
relevant issues. In-scope issues are closed issues that have one or more
assigned CVE IDs and are officially announced as Kubernetes security
vulnerabilities by the SRC. You can now filter on this label to find all of
these issues.
For future security vulnerabilities, we added the label to the SRC playbook so that all future in-scope issues will automatically carry it.
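The filter the feed relies on can be sketched in a few lines. This is a hypothetical illustration, not the actual feed tooling: the `in_scope` helper and the sample issue data (numbers, titles, labels) are made up, but the dictionaries are shaped like the GitHub REST API's issue payloads.

```python
# Hypothetical sketch: keep only closed issues that carry the
# official-cve-feed label, mirroring the feed's in-scope criteria.

def in_scope(issue):
    labels = {label["name"] for label in issue.get("labels", [])}
    return issue.get("state") == "closed" and "official-cve-feed" in labels

# Illustrative payloads shaped like GitHub REST API issue objects:
issues = [
    {"number": 101, "state": "closed", "title": "CVE-2021-0000: example",
     "labels": [{"name": "official-cve-feed"}, {"name": "kind/bug"}]},
    {"number": 102, "state": "open", "title": "unrelated feature request",
     "labels": [{"name": "kind/feature"}]},
]

cve_issues = [issue for issue in issues if in_scope(issue)]
```

In the real workflow this filtering happens server-side: the GitHub REST API accepts `labels` and `state` query parameters, so the Prow job can ask GitHub for exactly these issues rather than filtering locally.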
Building on existing tooling
For the next step, we created a
Prow job that periodically queries the
GitHub REST API and pulls the relevant issues. The job runs every two hours and
pushes the CVE-related information fetched from GitHub into a Google Cloud
Bucket.
For every website build (at least twice a day),
Netlify data templates make a
call to this Google Cloud Bucket to pull the CVE information and then parse it
into fields that are JSON Feed v1.1
compliant. The JSON file is available for programmatic consumption by automated
security tools. For humans, the JSON is also transformed into a Markdown table
for easy viewing.
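The shaping step can be sketched as follows. This is a simplified, hypothetical example of producing a JSON Feed v1.1 document, not the actual data templates (which are Netlify/Hugo templates, not Python); the `to_json_feed` helper, field choices, and sample values are assumptions for illustration.

```python
import json

# Hypothetical sketch: shape per-CVE data into a JSON Feed v1.1
# document. Field values below are illustrative, not real feed data.

def to_json_feed(items):
    return {
        "version": "https://jsonfeed.org/version/1.1",
        "title": "Official Kubernetes CVE Feed",
        "items": [
            {
                "id": item["cve_id"],
                "title": f'{item["cve_id"]}: {item["summary"]}',
                "url": item["issue_url"],
                "content_text": item["summary"],
            }
            for item in items
        ],
    }

feed = to_json_feed([
    {"cve_id": "CVE-2021-0000", "summary": "Example vulnerability",
     "issue_url": "https://github.com/kubernetes/kubernetes/issues/0"},
])
print(json.dumps(feed, indent=2))
```

JSON Feed v1.1 requires `version`, `title`, and `items`, with each item carrying a stable `id`; using the CVE ID as the item `id` is one natural choice, since it never changes once assigned.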
Building trust and ensuring that the feed never goes stale were our main priorities when designing this feature for success and widespread adoption.
Integrity and Access Control Protections
Changes to any of the four artifacts used to build this feed could lead to feed tampering, broken JSON, and inconsistent or stale data.
Let’s look at how access is controlled for them one by one:
GitHub Issues for Publicly Announced CVEs
The ability to add the official-cve-feed label is restricted to a limited
number of trusted community members. This access is defined
in a configuration file, and any updates to that file must go through the
existing code review and approval process.
The Prow job is defined in a
kubernetes/infra configuration file, and
the shell script that pushes and pulls the data in the Google Cloud Bucket lives in the
sig-security-tooling sub-project. Both of these files go through the
same code review and approval process mentioned earlier.
Google Cloud Bucket
Write access to the Google Cloud Bucket is restricted to a set of
trusted community members, managed via an
invite-only Google Groups membership.
Website Data templates
Website data templates that fetch and parse the stored JSON blob are managed
in kubernetes/website and have to follow the same code review and approval
process mentioned earlier.
The feed is updated every time new CVE data is available, by periodically verifying that the generated data differs from the data already stored in the feed.
The Prow job runs every two hours and compares the
sha256 checksum of the
existing contents of the bucket with the checksum of the latest JSON file generated
from the GitHub issues. If there is new data available (typically because of a
newly announced CVE), the hashes do not match, and the updated JSON file is
pushed to the bucket, replacing the old file and the old checksum.
If the hashes match, the
write-to-bucket operation is skipped to avoid
redundant updates to the cloud bucket. This also sets us up for more frequent
runs of the Prow job if needed in the future.
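The skip-if-unchanged logic above boils down to a checksum comparison. Here is a minimal sketch of that idea; the `should_push` helper is a hypothetical stand-in for the real shell-based pipeline, and the byte strings are placeholder data.

```python
import hashlib

# Hypothetical sketch: only push the freshly generated JSON when its
# sha256 differs from the checksum of what the bucket already holds.

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def should_push(new_json: bytes, stored_checksum: str) -> bool:
    # Hashes match -> contents are identical -> skip the redundant write.
    return sha256_of(new_json) != stored_checksum

stored = sha256_of(b'{"items": []}')
assert not should_push(b'{"items": []}', stored)         # unchanged: skip
assert should_push(b'{"items": [{"id": "x"}]}', stored)  # new CVE: push
```

Because the comparison is cheap relative to a bucket write, this check is what makes more frequent job runs viable: most runs find matching hashes and exit without touching the bucket.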
If you would like to get involved in future iterations of this feed or other security-relevant work, please consider joining Kubernetes SIG Security by attending our bi-weekly meetings or hanging out with us on our Slack channel.
A special shout out and massive thanks to Neha Lohia (@nehalohia27) and Tim Bannister (@sftim) for their stellar collaboration for many months from “ideation to implementation” of this feature.