RFC for linking packages to their source and build #626
Conversation
One question for clarification: this RFCs repo has historically been mostly focused on the CLI. Is this RFC for the implementation of this functionality in the CLI or for implementation in the Registry?

> Code on GitHub, GitLab etc can be browsed and audited, but packages in the registry are opaque and much harder to vet. If packages are built and published in the open these attacks become a whole lot harder. However, right now there’s no way of knowing where a package came from when you retrieve it from the registry.

> We want to solve the problem of npm packages being disconnected from their source by linking the published package to the source code repository and build that it originated from.
given that build output should never be committed, and builds/publishes are typically done on local machines, how can this link be verified?
This RFC will only work with packages published from trusted build infrastructure. It will not support local publish.
What are the criteria for "trusted"? What are the associated costs in standing up such infrastructure, and what portions of the community would be excluded by those costs?
@ljharb I believe the premise of this entire RFC is predicated on builds/publishes happening in the open/cloud (as the motivation aims at getting closer to being able to meet SLSA Level 3 standards - ref. https://github.com/npm/rfcs/blob/link-packages-to-source-and-build/accepted/0000-link-packages-to-source-and-build.md#non-falsifiable-provenance-using-a-trusted-builder & https://slsa.dev/spec/v0.1/levels#detailed-explanation)
Notably, "private/inner-source" is considered a "non-goal" (ref. https://github.com/npm/rfcs/blob/link-packages-to-source-and-build/accepted/0000-link-packages-to-source-and-build.md#non-goals) so any local builds/publishes would be out of scope afaik.
This is all covered in the RFC
If the manual approval allows me to input an actual second factor token, then that would be great! The things you linked don't seem to permit that yet tho.
I look forward to the RFC discussion.
@ljharb I'm surfacing that request to the actions team.
In general I see staging / approvals as out of scope for npm as a whole, but up for challenging those assumptions.
I would like to point out that one of the major use cases of SLSA provenance is to mitigate the threat of compromised (npm) registry credentials. That's because the provenance provides a link to the source repo. So if an attacker were to compromise a registry credential and push a malicious package, the provenance would change from "source=github.com/genuine/source" to "source=github.com/attacker/source" or even "". The registry or another system that monitors it would immediately detect the compromise. In essence, this means that the system fails "safe" so long as:
- you have 2FA on your registry account, protecting privileged settings, etc
- you use an automation token to push packages.
An attacker who gets hold of an automation token can push packages but these would be immediately detected because of the change of source. In a nutshell, you don't need staged builds if you upload provenance from a trusted builder. (I'm assuming that merely pushing packages in an attempt to DoS a user's registry account is not a valuable attack).
Let me know if there are some nuances I'm missing.
(Note: you could improve on automation token using OIDC to further harden the system and improve user experience.)
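A minimal sketch of the detection idea described above, assuming a hypothetical registry-side monitor; the `PublishEvent` shape and the pinning logic are illustrative, not part of the RFC:

```ts
// Hypothetical view of a publish event with the source repo extracted from
// its provenance attestation; field names are illustrative only.
interface PublishEvent {
  package: string;
  version: string;
  provenanceSource?: string; // e.g. "github.com/genuine/source", or undefined if missing
}

// Expected source repos, e.g. pinned the first time provenance is published.
const expectedSource = new Map<string, string>([
  ["some-package", "github.com/genuine/source"],
]);

// A registry-side (or third-party) monitor could flag any publish whose
// provenance is missing or points at a different repo than previously seen.
function checkPublish(event: PublishEvent): "ok" | "alert" {
  const expected = expectedSource.get(event.package);
  if (!expected) return "ok"; // nothing pinned yet
  if (!event.provenanceSource || event.provenanceSource !== expected) {
    // e.g. "source=github.com/attacker/source" or "" after a token compromise
    return "alert";
  }
  return "ok";
}
```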
> An attacker who gets hold of an automation token can push packages but these would be immediately detected because of the change of source. In a nutshell, you don't need staged builds if you upload provenance from a trusted builder.

💯 Yes, great summary! We can also make it difficult to change the provenance, e.g. locking it to a particular source repo once set, or preventing the publishing of new versions that omit it once it has been set. Changing this could be gated behind 2FA on your registry account, limited to admins for an org.
If you disaggregate the system pushing the package to the repo from the system building/signing the artifact, you will be able to protect against the upload token being compromised because you can validate the signed artifacts with published public keys.
Furthermore, sigstore (I believe mentioned in this RFC, I'm still reading through it) can provide some ability to detect a compromised signing key. Will comment more on this later after reading more.
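A minimal sketch of verifying a signed artifact against a separately published public key, so a stolen upload token alone is not enough; the key distribution and file handling here are assumptions:

```ts
import { createVerify } from "node:crypto";
import { readFileSync } from "node:fs";

// Verify a detached signature over a package tarball using a public key
// published out of band (by the build/signing system, not the uploader).
function verifyArtifact(
  tarballPath: string,
  signatureBase64: string,
  publicKeyPem: string
): boolean {
  const tarball = readFileSync(tarballPath);
  const verifier = createVerify("sha256");
  verifier.update(tarball);
  verifier.end();
  return verifier.verify(publicKeyPem, Buffer.from(signatureBase64, "base64"));
}

// Even if the registry upload token is stolen, an attacker cannot produce a
// valid signature without the separate signing key, so consumers checking
// signatures would reject the forged artifact.
```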
@bnb while the repository has historically been focused on the CLI, and this RFC touches a broader surface area, we felt this work was too important not to work through it with the broader community.

@MylesBorins should we expect discussion time set aside in the npm CLI RFC meeting, a separate synchronous meeting, or no synchronous meeting to discuss?
Very exciting.
Some suggested improvements.
> ##### What goes on the public Rekor ledger?
> Only public npm packages will be signed and published to the public [rekor.sigstore.dev](https://docs.sigstore.dev/rekor/overview) ledger by default.
>
> Privately scoped packages, or packages from private repositories, will not be signed or published to the public ledger. This will be determined by interrogating the npm registry at publish time. Users will be able to override this similar to the `cosign --force` command by passing an argument to `npm publish`, e.g. `--force-build-provenance`.
Is there a potential dependency confusion attack here? I.e. can I get a private package to publish to rekor if I upload a package of that name to the public registry?
Oh yeah, you mean a case where you have some internal corp dep pulled from a private registry and then someone finds this name and creates a public version? Not sure we can do much to mitigate the creation of these. Do you see an added risk in being able to create these?
An issue could be getting the wrong provenance attestations during verification if we just used the package name and version. We can mitigate this by querying for the attestations using a shasum of the installed package's tarball.
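A minimal sketch of the digest-based lookup idea, assuming a hypothetical attestation endpoint keyed by tarball digest; the URL and response shape are illustrative:

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Compute the digest of the tarball that was actually installed.
function tarballDigest(tarballPath: string): string {
  return createHash("sha512").update(readFileSync(tarballPath)).digest("hex");
}

// Hypothetical lookup: query attestations keyed by tarball digest rather than
// by name@version, so a same-named package on another registry can never
// match the wrong provenance entry. Uses Node 18+'s global fetch.
async function fetchAttestationsByDigest(digest: string): Promise<unknown[]> {
  const res = await fetch(`https://registry.example/-/attestations/sha512:${digest}`);
  if (!res.ok) return [];
  return (await res.json()) as unknown[];
}

async function provenanceFor(tarballPath: string): Promise<unknown[]> {
  return fetchAttestationsByDigest(tarballDigest(tarballPath));
}
```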
I think what @jchestershopify means is if I have package `x` internally and someone publishes `x`, the RFC says that npm will interrogate the registry for `x` and get the internal package signed accidentally because `x` exists on the public registry. This is especially heinous in this case, since the ledger is immutable.
When you say internal package `x` you mean on some other registry, right? I think in this case the signature would be for the public package `x` and associated via the shasum of the tarball, which won't match the internal package `x`. Maybe I'm missing something though?
We're also planning on creating a "release attestation" on the npm registry that gets uploaded to the public Rekor ledger. This is a signed statement that the npm registry accepted and authorized the publish for public package `x`, on top of the "build attestation" that gets created during publish in the CI/CD system, which links the source and build to the package.
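A rough sketch of how those two statements might be shaped; these interfaces are illustrative assumptions, loosely modelled on SLSA-style provenance, not the RFC's actual schema:

```ts
// Created in the CI/CD system at publish time; links source and build to the
// package (illustrative fields only).
interface BuildAttestation {
  subject: { name: string; digest: { sha512: string } };
  sourceRepository: string; // e.g. "https://github.com/genuine/source"
  sourceCommit: string;
  builder: string;          // identity of the trusted builder
  buildInvocation: string;  // e.g. a workflow run URL
}

// Created and signed by the npm registry itself, recording that it accepted
// and authorized this publish; also uploaded to the public Rekor ledger.
interface ReleaseAttestation {
  subject: { name: string; version: string; digest: { sha512: string } };
  registry: string;   // e.g. "https://registry.npmjs.org"
  acceptedAt: string; // ISO timestamp
}
```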
> The [above](https://www.esentire.com/security-advisories/npm-library-supply-chain-attack) [attacks](https://www.esentire.com/security-advisories/coa-npm-supply-chain-attack) [are](https://www.esentire.com/security-advisories/rc-npm-supply-chain-attack) examples of this, and they frequently occur due to compromised npm credentials but can also happen due to compromised CI/CD or builds.
>
> Code on GitHub, GitLab etc can be browsed and audited, but packages in the registry are opaque and much harder to vet. If packages are built and published in the open these attacks become a whole lot harder. However, right now there’s no way of knowing where a package came from when you retrieve it from the registry.
unpkg.com makes it much easier - it seems like maybe addressing this (making npm packages browseable on the web safely) would be a good first step?
I'm reasonably certain that the owner of unpkg.com would be happy to transfer the IP to npm if it meant it'd be an official solution (at the least, it's worth asking)
We already have a beta implementation of exploring code in npm and are funding improvements to it for live code audit on npmjs.com in parallel to this work.
cc @feross
> ## Goals
> - Establishes a verifiable link between a public npm package and the source repository and build it originated from.
> - Does not expose any Personally identifiable information (PII) about maintainers, e.g. emails.
emails are already exposed for all publishers to npm - this seems like it should be a non-goal.
This is about the data that ends up in the public ledger, separate from the registry
Users may be using different email addresses for their OIDC provider vs the one they use for npm.
ah ok, makes sense, thanks
> emails are already exposed for all publishers to npm

edit: missed the above discussion when posting this.

Yes, this is true. We're not proposing changing what goes in the packument. The thing we've focused on in this RFC is the signed statement about the repo and build that will go on a public immutable signature ledger (Rekor). We didn't want a solution where this signed statement would include any kind of identifying information about maintainers.
I have a friend who is an expert on this exact topic and an advocate for maintaining developer privacy. Would it make sense to invite them to review the proposal and provide feedback?
@fkautz yes please 👍 sounds like they might have some great context that we're also not aware of yet.
> - Verification should be performed without depending on any third-party systems other than the registry.
> - Compatible with third-party npm clients, e.g. `yarn` and `pnpm`.
> - Should allow third-party npm registries, e.g. GitHub Packages, Artifactory and Verdaccio to follow suit and implement similar interfaces.
> - Should be maintained with >99.9% uptime so that developers are not blocked from publishing new packages.
3 9's seems pretty paltry; can we aim higher?
Bear in mind this SLO affects publication only; it won't break builds if sigstore is offline. I expect most publishing authors will be fine with waiting a little bit.
+1, this SLO should be for signing, which must be online (for fetching an identity certificate and publishing to the log).
Note that the verification mechanism should be designed with offline verification in mind. For example, querying the log for checking for inclusion would be an online action, but this can be designed to be offline with a persisted proof of inclusion.
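A minimal sketch of what offline verification with a persisted proof could look like, using RFC 6962-style hashing as Rekor's log does; the `PersistedInclusionProof` bundle format is an assumption for illustration:

```ts
import { createHash } from "node:crypto";

// RFC 6962-style leaf/node hashing, as used by Rekor's transparency log.
const leafHash = (entry: Buffer): Buffer =>
  createHash("sha256").update(Buffer.concat([Buffer.from([0x00]), entry])).digest();
const nodeHash = (left: Buffer, right: Buffer): Buffer =>
  createHash("sha256").update(Buffer.concat([Buffer.from([0x01]), left, right])).digest();

// A proof bundle persisted at publish/record time so later checks need no network access.
interface PersistedInclusionProof {
  leafIndex: number;   // position of the entry in the log
  treeSize: number;    // size of the tree the proof was produced against
  auditPath: string[]; // hex-encoded sibling hashes, leaf to root
  rootHash: string;    // hex-encoded root from a signed checkpoint captured earlier
}

// Standard Merkle audit-path verification (RFC 6962/9162 algorithm).
function verifyInclusionOffline(entry: Buffer, proof: PersistedInclusionProof): boolean {
  let fn = proof.leafIndex;
  let sn = proof.treeSize - 1;
  let hash = leafHash(entry);
  for (const hex of proof.auditPath) {
    if (sn === 0) return false;
    const sibling = Buffer.from(hex, "hex");
    if (fn % 2 === 1 || fn === sn) {
      hash = nodeHash(sibling, hash);
      while (fn % 2 === 0 && fn !== 0) {
        fn = Math.floor(fn / 2);
        sn = Math.floor(sn / 2);
      }
    } else {
      hash = nodeHash(hash, sibling);
    }
    fn = Math.floor(fn / 2);
    sn = Math.floor(sn / 2);
  }
  return sn === 0 && hash.equals(Buffer.from(proof.rootHash, "hex"));
}
```

The checkpoint (signed tree root) still has to be fetched once, but after that the inclusion check itself needs no call to the log.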
Yes, this is only during publish, when talking to Sigstore infrastructure. Sigstore is run as free-to-use public-good infrastructure hosted on GCP, which has SLAs from 99.9% to 99.99% on the services used, AFAICT.
There's a lot to figure out before committing to SLOs on a shared service like this, with people on-call from different organisations. My hope would be that we can eventually get to four 9s uptime once this process is more mature.
I reached out to the sigstore team, and they pointed me to this as their current official statement, as of June 29, 2022: https://blog.sigstore.dev/an-update-on-general-availability-5c5563d4e400
They are targeting four months from June 29th (interpreting as end of October) to ramp up operations, tooling, and publishing SLOs.
Yep I believe the current target for the Sigstore GA is 99.9% uptime for the hosted versions of Fulcio and Rekor.
> ## Sigstore as signing solution
> The [Sigstore](https://www.sigstore.dev/) project has been selected as the solution to signing npm packages (see detailed explanation below). It is currently the only working solution that supports our key requirements:
> - Links packages hosted on the npm registry to the source and build they originated from (provenance information).
other than recording "an npm specifier" and "a github repo and SHA" together, how are these linked? published npm packages already include repo and SHA information in the packument, so how would sigstore's linking be different?
This information would come as an OIDC claim from the publishing party and would thus be verifiable, e.g. the claim of a specific repo would come from the GitHub Action itself, not metadata published to the registry from a personal machine.
Just want to be mindful/note that there's nothing in this RFC which mentions that source, CI or build envs will be or will become immutable, which puts their auditability into question (e.g. I can build & publish my package with a trusted builder but then delete that action/workflow/repo, killing the link/trace).
Does this mean you have to use GitHub as your source repo?
No
We're proposing ways to make this information harder to falsify. Currently you can say the repo is anything and it's not validated, so an attacker can easily forge this information.
A part of the solution is using information from the OIDC id token that you can get from a supporting CI/CD system; this includes information about the repo and commit. The whole thing is then signed and can be verified against the CI/CD's public key, for example.
> nothing in this RFC which mentions that source, CI or build envs will be or will become immutable

Yes, this is true. Worth noting though that the entry on Rekor and the stored copy on npm will show an audit trail of where it was originally built from. This might still be useful for post-mortem analysis. Having a broken link could also end up being a strong signal to the community that a package is no longer maintained and should not be used.

> Does this mean you have to use GitHub as your source repo?

No, we'll support any provider supported by Sigstore's Fulcio service. Today this only includes GitHub Actions but it's vendor neutral, and any provider that meets the following requirements will be supported (a rough sketch of such a claims set follows the list):
- OIDC ID tokens identifying the current workflow/run/build.
- ID token with a custom audience (needs to be set to `sigstore` for Fulcio to accept it).
- Claims about the code repository, commit, build and actor.
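A rough sketch of the kind of ID token claims this implies, with names modelled on the claims GitHub Actions already issues; how other providers would name them is an assumption:

```ts
// Illustrative shape of an OIDC ID token payload from a CI/CD provider that
// meets the requirements above (names modelled on GitHub Actions' token).
interface CiOidcClaims {
  iss: string;        // issuer, e.g. "https://token.actions.githubusercontent.com"
  aud: string;        // must be "sigstore" for Fulcio to accept it
  repository: string; // code repository, e.g. "owner/repo"
  sha: string;        // commit the build ran against
  workflow: string;   // identifies the build definition
  run_id: string;     // identifies the concrete run/build
  actor: string;      // who triggered the run
}

// Minimal publish-time sanity check before asking Fulcio for a certificate.
function acceptableForSigning(claims: CiOidcClaims): boolean {
  return claims.aud === "sigstore" && claims.repository.length > 0 && claims.sha.length > 0;
}
```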
> Just want to be mindful/note that there's nothing in this RFC which mentions that source, CI or build envs will be or will become immutable, which puts their auditability into question (e.g. I can build & publish my package with a trusted builder but then delete that action/workflow/repo, killing the link/trace)

I'm not sure if `npm` will want to do this, but whether links are still valid is something that can be checked during package/artifact verification.
> I'm not sure if `npm` will want to do this, but whether links are still valid is something that can be checked during package/artifact verification.

Yeah, maybe this could be an optional check that can be performed during verification. We probably don't want to do this by default as it could add significant time to the verification if we're checking 1000s+ links for a typical install.
Another idea would be to send the link through a proxy service on npm, and if this service detects a 404 it updates some state on the package that shows a big red warning next to the link the next time you view the package on npmjs.com, for example. This might cause more headaches than it's worth though, if requests intermittently fail or if the source repo is down and suddenly a bunch of links are broken and need to be fixed when the service is back up.
> Sigstore infrastructure and tooling are currently not considered production ready and are run on a best-effort basis. The Sigstore project is working towards a [General Availability](https://blog.sigstore.dev/an-update-on-general-availability-5c5563d4e400) release later this year but we don't yet know what these guarantees will look like.
>
> As such, there are several risks of adopting Sigstore for npm that are worth calling out:
> - No buy-in from the broader open source npm community.
tbh this seems like a potential dealbreaker
Any of these risks are a potential deal breaker 😇
Our intent and hope is that these will not prove to be a problem but we want to call out that these are indeed risks that could result in this specific approach not working out.
I don't disagree, but IMO this isn't because there wouldn't be buy-in; it's more that there's been near-zero exposure so far.
> #### How should maintainers authenticate to sign packages?
> [Sigstore](https://www.sigstore.dev/) issues short-lived “disposable keys” notarized against log-ins with OIDC identities. So what OIDC identity provider (IdP) do maintainers use?
>
> This has been the subject of some controversy; for instance, in the [proposal to add Sigstore signing to RubyGems](https://github.com/rubygems/rfcs/pull/37), users worried about “vendorization” (making the repository reliant on third parties to function, e.g. GitHub, Google or Microsoft) and privacy (some maintainers are pseudonymous; Sigstore certificates include email addresses by default when signing using the [cosign](https://github.com/sigstore/cosign) CLI tool).
this should be considered a big concern for npm as well
Would npm consider being an OIDC Provider for users who do not want to rely on a third party provider? It already has unique user names, manages user identities, and has 2FA support.
It is an option that was brought up in #626 (comment).
While it does provide a good user experience, the concern is that it doesn't isolate the identity provider from the package repository, so a compromise of a user account for one would compromise both.
> We want to solve the problem of npm packages being disconnected from their source by linking the published package to the source code repository and build that it originated from.
>
> Once a package includes provenance information (where and how it was built) we can start showing this when browsing a package on the npm registry. You would be able to click through to the particular commit and build that published a given version:
Do you expect that this data will be able to be reflected in third-party registry caches like Verdaccio, jFrog Artifactory, and Sonatype Nexus, or is it being designed in a way that will be proprietary to the npm website? Additionally, will the API be somewhat reproducible for those third-party registries for non-public modules?
Ah, this was answered in the bullets below.
> is it being designed in a way that will be proprietary to the npm website? Additionally, will the API be somewhat reproducible for those third-party registries for non-public modules?

We don't want it to be proprietary to the official npm registry, which is partly why we're excited about adopting Sigstore. We're planning to document what is required by third-party npm registries to support this functionality and make it as easy as possible to adopt.
There are several of us in both the CNCF and OpenSSF who are willing to help spec this out or provide guidance. Please let us know if you would like assistance.
> <img width="376" alt="screenshot of npm package with provenance information" src="https://user-images.githubusercontent.com/20165/176637201-fdb02c11-c810-48e1-a203-ea4fb2008ad3.png">
>
> Developers consuming open source packages should get the benefit of this without any changes to their workflows. To begin with, the package integrity should be verified when running the `npm audit signatures` command and eventually transparently integrated into `npm install` and enabled by default.
It would be nice to be able to have the existing `npm audit signatures` functionality and the new functionality as independent features. Not saying that this shouldn't be added to `npm audit signatures`, but more that we should also ensure that end-users can run the two sets of functionality independently.
Indeed; the entire point of `npm audit foo` when we discussed `npm audit signatures` was that every `foo` would be independently configurable and invokable.
> existing `npm audit signatures` functionality and the new functionality as independent features

Thoughts on verifying all available signatures (registry and build) when running `npm audit signatures`, but allowing you to filter the type with an argument like `npm audit signatures --type=registry|build` (naming tbd)?
There is going to be broader shared behavior between the various signature verification commands, such as "what do you do if packages are not yet in the cache", which leads me to feel a single command makes far more sense.
FWIW I'm not at all against a single command, but I have seen so many situations where people want just one task and don't care about other tasks in a bundled command.
It feels... very un-npm-y to me not to be able to opt in to choosing more limited but precise sets of work that do the exact thing I want completed, especially when the functionality is fundamentally not related (checking what you received is what's on npm vs. what this RFC proposes).
Also to answer @feelepxyz: that's a totally fine solution to me!
`npm audit` is intended to be the single command - every subpart of it is intended to be granularly configurable and disableable by users, so that nobody is forced to do a check they don't want.
If the consensus is to split it up into two separate functions, I would recommend splitting something like this:
- `npm audit signatures`: runs everything related to signatures
- `npm audit signatures compare`: only computes the hashes and compares
- `npm audit signatures integrity`: validates the signature and ensures it is present and trusted in sigstore

This way, `npm audit` remains a single command, and `npm audit signatures` does not result in a user misunderstanding what it does.
> Open source maintainers should be able to add build provenance information to their packages with near-zero initial and ongoing overhead.
>
> ## Goals
> - Establishes a verifiable link between a public npm package and the source repository and build it originated from.
It might be good to clarify at some point in this RFC if "source repository" must be GitHub or not. I'm 100% sure this will be a question that comes up, so it'd be good to be clear about that.
There's no requirement on the source repo being a GitHub repo. The requirement is on the CI/CD system. So, for example, when CircleCI supports the Fulcio service (which verifies OIDC id tokens from the CI/CD system and mints short-lived signing certificates), you would be able to use any source repo supported by Circle.
Will clarify this 👍
Make sure to consider that the source git repo may change, e.g. tags are mutable, and branches may be deleted and recreated with new content. Malicious content could be uploaded (by the dev, or through a compromised user credential), a build submitted, then the branch deleted and recreated in an attempt to hide the payload.
(May already be covered, still reading through the document.)
> Malicious content could be uploaded (by the dev, or through a compromised user credential), a build submitted, then the branch deleted and recreated in an attempt to hide the payload.

TBH we don't yet have great answers around these issues. We could try and mitigate some things on the npm side, e.g. by optionally verifying the repo still exists during verification, but verifying the correct code still exists is probably not feasible.
It would be great if we could somehow create immutable checkpoints in the source code repo/CI system. One naive idea could be to package up a shallow clone of the git repo as a build artifact 🤔
> - Establishes a verifiable link between a public npm package and the source repository and build it originated from.
> - Does not expose any Personally identifiable information (PII) about maintainers, e.g. emails.
> - Avoids developer-managed keys (as there's no good way to offer trust to the community given the challenges of distributing public keys). This, along with opt-in signing, adds near-zero initial and ongoing overhead for open source maintainers.
> - Maintainers can opt-in to including build provenance information (where the code lives and how it was built) when publishing using `npm publish` for public packages.
> - Incentivizes maintainers to build in the open because of the strong guarantees that this offers.
> - Verification happens transparently on `npm install` without the need to obtain or manage additional tools or keys.
> - Verification should have a negligible performance impact on `npm install`.
> - Verification should be performed without depending on any third-party systems other than the registry.
> - Compatible with third-party npm clients, e.g. `yarn` and `pnpm`.
> - Should allow third-party npm registries, e.g. GitHub Packages, Artifactory and Verdaccio to follow suit and implement similar interfaces.
> - Should be maintained with >99.9% uptime so that developers are not blocked from publishing new packages.
> - Should allow future extensions to support centrally managed signing authorities such as certificates managed by an enterprise and inner source within air-gapped enterprise environments.
> - Buy-in from the broader open source community.
Almost all of these seem to be effectively external, non-maintainer benefits. Are there any more maintainer-focused goals? Right now, I see very little incentive for maintainers to be excited for this outside of ones like me who work at companies and get paid to do this kind of maintenance.
This is good feedback 👍 Part of the story is not placing any kind of maintenance overhead on maintainers, for example having to manage keys. Another part in my mind is making publishing from CI/CD safer and easier, which is not covered in this RFC but which we're planning as a follow-up.
We want to allow authorising the publish using the same OIDC id token from the CI/CD system. So instead of using a long-lived access token to authorize the publish, you would log into your npm dashboard and set up which source repo is allowed to publish which package. The publish step then exchanges the OIDC id token for a short-lived access token after the npm registry has checked that the source repo in the id token matches the allowed one.
Once you've set up publishing in this way, you would not need to use long-lived access tokens, and the risk of stolen credentials basically goes away if you also use 2FA on your npm account. The setup for this would be exactly the same as what we're proposing in this RFC, so you essentially get it for free. I imagine adoption would go up a lot once we have both features in place and all major CI/CD providers are fully supported.
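A minimal sketch of that token exchange; the registry endpoint, parameter names and response shape are hypothetical:

```ts
// Hypothetical publish-time flow: trade the CI/CD OIDC ID token for a
// short-lived npm publish token, instead of storing a long-lived automation token.
async function getShortLivedPublishToken(oidcIdToken: string, pkg: string): Promise<string> {
  // The registry would verify the ID token's signature and issuer, then check
  // the repository claim against the repo the maintainer allow-listed for `pkg`.
  const res = await fetch("https://registry.example/-/oidc/token-exchange", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ idToken: oidcIdToken, package: pkg }),
  });
  if (!res.ok) throw new Error(`token exchange rejected: ${res.status}`);
  const { token } = (await res.json()) as { token: string };
  return token; // scoped to this publish, expires quickly
}
```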
+1 to more clearly articulating the benefits to maintainers. You may also wish to separate goals (e.g. improve security) from requirements (e.g. zero overhead).
To me, the main benefit for maintainers is:
- Maintainers gain increased protection against malicious upload by compromised credentials or (other) rogue maintainers.
> - Running the signed timestamp authority for Rekor as well as a Certificate Transparency monitor for Fulcio. This will help spread the trust outside the four walls of Sigstore.
> - Maintaining the Sigstore trust root and having a root key holder as part of the group of Sigstore root key holders.
>
> We also remain open to an alternative solution if Sigstore is not able to meet npm's requirements for uptime and support.
Are there any potential alternatives that have been identified if this is the case?
Not concrete alternatives, but the rough shape would probably look similar to the architecture in this RFC, with the difference that npm/GitHub would be running the services instead, either fully fledged or a pared-down version.
It would be sad if we got to this, as it would probably mean that we either end up with some proprietary signature service just for npm/GitHub or potentially multiple competing public-good instances.
> The Cosign functionality will be embedded directly in npm CLI, so we're left with the public [fulcio.sigstore.dev](https://docs.sigstore.dev/fulcio/overview/) (CA) and [rekor.sigstore.dev](https://docs.sigstore.dev/rekor/overview/) (ledger) services.
>
> ##### What does [fulcio.sigstore.dev](https://docs.sigstore.dev/fulcio/overview/) give us?
> - Independent party to validate claims of the OIDC identity token which contains references back to the repo, workflow run and git SHA.
If this RFC is only about provenance generated through CI, this is good.
Do you want to support other identities signing provenance or artifacts such as email or SPIFFE IDs?
For now, it's only focusing on CI systems integrated with Fulcio. I would say that it's harder to define policies around email and SPIFFE IDs. One central concept is to offer a set of "trusted builders" across all supported CI systems. As references to those should be part of the certificate (i.e. they are validated by Fulcio), we have a way forward to identify/validate those packages built by a trusted builder.
But this does not of course rule out future extensions.
> Do you want to support other identities signing provenance or artifacts such as email or SPIFFE IDs?

Possibly, yes, eventually! We haven't figured out how to support private package and enterprise use-cases here. I can imagine that we will want to support several different signing identities for these use-cases.
Less sure about email identities, as it really only tells us who published the package and not where and how the package was built. It also introduces the issue of linking that identity to an authorized maintainer. Open to exploring ideas around it though.
I'd like to help here if possible. I have experience with SPIFFE, SBOM generation, and in-toto and am happy to assist. I can also ask colleagues involved with projects like in-toto to help weigh in on the design.
The trick here I think will be to sign a top-level document that embeds the claims. Perhaps the developer key can be used to sign the commit, then the commit can be embedded in a document that is signed by the SPIFFE SVID, then submitted to Sigstore.
> - Privacy: Only a handful of identity providers are supported today that might not suit all users (GitHub, Google or Microsoft).
> - Security: Some users might only have secondary, less secure, accounts on one of the supported services making it an easier target.
> - Does not include provenance information about where the release came from and how it was built.
> - The email might not match the one used on the registry making it hard to validate that it's from a legitimate maintainer.
+1, this is a critical point - how do you create a verification policy that maps an identity to a package in such a way that a) you can update it in the case of compromise or a new maintainer takes over, and b) you can differentiate between a valid update and an update that occurs because a maintainer's account on an IdP is compromised? Avoiding this is ideal.
> **Cons**
> - Privacy: the identities used to sign packages are public (though users can make a pseudonymous account).
> - Privacy: Only a handful of identity providers are supported today that might not suit all users (GitHub, Google or Microsoft).
> - Security: Some users might only have secondary, less secure, accounts on one of the supported services making it an easier target.
Agreed, no way to check that a user has MFA enabled on one of those accounts for example, as this isn't included in identity tokens.
IMO this should borderline be a requirement. I'm amazed that this isn't supported. If my publishing email is j@gmail.com but I publish the password publicly, this means anyone could sign in to that email and publish as me, which... fundamentally gets around the entire point of this.
A concern that has been brought up in the past is including the MFA status in identity tokens makes it easier to find targets. It would probably be simpler for all identity providers to require MFA.
> It would probably be simpler for all identity providers to require MFA.

No opposition from me :)
> The plan is to start with direct vendor support in Fulcio. Fulcio will need to be patched to produce certs containing custom claims data from Circle CI, GitLab and Google Cloud Build.
>
> We will also work with OpenSSF and other package ecosystems adopting [Sigstore](https://www.sigstore.dev/) on an official claims set that any provider can implement. Once we have this each vendor could be added to Fulcio by simply specifying the standard OIDC client configuration.
(From a Fulcio maintainer) This would be great!
> However, adopting something like Sigstore raises several major questions (beyond the risks detailed above):
> - What security benefits does Sigstore provide?
> - How should maintainers authenticate to sign packages? The solution should provide both high assurance in the authenticity of packages and respect maintainer privacy.
I would also posit that "What's the benefit for maintainers" is an extremely important question. This current set is assuming that maintainers care and want to do signing, which isn't guaranteed.
If there's not a good benefit for maintainers, this will create a bad situation where maintainers are pressured by consumers to take on this additional workload with no tangible benefit. Resentment and hostility easily follow in situations like that (see: typescript definitions for package maintainers who don't want to do typescript).
There will inevitably be maintainers who object to signing for a variety of reasons. While I think it is reasonable to listen, a software repository also answers to the needs of end users.
For a maintainer a compromised package is perhaps embarrassing and distressing. For end users it is potentially catastrophic. And there are far more end users than maintainers. I don't see any other way to do the utilitarian calculus than to find that the needs of the end users for enhanced security are paramount.
edit: to be clear I am a passer by, not an npm person.
In any ecosystem where you have two cohorts of users, producers and consumers, it's simply not viable to do utilitarian calculus, because neither group can exist without the other. Both groups' needs must be weighed together.
e.g., mandatory 2FA for maintainers is annoying and inconvenient, but the burden on maintainers is relatively small, and the benefits for maintainers and end users are exceedingly high, so it works out to enforce it.
If the burden is too large, or the benefits aren't bulletproof enough or high enough, it wouldn't likely be worth the tradeoffs.
> Both groups' needs must be weighed together.

That is in fact the calculus. Two variables: cost to maintainers, cost to end users, integrated under a curve with respect to the spectrum of potential outcomes. The area under the curve is dominated by the impact on users because there are so many and because they are more negatively affected than maintainers by a compromise.

> If the burden is too large, or the benefits aren't bulletproof enough or high enough, it wouldn't likely be worth the tradeoffs.

The burden here is quite minimal. One click for the common case.
I think it needs to be minimal for the uncommon cases too. Prolific maintainers can be eccentric, myself included, and doing both unpaid technical and emotional work does not tend to predispose humans to being charitable.
> @feelepxyz a CI/CD machine is just as attackable as my local machine, though - perhaps more so, since it's a known target that's exposed to the open web. Why can't my local machine - something that has proven to be more trustworthy so far than all CI providers I'm aware of, largely due to nobody trying to attack it I'm sure - be able to sign it?

I would really argue that this isn't true, unless you're running something without any security controls. I would assume the CI/CD systems here would follow standards, be pen tested, etc. Compare that with a local workstation that could be downloading and running arbitrary software in addition to the build you're running; that build could be attacked from a pretty unbounded number of vectors. Compare that with a CI/CD system that has adequate network, identity, etc. controls, where the attack surface is minimized.

For me it isn't that I don't trust your personal build machine, it's that we can't enforce any kind of provenance claims as to what repo the package was built from. An attacker that's publishing a malicious version of some package from their local machine could easily state that the package is from the original legitimate repo, without it actually reflecting the changes in the published package.

This doesn't fix the problem, right? A CI/CD system that has been compromised can falsify some elements of the provenance just as well as a laptop.
> This doesn't fix the problem, right? A CI/CD system that has been compromised can falsify some elements of the provenance just as well as a laptop.

This is my point, I think - that there's nothing inherent about a CI/CD system that isn't also true about a personal laptop, and vice versa - the only difference is likelihoods due to competence, target awareness/attractiveness, which software is being run, etc.
That's true. However, I think CI/CD is generally easier to secure than your laptop. A CI/CD system can be audited, and configured through code, to ensure, for example, that builds happen hermetically and only download pre-approved tools with a provenance trail. Your laptop, unless it's managed by some other system, doesn't have similar properties. Also, in the case of packages that have multiple developers and maintainers, it makes it clearer that packages are built through one system that has been approved by the project.
In the case of, let's say, GHA, you need to trust that GHA is doing the right things from a workflow perspective, OIDC, etc. GHA is taking on a lot of risk to its reputation, but most likely has the resources to do so. If I build an npm package from my local laptop it's mostly on me. I could inadvertently have a rootkit on my machine, be using a compromised OS kernel, have downloaded malicious software, etc. It is harder to trust an anonymous actor's laptop compared to a potentially audited/threat-modeled CI/CD system or service.
Perhaps - but there are few personal laptop environments more objectively trustable than mine based on track record :-) Without a doubt the majority will want to use this system, and probably should! However, if it's made a requirement - explicitly, or implicitly via creating metadata that effectively punishes an author for not complying - that seems like it would be harming maintainers for a very dubious benefit.
Trust has economies of scale. Asking me to trust every random laptop means that, to achieve a similar level of confidence to a highly defended central provider, I would need to interview a bunch of people who, as has been pointed out elsewhere in this discussion, might object to the added burden.
Put another way: I don't know you. I know Github. Given my current priors, I trust them more than you, and it's costly for both you and I to update those priors.
> Sigstore has three main components: a CLI tool (Cosign), a Certificate Authority ([fulcio.sigstore.dev](https://docs.sigstore.dev/fulcio/overview/)), and a time stamping and immutable ledger service ([rekor.sigstore.dev](https://docs.sigstore.dev/rekor/overview/)).
>
> The Cosign functionality will be embedded directly in npm CLI, so we're left with the public [fulcio.sigstore.dev](https://docs.sigstore.dev/fulcio/overview/) (CA) and [rekor.sigstore.dev](https://docs.sigstore.dev/rekor/overview/) (ledger) services.
As someone unfamiliar with this ledger: are there any concerns around exponential growth of this ledger and the insertion of new entries slowing down publishing? Other... ledgers that I'm aware of begin to hit speed issues that would be concerning at the volume of all published packages.
Additionally, when you say "The Cosign functionality will be embedded directly in npm CLI", does this mean that the CLI will reach out to the Fulcio/Rekor APIs, or that additional binaries would be included?
The ledger is sharded - every year a new tree is created, so this should alleviate concerns around performance. Additionally, while not currently implemented, we'd like to have log mirrors to distribute the load, but this will require the community operating mirrors, which comes with performance expectations and storage costs.

> The Cosign functionality will be embedded directly in npm CLI

I assume this is referring to performing the same behavior as Cosign, using the sigstore-js library to call out to Rekor/Fulcio.
> Every year a new tree is created, so this should alleviate concerns around performance.

I'd be really curious to know what the throughput is. npm has... a lot of publishes, and I get concerned about even a yearly reset if this will also include other ecosystems that are also growing, plus new ecosystems that haven't yet been invented.
What happens when the equivalent of all packages in 2022 are published in one week? 😅
It will likely take a very long time before all daily package publishes are signing their builds; I'm assuming the uptick will be gradual over time.
The database that backs the transparency logs of both Rekor and Fulcio has very high throughput. It's the same database that powers the certificate transparency log for all SSL certs being issued and performs over 2,000 writes per second.
> It will likely take a very long time before all daily package publishes are signing their builds; I'm assuming the uptick will be gradual over time.

IIRC npm's growth over time hasn't necessarily been gradual 😅

> The database that backs the transparency logs of both Rekor and Fulcio has very high throughput. It's the same database that powers the certificate transparency log for all SSL certs being issued and performs over 2,000 writes per second.

IMO it would be really nice to do some of the math before this RFC is ratified to see how much runway we have until we hit 2,000 writes per second across all the ecosystems that have begun considering this. It'd be good to know that this is/isn't a problem and how far out we need to start thinking about being concerned :)
I agree that some back-of-the-envelope arithmetic would be ideal. My current estimate is that it'll be no big deal: 2k writes per second works out to roughly 173M per day. From searching a bit I see counts of ~2M packages on npm. I doubt that every package is signed every day, so there's plenty of headroom.
The 2k probably isn't a hard upper limit either. Let's Encrypt reported early last year on their current DB servers -- 2x EPYC giving 64 cores, with 2TB RAM. It's possible to buy 4 socket EPYC systems with up to 4TB of RAM.
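For concreteness, here's the rough headroom math as a runnable sketch; the daily publish volume is an assumed worst case (every package published and signed daily), not a measured figure:

```js
// Back-of-the-envelope headroom check (illustrative numbers only).
const rekorWritesPerSecond = 2000;                         // figure quoted above
const writesPerDay = rekorWritesPerSecond * 60 * 60 * 24;  // 172,800,000 per day

// Assumed worst case: every one of ~2M npm packages publishes (and signs) daily.
const assumedDailyPublishes = 2_000_000;

console.log(writesPerDay);                          // 172800000
console.log(writesPerDay / assumedDailyPublishes);  // 86.4x headroom
```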
Sigstore supports both email and CI/CD workload-based OIDC identities:

##### Email based identity provider
For instance, Google, GitHub, Microsoft (supported by Sigstore today) or any other IdP that Sigstore might support.
I know this is a problem introduced by proxy of using sigstore, but I'm uncomfortable with the likely future where maintainers who want to publish a package are relatively forced by their users to identify with a limited set of platforms that have extremely valid and well-documented criticisms around privacy (Google/Microsoft tracking, GitHub backing up on cookies policy), politics (GitHub ICE, Microsoft DoD/JEDI deals, Google's Dragonfly, to name a few), and other ethical issues (treatment of women/Black/trans/other marginalized employees).
I don't fully believe that opting out of this feature for those reasons is a particularly equitable solution, nor one that will be an actual choice in the future.
This is mentioned as the Vendorization problem. RubyGems was the first to encounter this discussion and a number of alternatives were floated both within the RubyGems discussion and in discussion with peers in other ecosystems (including npm).
- Grow the list of endorsed IdPs. The current list isn't meant to be final, it's just what was convenient to bootstrap. There are many candidate IdPs who are non-commercial.
- Operate an IdP specific to npm. This solves the vendorization problem (you've already decided to trust npm), but it almost completely cancels a critical advantage of independent IdPs, which is that, to sign and upload, an attacker will need to compromise two accounts.
- A "neutral" IdP, independent of npm, probably shared among multiple ecosystem software repositories, operated by some trusted third party (at a first guess, the OpenSSF).
- An email-based flow for verification. This has mostly been rejected as "OIDC over SMTP" -- a real possibility of protocol design flaws makes this highly risky.
We're sidestepping some of the problems with email-based identities in this proposal by just supporting workflow identities from supported CI/CD systems. That said, we might want to support email-based identities in the future if support improves.
I certainly hope vendorization will be addressed from the beginning by offering alternative solutions. As other package managers run into the same issue, I see an opportunity for collaboration. If vendorization isn't addressed, you can expect a vocal group to resent this proposal, causing confusion and doubt and ultimately leading to failure of the whole thing. Improving supply chain security is too important to fail, so it should be done right to get everybody on board.
For reference, some further discussion of the Neutral IdP idea took place in Fulcio issue 444.
> I certainly hope vendorization will be addressed from the beginning by offering alternative solutions.
💯 One of our internal goals of publishing this RFC is to help start these conversations with CI providers. I'm also excited about Sigstore as a solution to software signing for this reason. It's a vendor-neutral entity that can help create alignment.
I'm not closed or opposed to extending npm package signing to support some kind of "author/maintainer" IdP in future that could allow maintainers to publish from their laptops. For the first iteration I think we'll start with only supporting build or workflow identities from CI/CD providers.
Co-authored-by: Jordan Harband <ljharb@gmail.com>
Co-authored-by: Tierney Cyren <accounts@bnb.im>
- Doesn't expose any personally identifiable information about maintainers.
- Avoids developer-managed keys by placing trust in the CI/CD identity provider.

Today this means only commercial CI/CD providers will be supported. This is far from where all developers are publishing today, so this will add friction and slow adoption. All major CI/CD providers offer free plans for Open Source projects and we hope that all packages will eventually be built out in the open to make supply chain attacks harder to execute.
If this is the goal, you are going to get questions about 2FA and automated publishing. While strides have been made, are there further efforts to continue to make publishing from CI a reasonable experience given that this will effectively force all maintainers of large-scale packages to publish from CI?
Copying answer from another thread.
> are there further efforts to continue to make publishing from CI a reasonable experience given that this will effectively force all maintainers of large-scale packages to publish from CI?
Yes! We're considering a proposal to authorize the publish using the same OIDC id token from the CI/CD system (still early days so haven't committed to this work yet). So instead of using a long-lived access token to authorize the publish, you would log into your npm dashboard and set up which source repo is allowed to publish what package. The publish step then exchanges the OIDC id token for a short-lived access token after the npm registry has checked that the source repo in the id token matches the allowed one.
Once you've set up publishing in this way, you would not need to use long-lived access tokens and the risk of stolen credentials basically goes away if you also use 2FA on your npm account. The setup for this would be exactly the same as what we're proposing in this RFC so you essentially get source and build linking for free if you set up OIDC publishing. I imagine adoption would pick up a lot once we have both features in place and all major CI/CD providers are fully supported.
I think we can also ease the migration by providing reusable workflows for publishing npm packages on CI/CD. On Actions you could start by using a manual workflow trigger to publish.
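To make the flow above a bit more concrete, here's a hypothetical sketch of the registry-side token exchange. The JWKS endpoint shown is GitHub Actions' real OIDC issuer, but the audience value, trust-policy shape and token-minting helper are all illustrative assumptions rather than a committed npm API:

```js
// Hypothetical sketch of the registry-side OIDC publish flow described above.
// The audience value, trust-policy shape and token-minting helper are
// illustrative assumptions, not a committed npm API.
const crypto = require('crypto');
const { createRemoteJWKSet, jwtVerify } = require('jose');

// Public keys of the CI provider's OIDC issuer (GitHub Actions shown as an example).
const JWKS = createRemoteJWKSet(
  new URL('https://token.actions.githubusercontent.com/.well-known/jwks')
);

// Hypothetical trust policy a maintainer configured in their npm dashboard.
const trustPolicy = {
  packageName: 'my-package',
  allowedRepository: 'my-org/my-package',
};

// Hypothetical helper: the registry would mint and persist a short-lived token.
function mintShortLivedToken({ packageName, ttlSeconds }) {
  return {
    token: `npm_${crypto.randomBytes(24).toString('hex')}`,
    packageName,
    expiresInSeconds: ttlSeconds,
  };
}

async function exchangeIdTokenForPublishToken(idToken) {
  // 1. Verify the id token was really issued by the CI provider.
  const { payload } = await jwtVerify(idToken, JWKS, {
    issuer: 'https://token.actions.githubusercontent.com',
    audience: 'npm', // assumption: whatever audience the registry asks CI to request
  });

  // 2. Check the token's claims against the maintainer's trust policy.
  if (payload.repository !== trustPolicy.allowedRepository) {
    throw new Error('This repository is not allowed to publish this package');
  }

  // 3. Hand back a short-lived, single-package publish token instead of a
  //    long-lived automation token.
  return mintShortLivedToken({ packageName: trustPolicy.packageName, ttlSeconds: 300 });
}

module.exports = { exchangeIdTokenForPublishToken };
```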
While longer term we are hoping to implement something on top of OIDC, we are currently working on Granular access tokens as a successor to automation tokens. These tokens will:
- Be scoped to a specific package or scope
- Be scoped to read or read/write permissions
- Have an expiry date
As mentioned above, longer term we are looking at OIDC to remove the need for tokens, plus a streamlined GitHub experience to make setting everything up a breeze. All of this is intended to be built on open technologies so that other CI/CD environments can benefit from a similar implementation and so that we can explore scaling these approaches to other package registries.
None of the above is part of the signing work outlined in this RFC though.
What I can say is that the remainder of this year has a big focus on "package security" for publishing, in the same way we've spent the last 9 months extremely focused on "account security".
Website CAs currently use a similar approach when they publish all issued certificates to a [transparency log](https://certificate.transparency.dev/).

### CI/CD OIDC provider support
Today only GitHub Actions is fully supported by Fulcio. We’d like to see support added for any public CI/CD service that can meet these requirements:
If this means that GitHub Actions is the only way to publish verified packages, I'm a hard and vocal -1 on this until that's not the case. There are extremely valid reasons to not use GitHub Actions that haven't been addressed in the years it's been out, and there are major packages in the ecosystem that do not use GitHub Actions for those reasons.
(From the following context, that does not seem to be the case for npm to ship this, but I am going to leave this here to ensure that perspective is logged.)
100%, GitHub Actions is the only provider that fully supports the requirements from Fulcio today, but I'd like to see other CI/CD providers like GitLab and Circle be supported before this functionality launches for npm.
Would it be feasible to allow any OIDC provider to include signatures with published npm packages and let the npm CLI configure which providers it trusts? npmjs.com can choose to only display verified badges for GitHub Actions and other trusted providers, but other communities could choose to trust providers of their choice.
> Would it be feasible to allow any OIDC provider to include signatures with published npm packages and let the npm CLI configure which providers it trusts?
The current proposal relies on the OIDC Identity Providers trusted by Fulcio, which are currently defined here: https://github.com/sigstore/fulcio/tree/main/federation
On top of this I could also see npm providing some way for you to write your own policies about which of these providers to trust. The hope is that the list of supported and trusted providers in Fulcio grows significantly over time as it's vendor neutral and open source.
One idea to make it easier to support more providers in Fulcio would be to come up with a spec for ID token claims, so any provider that implements an officially supported set of claims could be onboarded more easily.
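Purely as an illustration of what that common claim set might contain (claim names modeled loosely on the claims GitHub Actions already issues; this is not a ratified spec):

```js
// Hypothetical decoded CI/CD id-token payload illustrating the kind of
// "common claims" a cross-provider spec might standardise on.
// Claim names are modeled loosely on GitHub Actions' OIDC token; this is
// not a ratified spec.
const exampleIdTokenClaims = {
  iss: 'https://token.actions.githubusercontent.com', // identifies the CI provider
  sub: 'repo:my-org/my-package:ref:refs/heads/main',  // workload identity
  aud: 'sigstore',                                    // audience requested by the signer
  repository: 'my-org/my-package',                    // source repo the build ran from
  ref: 'refs/tags/v1.2.3',                            // tag/branch that was built
  sha: 'f1e2d3c4b5a697887766554433221100aabbccdd',    // commit that was built (made-up value)
  workflow_ref: 'my-org/my-package/.github/workflows/release.yml@refs/tags/v1.2.3',
  run_id: '1234567890',                               // link back to the build log
  exp: 1767225600,                                    // short expiry
};

// Fulcio would embed (a subset of) these claims in the short-lived signing
// certificate, which is what ultimately links the package to its source/build.
console.log(exampleIdTokenClaims);
```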
- Support signing from self-hosted CI/CD systems.
- Support signing from local machines or laptops.
  - The initial goal does not support signing from your local laptop, as our primary aim is to link the package to the source repo it was published from. There's no scalable way of making sure this source information isn't falsified when it comes from a local machine.
Can you elaborate here on why a local machine that I can configure is different from a CI machine that I can configure?
It's a state space problem. A CI/CD system that is hosted immutably and can be verified/audited will be simpler to reason about compared with a laptop that usually has lots of arbitrary stuff installed.
Supporting signing from both CI/CD and local machines increases the scope of the initial work quite a bit, so we've decided to start with just signing from CI/CD, as the identity solution (OIDC id tokens issued by the CI provider) does not extend to local machines.
There are ways we could support signing from local machines in the future, e.g. using proof of an email address linked to a maintainer, but this gives different guarantees than signing from CI/CD.
Related to this, my hope is that all public packages are eventually published from open, auditable and automated systems with all the other benefits this brings like ephemeral environments and the ability to prove what went into a build. Maybe I'm deluding myself on the possibility of this though? 🤔
I also think the experience and selection of automated build systems could be vastly improved, to make publishing from one as easy as toggling a button or two on whatever source control system you use, in a way that just works with your package setup, registry and versioning scheme.
@feelepxyz the possibility is only there if the community freely decides to make that move - and it’s not a free decision unless the existing mechanisms are equally privileged.
> Related to this, my hope is that all public packages are eventually published from open, auditable and automated systems with all the other benefits this brings like ephemeral environments and the ability to prove what went into a build. Maybe I'm deluding myself on the possibility of this though? 🤔
I think there are two reasons why this is challenging:
- exploration - whether from someone just learning JavaScript or from someone just exploring a new solution to a problem they have, this adds barriers and weight.
- tooling - currently, npm simply does not provide the necessary tools that make such automated publishing safe and reliable for anyone operating past a certain scale or who requires a certain level of trust.
I agree that things could be dramatically easier. Unfortunately that does require substantial investment, and it often seems very challenging for certain teams to secure the resources needed to accomplish the outcomes required to achieve that ease.
Any updates on the progress of this?
Signed-off-by: Philip Harrison <philip@mailharrison.com>
@bnb yes! We're actively working on this and aiming to open up a beta in the new year. I'm planning to merge this RFC soon, as we've settled on the approach for now. We just wrote about Sigstore reaching General Availability (GA) and are working on a JavaScript client for Sigstore that will be used to generate this signed source/provenance information.

There have been several concerns and questions raised about the accessibility of this feature and how we could roll this capability out to more developers, e.g. if you can't use CI to publish your package, or how we could support already published/legacy packages. We're starting out by supporting signing newly published public packages from trusted CI systems (the goal being non-forgeable/non-falsifiable information about the source and build).

The barrier of entry will initially be higher than we want it to be. The tooling won't be as good as we want it to be, but our aim is to invest in this over time and get to the point where the secure way is the path of least resistance. This will take time, but we're committed to investing in this area.
Signed-off-by: Philip Harrison <philip@mailharrison.com>
O.o did we discuss this again in an RFC call such that it's appropriate to merge this?
@feelepxyz why is a non-npm team member merging an npm RFC prior to having full discussion about it? That's not a good look for Microsoft (or GitHub) as an acquirer. This is a very low-value security property that could potentially cause a lot of harm to the ecosystem by de facto punishing perfectly secure packages without this provenance info. More to the point, if perfectly secure packages don't choose to ship this provenance info (as my 10+% of npm packages likely never will), then why would the ecosystem adopt it as a signal?
@ljharb thanks for your contributions to the Node community. This RFC was open for comment for ~3 months before we merged it, and we signaled our intention to merge in a comment almost two weeks ago. This issue is the proper forum for discussion on the RFC, and that discussion has dwindled to nil over the last two months.
This is not a constructive comment. We will emphasize once again that the new functionality described in this RFC is optional, and you're free to use it or disregard it as you choose.
@trevrosen The only people who should be merging things on an npm repo are the npm team, not "any GitHub employee", and npm RFCs have never before been merged a) based on a time limit, b) without extensive discussion in an RFC call, or c) by someone who isn't on the npm team. The RFC has been commented on a number of times, and many of the comments are not in favor. What's the point of an RFC if you're going to merge it before these concerns are addressed in an RFC call?

Which part of my comment is not constructive? Provenance simply isn't of high value for an ecosystem that doesn't ship binaries, and incomplete (in terms of a dep graph) provenance info is of equally low value. If you disagree, I'd love to hear your reasons.

Functionality being optional does not mean there can't be ecosystem effects. If a "security" property exists, and it gains enough adoption, anyone not using it will be effectively punished for not having it - which makes it non-optional.
Add support for verifying sigstore attestations when fetching the registry manifest. This will be used in the CLI as part of `audit signatures`.

RFC: npm/rfcs#626

Signed-off-by: Philip Harrison <philip@mailharrison.com>
Co-authored-by: Brian DeHamer <bdehamer@github.com>
Signed-off-by: Philip Harrison <philip@mailharrison.com>
Update `audit signatures` to also verify Sigstore attestations. Additional changes:
- Adding error message to json error output, as there are a lot of different failure cases with signature verification that would be hard to debug without this
- Adding predicateType to json error output for attestations to differentiate between provenance and publish attestations

References:
- Pacote changes: npm/pacote#259
- RFC: npm/rfcs#626

Signed-off-by: Philip Harrison <philip@mailharrison.com>
RFC for linking public npm packages to the source code repository and build it originated from.