Replies: 6 comments
-
Thoughts: This "vicious circle" is a bit of a sad reality. Automatic checks are kind of useful, but a bit antithetical to the mission. :/ Though I agree that having something for new users is a reason to stick around and use the tool. Maybe there could be just another result "column" for automatic sanity checks, with the ability to ignore/force it in the final return-code result.
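A rough sketch of what that extra "column" with an ignore/force override could look like. All names here are hypothetical illustrations, not real cargo-crev code:

```rust
// Hypothetical sketch of a separate "automatic sanity checks" result
// column, with an override so it can be ignored in the final return code.

enum CheckResult {
    Pass,
    Fail,
}

/// Human reviews stay authoritative; the automatic column only blocks
/// the build when the user has not opted out of it.
fn exit_code(human_reviews_ok: bool, sanity: CheckResult, ignore_sanity: bool) -> i32 {
    match (human_reviews_ok, sanity, ignore_sanity) {
        (false, _, _) => 1,                    // human verification failed
        (true, CheckResult::Fail, false) => 1, // sanity check failed, not ignored
        _ => 0,                                // everything passes (or is ignored)
    }
}

fn main() {
    // Sanity check failed, but the user chose to ignore that column.
    println!("exit code: {}", exit_code(true, CheckResult::Fail, true)); // 0
}
```

That way the automatic results stay visible in the output without silently deciding pass/fail for users who only want human reviews.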
-
Just to inform you of the low numbers that I see on https://web.crev.dev/rust-reviews/. I think that reviewers need some guidelines on what and how to review.
-
I'm quite positively surprised how many reviews there are. Considering how unrewarding it is, and that no one is paying these people, it's pretty amazing. Generally, I still think that if the idea is to work, the bulk of reviews will come from organizations publishing reviews that are a byproduct of their security teams and developers being paid to do it. To a certain extent that might already be the case.
I very much encourage an approach where independent actors publish machine-prepared reviews using whatever automation they think works (ML, AIs, whatever), and
-
I think crev needs more than this. Defaults are important. Standard recommended usage is important. User experience is important. Currently none of these are great.
If someone creates a crev repo with results of an automated scan, then using it will not be great. These results will be just generic CrevIds, without any special treatment in the UI. Users will either need to know that such a special Crev account exists and how to add it to their WoT, or, if a popular user trusts that repo, all users who don't want automated checks will have to learn how to distrust that Id, or filter by trust levels and minimum number of reviews required. This is not easy. I know that for you it's "just" a few flags, but for others that's a learning curve. Ideally, for users who want to ask "does this project contain malware?" crev should be answering clearly "yes"/"no"/"you need to review or trust this and that to be sure", and it should be something simple like
-
But it is really impossible to answer this question for a power user. One can answer the question "do at least 2 people I somewhat trust think it's OK", and that's what
A casual user can just appeal to authority. So e.g. https://lib.rs can set the bar to two reviewers in a curated WoT, and even include some metrics that crev does not track, and display a nice big ✔️ next to a crate name when it seems "good enough". What's even the point of bothering people with going to a CLI app? Maybe they could just upload a
And generally, as things are going right now, everything is moving to "the cloud" and people expect not having to do much about anything. GitHub is analyzing deps via scans and matching security databases and so on, so any CLI-based tooling is always, by necessity, going to target power users who want to do something extra.
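The "bar of two reviewers in a curated WoT" idea above can be sketched in a few lines. This is a hypothetical illustration, not a real lib.rs or crev API:

```rust
use std::collections::HashSet;

/// Hypothetical: does a crate pass the bar of `bar` reviewers
/// drawn from a curated web of trust?
fn passes_bar(reviewers: &[&str], curated_wot: &HashSet<&str>, bar: usize) -> bool {
    reviewers
        .iter()
        .filter(|r| curated_wot.contains(*r))
        .count()
        >= bar
}

fn main() {
    let wot: HashSet<&str> = ["alice", "bob", "carol"].into_iter().collect();
    // alice and bob are in the curated WoT, mallory is not: 2 >= 2 passes.
    println!("{}", passes_bar(&["alice", "mallory", "bob"], &wot, 2)); // true
}
```

A site like lib.rs could run a check like this server-side and just render the ✔️, so casual users never touch the CLI at all.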
-
IMO, there's no way around the fact that reviewing code manually is a lot of work, and there's barely enough motivation for people to work on their own open source projects; most of that motivation is gone when it comes to reviewing other people's code. It's not fun, takes a lot of time, will not give you recognition or fame, and it's not even all that useful (I mean... 99.99% of the time the dependencies are at least not malicious).
-
Crev doesn't have enough reviews yet to make any sizeable real-world project fully "pass" the verification. I think this creates a vicious cycle: it's not useful yet, so few users use it, and because few users use it, it doesn't get enough reviews.
Therefore, I think it's necessary to find a way to break this cycle. Crev needs to give some useful assessment even for projects that don't have enough review coverage.
I know that for some users it's important to actually have 100% of dependencies fully manually reviewed. OTOH everyone else who's not using crev yet is checking 0% of their deps, so anything more than 0% is still an improvement.
Most users should be able to run `cargo crev verify` (or `cargo crev someothercommand`) in CI and have it give a useful yes/no answer already.
I'm suggesting that when crates don't have any reviews, we can still try to give some heuristic-based score:
- Lean more on trusted owners. If a crate is published by a trusted owner, then let it pass. (Maybe `cargo crev trust @username`?)
- Have `cargo crev verify` display who needs to be trusted in order to pass the verification.
- Compute a risk score of each dependency based on multiple factors, e.g. does it use build.rs or proc-macros (these run code at build time, bypass static code analysis)? Does it use unsafe/no_mangle/link? Is it popular, is it old, is it from a trusted author?