
RUSTSEC-2020-0011 is not a security vulnerability. #275

Closed · hdevalence opened this issue Apr 24, 2020 · 63 comments

@hdevalence

PR #268 added a security advisory related to plutonium, a crate that hides unsafe usage.

However, the security advisory does not report a security issue with the crate, or any defect in its intended functionality; it makes a value judgement about whether the crate's intended behavior is good.

Security advisories are not for "crates we don't like", they're for conveying information about defects in intended behavior.

@tarcieri
Member

It’s definitely an interesting and unusual case.

It might make sense to create a new class of informational advisory for cases like this. Informational advisories are the existing mechanism for publishing advisories that aren't specifically about a security vulnerability (presently they are used exclusively for tracking unmaintained crates), and they are surfaced as warnings instead of vulnerabilities.

@Shnatsel
Member

Shnatsel commented Apr 24, 2020

I see it was filed as a regular security advisory rather than a special "informational" advisory type we use for e.g. unmaintained crates (example). @najamelan how do you feel about downgrading the advisory to a warning? This will still allow using it in cargo-deny, which I believe was the original motivation for the advisory.

@hdevalence
Author

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

@myrrlyn
Contributor

myrrlyn commented Apr 24, 2020

Co-signed. As written, the crate is a modification to the visual representation of Rust source code; it does not represent or use a vulnerability in either the compiler’s intake parser or output generator. The only change it makes is in the reader-facing representation of a narrow part of syntax.

plutonium represents a point of discussion about the philosophical approach that we as authors and viewers of Rust syntax take with regard to a class of operation. It does not represent any flaw in either the production or execution of a program, and as such, does not qualify as a vulnerability.

@alex
Member

alex commented Apr 24, 2020

This is code that lets you write memory-unsafe code without the unsafe keyword. That's unsoundness. When it happens by accident it's a security issue; when it happens on purpose, it still is.

@myrrlyn
Contributor

myrrlyn commented Apr 24, 2020

No, it is not unsoundness. Unsoundness is specifically a term for the emission of a class of incorrect executable instructions, or a class of semantic input that causes the compiler to construct an incorrect view of the abstract program. It does not apply to the dialect of syntax that viewers observe.

@tarcieri
Member

tarcieri commented Apr 24, 2020

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

Informational advisories are modeled as an enum:

https://github.com/RustSec/rustsec-crate/blob/ee8b2e5/src/advisory/informational.rs#L10-L21

pub enum Informational {
    /// Security notices for a crate which are published on <https://rustsec.org>
    /// but don't represent a vulnerability in a crate itself.
    Notice,

    /// Crate is unmaintained / abandoned
    Unmaintained,

    /// Other types of informational advisories: left open-ended to add
    /// more of them in the future.
    Other(String),
}

I think we could probably use the catch-all "notice" category for this purpose. This surfaces as a warning in cargo audit.
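
For illustration, here's a sketch of what such an entry could look like in the advisory database (field names per the TOML advisory format as I understand it; the content is illustrative, not the final advisory text):

[advisory]
id = "RUSTSEC-2020-0011"
package = "plutonium"
date = "2020-04-20"
informational = "notice"
title = "plutonium hides unsafe code from source-level tooling"
description = """
The #[safe] attribute wraps function bodies in unsafe blocks, which hides
unsafe usage from tools that scan source text, such as cargo-geiger.
"""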

@andrew-d

FWIW, my vote is that some sort of notification is correct here. To draw a comparison: if a crate bundled and made use of a shared object that contained a vulnerability, I would want to know about that, even if all the Rust code in the crate was safe/sound, since the expected behavior ("this crate is safe and behaves as expected") is violated. Similarly, plutonium subverts the expectations and security guarantees of a developer who transitively depends on it, in a way that could result in unsafety or unsoundness, and that meets the bar for a reportable vulnerability to me.

@najamelan
Contributor

najamelan commented Apr 24, 2020

I'm fine with this being a warning. I just feel this should never slip unnoticed into a dependency graph, and the crate was deliberately designed to exploit a limitation of cargo-geiger so people wouldn't notice they have unsafe in certain dependencies.

It is further problematic because grepping for the string unsafe is a common review technique, and this crate defeats it, which warrants a warning in my opinion.

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates.io. If people want to write unsafe code without using the keyword at home, I couldn't care less, but this invites anyone not wanting hassle for using unsafe to just add it as a dep in Cargo.toml, and I'd rather we don't make that easier and less transparent.

@myrrlyn
Contributor

myrrlyn commented Apr 24, 2020

If we're going to debate the virtue of a source file, then I'd like to respectfully rejoin that, for example, I am the author of a crate with a ridiculously high unsafe metric despite only a select few parts of the codebase actually being unsafe. The #[safe] annotation allows authors and readers alike to distinguish between "this performs an action that, while safe, is not registered as such with the compiler" and "this is unsafe and warrants significant attention".

This crate provides an opportunity to decrease the background noise and increase the signaling capacity of the unsafe keyword, so that rg unsafe becomes genuinely useful.
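
A minimal sketch (invented names, assuming plutonium's documented #[safe] attribute) of the distinction:

use plutonium::safe;

// "Safe in fact, though not registered as such with the compiler":
// the read is locally verified, so the keyword would be pure noise here.
#[safe]
fn first_byte(buf: &[u8; 4]) -> u8 {
    *buf.as_ptr() // in bounds: a [u8; 4] always has a first element
}

/// # Safety: `idx` must be less than `buf.len()`.
unsafe fn byte_at_unchecked(buf: &[u8], idx: usize) -> u8 {
    // Genuinely dangerous: correctness depends on the caller, so this is
    // the occurrence that rg unsafe should surface.
    *buf.as_ptr().add(idx)
}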

@najamelan
Contributor

@myrrlyn I'm sorry we disagree, but as a reviewer, "these uses of unsafe don't merit review" is exactly the kind of assumption you definitely want to double-check, so you want to find all occurrences, not only the ones the author thought were maybe dangerous.

@Manishearth
Contributor

I don't think "this circumvents cargo-geiger" is enough justification. cargo-geiger is a separate tool, and one with a flawed metric that this crate neatly illustrates. If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

I don't quite agree with @myrrlyn -- I don't think one should use this crate in production code -- however, his underlying point about degrees of unsafe is one I strongly agree with. cargo-geiger does nothing to help there. Perhaps that's not the point of the tool, but it's just one of the problems with treating the count of unsafe as a security metric. At best it gives a rough filter for areas of code you might want to look at more closely.

And this advisory database is about security.

@hdevalence
Author

@najamelan

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates.io.

The thing is, security advisories are not about what you think should be on crates.io.

There is no security issue here. There should not be a security advisory, informational or otherwise. It devalues the meaning of this database.

Let's be clear: we're talking about a crate with 0 reverse dependencies and (as of this writing) 102 downloads by anyone, ever. The only reason there's an "advisory" is because someone saw it mentioned on Reddit and decided they didn't like it.

@sunjay

sunjay commented Apr 25, 2020

If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

+1 on this and the rest of Manish's comment as well. I think this particular sentence should be the main focus of this issue. The security advisory should be removed and a check for this should be added to cargo-geiger if that is deemed necessary.

@yaahc

yaahc commented Apr 25, 2020

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

@CAD97
Contributor

CAD97 commented Apr 25, 2020

If plutonium deserves an advisory, does my rc-borrow? It's not directly trying to hide its internal unsafe usage, but because basically the entire API is emitted from a macro, cargo-geiger shows it as not containing any unsafe.

Or what about proc-caesar? The point of that crate is to make your source code completely unreadable by applying the Caesar cipher to it. That both hides unsafe usage from cargo-geiger and makes the source code illegible to a human; it's thus strictly worse than plutonium.

It's not a security DB's job to pass judgement on whether a library is "good." Both plutonium and proc-caesar are demonstrations of how proc-macros can be used to obfuscate source code from naive static analysis. They both do exactly what they advertise. There's nothing malicious nor exploitable about the crates themselves.

You can argue that maybe they shouldn't have been published on crates.io. But that's not the security advisory DB's place to argue.

It's also worth mentioning that this advisory won't help prevent any malicious actors from using this technique. It's trivial to reimplement an unsafe hiding macro in your own crate, and any malicious actor is going to do that before pulling in a crate whose express purpose is replacing the unsafe keyword.

If anything, this advisory makes it easier to find out about this trick, and accomplishes the exact opposite of its intent of preventing unsafe from being hidden from cargo-geiger.

@tarcieri
Member

tarcieri commented Apr 25, 2020

It seems we're pretty polarized on this issue. I see both sides of it as well: it's definitely true this is not a vulnerability in and of itself; however, it'd be nice if, somewhere, there existed a list of crates which circumvent static analysis around safety.

There's a case to be made that RustSec isn't the place for it; however, in developing the informational advisory feature (originally to handle unmaintained crates), my observation was that the advisory format is great for cataloging this kind of information.

Furthermore, it's been designed to be quite flexible: the alerting behavior for informational advisories can be configured on a category-by-category basis. As it stands, if we were to catalogue this advisory as informational, by default it wouldn't be surfaced by cargo audit at all unless the user explicitly opted in via configuration/arguments.

If we do want to go down this road, I think it might make sense to define a new category of informational advisories for this purpose, although I'm a bit at a loss as to what to call it.

Here's what I'd propose as a path forward:

  • Mark RUSTSEC-2020-0011 as an informational = "notice" advisory as I suggested earlier, which would make it invisible to anyone who does not explicitly opt into warnings for notice advisories
  • If people feel particularly strongly about "yanking" this advisory from the database entirely, please open a PR that adds obsolete = true to it (or update Remove security advisory about plutonium. #276 to do this), which will flag it as ignored/yanked by the client (by policy we don't delete advisories once they've been assigned an ID, à la crates.io's yank feature not actually "deleting" the crate). Once that happens we can continue the discussion there.

@najamelan
Contributor

najamelan commented Apr 25, 2020

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using, and thus to some extent relying on, cargo-audit for a while, but I didn't know that I'm not seeing all categories of warnings. And I can't find anything on how to configure that in the README. If it were up to me, cargo publish would refuse to publish crates whose dependencies have advisories unless given a flag, like --allow-dirty for a dirty working directory.

Running cargo help audit reveals:

    -L, --log-level <log-level>
            The log level for messages [default: warn]

Running cargo audit --log-level notice gives:

error: unrecognized option `--log-level`

cargo-audit 0.11.2
Tony Arcieri <bascule@gmail.com>
Audit Cargo.lock for crates with security vulnerabilities

FLAGS:
    -c, --config CONFIG       path to configuration file
    -h, --help                print help message
    -v, --verbose             be verbose

It doesn't work, but there is now a mention of a configuration file, hmm... Where is the documentation for how to use the configuration file?

I think the point is that people should consent to something like this getting into their dependency graph. Unsafety should be opt-in, not opt-out. Currently you already need to know about, and not forget to run, tools like cargo-audit before every release; if you then also have to know about mysterious configuration settings, it's quite self-defeating.

I am fine with notices and warnings returning a 0 exit status for CI and such, but for the 0-2 lines of output that an up-to-date crate should have, is it really that bad to just show everything by default?

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork?

No, it wouldn't. But if they add it to crates.io, anyone who notices can file an advisory for it. And if not, they need to use a git dependency, something that is visible during compilation and can be more consistently detected by cargo-deny. For sure there are many ways to do malicious things that are hard to detect right now. The ecosystem isn't very secure. And it's not really feasible to manually verify every version of every dependency, especially as the ecosystem is young and many crates update often. That's where automated tools come in to alleviate the problem somewhat.

If not it seems like this must be fixed in cargo geiger

People are working on that.

@CAD97
Contributor

CAD97 commented Apr 25, 2020

@najamelan have you misunderstood what plutonium does? All it does is change how you write the unsafe keyword, from

fn scary() {
    unsafe {
        // code
    }
}

to

#[safe]
fn scary() {
    // code
}

and as such, unsafety is still opt-in. Its existence changes nothing about Rust's unsafe model; it just introduces a different way of writing code that adheres to it.


As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of. (Instead, maybe use a crev review of "don't use," which is a level that they do offer.) But more importantly, how do you define an actual cutoff point for what crates deserve that kind of advisory? There are thousands of toy libraries that shouldn't be used in production; do they all deserve an advisory?


And I must stress once again: plutonium is not doing anything sneaky. All it is doing is providing a macro that inserts an unsafe block around the body of the function. It's trivial for me to write a macro_rules macro pro_gamer_move! that does the exact same thing and impedes cargo-geiger in the exact same fashion.
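
A sketch of what that could look like (assuming the macro lives in some dependency crate, so the unsafe token never appears at the call site):

#[macro_export]
macro_rules! pro_gamer_move {
    // Forward the body into an unsafe block; rg unsafe over the calling
    // crate finds nothing, because the keyword only exists here.
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}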

plutonium does not deserve a security notice. "Bad style" advisories are the domain of code review, and cargo-crev handles that nicely.


Perhaps cargo-deny should also offer functionality for blacklisting crates yourself.

@tarcieri
Member

tarcieri commented Apr 25, 2020

@najamelan

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using, and thus to some extent relying on, cargo-audit for a while, but I didn't know that I'm not seeing all categories of warnings.

For what it's worth, there aren't presently any advisories in the database using the informational = "notice" setting, so you haven't been missing anything.

Agreed the docs could be better; there are also some unmerged PRs which make the warning functionality for informational advisories more flexible.

@yaahc

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

This is true of any crate an advisory is filed against.

The best we can do is handle crates on a case-by-case basis.

@CAD97

As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of.

I would be perfectly fine with removing the verbiage to that effect from the advisory.

Would you mind opening a PR?

@tarcieri
Member

I marked RUSTSEC-2020-0011 as informational = "notice" in #278

@CAD97
Contributor

CAD97 commented Apr 25, 2020

I made the wording more objective in #279.

I still believe that the advisory should be marked obsolete and the description should point people to cargo-crev for this kind of "bad code judgement," but I don't have the bandwidth to write the wording currently. The description should probably also note how the advisory got in when it shouldn't have (being more a value judgement than an objective advisory, even with my objectivity PR).

@najamelan
Contributor

and as such, unsafety is still opt-in

It is if you use the crate, not if it creeps into your dependency graph.

Just to be clear: I reported this to cargo-geiger because I felt people should know about it, and it was suggested there that this should be dealt with by the advisory database. At cargo-deny they also think this is something that should be handled by an advisory, so I filed a PR here.

Given that this seems to be an uphill battle, just know that it doesn't matter that much to me. cargo-geiger will get fixed, and I have it hardcoded in my cargo-deny configuration. We can agree to disagree, but every time you do a cargo test and cargo has internet access, you are running random untrusted code on your dev machine. That's the state of affairs today.

Some companies probably do development exclusively in containers/VMs, but there probably aren't many, given the level of friction it adds. And there is no reason to believe Rust developers pick better passwords than npm developers. Seeing plutonium just reminds me of how dire the situation is; warning people about it is a drop in the ocean, but it's a drop. I'd just hope that collectively we find ways to improve on this one step at a time.

@CAD97
Contributor

CAD97 commented Apr 25, 2020

@najamelan if you care about trusting the packages you're running, you should be using cargo-crev and reviewing the code you're downloading. Libraries definitely do not need unsafe to do malicious things, and cargo-geiger should not in any way be a substitute for actual code review.

I suppose you should just go ahead and ban proc-caesar, as it also allows writing unsafe code without writing the unsafe keyword, and rc-borrow, because it shows up as "clean" in cargo-geiger due to having all of its API produced by a macro.

unsafe is useful as a hint for where to pay specific attention to memory safety, but it is nowhere near a license to skip reviewing unsafe-free crates.

You opted in to whatever your dependency is doing when you trusted them to provide a library without doing thorough code review yourself.

It really does sound like what you need is cargo-crev. It even provides the geiger numbers as a guideline towards the most important review targets.

@tarcieri
Member

@CAD97 cargo-crev is an interesting tool, but it has comparatively lower adoption/reach, and while you may happen to disagree with it, it seems people are also interested in using the RustSec Database for this purpose. I’d also like to point out these tools aren’t a dichotomy: I’d suggest using both.

@CAD97
Contributor

CAD97 commented Apr 25, 2020

I mostly just hope that the RustSec DB can stick to mostly objective advisories. crev's web-of-trust model is built to handle subjective review and trust; the central authority that rustsec uses is best when the central authority has some amount of (perceived) objectivity.

@Manishearth
Contributor

This is exactly how the informational advisory feature works.

I actually think the "unsound" class is typically more severe than "unmaintained". Not in this case, but in general.

In this case it requires another layer of intent, so I still feel it's a different class and should not be the subject of this DB.

@RalfJung
Contributor

RalfJung commented Apr 27, 2020

It's second-order unsound because it is behaving as advertised, but the advertised behavior is bad.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional. I do not buy that argument. Sure this is a social system, but it risks becoming meaningless if we do not try to uphold some objective standard. "Soundness" is the objective standard on which the entire Rust ecosystem rests. The social part is the one where we all agree (or not) that soundness is important, and unsoundness a bug.

"(Un)sound" is an objective technical criterion, on which crate author intentions have no influence. (The only part I think is subjective here is how to mark a macro as being "unsafe to use"; I do not know if "it should have unsafe in its name" is the consensus opinion.) Plutonium is unsound. Documentation can point out that it is deliberately unsound, but no amount of documentation can change the fact that it is unsound. Thus, it should have an advisory filed, with whatever treatment we give to soundness bugs.

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.
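
For concreteness, a minimal sketch (assuming plutonium's documented #[safe] attribute) of a crate that exhibits undefined behavior with no unsafe keyword anywhere in its source:

use plutonium::safe;

#[safe]
fn not_actually_safe() -> u8 {
    let p: *const u8 = std::ptr::null();
    *p // UB: null dereference, yet no unsafe keyword appears in this crate
}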

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

I'd say we should make that distinction. If we did, would you object to an "unsound" advisory being filed for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

@CAD97
Contributor

CAD97 commented Apr 27, 2020

@RalfJung, here's a thought experiment:

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

Believing this strongly, I publish a new crate, trusted, which provides a single macro:

#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

with the intent of using this macro instead of unsafe blocks for when the unsafety is properly and fully contained within the block.

Is this an unsound crate? Does it deserve an advisory, informational or not? I'd argue that it adds an important distinction between encapsulated unsafe and "leaky unsafe" that also relies on safe code outside the block for correctness.

I ask this because plutonium::safe is the exact same API, just applied as an attribute to a function rather than a block-style macro. I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed version. It wouldn't be incorrect if that attribute also made the body of the function an unsafe block, as its existence points to a review showing the annotated function to be safe.
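
Purely for illustration, such a reviewed #[safe] might look like this (hypothetical syntax and attribute arguments; nothing like this exists as far as I know):

#[safe(review = "https://example.com/audits/42", body_hash = "sha256:3f2a...")]
fn vetted() {
    // Body compiled inside an unsafe block; compilation would fail if the
    // body's hash stopped matching the hash recorded by the linked review.
}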

I'd find it unfortunate if we said macros were sound or not based on whether their name contains the string unsafe.

If a macro is intended to always be safe to use, and in some cases allows you to execute unsafe code without upholding the preconditions, then that would be a soundness issue.

If a macro is intended to be unsafe to use, even if it doesn't contain the string unsafe in its name, I don't think its existence is unsound. You can argue that it's a bad crate, sure, but not that it deserves to be linted out of existence.

It's sort of blunt to put it that way, but it comes down to the fact that you need to read the docs for the macros you're using, because they can do anything. (Including, but not limited to, stealing your Bitcoin wallet at compile time.)

I'm in full agreement with @Manishearth here; you can use plutonium to write unsound, potentially vulnerable code, but plutonium itself is not unsound nor potentially vulnerable.


This is definitely on the edge. proc-caesar is, honestly, probably also on the edge. The main worry here is, imho, that accepting these "maybe" cases into the central authority decreases trust in said central authority.

It would definitely benefit RustSec if it could provide objective rules for when something definitely does deserve an advisory, and objective rules for when something definitely doesn't and should be delegated to more opinionated services like crev, geiger, and/or deny. Even if the middle ground is fuzzy, it's beneficial to have clearly defined areas and guidelines that can be used to make decisions for that middle area.

I would personally put the useful "yes, this is advisory-worthy" main goalposts at:

  • "major:" libraries that, through the exclusive use of safe APIs, can perform unsound operations (exhibit undefined behavior).
    • As the language does not provide for unsafe macros, macros are considered safe by default, but may be considered unsafe to use based on their documentation.
  • "minor:" libraries that fail to uphold documented postconditions in a way that isn't major (defined above), such as panicking when documented not to.

Just a side note: to those saying crev is "worse" than RustSec for this kind of thing because it doesn't have a central authority, for one that's sort of the point, that it doesn't need one. But more importantly, nothing prevents the use of a crev proof repository as a central authority.

And yes, crev struggles to bootstrap itself to being useful, due in part to adoption. But if people don't adopt crev because it doesn't have adoption, it'll never get that adoption in the first place.

@CAD97
Contributor

CAD97 commented Apr 27, 2020

A final small wonder: is this (at least in part) due to documentation? If plutonium were more diplomatic in its documentation, would that change your opinion of its worthiness?

I will fully admit that the documentation of plutonium is less than diplomatic; its expressed purpose is to "make everything less safe".

But I think the RustSec central authority should at least attempt to avoid making calls based on the diplomacy of documentation. There is, imho, nothing inherently wrong with using macros to create a dialect of Rust where unsafe is spelled differently.

@najamelan
Contributor

najamelan commented Apr 27, 2020

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

  • writing unsafe {} means I'm the one upholding requirements.
  • writing pub unsafe fn means I expose a function for which neither the compiler nor I can uphold the requirements, so the user needs to use unsafe {} as a speed bump and to indicate to others (reviewers) that there is a risk of the calling author using the API without upholding those requirements.

For more, see https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html. From this link:

The need for all of this separation boils down to a single fundamental property of Safe Rust, the soundness property:

No matter what, Safe Rust can't cause Undefined Behavior.

So whatever I throw in the macro from plutonium can never cause any unsoundness, because the compiler has my back, right? Unfortunately, @RalfJung has already demonstrated that this is not the case.

By not requiring you to use the keyword unsafe, the author of plutonium implies that their implementation takes over from rustc in guaranteeing that whatever you put in it is checked for safety. That is, unless we question a very well-ingrained contract in the Rust language and community.

Since that is obviously not the case, by using the macro you say to any onlookers, "nothing to see here, rustc has your back, move on". Which is lying, because you just turned off rustc without wanting us to know that you did. That alone doesn't inspire trust, if you ask me.

The difference between unsafe fn and unsafe {} is that in the former, the caller's code needs to be audited. In the latter, if it happens in a pub fn, nothing the caller does without using unsafe can introduce unsoundness; it is the code that implements the function that needs to be more carefully scrutinized for errors. So if you want your users to not have to worry about making mistakes, provide them with a safe API. No need for a macro for that. Just don't try to hide the fact that you are playing with fire inside their house.

All in all, the two are conceptually very similar. In unsafe {}, you are the caller; in pub unsafe fn you delegate responsibility to a downstream caller. That's all. We mainly want to know about the former. The latter still means that the documentation you wrote about correct usage might be wrong and still allow unsoundness even if users follow it, so you retain some responsibility and can still make things blow up, though less likely than in the former. And by definition it's obvious that something unsafe is happening, so it is less dangerous in that sense. Users will often check the code as well as the docs, because it's obvious they have some responsibility if something goes wrong.
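
A small sketch of that "safe API over internal unsafe" shape:

// A pub fn with a safe signature whose body upholds the preconditions of the
// unsafe code it contains, so no caller can misuse it to cause UB.
pub fn first_or_zero(v: &[u8]) -> u8 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: the emptiness check above guarantees index 0 is in bounds.
    unsafe { *v.get_unchecked(0) }
}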

I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed version. It wouldn't be incorrect if that attribute also made the body of the function an unsafe block, as its existence points to a review showing the annotated function to be safe.

I'm not entirely sure I get what you mean here, but you can have a pub fn, with no mention of unsafe in your API, that uses unsafe {} inside the function body. That is, unsafe is not contagious.

As for reviews, just put a comment on it with a link to the review, or have the actual review text and name of the reviewer in the comment. Of course, if you later change the code, the review is no longer valid, and I imagine that's what the discussion was about. No need to remove the unsafe keyword, though.

@CAD97
Contributor

CAD97 commented Apr 27, 2020

I'm not entirely sure I get what you mean here, but you can have a pub fn, with no mention of unsafe in your API, that uses unsafe {} inside the function body. That is, unsafe is not contagious.

And that's what the #[safe] attribute in plutonium is, once you strip away the documentation's presentation of "make everything less safe" anyway.

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}. Inside the function annotated by #[plutonium::safe], you're not writing Safe Rust, you're writing Unsafe Rust. If you accept that #[plutonium::safe] is an "unsafe macro" (which the language has no definition of, so falls entirely to documentation), then there is nothing wrong with this.

To be clear, I know the difference between unsafe blocks and functions. That's why I think you can argue for my thought-experiment trusted! as drawing a meaningful distinction. #[plutonium::safe] is then the exact same API, just as an attribute rather than a block macro.

So let me ask you directly, @najamelan:

Say I (or @myrrlyn, who has expressed that using this sort of macro would make his crate maintenance easier by separating "kinds" of unsafe) release the trusted crate, whose entire body I've inlined below.

/// Alias for an `unsafe` block.
///
/// This should be used to mark `unsafe` blocks that are not in any way dependent
/// on external safe code for correctness. That is, any unsafe preconditions should be
/// upheld solely by code within this block and safety invariants of the code it calls.
#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

Would you see it fit to submit a security advisory for that crate? After all, I can write

#[macro_use]
extern crate trusted;

fn main() {
    trusted! {
        *std::ptr::null::<u8>();
    }
}

and cause UB without the string unsafe anywhere in my code.

If you would, I strongly disagree with you. trusted! is explicitly a different way of spelling unsafe that communicates a different intent than just an unsafe block.

If you wouldn't, what makes this any different than #[plutonium::safe]?

@najamelan
Contributor

Would you see it fit to submit a security advisory for that crate?

Yes

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}

I think I really don't understand what makes the former desirable over the latter. E.g. what problem are you trying to solve? If you want to distinguish between "different kinds of unsafe" in your crate, just put a keyword of your choice in a comment; then you can even search your codebase for it, no? No unsoundness, no lying to reviewers, ...

@tarcieri
Member

tarcieri commented Apr 27, 2020

@RalfJung

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.

This seems to be the core argument here. I am inclined to agree with you, with the caveat that deliberately unsound APIs do not contain a “vulnerability” per se (but are still fundamentally “unsound” from an objective, technical perspective).

I feel there is some baggage from the way this particular advisory was filed, so it'd be nice to separate discussion of the above topic out; if anyone wants to, please open a new issue for it (not to mention this issue is closed).

@CAD97
Contributor

CAD97 commented Apr 27, 2020

@najamelan and I don't see the purpose of requiring unsafe code to use the unsafe keyword when this is exactly why Rust has its macro system. Nothing is unsound about trusted!, unless you take a view of soundness so draconian that the Rust language itself is unsound because it allows you to cause UB in unsafe blocks. How is writing trusted! different from writing unsafe in terms of soundness of the API?

I personally agree that it's probably not worth it to use trusted! rather than unsafe. But I don't think the answer is to uniformly declare trusted! as evil and attempt to lint it out of existence. (And to be clear: that is what a RustSec vulnerability advisory is (perceived as): an attempt to lint (the unfixed versions of) a crate out of use.)

In any case, I'm not going to argue this point any further one way or another. I agree that you probably shouldn't be using plutonium, after all!

What I disagree with is trying to ban plutonium or any other crate that tries to offer a different way to scope unsafe. Experimentation is a good thing. I see linting plutonium away as roughly equivalent to linting fehler away: many people would prefer that both of these crates aren't used. But the macro system exists specifically so that people can create micro scoped dialects that express their problem domain better than macroless Rust.

@Manishearth
Contributor

Manishearth commented Apr 27, 2020

If we did, would you object to an "unsound" advisory being filed for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

If we did, I would still dislike it, but I would not be bothered to actually object.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional.

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed: this crate enables others to intentionally escape unsafe. People could, for example, use it the way @CAD97 (and earlier, @myrrlyn) describe, as an "alternate unsafe block" with different semantics, and it would be totally safe. Hell, we've had multiple discussions in the past about whether unsafe should have been called assert_safe, safe, trustme, trusted, or another one of many words.

Intent matters because the people writing intentional bugs do not need to be warned, their downstream does. If someone uses plutonium for bad, they should absolutely have their crate placed in this db.

(I actually really want the converse of this crate, a macro that converts fn foo(args) {body} to unsafe fn foo(args){ fn foo_inner(args){body}; foo_inner() } so that we can experiment with rust-lang/rfcs#2585 )
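
A rough macro_rules sketch of that converse transform, assuming a simplified fn grammar (no generics, no return type):

macro_rules! unsafe_to_call {
    (fn $name:ident($($arg:ident : $ty:ty),*) $body:block) => {
        // The resulting fn is unsafe to call, but its body stays ordinary
        // safe code: a nested safe fn holds the original body, so the author
        // gets no implicit license to perform unsafe operations.
        unsafe fn $name($($arg: $ty),*) {
            fn inner($($arg: $ty),*) $body
            inner($($arg),*)
        }
    };
}

// Usage: unsafe_to_call! { fn foo(x: u32) { println!("{}", x); } }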

@najamelan
Contributor

attempt to lint it out of existence ... trying to ban plutonium ... I see linting plutonium away

I apologize if the wording in the advisory that it shouldn't be used in production was too strong and too opinionated, and I have no problem with that being changed, but other than that, I really don't see where you are coming from.

Nobody is trying to ban anything or lint it out of existence; only to make sure that people are conscious that something really fishy slipped into their dependency tree and are OK with that. If there are many like you, it wouldn't be linted out of existence at all. It does sound a bit like the argument that GMO usage shouldn't be labeled in ingredient lists because people won't want to buy it.

@Manishearth
Contributor

That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"? Labeling something does not make a value judgement, warning on it does. This is why crev and geiger are the right places for this, crev is equipped to handle "this crate does not gel with my subjective values". Crev is the counterpart of the GMO label, not this database.

@tarcieri
Member

tarcieri commented Apr 27, 2020

@Manishearth what about allowing cargo deny to use existing RustSec infrastructure to manage (optional) blacklists?

(Perhaps I should ask them about that idea before suggesting it up front, but)

@Manishearth
Contributor

Like, third party blacklist databases? Sure. I don't think it belongs here, though

@tarcieri
Member

More like a collection of crates which deliberately expose unsound behavior as @RalfJung described here:

#275 (comment)

...but rather than exposing that information through cargo audit, deliberately expose it through cargo deny instead, allowing it to use the existing RustSec infrastructure which it already integrates with.

@Manishearth
Contributor

Sure, sounds good

@Manishearth
Contributor

Manishearth commented Apr 28, 2020

I also think the database should try to distinguish between "vulnerability" and "can accidentally let you write UB" (i.e. unsound), because the social processes around fixing the two are different -- specifically, the latter may not require action on the part of the end user, but might on the part of intermediate crates. (End users still need to know, and perhaps pressure their upstreams to verify things, or audit them themselves.)

@bjorn3

bjorn3 commented Apr 28, 2020

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed, this crate enables others to intentionally escape unsafe.

fn call_asm_bytes(code: &[u8]) {
    unsafe {
        // Mark the buffer as executable, then reinterpret its data pointer
        // as a function pointer and jump into it.
        region::protect(code.as_ptr(), code.len(), region::Protection::ReadWriteExecute).unwrap();
        let f: fn() = std::mem::transmute(code.as_ptr());
        f();
    }
}

also lets you intentionally escape unsafe. It doesn't by itself cause a buffer overflow; instead you have to pass it asm bytes that perform a buffer overflow. However, it is unsound, as it doesn't verify the asm bytes for memory safety. This is not much different from other crates that allow you to intentionally escape unsafe.

@najamelan
Contributor

najamelan commented Apr 28, 2020

That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"?

If that cashier is called cargo-audit and is there because this shop advertises that they have health and environmental experts at their customers' disposal, and I specifically ask them if there is anything I might want to know about this product in those regards, they'd better.

Unfortunately, that is not the analogy. It's more like I go and eat at an organic restaurant, because, you know, unlike some C-burger restaurants, organic restaurants are not supposed to allow unsoundness in safe food. It so happens that I am one of those customers who always has their FDA expert and a mobile lab with them when they go out eating, so I ask my pocket expert to test the food.

The reasoning @CAD97 brings forth here is that my FDA expert should not tell me they detected GMOs in the organic food, because if they did, GMOs would be linted out of existence. That is kind of saying people really shouldn't have the choice, even when they go to great lengths specifically to find out, because I know they will reject it and my business model only works if I can force it onto them.

I don't say that was their intention; I'm only saying the logic really resembles it.

@CAD97
Contributor

CAD97 commented Apr 28, 2020

@najamelan the fact that you've argued for cargo publish to require an override flag (like --allow-dirty) when cargo audit is not clean damages your argument that this would be purely opt-in pedantic checking.

@najamelan
Contributor

That would be my preference, and I feel it's common sense in a safety-focused language, but that is not what happens (not today at least), and not what this advisory was about. It also doesn't mean that this needs to happen for all categories of advisories. What I take away from this discussion is that there is demand for:

  • more distinction between different types of advisories
  • a formal description of what goes in each category
  • better configuration options for what output you want to see

A possible implementation for cargo publish could be that it requires a flag to publish something to crates.io if it contains an exploitable vulnerability, but just prints the output from cargo-audit to the console for warnings.

@workingjubilee

Just as "GMO" is in actuality a very specifically legislated definition with gaping holes you could slip a three-eyed fish through (so people do in fact eat what they would probably label "GMO food" if its production were described to them, whether or not that is a naive decision on their part), it is not the case that people have a choice about whether or not their code includes unsafe in Rust. I assert this because it is common knowledge that the core and standard libraries rely on unsafe code, so it is challenging to write any practical program without eventually invoking an unsafe function (somewhere, deeply buried in the compiler). If the utility of warnings like unsafe is that we can verify the code and establish for ourselves that the safe interfaces to unsafe functions are indeed safe, and the primary purpose of plutonium is to circumvent cargo-geiger, then plutonium already existed even before it was written: even if you listed off all the locations to me, it is beyond me to verify the entirety of rustc, and, having experimented, it appears cargo-geiger does not even attempt to trace what my crate pulls in with use std::.

Sure, people say that open source makes bugs easier to notice, but I sure haven't checked if all 2701 contributors to rustc are in fact not secretly all alter egos creatively devised by Ken Thompson, here to make us really reflect on trusting trust. It would have been really clever of Ken Thompson to find a way to download his mind into my brain if so, but it would hardly be the most surprising discovery of the year, all things considered. I mean... 2020.

@RalfJung
Contributor

RalfJung commented May 2, 2020

@Manishearth

(I actually really want the converse of this crate, a macro that converts fn foo(args) {body} to unsafe fn foo(args){ fn foo_inner(args){body}; foo_inner() } so that we can experiment with rust-lang/rfcs#2585 )

You mean like https://crates.io/crates/unsafe_fn ? :D

@CAD97 Those are some good arguments. Indeed, I already pointed out that the weak spot in my own argument is the definition of soundness for a macro.

Re trusted!, part of me feels like it should be called unsafe_trusted!. But maybe that's just me desperately clinging to an objective soundness condition for macros? I feel like we ought to have a systematic way across the ecosystem to figure out if calling a macro could "accidentally cause UB", to use @Manishearth's formulation. But maybe that's hopeless?

So, given there seems to be rough consensus that tracking soundness violations in the DB would make sense but should be a distinct category, is macro soundness really the main remaining point of contention here? As in, @CAD97 @Manishearth, would you agree that a crate with a public function like this should get an advisory filed (in the soundness category), and that no amount of docs on the crate's side can make this not an unsoundness?

@CAD97
Contributor

CAD97 commented May 2, 2020

Yes, I fully agree that a publicly-exposed safe fn that performs unsafe operations without internally guaranteeing that their preconditions are met is worthy of an advisory stating that the function is unsound. The language has a way to mark the preconditions of a fn necessary for soundness: unsafe.

The language provides no such mechanism for macros.

Tangent on the safety of macros

I fully agree that the default for macros is to be safe, and that, if at all possible, a macro that is not safe to call should require unsafe at the call site, by internally calling some unsafe fn without wrapping it in an unsafe block inside the macro. But I also strongly believe that because the language provides no mechanism for actually marking a macro as unsafe to call, that property can only be determined from documentation. If such a macro does not have a # Safety header in its documentation, I would still be on board with filing a soundness advisory against it. To be honest, I think the only reasonable macro that allows writing arbitrary unsafe code inside it is a macro whose entire purpose is to be an alias for unsafe.

Unsafe code is tricky, and doing any macro source transforms on unsafe code is scary.
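
A sketch (invented names) of that pattern:

// This macro is "unsafe to call" even though it contains no unsafe block:
// it expands to a call of an unsafe fn, so the expansion only compiles
// inside an unsafe block written by the macro's caller.
pub unsafe fn read_unchecked(p: *const u8) -> u8 {
    *p
}

#[macro_export]
macro_rules! read_raw {
    ($ptr:expr) => {
        $crate::read_unchecked($ptr)
    };
}

// Usage: let byte = unsafe { read_raw!(some_ptr) };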
