New informational categorization: "unsound" #300
Another potential advisory you could add to the list is #298.
The short version is: "A crate is sound if safe code using the crate cannot cause Undefined Behavior. Otherwise, it is unsound." The devil is in the details, of course, which is why some individual judgment might still be required in some cases.
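To make that definition concrete, here is a minimal, hypothetical sketch (the function names are invented for illustration, not taken from any real crate):

```rust
// UNSOUND: a safe function that safe callers can use to cause Undefined
// Behavior, because nothing forces `bytes` to actually be valid UTF-8.
pub fn as_str_unsound(bytes: &[u8]) -> &str {
    unsafe { std::str::from_utf8_unchecked(bytes) }
}

// SOUND: the same operation, but marked `unsafe` so the obligation to pass
// valid UTF-8 is pushed to the caller; safe code alone cannot cause UB.
//
// # Safety
// `bytes` must be valid UTF-8.
pub unsafe fn as_str_sound(bytes: &[u8]) -> &str {
    std::str::from_utf8_unchecked(bytes)
}
```

Under the definition above, a crate exporting `as_str_unsound` is unsound even if every current caller happens to pass valid UTF-8.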
The three bugs listed here do not fall into the same bug category, and they should be treated differently.

**Soundness bugs that may introduce an exploitable bug**

Any reachable exploitable bug in a program qualifies as a security vulnerability. For executable files, the judgement is easy: if there is a program path that handles untrusted input (a file, a network socket, etc.) and triggers an exploitable bug, it should be considered a security bug. For libraries, however, there is one more layer of indirection. A user of a library decides how to use its APIs in their program, and whether or not a bug in an API becomes an exploitable bug depends on how the user structures their program. To the best of my knowledge, such bugs are handled as security vulnerabilities by many existing security advisory systems such as CVE and NPM. The Node.js Ecosystem Triage Guidelines define a library vulnerability as:
What's not described here is how NPM handles a potential security bug in uncommon usage of a library. The answer is that such bugs are also treated as security vulnerabilities, and NPM assigns a different severity score depending on how likely those APIs are to be used in a vulnerable way. CVE considers a similar aspect when assigning a CVSS score, too. For example, take a look at these two NPM security advisories: they are both classified as SQL injection bugs. The first was categorized as low severity because the vulnerable variables are unlikely to be controlled by user input, while the second was categorized as high severity because there are common, legitimate scenarios where the bug can be triggered by an attacker.

Unless we have a good reason not to follow the standard of existing security advisories, I would much prefer to classify a bug in this category as an actual security vulnerability, not an informational one. Note that we have already been filing advisories for this type of bug, such as RUSTSEC-2019-0034. One difference is that the Rust security advisory database currently doesn't assign a severity score to reported bugs. I think it would be good to have an intuitive scoring system so that people can estimate the impact of a bug quickly (maybe I will open another issue to discuss this).
**Undefined behavior that happens to work as expected**

There is another category of soundness bug in which the code contains undefined behavior but happens to work, because the compiler didn't exploit the UB and the behavior of the compiled binary matches the intuition of the programmer. Many uninitialized-memory-related UBs and raw pointer mutability violations are in this category. Such code works fine for now, and probably won't break in the meantime, but there is no guarantee that it will behave the same way in the future.

This is a unique type of bug for Rust. In C/C++, where undefined behavior is so prevalent, not many people seem to be interested in this type of bug unless it starts causing problems. In Rust, we can detect and prevent these errors effectively with Miri, which is becoming an executable specification of Rust to a certain degree.

I have no strong opinion on this type of bug. I feel that filing a security advisory for these bugs is a bit too much unless they create observable bugs in the output program, but I can also agree with people who think these bugs are potential time bombs and should be listed in the advisory database. I'm fine with either filing an informational advisory or doing nothing. I'm not aware of any Rust security advisory filed for this type of unobservable bug. If we decide to file advisories for them, I expect a lot of advisories to be filed in the first few weeks or months for known bugs, such as those found by Miri.
I do not think this is a meaningful distinction from your first category. Since this is Undefined Behavior, an unrelated change anywhere in the program, or a compiler update, can silently and unpredictably turn a "happens to work" bug into a "may be exploitable" bug.
The difference here is "there is a known way this bug might create a security vulnerability" versus "although the program contains UB, there is no known condition under which it goes wrong". This is not a distinction of the fundamental characteristics of the bugs, but rather of the observed/expected security impact of the bugs.

I didn't say that all uninitialized-memory-related bugs are in the second category. If we know that such bugs cause actual security-related misbehavior under certain conditions, those bugs should get security advisories with a description of when that happens. The bugs that I had in mind for the second category are something like these:

```rust
// creates an uninitialized reference and immediately initializes it
let a: &mut u32 = &mut unsafe { mem::uninitialized() };
*a = 42;
```

```rust
let a = 1;
let a_ref = &a;
// overwrite the content under a shared reference without using `UnsafeCell`
unsafe { *(a_ref as *const i32 as *mut i32) = 2; }
// theoretically this can print anything or even crash the entire program,
// but in practice I'm only aware of the possible results of printing 1 or 2
println!("{}", a_ref);
```

```rust
// creates a reference to an invalid location but never dereferences it
let a: &u32 = unsafe { &*(0x1234 as *const u32) };
```

These are the most common types of UB that I've encountered in not actively maintained crates, and I'm not aware of any case where these UBs turned into exploitable bugs. If there is any known such case, I'm happy to modify my comment and reclassify them as actual security bugs.

Edit: The third example generates a trap instruction when given certain addresses on AArch64. This is an observable misbehavior and might be a DoS vector in some cases.
I agree with you. I'm not suggesting that they are completely safe; they are bugs that should be fixed. However, the decision here is not whether they are bugs, but whether we should file security advisories for them. If one such bug is fixed before it starts causing observable problems, do we want a security advisory (maybe an informational one) for that bug? In my opinion, whether a bug has a known security impact or not is a difference that should be considered when filing a security advisory.

P.S. Note that the actual bugs are not written in such an obviously wrong way; these are simplified patterns for demonstration purposes. The first pattern usually creates an uninitialized array on the stack and uses only the initialized portion of it. The second pattern is often found in concurrency-related code. There is no guarantee that the value will ever be updated, or that it will be one of 1 or 2 once it is updated, but as long as the code handles both the stale value and the updated value, it is unlikely to become a security bug by itself.
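For reference, the first pattern can be written soundly today with `MaybeUninit`; this is a rough sketch with invented names, not code from any of the crates discussed:

```rust
use std::mem::MaybeUninit;

// Sound version of the "uninitialized stack array, use only the initialized
// portion" pattern: elements live in `MaybeUninit` slots and only the
// explicitly initialized prefix is ever read.
fn first_squares(n: usize) -> Vec<u32> {
    let mut buf: [MaybeUninit<u32>; 8] = [MaybeUninit::uninit(); 8];
    let n = n.min(8);
    for i in 0..n {
        buf[i].write((i as u32) * (i as u32));
    }
    // Safe because exactly the first `n` elements were initialized above.
    buf[..n].iter().map(|m| unsafe { m.assume_init() }).collect()
}
```

Unlike the `mem::uninitialized()` version, this never materializes an uninitialized `u32` value, so there is no UB to begin with.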
@Qwaz the distinction between "there is a known way this bug might create a security vulnerability" versus "although the program contains UB, there is no known condition under which it goes wrong" is one we've tried to capture in security advisories vs. informational advisories, where this issue is discussing the latter. For cases that do fall into the first category, my recommendation is to publish a security advisory that documents a known vulnerability.
@tarcieri I was disagreeing with this part of the original issue description. The summary of my argument is:
So let me back up: you've stated things in a somewhat vague way that loses the nuances of what we're trying to capture here, namely with the statement "there is no known condition under which it goes wrong". This is a contradiction in terms (or at least evokes "if a tree falls in the woods"): if you're saying there is no way to actually execute the code with undefined behavior (i.e., if that's what you mean by "wrong"), I'd say by definition the program doesn't have any. Or to flip that around: if code is unsound, by definition it must be possible for things to go wrong. See #276 (as mentioned in the OP) for a case that's interesting for this category type.

This is the case for almost all of the advisories in this proposed category: if a programmer goes out of their way to abuse undefined behavior / soundness bugs to achieve memory corruption, in a lot of cases it will be possible. But actively seeking out memory exploitation by abusing soundness bugs is categorically different from code where security vulnerabilities are likely to arise in practice by using an unsound dependency.
I'm saying that this is the wrong nuance to capture with informational advisories. The whole demonstration of how CVE and NPM handle library bugs was to disagree with that idea. Sure, we can decide to change that, but to repeat myself: "unless we have a good reason not to follow the standard of existing security advisories, I would much prefer to classify a bug in this category as an actual security vulnerability, not an informational one."
Not at all! There is no clear-cut definition of "security vulnerability", but I think "a bug that allows an attacker to perform an action that shouldn't be allowed" captures the important concepts well, so let's build our discussion around this (or please suggest another definition). What I meant by "goes wrong" is introducing a security vulnerability.

Soundness is somewhat related, but it is an orthogonal concept that captures the formal correctness of the program. As @RalfJung said, "a crate is sound if safe code using the crate cannot cause Undefined Behavior. Otherwise, it is unsound." Not all unsound programs exhibit security vulnerabilities (due to how compilers work), and not all security vulnerabilities are undefined behavior (e.g., logic errors).

Consider a program that creates an uninitialized reference and initializes it immediately. The program is unsound and exhibits undefined behavior; there is no argument here. But is it a security vulnerability? No. As @RalfJung said, "an unrelated change anywhere in the program, or a compiler update, can silently and unpredictably turn a 'happens to work' bug into a 'may be exploitable' bug", but it's not a security bug until that actually happens, and I don't think it will happen in the meantime unless we overhaul how Rust/LLVM works.

Then the next question is: do we want a security advisory for this type of soundness bug? That's the part I don't have a strong opinion on. I'm fine with filing a security advisory, filing an informational advisory, or doing nothing, as long as the choice is backed up with reasonable arguments. The topic I do have a strong opinion on is that we should file an actual, not informational, security advisory for a library that allows its consumer to introduce security vulnerabilities using its public safe APIs.
Code like this

```rust
// creates an uninitialized reference and immediately initializes it
let a: &mut u32 = unsafe { mem::uninitialized() };
```

is known to cause traps on ARM architectures, at least. I honestly have no interest in further "weaponizing" such actually exploited UB into an exploit for a concrete vulnerability -- I would rather spend my time finding another instance of UB elsewhere than determining just how bad this one case of UB is. If other people enjoy that kind of game, of course I won't stop them. ;)
Notice the part where I said "unrelated change anywhere in the program". The compiler doesn't need to change at all for the behavior to change here; it suffices to update some entirely unrelated dependency. This is unlikely, but possible. Maybe my mindset is too theoretical, but this is enough for me to be worried. ;)

I feel like the new thread here is derailing the original discussion. The original discussion was about whether "we know this crate is unsound" is sufficient grounds to file some kind of advisory. IMO the answer is yes, and a lot of people agreed in the discussion, so I wonder what it would take to move forward with that.

Now @Qwaz says that some of these unsoundness bugs should maybe get a higher severity than just "this is unsound". As in: if we have further information beyond "this crate is unsound", what do we do with that information? I have no strong opinion here, except that there should be a new issue to discuss that case, as it is fairly unrelated to what we do if all we know is "there is an unsoundness". To give a concrete example, rio should definitely get a "this is unsound" advisory; whether that advisory gets bumped to "this is insecure" is a separate discussion and off-topic in this thread.
The original issue suggested creating an informational advisory category for crates which are known to be unsound but not known to contain a security vulnerability. My main objection was against the original issue's definition of "not known to contain a security vulnerability", and I don't think the discussion on that is totally off-topic, as its result will decide which type of bugs go into the suggested category. It would have been a separate discussion if the original issue didn't contain the "not known to contain a security vulnerability" part, and security advisories and unsound advisories were filed independently for the same bug.

However, I'm fine with separating this into two different issues, such that (1) this issue is confined to discussing whether we should create an informational unsound advisory category for unsound but not-known-to-be-security bugs, and (2) another issue discusses the conditions for qualifying as a security bug, if that will help us make progress in the discussion. I think there is general agreement on (1) that we want such advisories, if we postpone the discussion of the definition of "not known to contain a security vulnerability" to issue (2).

**Off-topic question about the trap on ARM architectures**

I realized that the example should have been:

```rust
// creates an uninitialized reference and immediately initializes it
let a: &mut u32 = &mut unsafe { mem::uninitialized() };
```

The original code converted an uninitialized value to a reference instead of creating a reference to an uninitialized value 😛. Actually, a debug build with rustc 1.44.1 on x86 panics with a cute panic message: "thread 'main' panicked at 'attempted to leave type". With that mistake fixed, the code compiled with the same version of rustc seems to work very well on
(The ARM problem was from before I added those "cute panics" as extra safeguards. You can still get that same old behavior with
If you think this project's definition of what constitutes a concrete security vulnerability is underspecified, I would definitely suggest creating an issue separate from this one for that discussion. I'll note that you can find our current vulnerability criteria in CONTRIBUTING.md. This isn't intended to be exhaustive (in the way, say, Common Weakness Enumeration is), as we deliberately place certain potential vulnerability types out of scope for the purposes of this project. For example, while we do accept advisories for concrete DoS attacks against network services written in Rust where a remote attack is possible, we have opted against filing advisories for every possible unexpected panic under the sun as a potential DoS vector.

I see the "unsound" categorization proposed in this issue as something which sits in the middle between those two extremes: not representing a concrete threat which we can point to directly and say "this meets the CWE criteria for security vulnerability X", but something which is still potentially much more dangerous than an unexpected panic, with the possibility for a consumer of that dependency to accidentally create a vulnerability in their Rust code where, in the absence of interaction with an "unsound" crate, that code would otherwise be safe (i.e., at worst causing a panic). I don't think there's a lot of precedent for this, particularly in other language-centric security advisory databases, as it's trying to capture nuances that are somewhat unique to Rust's safe vs. `unsafe` distinction.

That said, I think the definition of what otherwise meets the definition of a security vulnerability is already covered by materials present in this project as well as vulnerability categorizations like CWE, and if you'd like to continue discussing that, it'd be best to open a separate issue.
@tarcieri so what would be the next step to move the idea of informational advisories for soundness problems forward? Is it a PR against https://github.com/RustSec/advisory-db/blob/master/CONTRIBUTING.md#criteria, is there some other process to follow, or are you waiting for more feedback, or ...?
@RalfJung concretely, it'd be opening a PR to add a new variant to this enum: https://docs.rs/rustsec/0.20.0/rustsec/advisory/informational/enum.Informational.html
Ah, those are in different repos... probably best to do one first to centralize discussion.
Yeah, sounds good. I'd suggest starting with a PR against https://github.com/RustSec/rustsec-crate to add a new variant to the `Informational` enum.
Opened the second issue to discuss the extent of security bugs in #313 |
So regarding that, this is basically the discussion happening in #313, right? "unsound" is an informational advisory, while that file lists what qualifies as a vulnerability. I don't know where in that file would be a good place to discuss unsoundness. Should this be a new subsection?
It'd be good to add an additional section on informational advisories in general, also noting that we track unmaintained crates.
Based on the discussion on #276, as well as the "Security advisories for April 2020: rustqlite, os_str_bytes, flatbuffers" Reddit thread, it sounds like there are a number of people who do want to know about "unsound" crates, i.e. crates which present an unsound API in safe Rust but do not in and of themselves contain a security vulnerability.
I think the best path forward here is to track these crates using informational advisories, similar to the ones we use for unmaintained crates. It'd be good to provide a precise technical definition of what is considered "unsound" (I'll defer to someone like @RalfJung for that), but in short: crates which provide APIs that do not uphold the invariants of safe Rust, or use `unsafe` in a way that does not uphold the invariants expected of correct unsafe code. Such crates don't contain a vulnerability in and of themselves (or else they would deserve security advisories), but can be misused by other crates to create code containing, e.g., a concrete memory safety vulnerability.
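As a hypothetical illustration of such an API (names invented for this sketch, not from any real crate): a safe function that hands out two `&mut` borrows into the same slice without checking that the indices are distinct lets purely safe callers create aliasing mutable references, which is UB.

```rust
// UNSOUND: compiles fine and is not marked `unsafe`, but calling it with
// i == j from safe code yields two aliasing `&mut T` (Undefined Behavior).
pub fn get_two_mut<T>(slice: &mut [T], i: usize, j: usize) -> (&mut T, &mut T) {
    assert!(i < slice.len() && j < slice.len());
    let ptr = slice.as_mut_ptr();
    // The missing `assert!(i != j)` is exactly what makes this API unsound.
    unsafe { (&mut *ptr.add(i), &mut *ptr.add(j)) }
}
```

A sound version would additionally assert `i != j` (or return an `Option`), in the same spirit as the standard library's `split_at_mut`, which proves to the borrow checker that the two halves don't overlap.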
We have both existing vulnerabilities in the database which fit this categorization, and some open requests to file them:
Now granted, some of these are debatable as to whether or not they should be classed as a security vulnerability (e.g. #293). That said, it seems to me this information is worth collecting, and it will likely keep coming up.
Informational advisories are surfaced as warnings by `cargo audit` (and `cargo deny`), and are therefore kept out-of-band from security vulnerabilities (they can be escalated to a hard failure via the user opting in).

Given that, I propose adding a new "unsound" categorization for informational advisories to the following enum:

https://docs.rs/rustsec/0.20.0/rustsec/advisory/informational/enum.Informational.html

After that, we can review existing advisories which would fit within this categorization, and proceed with filing advisories for the above three crates (`bigint`, `rio`, `smallvec`).