Unsafe register access option. #714
Comments
Seems reasonable to me.
I think this decision should not be made with only the individual registers in mind. You are of course right that on most (all?) microcontrollers there are registers which are dangerous to access and should not be accessed carelessly. Ideally it should not be possible for random application code to access them outside of unsafe code.

However, in practice this is already the case: the registers are only accessible via owned singletons, and code with access to those singletons is usually limited to some HAL and a relatively small amount of initialization code. Therefore, I think there are mainly two sensible choices:
The Rust docs note:
If it is safe to obtain a singleton, and it is safe to write to a register using it, but that write performs undefined behaviour in certain scenarios (per the manufacturer's specification), then isn't the svd2rust API unsound regardless of whether access to such objects is 'limited' to whoever calls the safe API first? We already distinguish between safe and unsafe register writes. However, I think the proposed third option is reasonable:
This is what we have for writes today, using the enumeratedValues from the SVD. Currently I don't believe reads with side effects are treated any differently from ordinary reads. But, @pellico, if I understand your suggestion correctly, you're saying "all write access" and also "all read access with side effects" should be unsafe - in other words, the only safe operations would be reads without side effects, is that right? I assume you are referring to the SVD readAction attribute.
I don't think it's so easy. UB is a construct defined by the programming language, specified in terms of an abstract machine. It doesn't even address interactions with hardware, and for a good reason: it's difficult enough to define UB inside of this relatively simple abstraction. If you extend the notion of UB to include hardware outside of that abstract machine, and try to apply the rule you cited in a strict way, a huge number of APIs would need to become unsafe.

Of course, the LED example is ridiculous. It was meant to be, obviously. But where do you draw the line? There are a lot of reasonable choices. And whenever there is a discussion about whether some hardware interaction should be marked as unsafe, there is no obvious place to draw that line.

IMHO doing an individual assessment of every single register and its possible values is a gigantic amount of work, and not justified by the gain in safety or ease of use. (And every time we decide that a decision was wrong and a register or value should be marked as unsafe after all, that is a breaking change.)

So I'd prefer to find easy rules, knowing that they can't cover every detail in the best way possible. Which is why I like the idea "every register access is unsafe". The current handling, which as far as I understand makes writes unsafe unless the PAC lists allowed values, is fine as well. But it's already both more complex and opening more room for discussions.
I agree. All or nothing. For fields with …
I agree, but that's not work that svd2rust has to do. It's work that the silicon vendor or other maintainer of the PAC can elect to do if it suits them - especially if they are looking to use the svd2rust-generated output in a safety-critical system. This is assuming such attributes exist in the SVD standard - I'm not an expert on that.
I agree it's not Undefined Behaviour, but as the author of an API, if I had a function that would crash the chip (e.g. …), I would still want it to be marked unsafe. What I think is novel here is that Infineon are offering a first-party svd2rust-generated API for their microcontrollers, and so customers, rather than community HAL maintainers, have to use it - hopefully correctly.
There is an attractiveness to this position that I do understand. Maybe it's simply not appropriate to have these rules encoded in the types and APIs generated by svd2rust output, and we just say that some extra level of API would need to be layered over the top. You just put a big note at the top that says "Hey, just because the PAC has a safe API doesn't mean it's right for you to poke any given register at any given moment".
Yes, this is correct!
Could you link to where it is described that UB refers to the abstract machine? From the Rust Reference:
Moreover, the following are listed here as unsafe operations:
I think the hardware behavior is somehow involved here (many intrinsics are hardware-specific instructions, and some of them are used to write registers). At the end of the day this boils down to having a definition of unsafety and complying with that definition. I would be fine if the Rust specification and Reference stated that undefined behavior of a hardware peripheral can happen in safe Rust code. My proposal for where to draw the line of UB in hardware is:
👍
Sorry, I don't have a quote at hand, and I'm not sure if it's explicitly written somewhere. So technically I may have been wrong when I wrote "UB is a construct defined by the programming language, specified in terms of an abstract machine." But I think the conclusions are still valid. As an example of something which doesn't seem to be covered by the definition of UB, see totally_safe_transmute, https://blog.yossarian.net/2021/03/16/totally_safe_transmute-line-by-line, which relies on access to /proc/self/mem.

Also note that UB is a far-reaching concept. It doesn't only mean "the machine may crash somehow". Instead, the compiler can do any code transformation under the assumption that UB will never happen. So you can't argue your way out of UB. (Like in: "I don't mind that there is a race condition, because I don't really care if the result is wrong every now and then, when some unfortunate timing happens." No, if the compiler can see the UB, it can just assume that the code will never be executed and, for example, replace it with some constant value, which will always be wrong. Or worse.)

As I understand it, this is not what we are talking about here. From the compiler's point of view, those memory-mapped registers are just some random memory locations without specific meaning. As long as you observe the rules which also apply to regular memory, the compiler doesn't care what you do to them. Yes, turning off your CPU clock while your program is running probably isn't a good idea. But it also is not UB (as in "the compiler might ignore it"); it will halt your program in a very (hardware-)defined way.
For context, Rust's stance on /proc/self/mem and similar OS features is documented here: https://doc.rust-lang.org/std/os/unix/io/index.html#procselfmem-and-similar-os-features
In microcontrollers we have behaviors that are more undefined, and less consistently reproducible, than writing outside an array boundary. When a behavior is declared undefined in a microcontroller user manual, it means that nobody has tested the condition in simulation or on a real device. The result of the undefined operation is unpredictable. The behavior can change from device to device, or even for the same device at different points in time (this is typical for power control, some PLL control circuits and other mixed-signal peripherals).

About the /proc/self/mem example

Regarding the example of access to /proc/self/mem: it looks like the real argument was that it is too difficult to create a safe abstraction that defends against it. However, if I follow the argument described in https://doc.rust-lang.org/std/os/unix/io/index.html#procselfmem-and-similar-os-features (the external world shall not be considered), I am not understanding the following:

Bottom line

I think we could agree that the requirements to be fulfilled to consider an API safe are not well defined, and that there is no common understanding in the community and in practice.

Why is it important to have well-defined requirements for a safe API?

As an embedded developer who has to provide a customer with a safe and sound Rust library, I want to match their expectations exactly, but at the same time I don't want to put in more effort than required. In other words, the safe property is a contract on the API that I want to fulfill at minimum cost.

Temporary proposal until a proper definition of safe is agreed on
You're right, it's not consistent to say you don't need unsafe for some register writes but do need it for others.
This is just a peculiarity of svd2rust. It defaults to being unsafe to write any register; if specific enumeratedValues are provided then those writes are safe, and if those values cover all possible bit patterns then writing the raw bits is safe as well.
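For readers less familiar with svd2rust output, a rough sketch of that distinction (the peripheral and field names below are invented, not taken from any real PAC):

```rust
// Hypothetical svd2rust-style PAC: `MODE` has enumeratedValues covering only
// some bit patterns, `DIV` has none.
let p = pac::Peripherals::take().unwrap();

// Safe: the SVD lists the allowed values, so the closure API only exposes
// valid variants.
p.TIMER.ctrl.write(|w| w.mode().continuous());

// Unsafe: writing raw bits to a field without (complete) enumeratedValues
// could produce a reserved or undocumented value.
p.TIMER.ctrl.modify(|_, w| unsafe { w.div().bits(0b101) });
```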
👍
My preference would be for such a switch to not only make all access unsafe, but also to stop generating the owned peripheral singletons.
Maybe the 'unsafe' API can just give the address of the RegisterBlock objects? It's a raw pointer to a struct, so it's all unsafe, and you can read/modify/write the registers as you wish. We'd just need to expose constants for the shifts and masks required to access each field in each register, exactly like the C API. Or was the idea to have an 'unsafe_write' etc. which took a closure with a proxy arg that only had unsafe methods? An svd2rust flag would often mean building and publishing both variants, which is more work. A feature flag in the generated code might work?
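As a rough sketch of the first idea, shift/mask constants plus raw volatile access (all names, addresses and offsets below are invented for illustration, not something svd2rust emits today):

```rust
use core::ptr::{read_volatile, write_volatile};

// Hypothetical constants a generator could emit, C-header style.
pub const TIMER_BASE: usize = 0x4000_0000;
pub const TIMER_CTRL_OFFSET: usize = 0x04;
pub const TIMER_CTRL_MODE_MASK: u32 = 0b11;
pub const TIMER_CTRL_MODE_SHIFT: u32 = 3;

/// Read-modify-write of a single field via raw pointers; everything is unsafe
/// because nothing constrains the address, the value or the access ordering.
pub unsafe fn set_timer_mode(mode: u32) {
    let reg = (TIMER_BASE + TIMER_CTRL_OFFSET) as *mut u32;
    // SAFETY: the caller must guarantee this address is a valid, mapped register.
    unsafe {
        let mut val = read_volatile(reg);
        val &= !(TIMER_CTRL_MODE_MASK << TIMER_CTRL_MODE_SHIFT);
        val |= (mode & TIMER_CTRL_MODE_MASK) << TIMER_CTRL_MODE_SHIFT;
        write_volatile(reg, val);
    }
}
```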
Yea, we already have ptr() on each peripheral (and Peripherals::steal()), which gives you a raw pointer to the RegisterBlock for exactly that sort of raw access.
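For example (the PAC crate name and TIMER peripheral are hypothetical; ptr() and Peripherals::steal() are the escape hatches svd2rust already generates):

```rust
use pac::TIMER;

fn raw_poke() {
    // Raw pointer to the peripheral's RegisterBlock, no singleton needed.
    let timer = TIMER::ptr();
    // Ownership is bypassed, so the caller is responsible for avoiding
    // conflicting access to this peripheral.
    let ctrl = unsafe { &(*timer).ctrl };
    ctrl.write(|w| unsafe { w.bits(0) });
}
```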
I think that removing the concept of owned singletons has some deep impact on Low Level Driver architecture that may be undesirable. Owned singletons allow composition of drivers coming from different sources.

Some time ago we developed a tool where different developers could create peripheral driver components that can be instantiated multiple times (component ~ class). Each component has to declare which kind of HW resource group it needs (a HW resource group is modeled by a group of register bitfields that implement the same functionality). The tool exclusively assigns to each instance a different group of register bitfields, based on the requested connectivity and which pins the end user wants to use. Due to the lack of an ownership concept in C, this exclusive assignment had to be enforced by the tool. This allowed combining, in the same project, different kinds of drivers for the same peripherals, e.g. one UART was consumed by a basic driver instance, while another UART plus a DMA channel was consumed by a more advanced component.

Without PAL ownership, the low level drivers have to come from the same package to avoid multiple independent accesses to the same registers. This is still workable, because in my experience developers end up using the LLD provided by the silicon vendor (the one that exposes all functionality) and develop higher abstraction layers on top of it, e.g. a standardized HAL. I am assuming silicon vendors will at some point offer a Rust LLD :-)

The removal of ownership would simplify the work of the PAL provider, but I don't fully understand all the benefits for the PAL user. If the problem is too coarse granularity, I think it would not be too difficult for the semiconductor vendor to split peripherals into smaller units that still make sense from a functionality point of view.
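A minimal sketch of the kind of composition owned singletons enable (the crate, driver and peripheral names are invented for illustration):

```rust
// Two drivers from different crates, each taking exclusive ownership of the
// hardware it needs; the type system prevents them from touching each other's
// registers.
let p = pac::Peripherals::take().unwrap();

// Basic driver from one crate consumes UART0 by value.
let console = basic_uart::Uart::new(p.UART0);

// A more advanced driver from another crate consumes UART1 plus a DMA channel.
let modem = dma_uart::DmaUart::new(p.UART1, p.DMA_CH0);
```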
+1 for removing peripheral singletons and making all register access unsafe. I've found they have a few severe downsides; I've written about them here. Embassy's HALs use this approach and define their own singletons at the HAL layer. It has turned out to work fine, and completely solves these downsides.
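A rough sketch of that pattern (not Embassy's actual code, just an illustration of a HAL-defined singleton layered over a PAC whose register access is entirely unsafe; the register address is made up):

```rust
use core::sync::atomic::{AtomicBool, Ordering};

/// HAL-level singleton: a zero-sized token proving exclusive ownership of UART0.
pub struct Uart0 {
    _private: (),
}

impl Uart0 {
    /// Hand out the token at most once; the HAL, not the PAC, enforces uniqueness.
    pub fn take() -> Option<Uart0> {
        static TAKEN: AtomicBool = AtomicBool::new(false);
        if TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Uart0 { _private: () })
        }
    }

    /// Safe wrapper: the HAL guarantees the access is sound, so the unsafe
    /// register write stays encapsulated here.
    pub fn enable(&mut self) {
        const UART0_CTRL: *mut u32 = 0x4001_0000 as *mut u32; // hypothetical address
        // SAFETY: we own UART0 exclusively via the token, and this is a plain
        // enable-bit write with no ordering hazards on this imaginary device.
        unsafe { core::ptr::write_volatile(UART0_CTRL, 1) };
    }
}
```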
Following the definition of unsafety described in the Ferrocene specification and the Rust Reference, unsafe code is code that may result in undefined behavior.
In our microcontrollers, undefined behavior can be triggered for some peripherals if a required write/read order is not followed.
Therefore I think that all write accesses, and all read accesses that have side effects (SVD supports an attribute for this), should be declared unsafe, simply because the hardware could exhibit undefined behavior.
A HAL or Low Level Driver in Rust should then solve the safety issue by providing an API that makes it impossible to trigger undefined behavior.
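For example, a minimal sketch of what such a wrapper could look like (the peripheral, register layout and ordering requirement are invented for illustration):

```rust
/// Hypothetical PLL driver that encodes a required register ordering:
/// the divider must be programmed before the PLL is enabled.
pub struct Pll {
    base: *mut u32, // base address of the (imaginary) PLL register block
}

impl Pll {
    /// Safe API: callers cannot enable the PLL without configuring it first,
    /// so the ordering constraint from the user manual is upheld by construction.
    pub fn configure_and_enable(&mut self, divider: u32) {
        // SAFETY: `base` points to the PLL register block and this is the only
        // code touching it; offset 0x0 = divider, offset 0x4 = enable (made up).
        unsafe {
            core::ptr::write_volatile(self.base, divider);
            core::ptr::write_volatile(self.base.add(1), 1);
        }
    }
}
```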
Moreover, I find it somewhat contradictory that presently all register access (with some exceptions) is considered safe, while calling a low level driver implemented in C is considered unsafe. I see a clear similarity between register access and a C API.
Am I missing something?
Proposal:
Provide an svd2rust option to mark all write accesses, and all read accesses that have side effects, as unsafe.
This will not break backward compatibility, and it will allow migrating to a safer implementation.
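To make the intent concrete, this is roughly how user code might look if such an option existed (the option itself and the peripheral/field names are hypothetical):

```rust
let p = pac::Peripherals::take().unwrap();

// Reads without side effects stay safe.
let _busy = p.TIMER.status.read().busy().bit_is_set();

// Every write, and every read with side effects (e.g. a read-to-clear FIFO
// register), would require an unsafe block under the proposed option.
unsafe {
    p.TIMER.ctrl.write(|w| w.enable().set_bit());
    let _byte = p.UART0.rxdata.read().bits();
}
```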
PS: Some people on the Embassy team share the same concerns.