
Minutes 14 Mar 2024


Host: Paul Albertella

Participants: Igor Stoppa, Daniel Weingaertner, Pete Brink, Sebastian Hetze, Luigi Pellecchia, Raffaele Giannessi, Florian Wuehr, Philipp Ahmann

Agenda:

  • Finish defining the 'bash use case' that OSEP will consider as a test subject for applying the safety checklist [1]
  • Topics to propose for Elisa Workshop (Lund, 4/5 June)? [2]
  • Defining 'core' parts or functions of Linux? [3]
  • Process and criteria for adopting contributed content [4]

Discussion

Naming for the use case

  • “Fear the KNU!”
  • “Don’t let the KNU trample all over your safety tasks”
  • “The KNU is lurking in EL1”

Serious point: the KNU represents a set of possible interferences (with some example C code to illustrate it) - how does your use of Linux deal with these?

Igor: Trying to illustrate that many of the strategies / mitigations for achieving FFI (freedom from interference), e.g. userspace device drivers, can still be undetectably corrupted by a misbehaving piece of kernel code. There are no protections against this. There are protections designed to defend against meaningful attacks through the process memory map, but not against corruption of the linear map by the kernel.
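For illustration only (this is not Igor's example code), a minimal kernel-module sketch of the class of interference described above: kernel code running at EL1 writes through the linear map and silently corrupts memory that no userspace protection can defend. The physical address used here is an arbitrary placeholder, not a real target.

```c
#include <linux/module.h>
#include <linux/io.h>   /* phys_to_virt() */

/* Placeholder physical address, assumed to hold a page owned by a
 * safety-relevant userspace task; in a real fault this would simply
 * be the target of a stray pointer, not a constant chosen on purpose. */
#define STRAY_PHYS_ADDR 0x40000000UL

static int __init stray_write_init(void)
{
	/* Kernel code at EL1 can reach any RAM through the linear map.
	 * No MMU fault is raised: userspace mappings and permissions do
	 * not apply here, so the corruption is silent and undetectable
	 * by the victim process. */
	unsigned long *p = phys_to_virt(STRAY_PHYS_ADDR);

	*p = 0xDEADBEEFUL;	/* clobbers whoever owns that page */
	return 0;
}

static void __exit stray_write_exit(void)
{
}

module_init(stray_write_init);
module_exit(stray_write_exit);
MODULE_LICENSE("GPL");
```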

  • Luigi: Doesn’t this create more fear about using Linux in safety?
  • Igor: Yes, but if we are not honest about the risks, we are not taking safety seriously

Paul: Are there strategies for dealing with the threat / risk?

  • Igor: Given the state of Linux today, we cannot say that Linux is ‘safe’, but we can say that ‘given this system design, I can have a safe system in spite of Linux’
  • Igor: Basis for trust is being honest about the risks / issues - our reasons for choosing Linux are not about safety.

Pete: Early discussions in ELISA seemed to imply that there could be such a thing as a ‘safe Linux’, but we are saying that this is not the case.

  • Real time capability in Linux has been debated and extended for many years, but it cannot be relied upon absolutely.
  • CHERI / Morello provide a hardware-enforced protection layer that addresses some of these problems
  • But there is a performance impact and this does not absolve you of the need to argue that it is safe.

The ‘dream’ (especially in automotive) was to have Linux completely replace the plethora of ECUs, but in practice a system cannot be built without other components involved.

  • Making the case for Linux as part of a system depends on the system’s requirements, e.g. high availability (or quick recovery) in the event that the Linux-based component fails in an unrecoverable way. There are experimental solutions for rebooting quickly, but these are not fast enough for realtime scenarios.

Paul: Do we need a position statement summarising the above?

Igor: Another document “So you want to design a safety-related system involving Linux…” with guidance about hardware choices, system design decisions.

Next:

  • Plan for how we, as ELISA or OSEP, can ‘endorse’ the set of arguments and foundational principles that Igor’s documents articulate
  • Illustrate this by applying it to a simple use case
  • Show what a solution would look like for that use case
  • May be for a very circumscribed system / system design
  • This is still a useful starting point
  • If we can’t find a solution, then we can at least come to a conclusion!

Igor: Next problem - how can you trust a kernel subsystem if you don’t know how it works?
