Minutes 12 Oct 2023
Paul Albertella edited this page Nov 1, 2023
Host: Paul Albertella
Participants: Sebastian Hetze, Lukas Bulwahn, Igor Stoppa
- Discuss 'Proven in use' document draft from Sebastian
- Continue discussion of models describing Linux role in safety-critical systems
- Sebastian
- Working with Bitkom - they have an internal GitHub repository where material is worked on in private before it is published
- Can help to identify potential commercial / political issues
- Lukas: This is addressed in the code of conduct / disclaimers associated with ELISA
- e.g. for minutes:
"The discussions in these meetings are exploratory. The opinions expressed by participants are not necessarily the policy of the companies."
- We need to add a disclaimer like this to the GitHub repository, making this clear for submissions to the project. Do we need to extend this disclaimer?
- Sebastian: Also concerned about the implication that the text is ‘authoritative’
- Igor: If the output is all subjective, it is not so useful
- Lukas: Issue of ‘implied competence’ has been discussed before
- Perhaps have contributors include a ‘competence profile’ - mini CV, to address this
- Readers of material need an equivalent of ‘caveat emptor’
- Igor: Can also establish validity by reference to sources of information / data informing the writer’s conclusions
- Sebastian: Will add it in a PR - we can discuss it at the workshop if possible
Models of the role of Linux:
1. Linux is present in the system, but has no role in any safety scenario, other than as a source of interference
2. Linux is present and has an active role in a safety function, but no responsibility for ensuring that it is correct
3. Linux has responsibility for some parts of a safety function or functions
4. Linux has responsibility for all safety functions
What other examples are there?
- Igor: Case where a hypervisor is being used to partition workloads of different criticality
- Lukas: What about an example where an application is run a) on a Windows system and b) on a Linux system, and then the results are compared on c) a different OS running on different hardware? Which model does this correspond to?
- Model 2, because Linux has a role as b) but is not ultimately responsible for ensuring that the safety function is correct (that responsibility is assigned to c)
- Sebastian: This is an example of where Linux seems to be an increasingly attractive option: running a workload that is part of a safety function, but does not have ultimate responsibility
- Paul: The question is not only whether these models cover all possibilities, but also how they are useful
- Igor: Also need to consider availability as part of these models
- Lukas: Yes - availability meaning that we need to assert not only that the result is correct, but that this verified result is available within a certain timeframe
- Igor: But what if both systems provide the wrong result?
- This is a failure that we can’t protect against
Examples of model 1:
- An IVI system in a car that is running alongside a safety function
- Linux running on a mobile phone that connects to a car
- Could we think about this in terms of how much trust we are placing in Linux in a given system context?
- e.g. Can we assume only a ‘credible’ fault, or do we have a malicious attacker that can exploit faults or chains of faults?
- This brings security into the equation as well
- We should classify the kinds of faults or interference that we need to consider with respect to Linux