Utilizing security methods of FLOSS GPOS for safety

Nicholas Mc Guire, Lanzhou University

In contrast to safety, security has been a major concern of general purpose operating systems (GPOS) for a long time. Notably, the establishment of FLOSS GPOS in the server market can be directly correlated with their security properties, and their growth with the continuous improvement of their security capabilities. Fundamentally, security measures can be split into two broad categories:

  • Reactive methods
  • Preventive measures

The reactive methods, including detection methods such as intrusion detection, are not discussed here; instead, we focus on preventive security methods and discuss which of them are potentially relevant for safety as well.

Some preventive measures in security have a clear counterpart in the safety domain: a stringent development life cycle from requirements to maintenance. This is perhaps most visible in key security functions such as secure hash functions, encryption functions and random number generators. A distinct difference from the safety domain, though, is that the security domain has been building on open and publicly peer-reviewed algorithms for decades, after recognizing that security by secrecy (commonly referred to as security by obscurity) does not work. This message has not yet fully arrived in the safety domain, where one could equally argue that the secrecy of safety concepts and safety cases is a serious systematic deficit, and that publishing them would reveal many systematic defects.

As with safety-related systems, the security domain has understood that it is not realistic to eliminate all systematic faults from complex software by development life-cycle methodologies alone: there is a residual defect probability that will never be zero. To mitigate some of these faults, the security domain has employed an interesting, and generalizable, strategy: systematic faults are converted to random faults by randomizing the environment of the executive. In other words, rather than eliminating the individual defect, which is systematic in nature, its effect is obscured by introducing a non-deterministic environment. An example of this is Address Space Randomization (ASR), which in itself cannot protect against buffer overflows, but can ensure with a certain probability that two systems will not show the same response to this systematic fault given the same input. Translating this back to the safety domain, we arrive at a fault class that can be mitigated by randomization of the system, in the sense that a systematic fault no longer manifests itself as identical false-positive outputs of the system; this allows well-established methods for mitigating random faults to be applied to systematic faults.
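As an illustration of how the execution environment can be randomized on a FLOSS GPOS, the following minimal C sketch prints the addresses of a stack variable, a heap allocation and a function on a GNU/Linux system, where ASR is implemented as Address Space Layout Randomization (ASLR) and controlled via /proc/sys/kernel/randomize_va_space. The program name and build line are illustrative only; running the binary twice with ASLR enabled yields different addresses, i.e. the same systematic defect, such as a buffer overflow, would hit different addresses in each instance.

    /* Minimal sketch: observe address space randomization on GNU/Linux.
     * Build, for example, with: gcc -O2 -fPIE -pie -o aslr_demo aslr_demo.c
     * Run it twice; with /proc/sys/kernel/randomize_va_space set to 2
     * (the usual default) the printed addresses differ between runs,
     * i.e. the execution environment is randomized although the code
     * and the input are identical.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static void probe(void) { }

    int main(void)
    {
        int on_stack = 0;
        void *on_heap = malloc(16);

        printf("stack variable : %p\n", (void *)&on_stack);
        printf("heap allocation: %p\n", on_heap);
        /* The code address only varies if the binary is built as a
         * position-independent executable (-fPIE -pie). */
        printf("code (function): %p\n", (void *)probe);

        free(on_heap);
        return 0;
    }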

By utilizing security methods, most notably the plethora of randomization strategies, an N-out-of-M system is in principle able to cover some systematic faults, although this is still an area that needs further research. As safety-related systems in many cases cover random faults by replication, converting a subset of the systematic faults found in FLOSS components into random faults, by randomizing specific execution environment properties, allows these faults to be covered without further effort at the architectural level. At the same time, security is becoming an issue in safety (see IEC 61508 Ed 2 Appendix B), and the desire to use open networks (e.g. RFID in cars, GSM-R in rail, WiFi in avionics) is making security-related threats to safety-related systems a relevant safety issue; mitigation of security-related issues is therefore on the table for the next generation of safety-related systems anyway. In this paper we first outline the key differences between safety and security and their impact on the suitability of methodologies. Next we discuss which security methods might be of interest and what class of faults can be covered by them. We then establish a principle model: converting a systematic hardware/software fault so that it manifests itself as a random fault by randomizing execution environment properties. This allows utilizing the well-established security methodologies of FLOSS operating systems like GNU/Linux to mitigate some classes of residual systematic faults by architectural means. Finally we conclude with a summary, including a short rant on the systematic fault constituted by secrecy in the safety domain.
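To make the architectural argument concrete, here is a minimal, hypothetical sketch of a 2-out-of-3 majority voter in C, the kind of replication mechanism commonly used to mask random faults. If the three channels execute identical FLOSS components in randomized execution environments (e.g. with ASR enabled, as in the sketch above), a residual systematic fault is unlikely to drive all channels to the same wrong output, so the voter can mask it in the same way as a random fault. The function name and the fixed example inputs are illustrative only.

    /* Hypothetical sketch of a 2-out-of-3 majority voter over three
     * replicated channels. Channels running in randomized execution
     * environments are unlikely to produce identical wrong outputs
     * for the same residual systematic fault, so the voter can mask
     * such a fault like a random one.
     */
    #include <stdbool.h>
    #include <stdio.h>

    /* Store the majority value in *out and return true if at least
     * two of the three channel outputs agree. */
    static bool vote_2oo3(int a, int b, int c, int *out)
    {
        if (a == b || a == c) { *out = a; return true; }
        if (b == c)           { *out = b; return true; }
        return false; /* no majority: treat as a detected fault */
    }

    int main(void)
    {
        int result;

        /* Illustrative inputs: channel 2 delivers a corrupted value. */
        if (vote_2oo3(42, 41, 42, &result))
            printf("voted output: %d\n", result);
        else
            printf("no majority, entering safe state\n");

        return 0;
    }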