OSADL Articles:

2023-11-12 12:00

Open Source License Obligations Checklists even better now

Import the checklists into other tools, create context diffs and merged lists


2023-03-01 12:00

Embedded Linux distributions

Results of the online "wish list"


2022-01-13 12:00

Phase #3 of OSADL project on OPC UA PubSub over TSN successfully completed

Another important milestone on the way to interoperable Open Source real-time Ethernet has been reached


2021-02-09 12:00

Open Source OPC UA PubSub over TSN project phase #3 launched

Letter of Intent with call for participation is now available



OSADL Projects

OSADL QA Farm on Real-time of Mainline Linux

The worst-case latency of a system depends on a large number of variables, some of which are part of the kernel configuration. This section illustrates the effect of the kernel configuration on the worst-case latency. All systems displayed here are based on identical hardware and BIOS settings (for details see the links below), and load generation and cyclictest parameters do not differ either. The systems differ only in their kernel configuration:

Differences between kernel configurations

  Between   And    Context diff
  r0s0      r1s0   r0s0-r1s0
  r0s0      r3s0   r0s0-r3s0
  r0s0      rbs0   r0s0-rbs0
  r1s0      r3s0   r1s0-r3s0
  r1s0      rbs0   r1s0-rbs0
  r3s0      rbs0   r3s0-rbs0
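
The context diffs listed above are generated from the exported kernel configuration files of the respective systems. As a rough illustration only (not the tooling actually used by OSADL), the following Python sketch produces such a context diff from two configuration files; the file names are placeholders.

    import difflib
    from pathlib import Path

    def config_context_diff(old_cfg: str, new_cfg: str) -> str:
        """Create a context diff between two kernel configuration files."""
        old_lines = Path(old_cfg).read_text().splitlines(keepends=True)
        new_lines = Path(new_cfg).read_text().splitlines(keepends=True)
        return "".join(difflib.context_diff(old_lines, new_lines,
                                            fromfile=old_cfg, tofile=new_cfg))

    if __name__ == "__main__":
        # Hypothetical file names standing in for two of the systems above
        print(config_context_diff("config-r0s0", "config-r3s0"))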

Continuous worst-case latency monitoring
Please note that the recorded values represent the maxima of 5-minute intervals. The data in the columns labeled "Min:" and "Avg:" should therefore be disregarded; the only relevant result is the maximum of the consecutive 5-minute maxima, given in the rightmost column labeled "Max:".
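
To make the aggregation explicit: the overall worst-case latency is simply the largest of all recorded 5-minute maxima, whereas the minimum or average of interval maxima carries no significance for worst-case analysis. A minimal sketch, assuming the interval maxima are available as a list of microsecond values:

    def overall_worst_case(interval_maxima_us: list[int]) -> int:
        """Overall worst-case latency from consecutive 5-minute interval
        maxima (values in microseconds)."""
        # Only the maximum of the maxima is meaningful here; taking the
        # minimum or average of interval maxima would be misleading.
        return max(interval_maxima_us)

    # Hypothetical example: one recorded maximum per 5-minute interval
    print(overall_worst_case([34, 41, 29, 87, 52]))  # -> 87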

Legend

 
System in rack #0, slot #0

This system was highly optimized for minimum worst-case latency (refer to system profile and kernel configuration), e.g.:

  • Defined 64-bit instruction set (x86_64)
  • Disabled tickless system (# CONFIG_NO_HZ is not set)
  • Disabled CPU frequency throttling (performance scaling governor)
  • No debugging configured except the enabled latency histograms

System in rack #1, slot #0

This 32-bit system was highly optimized for minimum worst-case latency (refer to system profile and kernel configuration), e.g.:

  • Disabled tickless system (# CONFIG_NO_HZ is not set)
  • Disabled CPU frequency throttling (performance scaling governor)
  • No debugging configured except the enabled latency histograms

System in rack #3, slot #0

This system uses a standard kernel configuration such as is found in a typical Linux distribution (refer to system profile and kernel configuration):

  • Enabled tickless system (CONFIG_NO_HZ=y)
  • Enabled CPU frequency scaling (ondemand scaling governor)

System in rack #b, slot #0

This system uses a standard kernel configuration such as is found in a typical Linux distribution; in addition, many debugging options are enabled (refer to system profile and kernel configuration):

  • Enabled tickless system (CONFIG_NO_HZ=y)
  • Enabled CPU frequency scaling (ondemand scaling governor)
  • Configured various debug options (including CONFIG_DEBUG_STACKOVERFLOW=y) and enabled latency histograms
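
The configuration options called out in this legend can be checked on a running system. As an illustrative sketch only (not part of the QA farm setup), the following Python snippet looks up a kernel option in /proc/config.gz (available only when the kernel was built with CONFIG_IKCONFIG_PROC=y) and reads the active CPU frequency scaling governor from sysfs:

    import gzip
    from pathlib import Path

    def kernel_config_value(option: str, path: str = "/proc/config.gz") -> str | None:
        """Return the value of a kernel config option, or None if it is not set."""
        # /proc/config.gz requires CONFIG_IKCONFIG_PROC=y; otherwise a file
        # such as /boot/config-$(uname -r) may serve as the source instead.
        with gzip.open(path, "rt") as cfg:
            for line in cfg:
                if line.startswith(option + "="):
                    return line.split("=", 1)[1].strip()
                if line.startswith("# " + option + " is not set"):
                    return None
        return None

    def scaling_governor(cpu: int = 0) -> str:
        """Read the active CPU frequency scaling governor from sysfs."""
        path = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq/scaling_governor")
        return path.read_text().strip()

    if __name__ == "__main__":
        print("CONFIG_NO_HZ:", kernel_config_value("CONFIG_NO_HZ"))
        print("scaling governor:", scaling_governor())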

Latency plots

The related latency plots from the most recent cyclictest run are given here.

Generation of CPU load

Between 7 a.m. and 1 p.m. and between 7 p.m. and 1 a.m., a simulated application scenario is running that uses cyclictest at priority 99 with a cycle interval of 200 µs and a user program at normal priority that creates burst loads of memory, filesystem and network accesses. The particular cyclictest command is specified in every system's profile referenced above and on the next page. The load generator produces an average CPU load of 0.2 and a network bandwidth of about 8 Mb/s per system.

Histogram data obtained from the cyclictest runs are used to create latency plots (also known as Linux real-time plots) that are likewise referenced above and on the next page. Profiles and latency plots are updated twice a day.
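
The conversion of cyclictest histogram data into a latency plot is not detailed here. As a rough sketch under assumptions (cyclictest invoked with its -q and -h options, output redirected to a placeholder file named histogram.txt, histogram rows consisting of a latency value in microseconds followed by one sample count per measurement thread, metadata lines starting with '#'), the following Python code extracts the histogram and reports the highest latency bucket that received any samples:

    from pathlib import Path

    def parse_histogram(path: str) -> dict[int, list[int]]:
        """Parse cyclictest histogram output into {latency_us: per-thread counts}."""
        histogram: dict[int, list[int]] = {}
        for line in Path(path).read_text().splitlines():
            if not line or line.startswith("#"):
                continue  # skip metadata such as '# Max Latencies: ...'
            fields = line.split()
            try:
                latency_us = int(fields[0])
                counts = [int(f) for f in fields[1:]]
            except ValueError:
                continue  # not a histogram row
            histogram[latency_us] = counts
        return histogram

    def worst_case_us(histogram: dict[int, list[int]]) -> int:
        """Highest latency bucket that received at least one sample."""
        return max(lat for lat, counts in histogram.items() if any(counts))

    if __name__ == "__main__":
        hist = parse_histogram("histogram.txt")  # placeholder file name
        print("Worst-case latency:", worst_case_us(hist), "µs")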