
OSADL QA Farm on Real-time of Mainline Linux


Real-time Ethernet (UDP) worst-case round-trip time monitoring


Two Linux real-time test systems (rack #1, slot #2 and rack #5, slot #4) are each equipped with a second Ethernet adapter. They are connected to each other via a cross-over cable to form a real-time network communication based on a peer-to-peer full-duplex UDP link. The server (rack #1, slot #2) and the client (rack #5, slot #4) run standard user-space applications solely based on POSIX network calls such as bind() and connect(). The plot below is generated from consecutive 5-minute maxima of the time elapsed between sending a UDP frame and receiving the response packet. The test runs twice a day for three hours at a cycle interval of 500 µs. Thus, the maximum in the rightmost column of the 30-h plot below is based on a total of nine hours of recording time, i.e. 64,800,000 individually timed cycles. In addition, the maxima per week, per month and per year are stored as well to further add to the confidence of the result. The related histogram is available here.

Configurations and settings

The following recommendations assume that no user-space task or IRQ thread is running at a priority higher than 79. If the network traffic is sent at a high frequency and/or with a high payload that may prevent RCU from catching up, the kernel configuration should enable RCU priority boosting (CONFIG_RCU_BOOST).


Kernel version 2.6.x:

  • Set the priority of the related Ethernet IRQ thread to 90, bind it to a selected CPU
  • Disable irqbalance or provide adequate environment settings via IRQBALANCE_BANNED_CPUS= and IRQBALANCE_BANNED_INTERRUPTS=
  • Set the priority of the sirq-net-rx kernel thread of the selected CPU to 89
  • Set the priority of the sirq-net-tx kernel thread of the selected CPU to 89
  • Set the priority of the related user-space application to 80 and bind it to the selected CPU
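On a 2.6.x-rt system, the above steps could be carried out roughly as follows. The IRQ number (24), the CPU (#1) and the application name are purely illustrative; thread names and PIDs vary from system to system, so this is a sketch rather than a verified recipe:

```shell
# Ethernet IRQ thread (named "IRQ-24" on 2.6.x-rt; IRQ number assumed)
IRQPID=$(pgrep -f 'IRQ-24')
chrt -f -p 90 "$IRQPID"             # SCHED_FIFO priority 90
taskset -p -c 1 "$IRQPID"           # bind the IRQ thread to CPU #1
echo 2 > /proc/irq/24/smp_affinity  # route the hardware IRQ to CPU #1 (bitmask 0x2)

# Disable irqbalance (or restrict it via IRQBALANCE_BANNED_CPUS/
# IRQBALANCE_BANNED_INTERRUPTS in its environment)
/etc/init.d/irqbalance stop

# Network softirq threads of the selected CPU
chrt -f -p 89 $(pgrep -f 'sirq-net-rx/1')
chrt -f -p 89 $(pgrep -f 'sirq-net-tx/1')

# Hypothetical real-time application at priority 80, bound to CPU #1
chrt -f 80 taskset -c 1 ./udp_rt_app
```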

Kernel versions 3.0 to 3.4 (without splitsoftirq backport):

The softirq split that was available in kernel 2.6.x was not re-implemented until kernel version 3.6. To achieve a worst-case latency comparable to that under 2.6.x, the following settings must be made (this requires at least two processor cores):

  • Specify the kernel command line parameters irqaffinity=<othercpus> isolcpus=<cpu>
  • Set the priority of the Ethernet IRQ thread to 90 and bind it to the isolated CPU
  • Disable irqbalance
  • Set the priority of the softirqd kernel thread of the isolated CPU to 89
  • Set the priority of the related user-space application to 80 and bind it to the isolated CPU
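With CPU #1 isolated, the configuration might look like this; IRQ number, thread names (here the usual ksoftirqd/N naming) and the application are again only examples:

```shell
# Kernel command line (set in the boot loader configuration), e.g.:
#   irqaffinity=0 isolcpus=1

# Ethernet IRQ thread (assumed IRQ 24) to priority 90, bound to the isolated CPU
chrt -f -p 90 $(pgrep -f 'irq/24-eth')
taskset -p -c 1 $(pgrep -f 'irq/24-eth')
echo 2 > /proc/irq/24/smp_affinity   # hardware IRQ to CPU #1

/etc/init.d/irqbalance stop          # disable irqbalance

# softirq daemon of the isolated CPU to priority 89
chrt -f -p 89 $(pgrep -f 'ksoftirqd/1')

# Hypothetical real-time application at priority 80 on the isolated CPU
chrt -f 80 taskset -c 1 ./udp_rt_app
```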

Kernel versions 3.2 and 3.4 with splitsoftirq backport, and kernel versions 3.6 and later:

The softirq workaround explained above is no longer needed! Kernel developer and RT maintainer Thomas Gleixner found a most elegant solution that runs the network task directly in the context of the IRQ thread of the related device and thus implicitly adopts its priority. This avoids any additional configuration; in consequence, a network RT task is now configured in the same way as any other non-network RT task that requires a deterministic response to a device interrupt, i.e. by simply setting the priority of the IRQ thread and of the user-space application.

  • Set the priority of the Ethernet IRQ thread to 90, bind it to a suitable core in a multi-core processor system
  • Disable irqbalance
  • Set the priority of the related user-space application to 80, bind it to the same core as the IRQ thread in a multi-core processor system
  • "That's All Folks!"
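In shell terms, and again with purely illustrative IRQ number, core and application name, the complete configuration shrinks to:

```shell
# IRQ thread of the Ethernet device (assumed IRQ 24) to priority 90 on core #2
chrt -f -p 90 $(pgrep -f 'irq/24-eth')
taskset -p -c 2 $(pgrep -f 'irq/24-eth')

systemctl stop irqbalance   # or the init script of the distribution

# Hypothetical real-time application at priority 80 on the same core
chrt -f 80 taskset -c 2 ./udp_rt_app
```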

The softirq split was backported to kernels 3.2 and 3.4, but the related patches are not part of the regular 3.2-rt and 3.4-rt releases. They are available here and must be applied separately. Alternatively, Steven Rostedt has created a -featN branch of the RT patch that contains the softirq split backport.


Real-time Ethernet worst-case round-trip time recording
Please note that the recorded values represent maxima of 5-min intervals. Thus, the data in the columns labeled "Min:" and "Avg:" should not be considered; the only relevant result is the maximum of consecutive 5-min maxima at the rightmost column labeled "Max:".

Real-time traffic
Real-time traffic whose round-trip time is displayed above.

Non-real-time traffic
Non-real-time UDP traffic generated using the iperf tool, artificially limited to 20 Mb/s by traffic shaping and policing.
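Such a background load and rate limit could, for example, be set up as follows; the interface name, the server address and the token bucket parameters are placeholders, not the values used in the QA Farm:

```shell
# Shape outgoing traffic on the load interface to 20 Mbit/s
# using a token bucket filter
tc qdisc add dev eth1 root tbf rate 20mbit burst 32kbit latency 400ms

# Generate continuous UDP load with iperf
# (on the receiving side: iperf -s -u)
iperf -u -c 192.168.1.2 -b 20M -t 3600
```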


The interfaces of the two systems in rack #1/slot #2 and rack #5/slot #4 are configured as VLAN interfaces, and the packets are sent at VLAN priority (QoS) 7, the highest possible value. The systems are connected to ports of a VLAN-capable switch (HP J9773A 2530-24G). The two ports are set to the same VLAN ID as the two network interfaces and are also given highest priority.
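With iproute2, a VLAN interface that tags all outgoing traffic with the highest priority could be created like this; the underlying interface, the VLAN ID and the address are examples, not the actual QA Farm values:

```shell
# Create VLAN interface eth1.5 (VLAN ID 5) on top of eth1 and map
# socket priority 0 to VLAN priority (PCP) 7, the highest value
ip link add link eth1 name eth1.5 type vlan id 5 egress-qos-map 0:7
ip addr add 192.168.5.1/24 dev eth1.5
ip link set dev eth1.5 up
```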