OSADL QA Farm on Real-time of Mainline Linux
Real-time Ethernet (UDP) worst-case round-trip time monitoring
Two pairs of Linux real-time test systems are equipped with second Ethernet adapters. The systems of each pair are connected to each other with a crossover cable to form a real-time network communication link based on a peer-to-peer full-duplex UDP connection. All systems run standard user-space applications solely based on POSIX network calls such as bind() and connect(). The first plots of the two main sections below are generated from successive 5-minute maxima of the time elapsed between sending a UDP frame and receiving the response packet. The tests run twice a day for three hours at a cycle interval of 500 µs. Thus, the maxima in the rightmost columns of the 30-h plots below are based on a total of nine hours of recording time, i.e. 64,800,000 individual timed cycles. The related histograms are available here.
Configurations and settings
The following recommendations assume that no user-space task or IRQ thread is running with a priority higher than 79. If network traffic is sent at a high frequency and/or with a large payload, which may prevent RCU from catching up, the kernel configuration should contain
CONFIG_RCU_BOOST=y
CONFIG_RCU_BOOST_PRIO=99
Kernel version 2.6.x:
- Set the priority of the related Ethernet IRQ thread to 90, bind it to a selected CPU
- Disable irqbalance, or configure the environment variables IRQBALANCE_BANNED_CPUS= and IRQBALANCE_BANNED_INTERRUPTS= accordingly
- Set the priority of the sirq-net-rx kernel thread of the selected CPU to 89
- Set the priority of the sirq-net-tx kernel thread of the selected CPU to 89
- Set the priority of the related user-space application to 80 and bind it to the selected CPU
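The steps above can be sketched as a shell fragment, to be run as root. The interrupt number (27), the selected CPU (#0) and the application name udp-rt-app are assumptions and must be adapted to the actual system:

```shell
# Sketch of the kernel 2.6.x settings; IRQ 27, CPU #0 and all names are assumptions.
CPU=0
# Ethernet IRQ thread, e.g. "irq/27-eth0"
IRQ_PID=$(pgrep -f 'irq/27-' | head -n 1)
if [ -n "$IRQ_PID" ]; then
    chrt -f -p 90 "$IRQ_PID"         # SCHED_FIFO priority 90
    taskset -cp "$CPU" "$IRQ_PID"    # bind it to the selected CPU
fi
# network softirq threads of the selected CPU: priority 89
for t in "sirq-net-rx/$CPU" "sirq-net-tx/$CPU"; do
    PID=$(pgrep -x "$t")
    if [ -n "$PID" ]; then
        chrt -f -p 89 "$PID"
    fi
done
# start the application at priority 80, bound to the same CPU (name assumed):
# chrt -f 80 taskset -c "$CPU" ./udp-rt-app
```

On a system without these threads the fragment does nothing; the priority values match the recommendation above (IRQ thread above softirq threads above the application).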
Kernel versions 3.0 to 3.4 (without softirq split backport):
The softirq split that was available in kernel 2.6.x was not re-implemented until kernel version 3.6. To achieve a worst-case latency comparable to that under 2.6.x, the following settings must be made (at least two processor cores are required):
- Disable CONFIG_RT_GROUP_SCHED
- Specify kernel command line parameter irqaffinity=<othercpus> isolcpus=<cpu>
- Set the priority of the Ethernet IRQ thread to 90 and bind it to the isolated CPU
- Disable irqbalance
- Set the priority of the ksoftirqd kernel thread of the isolated CPU to 89
- Set the priority of the related user-space application to 80 and bind it to the isolated CPU
Kernel versions 3.2 and 3.4 with softirq split backport, and kernel versions 3.6 up to 4.14:
The softirq workaround explained above is no longer needed! Kernel developer and RT maintainer Thomas Gleixner found the most elegant solution that directly runs the network task in the context of the IRQ thread of the related device and, thus, implicitly adopts its priority. This avoids any additional configuration; in consequence, a network RT task is now configured in the same way as any other non-network RT task that requires deterministic response to a device interrupt, i.e. by simply setting the priority of the IRQ thread and the user-space application.
- Set the priority of the Ethernet IRQ thread to 90, bind it to a suitable core in a multi-core processor system
- Disable irqbalance
- Set the priority of the related user-space application to 80, bind it to the same core as the IRQ thread in a multi-core processor system
- "That's All Folks!"
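For these kernel versions the whole configuration shrinks to two settings, sketched below; the interrupt number (27), the core (#1) and the application name are assumptions:

```shell
# Sketch for kernels 3.6 to 4.14; IRQ 27, core #1 and the application name
# are assumptions and must be adapted to the actual system. Run as root.
CORE=1
IRQ_PID=$(pgrep -f 'irq/27-' | head -n 1)
if [ -n "$IRQ_PID" ]; then
    chrt -f -p 90 "$IRQ_PID"          # IRQ thread: SCHED_FIFO priority 90
    taskset -cp "$CORE" "$IRQ_PID"    # bind it to the selected core
fi
# verify: scheduling class, RT priority and current CPU of the IRQ thread
ps -eo cls,rtprio,psr,comm | grep 'irq/27' || true
# application: priority 80 on the same core (name assumed):
# chrt -f 80 taskset -c "$CORE" ./udp-rt-app
```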
The softirq split was backported to 3.2 and 3.4, but the related patches are not part of the regular 3.2-rt and 3.4-rt releases. They are available here and must be applied separately. Alternatively, Steven Rostedt has created a -featN branch of the RT patch that contains the softirq split backport. The plots below are generated on the server and the client systems at the primary slots of rack #1, slot #2 and rack #5, slot #4, respectively.
Kernel versions 4.16 and later:
In version 4.15 of the mainline Linux kernel, the developers reworked the softirq framework in such a way that the softirq split explained above could no longer be implemented. To cope with this new situation, the recommendation is to always use a multi-core processor for real-time networking. This makes it possible to isolate one of the cores from the remaining system and to run real-time networking exclusively on that core. A feature added in kernel version 4.17 that copies the affinity mask of the hard IRQ to the IRQ service routine can additionally be used to avoid migration to another core while the IRQ handlers are executing; however, some hardware devices may not support this.
The following example configuration assumes a 4-core processor with core #3 isolated for real-time, and a network device named enp0s25 that uses interrupt #27:
- Add isolcpus=3 to the kernel command line. This will prevent user space processes from running on core #3.
- Write the affinity mask 0x7 to the virtual file smp_affinity of all interrupts at /proc/irq/<irqnum> to prevent them from running on core #3. As already mentioned above, this feature may not be implemented for all devices, see script and example output.
cd /proc/irq
for i in [0-9]*
do
    # keep every interrupt on cores #0 to #2 (mask 0x7)
    echo 7 >$i/smp_affinity 2>/dev/null
done
- Write the affinity mask 0x8 to the virtual file /proc/irq/27/smp_affinity of the network interrupt:
echo 8 >/proc/irq/27/smp_affinity
- Determine the process IDs of all kernel threads and set their affinity mask to 0x7. This may not work in all cases, see script and example output.
- Set the priority of the network interrupt service routine, irq/27-enp0s25 in our case, to 98.
- Set the affinity mask of the related user-space application to 0x8 and its priority to 97.
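The kernel-thread step above can be sketched as follows. Kernel threads are the children of kthreadd (PID 2); as noted, changing the mask may fail for some per-CPU threads, which is why errors are ignored here:

```shell
# Move all kernel threads off the isolated core #3 (mask 0x7 = cores #0 to #2).
# Some per-CPU kernel threads refuse a new affinity mask; failures are ignored.
# Run as root.
for pid in $(ps --ppid 2 -o pid=); do
    taskset -p 7 "$pid" >/dev/null 2>&1 || true
done
```

Mask 0x7 deliberately excludes bit 3, so no kernel thread that accepts the new mask can run on the isolated core.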
The plots below are generated on the server and the client systems at the shadow slots of rack #1, slot #2 and rack #5, slot #4, respectively.
Topology
The interfaces of the two systems in rack #1/slot #2 and rack #5/slot #4 are configured as VLAN interfaces, and the packets are sent at VLAN priority (QoS) 7, the highest possible value. The systems are connected to ports of a VLAN-capable switch (HP J9773A 2530-24G). The two ports are set to the same VLAN ID as the two network interfaces and are also given the highest priority.