Software TSN-Switch with Linux

In this blog post, I am going to explain how to set up a software TSN (Time-sensitive Networking) switch with Linux supporting the Time-aware Shaper from the IEEE 802.1Q standard (formerly IEEE 802.1Qbv).

A disclaimer first: TSN came with the goal of enabling ‘hard’ real-time communication with deterministic bounds on network delay and jitter over standard Ethernet (IEEE 802.3 networks). Obviously, it can be debated to which extent a software switch and the Linux kernel can support such deterministic bounds. You have to decide yourself whether you want to trust a software TSN switch with the control of your industrial plant, networked car, nuclear plant, or communication within your space station. But seriously, I think software TSN switches can be useful even without hard guarantees, be it for teaching students TSN in your network lab, doing experiments without investing into expensive hardware, or researching how to extend the current implementation to eventually provide true deterministic guarantees. In some respects, software switches even have inherent advantages, in particular when it comes to flexibility. We will see that later when it comes to classifying packets using information from all network layers (not just the data link layer where Ethernet resides). Even combining TSN and SDN (Software-defined Networking) now seems to be a matter of ‘plumbing’ together an SDN switch and a suitable Linux queuing discipline for TSN.

If you have never heard of TSN or the Time-aware Shaper, you should start reading the following TSN background section first, where I give a brief overview of the Time-aware Shaper. All TSN experts can safely skip this section and directly jump to the description of the Time-aware Priority Shaper (TAPRIO), the new Linux queuing discipline implementing the Time-aware Shaper. Finally, I will show how to integrate a Linux bridge, iptables classifier, and the Time-aware Priority Shaper into a software TSN switch.

TSN Background: Time-aware Shaper

Time-sensitive Networking (TSN) is a collection of IEEE standards to enable real-time communication over IEEE 802.3 networks (Ethernet). Although several implementations of real-time Ethernet technologies have existed for some time, TSN now brings real-time communication to standard Ethernet as defined by IEEE. With TSN, a TSN-enabled Ethernet can transport both real-time and non-real-time traffic over one converged network.

At the center of the TSN standards are different so-called shapers, which some people would call schedulers, and others queuing disciplines, so don’t be confused if I use these words interchangeably. Deterministic real-time communication with very low delay and jitter is the realm of the so-called Time-aware Shaper (TAS). Basically, the TAS implements a TDMA scheme by giving packets (or frames, as they are called on the data link layer) of different traffic classes access to the medium within different time slots. To understand the technical details better, let’s have a look at how a packet traverses a switch. The following figure shows a simplified but sufficiently accurate view of the data path of a TSN switch.

          incoming packet (from driver/NIC)
                         |
                         v
        +-------------------------------+
        |        Forwarding Logic       |
        +-------------------------------+
          | output on port 1   ...   | output on port n
          v                          v
+-------------------------------+
|           Classifier          |
+-------------------------------+
    |          |             |
    v          v             v
+-------+  +-------+     +-------+
|       |  |       |     |       |
| Queue |  | Queue | ... | Queue |
|  TC0  |  |  TC1  |     |  TC7  |
|       |  |       |     |       |
+-------+  +-------+     +-------+
    |          |             |         +-------------------+
    v          v             v         | Gate Control List |
+-------+  +-------+     +-------+     | t1 10000000       |
| Gate  |<-| Gate  | ... | Gate  |<----| t2 01111111       |
+-------+  +-------+     +-------+     | ...               |
    |          |             |         | repeat            |
    v          v             v         +-------------------+
+-------------------------------+
|     Transmission Selection    |
+-------------------------------+
               |
               v
         to driver/NIC

First, the packet enters the switch through the incoming port, or the network interface controller (NIC) of your Linux box implementing the software switch. Then, the forwarding logic decides on which outgoing port to forward the packet. So far, this is no different from an ordinary switch.

Then comes the more interesting part from the point of view of a TSN switch. For the following discussion, we zoom into one outgoing port (this part of the figure should be replicated n times, once for each outgoing port). First, the classifier decides which traffic class the packet belongs to. To this end, the VLAN tag of the packet contains a three-bit Priority Code Point (PCP) field. So it should not come as a big surprise that eight different traffic classes are supported, each having its own outgoing FIFO queue, i.e., eight queues per outgoing port.

Now comes the big time (no pun intended) of the Time-aware Shaper (TAS): Behind each queue is a gate. If the gate of a queue is open, the first packet in the queue is eligible for transmission. If the gate is closed, the queue cannot transmit. Whether a gate is open or closed is defined by the time schedule stored in the Gate Control List (GCL). Each entry in the GCL has a timestamp defining the time when the gates should change to a given state. For instance, the entry ‘t1 10000000’ says that at time t1 gate 0 should be open (1) and gates 1-7 should be closed (0). After the end of the schedule, the schedule repeats in a cyclic fashion, i.e., t1, t2, etc. define relative times with respect to the start of a cycle. For a defined behavior, the clocks of all switches need to be synchronized, so all switches refer to the same cycle base time in their schedules. This is the job of the Precision Time Protocol (PTP).
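The cyclic gate lookup is easy to sketch in a few lines of Python (illustrative only; entry offsets and the 8-character gate strings follow the figure above, leftmost character = gate 0):

```python
def gate_states(gcl, cycle_time, t, base_time=0):
    """Return the eight gate states (True = open) at absolute time t.

    gcl is a list of (offset_ns, states) entries sorted by offset,
    where offset_ns is relative to the cycle start and states is a
    string like '10000000' (leftmost character = gate 0 / TC0).
    """
    # Map the absolute time into the repeating cycle.
    t_rel = (t - base_time) % cycle_time
    current = gcl[0][1]
    for offset, states in gcl:
        if offset <= t_rel:
            current = states  # this entry is already active at t_rel
        else:
            break
    return [c == "1" for c in current]

# Schedule from the figure: gate 0 open for the first 800 us,
# gates 1-7 open for the remaining 200 us of a 1 ms cycle.
gcl = [(0, "10000000"), (800_000, "01111111")]
```

At time 100 ns into any cycle only gate 0 is open; at 900000 ns gates 1-7 are open, and the pattern repeats every cycle.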

The idea is that gates along the path of a time-sensitive packet are opened and closed such that an upper bound on network delay and jitter can be guaranteed despite concurrent traffic, which might need to wait behind closed gates. How to calculate time schedules that guarantee a desired upper bound on network delay and jitter is out of the scope of the IEEE standard. Actually, it’s a hard problem and subject to active research. I will not go into detail here, but just mention that we have defined algorithms for calculating TAS schedules as part of our research at University of Stuttgart [1], as have others (a survey with further references can be found here [2]).

Ok, well enough, but what happens if two gates are open at the same time? Yes, that’s allowed, as the schedule entry ‘t2 01111111’ shows, where 7 gates are open at the same time. Then, Transmission Selection will refer to a second scheduling algorithm, e.g., strict priority queuing, to decide which packet from a queue with an open gate is allowed to transmit. You see, several scheduling mechanisms work together here, and I did not even mention the other IEEE shapers such as the Credit-based Shaper, which could be added to this picture. Here, we just focus on the TAS, keeping in mind that many hardware switches also might not implement all possible shapers defined by IEEE.

The Linux Time-aware Priority Shaper

The Time-aware Shaper as defined by the IEEE standards introduced in the previous section is implemented by the Linux queuing discipline Time-aware Priority Shaper (TAPRIO). So let’s see how to configure TAPRIO for a network interface.

Configuration of queuing disciplines (or QDISCs for short) is done with the tc (traffic control) tool. Let’s assume that we want to set up TAPRIO for all traffic leaving through the network interface enp2s0f1. Then, the tc command could look as follows:

$ tc qdisc replace dev enp2s0f1 parent root handle 100 taprio \
num_tc 2 \
map 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 \
queues 1@0 1@1 \
base-time 1554445635681310809 \
sched-entry S 01 800000 sched-entry S 02 200000 \
clockid CLOCK_TAI

Here, we replace the existing QDISC (maybe the default one) of the device enp2s0f1 by a TAPRIO QDISC, which is placed right at the root of the device. We need to provide a unique handle (100) for this QDISC.

We define two traffic classes (num_tc 2). As you have seen above, an IEEE switch might have queues for up to 8 traffic classes. TAPRIO supports up to 16 traffic classes, although your NIC would then also need as many TX queues (see below).

Then, we need to define how to classify packets, i.e., how to assign packets to traffic classes. To this end, TAPRIO uses the priority field of the sk_buff (socket buffer, SKB) structure. The SKB is the internal kernel data structure for managing packets. Since the SKB is a kernel structure, you cannot directly set it from user space. One way of setting it from user space is to use the SO_PRIORITY socket option in the sending application. However, since we are implementing a switch here, the sending application might reside on another host, so for our use case this is not an option. As described below, we will use another possibility, namely iptables, to set the priority field of SKBs before they reach the QDISC. For now, let’s assume the priority is set somehow. Then, the map parameter defines the mapping of SKB priority values to traffic classes (TC). You can read the mapping list ‘1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1’ as follows: map priority 0 (first entry from the left) to TC1, priority 1 to TC0, and priorities 2-15 to TC1 (16 entries for 16 possible priority values).
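For completeness, here is how a local sender could set the SKB priority via SO_PRIORITY (a minimal Python sketch; this is exactly the option that does not help in our switch scenario, where the sender is remote):

```python
import socket

# Create a UDP socket and tag all its outgoing packets with priority 1.
# Priorities 0-6 can be set without CAP_NET_ADMIN.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 1)

# The kernel copies this value into skb->priority for every packet sent
# on this socket; TAPRIO's 'map' parameter then selects the traffic class.
prio = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY)
```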

Next, we map traffic classes to the TX queues of the network device. Modern network devices typically implement more than one TX queue for outgoing traffic. You can find out how many TX queues your device supports with the following command:

$ ls /sys/class/net/enp2s0f1/queues/
rx-0 rx-1 rx-2 rx-3 rx-4 rx-5 rx-6 rx-7 tx-0 tx-1 tx-2 tx-3 tx-4 tx-5 tx-6 tx-7

Here, the network device enp2s0f1 supports 8 TX queues, more than enough for our two traffic classes. The parameter ‘queues 1@0 1@1’ reads like this: the first entry (1@0) defines the mapping of the first traffic class (TC0) to TX queues, the second entry (1@1) the mapping of the second traffic class (TC1), and so on. Each entry defines a range of queues to which the traffic class is mapped, using the schema queue_count@queue_offset. That is, 1@0 means: map the traffic class to 1 TX queue starting at queue index 0, i.e., queue range [0,0]. The second class is also mapped to 1 queue, at queue index 1 (1@1). You can also map one traffic class to several TX queues by increasing the count parameter beyond 1. Make sure that queue ranges do not overlap.
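The count@offset notation can be made explicit with a small sketch (a hypothetical helper, just to illustrate the parsing and the no-overlap rule):

```python
def parse_queues(spec):
    """Expand 'count@offset' entries into per-traffic-class queue ranges."""
    ranges = []
    for entry in spec.split():
        count, offset = (int(x) for x in entry.split("@"))
        ranges.append(range(offset, offset + count))
    # Sanity check: queue ranges of different traffic classes must not overlap.
    used = [q for r in ranges for q in r]
    assert len(used) == len(set(used)), "overlapping queue ranges"
    return ranges
```

For our configuration, parse_queues("1@0 1@1") yields queue range [0,0] for TC0 and [1,1] for TC1.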

Next, we define the schedule of the Time-aware Shaper implemented by TAPRIO. First of all, we need to define a base time as a reference for the cyclic schedule. Every scheduling cycle starts at base_time + k*cycle_time. The cycle time (the duration of the cycle until it repeats) is implicitly defined by the sum of the interval durations of the schedule entries (see below), in our example 800000 ns + 200000 ns = 1000000 ns = 1 ms. The base time is defined in nanoseconds according to some clock. The reference clock to be used is defined by the parameter clockid. CLOCK_TAI is the International Atomic Time. The advantages of TAI are: TAI is not adjusted by leap seconds, in contrast to CLOCK_REALTIME, and TAI refers to a well-defined starting time, in contrast to CLOCK_MONOTONIC.
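The relationship between base time, interval durations, and cycle starts is simple arithmetic (values taken from the tc command above):

```python
# Interval durations of the schedule entries, in nanoseconds.
intervals_ns = [800_000, 200_000]

# The cycle time is the sum of all interval durations: here 1 ms.
cycle_time_ns = sum(intervals_ns)

# Every cycle k starts at base_time + k * cycle_time (CLOCK_TAI nanoseconds).
base_time_ns = 1554445635681310809

def cycle_start(k):
    return base_time_ns + k * cycle_time_ns
```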

Finally, we need to define the entries of the Gate Control List, i.e., the points in time when gates should open or close (or in other words: the time intervals during which gates are open or closed). For instance, ‘sched-entry S 01 800000’ says that the gate of TC0 (least significant bit in bit vector) opens at the start of the cycle for 800000 ns duration, and all other gates are closed for this interval. Then, 800000 ns after the start of the cycle, the entry ‘sched-entry S 02 200000’ defines that the gate of TC1 (second bit of bit vector) opens for 200000 ns, and all other gates are closed.
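The gate mask in each sched-entry is a hexadecimal number whose bit i corresponds to the gate of TCi, which a short sketch can decode (illustrative helper, not part of tc):

```python
def open_tcs(gatemask_hex, num_tc=8):
    """Return the traffic classes whose gates are open for this entry."""
    mask = int(gatemask_hex, 16)
    return [tc for tc in range(num_tc) if mask & (1 << tc)]
```

So ‘S 01 800000’ opens only TC0 (mask 0x01, least significant bit), and ‘S 02 200000’ opens only TC1 (mask 0x02).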

Note that as said above, you can also open multiple gates at the same time by setting multiple bits of the bit vector. Then, the transmission selection algorithm has to decide which packet from one of the queues with an open gate to send next. The manual page of TAPRIO does not clearly say which open queue gets priority. However, from looking at the source code of TAPRIO, it seems that TAPRIO gives priority to open queues with smaller queue numbers.

Software TSN Switch

Now that we know how to use the TAPRIO QDISC, we can finally set up our software TSN switch. The software TSN switch integrates three parts:

  • A software switch (aka bridge) taking care of forwarding packets to the right outgoing port.
  • A classifier per traffic class, implemented through iptables, that sets the priority of forwarded packets, which TAPRIO then maps to a corresponding traffic class.
  • Per outgoing switch port (network interface) a TAPRIO QDISC.

First, we set up a software switch (aka bridge) called br0:

$ brctl addbr br0

We assign two network interfaces (enp2s0f0 and enp2s0f1) to the switch, which we first put into promiscuous mode, so the switch will see all incoming packets:

$ ip link set dev enp2s0f0 promisc on
$ ip link set dev enp2s0f1 promisc on

Then, we assign the two interfaces to the switch:

$ brctl addif br0 enp2s0f0
$ brctl addif br0 enp2s0f1

Finally, we bring the switch up:

$ ip link set dev br0 up

Next, we define classifiers for each traffic class using iptables. As said above, TAPRIO uses the SKB priority field to map priorities to traffic classes. Assume that we want to assign priority 1 to all UDP packets with destination port 6666. With iptables, we can implement a corresponding classifier rule like this:

$ iptables -t mangle -A POSTROUTING -p udp --dport 6666 -j CLASSIFY --set-class 0:1

The mangle table (‘-t mangle’) is used to modify packets, so this is what we need here. The priority field of the SKB can be set using the argument ‘-j CLASSIFY’ together with the argument ‘--set-class 0:1’ to define the priority value (here 0:1, i.e., 1). The argument name ‘set-class’ might sound confusing because the mapping of priorities to traffic classes is actually done by the TAPRIO QDISC; this value is simply the value for the priority field of the SKB. The argument ‘-A POSTROUTING’ appends a rule to the POSTROUTING chain, which is invoked after the forwarding decision, just before the packet reaches the QDISC, so the QDISC can see the priority field set by the iptables rule (classifier). UDP packets can be matched by the protocol argument ‘-p’, and the destination port by the ‘--dport’ argument. For each traffic class, you need to set up a corresponding classifier rule.
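As far as I can tell from the iptables CLASSIFY extension, the major:minor value consists of two hexadecimal fields that are combined into the single 32-bit SKB priority as (major << 16) | minor, so for 0:1 the priority is simply 1 (a sketch under that assumption):

```python
def classify_to_priority(cls):
    """Convert an iptables CLASSIFY 'major:minor' value (hexadecimal
    fields) into the 32-bit skb->priority it sets: (major << 16) | minor."""
    major, minor = (int(x, 16) for x in cls.split(":"))
    return (major << 16) | minor
```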

One more thing to note is that bridges typically work on layer 2 (data link layer), whereas iptables typically deal with higher layers (network layer and transport layer). Since kernel 2.6, bridged traffic can be exposed to iptables through bridge-nf. So we need to enable forwarding bridged IPv4 traffic and IPv6 traffic to iptables by setting the following sysctl entries to 1:

$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables

If you are missing these sysctl entries, be sure to load the br_netfilter module first:

$ modprobe br_netfilter

Note that iptables is pretty versatile in classifying packets since we can use information from different protocol layers, not just layer 2. If you actually want to match on layer 2 information such as the PCP field of the VLAN tag (as typically done by pure layer 2 TSN switches), you need to make VLAN tag information visible to iptables using the following command:

$ echo 1 > /proc/sys/net/bridge/bridge-nf-filter-vlan-tagged

Then, you can match the bits of the PCP using the u32 module of iptables (I didn’t actually check this, so I hope I am not off by some bits):

$ iptables -t mangle -A POSTROUTING -m u32 --u32 "12&0x0000E000=0x00002000" -j CLASSIFY --set-class 0:1

‘12&0x0000E000’ specifies the bits to match, where 12 is the byte offset from the beginning of the frame (starting with 0 for the first byte), and 0x0000E000 is the mask applied to the matched 4 bytes. In the Ethernet frame, the VLAN tag is preceded by 2×6 bytes for the destination MAC address and source MAC address, respectively. Thus, the offset of the VLAN tag is 12 bytes. The PCP field is the 3 most significant bits of the 3rd byte of the VLAN tag. Thus, the mask is 0x0000E000, and 0x00002000 is the PCP value 1 (1 shifted left by 13 bits). If you plan to use the PCP field for classification, you need to ensure that all packets are VLAN-tagged, since this rule matches on raw bits without checking whether the packet is actually VLAN-tagged.
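With the offsets above, mask and match value for any PCP can be computed as follows (a sanity-check sketch: the PCP sits in bits 15-13 of the 4-byte window at frame offset 12, so the match value for PCP p is p << 13):

```python
PCP_SHIFT = 13            # PCP occupies bits 15-13 of the TCI field
PCP_MASK = 0x0000E000     # mask within the 4 bytes at frame offset 12

def u32_match_for_pcp(pcp):
    """Return the iptables '--u32' expression matching a given PCP value."""
    assert 0 <= pcp <= 7, "PCP is a three-bit field"
    value = pcp << PCP_SHIFT
    return f"12&0x{PCP_MASK:08X}=0x{value:08X}"
```

For PCP 1 this reproduces the rule above; for PCP 7 the match value equals the mask itself.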

Finally, for each port, set up the TAPRIO qdisc responsible for time-aware scheduling of outgoing traffic of that port as already shown above.

A Small Test

Finally, we can test our software TSN switch in a little scenario with a single two-port TSN software switch. The switch has two physical 1 GE ports. We have two senders, each sending a stream of UDP packets to a different receiver process, i.e., we have two flows (the red and the blue flow). Both senders reside on the same host attached to switch port #1. Both receivers reside on another host attached to switch port #2.

On port #2 (towards the receivers), we set up a TAPRIO QDISC with 800 us time slot for the blue flow (traffic class 0, TC0) and 200 us time slot for the red flow (traffic class 1, TC1):

$ tc qdisc replace dev enp2s0f1 parent root handle 100 taprio \
num_tc 2 \
map 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 \
queues 1@0 1@1 \
base-time 1554445635681310809 \
sched-entry S 01 800000 sched-entry S 02 200000 \
clockid CLOCK_TAI

Two classifiers are set up to classify red traffic to port 7777 as TC1 mapped from priority 0 (0:0), and blue traffic to port 6666 as TC0 mapped from priority 1 (0:1):

$ iptables -t mangle -A POSTROUTING -p udp --dport 6666 -j CLASSIFY --set-class 0:1
$ iptables -t mangle -A POSTROUTING -p udp --dport 7777 -j CLASSIFY --set-class 0:0

Both senders send as fast as possible. At the receiver side, incoming packets are timestamped using the hardware timestamping feature of the NIC, thus, recorded time stamps should be pretty accurate.

The following figure shows the arrival times of packets at the receivers. We draw a vertical line whenever a packet of the red or blue flow is received (the individual lines blend together at higher data rates). As we can see, the packets of the two flows arrive nicely separated within their assigned time slots of duration 800 us and 200 us, respectively. After 1 ms, the cycle repeats. So time-aware shaping works!

Time-shaped traffic

We can also see that right at the start of each time slot, packets arrive at maximum rate (1 Gbps), whereas later in a time slot, the rate is lower (see blue flow). The reason is that right after the gate of a queue opens, queued packets are forwarded over the outgoing interface (port) at full line rate. However, since in our scenario both flows share the same incoming (bottleneck) link into the switch, each flow only has an average data rate of 500 Mbps into the switch; thus, the data rate on the outgoing link drops when the open queue runs empty. Note that the Time-aware Shaper shapes the outgoing traffic, not the incoming traffic. So this is not a problem of the switch, but due to the setup.
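This shape of the blue curve can be reproduced with back-of-the-envelope arithmetic (rates and slot lengths as in the experiment; it predicts a burst at full line rate for the first 200 us of the 800 us slot):

```python
LINE_RATE = 1e9        # outgoing link rate in bit/s
IN_RATE = 0.5e9        # per-flow rate on the shared incoming link in bit/s
CLOSED_S = 200e-6      # blue gate is closed while red's 200 us slot runs

# Backlog accumulated in the blue queue while its gate is closed.
backlog_bits = IN_RATE * CLOSED_S

# Once the gate opens, the queue drains at line rate while still being
# filled at IN_RATE, so the burst at full line rate lasts:
burst_s = backlog_bits / (LINE_RATE - IN_RATE)
```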

Where to go from here

In this tutorial, we configured a simple software TSN switch. One interesting extension could be to configure an SDN+TSN switch by replacing the Linux bridge with an Open vSwitch. Maybe this will be part of another tutorial.

Raspberry Pi Going Realtime with RT Preempt

[UPDATE 2016-05-13: Added pre-compiled kernel version 4.4.9-rt17 for all Raspberry Pi models (Raspberry Pi Model A(+), Model B(+), Zero, Raspberry Pi 2, Raspberry Pi 3). Added build instructions for Raspberry Pi 2/3.]

A real-time operating system gives you deterministic bounds on delay and delay variation (jitter). Such a real-time operating system is an essential prerequisite for implementing so-called Cyber Physical Systems, where a computer controls a physical process. Prominent examples are the control of machines and robots in production environments (Industry 4.0), drones, etc.

RT Preempt is a popular patch for the Linux kernel to transform Linux into such a realtime operating system. Moreover, the Raspberry Pi has many nice features to interface with sensors and actuators like SPI, I2C, and GPIO so it seems to be a good platform for hosting a controller in a cyber-physical system. Consequently, it is very attractive to install Linux with the RT Preempt patch on the Raspberry Pi.

Exactly this is what I do here: I provide detailed instructions on how to install a Linux kernel with the RT Preempt patch on a Raspberry Pi. Basically, I wrote this text to record the process for myself, and it is more or less a collection of information you will find on the web. But anyway, I hope I can save some people some time.

And to save you even more time, here is the pre-compiled kernel (including kernel modules, firmware, and device tree) for the Raspberry Pi Model A(+),B(+), Raspberry Pi Zero, Raspberry Pi 2 Model B, Raspberry Pi 3 Model B:

To install this pre-compiled kernel, log in to your Raspberry Pi running Raspbian (if you have not installed Raspbian already, you can find an image here) and execute the following commands (I recommend doing a backup of your old image since this procedure will overwrite the old kernel):

pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ cd /tmp
pi@raspberry ~$ wget
pi@raspberry ~$ tar xzf kernel-4.4.9-rt17.tgz
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/
pi@raspberry ~$ sudo /sbin/reboot

With this patched kernel, I could achieve bounded latency well below 200 microseconds on a fully loaded 700 MHz Raspberry Pi Model B (see results below). This should be safe for tasks with a cycle time of 1 ms.

Since compiling the kernel on the Pi is very slow, I will cross compile the kernel on a more powerful host. You can distinguish the commands executed on the host and the Pi by looking at the prompt of the shell in the following commands.

Install Vanilla Raspbian on your Raspberry Pi

Download Raspbian and install it on your SD card.

Download Raspberry Pi Kernel Sources

On your host (where you want to cross-compile the kernel), download the latest kernel sources from GitHub:

user@host ~$ git clone
user@host ~$ cd linux

If you like, you can switch to an older kernel version like 4.1:

user@host ~/linux$ git checkout rpi-4.1.y

Patch Kernel with RT Preempt Patch

Next, patch the kernel with the RT Preempt patch. Choose the patch matching your kernel version. To this end, have a look at the Makefile. VERSION, PATCHLEVEL, and SUBLEVEL define the kernel version. At the time of writing this tutorial, the latest kernel was version 4.4.9. Patches for older kernels can be found in folder “older”.

user@host ~/linux$ wget
user@host ~/linux$ zcat patch-4.4.9-rt17.patch.gz | patch -p1

Install and Configure Tool Chain

For cross-compiling the kernel, you need the tool chain for ARM on your machine:

user@host ~$ git clone
user@host ~$ export ARCH=arm
user@host ~$ export CROSS_COMPILE=/home/user/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
user@host ~$ export INSTALL_MOD_PATH=/home/user/rtkernel

Later, when you install the modules, they will go into the directory specified by INSTALL_MOD_PATH.

Configure the kernel

Next, we need to configure the kernel for using RT Preempt.

For Raspberry Pi Model A(+), B(+), Zero, execute the following commands:

user@host ~$ export KERNEL=kernel
user@host ~$ make bcmrpi_defconfig

For Raspberry Pi 2/3 Model B, execute these commands:

user@host ~$ export KERNEL=kernel7
user@host ~$ make bcm2709_defconfig

An alternative way is to export the configuration from a running Raspberry Pi:

pi@raspberry$ sudo modprobe configs
user@host ~/linux$ scp pi@raspberry:/proc/config.gz ./
user@host ~/linux$ zcat config.gz > .config

Then, you can start to configure the kernel:

user@host ~/linux$ make menuconfig

In the kernel configuration, enable the following settings:

  • CONFIG_PREEMPT_RT_FULL: Kernel Features → Preemption Model → Fully Preemptible Kernel (RT)
  • HIGH_RES_TIMERS: General setup → Timers subsystem → High Resolution Timer Support (actually, this should already be enabled in the standard configuration)

Build the Kernel

Now, it’s time to cross-compile and build the kernel and its modules:

user@host ~/linux$ make zImage
user@host ~/linux$ make modules
user@host ~/linux$ make dtbs
user@host ~/linux$ make modules_install

The last command installs the kernel modules in the directory specified by INSTALL_MOD_PATH above.

Transfer Kernel Image, Modules, and Device Tree Overlay to their Places on Raspberry Pi

We are now ready to transfer everything to the Pi. To this end, you could mount the SD card on your PC. I prefer to transfer everything over the network using a tar archive:

user@host ~/linux$ mkdir $INSTALL_MOD_PATH/boot
user@host ~/linux$ ./scripts/mkknlimg ./arch/arm/boot/zImage $INSTALL_MOD_PATH/boot/$KERNEL.img
user@host ~/linux$ cp ./arch/arm/boot/dts/*.dtb $INSTALL_MOD_PATH/boot/
user@host ~/linux$ cp -r ./arch/arm/boot/dts/overlays $INSTALL_MOD_PATH/boot
user@host ~/linux$ cd $INSTALL_MOD_PATH
user@host ~/linux$ tar czf /tmp/kernel.tgz *
user@host ~/linux$ scp /tmp/kernel.tgz pi@raspberry:/tmp

Then on the Pi, install the real-time kernel (this will overwrite the old kernel image!):

pi@raspberry ~$ cd /tmp
pi@raspberry ~$ tar xzf kernel.tgz
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/

Most people also disable the Low Latency Mode (llm) for the SD card:

pi@raspberry /boot$ sudo nano cmdline.txt

Add the following option:



pi@raspberry ~$ sudo /sbin/reboot

Latency Evaluation

For sure, you want to know the latency bounds achieved with the RT Preempt patch. To this end, you can use the tool cyclictest with the following test case:

  • clock_nanosleep(TIMER_ABSTIME)
  • Cycle interval 500 micro-seconds
  • 100,000 loops
  • 100 % load generated by running the following commands in parallel:
    • On the Pi:
      pi@raspberry ~$ cat /dev/zero > /dev/null
    • From another host:
      user@host ~$ sudo ping -i 0.01 raspberrypi
  • 1 thread (I used a Raspberry Pi model B with only one core)
  • Locked memory
  • Process priority 80

Download, build, and run cyclictest as follows:

pi@raspberry ~$ git clone git://
pi@raspberry ~$ cd rt-tests/
pi@raspberry ~/rt-test$ make all
pi@raspberry ~/rt-test$ sudo ./cyclictest -m -t1 -p 80 -n -i 500 -l 100000

On a Raspberry Pi model B at 700 MHz, I got the following results:

T: 0 ( 976) P:80 I:500 C: 100000 Min: 23 Act: 40 Avg: 37 Max: 95

With some more tests, the worst-case latency sometimes reached about 166 microseconds. Adding a safety margin, this should be safe for cycle times of 1 ms.

I also observed that with timers other than clock_nanosleep(TIMER_ABSTIME), e.g., the system timers sys_nanosleep and sys_setitimer, the latency was much higher, with maximum values above 1 ms. Thus, for low latencies, I would only rely on clock_nanosleep(TIMER_ABSTIME).