Key 2.0 — Bluetooth IoT Door Lock

What is Key 2.0?

Key 2.0 (or Key20 for short) is a Bluetooth IoT Door Lock. It turns a conventional electric door lock into a smart door lock that can be opened using a smartphone without the need for a physical key. Thus, Key20 is the modern version of a physical key, or, as the name suggests, the key version 2.0 for the Internet of Things (IoT) era.

Key20 consists of two parts:

  1. Door lock controller device, which is physically wired to the electric door lock and communicates wirelessly via BLE with the mobile app.
  2. Mobile app implementing the user interface to unlock the door and communicating with the door lock controller through BLE.

The following image shows the Key20 door lock controller device and the Key20 app running on a smartphone.

Key 2.0 App and Door Lock Controller Device

The main features of Key20 are:

  • Using state-of-the-art security mechanisms (Elliptic Curve Diffie-Hellman Key Exchange (ECDH), HMAC) to protect against attacks.
  • Open-source software and hardware, including an open implementation of the security mechanisms. No security by obscurity! Source code for the app and door lock controller as well as Eagle files (schematic and board layout) are available on GitHub.
  • Maker-friendly: using easily available cheap standard components (nRF51822 BLE chip, standard electronic parts), easy to manufacture circuit board, and open-source software and hardware design.
  • Works with BLE-enabled mobile devices running Android 4.3 or newer. Porting to other mobile operating systems like iOS should be straightforward.
  • Liberal licensing of software and hardware under the Apache License 2.0 and the CERN Open Hardware License 1.0, respectively.

Security Concepts

A door lock obviously requires security mechanisms to protect against unauthorized requests to open the door. To this end, Key20 implements the following state-of-the-art security mechanisms.

Authorization of Door Open Requests with HMAC

All door open requests are authorized through a Keyed-Hash Message Authentication Code (HMAC). For each door open request, the door lock controller generates a 16 byte nonce (a big random number) as soon as a BLE connection is made, and sends it to the mobile app. The mobile app uses both the nonce and the shared secret to calculate a 512 bit HMAC with the SHA-2 hashing algorithm, truncates it to 256 bits (HMAC512-256), and sends it to the door lock controller. The door lock controller also calculates an HMAC from the nonce and the shared secret, and only if both HMACs match will the door be opened.
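
As a sketch of the app side, the HMAC computation could look as follows in Java (method and variable names are mine, not the actual Key20 source; javax.crypto is the standard Java/Android crypto API):

import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Compute the truncated HMAC over the nonce received from the lock (app side).
byte[] computeOpenRequestHmac(byte[] nonce, byte[] sharedSecret) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA512");
    mac.init(new SecretKeySpec(sharedSecret, "HmacSHA512"));
    byte[] fullHmac = mac.doFinal(nonce); // 64 bytes (512 bits)
    return Arrays.copyOf(fullHmac, 32);   // truncate to 256 bits
}

The door lock controller computes the same value over the nonce with its copy of the shared secret and compares the two HMACs (ideally in constant time, e.g., with MessageDigest.isEqual()).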

The nonce is only valid for one door open request and effectively prevents replay attacks, i.e., an attacker sniffing the radio channel and replaying the sniffed HMAC later. Note that the BLE radio communication is not encrypted, and it actually does not need to be, since a captured HMAC is useless when replayed.

Moreover, each nonce is only valid for 15 s, to prevent man-in-the-middle attacks where an attacker intercepts the HMAC and does not forward it immediately, but waits until the (authorized) user, unable to open the door, walks away. Later, the attacker would send the HMAC to the door lock controller to open the door. With a time window of only 15 s (which could be reduced further), such attacks are futile, since the authorized user will still be at the door.

Note that the whole authentication procedure does not involve heavy-weight asymmetric crypto functions, but only light-weight hashing algorithms, which the nRF51822 micro-controller (ARM Cortex M0) of the door lock device can perform very fast, so door unlocking is not delayed.

With respect to the random nonce, we would like to note the following. First, the nRF51822 chip includes a random number generator that derives random numbers from thermal noise, so nonces should be of high quality, i.e., truly random. An attack that cools down the Bluetooth chip to reduce the randomness of the thermal noise is not relevant here, since it would require physical access to the lock controller installed inside the building, i.e., the attacker would already be in your house.

Second, 128 bit nonces provide reasonable security for our purpose. Assume one door open request per millisecond (a very pessimistic assumption!) and 100 years of operation, i.e., fewer than n = 2^42 requests to be protected. With 128 bit nonces, there are m = 2^128 possible nonce values. The birthday paradox then gives the probability p of at least one pair of requests sharing the same nonce. For n << m, it is approximated by p(n,m) = 1 - e^(-n^2/(2m)), which practically evaluates to 0 for n = 2^42 and m = 2^128. Even for n = 2^52 (one request per microsecond, which is not even possible with BLE), p(2^52, 2^128) < 3e-8, which is below the probability of being hit by lightning (about 5.5e-8).
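
If you want to check these numbers yourself, here is a small Java sketch of the approximation (working with base-2 exponents to avoid overflow):

// p(n, m) ≈ 1 - e^(-n^2 / (2m)) for n requests and m possible nonces (n << m).
// Arguments are base-2 logarithms: n = 2^log2n, m = 2^log2m.
double collisionProbability(int log2n, int log2m) {
    double exponent = -Math.pow(2, 2.0 * log2n - log2m - 1); // -n^2 / (2m)
    return -Math.expm1(exponent); // 1 - e^exponent, numerically stable
}

// collisionProbability(42, 128) ≈ 2.8e-14 -> practically zero
// collisionProbability(52, 128) ≈ 3.0e-8  -> on the order of the lightning odds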

Exchanging Keys with Elliptic Curve Diffie Hellman Key Exchange (ECDH)

Obviously, the critical part is the establishment of a shared secret between the door lock controller and the mobile app. Anybody in possession of the shared secret can enter the building; thus, we must ensure that only the lock controller and the Key20 app know the secret. To this end, we use Elliptic Curve Diffie-Hellman (ECDH) key exchange based on Curve 25519. We assume that the door lock controller is installed inside the building that is secured by the lock—if the attacker is already in your home, the door lock is futile anyway. Thus, only the authorized user (owner of the building) has physical access to the door lock controller.

First, the user needs to press a button on the door lock controller device to enter key exchange mode (the red button in the pictures). Then both the mobile app and the door lock controller generate their own key pairs based on Elliptic Curve 25519 and exchange their public keys, which may be known to anyone. Using the public key of the other party and its own private key, the lock controller and the app can each calculate the same shared secret.

Using Curve 25519 and the Curve 25519 assembler implementation from the Micro NaCl project, which is optimized for the ARM Cortex-M0, key pairs and shared secrets can be calculated in a fraction of a second on the nRF51822 BLE chip (ARM Cortex M0).

Without further measures, DH key exchange is susceptible to man-in-the-middle attacks, where an attacker actively manipulates the communication between the mobile app and the door lock controller. In such an attack, the attacker exchanges his own public key with both the lock controller and the app, establishing two shared secrets: one between him and the door lock controller, and one between him and the mobile app. We prevent such attacks with the following mechanism. After key exchange, the mobile app and the door lock device both display a checksum (hash) of their version of the exchanged shared secret. The user visually checks that these checksums are the same. If they are, no man-in-the-middle attack has happened, since the man in the middle cannot calculate the same shared secret as the door lock controller and the mobile app (after all, the private keys of the door lock controller and the mobile app remain private). Only then does the user confirm the key by pressing buttons on the door lock controller and in the mobile app. Remember that only the authorized user has physical access to the door lock controller, since it is installed inside the building to be secured by the lock.

The following image shows the mobile app and the door lock controller displaying a shared secret checksum after key exchange. The user can confirm this secret by pushing the green button on the lock controller device and the Confirm Key button in the app.

Key 2.0: key checksum verification after key exchange.

Why not Standard Bluetooth Security?

Actually, Bluetooth 4.2 implements security concepts similar to the mechanisms described above. So it is a valid question: why don’t we just rely on the security concepts implemented by Bluetooth?

A good overview of why Bluetooth might not be as secure as we would like it to be is provided by Francisco Corella, so we refer the interested reader to his page for the technical details and a discussion of Bluetooth security. We would also like to add that many mobile devices still implement only Bluetooth 4.0 rather than Bluetooth 4.2, and Bluetooth 4.0 is even less secure.

So we decided not to rely on the Bluetooth security mechanisms, but rather to implement all security protocols at the application layer using the state-of-the-art mechanisms described above.

Bluetooth Door Lock Controller Device

The following image shows the door lock controller and its components.

Key 2.0 Door Lock Controller Device

The door lock controller device needs to be connected to the electric door lock (two wires). You can simply replace a manual switch with the door lock controller device.

The door lock controller needs to be placed within Bluetooth radio range of the door, inside the building. Typical radio ranges are about 10 m. Depending on the walls, the distance might be shorter or longer. In our experience, one concrete wall is no problem, but two might block the radio signal.

The main part of the hardware is an nRF51822 BLE chip from Nordic Semiconductor. The nRF51822 features an ARM Cortex M0 micro-controller and a so-called softdevice implementing the Bluetooth stack, which runs together with the application logic on the ARM Cortex M0 processor.

An LCD is used to implement the secure key exchange procedure described above (visual key verification to avoid man-in-the-middle attacks).

For more technical details including schematics, board layout, and source code please visit the Key20 GitHub page.

Android App

The app requires a BLE-enabled mobile device running Android version 4.3 “Jelly Bean” (API level 18) or higher.

The following images show the two major tabs of the app: one for opening the door, and the second for exchanging keys between the app and the door lock controller.

Key 2.0 App: door unlock tab

Key 2.0 App: key exchange tab

The source code is available from the Key20 GitHub page.

ECDH-Curve25519-Mobile: Elliptic Curve Diffie-Hellman Key Exchange for Android Devices with Curve 25519


ECDH-Curve25519-Mobile implements Diffie-Hellman key exchange based on the Elliptic Curve 25519 for Android devices. It is released into the public domain and available on GitHub.

How I came across Curve 25519 … and the problem to be solved

Recently, I had to implement Diffie-Hellman key exchange for an Internet of Things (IoT) application, namely, a smart door lock (more about this later in another post). This system consists of a low-power embedded device featuring an ARM Cortex M0 microcontroller communicating via Bluetooth Low-Energy (BLE) with an Android app.

First, I had some doubts whether compute-intensive asymmetric cryptography could be implemented efficiently on a weak ARM Cortex M0. However, then I came across Curve 25519, an elliptic curve proposed by Daniel J. Bernstein for Elliptic Curve Diffie-Hellman (ECDH) key exchange. In addition to the fact that Curve 25519 can be implemented very efficiently, there exists an implementation targeting the ARM Cortex M0 from the Micro NaCl project. So I gave this implementation a try, and it turned out to be really fast.

So I decided to use ECDH with Curve 25519 for my IoT system. Thanks to Micro NaCl, the part for the microcontroller was implemented very quickly. However, I also needed an implementation for Android. My first thought was to use the popular Bouncy/Spongy Castle crypto library. However, it turned out that although they come with a definition of Curve 25519, they use a different elliptic curve representation, namely the Weierstrass form rather than the Montgomery form used by NaCl. One option would have been to convert between the two representations, but to me it seemed less intuitive to convert the Montgomery curve back and forth when I could stick to one representation.

So the problem was now to find a Curve 25519 implementation for Android using the Montgomery form. And that was not so easy. So I finally decided to take the code from the NaCl project and make it accessible to the Android world. And the result is ECDH-Curve25519-Mobile.

What is ECDH-Curve25519-Mobile?

ECDH-Curve25519-Mobile implements Diffie-Hellman key exchange based on the Elliptic Curve 25519 for Android devices.

ECDH-Curve25519-Mobile is based on the NaCl crypto implementation, more specifically AVRNaCl, written by Michael Hutter and Peter Schwabe, who dedicated their implementation to the public domain. ECDH-Curve25519-Mobile follows their example and also dedicates the code to the public domain using the Unlicense. Actually, the core of ECDH-Curve25519-Mobile is NaCl code, and ECDH-Curve25519-Mobile is just a simple JNI (Java Native Interface) wrapper around it to make it accessible from Java on Android devices. So I gratefully acknowledge the work of the NaCl team and their generous dedication of their code to the public domain!

ECDH-Curve25519-Mobile is a native Android library, since NaCl is implemented in C rather than Java. However, it can easily be compiled for all Android platforms like ARM or x86, so this is no practical limitation compared to a Java implementation. The decision to base ECDH-Curve25519-Mobile on NaCl was motivated not so much by the performance you can gain from a native implementation—actually, AVRNaCl leaves some room for performance improvements, since it originally targeted 8 bit microcontrollers—but by using an implementation from crypto experts who actually work together with Daniel J. Bernstein, the inventor of Curve 25519.

How to use it?

I do not want to repeat everything already said in the description of ECDH-Curve25519-Mobile available at GitHub. Let me just show you some code to give you an impression that it is really easy to use from within your Android app:

// ECDHCurve25519 is the JNI wrapper class provided by the library; see the
// project README for the package name and for loading the native library.
import java.security.SecureRandom;

// Create Alice's secret key from a big random number.
SecureRandom random = new SecureRandom();
byte[] alice_secret_key = ECDHCurve25519.generate_secret_key(random);
// Create Alice's public key.
byte[] alice_public_key = ECDHCurve25519.generate_public_key(alice_secret_key);

// Bob is also calculating a key pair.
byte[] bob_secret_key = ECDHCurve25519.generate_secret_key(random);
byte[] bob_public_key = ECDHCurve25519.generate_public_key(bob_secret_key);

// Assume that Alice and Bob have exchanged their public keys.

// Alice is calculating the shared secret.
byte[] alice_shared_secret = ECDHCurve25519.generate_shared_secret(
    alice_secret_key, bob_public_key);

// Bob is also calculating the shared secret.
byte[] bob_shared_secret = ECDHCurve25519.generate_shared_secret(
    bob_secret_key, alice_public_key);

More details can be found on the ECDH-Curve25519-Mobile project page at GitHub. Hope to see you there!

Testing USB-C to USB-A/Micro-USB Cables for Conformance

Many new mobile devices now feature a USB-C connector. To connect these devices to USB devices or chargers with a USB-A or Micro-USB connector, you need an adapter or cable with a USB-C plug on one side and a USB-A/Micro-USB connector on the other.

As first discovered by Google engineer Benson Leung, many of these USB-C to USB-A/Micro-USB cables or adapters do not conform to the USB standard, allowing USB-C devices to draw excessive power, which might permanently damage the host or charger.

Recently, I bought a Nexus 5x featuring a USB-C connector and faced the problem of figuring out whether my USB-C to USB-A cable conforms to the standard. So I bought a USB-C connector (actually not as easy to get as I thought) and tested my cable with a multimeter. Of course, this works fine, and fortunately my cable was OK. Then I thought: why not build a little device to quickly check cables without a multimeter? Just plug in the cable and see whether it is OK or not.

That’s exactly what I present here: an Arduino-based device that checks USB-C to USB-A/Micro-USB cables and adapters for standard conformance. Two images of the board are shown below. As you will see, it is not very complex at all, and I don’t claim this to be rocket science. It is just a little practical tool. Everything is completely open source, the code as well as the hardware design (printed circuit board), and you can download both from my GitHub repository.

USB-C Adapter Tester


I don’t want to repeat everything that has already been said elsewhere. However, to keep this page self-contained, I quickly describe the problem in plain words so you can easily understand the solution.

USB-C allows the USB host or charger (called the downstream-facing port, DFP) to signal to the powered USB-C device (called the upstream-facing port, UFP) how much current it can provide. This is implemented by a defined current flowing from the DFP to the UFP over the CC (Configuration Channel) line of the USB-C connector: 80 µA ±20 % signals “Default USB Power” (900 mA for “Super Speed” devices), 180 µA ±8 % signals 1.5 A, and 330 µA signals 3.0 A.

So far, so good. A USB-C host or charger will know how much power it can provide and signal the correct value by sourcing the corresponding current on the CC line. The problem starts with “legacy” devices with USB-A or Micro-USB connectors. These connectors don’t have a CC pin; thus, the host or charger cannot signal to the USB-C device how much current it can provide. In this case, the current on the CC line is “generated” by the cable or adapter using a simple resistor RP connecting the 5 V line to the CC line of the UFP. You might remember: R = V/I. So by selecting the right resistor in the cable/adapter, a certain current ICC flows through the CC line. Actually, the UFP connects CC through another 5.1k resistor (RD) to ground, so you have to consider the series resistance of RP and RD when calculating ICC. RP = 56k corresponds to about 80 µA (“Default USB Power”), RP = 22k to about 180 µA (1.5 A), and RP = 10k to about 330 µA (3.0 A).
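
To illustrate the arithmetic, here is a small Java sketch (resistor and voltage values as given above):

// CC current for a legacy cable or adapter: ICC = 5 V / (RP + RD).
double ccCurrentMicroAmps(double rpOhms) {
    final double VBUS = 5.0;   // 5 V supply line
    final double RD = 5100.0;  // 5.1k pull-down inside the UFP
    return VBUS / (rpOhms + RD) * 1e6;
}

// ccCurrentMicroAmps(56000) ≈ 81.8 µA  -> "Default USB Power"
// ccCurrentMicroAmps(22000) ≈ 184.5 µA -> 1.5 A
// ccCurrentMicroAmps(10000) ≈ 331.1 µA -> 3.0 A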

Note that now the adapter cable, rather than the upstream USB host or charger, defines the maximum current the downstream USB-C device may draw! However, the cable cannot know which host or charger it will be connected to and how much current that host or charger can actually provide. So the only safe choice for RP is a value resulting in 80 µA on the CC line, corresponding to “Default USB Power”, i.e., a 56k resistor. Unfortunately, some cable and adapter manufacturers don’t use 56k resistors but lower values like 10k. If your host can only provide the required “Default USB Power”, it might get grilled.


Now that we know what to check, we can build the USB-C adapter tester shown in the images above. The tester consists of a microcontroller (ATmega328P; the same chip as used by the Arduino UNO) featuring an analog-to-digital converter (ADC). The ADC measures the voltage drop across a 5.1k resistor (actually, two separate 5.1k resistors on different channels of the ADC, since USB-C features two CC lines so you can plug in the USB-C cable either way). Knowing the resistance and the voltage drop measured by the ADC, the microcontroller calculates ICC. If ICC is within the specified range (80 µA ±20 %), an LED signaling a “good” cable is turned on from a GPIO pin. If it is outside the range, another LED signaling a “bad” cable is turned on.
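
The check itself boils down to a few lines. Here is a sketch of the decision logic in Java (the actual firmware is Arduino code; the constants follow the values above):

// Decide whether the measured CC current signals "Default USB Power".
boolean cableIsGood(int adcReading /* 0..1023 */) {
    final double VREF = 2.5;                   // external 2.5 V reference
    final double RD = 5100.0;                  // 5.1k sense resistor
    double vDrop = adcReading / 1024.0 * VREF; // voltage across RD
    double iccMicroAmps = vDrop / RD * 1e6;    // current on the CC line
    return iccMicroAmps >= 64.0 && iccMicroAmps <= 96.0; // 80 µA ±20 %
}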

The cable to be checked also powers the microcontroller from the USB host or charger. The good old ATmega328P can be powered from 5 V, which is the voltage of USB-A and Micro-USB.

Since the internal voltage reference of the ATmega might not be very precise, I used an external 2.5 V voltage reference diode to provide a reference voltage to the ADC. If you trust the internal 1.1 V voltage reference of the ATmega, you can omit this part.

As said, the USB-C connector was a little hard to get, but I finally found one in an eBay shop.

For the implementation of the code, I used the Arduino platform. The device is programmed through a standard 6 pin in-system programmer port.

As soon as you plug in the cable under test, the microcontroller starts measuring the voltage drop, translates it to current, compares it to the specified range, and switches on the corresponding LED signaling a good or bad cable.

If you want to etch the PCB yourself, I provide the Eagle files in the Git repository. Of course, you can also simply use a standard Arduino UNO instead of the shown PCB.

Several cables and adapters were tested with this device. The Micro-USB/USB-C adapter that came with the Nexus 5x phone was OK, as was my axxbiz USB-A/USB-C cable. Some Micro-USB/USB-C adapters were not OK (using a 10k resistor instead of a 56k resistor). Benson Leung has tested many more cables, if you are interested in what to buy.

I hope your USB cable is OK :)

BLE-V-Monitor: How car batteries join the Internet of Things

The Internet of Things (IoT) envisions a world where virtually everything is connected and able to communicate. Today, I want to present one such IoT application, namely BLE-V-Monitor: a battery voltage monitor for vehicles (cars, motorbikes).

BLE-V-Monitor consists of an Arduino-based monitoring device and an Android app. The BLE-V-Monitor device is connected to the car battery to monitor the battery voltage and record voltage histories. The app queries the current voltage and voltage history via Bluetooth Low Energy (BLE) and displays them to the user. Below you can see an image of the circuit board of the BLE-V-Monitor device and two screenshots of the app showing the current voltage, charge status, and voltage history.

BLE-V-Monitor board.


BLE-V-Monitor app: voltage and charge status

BLE-V-Monitor app: minutely voltage history

The main features of BLE-V-Monitor are:

  • Voltage and battery charge status monitoring
  • Recording of minutely, hourly, and daily voltage histories
  • Bluetooth Low Energy (BLE) to transmit voltage samples to smartphones, tablets, Internet gateways, etc.
  • Very low energy consumption
  • Android app for displaying current voltage and voltage histories
  • Open source hardware (CERN Open Hardware Licence v1.2) and software (Apache License 2.0)


According to a recent study by ADAC (the largest automobile club in Europe), 46 % of car breakdowns are due to electrical problems, mostly empty or broken batteries. Personally, I know of several incidents where a broken or empty battery was the reason for the breakdown of a car or motorbike. So, no question: there is a real problem to be solved.

The major problem with an empty battery is that you might not realize it until you turn the key or, for those of you with a more modern car, push the engine start button. And then it is already too late! So wouldn’t it be nice if the battery could tell you in advance when it needs to be recharged, and let you know its status (weakly charged, fully charged, discharged, etc.)?

That’s where the Internet of Things comes into play: the “thing” is your car battery, which is able to communicate its voltage and charge status using wireless communication technologies.

Let me present some technical details of BLE-V-Monitor to show you how this specific IoT use case can be implemented. More details, including the Android and Arduino source code and the hardware design (PCB layout), can be found on GitHub:


The technical design of BLE-V-Monitor was driven by two key requirements:

  1. Keep it as simple as possible: Simple and commonly available hardware; through-hole PCB design to allow for simple etching and soldering.
  2. Very low energy consumption. What is the use of a battery monitor that itself consumes substantial energy? To give you an idea that this is not trivial, even though a car battery stores a lot of energy (usually more than 40 Ah even for smaller cars), consider the current of one standard LED, about 15 mA, connected through a resistor to your 12 V car battery. After two hours, this LED and the resistor have consumed 2 h * 15 mA * 12 V = 30 mAh * 12 V of energy. Now, assume starting your motor with a starter motor drawing 50 A on average over a 2 s starting period. In this scenario, starting your motor once consumes 2 s * 50 A * 12 V = 28 mAh * 12 V. Thus, in less than two hours, the LED and its resistor consume about the same energy as starting your car once (see the sketch below). I know this scenario is highly simplified, but it shows that even a small consumer (in our case, the BLE-V-Monitor device) is significant if it runs for a long time. Consequently, our goal is to bring the average current consumption of the monitoring device far below 1 mA.
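
Here is the arithmetic from item 2 as a small Java sketch:

// Charge drawn from a battery, in mAh (1 mAh = 3.6 As).
double chargeMilliAmpHours(double seconds, double amps) {
    return seconds * amps * 1000.0 / 3600.0;
}

// LED example: 2 h at 15 mA -> 30 mAh
double led = chargeMilliAmpHours(2 * 3600, 0.015);
// Engine start: 2 s at 50 A -> about 28 mAh
double start = chargeMilliAmpHours(2, 50.0);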


Technically, BLE-V-Monitor consists of the BLE-V-Monitor device already shown above and a smartphone app for Android.

The BLE-V-Monitor device periodically samples the voltage of the battery, and the app uses Bluetooth Low Energy (BLE) to query the battery voltage when the smartphone is close to the car. Instead of using a smartphone, you could also install some dedicated (fixed) hardware (e.g., a Raspberry Pi with a Bluetooth USB stick in your garage), but since I walk by my car every day and the range of BLE is sufficient to receive the signal even one floor above the garage, I have not considered this option so far.

In order not to lose data while the smartphone is not within BLE range, the BLE-V-Monitor device records minutely, hourly, and daily histories in RAM, which can then be queried by the smartphone.

This approach based on BLE has several advantages: It is cheap. It is energy efficient. Clients can be implemented with many existing devices since BLE is commonly available in most consumer devices, in particular, mobile devices and cheap single-board computers like the Raspberry Pi (using a Bluetooth USB stick).

BLE-V-Monitor Device

The BLE-V-Monitor device is based on the Arduino platform. It uses an ATmega328P microcontroller and the MOD-nRF8001 BLE module from Olimex with the nRF8001 BLE chip from Nordic Semiconductor. The ATmega is programmed via an in-system programmer (ISP) and interfaces with the BLE module through SPI. Overall, if you build this device yourself, the hardware might cost you less than 20 $. And since we rely on a simple and energy-efficient microcontroller and BLE together with small duty cycles, the current consumption can be below 100 µA (including everything, like the 3.3 V voltage regulator that powers the microcontroller and BLE module from the car battery).

To measure voltage, we use the 10 bit analog-to-digital converter (ADC) of the ATmega (no extra ADC component required). The measurable voltage range is 0 to 18 V; thus, the resolution is 18 V / 1024 = 17.6 mV, which is fine-grained enough to derive the charge status of the battery (see the voltage thresholds below). Note that while the car is running, the car’s alternator provides more than 12 V to charge the battery (about 15 V for my car, as can be seen in the voltage history screenshot). A voltage divider with large resistor values (to save energy) divides the battery voltage: since we use a 2.5 V reference voltage, 18 V is mapped to 2.5 V by the voltage divider. The 2.5 V reference voltage is provided by the very precise micropower voltage reference diode LM285-2.5, which is powered on demand through a GPIO pin of the ATmega only during sampling, to minimize energy consumption as much as possible. Since the resistors of the voltage divider have large values to save energy, a 100 nF capacitor in parallel with the second resistor of the voltage divider provides a low-impedance source to the ADC (this 100 nF capacitor is much larger than the 14 pF sampling capacitor of the ATmega).
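
As a sketch in Java (the actual firmware is Arduino code), the conversion from ADC reading to battery voltage is simply:

// Convert a 10 bit ADC reading into the battery voltage.
// The voltage divider maps 0..18 V onto the 0..2.5 V reference range.
double batteryVoltage(int adcReading /* 0..1023 */) {
    final double FULL_SCALE = 18.0; // battery voltage at full ADC scale
    return adcReading / 1024.0 * FULL_SCALE;
}
// Resolution: 18 V / 1024 ≈ 17.6 mV per ADC step.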

An 18 V varistor (not shown in the image; it is an SMD part on the back side of the PCB, since I only had an SMD version available) protects against transient voltage spikes above 18 V. Since varistors typically age whenever they shunt excessive voltage, a fuse limits the current to protect against a short circuit of the varistor.

A micropower voltage regulator (LP295x) provides 3.3 V to the ATmega and the BLE module. The 100 mA that this regulator can provide are more than sufficient to power the ATmega and BLE module while they are active, and a very low quiescent current of only 75 µA ensures efficient operation with small duty cycles.

BLE-V-Monitor App

The BLE-V-Monitor App is implemented for Android (version 4.3 or higher since we need the BLE features of Android). It consists of a tab view with a fragment to display the current voltage, and three more fragments to display minutely, hourly, and daily voltage histories, respectively.

The charge status of a lead–acid car battery can quite easily be derived from its voltage. We use the following voltage levels to estimate the charge status on the client side (a sketch of this mapping follows the list):

  • 100 % charged (fully charged): about 12.66 V
  • 75 % charged (charged): about 12.35 V
  • 50 % charged (weakly charged): about 12.10 V
  • 25 % charged (discharged): about 11.95 V
  • 0 % charged (over discharged): about 11.7 V
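
A sketch of this client-side mapping in Java (the threshold handling between the levels is my choice; the app may interpolate differently):

// Map a measured battery voltage to a charge status label.
String chargeStatus(double voltage) {
    if (voltage >= 12.66) return "fully charged";  // 100 %
    if (voltage >= 12.35) return "charged";        //  75 %
    if (voltage >= 12.10) return "weakly charged"; //  50 %
    if (voltage >= 11.95) return "discharged";     //  25 %
    return "over discharged";                      //   0 %
}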

The screenshots above show some examples of the current voltage, charge status, and voltage histories. In the history screenshot you can also identify two periods when the car was running where the charging voltage reached about 15 V.

Final Prototype

The following photos show how the BLE-V-Monitor PCB is mounted inside a case and the placement of the monitoring device right in front of the battery of my car (in this photo, the device is already connected to the battery but not yet fixed in place). Fortunately, older cars have plenty of space and not a lot of useless plastic hiding every part of the engine.

BLE-V-Monitor device with case


BLE-V-Monitor device mounted in the car and connected to the car battery

The strain relief (a simple knot) might not look very elegant, but it is highly effective.

Obviously, plastic is the better choice for the case, since the Bluetooth module is inside. Still, I had some concerns that all the metal of the car would shield the Bluetooth signal too much, but it works surprisingly well. Even one floor above the garage, with the metal engine hood and a concrete ceiling between device and client, I can still receive a weak signal and query the battery status.

Where to go from here?

Obviously, there is some potential to further improve the functionality. Beyond just monitoring the raw voltage and mapping it to a charge status, we could analyse the voltage data to find out whether the battery is still in a healthy condition. For instance, we could look at voltage peaks and analyse the voltage histories to find out how quickly the battery discharges, and how these values change over the lifetime of the battery. To this end, you could send the data to the cloud, although I think you could implement such simple “small data” analytics also on the smartphone or even on the microcontroller of the monitoring device.

However, the battery or car vendor might want to collect the status of all of their batteries in the cloud for other reasons, for instance, to improve maintenance and product quality, or to offer advanced services. With the cloud, everything becomes a service, so why not offer “battery as a service”? Instead of buying the battery, you buy the service of always having enough energy to operate your car. When the performance of your battery degrades over time, the vendor already knows and sends you a new battery well before the old one is completely broken, or invites you to visit a garage where they exchange the battery for you (this service would be included in the “battery as a service” fees).

I hope you found this little journey to the IoT interesting. Have a good trip, wherever you go!

Raspberry Pi Going Realtime with RT Preempt

[UPDATE 2016-05-13: Added pre-compiled kernel version 4.4.9-rt17 for all Raspberry Pi models (Raspberry Pi Model A(+), Model B(+), Zero, Raspberry Pi 2, Raspberry Pi 3). Added build instructions for Raspberry Pi 2/3.]

A real-time operating system gives you deterministic bounds on delay and delay variation (jitter). Such a real-time operating system is an essential prerequisite for implementing so-called Cyber Physical Systems, where a computer controls a physical process. Prominent examples are the control of machines and robots in production environments (Industry 4.0), drones, etc.

RT Preempt is a popular patch for the Linux kernel to transform Linux into such a realtime operating system. Moreover, the Raspberry Pi has many nice features to interface with sensors and actuators like SPI, I2C, and GPIO so it seems to be a good platform for hosting a controller in a cyber-physical system. Consequently, it is very attractive to install Linux with the RT Preempt patch on the Raspberry Pi.

Exactly this is what I do here: I provide detailed instructions on how to install a Linux kernel with the RT Preempt patch on a Raspberry Pi. Basically, I wrote this text to document the process for myself, and it is more or less a collection of information you will find on the web. But anyway, I hope I can save some people some time.

And to save you even more time, here is the pre-compiled kernel (including kernel modules, firmware, and device tree) for the Raspberry Pi Model A(+),B(+), Raspberry Pi Zero, Raspberry Pi 2 Model B, Raspberry Pi 3 Model B:

To install this pre-compiled kernel, log in to your Raspberry Pi running Raspbian (if you have not installed Raspbian already, you can find an image on the Raspbian download page) and execute the following commands (I recommend doing a backup of your old image, since this procedure will overwrite the old kernel):

pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ cd /tmp
pi@raspberry ~$ wget
pi@raspberry ~$ tar xzf kernel-4.4.9-rt17.tgz
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/
pi@raspberry ~$ sudo /sbin/reboot

With this patched kernel, I could achieve bounded latency well below 200 microseconds on a fully loaded 700 MHz Raspberry Pi Model B (see results below). This should be safe for tasks with a cycle time of 1 ms.

Since compiling the kernel on the Pi is very slow, I will cross compile the kernel on a more powerful host. You can distinguish the commands executed on the host and the Pi by looking at the prompt of the shell in the following commands.

Install Vanilla Raspbian on your Raspberry Pi

Download Raspbian and install it on your SD card.

Download Raspberry Pi Kernel Sources

On your host (where you want to cross-compile the kernel), download the latest kernel sources from GitHub:

user@host ~$ git clone
user@host ~$ cd linux

If you like, you can switch to an older kernel version like 4.1:

user@host ~/linux$ git checkout rpi-4.1.y

Patch Kernel with RT Preempt Patch

Next, patch the kernel with the RT Preempt patch. Choose the patch matching your kernel version; to this end, have a look at the Makefile: VERSION, PATCHLEVEL, and SUBLEVEL define the kernel version. At the time of writing this tutorial, the latest kernel was version 4.4.9. Patches for older kernels can be found in the folder "older".

user@host ~/linux$ wget
user@host ~/linux$ zcat patch-4.4.9-rt17.patch.gz | patch -p1

Install and Configure Tool Chain

For cross-compiling the kernel, you need the tool chain for ARM on your machine:

user@host ~$ git clone
user@host ~$ export ARCH=arm
user@host ~$ export CROSS_COMPILE=/home/user/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
user@host ~$ export INSTALL_MOD_PATH=/home/user/rtkernel

Later, when you install the modules, they will go into the directory specified by INSTALL_MOD_PATH.

Configure the kernel

Next, we need to configure the kernel for using RT Preempt.

For Raspberry Pi Model A(+), B(+), Zero, execute the following commands:

user@host ~$ export KERNEL=kernel
user@host ~$ make bcmrpi_defconfig

For Raspberry Pi 2/3 Model B, execute these commands:

user@host ~$ export KERNEL=kernel7
user@host ~$ make bcm2709_defconfig

An alternative way is to export the configuration from a running Raspberry Pi:

pi@raspberry$ sudo modprobe configs
user@host ~/linux$ scp pi@raspberry:/proc/config.gz ./
user@host ~/linux$ zcat config.gz > .config

Then, you can start to configure the kernel:

user@host ~/linux$ make menuconfig

In the kernel configuration, enable the following settings:

  • CONFIG_PREEMPT_RT_FULL: Kernel Features → Preemption Model (Fully Preemptible Kernel (RT)) → Fully Preemptible Kernel (RT)
  • Enable HIGH_RES_TIMERS: General setup → Timers subsystem → High Resolution Timer Support (Actually, this should already be enabled in the standard configuration.)

Build the Kernel

Now, it’s time to cross-compile and build the kernel and its modules:

user@host ~/linux$ make zImage
user@host ~/linux$ make modules
user@host ~/linux$ make dtbs
user@host ~/linux$ make modules_install

The last command installs the kernel modules in the directory specified by INSTALL_MOD_PATH above.

Transfer Kernel Image, Modules, and Device Tree Overlay to their Places on Raspberry Pi

We are now ready to transfer everything to the Pi. To this end, you could mount the SD card on your PC. I prefer to transfer everything over the network using a tar archive:

user@host ~/linux$ mkdir $INSTALL_MOD_PATH/boot
user@host ~/linux$ ./scripts/mkknlimg ./arch/arm/boot/zImage $INSTALL_MOD_PATH/boot/$KERNEL.img
user@host ~/linux$ cp ./arch/arm/boot/dts/*.dtb $INSTALL_MOD_PATH/boot/
user@host ~/linux$ cp -r ./arch/arm/boot/dts/overlays $INSTALL_MOD_PATH/boot
user@host ~/linux$ cd $INSTALL_MOD_PATH
user@host ~/linux$ tar czf /tmp/kernel.tgz *
user@host ~/linux$ scp /tmp/kernel.tgz pi@raspberry:/tmp

Then on the Pi, install the real-time kernel (this will overwrite the old kernel image!):

pi@raspberry ~$ cd /tmp
pi@raspberry ~$ tar xzf kernel.tgz
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/

Most people also disable the Low Latency Mode (llm) for the SD card:

pi@raspberry /boot$ sudo nano cmdline.txt

Add the following option to the end of the command line:

sdhci_bcm2708.enable_llm=0

pi@raspberry ~$ sudo /sbin/reboot

Latency Evaluation

For sure, you want to know the latency bounds achieved with the RT Preempt patch. To this end, you can use the tool cyclictest with the following test case:

  • clock_nanosleep(TIMER_ABSTIME)
  • Cycle interval 500 micro-seconds
  • 100,000 loops
  • 100 % load generated by running the following commands in parallel:
    • On the Pi:
      pi@raspberry ~$ cat /dev/zero > /dev/null
    • From another host:
      user@host ~$ sudo ping -i 0.01 raspberrypi
  • 1 thread (I used a Raspberry Pi model B with only one core)
  • Locked memory
  • Process priority 80

To build and run cyclictest:

pi@raspberry ~$ git clone git://
pi@raspberry ~$ cd rt-tests/
pi@raspberry ~/rt-test$ make all
pi@raspberry ~/rt-test$ sudo ./cyclictest -m -t1 -p 80 -n -i 500 -l 100000

On a Raspberry Pi model B at 700 MHz, I got the following results:

T: 0 ( 976) P:80 I:500 C: 100000 Min: 23 Act: 40 Avg: 37 Max: 95

With some more tests, the worst-case latency sometimes reached about 166 microseconds. Adding a safety margin, this should be safe for cycle times of 1 ms.

I also observed that with timers other than clock_nanosleep(TIMER_ABSTIME), e.g., the system timers (sys_nanosleep and sys_setitimer), the latency was much higher, with maximum values above 1 ms. Thus, for low latencies, I would only rely on clock_nanosleep(TIMER_ABSTIME).

Faros BLE Beacon with Google Eddystone Support

Today I want to introduce Faros, an open-source Bluetooth Low Energy (BLE) beacon supporting Google’s open beacon format Eddystone implemented for the popular Arduino platform.

With the invention of Apple’s iBeacon, BLE beacons became very popular as a positioning technology. Now Google has released a new open beacon format called Eddystone (named after a famous lighthouse), which is more versatile. Eddystone supports three frame types for broadcasting data:

  • UID frames broadcasting identifiers, namely a namespace identifier to group a set of beacons, and an instance identifier to identify an individual beacon.
  • URL frames broadcasting a short URL, so you can think of this as a radio-based QR code replacement.
  • Telemetry (TLM) frames to check the health status of beacons. These frames contain the beacon temperature, battery level, uptime, and counters for broadcasted frames to facilitate the management of beacons.

The complete protocol specification is available online, so I will not repeat the technical details here.
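
Just to give a flavor of the format: Eddystone frames are broadcast as BLE service data for the 16 bit service UUID 0xFEAA. Here is a sketch in Java of how a URL frame payload is laid out according to the specification (Faros assembles its frames in C, of course):

// Eddystone-URL frame payload for "https://github.com/" (per the spec).
byte[] urlFrame = new byte[] {
    0x10,        // frame type: URL
    (byte) 0xEE, // calibrated TX power at 0 m (example value: -18 dBm)
    0x03,        // URL scheme prefix: "https://"
    'g', 'i', 't', 'h', 'u', 'b',
    0x00         // expansion code: ".com/"
};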

Faros (named after the Faros of Alexandria, one of the seven wonders of the ancient world) is an implementation of the Eddystone protocol targeting the popular Arduino platform and using the nRF8001 chip from Nordic Semiconductor. The main features of Faros are:

  • Full support for all Eddystone frame types (UID, URL, TLM)
  • Energy efficiency allowing for runtimes of several years
  • Using popular and powerful hardware platforms: Arduino and nRF8001 BLE chip
  • Simplicity of hardware: easy to build using a commodity Arduino or our Faros board together with the BLE module MOD-nRF8001 from Olimex
  • Liberal licensing: Apache License 2.0 for software, CERN Open Hardware Licence v1.2 for hardware

Faros is hosted at GitHub and includes:

  • Source code for Arduino
  • Faros board schematics and board layouts

Below you see several versions of Faros beacons:

  • Faros software running on a commodity Arduino Pro Micro, powered and programmed through the USB connector. This could also easily be set up on a breadboard.
  • Self-made Faros printed circuit board (50 mm x 70 mm) with an ATmega328P powered by two AA-size batteries. A PDF with the PCB mask is included in the Git repository.
  • Faros printed circuit board (50 mm x 50 mm) manufactured by Seeed Studio using an ATmega328P. 10 PCBs cost about 15 $ including shipping. Gerber files are included in the Git repository.

Faros running on Arduino Pro Micro

Self-made Faros board with ATmega328P

Faros board manufactured by Seeed Studio with ATmega328P

The Faros board further lowers cost and energy consumption—two very important requirements for field deployments. The Faros board just needs the BLE module and an ATmega328P (where the “P” stands for pico-power). It is programmed via ISP, so you do not need USB. The ATmega is put into power-down mode whenever possible and then consumes only a few microamperes. The nRF8001 will wake up the ATmega whenever it has an event to transmit. Moreover, the watchdog timer wakes up the ATmega periodically to switch between the Eddystone frame types. Thus, the beacon can send all three frame types sequentially in round-robin fashion.

The Faros board is kept as simple as possible (through-hole design, no SMD
components). It comes in two versions: (1) a single-sided 50 mm x 70 mm layout
that is well-suited for self-etching; (2) a double-sided 50 mm x 50 mm layout that can be sent to a PCB manufacturer.

The Faros board also has a 3 mm low-current LED, which can be switched on by a digital pin of the Arduino. It draws 10 mA at 3 V, which is a lot if one targets runtimes of several years! So if you aim at maximum battery lifetime, only send short pulses at long intervals (e.g., one 100 ms pulse every 30 s amounts to about 33 µA average current, as sketched below), or even better: rely on TLM frames (management is their job anyway).
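
The average-current arithmetic from the parenthesis above, as a quick sketch:

// Average current of a pulsed LED: peak current times duty cycle.
double ledPeakMilliAmps = 10.0;
double dutyCycle = 0.1 / 30.0; // one 100 ms pulse every 30 s
double avgMicroAmps = ledPeakMilliAmps * dutyCycle * 1000.0; // ≈ 33 µA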

Of course, the Faros board can be run from batteries. The nRF8001 can run down to 1.9 V; the ATmega down to 1.8 V at frequencies <= 4 MHz. The maximum voltage for the nRF8001 is 3.6 V. Thus, one good option is to use two AA- or two AAA-size alkaline batteries. They are cheap. They provide > 1800 mAh, which should suffice for several years of runtime. Two batteries are discharged at about 2.0 V (then the voltage drops rapidly), which fits nicely into our desired voltage range of 1.9 – 3.0 V. And at runtimes of several years, no recharging is required (you rather replace the device than change the batteries). Of course, you can also try other options like coin cells (e.g., one CR2450 @ 3.0 V, 560 mAh), or one battery (1.5 V) plus a step-up converter (which may waste more than 20 % of the energy for conversion and is probably more expensive than a second AA or AAA battery).

Finally, here are some screenshots from Google’s Eddystone Validator and Nordic’s nRF Master Control Panel showing the content of the Eddystone frames broadcasted by a Faros beacon.

Eddystone data sent by Faros beacon

Eddystone data sent by Faros beacon

UID Eddystone frame sent by Faros beacon

UID Eddystone frame sent by Faros beacon

URL Eddystone frame sent by Faros beacon

URL Eddystone frame sent by Faros beacon

TLM Eddystone frame sent by Faros beacon

TLM Eddystone frame sent by Faros beacon

All further technical details can be found in the Faros Git repository at GitHub.

Have fun building your own Eddystone beacon with Faros!

Introducing SDN-MQ: A Powerful and Simple-to-Use Northbound Interface for OpenDaylight

One of the essential parts of an SDN controller is the so-called northbound interface through which network control applications implementing control logic interface with the SDN controller. The SDN controller then uses the OpenFlow protocol to program the switches according to the instructions of the control application. Since the northbound interface is the “API to the network”, a well-designed interface is essential for the acceptance and success of the SDN controller.

Ideally, the northbound interface should be powerful and still simple. Powerful means that it should expose all essential functionality of OpenFlow to the control application. Certainly, the most essential function of SDN is flow programming, i.e., defining forwarding table entries on the switches. On the one hand, flow programming should include proactive flow programming, where the control application proactively decides to program a flow (e.g., a static flow). On the other hand, the northbound interface should support reactive flow programming, where the control application reacts to packet-in events triggered by packets without matching forwarding table entries.

Simple means that the programmer should be able to use technologies that he is familiar with. So in short, the ideal northbound interface should be as simple as possible, but not simpler.

Current Northbound Interfaces and Observed Limitations

OpenDaylight currently offers two kinds of northbound interfaces:

  1. RESTful interfaces using XML/JSON over HTTP.
  2. OSGi allowing for implementing control logic as OSGi services.

RESTful interfaces are simple to use since they are based on technologies that many programmers are familiar with and that are used in many web services. Parsing and creating JSON or XML messages and sending or receiving these messages over HTTP is straightforward and well-supported by many libraries. However, due to the request/response nature of REST and HTTP, these interfaces are restricted to proactive flow programming. The very essential feature of reacting to packet-in events is missing.

OSGi interfaces are powerful: control applications can use any feature of the OpenFlow standard (implemented by the controller). However, they are much more complex than RESTful interfaces, since OSGi itself is a complex technology. Moreover, OSGi is targeted at Java, which is nice for integration with the Java-based OpenDaylight controller, but bad if you want to use any other language, like C++ or Python, to implement your control logic.

So none of these interfaces seems to be simple and powerful at the same time.

How SDN can Benefit from Message-oriented Middleware

So how can we have the best of both worlds: a simple and powerful interface? The keyword (or maybe at least one possible keyword) is message-oriented middleware. As shown in the following figure, a message-oriented middleware (MOM) decouples the SDN controller from the control application through message queues for request/response interaction (proactive flow programming) and publish/subscribe topics for event-based interaction (reactive flow programming). So we can program flows through a request-response interface implemented by message queues and react to packet-in events by subscribing to events through message topics.


Moreover, messages can be based on simple textual formats like XML or JSON, making message creation and interpretation as simple as for the RESTful interfaces mentioned above, but without their restriction to request/response interaction.

Since a MOM decouples the SDN controller from the control application, the control logic can be implemented in any programming language. The SDN controller and the application talk to each other using JSON/XML, and the MOM takes care of transporting messages from the application to the SDN controller and vice versa.

This decoupling also allows for the horizontal distribution of control logic by running control applications on several hosts. Such a decoupling “in space” is perfect for scaling out horizontally.

MOMs not only decouple the controller and the control application in space but also in time: the receiver does not need to consume a message at the time it is sent. Messages can be buffered by the MOM and delivered when the control application or SDN controller is available and ready to process them. Although this is a nice feature in general, time decoupling might not be strictly essential for SDN, since usually we want a timely reaction from both controller and application. Still, it might be handy for some delay-tolerant functions.

SDN-MQ: Integrating Message-oriented Middleware and SDN Controller

SDN-MQ integrates a message-oriented middleware with the OpenDaylight controller. In more detail, SDN-MQ is based on the Java Message Service (JMS) standard. The basic features of SDN-MQ are:

  • All messages are consistently based on JSON, making message generation and interpretation straightforward.
  • SDN-MQ supports proactive and reactive flow programming without the need to implement complex OSGi services.
  • SDN-MQ supports message filtering for packet-in events through standard JMS selectors, so the control application can define which packet-in events to receive, based on packet header fields like source and destination addresses. Following the publish/subscribe paradigm, multiple control applications can receive packet-in event notifications for the same packet.
  • SDN control logic can be distributed horizontally to different hosts for scaling out control logic.
  • Although SDN-MQ is based on the Java-based JMS standard, JMS servers such as Apache ActiveMQ support further language-independent protocols like STOMP (Streaming Text Oriented Messaging Protocol). Therefore, control applications implemented in C++, Python, JavaScript, etc. are also supported.
  • Besides packet-in events and flow programming, SDN-MQ supports further essential functionality such as packet forwarding/injection via the controller.
  • SDN-MQ is open source and licensed through the Eclipse license (similar to OpenDaylight). The full source code is available at GitHub.

The figure below shows the basic architecture of SDN-MQ. SDN-MQ is implemented as OSGi services executed within the same OSGi framework as the OpenDaylight OSGi services. SDN-MQ uses the OpenDaylight services to provide its services to the control application. So basically, SDN-MQ acts as a bridge between OpenDaylight and the control application.


Three services are implemented by SDN-MQ to date:

  • Packet-in service to receive packet-in events including packet filtering based on header fields using JMS selectors.
  • Flow programming to define flow table entries on switches.
  • Packet forwarding to forward either packets received through packet-in events or new packets created by the application.

The JMS middleware transports messages between the SDN-MQ services and the control applications. As JMS middleware, we have used ActiveMQ so far, but any JMS-compliant server should work. If the message-oriented middleware supports other language-independent protocols (such as STOMP), control applications can be implemented in any supported language.
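
To give an impression of what reactive control logic looks like on the application side, here is a sketch of a JMS subscriber with a message selector (the broker URL, topic name, and selector property are placeholders, not the actual SDN-MQ names; see the SDN-MQ documentation for those):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PacketInSubscriber {
    public static void main(String[] args) throws Exception {
        // Connect to the JMS broker (here: a local ActiveMQ instance).
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Subscribe to packet-in events; topic name and selector
        // property are hypothetical placeholders.
        Topic packetIn = session.createTopic("sdnmq.packetin");
        MessageConsumer consumer =
                session.createConsumer(packetIn, "nwDst = '10.0.0.1'");

        // Packet-in notifications arrive as JSON text messages.
        TextMessage msg = (TextMessage) consumer.receive();
        System.out.println(msg.getText());

        connection.close();
    }
}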

Where to go from here

In my next blog post, I will explain in detail how to use SDN-MQ. Until then, you can find more details and programming examples on the SDN-MQ website at GitHub.

Stay tuned!

Reactive Flow Programming with OpenDaylight

In my last OpenDaylight tutorial, I demonstrated how to implement an OSGi module for OpenDaylight. In this tutorial, I will show how to use these modules for reactive flow programming and packet forwarding.

In detail, you will learn:

  • how to decode incoming packets
  • how to set up flow table entries including packet match rules and actions
  • how to forward packets


To make things concrete, we consider a simple scenario in this tutorial: load balancing of a TCP service (e.g., a web service using HTTP over TCP). The basic idea is that TCP connections to a service addressed through a public IP address and port number are distributed among two physical server instances using IP address re-writing performed by an OpenFlow switch. Whenever a client opens a TCP connection to the service, one of the server instances is chosen randomly, and a forwarding rule is installed by the network controller on the ingress switch to forward all incoming packets of this TCP connection to the chosen server instance. In order to make sure that the server instance accepts the packets of the TCP connection, the destination IP address is re-written to the IP address of the chosen server instance, and the destination MAC address is set to the MAC address of the server instance. In the reverse direction from server to client, the switch re-writes the source IP address of the server to the public IP address of the service. Therefore, to the client it looks like the response is coming from the public IP address. Thus, load balancing is transparent to the client.

To keep things simple, I do not consider the routing of packets. Rather, I assume that the clients and the two server instances are connected to the same switch on different ports (see figure below). Moreover, I also simplify MAC address resolution by setting a static ARP table entry at the client host for the public IP address. Since there is no physical server assigned to the public IP address, we just set a fake MAC address (in a real setup, the gateway of the data center would receive the client request, so we would not need an extra MAC address assigned to the public IP address).


I assume that you have read the previous tutorial, so I skip some explanations on how to set up an OpenDaylight Maven project, subscribe to services, and further OSGi module basics.

You can find all necessary files of this tutorial in this archive: myctrlapp.tar.gz

The folder myctrlapp contains the Maven project of the OSGi module. You can compile and create the OSGi bundle with the following commands:

user@host:$ tar xzf myctrlapp.tar.gz
user@host:$ cd ~/myctrlapp
user@host:$ mvn package

The corresponding Eclipse project can be created using

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Registering Required Services and Subscribing to Packet-in Events

For our simple load balancer, we need the following OpenDaylight services:

  • Data Packet Service for decoding incoming packets and encoding and sending outgoing packets.
  • Flow Programmer Service for setting flow table entries on the switch.
  • Switch Manager Service to determine the outport of packets forwarded to the server instances.

As explained in my previous tutorial, we register for OSGi services by implementing the configureInstance(...) method of the Activator class:

public void configureInstance(Component c, Object imp, String containerName) {
    log.trace("Configuring instance");

    if (imp.equals(PacketHandler.class)) {
        // Define exported and used services for PacketHandler component.

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("salListenerName", "mypackethandler");

        // Export IListenDataPacket interface to receive packet-in events.
        c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

        // Need the DataPacketService for encoding, decoding, sending data packets
        c.add(createServiceDependency().setService(IDataPacketService.class).setCallbacks(
            "setDataPacketService", "unsetDataPacketService").setRequired(true));

        // Need FlowProgrammerService for programming flows
        c.add(createServiceDependency().setService(IFlowProgrammerService.class).setCallbacks(
            "setFlowProgrammerService", "unsetFlowProgrammerService").setRequired(true));

        // Need SwitchManager service for enumerating ports of switch
        c.add(createServiceDependency().setService(ISwitchManager.class).setCallbacks(
            "setSwitchManagerService", "unsetSwitchManagerService").setRequired(true));
    }
}

set... and unset... define names of callback methods. These callback methods are implemented in our PacketHandler class to receive service proxy objects, which can be used to call the services:

/**
 * Sets a reference to the requested DataPacketService
 */
void setDataPacketService(IDataPacketService s) {
    log.trace("Set DataPacketService.");

    dataPacketService = s;
}

/**
 * Unsets DataPacketService
 */
void unsetDataPacketService(IDataPacketService s) {
    log.trace("Removed DataPacketService.");

    if (dataPacketService == s) {
        dataPacketService = null;
    }
}

/**
 * Sets a reference to the requested FlowProgrammerService
 */
void setFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Set FlowProgrammerService.");

    flowProgrammerService = s;
}

/**
 * Unsets FlowProgrammerService
 */
void unsetFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Removed FlowProgrammerService.");

    if (flowProgrammerService == s) {
        flowProgrammerService = null;
    }
}

/**
 * Sets a reference to the requested SwitchManagerService
 */
void setSwitchManagerService(ISwitchManager s) {
    log.trace("Set SwitchManagerService.");

    switchManager = s;
}

/**
 * Unsets SwitchManagerService
 */
void unsetSwitchManagerService(ISwitchManager s) {
    log.trace("Removed SwitchManagerService.");

    if (switchManager == s) {
        switchManager = null;
    }
}
Moreover, we register for packet-in events in the Activator class. To this end, we must declare that we implement the IListenDataPacket interface (line 11). This interface basically consists of one callback method receiveDataPacket(...) for receiving packet-in events, as described next.

Handling Packet-in Events

Whenever a packet without matching flow table entry arrives at the switch, it is sent to the controller and the event handler receiveDataPacket(...) of our packet handler class is called with the received packet as parameter:

public PacketResult receiveDataPacket(RawPacket inPkt) {
    // The connector the packet came from ("port")
    NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
    // The node that received the packet ("switch")
    Node node = ingressConnector.getNode();

    log.trace("Packet from " + node.getNodeIDString() + " " + ingressConnector.getNodeConnectorIDString());

    // Use DataPacketService to decode the packet.
    Packet pkt = dataPacketService.decodeDataPacket(inPkt);

    if (pkt instanceof Ethernet) {
        Ethernet ethFrame = (Ethernet) pkt;
        Object l3Pkt = ethFrame.getPayload();

        if (l3Pkt instanceof IPv4) {
            IPv4 ipv4Pkt = (IPv4) l3Pkt;
            InetAddress clientAddr = intToInetAddress(ipv4Pkt.getSourceAddress());
            InetAddress dstAddr = intToInetAddress(ipv4Pkt.getDestinationAddress());
            Object l4Datagram = ipv4Pkt.getPayload();

            if (l4Datagram instanceof TCP) {
                TCP tcpDatagram = (TCP) l4Datagram;
                int clientPort = tcpDatagram.getSourcePort();
                int dstPort = tcpDatagram.getDestinationPort();

                if (publicInetAddress.equals(dstAddr) && dstPort == SERVICE_PORT) {
          "Received packet for load balanced service");

                    // Select one of the two servers round robin.

                    InetAddress serverInstanceAddr;
                    byte[] serverInstanceMAC;
                    NodeConnector egressConnector;

                    // Synchronize in case there are two incoming requests at the same time.
                    synchronized (this) {
                        if (serverNumber == 0) {
                            log.trace("Server 1 is serving the request");
                            serverInstanceAddr = server1Address;
                            serverInstanceMAC = SERVER1_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER1_CONNECTOR_NAME);
                            serverNumber = 1;
                        } else {
                            log.trace("Server 2 is serving the request");
                            serverInstanceAddr = server2Address;
                            serverInstanceMAC = SERVER2_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER2_CONNECTOR_NAME);
                            serverNumber = 0;
                        }
                    }
                    // Create flow table entry for further incoming packets

                    // Match incoming packets of this TCP connection 
                    // (4 tuple source IP, source port, destination IP, destination port)
                    Match match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800);  // IPv4 ethertype
                    match.setField(MatchType.NW_PROTO, (byte) 6);       // TCP protocol id
                    match.setField(MatchType.NW_SRC, clientAddr);
                    match.setField(MatchType.NW_DST, dstAddr);
                    match.setField(MatchType.TP_SRC, (short) clientPort);
                    match.setField(MatchType.TP_DST, (short) dstPort);

                    // List of actions applied to the packet
                    List<Action> actions = new LinkedList<Action>();

                    // Re-write destination IP to server instance IP
                    actions.add(new SetNwDst(serverInstanceAddr));

                    // Re-write destination MAC to server instance MAC
                    actions.add(new SetDlDst(serverInstanceMAC));

                    // Output packet on port to server instance
                    actions.add(new Output(egressConnector));

                    // Create the flow
                    Flow flow = new Flow(match, actions);

                    // Use FlowProgrammerService to program flow.
                    Status status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Create flow table entry for response packets from server to client

                    // Match outgoing packets of this TCP connection 
                    match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800); 
                    match.setField(MatchType.NW_PROTO, (byte) 6);
                    match.setField(MatchType.NW_SRC, serverInstanceAddr);
                    match.setField(MatchType.NW_DST, clientAddr);
                    match.setField(MatchType.TP_SRC, (short) dstPort);
                    match.setField(MatchType.TP_DST, (short) clientPort);

                    // Re-write the server instance IP address to the public IP address
                    actions = new LinkedList<Action>();
                    actions.add(new SetNwSrc(publicInetAddress));
                    actions.add(new SetDlSrc(SERVICE_MAC));

                    // Output to client port from which packet was received
                    actions.add(new Output(ingressConnector));

                    flow = new Flow(match, actions);
                    status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Forward initial packet to selected server

                    log.trace("Forwarding packet to " + serverInstanceAddr.toString() + " through port " + egressConnector.getNodeConnectorIDString());

                    // Re-write the destination address information of the initial packet
                    // and send it out. (Reconstructed to match the description below; the
                    // setter names are taken from the AD-SAL packet classes.)
                    ipv4Pkt.setDestinationAddress(serverInstanceAddr);
                    ethFrame.setDestinationMACAddress(serverInstanceMAC);
                    RawPacket destPkt = dataPacketService.encodeDataPacket(ethFrame);
                    destPkt.setOutgoingNodeConnector(egressConnector);
                    dataPacketService.transmitDataPacket(destPkt);

                    return PacketResult.CONSUME;
                }
            }
        }
    }

    // We did not process the packet -> let someone else do the job.
    return PacketResult.IGNORED;
}

Our load balancer reacts as follows to packet-in events. First, it uses the Data Packet Service to decode the incoming packet using method decodeDataPacket(inPkt). We are only interested in packets addressed to the public IP address and port number of our load-balanced service. Therefore, we have to check the destination IP address and port number of the received packet. To this end, we iteratively decode the packet layer by layer. First, we check whether we received an Ethernet frame, and get the payload of the frame, which should be an IP packet for a TCP connection. If the payload of the frame is indeed an IPv4 packet, we typecast it to the corresponding IPv4 packet class and use the methods getSourceAddress(...) and getDestinationAddress(...) to retrieve the IP addresses of the client (source) and service (destination). Then, we go up one layer and check for a TCP payload to retrieve the port information in a similar way.

After we have retrieved the IP address and port information from the packet, we check whether it is targeted at our load-balanced service (line 28). If it is not addressed to our service, we ignore the packet and let another handler process it (if any) by returning PacketResult.IGNORED as the result of the packet handler.

If the packet is addressed to our service, we choose one of the two physical server instances in a round-robin fashion (lines 38–52). The idea is to send the first request to server 1, the second request to server 2, the third to server 1 again, etc. Note that we might have multiple packet handlers for different packets executed in parallel (at least, we should not rely on a sequential execution as long as we do not know how OpenDaylight handles requests). Therefore, we synchronize this part of the packet handler to make sure that only one thread is in this code section at a time.
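As a side note, if you prefer to avoid the synchronized block, the round-robin choice could also be based on an atomic counter (a sketch, not part of the tutorial code; it requires importing java.util.concurrent.atomic.AtomicInteger):

// Shared round-robin counter (thread-safe without explicit locking).
private final AtomicInteger requestCounter = new AtomicInteger(0);

// Inside receiveDataPacket(): even counts go to server 1, odd counts to server 2.
boolean useServer1 = (requestCounter.getAndIncrement() % 2) == 0;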

Programming Flows

To forward a packet to the selected server instance, we select its IP address and MAC address as target addresses for each packet of this TCP connection from the client. To this end, we re-write the destination IP address and destination MAC address of each incoming packet of this connection to the addresses of the selected server. Note that a TCP connection is identified by the 4-tuple [source IP, source port, destination IP, destination port]. Therefore, we use this information as match criteria for the flow that performs address re-writing and packet forwarding.

A flow table entry consists of a match rule and a list of actions. As said, the match rule should identify packets of a certain TCP connection. To this end, we create a new Match object and set the required fields as shown in lines 58–64. Since we are matching on a TCP/IPv4 datagram, we must make sure to identify this packet type by setting the ethertype (0x0800 meaning IPv4) and the protocol id (6 meaning TCP). Moreover, we set the source and destination IP address and port information of the client and service that identifies the individual TCP connection.

Afterwards, we define the actions to be applied to a matched packet of the TCP connection. We set an action for re-writing the IP destination address to the IP address of the selected server instance, as well as the destination MAC address (lines 70 and 73). Moreover, we define an output action to forward packets over the switch port of the server instance. In line 43 and line 49, we use the Switch Manager Service to retrieve the corresponding connector of the switch by its name. Note that these names are not simply the port numbers but s1-eth1 and s1-eth2 in my setup using Mininet. If you want to find out the name of a port, you can use the web GUI of the OpenDaylight controller (http://controllerhost:8080/) and inspect the port names of the switch.

Sometimes, it might also be handy to enumerate all connectors of a switch (node) — e.g., to flood a packet — using the following method:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);
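For illustration, a minimal sketch of flooding a decoded frame out of all active ports except the ingress port, using only the Data Packet Service and Switch Manager calls shown in this tutorial (ethFrame and ingressConnector are taken from the packet handler above):

// Flood the frame out of every up port except the one it arrived on.
for (NodeConnector connector : switchManager.getUpNodeConnectors(node)) {
    if (connector.equals(ingressConnector)) {
        continue; // do not send the frame back to its origin
    }
    RawPacket outPkt = dataPacketService.encodeDataPacket(ethFrame);
    outPkt.setOutgoingNodeConnector(connector);
    dataPacketService.transmitDataPacket(outPkt);
}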

Finally, we create the flow with match criteria and actions, and program the switch using the Flow Programmer service in line 82.

In the reverse direction from server to client, we also install a flow that re-writes the source IP address and MAC address of outgoing packets to the address information of the public service (line 90–112).

Forwarding Packets

However, we are not done yet. Although every new packet of the connection will now be forwarded to the right server instance, we also have to forward the received initial packet (the TCP SYN request) to the right server. To this end, we modify the destination address information of this packet as shown in lines 116–119. Then, we use the Data Packet Service to forward the packet using the method transmitDataPacket(...).

In this example, we simply re-used the received packet. However, sometimes you might want to create and send a new packet. To this end, you create the payloads of the packets on the different layers and encode them as a raw packet using the Data Packet Service:

TCP tcp = new TCP();
// ... set TCP header fields (ports, sequence numbers, etc.) here ...
IPv4 ipv4 = new IPv4();
ipv4.setProtocol((byte) 6); // TCP
ipv4.setPayload(tcp);       // the TCP segment becomes the payload of the IPv4 packet
Ethernet ethernet = new Ethernet();
ethernet.setPayload(ipv4);  // the IPv4 packet becomes the payload of the Ethernet frame
RawPacket destPkt = dataPacketService.encodeDataPacket(ethernet);
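To actually send such a packet, you set the outgoing node connector on the encoded raw packet and hand it to the Data Packet Service (a sketch; egressConnector stands for the connector determined above):

destPkt.setOutgoingNodeConnector(egressConnector);
dataPacketService.transmitDataPacket(destPkt);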


Following the instructions from my last tutorial, you can compile the OSGi bundle using Maven as follows:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

Then you start the OpenDaylight controller (here, I assume you use the release version located in directory ~/opendaylight):

user@host:$ cd ~/opendaylight
user@host:$ ./run.sh

Afterwards, to avoid conflicts with our service, you should first stop OpenDaylight’s simple forwarding service and OpenDaylight’s load balancing service (which has nothing to do with our load balancing service) from the OSGi console:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
osgi> stop 187

Both of these services implement packet handlers, and for now we want to make sure that they do not interfere with our handler.

Then, we can install our compiled OSGi bundle (located in /home/user/myctrlapp/target)

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

and start it:

osgi> start 256

You can also change the log level of our bundle to see log output down to the trace level:

osgi> setLogLevel de.frank_durr.myctrlapp.PacketHandler trace

Next, we create a simple Mininet topology with one switch and three hosts:

user@host:$ sudo mn --controller=remote,ip= --topo single,3 --mac --arp

Be sure to use the IP address of your OpenDaylight controller host. The option --mac assigns a MAC address according to the host number to each host (e.g., 00:00:00:00:00:01 for the first host). In our implementation, we use these addresses as hard-coded constants.
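For reference, here is a sketch of how these hard-coded constants could look in the PacketHandler class. The MAC values follow from the --mac option, the fake service MAC from the arp command below, and the connector names from my Mininet setup; the IP address constants are omitted here since they depend on your setup:

// Hard-coded addresses, connector names, and service port assumed by the handler above.
static private final byte[] SERVER1_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x01}; // h1 with --mac
static private final byte[] SERVER2_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x02}; // h2 with --mac
static private final byte[] SERVICE_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x64}; // fake MAC of the public service
static private final String SERVER1_CONNECTOR_NAME = "s1-eth1";
static private final String SERVER2_CONNECTOR_NAME = "s1-eth2";
static private final int SERVICE_PORT = 7777; // the netcat servers below listen on this port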

Option --arp pre-populates the ARP cache of the hosts. I use hosts 1 and 2 as the server hosts; host 3 runs the client. Therefore, I also set a static ARP table entry on host 3 mapping the public IP address of the service to the fake service MAC address:

mininet> xterm h3
mininet h3> arp -s 00:00:00:00:00:64

On host 1 and 2 we start two simple servers using netcat listening on port 7777:

mininet> xterm h1
mininet> xterm h2
mininet h1> nc -l 7777
mininet h2> nc -l 7777

Then, we send a message to our service from the client on host 3 using again netcat:

mininet h3> echo "Hello" | nc 7777

Now, you should see the output “Hello” in the xterm of host 1. If you execute the same command again, the output will appear in the xterm of host 2. This shows that requests (TCP connections) are correctly distributed among the two servers.

Where to go from here

Basically, you can now implement your service using reactive flow programming. However, some further services might be helpful. For instance, according to the paradigm of logically centralized control, it might be interesting to query the global topology of the network, the locations of hosts, etc. I plan to cover this in future tutorials.

Developing OSGi Components for OpenDaylight

In this tutorial, I will explain how to develop an OSGi component for OpenDaylight that implements custom network control logic. In contrast to the REST interface, which I have explained in one of my last posts, OSGi components can receive packet-in events, which are triggered when a packet without a matching flow table entry arrives at a switch. Therefore, in order to do reactive flow programming, OSGi components are the right way to go in OpenDaylight.

Even for experienced Java programmers, the learning curve for developing OSGi components for OpenDaylight is quite steep. OpenDaylight uses powerful development tools and techniques like Maven and OSGi. Moreover, the project structure is quite complex and the number of Java classes overwhelming at first. However, as you will see in this tutorial, the development process is quite straightforward and, thanks to Maven, very convenient.

In order to explain everything step by step, I will go through the development of a simple OSGi component. This component does nothing special. It basically displays a message when an IPv4 packet is received to show the destination address, data path id, and ingress port. However, you will learn many things that will help you in developing your own control components like:

  • How to set up an OpenDaylight Maven project?
  • How to install, uninstall, start, and stop an OSGi bundle in OpenDaylight at runtime?
  • How to manage the OSGi component dependencies and life-cycle?
  • How to receive packet-in events through data packet listeners?
  • How to decode packets using the OpenDaylight Data Packet Service?

I should note here that I will use the so-called API-driven Service Abstraction Layer (SAL) of OpenDaylight. OpenDaylight implements a second, alternative API called the Model-driven SAL. I might cover this API in a future post.

So let’s get started!

The Big Picture

The figure below shows the architecture of our system. It consists of a number of OSGi bundles that bundle together Java classes, resources, and a manifest file. One of these bundles called the MyControlApp bundle is the bundle we are developing in this tutorial. Other bundles are coming from the OpenDaylight project like the SAL (Service Abstraction Layer) bundle.

Bundles are executed atop the OSGi Framework (Equinox in OpenDaylight). The interesting thing about OSGi is that bundles can be installed and removed at runtime, so you do not have to stop the SDN controller to add or modify control logic.


As you can also see, OSGi bundles are offering services that can be called by other OSGi components. One interesting service that comes with OpenDaylight and that we will use during this tutorial is the Data Packet Service (interface IDataPacketService) to decode data packets.

Although our simple control component is not offering functionality to any other bundle, it is important to understand that in order to receive packet-in events, it has to offer a service implementing the IListenDataPacket interface. Whenever an OpenFlow packet-in event arrives at the controller, the SAL invokes the components that implement the IListenDataPacket interface, among them our bundle.
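Conceptually, this interface is small. A simplified sketch of the contract our component has to fulfill (the real interface is org.opendaylight.controller.sal.packet.IListenDataPacket):

public interface IListenDataPacket {
    // Called by the SAL for every packet-in event; the return value controls
    // whether other listeners still get to see the packet.
    PacketResult receiveDataPacket(RawPacket inPkt);
}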


Before we start developing our component, we should get a running copy of OpenDaylight. Recently, the first release version of OpenDaylight was published. You can get a copy from this URL.

Or you can get the latest version from the OpenDaylight GIT repository and compile it yourself:

user@host:$ git clone
user@host:$ cd ./controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Actually, in order to develop an OpenDaylight OSGi component, you do not need the OpenDaylight source code! As we will see below, we can just import the required components as JARs from the OpenDaylight repository.

During the compile process, you see that Maven downloads many Java packages on the fly. If you have never used Maven before, this can be quite confusing. Haven’t we just downloaded the complete project with git? Actually, Maven can automatically download project dependencies (libraries, plugins) from a remote repository and place them into your local repository so they are available during the build process. Your local repository usually resides in ~/.m2. If you look into this repository after you have compiled OpenDaylight, you will see all the libraries that Maven downloaded:

user@host:$ ls ~/.m2/repository/
antlr                     classworlds          commons-fileupload  dom4j          jline  regexp
aopalliance               com                  commons-httpclient  eclipselink    junit  stax
asm                       commons-beanutils    commons-io          equinoxSDK381  log4j  virgomirror
backport-util-concurrent  commons-cli          commons-lang        geminiweb      net    xerces
biz                       commons-codec        commons-logging     io             orbit  xml-apis
bsh                       commons-collections  commons-net         javax          org    xmlunit
ch                        commons-digester     commons-validator   jfree          oro

For instance, you see that Maven has downloaded the Apache Xerces XML parser. We will come back to this nice feature later when we discuss our project dependencies.

I will refer to the root directory of the controller as ~/controller in the following.

Creating the Maven Project

Now we start developing our OSGi component. Since OpenDaylight is based on Maven, it is a good idea to also use Maven for our own project. So we start by creating a Maven project for our OSGi component. First, create the following project structure. I will refer to the root directory of our component as ~/myctrlapp:


Obviously, Java implementations go into the folder src/main/java. I used the package de.frank_durr.myctrlapp for the implementation of my control component.

Essential to the Maven build process is a so-called Project Object Model (POM) file called pom.xml that you have to create in the folder ~/myctrlapp with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">





        <!-- OpenDaylight releases -->
        <!-- OpenDaylight snapshots -->

First, we define our group id (unique id of our organization) and artifact id (name of our component/project) as well as a version number. The packaging element specifies that an OSGi bundle (JAR file with classes, resources, and manifest file) should be built.

During the Maven build process, plugins are invoked. One very important plugin here is the Bundle plugin from the Apache Felix project that creates our OSGi bundle. The import element specifies every package that should be imported by the bundle. The wildcard * imports “everything referred to by the bundle content, but not contained in the bundle” [Apache Felix], which is reasonable and much less cumbersome than specifying the imports explicitly. Moreover, we export every implementation from our package.

The bundle activator is called during the life-cycle of our bundle when it is started or stopped. Below I show how it is used to register for services used by our component and how to export the interface of our component.

The dependency element specifies other packages on which our component depends. Remember when I said that Maven will download required libraries (JARs) automatically to your local repository in ~/.m2? Of course, it can only do that if you tell Maven what you need. We basically need the API-driven Service Abstraction Layer (SAL) of OpenDaylight. The OpenDaylight project provides its own repository with the precompiled components (see the repositories element). Thus, Maven will download the JARs from this remote repository. No need to import all the source code of OpenDaylight into Eclipse! In my example, I use the release version 0.7.0. You can also use a snapshot by changing the version to 0.7.0-SNAPSHOT (or whatever version is available in the snapshot repository; just browse the repository URL given above to find out). If you need further packages, have a look at the central Maven repository.

From this POM file, you can now create an Eclipse project by executing:

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Remember to re-create the Eclipse project using this command, when you make changes to the POM.

Afterwards, you can import the project into Eclipse:

  • Menu Import / General / Existing projects into workspace
  • Select root folder ~/myctrlapp

Implementation of OSGi Component: The Activator

In order to implement our OSGi component, we only need two class files: an OSGi activator registering our component with the OSGi framework and a packet handler implementing the control logic and executing actions whenever a packet-in event is received.

First, we implement the activator by creating the file Activator.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp:

package de.frank_durr.myctrlapp;

import java.util.Dictionary;
import java.util.Hashtable;

import org.apache.felix.dm.Component;
import org.opendaylight.controller.sal.core.ComponentActivatorAbstractBase;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Activator extends ComponentActivatorAbstractBase {

    private static final Logger log = LoggerFactory.getLogger(PacketHandler.class);

    public Object[] getImplementations() {
        log.trace("Getting Implementations");

        Object[] res = { PacketHandler.class };
        return res;
    }

    public void configureInstance(Component c, Object imp, String containerName) {
        log.trace("Configuring instance");

        if (imp.equals(PacketHandler.class)) {

            // Define exported and used services for PacketHandler component.

            Dictionary<String, Object> props = new Hashtable<String, Object>();
            props.put("salListenerName", "mypackethandler");

            // Export IListenDataPacket interface to receive packet-in events.
            c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

            // Need the DataPacketService for encoding, decoding, sending data packets
            c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks("setDataPacketService", "unsetDataPacketService").setRequired(true));
        }
    }
}

We extend the base class ComponentActivatorAbstractBase from the OpenDaylight controller. Developers already familiar with OSGi know that there are two methods start() and stop() that are called by the OSGi framework when the bundle is started or stopped, respectively. These two methods are overridden in the class ComponentActivatorAbstractBase to manage the life-cycle of an OpenDaylight component. From there, the two methods getImplementations() and configureInstance() are called.

The method getImplementations() returns the classes implementing components of this bundle. A bundle can implement more than one component, for instance, a packet handler for ARP requests and one for IP packets. However, our bundle just implements one component: the one reacting to packet-in events, which is implemented by our PacketHandler class (the second class described below). So we just return one implementation.
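For illustration, a bundle with two components would simply return both implementation classes (a sketch; the class names are hypothetical):

public Object[] getImplementations() {
    // Two components in one bundle: one handler for ARP, one for IPv4 (hypothetical classes).
    Object[] res = { ArpPacketHandler.class, Ipv4PacketHandler.class };
    return res;
}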

Method configureInstance() configures the component and, in particular, declares exported service interfaces and the services used. Since an OSGi bundle can implement more than one component, it is good style to check which component should be configured (line 26).

Then we declare the services exported by our component. Recall that in order to receive packet-in events, the component has to implement the service interface IListenDataPacket. Therefore, by specifying that our class PacketHandler implements this interface in line 34, we implicitly register our component as a listener for packet-in events. Moreover, we have to give our listener a name (line 31) using the property salListenerName. If you want to understand in detail what is happening during registration, I recommend having a look at the method setListenDataPacket() of class org.opendaylight.controller.sal.implementation.internal.DataPacketService. There you will see that, so far, packet handlers are called sequentially. There might be many components that have registered for packet-in events, and you cannot force OpenDaylight to call your listener first before another one gets the event. So the order in which listeners are called is basically unspecified. However, you can create dependency lists using the property “salListenerDependency”. Moreover, using the property “salListenerFilter” you can set an org.opendaylight.controller.sal.match.Match object for the listener to filter packets according to header fields. Otherwise, you will receive all packets (if no other listener consumes them before our handler is called; see below).
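For illustration, a sketch of how such a filter could be set in configureInstance() (Match and MatchType come from the package org.opendaylight.controller.sal.match; whether you want filtering depends on your application):

// Register the packet listener with a filter match: only IPv4 packets are delivered.
Dictionary<String, Object> props = new Hashtable<String, Object>();
props.put("salListenerName", "mypackethandler");
Match ipv4Only = new Match();
ipv4Only.setField(MatchType.DL_TYPE, (short) 0x0800); // IPv4 ethertype
props.put("salListenerFilter", ipv4Only);
c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);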

Besides exporting our packet listener implementation, we also use other services. These dependencies are declared in line 37. In our example, we only use one service implementing the IDataPacketService interface. You might say now, “fine, but how do I get the object implementing this service to call it?”. To this end, you define two callback functions as part of your component class (PacketHandler), here called setDataPacketService() and unsetDataPacketService(). These callback functions are called with a reference to the service (see implementation of PacketHandler below).

Implementation of OSGi Component: The Packet Handler

The second part of our implementation is the packet handler, which receives packet-in events (the class that you have configured through the activator above). To this end, we implement the class PacketHandler by creating the file PacketHandler.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp:

package de.frank_durr.myctrlapp;

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.opendaylight.controller.sal.core.Node;
import org.opendaylight.controller.sal.core.NodeConnector;
import org.opendaylight.controller.sal.packet.Ethernet;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.opendaylight.controller.sal.packet.IPv4;
import org.opendaylight.controller.sal.packet.Packet;
import org.opendaylight.controller.sal.packet.PacketResult;
import org.opendaylight.controller.sal.packet.RawPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PacketHandler implements IListenDataPacket {

    private static final Logger log = LoggerFactory.getLogger(PacketHandler.class);
    private IDataPacketService dataPacketService;

    static private InetAddress intToInetAddress(int i) {
        byte b[] = new byte[] { (byte) ((i>>24)&0xff), (byte) ((i>>16)&0xff), (byte) ((i>>8)&0xff), (byte) (i&0xff) };
        InetAddress addr;
        try {
            addr = InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            return null;
        }

        return addr;
    }

    /**
     * Sets a reference to the requested DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
     */
    void setDataPacketService(IDataPacketService s) {
        log.trace("Set DataPacketService.");

        dataPacketService = s;
    }

    /**
     * Unsets DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
     */
    void unsetDataPacketService(IDataPacketService s) {
        log.trace("Removed DataPacketService.");

        if (dataPacketService == s) {
            dataPacketService = null;
        }
    }

    public PacketResult receiveDataPacket(RawPacket inPkt) {
        log.trace("Received data packet.");

        // The connector the packet came from ("port")
        NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
        // The node that received the packet ("switch")
        Node node = ingressConnector.getNode();

        // Use DataPacketService to decode the packet.
        Packet l2pkt = dataPacketService.decodeDataPacket(inPkt);

        if (l2pkt instanceof Ethernet) {
            Object l3Pkt = l2pkt.getPayload();
            if (l3Pkt instanceof IPv4) {
                IPv4 ipv4Pkt = (IPv4) l3Pkt;
                int dstAddr = ipv4Pkt.getDestinationAddress();
                InetAddress addr = intToInetAddress(dstAddr);
                System.out.println("Pkt. to " + addr.toString() + " received by node " + node.getNodeIDString() + " on connector " + ingressConnector.getNodeConnectorIDString());
                return PacketResult.KEEP_PROCESSING;
            }
        }

        // We did not process the packet -> let someone else do the job.
        return PacketResult.IGNORED;
    }
}

As you can see, our handler implements the listener interface IListenDataPacket. This interface declares the function receiveDataPacket(), which is called with the raw packet after a packet-in event from OpenFlow.

In order to parse the raw packet, we use the OpenDaylight Data Packet Service (object dataPacketService). As described for the activator, during the component configuration, we set two callback functions in our packet handler implementation, namely, setDataPacketService() and unsetDataPacketService(). Method setDataPacketService() is called with a reference to the data packet service, which is then used for parsing raw packets. After receiving a raw packet “inPkt”, we call dataPacketService.decodeDataPacket(inPkt) to get a layer 2 frame. Using instanceof, we can check for the class of the returned packet. If it is an Ethernet frame, we go on and get the payload from this frame, which is the layer 3 packet. Again, we check the type, and if it is an IPv4 packet, we dump the destination address.

Moreover, the example shows how to determine the node (i.e., switch) that received the packet and connector (i.e., port) on which the packet was received (lines 72 and 75).

Finally, we decide whether the packet should be further processed by another handler, or whether we want to consume the packet by returning a corresponding return value. PacketResult.KEEP_PROCESSING says, our handler has processed the packet, but others should also be allowed to do so. PacketResult.CONSUME means, no other handler after us receives the packet anymore (as described above, handlers are sorted in a list and called sequentially). PacketResult.IGNORED says, packet processing should go on since we did not handle the packet.

Deploying the OSGi Bundle

Now that we have implemented our component, we can first compile and bundle it using Maven:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

If our POM file and code are correct, this should create the bundle (JAR file) ~/myctrlapp/target/myctrlapp-0.1.jar.

This bundle can now be installed in the OSGi framework Equinox of OpenDaylight. First, start the controller:

user@host:$ cd ~/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/
user@host:$ ./run.sh

In the OSGi console install the bundle by specifying its URL:

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

We see that the id of our bundle is 256. Using this id, we can start the bundle next:

osgi> start 256

You can check whether it is running by listing all OSGi bundles using the command ss:

osgi> ss
251 ACTIVE org.opendaylight.controller.hosttracker.implementation_0.5.1.SNAPSHOT
252 ACTIVE org.opendaylight.controller.sal-remoterpc-connector_1.0.0.SNAPSHOT
253 ACTIVE org.opendaylight.controller.config-persister-api_0.2.3.SNAPSHOT
256 ACTIVE de.frank_durr.myctrlapp_0.1.0

Similarly, you can stop and uninstall the bundle using the commands stop and uninstall, respectively:

osgi> stop 256
osgi> uninstall 256

Before we test our bundle, we stop two OpenDaylight services, namely, the Simple Forwarding Service and Load Balancing Service:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
osgi> stop 187

Why did we do that? Because these are the two services that also implement a packet listener. For testing, we want to make sure they are not getting in our way and consuming packets before we can get them.


For testing, we use a simple linear Mininet topology with two switches and two hosts connected at the ends of the line:

user@host:$ sudo mn --controller=remote,ip= --topo linear,2

The given IP is the IP of our controller host.

Now let’s ping host 2 from host 1 and see the output in the OSGi console:

mininet> h1 ping h2

Pkt. to / received by node 00:00:00:00:00:00:00:01 on connector 1
Pkt. to / received by node 00:00:00:00:00:00:00:02 on connector 1

You see that our handler received a packet from both switches with the data path ids 00:00:00:00:00:00:00:01 and 00:00:00:00:00:00:00:02, as well as the ports (1) on which the packets were received and their destination IP addresses. So it worked.

Where to go from here?

What I did not show in this tutorial is how to send a packet. If you join me again, you can see that in one of my next tutorials here on this blog.

Securing OpenDaylight’s REST Interfaces

OpenDaylight comes with a set of REST interfaces. For instance, in one of my previous posts, I have introduced OpenDaylight’s REST interface for programming flows. With these interfaces, you can easily outsource your control logic to a remote server other than the server on which the OpenDaylight controller is running. Basically, the controller offers a web service, and the control application invokes this service sending REST requests via HTTP.

Although the concept to offer network services as web services is very nice and lowers the barriers to “program” the network significantly, it also brings up security problems well known from web services. If you do not authenticate clients, any client that can send HTTP requests to your controller can control your network — certainly something you want to avoid!

Therefore, in this post, I will show how to secure OpenDaylight’s REST interfaces.

Authentication in OpenDaylight

The REST interfaces are so-called northbound interfaces between controller and control application. So you can think of the controller as the service and the control application as the client as shown in the figure below.


In order to ensure that the controller only accepts requests from authorized clients, clients have to authenticate themselves. OpenDaylight uses HTTP Basic authentication, which is based on user names and passwords (default: admin, admin). Sounds good: So only a client with the valid password can invoke the service … or is there a problem? In order to see the security threats, we have to take a closer look at the HTTP Basic authentication mechanism.

The following command invokes the Flow Programmer service of OpenDaylight via cURL and prints the HTTP header information of the request:

user@host:$ curl -u admin:admin -H 'Accept: application/xml' -v 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/'
* About to connect() to localhost port 8080 (#0)
* Trying connected
* Server auth using Basic with user 'admin'
> GET /controller/nb/v2/flowprogrammer/default/ HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/ libidn/1.23 librtmp/2.3
> Host: localhost:8080
> Accept: application/xml
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 01:00:00 CET
< Set-Cookie: JSESSIONIDSSO=9426E0F12A0A0C80BE549451707EF339; Path=/
< Set-Cookie: JSESSIONID=DB23D1EE61348E101E6CE8117A04B8D8; Path=/
< Content-Type: application/xml
< Content-Length: 62
< Date: Sun, 12 Jan 2014 16:50:38 GMT
* Connection #0 to host localhost left intact
* Closing connection #0
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list/>

The interesting header field is “Authorization” with the value “Basic YWRtaW46YWRtaW4=”. Here, “YWRtaW46YWRtaW4=” is the user name and password sent from the client to the controller. Although this value looks quite cryptic, it is actually plain text. It is simply the Base64 encoding of the user name and password string “admin:admin”. Base64 is a simple translation of 8 bit characters to 6 bit characters involving no encryption or hashing at all! Basically, it comes from the time when SMTP was restricted to sending 7 bit ASCII characters. Everything else, like binary (8 bit) content, had to be translated to 7 bit characters first, and exactly that is the job of Base64 encoding. You can use paper and pencil to decode it: just interpret the bit pattern of three 8 bit characters as four 6 bit characters and look up the values of the 6 bit characters in the Base64 table. Or, if you are lazy, just use one of the many Base64 decoders on the web.
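You can also verify this with a few lines of Java (a minimal sketch, assuming Java 8's java.util.Base64; any Base64 decoder will do):

import java.util.Base64;

public class DecodeAuthHeader {
    public static void main(String[] args) {
        // Value copied from the sniffed Authorization header above.
        String value = "YWRtaW46YWRtaW4=";
        byte[] decoded = Base64.getDecoder().decode(value);
        System.out.println(new String(decoded)); // prints "admin:admin"
    }
}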

Now the problem should become obvious. If your network between client and controller is non-trusted and an attacker can eavesdrop on the communication channel, he can read your user name and password.

Securing the REST Interface

Now that we see the problem, also the solution should become obvious. We need a secure channel between client and controller, so an attacker cannot read the header fields of the HTTP request. The HTTPS standard provides exactly that. Moreover, the client can make sure that it really connects to the right controller, and not the controller of an attacker who just wants to intercept our password. So we use HTTPS to encrypt the channel between client and controller and to authenticate the controller, and HTTP Basic authentication to authenticate the client.

So the trick is enabling HTTPS in OpenDaylight, which is turned off by default. Note that above we used the insecure HTTP protocol on port 8080. Now we want to use HTTPS on port 8443 (or 443 if you want to use the official HTTPS port instead of the alternative port).

OpenDaylight uses the Tomcat servlet container to provide its web services. Therefore, the steps to enable HTTPS are very similar to configuring Tomcat.

First, we need a server certificate that the client can use to authenticate the server. Of course, a certificate signed by a trusted certification authority would be best. However, here I will show how to create your own self-signed certificate using the Java keytool:

user@host:$ keytool -genkeypair -v -alias tomcat -storepass changeit -validity 1825 -keysize 1024 -keyalg DSA
What is your first and last name?
What is the name of your organizational unit?
[Unknown]: Institute of Parallel and Distributed Systems
What is the name of your organization?
[Unknown]: University of Stuttgart
What is the name of your City or Locality?
[Unknown]: Stuttgart
What is the name of your State or Province?
[Unknown]: Baden-Wuerttemberg
What is the two-letter country code for this unit?
[Unknown]: de
Is, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de correct?
[no]: yes

Generating 1,024 bit DSA key pair and self-signed certificate (SHA1withDSA) with a validity of 1,825 days
for:, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de
Enter key password for <tomcat>
(RETURN if same as keystore password):
Re-enter new password:
[Storing /home/duerrfk/.keystore]

This creates a certificate valid for five years (1825 days) and stores it in the keystore .keystore in my home directory /home/duerrfk. As first and last name, we use the DNS name of the machine the controller is running on. The rest should be pretty obvious.

With this information, we can now configure the OpenDaylight controller. First, check out and compile OpenDaylight, if you haven’t done so already:

user@host:$ git clone
user@host:$ cd controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Now edit the following file, where “controller” is the root directory of the controller you checked out:

user@host:$ emacs controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/configuration/tomcat-server.xml

Uncomment the following XML element:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"

Use the keystore location and password that you used before with the keytool command.

Now you can start OpenDaylight and connect via HTTPS to the controller on port 8443. Use your web browser to try it.

Making Secure Calls

One last step needs to be done before we can call the controller securely from a client. If you did not use a certificate signed by a well-known certification authority (as with our self-signed certificate above), you need to provide the client with the server certificate it should use for authenticating the controller. If you are using cURL, the required option is “--cacert”:

user@host:$ curl -u admin:admin --cacert ~/cert-duerr-mininet.pem -v -H 'Accept: application/xml' ''

So the last question is, how do we get the server certificate in PEM format? To this end, we can use openssl to call our server and store the returned certificate:

user@host:$ openssl s_client -connect

The PEM certificate is everything between “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” (including these two lines), so we can just copy this to a file. Note that you have to make sure that the call is actually going to the right server (not the server of an attacker). So better call it from the machine where your controller is running to avoid a “chicken and egg” problem.
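If you want to automate the extraction, something like the following should work (a sketch; replace controllerhost with the DNS name you put into the certificate):

user@host:$ openssl s_client -connect controllerhost:8443 < /dev/null | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > cert.pem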


Now we can securely outsource our control application to a remote host, for instance, a host in our campus network or a cloud server running in a remote data center.