Fiery IoT: A Tutorial on Implementing IoT Cloud Services with Google Firebase — Part 1


In this tutorial, we will show how to implement an IoT service with Google Firebase. The IoT service consists of the IoT home gateways of users, which connect their sensors, and mobile apps as clients. In particular, we will show how to use the Firebase realtime database for storing sensor events and notifying clients in realtime about events.

In the first part, we will show how to connect the IoT home gateways of users to the realtime database, and how to use Google accounts as the authentication method for the IoT home gateways of individual users to implement a multi-tenancy service with Firebase. For implementing IoT gateways, we will use Node.js plus an Apache web-server.

In part two, we connect mobile apps receiving realtime notifications of sensor events.

Although we take an IoT service as motivating example, many parts of the tutorial are generic and also applicable to mobile services in general. Thus, this tutorial could also be useful for other Firebase users beyond the IoT.

All code of the tutorial can be found at GitHub.

Motivation: The Detect-Store-Notify Pattern

Google Firebase is a cloud-based application platform that was originally developed for supporting mobile apps. In particular, Firebase includes a so-called realtime database that besides just storing data can also sync mobile clients in “realtime”. To this end, apps can register for data change events, and whenever data is updated, the app gets a notification and a snapshot of the changed data items. In this tutorial, we will show that such a realtime database not only facilitates the development of mobile apps in general, but also IoT applications and services in particular.

We observe that many IoT applications follow a pattern that can be best described as detect-store-notify. First, an event is detected. In IoT scenarios, such events are often triggered by changes in the physical world detected by sensors. Two very simple examples are the ringing of a door bell (door bell event) or a temperature sensor detecting a sensor value higher or lower than a user-defined threshold (temperature event). Since events are typically defined to detect some meaningful change in the physical world, like “a person standing at the front door” (door bell event) or “temperature in living room too low” (temperature event), they serve as input to some control application, which automatically triggers actions, or to an app implementing a user interface to notify the user. For instance, a control application could automatically turn up the heating after a “temperature low” event, or a notification could be presented on a mobile phone after a door bell event. In any case, we need to forward an event notification to some application.

At the same time, we often want to keep a history of past events, i.e., we want to store sensor events persistently in a database. Then later the user can get an overview of what happened, or we can analyze event histories. In general, in the age of “big data”, there seems to be a trend not to discard data, which later could prove to be useful, but to keep it for later processing, data mining, or machine learning.

A realtime database as included with Firebase covers both the storage of data and the sending of notifications. Whenever a sensor event is written to the database, a value update is sent out to subscribers. The lean-and-mean tree-based data scheme of Firebase, which is similar to JSON objects, makes it easy to structure your data to allow for targeted subscriptions. For instance, one could add a sub-tree of sensor ids to let applications register for updates of individual sensors. At the same time, we could add events to a sub-tree of sensor types like temperature, door bell, etc. to subscribe to events by the type of sensor:

|--[sensor ids]
|  |
|  |--[sensor id 1]
|  |  |
|  |  |--[sensor event 1]
|  |  |  |
|  |  |  |--[sensor data]
|  ...
|
|--[sensor types]
|  |
|  |--temperature
|  |  |
|  |  |--[sensor id 1]
|  |  |  |
|  |  |  |--[sensor event 1]
|  |  |  |  |
|  |  |  |  |--[sensor data]
|  |  ...
|  |
|  |--doorbell
|  |  ...

Note that redundant data of the same sensor event is stored several times in the database, once in the sensor id sub-tree and once in the sensor type sub-tree. This denormalized storage of redundant data might seem counter-intuitive for users of relational databases. However, it is required for efficiently retrieving data and subscribing to events in Firebase by selecting a tree node subsuming all of its child nodes. Notifications are then sent out for all changes in a sub-tree rooted at the node of interest.
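If events are written redundantly to both sub-trees, the two copies should be updated together. The following is a minimal sketch of building such a multi-path (“fan-out”) update object in Node.js; the top-level path names `sensors` and `sensortypes` are our assumptions for illustration, not names mandated by Firebase:

```javascript
// Build a multi-path ("fan-out") update object that writes the same sensor
// event under both the sensor-id sub-tree and the sensor-type sub-tree.
// The top-level path names 'sensors' and 'sensortypes' are assumptions.
function buildFanOutUpdate(sensorId, sensorType, eventId, eventData) {
    var update = {};
    update['sensors/' + sensorId + '/' + eventId] = eventData;
    update['sensortypes/' + sensorType + '/' + sensorId + '/' + eventId] = eventData;
    return update;
}

// With the Firebase SDK, the object can be applied atomically, e.g.:
//   firebase.database().ref().update(buildFanOutUpdate(...));
var update = buildFanOutUpdate('sensor1', 'temperature', 'event1',
                               { value: 21.5, time: '2017-01-01T12:00:00' });
console.log(Object.keys(update)); // the two redundant paths of one event
```

Passing such an object to Firebase's multi-path `update()` writes both locations in one atomic operation, so the two copies cannot get out of sync.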

Of course, you could use dedicated services for each individual function (storing and notifications), for instance, an MQTT messaging service plus a database like MongoDB. However, setting up and connecting these services and apps is more complex than just using a realtime database offered as a single integrated service. As we will see during this tutorial, the Firebase realtime database offered as a software service by Google makes it very easy to connect an IoT gateway and apps to the realtime database and implement functions for reading, writing, and eventing with just a few lines of code. In addition, Firebase handles user authentication, multi-tenancy, and scalability in an elegant and easy way.

Goal of this Tutorial

In this tutorial, we show step-by-step how to implement a simple IoT service with Google Firebase. The architecture of this IoT service consists of three types of components:

  • Sensors detecting events and sending them to the private IoT home gateway of the owner of the sensors.
  • IoT home gateways installed in the home network of each user (sensor owner). These gateways collect sensor events, and possibly pre-filter and enrich them, for instance by comparing the sensor value to a threshold and adding a timestamp. Then events are forwarded by the IoT home gateway to the Firebase database running in the Google cloud.
  • The Firebase realtime database storing event histories and sending event notifications to the mobile app of the user or other services interested in sensor events.
sensor 1 --|
           |                 sensor events
sensor 2 ----> IoT Gateway ----------------|        Mobile App
           |   of user 1                   |        of user 1
sensor n --|                               |             ^
                                           v             |  notification
                                         Google ---------| 
                                   Realtime Database ----|
                                           ^             |  notification
                                           |             v
sensor 1 --|                               |        Mobile App 
           |                 sensor events |        of user 2
sensor 2 ----> IoT Gateway ----------------|
           |   of user 2
sensor n --|

The IoT service should serve a larger population of users, each one with his own sensors and IoT home gateway, with only one Firebase database (multi-tenancy). In other words, in this tutorial, we take the role of an IoT service provider offering an IoT smart-home service for many customers. Setting up a dedicated Firebase database for each customer would be too costly and would not scale with the number of customers in terms of complexity. Instead, we have one database for all customers/users. Each user should only have access to his own data and events. Consequently, the Firebase database must implement access control to protect datasets of different users.

We will not pay much attention to the sensor part and connecting sensors to the IoT home gateway, and rather focus on the components connected to the Firebase database, i.e., the IoT home gateway and apps. If you are interested in how to connect sensors to the IoT home gateway, you could have a look at our Door Bell 2.0 project, which connects a simple door bell sensor via Bluetooth Low Energy (BLE) to a Node.js IoT gateway. It should be rather straightforward to merge the Node.js code from the Door Bell 2.0 project and the code of the IoT gateway presented in this tutorial.


We first look at how to integrate IoT gateways implemented in Node.js with Firebase, before we consider the synchronization of mobile apps implemented in Android with Firebase.

Setting up Firebase

We want all sensor events to be stored in the Firebase database. Obviously, to this end, we first need to set up a database in Firebase:

  1. Log in to the Firebase console.
  2. Create a new project. We give this project the name “FieryIoT”. Firebase automatically assigns a project id (say “fieryiot-12345″) and Web API key (say “abcdefghijklmnopqrstuvwxyz1234567890″), as you can verify by showing the project settings (wheel symbol).
Firebase Settings


We will later use Google user accounts to sign in to Firebase to protect and isolate the data of different users. To this end, we must enable the corresponding authentication method. In the Firebase console, go to “Authentication” and enable the “Google” authentication method. Here, you can also show your web-client id, which is used later during authentication. The web-client id should look like this:

Next, we need to define a schema and access rules for our database. Firebase uses a tree-based structure for the database similar to JSON objects. Our database has the following tree structure:

sensorevents
   |--[user1 id]
   |  |
   |  |--[event id]
   |     |
   |     |--[event data 1]
   |     |
   |     |--... 
   |--[user2 id]
   |  |
   |  |--[event id]
   |     |
   |     |--[event data 1]
   |     | 
   |     |--...

Compared to the example in the motivation, where we used different sub-trees for different sensors and sensor types, this structure is kept simpler here for the sake of presentation. However, extensions should be straightforward.

Note that every user has his own branch, defined by his user id, under the node sensorevents to store his own IoT events (e.g., a door bell event, door unlock events, etc.). We do not actually need to define this structure in the Firebase console, unlike in an SQL database, where you would need to define the table structure before storing data. Firebase is schemaless. However, we could define validation rules to ensure the consistency of stored data, which we will not do here to keep things simple.
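Although we skip validation in this tutorial, for illustration such a validation rule could look like the following sketch, which requires every stored event to carry the fields used later in this tutorial (the field names value and time are our assumption):

```json
{
    "rules": {
        "sensorevents": {
            "$user_id": {
                "$event_id": {
                    ".validate": "newData.hasChildren(['value', 'time'])"
                }
            }
        }
    }
}
```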

However, we must define security rules to protect data from unauthorized access (reading and writing) by other users than the owner of the sensors. This will also prevent users from adding branches/data in sub-trees of other users, i.e., anywhere outside their own branch.

Set up the security rules by opening “Database / Rules” in the Firebase console and adding the following:

{
    "rules": {
        "sensorevents" : {
            "$user_id": {
                ".read": "$user_id === auth.uid",
                ".write": "$user_id === auth.uid"
            }
        }
    }
}
These rules allow each user to read and write only his own branch /sensorevents/[user id] as defined by his user id. $user_id is a placeholder for the user id, and auth.uid is a placeholder for the id of an authenticated user. Access rights will be inherited down the hierarchy, i.e., each user has read and write access to all data below the node /sensorevents/[user id]. Users must authenticate as shown below such that Firebase can check these rules online.
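The effect of these rules can be summarized by a tiny predicate (a toy model for illustration only; Firebase evaluates the rules server-side):

```javascript
// Toy model of the security rules above: an authenticated user may access
// /sensorevents/<uid>/... only if <uid> equals his own auth uid.
function accessAllowed(path, authUid) {
    var match = path.match(/^\/sensorevents\/([^\/]+)/);
    return match !== null && match[1] === authUid;
}

console.log(accessAllowed('/sensorevents/alice/event1', 'alice')); // true
console.log(accessAllowed('/sensorevents/bob/event1', 'alice'));   // false
```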

Security Rules


Firebase also includes a simulator to check security rules from the Firebase console before making them effective by publishing them. Try out the rules defined above by reading and writing from and to different branches with authenticated and non-authenticated users! Then publish the rules in the console before going to the next step.

Authentication of IoT Home Gateways to Firebase

An IoT home gateway is implemented by a Node.js process. In order to let the IoT home gateway write sensor events to the Firebase database, it needs to authenticate itself to Firebase. Firebase supports different authentication methods. Here, we use the Google account of the user owning the IoT home gateway for authentication. The advantage of this method is that you can serve different users with the same Firebase database (multi-tenancy). Every tenant (IoT home gateway owner with a Google account) has access only to his own branch in the database, i.e., every tenant can only access his own dataset.

To implement the Google account authentication method, the user needs to pass on authentication information to the IoT home gateway (Node.js server). We implement this through a web-frontend of the IoT gateway. To this end, the machine running the IoT home gateway actually hosts two servers:

  1. Web-server (Apache server)
  2. IoT home gateway (Node.js server)

The user signs in to his Google account through his web-browser by clicking a button on a web-page downloaded from the web-server. The web-browser interacts with a Google server to sign in to the Google account (JavaScript code executed in the web-browser). After signing in, a credential is transferred to the web-browser. This credential is then passed on to the IoT gateway via the web-server, i.e., the web-server (Apache) acts as a reverse proxy between the external client (browser) and the internal server (IoT home gateway = Node.js server). We pass on the credential through HTTP POST requests from the web-browser to the web-server, and from the web-server to the IoT gateway. For the second step (proxy to IoT gateway), the IoT gateway also implements an HTTP server in Node.js. The credential is then used by the IoT gateway to authenticate to Firebase. Note that all communication from the browser goes through the Apache web-server, so we do not have to deal with any cross-site security problems, since to the web-browser this looks like a single server.

In detail, we need to go through the following steps:

  1. Set up a web-page on the web-server for signing-in to Google.
  2. Configure Apache to act as a reverse proxy for passing on the credential to the IoT home gateway.
  3. Implement the HTTP server on the IoT gateway in Node.js to receive the credential from the web-server and authenticate to Firebase.

Web-page for Signing-in to Google

Create a sign-in web-page on the web-server. The following bare-minimum web-page just shows the required sign-in button:

Web-page


<!DOCTYPE html>
<html>
<head>
<meta name="google-signin-client_id" content="">
<meta name="google-signin-cookiepolicy" content="single_host_origin">
<meta name="google-signin-scope" content="profile email">

<script src="" async defer></script>

<script>
function onSignIn(googleUser) {
    var id_token = googleUser.getAuthResponse().id_token;
    // Send credential to IoT gateway via proxy through HTTP POST request.
    var httpReq = new XMLHttpRequest();
    httpReq.onloadend = function () {
        // Optionally react to the response here.
    };
    var url = "/auth/credential";
    httpReq.open("POST", url, true);
    httpReq.setRequestHeader('Content-Type', 'text/plain; charset=UTF-8');
    httpReq.send(id_token);
}
</script>
</head>

<body>
<div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div>
</body>
</html>

The client id (the content attribute of the google-signin-client_id meta element) must be replaced by the OAuth client id (web client ID) of the Firebase project created above. You can also visit the following page to find out the ids of all of your Google projects:

The JavaScript code for signing in comes from Google (see the script element in the page header). We just need to add a callback function onSignIn() that is called when the user has signed in. This function sends the credential in an HTTP POST request (XMLHttpRequest()) to the web-server. Note the resource /auth/credential used for the POST request. The Apache web-server is configured to forward all requests for resources /auth/* to the IoT gateway (reverse proxy configuration), as shown next.

Configuring Apache for Passing-on Credentials to the IoT Gateway

Enable the required modules of the Apache web-server for reverse proxying:

$ sudo a2enmod proxy proxy_http

Configure the Apache web-server to forward HTTP requests for the URL http://myiotgateway/auth/* to the IoT gateway (Node.js server) listening on port 8080 of the same host that also runs the web-server (localhost from the point of view of the web-server). Note that it makes sense to use HTTPS rather than HTTP between browser and web-server, since then the credential will be transferred over an encrypted channel (this is less critical for messages between web-server (proxy) and IoT gateway, since they run on the same host and no messages can be observed on the network). You can find many instructions on how to set up Apache with SSL on the web. For the sake of simplicity, we will continue with plain HTTP here. It might also be a good idea not to expose the web-frontend to the Internet, for instance by setting firewall and/or Apache rules on the IoT gateway host that only allow requests from the local area network of the IoT home gateway, to minimize the attack surface.

Add the following block to your Apache configuration:

# No need to enable this for *reverse* proxies.
ProxyRequests off

<Proxy *>
    Require all denied
    Require ip 192.168.1
    Require local
</Proxy>

ProxyPass "/auth" "http://localhost:8080/"

The Proxy element allows only requests from the local network (IP addresses with prefix 192.168.1) or from the local host. ProxyPass forwards all requests for the partial URL http://myiotgateway/auth/ to the IoT gateway on localhost, port 8080.
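To illustrate the effect of this configuration, the following is a simplified model of the ProxyPass prefix rewriting (our illustration, not Apache's actual implementation):

```javascript
// Simplified model of how ProxyPass "/auth" "http://localhost:8080/" maps
// request paths: the matched prefix is replaced by the target URL.
function proxyPassTarget(requestPath, prefix, target) {
    if (!requestPath.startsWith(prefix)) {
        return null; // not proxied, served by Apache itself
    }
    var rest = requestPath.slice(prefix.length); // e.g. "/credential"
    return target.replace(/\/$/, '') + rest;
}

console.log(proxyPassTarget('/auth/credential', '/auth', 'http://localhost:8080/'));
// → http://localhost:8080/credential
```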

IoT Gateway Implementation (Node.js)

The IoT gateway is implemented in Node.js as shown below.

Before you can use this code, you must install the Firebase Node.js code provided by Google with the following command executed from folder iot-gateway (the folder containing the Node.js implementation of the IoT gateway):

$ npm install firebase

You should then see a folder node_modules with the Firebase library code.

In the code, you need to define your Firebase project and database settings in the structure fbconfig. You got the API key and Firebase project id when creating the Firebase project above.

var http = require('http');
var firebase = require('firebase');

const port = 8080;
const host = 'localhost';

var fbconfig = {
    apiKey: "abcdefghijklmnopqrstuvwxyz1234567890",
    authDomain: "",
    databaseURL: ""
    //storageBucket: "",
};
firebase.initializeApp(fbconfig);

firebase.auth().onAuthStateChanged(function(user) {
    if (user) {
        console.log("Signed in to Firebase");
    } else {
        console.log("No user signed in");
    }
});

function authenticateToFirebaseGoogleUser(idToken) {
    // Sign in with credential of Google user.
    var credential = firebase.auth.GoogleAuthProvider.credential(idToken);
    firebase.auth().signInWithCredential(credential).catch(
        function(error) {
            console.log("Error signing in to Firebase with user " +
                error.email + ": " + error.message + " (" +
                error.code + ")");
        });
}

server = http.createServer(function(req, res) {
    if (req.method == 'POST') {
        console.log("POST request");
        var body = '';
        req.on('data', function(data) {
            body += data;
        });
        req.on('end', function() {
            authenticateToFirebaseGoogleUser(body);
        });
        res.statusCode = 200;
        res.setHeader('Content-Type', 'text/plain');
        res.end('Credential received\n');
    } else {
        // Methods other than POST are not allowed.
        // Allowed methods are returned in 'Allow' header field.
        console.log("Unsupported HTTP request: " + req.method);
        res.statusCode = 405;
        res.setHeader('Content-Type', 'text/plain');
        res.setHeader('Allow', 'POST');
        res.end("Method not supported\n");
    }
});
server.listen(port, host);
console.log('HTTP server listening on ' + host + ':' + port);

The IoT gateway receives the credential from the web-server through an HTTP POST request (lines following server = http.createServer(function(req, res)). The IoT gateway will only handle POST requests. Any other request (GET, OPTIONS, …) will not be accepted (HTTP status code 405 “Method Not Allowed”). The handler for the POST request receives the credential from the body of the HTTP POST request and returns a 200 “OK” status code.

The credential is then used for authentication to Firebase, as shown in function authenticateToFirebaseGoogleUser(idToken). The idToken is the data received from the web-server (proxy), which is converted to a credential object with firebase.auth.GoogleAuthProvider.credential(idToken). With the command firebase.auth().signInWithCredential(credential), the authentication with Firebase is triggered using this credential. If the authentication succeeds, the callback function registered with firebase.auth().onAuthStateChanged() will be called with the signed-in user.

Now, the IoT home gateway is ready to use the Firebase database for reading and writing data from/to the database. You can start the IoT home gateway like this:

$ node iot-gateway.js

Writing Data to the Database

Typically, sensor events from sensors connected to the IoT home gateway would trigger updates from the IoT home gateway to the Firebase database. As mentioned above, we will not focus on the sensor-to-gateway connection in this tutorial, but rather on the interaction between the IoT home gateway and the Firebase database. Therefore, we simulate sensor updates by a simple timer in the IoT home gateway that periodically triggers updates to the database every 15 s:

// Simulate sensor events through a periodic timer.
function sensorUpdate() {
    console.log("Sensor event");

    var user = firebase.auth().currentUser;
    if (user) {
        // User is signed-in
        var uid = user.uid;
        var databaseRef = firebase.database();
        var newEventRef = databaseRef.ref('sensorevents/' + uid).push();
        var timestamp = new Date().toString();
        newEventRef.set({
            'value': 'foo-sensor-value',
            'time': timestamp
        });
        console.log("Added new item to database");
    }
}
var timerSensorUpdates = setInterval(sensorUpdate, 15000);

The interesting part here is the function sensorUpdate(), which writes a sensor event to the database in the sub-tree sensorevents/[user id]. Remember the security rules we have set up above? There, we defined that an authorized user can write to exactly this sub-tree, defined by his user id. Function push() adds an element with a unique id to this sub-tree and returns a reference to this element. Then, we can set the values of this element using function set() with some key/value pairs. It’s that simple!
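For illustration, after a few timer ticks the stored data has roughly the following shape (the user id and push ids below are made-up examples; real push ids are generated by Firebase):

```json
{
    "sensorevents": {
        "AbCdEfUserId123": {
            "-KxPushId0001": { "time": "Sat Jan 01 2017 12:00:00", "value": "foo-sensor-value" },
            "-KxPushId0002": { "time": "Sat Jan 01 2017 12:00:15", "value": "foo-sensor-value" }
        }
    }
}
```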

If the user is not signed in, firebase.auth().currentUser will be null, so we cannot write to the database, since only authorized users can write items to their own branch defined by their user id.

You can also try to add something in another branch outside the user branch. Then, you should receive a “permission denied” error.

The following image shows the database content after some updates. In the Firebase console, you can watch in realtime how these values are added every 15 s:

Database Content



Stay tuned for part two of the tutorial explaining how to connect apps to receive sensor event updates in realtime!

Door Bell 2.0 — IoT Door Bell

What is Door Bell 2.0?

Door Bell 2.0 (or DoorBell20 for short) is a Bluetooth Low Energy (BLE) appliance to monitor a door bell and send notifications whenever the door bell rings. It turns a conventional door bell into a smart door bell that can be connected to the Internet of Things (IoT), e.g., using the DoorBell20 If This Then That (IFTTT) client. Thus, DoorBell20 is the modern version of a door bell, or, as the name suggests, the door bell version 2.0 for the IoT era.

Full source code and hardware design is available at GitHub.

DoorBell20 consists of two major parts:

  • The DoorBell20 monitoring device, which is connected in parallel to the door bell and wirelessly via BLE to a client running on a remote IoT gateway, e.g., a Raspberry Pi with Bluetooth stick.
  • A DoorBell20 client running on the IoT gateway passing on notifications received via BLE to a remote cloud service. Different clients can be implemented for different IoT cloud services. So far, DoorBell20 includes a client for If This Then That (IFTTT), which makes it very easy to trigger different actions when a door bell event is detected. For instance, a notification can be sent to a mobile phone or trigger an IP camera installed at the door to take pictures.

The following ASCII art shows the big picture of how DoorBell20 works.

                  [IoT Cloud Service]
                  [  (e.g., IFTTT)  ]
                           | ^
                 Internet  | | Door Bell Event Notifications
                [      IoT Gateway      ]
                [ w/ DoorBell20 Client  ]
                [ (e.g., IFTTT Trigger) ]
                           |  ^
           BLE Connection  |  | Door Bell Event Notifications  
|___________[DoorBell20 Monitoring Device]_________|
|                                                  |
|____________________[Door Bell]___________________|
|                                                  |
|                                                  |
|                                                 \   Door Bell Push Button
|                                                  \
|                                                  |
|________________(Voltage Source)__________________|
                 (    12 VAC    )

The following images show the DoorBell20 monitoring device, its connection to a door bell, and a door bell event notification displayed by the If This Then That (IFTTT) app on a mobile phone.


DoorBell20 monitoring device

DoorBell20 device connected to door bell.


IFTTT client showing door bell event notification.


The main features of DoorBell20 are:

  • Open-source software and hardware. Source code for the door bell monitoring device and IFTTT client as well as Eagle files (schematic and board layout) are provided.
  • Maker-friendly: using easily available cheap standard components (nRF51822 BLE chip, standard electronic parts), easy to manufacture circuit board, and open-source software and hardware design.
  • Includes a client for the popular and versatile If This Then That (IFTTT) service to facilitate the development of IoT applications integrating DoorBell20.
  • Liberal licensing of software and hardware under the Apache License 2.0 and the CERN Open Hardware License 1.0, respectively.

DoorBell20 Monitoring Device

The following images show the DoorBell20 hardware and schematic:

DoorBell20 monitoring device



DoorBell20 monitoring device


Schematic of DoorBell20 device


The DoorBell20 monitoring device is based on the BLE chip nRF51822 by Nordic Semiconductor. The nRF51822 features an ARM Cortex M0 processor implementing both the application logic and the BLE stack (the so-called softdevice). DoorBell20 uses the S110 softdevice version 8.0. See the next sub-section on how to flash the softdevice and the application code. We use a so-called “Bluetooth 4.0” breakout board with an nRF51822 (version 3, variant AA with 16 kB of RAM and 256 kB of flash memory) and two 2×9 connectors (2 mm pitch), which you can buy over the Internet for about 6 US$ including shipping.

We isolate the 12 VAC door bell circuit from the microcontroller using an opto-isolator. A rectifier and a 5 V voltage regulator are used to power the LED of the opto-isolator whenever the door bell is ringing. A GPIO pin of the nRF51822 connected to the other side of the opto-isolator then detects the event. In addition to the integrated protection mechanisms of the LM2940 voltage regulator (short circuit and thermal overload protection, shutdown during transients), a varistor protects from voltage transients, since many door bells are inductive loads inducing voltage spikes when switched off. Since varistors age with every voltage transient, a fuse is added to protect the door bell circuit from a short circuit of the varistor.

The nRF51822 is powered by two AA batteries. No additional voltage regulator is required, which increases the energy efficiency, and the monitoring device is expected to run for years on a pair of AA batteries. Note that we did not implement reverse-polarity protection, so be careful to insert the batteries correctly.

The schematic and circuit board layout (PCB) of the DoorBell20 monitoring device for Eagle, as well as the firmware, can be found at GitHub. We deliberately used a simple single-sided through-hole design to help makers produce their own boards.

IFTTT DoorBell20 Client

DoorBell20 can be connected to any BLE client running on a remote machine. After receiving a BLE notification about a door bell event, the client can trigger local actions and forward the event to a remote IoT cloud service. DoorBell20 comes with a client for connecting to the popular If This Then That (IFTTT) cloud service.

Whenever a notification for a door bell alarm is received, a web request is sent to the IFTTT Maker Channel triggering an event with a pre-defined name. You can then define your own IFTTT recipes to decide what to do with this event like showing a notification on your smartphone through the IFTTT app, as shown in the following image.
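The Maker Channel is triggered through a simple web request to a per-user URL. A small sketch of constructing that URL (the event name door_bell_ring and the key are placeholders you define in your own IFTTT account):

```javascript
// Build the IFTTT Maker Channel trigger URL for a named event.
// 'door_bell_ring' and 'YOUR_IFTTT_KEY' are placeholders, not real values.
function makerTriggerUrl(eventName, key) {
    return 'https://maker.ifttt.com/trigger/' + encodeURIComponent(eventName) +
           '/with/key/' + key;
}

console.log(makerTriggerUrl('door_bell_ring', 'YOUR_IFTTT_KEY'));
// → https://maker.ifttt.com/trigger/door_bell_ring/with/key/YOUR_IFTTT_KEY
```

Sending an HTTP POST (or GET) request to this URL triggers the event, and your IFTTT recipes take it from there.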

IFTTT client showing door bell event notification.


For further technical details, please have a look at the documentation and source code provided at GitHub.

Key 2.0 — Bluetooth IoT Door Lock

What is Key 2.0?

Key 2.0 (or Key20 for short) is a Bluetooth IoT Door Lock. It turns a conventional electric door lock into a smart door lock that can be opened using a smartphone without the need for a physical key. Thus, Key20 is the modern version of a physical key, or, as the name suggests, the key version 2.0 for the Internet of Things (IoT) era.

Key20 consists of two parts:

  1. Door lock controller device, which is physically connected to the electric door lock and wirelessly via BLE to the mobile app.
  2. Mobile app implementing the user interface to unlock the door and communicating with the door lock controller through BLE.

You can get a quick impression on how Key20 works by watching the following video:

The following image shows the Key20 door lock controller device and the Key20 app running on a smartphone.

Key 2.0 App and Door Lock Controller Device


The main features of Key20 are:

  • Using state-of-the-art security mechanisms (Elliptic Curve Diffie-Hellman Key Exchange (ECDH), HMAC) to protect against attacks.
  • Open-source software and hardware, including an open implementation of the security mechanisms. No security by obscurity! Source code for the app and door lock controller as well as Eagle files (schematic and board layout) are available on GitHub.
  • Maker-friendly: using easily available cheap standard components (nRF51822 BLE chip, standard electronic parts), easy to manufacture circuit board, and open-source software and hardware design.
  • Works with BLE-enabled Android 4.3 mobile devices (and of course newer versions). Porting to other mobile operating systems like iOS should be straightforward.
  • Liberal licensing of software and hardware under the Apache License 2.0 and the CERN Open Hardware License 1.0, respectively.

Security Concepts

A door lock obviously requires security mechanisms to protect from unauthorized requests to open the door. To this end, Key20 implements the following state-of-the-art security mechanisms.

Authorization of Door Open Requests with HMAC

All door open requests are authorized through a Keyed-Hash Message Authentication Code (HMAC). A 16 byte nonce (big random number) is generated by the door lock controller for each door open request, as soon as a BLE connection is made to the door lock controller. The nonce is sent to the mobile app. Both the nonce and the shared secret are used by the mobile app to calculate a 512 bit HMAC using the SHA-2 hashing algorithm, which is then truncated to 256 bits (HMAC512-256) and sent to the door lock controller. The door lock controller also calculates an HMAC based on the nonce and the shared secret, and only if both HMACs match will the door be opened.

The nonce is only valid for one door open request and effectively prevents replay attacks, i.e., an attacker sniffing on the radio channel and replaying the sniffed HMAC later. Note that the BLE radio communication is not encrypted, and it actually does not need to be encrypted since a captured HMAC is useless when re-played.

Moreover, each nonce is only valid for 15 s to prevent man-in-the-middle attacks where an attacker intercepts the HMAC, does not forward it immediately, but waits until the (authorized) user, unable to open the door, walks away. Later the attacker would send the HMAC to the door lock controller to open the door. With a time window of only 15 s (which could be reduced further), such attacks are futile since the authorized user will still be at the door.

Note that the whole authentication procedure involves no heavy-weight asymmetric crypto functions, only light-weight hashing algorithms, which the nRF51822 micro-controller (ARM Cortex-M0) of the door lock device can execute fast enough not to delay door unlocking.

With respect to the random nonce we would like to note the following. First, the nRF51822 chip includes a random number generator for generating random numbers from thermal noise, so nonces should be of high quality, i.e., truly random. An attack by cooling down the Bluetooth chip to reduce randomness due to thermal noise is not relevant here since this requires physical access to the lock controller installed within the building, i.e., the attacker is then already in your house.

Secondly, 128 bit nonces provide reasonable security for our purpose. Assume one door open request per millisecond (a very pessimistic assumption!) and 100 years of operation, i.e., less than n = 2^42 requests to be protected. With 128 bit nonces, we have m = 2^128 possible nonce values. The birthday paradox can then be used to calculate the probability p of at least one pair of requests sharing the same nonce. An approximation of p for n << m is p(n,m) = 1 − e^((−n^2)/(2m)), which practically evaluates to 0 for n = 2^42 and m = 2^128. Even for n = 2^52 (one request per µs; not even possible with BLE), p(2^52, 2^128) < 3e-8, which is roughly the probability of being hit by lightning (about 5.5e-8).
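The approximation can be checked numerically with a few lines of Python:

```python
import math

def collision_probability(n, m):
    """Birthday-paradox approximation (valid for n << m): probability that
    at least two of n requests share a nonce drawn from m possible values."""
    return 1.0 - math.exp(-(n * n) / (2.0 * m))

m = 2.0 ** 128                              # 128 bit nonces
print(collision_probability(2.0 ** 42, m))  # one request/ms for 100 years
print(collision_probability(2.0 ** 52, m))  # one request/us for 100 years
```

The first value is on the order of 1e-14, the second below 3e-8, matching the numbers above.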

Exchanging Keys with Elliptic Curve Diffie-Hellman Key Exchange (ECDH)

Obviously, the critical part is the establishment of a shared secret between the door lock controller and the mobile app. Anybody in possession of the shared secret can enter the building, thus, we must ensure that only the lock controller and the Key20 app know the secret. To this end, we use Elliptic Curve Diffie-Hellman (ECDH) key exchange based on Curve 25519. We assume that the door lock controller is installed inside the building that is secured by the lock—if the attacker is already in your home, the door lock is futile anyway. Thus, only the authorized user (owner of the building) has physical access to the door lock controller.

First, the user needs to press a button on the door lock controller device to enter key exchange mode (the red button in the pictures). Then both the mobile app and the door lock controller calculate their own key pairs based on the Elliptic Curve 25519 and exchange their public keys, which anyone may know. Using the public key of the other party and its own private key, each side can calculate the same shared secret.
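The public/private key pattern can be sketched with classic finite-field Diffie-Hellman in Python. Key20 itself uses ECDH on Curve 25519; the pattern is the same, but the toy parameters below are deliberately small and NOT secure.

```python
import secrets

# Toy parameters for illustration only -- far too weak for real use.
P = 2 ** 127 - 1   # a Mersenne prime
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1  # private key stays secret
    return priv, pow(G, priv, P)         # public key may be known by anyone

lock_priv, lock_pub = keypair()   # door lock controller
app_priv, app_pub = keypair()     # mobile app

# Each side combines its own private key with the other party's public key:
lock_secret = pow(app_pub, lock_priv, P)
app_secret = pow(lock_pub, app_priv, P)
assert lock_secret == app_secret  # both derive the same shared secret
```

Since (G^a)^b = (G^b)^a mod P, both sides arrive at the same secret although only public keys ever cross the radio channel.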

Using Curve 25519 and the Curve 25519 assembler implementation optimized for ARM Cortex-M0 from the Micro NaCl project, key pairs and shared secrets can be calculated in sub-seconds on the nRF51822 BLE chip (ARM Cortex M0).

Without further measures, DH is susceptible to man-in-the-middle attacks where an attacker actively manipulates the communication between mobile app and door lock controller. In such an attack, the attacker substitutes his own public key for those of the lock controller and the app, establishing two shared secrets: one between him and the door lock controller, and one between him and the mobile app. We prevent such attacks with the following mechanism. After key exchange, the mobile app and the door lock device both display a checksum (hash) of their version of the exchanged shared secret. The user visually checks that these checksums are the same. If they are, no man-in-the-middle attack has happened, since the man in the middle cannot calculate the same shared secret as the door lock controller and the mobile app (after all, the private keys of the door lock controller and the mobile app remain private). Only then does the user confirm the key by pressing buttons on the door lock controller and in the mobile app. Remember that only the authorized user has physical access to the door lock controller since it is installed within the building secured by the lock.
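Deriving such a human-comparable checksum is straightforward; here is a Python sketch of the idea (the concrete encoding used by Key20 may differ):

```python
import hashlib

def key_checksum(shared_secret, digits=6):
    """Derive a short, human-comparable code from the shared secret.
    Both devices derive it independently from their own copy; a man in
    the middle holds two different secrets and cannot make both displays
    show the same code."""
    h = hashlib.sha256(shared_secret).digest()
    return str(int.from_bytes(h[:4], "big") % 10 ** digits).zfill(digits)

secret = b"\x01" * 32                               # example secret
assert key_checksum(secret) == key_checksum(secret)  # deterministic
print(key_checksum(secret))
```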

The following image shows the mobile app and the door lock controller displaying a shared secret checksum after key exchange. The user can confirm this secret by pushing the green button on the lock controller device and the Confirm Key button of the app.

Key 2.0: key checksum verification after key exchange.

Why not Standard Bluetooth Security?

Actually, Bluetooth 4.2 implements security concepts similar to the mechanisms described above. So it is a valid question: why don't we just rely on the security concepts implemented by Bluetooth?

A good overview of why Bluetooth might not be as secure as we would like is provided by Francisco Corella, so we refer the interested reader to his page for the technical details and a discussion of Bluetooth security. We would also like to add that many mobile devices still implement only Bluetooth 4.0, which is even less secure than Bluetooth 4.2.

So we decided not to rely on Bluetooth security mechanisms, but rather implement all security protocols on the application layer using state of the art security mechanisms as described above.

Bluetooth Door Lock Controller Device

The following image shows the door lock controller and its components.

Key 2.0 Door Lock Controller Device

The Door Lock Controller Device needs to be connected to the electric door lock (2 cables). You can simply replace a manual switch by the door lock controller device.

The door lock controller needs to be placed within Bluetooth radio range of the door and inside the building. The typical radio range is about 10 m; depending on the walls, the distance might be shorter or longer. In our experience, one concrete wall is no problem, but two might block the radio signal.

The main part of the hardware is an nRF51822 BLE chip from Nordic Semiconductors. The nRF51822 features an ARM Cortex M0 micro-controller and a so-called softdevice implementing the Bluetooth stack, which runs together with the application logic on the ARM Cortex M0 processor.

An LCD is used to implement the secure key exchange procedure described above (visual key verification to avoid man-in-the-middle attacks).

For more technical details including schematics, board layout, and source code please visit the Key20 GitHub page.

Android App

The app requires a BLE-enabled mobile device running Android version 4.3 “Jelly Bean” (API level 18) or higher.

The following images show the two major tabs of the app: one for opening the door, and the second for exchanging keys between the app and the door lock controller.

Key 2.0 App: door unlock tab


Key 2.0 App: key exchange tab

The source code is available from the Key20 GitHub page.

Testing USB-C to USB-A/Micro-USB Cables for Conformance

Many new mobile devices now feature a USB-C connector. To connect these devices to USB devices or chargers with a USB-A or Micro-USB connector, you need an adapter or cable with a USB-C plug on one side and a USB-A/Micro-USB connector on the other.

As first discovered by Google engineer Benson Leung, many of these USB-C to USB-A/Micro-USB cables and adapters do not conform to the USB standard and allow USB-C devices to draw excessive power, which might permanently damage the host or charger.

Recently, I bought a Nexus 5x featuring a USB-C connector and faced the problem of figuring out whether my USB-C to USB-A cable conforms to the standard. So I bought a USB-C connector (actually, not as easy to get as I thought) and tested my cable with a multimeter. Of course, this works fine, and fortunately, my cable was OK. Then I thought: why not build a little device to quickly check cables without a multimeter? Just plug in the cable and see whether it is OK or not.

That’s exactly what I present here: an Arduino-based device to check USB-C to USB-A/Micro-USB cables and adapters for standard conformity. Two images of the board are shown below. It’s not very complex at all as you will see, and I don’t claim this to be rocket science. It’s just a little practical tool. Everything is completely open source, the code as well as the hardware design (printed circuit board), and you can download both from my Github repository.

USB-C Adapter Tester

USB-C Adapter Tester


I don’t want to repeat everything that has already been said elsewhere. However, to keep this page self-contained, I quickly describe the problem in plain words so you can easily understand the solution.

USB-C allows the USB host or charger (called the downstream-facing port, DFP) to signal to the powered USB-C device (called the upstream-facing port, UFP) how much current it can provide. This is implemented by a defined current flowing from DFP to UFP over the CC (Channel Configuration) line of the USB-C connector: 80 µA ±20 % signals “Default USB power” (900 mA for “Super Speed” devices), 180 µA ±8 % signals 1.5 A, and 330 µA signals 3.0 A.

So far, so good. A USB-C host or charger knows how much power it can provide and signals the correct value by sending the corresponding current over the CC line. The problem starts with “legacy” devices with USB-A or Micro-USB connectors. These connectors don't have a CC pin, thus the host or charger cannot signal to the USB-C device how much current it can provide. In this case, the current on the CC line is “generated” by the cable or adapter using a simple resistor RP connecting the 5 V line to the CC line of the UFP. You might remember: R = V/I. So by selecting the right resistor in the cable/adapter, a certain current ICC flows through the CC line. Actually, the UFP connects CC through another 5.1k resistor (RD) to ground, so you have to consider the series resistance of RP and RD when calculating ICC. RP = 56k corresponds to about 80 µA (“Default USB Power”), RP = 22k to about 180 µA (1.5 A), and RP = 10k to about 330 µA (3.0 A).

Note that now the adapter cable rather than the upstream USB host or USB charger defines the maximum current the downstream USB-C device may pull! However, the cable cannot know to which host or charger it will be connected and how much current this host or charger can actually provide. So the only safe choice for RP is a value resulting in 80 µA on the CC line corresponding to “Default USB Power”, i.e., a 56k resistor. Unfortunately, some cable and adapter manufacturers don't use 56k resistors but lower values like 10k. If your host can only provide the default USB power, it might get grilled.
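The ICC values for the three resistor variants follow directly from Ohm's law; a quick Python sanity check (VBUS = 5 V, RD = 5.1k in series, as described above):

```python
def cc_current_ua(rp_ohms, rd_ohms=5100.0, vbus=5.0):
    """ICC in microamps: VBUS driven through the series resistance RP + RD."""
    return 1e6 * vbus / (rp_ohms + rd_ohms)

# 56k -> ~82 uA, within the 80 uA +/- 20 % "Default USB Power" window
assert abs(cc_current_ua(56e3) - 80.0) <= 0.20 * 80.0
# 22k -> ~185 uA, within 180 uA +/- 8 % (1.5 A)
assert abs(cc_current_ua(22e3) - 180.0) <= 0.08 * 180.0
# 10k -> ~331 uA (3.0 A)
assert cc_current_ua(10e3) > 300.0
```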


Now that we know what to check, we can build the USB-C adapter tester shown in the images above. The tester consists of a microcontroller (Atmega 328p; the same chip as used by the Arduino UNO) featuring an Analog-to-Digital Converter (ADC). The ADC measures the voltage drop across a 5.1k resistor (actually, two separate 5.1k resistors on different channels of the ADC, since USB-C features two CC lines so you can plug in the USB-C cable either way). Knowing the resistance and the voltage drop measured by the ADC, the microcontroller calculates ICC. If ICC is within the specified range (80 µA ±20 %), an LED signaling a “good” cable is turned on via a GPIO pin. If it is outside the range, another LED signaling a “bad” cable is turned on.
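The decision logic boils down to a few lines. The following Python sketch mirrors what the firmware does (the parameter values follow the description above; this is not the actual Arduino code):

```python
VREF = 2.5      # external 2.5 V reference diode (see below)
ADC_MAX = 1023  # 10-bit ADC of the Atmega 328p
RD = 5100.0     # on-board 5.1k resistor the CC current flows through

def cable_ok(adc_reading):
    """Translate an ADC reading of the voltage across RD into ICC and
    check it against the 80 uA +/- 20 % window (64-96 uA)."""
    v_rd = adc_reading * VREF / ADC_MAX
    i_cc_ua = 1e6 * v_rd / RD
    return 64.0 <= i_cc_ua <= 96.0

assert cable_ok(171)      # ~0.42 V -> ~82 uA: good (56k) cable
assert not cable_ok(691)  # ~1.69 V -> ~331 uA: bad (10k) cable
```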

The cable to be checked is also powering the microcontroller from the USB host or charger. The good old Atmega 328p can be powered from 5V, which is the voltage of USB-A and Micro-USB.

Since the internal voltage reference of the Atmega might not be very precise, I used an external 2.5 V voltage reference diode to provide a reference voltage to the ADC. If you trust the internal 1.1 V voltage reference of the Atmega, you can save this part.

As said, the USB-C connector was a little hard to get, but I finally found one in an eBay shop.

For the implementation of the code, I used the Arduino platform. The device is programmed through a standard 6 pin in-system programmer port.

As soon as you plug in the cable under test, the microcontroller starts measuring the voltage drop, translates it to current, compares it to the specified range, and switches on the corresponding LED signaling a good or bad cable.

If you want to etch the PCB yourself, I provide the Eagle files in the Git repository. Of course, you can also simply use a standard Arduino UNO instead of the shown PCB.

Several cables and adapters were tested with this device. The Micro-USB/USB-C adapter that came with the Nexus 5x phone was OK, as was my axxbiz USB-A/USB-C cable. Some Micro-USB/USB-C adapters were not OK (using a 10k resistor instead of a 56k resistor). Benson Leung has tested many more cables if you are interested in what to buy.

I hope your USB cable is OK :)

Raspberry Pi Going Realtime with RT Preempt

[UPDATE 2016-05-13: Added pre-compiled kernel version 4.4.9-rt17 for all Raspberry Pi models (Raspberry Pi Model A(+), Model B(+), Zero, Raspberry Pi 2, Raspberry Pi 3). Added build instructions for Raspberry Pi 2/3.]

A real-time operating system gives you deterministic bounds on delay and delay variation (jitter). Such a real-time operating system is an essential prerequisite for implementing so-called Cyber Physical Systems, where a computer controls a physical process. Prominent examples are the control of machines and robots in production environments (Industry 4.0), drones, etc.

RT Preempt is a popular patch for the Linux kernel to transform Linux into such a realtime operating system. Moreover, the Raspberry Pi has many nice features to interface with sensors and actuators like SPI, I2C, and GPIO so it seems to be a good platform for hosting a controller in a cyber-physical system. Consequently, it is very attractive to install Linux with the RT Preempt patch on the Raspberry Pi.

Exactly this is what I do here: I provide detailed instructions on how to install a Linux kernel with the RT Preempt patch on a Raspberry Pi. Basically, I wrote this document to record the process for myself, and it is more or less a collection of information you will find on the web. But anyway, I hope I can save some people some time.

And to save you even more time, here is the pre-compiled kernel (including kernel modules, firmware, and device tree) for the Raspberry Pi Model A(+),B(+), Raspberry Pi Zero, Raspberry Pi 2 Model B, Raspberry Pi 3 Model B:

To install this pre-compiled kernel, log in to your Raspberry Pi running Raspbian (if you have not installed Raspbian already, you can find an image here), and execute the following commands (I recommend backing up your old image since this procedure will overwrite the old kernel):

pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ cd /tmp
pi@raspberry ~$ wget
pi@raspberry ~$ tar xzf kernel-4.4.9-rt17.tgz
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/
pi@raspberry ~$ sudo /sbin/reboot

With this patched kernel, I could achieve bounded latency well below 200 microseconds on a fully loaded 700 MHz Raspberry Pi Model B (see results below). This should be safe for tasks with a cycle time of 1 ms.

Since compiling the kernel on the Pi is very slow, I will cross compile the kernel on a more powerful host. You can distinguish the commands executed on the host and the Pi by looking at the prompt of the shell in the following commands.

Install Vanilla Raspbian on your Raspberry Pi

Download Raspbian and install it on your SD card.

Download Raspberry Pi Kernel Sources

On your host (where you want to cross-compile the kernel), download the latest kernel sources from Github:

user@host ~$ git clone
user@host ~$ cd linux

If you like, you can switch to an older kernel version like 4.1:

user@host ~/linux$ git checkout rpi-4.1.y

Patch Kernel with RT Preempt Patch

Next, patch the kernel with the RT Preempt patch. Choose the patch matching your kernel version. To this end, have a look at the Makefile. VERSION, PATCHLEVEL, and SUBLEVEL define the kernel version. At the time of writing this tutorial, the latest kernel was version 4.4.9. Patches for older kernels can be found in folder "older".

user@host ~/linux$ wget
user@host ~/linux$ zcat patch-4.4.9-rt17.patch.gz | patch -p1

Install and Configure Tool Chain

For cross-compiling the kernel, you need the tool chain for ARM on your machine:

user@host ~$ git clone
user@host ~$ export ARCH=arm
user@host ~$ export CROSS_COMPILE=/home/user/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
user@host ~$ export INSTALL_MOD_PATH=/home/user/rtkernel

Later, when you install the modules, they will go into the directory specified by INSTALL_MOD_PATH.

Configure the kernel

Next, we need to configure the kernel for using RT Preempt.

For Raspberry Pi Model A(+), B(+), Zero, execute the following commands:

user@host ~$ export KERNEL=kernel
user@host ~$ make bcmrpi_defconfig

For Raspberry Pi 2/3 Model B, execute these commands:

user@host ~$ export KERNEL=kernel7
user@host ~$ make bcm2709_defconfig

An alternative way is to export the configuration from a running Raspberry Pi:

pi@raspberry$ sudo modprobe configs
user@host ~/linux$ scp pi@raspberry:/proc/config.gz ./
user@host ~/linux$ zcat config.gz > .config

Then, you can start to configure the kernel:

user@host ~/linux$ make menuconfig

In the kernel configuration, enable the following settings:

  • CONFIG_PREEMPT_RT_FULL: Kernel Features → Preemption Model (Fully Preemptible Kernel (RT)) → Fully Preemptible Kernel (RT)
  • Enable HIGH_RES_TIMERS: General setup → Timers subsystem → High Resolution Timer Support (Actually, this should already be enabled in the standard configuration.)

Build the Kernel

Now, it’s time to cross-compile and build the kernel and its modules:

user@host ~/linux$ make zImage
user@host ~/linux$ make modules
user@host ~/linux$ make dtbs
user@host ~/linux$ make modules_install

The last command installs the kernel modules in the directory specified by INSTALL_MOD_PATH above.

Transfer Kernel Image, Modules, and Device Tree Overlay to their Places on Raspberry Pi

We are now ready to transfer everything to the Pi. To this end, you could mount the SD card on your PC. I prefer to transfer everything over the network using a tar archive:

user@host ~/linux$ mkdir $INSTALL_MOD_PATH/boot
user@host ~/linux$ ./scripts/mkknlimg ./arch/arm/boot/zImage $INSTALL_MOD_PATH/boot/$KERNEL.img
user@host ~/linux$ cp ./arch/arm/boot/dts/*.dtb $INSTALL_MOD_PATH/boot/
user@host ~/linux$ cp -r ./arch/arm/boot/dts/overlays $INSTALL_MOD_PATH/boot
user@host ~/linux$ cd $INSTALL_MOD_PATH
user@host ~/linux$ tar czf /tmp/kernel.tgz *
user@host ~/linux$ scp /tmp/kernel.tgz pi@raspberry:/tmp

Then on the Pi, install the real-time kernel (this will overwrite the old kernel image!):

pi@raspberry ~$ cd /tmp
pi@raspberry ~$ tar xzf kernel.tgz
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/

Most people also disable the Low Latency Mode (llm) for the SD card:

pi@raspberry /boot$ sudo nano cmdline.txt

Add the following option:



pi@raspberry ~$ sudo /sbin/reboot

Latency Evaluation

Surely, you want to know the latency bounds achieved with the RT Preempt patch. To this end, you can use the tool cyclictest with the following test case:

  • clock_nanosleep(TIMER_ABSTIME)
  • Cycle interval 500 micro-seconds
  • 100,000 loops
  • 100 % load generated by running the following commands in parallel:
    • On the Pi:
      pi@raspberry ~$ cat /dev/zero > /dev/null
    • From another host:
      user@host ~$ sudo ping -i 0.01 raspberrypi
  • 1 thread (I used a Raspberry Pi model B with only one core)
  • Locked memory
  • Process priority 80
Install and run cyclictest as follows:

pi@raspberry ~$ git clone git://
pi@raspberry ~$ cd rt-tests/
pi@raspberry ~/rt-tests$ make all
pi@raspberry ~/rt-tests$ sudo ./cyclictest -m -t1 -p 80 -n -i 500 -l 100000

On a Raspberry Pi model B at 700 MHz, I got the following results:

T: 0 ( 976) P:80 I:500 C: 100000 Min: 23 Act: 40 Avg: 37 Max: 95

With some more tests, the worst-case latency sometimes reached about 166 microseconds. Adding a safety margin, this should be safe for cycle times of 1 ms.

I also observed that with timers other than clock_nanosleep(TIMER_ABSTIME), e.g., the system timers sys_nanosleep and sys_setitimer, the latency was much higher, with maximum values above 1 ms. Thus, for low latencies, I would only rely on clock_nanosleep(TIMER_ABSTIME).
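To get a rough feeling for what cyclictest measures, the following Python sketch implements the same idea: wake at absolute deadlines and record how late each wake-up is. For real measurements use cyclictest itself; a non-realtime Python process can only illustrate the principle.

```python
import time

def measure_jitter(interval_us=500, loops=200):
    """Wake at absolute deadlines and record the wake-up latency in
    microseconds, similar in spirit to cyclictest."""
    interval_ns = interval_us * 1000
    latencies = []
    deadline = time.monotonic_ns() + interval_ns
    for _ in range(loops):
        while time.monotonic_ns() < deadline:  # wait for the deadline
            time.sleep(0)
        latencies.append((time.monotonic_ns() - deadline) / 1000.0)
        deadline += interval_ns
    return min(latencies), sum(latencies) / len(latencies), max(latencies)

lo, avg, hi = measure_jitter()
print(f"Min: {lo:.0f} Avg: {avg:.0f} Max: {hi:.0f} (us)")
```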

Introducing SDN-MQ: A Powerful and Simple-to-Use Northbound Interface for OpenDaylight

One of the essential parts of an SDN controller is the so-called northbound interface through which network control applications implementing control logic interface with the SDN controller. The SDN controller then uses the OpenFlow protocol to program the switches according to the instructions of the control application. Since the northbound interface is the “API to the network”, a well-designed interface is essential for the acceptance and success of the SDN controller.

Ideally, the northbound interface should be powerful and still simple. Powerful means that it should expose all essential functionalities of OpenFlow to the control application. Certainly, the most essential function of SDN is flow programming to define forwarding table entries on the switches. Flow programming should include proactive flow programming, where the control application proactively decides to program a flow (e.g., a static flow), as well as reactive flow programming, where the control application reacts to packet-in events triggered by packets without matching forwarding table entries.

Simple means that the programmer should be able to use technologies that he is familiar with. So in short, the ideal northbound interface should be as simple as possible, but not simpler.

Current Northbound Interfaces and Observed Limitations

OpenDaylight currently offers two kinds of northbound interfaces:

  1. RESTful interfaces using XML/JSON over HTTP.
  2. OSGi allowing for implementing control logic as OSGi services.

RESTful interfaces are simple to use since they are based on technologies that many programmers are familiar with and that are used in many web services. Parsing and creating JSON or XML messages and sending or receiving these messages over HTTP is straightforward and well-supported by many libraries. However, due to the request/response nature of REST and HTTP, these interfaces are restricted to proactive flow programming. The very essential feature of reacting to packet-in events is missing.

OSGi interfaces are powerful. Control applications can use any feature of the OpenFlow standard (implemented by the controller). However, they are much more complex than RESTful interfaces since OSGi itself is a complex technology. Moreover, OSGi is targeted at Java, which is nice for integrating with the Java-based OpenDaylight controller, but bad if you want to implement your control logic in another language like C++ or Python.

So none of these interfaces is simple and powerful at the same time.

How SDN can Benefit from Message-oriented Middleware

So how can we have the best of both worlds: a simple and powerful interface? The keyword (or maybe at least one possible keyword) is message-oriented middleware. As shown in the following figure, a message-oriented middleware (MOM) decouples the SDN controller from the control application through message queues for request/response interaction (proactive flow programming) and publish/subscribe topics for event-based interaction (reactive flow programming). So we can program flows through a request-response interface implemented by message queues and react to packet-in events by subscribing to events through message topics.


Moreover, messages can be based on simple textual formats like XML or JSON, making message creation and interpretation as simple as for the RESTful interfaces mentioned above, but without their restriction to request/response interaction.

Since a MOM decouples the SDN controller from the control application, the control logic can be implemented in any programming language. SDN controller and application talk to each other using JSON/XML, and the MOM takes care of transporting messages from application to SDN controller and vice versa.

This decoupling also allows for the horizontal distribution of control logic by running control applications on several hosts. Such a decoupling “in space” is perfect for scaling out horizontally.

MOMs not only decouple the controller and control application in space but also in time. So the receiver does not need to consume a message at the time it is sent. Messages can be buffered by the MOM and delivered when the control application or SDN controller is available and ready to process them. Although a nice feature in general, time decoupling might not be strictly essential for SDN since usually we want a timely reaction from both controller and application. Still, it might be handy for some delay-tolerant functions.

SDN-MQ: Integrating Message-oriented Middleware and SDN Controller

SDN-MQ integrates a message-oriented middleware with the OpenDaylight controller. In more detail, SDN-MQ is based on the Java Message Service (JMS) standard. The basic features of SDN-MQ are:

  • All messages are consistently based on JSON, making message generation and interpretation straightforward.
  • SDN-MQ supports proactive and reactive flow programming without the need to implement complex OSGi services.
  • SDN-MQ supports message filtering for packet-in events through standard JMS selectors. So the control application can define which packet-in events to receive based on packet header fields like source and destination addresses. Following the publish/subscribe paradigm, multiple control applications can receive packet-in event notifications for the same packet.
  • SDN control logic can be distributed horizontally to different hosts for scaling out control logic.
  • Although SDN-MQ is based on the Java-based JMS standard, JMS servers such as Apache ActiveMQ support further language-independent protocols like STOMP (Streaming Text Oriented Messaging Protocol). Therefore, cross-language control applications implemented in C++, Python, JavaScript, etc. are supported.
  • Besides packet-in events and flow programming, SDN-MQ supports further essential functionality such as packet forwarding/injection via the controller.
  • SDN-MQ is open source and licensed through the Eclipse license (similar to OpenDaylight). The full source code is available at GitHub.

The figure below shows the basic architecture of SDN-MQ. SDN-MQ is implemented as OSGi services executed within the same OSGi framework as the OpenDaylight OSGi services. SDN-MQ uses the OpenDaylight services to provide its services to the control application. So basically, SDN-MQ acts as a bridge between OpenDaylight and the control application.


Three services are implemented by SDN-MQ to date:

  • Packet-in service to receive packet-in events including packet filtering based on header fields using JMS selectors.
  • Flow programming to define flow table entries on switches.
  • Packet forwarding to forward either packets received through packet-in events or new packets created by the application.

The JMS middleware transports messages between the SDN-MQ services and the control applications. As JMS middleware, we have used ActiveMQ so far, but any JMS-compliant implementation should work. If the message-oriented middleware supports other language-independent protocols (such as STOMP), control applications can be implemented in any supported language.
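To give an impression of how light-weight JSON-based flow programming is, here is a Python sketch of composing such a message. The field names are made up for illustration; consult the SDN-MQ documentation for the actual message schema.

```python
import json

# Hypothetical flow-programming message -- illustrative field names only.
flow_mod = {
    "command": "add",
    "node": "00:00:00:00:00:00:00:01",
    "match": {"etherType": "0x800", "nwDst": "10.0.0.1"},
    "actions": [{"type": "OUTPUT", "port": 2}],
    "priority": 500,
}

wire_msg = json.dumps(flow_mod)  # body of a JMS text message
reply = json.loads(wire_msg)     # the receiving side parses it just as easily
assert reply["actions"][0]["port"] == 2
```

No code generation, no OSGi, no stub classes: any language with a JSON library and a JMS/STOMP client can talk to the controller.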

Where to go from here

In my next blog post, I will explain in detail how to use SDN-MQ. Until then, you can find more details and programming examples on the SDN-MQ website at GitHub.

Stay tuned!

Reactive Flow Programming with OpenDaylight

In my last OpenDaylight tutorial, I demonstrated how to implement an OSGi module for OpenDaylight. In this tutorial, I will show how to use these modules for reactive flow programming and packet forwarding.

In detail, you will learn:

  • how to decode incoming packets
  • how to set up flow table entries including packet match rules and actions
  • how to forward packets


To make things concrete, we consider a simple scenario in this tutorial: load balancing of a TCP service (e.g., a web service using HTTP over TCP). The basic idea is that TCP connections to a service addressed through a public IP address and port number are distributed among two physical server instances using IP address re-writing performed by an OpenFlow switch. Whenever a client opens a TCP connection to the service, one of the server instances is chosen randomly, and a forwarding rule is installed by the network controller on the ingress switch to forward all incoming packets of this TCP connection to the chosen server instance. In order to make sure that the server instance accepts the packets of the TCP connection, the destination IP address is re-written to the IP address of the chosen server instance, and the destination MAC address is set to the MAC address of the server instance. In the reverse direction from server to client, the switch re-writes the source IP address of the server to the public IP address of the service. Therefore, to the client it looks like the response is coming from the public IP address. Thus, load balancing is transparent to the client.
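The controller-side decision described above can be sketched in a few lines of Python. The addresses are hypothetical, and the real implementation in this tutorial is the Java OSGi module; this sketch only captures the choice of a server and the two rewrite rules to install.

```python
import random

# Hypothetical addresses: one public service address and two server instances.
PUBLIC_IP = "10.0.0.100"
SERVERS = [("10.0.0.1", "00:00:00:00:00:01"),
           ("10.0.0.2", "00:00:00:00:00:02")]

def on_tcp_syn(client_ip, client_port):
    """Pick a server instance at random and derive the two rewrite rules
    the controller must install on the switch for this TCP connection."""
    server_ip, server_mac = random.choice(SERVERS)
    to_server = {"match": {"nwSrc": client_ip, "tpSrc": client_port,
                           "nwDst": PUBLIC_IP},
                 "actions": [{"setNwDst": server_ip},      # rewrite dst IP
                             {"setDlDst": server_mac}]}    # rewrite dst MAC
    to_client = {"match": {"nwSrc": server_ip, "nwDst": client_ip,
                           "tpDst": client_port},
                 "actions": [{"setNwSrc": PUBLIC_IP}]}     # hide server IP
    return to_server, to_client

fwd, rev = on_tcp_syn("10.0.0.50", 43210)
assert fwd["actions"][0]["setNwDst"] in ("10.0.0.1", "10.0.0.2")
assert rev["actions"][0]["setNwSrc"] == PUBLIC_IP
```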

To keep things simple, I do not consider the routing of packets. Rather, I assume that the clients and the two server instances are connected to the same switch on different ports (see figure below). Moreover, I also simplify MAC address resolution by setting a static ARP table entry at the client host for the public IP address. Since there is no physical server assigned to the public IP address, we just set a fake MAC address (in a real setup, the gateway of the data center would receive the client request, so we would not need an extra MAC address assigned to the public IP address).


I assume that you have read the previous tutorial, so I skip some explanations on how to set up an OpenDaylight Maven project, subscribe to services, and further OSGi module basics.

You can find all necessary files of this tutorial in this archive: myctrlapp.tar.gz

The folder myctrlapp contains the Maven project of the OSGi module. You can compile and create the OSGi bundle with the following commands:

user@host:$ tar xzf myctrlapp.tar.gz
user@host:$ cd ~/myctrlapp
user@host:$ mvn package

The corresponding Eclipse project can be created using

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Registering Required Services and Subscribing to Packet-in Events

For our simple load balancer, we need the following OpenDaylight services:

  • Data Packet Service for decoding incoming packets and encoding and sending outgoing packets.
  • Flow Programmer Service for setting flow table entries on the switch.
  • Switch Manager Service to determine the outport of packets forwarded to the server instances.

As explained in my previous tutorial, we register for OSGi services by implementing the configureInstance(...) method of the Activator class:

public void configureInstance(Component c, Object imp, String containerName) {
    log.trace("Configuring instance");

    if (imp.equals(PacketHandler.class)) {
        // Define exported and used services for PacketHandler component.

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("salListenerName", "mypackethandler");

        // Export IListenDataPacket interface to receive packet-in events.
        c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

        // Need the DataPacketService for encoding, decoding, sending data packets
        c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks(
            "setDataPacketService", "unsetDataPacketService").setRequired(true));

        // Need FlowProgrammerService for programming flows
        c.add(createContainerServiceDependency(containerName).setService(IFlowProgrammerService.class).setCallbacks(
            "setFlowProgrammerService", "unsetFlowProgrammerService").setRequired(true));

        // Need SwitchManager service for enumerating ports of switch
        c.add(createContainerServiceDependency(containerName).setService(ISwitchManager.class).setCallbacks(
            "setSwitchManagerService", "unsetSwitchManagerService").setRequired(true));
    }
}

The set... and unset... strings define the names of callback methods. These callback methods are implemented in our PacketHandler class to receive service proxy objects, which can be used to call the services:

/**
 * Sets a reference to the requested DataPacketService
 */
void setDataPacketService(IDataPacketService s) {
    log.trace("Set DataPacketService.");

    dataPacketService = s;
}

/**
 * Unsets DataPacketService
 */
void unsetDataPacketService(IDataPacketService s) {
    log.trace("Removed DataPacketService.");

    if (dataPacketService == s) {
        dataPacketService = null;
    }
}

/**
 * Sets a reference to the requested FlowProgrammerService
 */
void setFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Set FlowProgrammerService.");

    flowProgrammerService = s;
}

/**
 * Unsets FlowProgrammerService
 */
void unsetFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Removed FlowProgrammerService.");

    if (flowProgrammerService == s) {
        flowProgrammerService = null;
    }
}

/**
 * Sets a reference to the requested SwitchManagerService
 */
void setSwitchManagerService(ISwitchManager s) {
    log.trace("Set SwitchManagerService.");

    switchManager = s;
}

/**
 * Unsets SwitchManagerService
 */
void unsetSwitchManagerService(ISwitchManager s) {
    log.trace("Removed SwitchManagerService.");

    if (switchManager == s) {
        switchManager = null;
    }
}

Moreover, we register for packet-in events in the Activator class. To this end, we must declare that we implement the IListenDataPacket interface (line 11). This interface basically consists of one callback method receiveDataPacket(...) for receiving packet-in events, as described next.

Handling Packet-in Events

Whenever a packet without matching flow table entry arrives at the switch, it is sent to the controller and the event handler receiveDataPacket(...) of our packet handler class is called with the received packet as parameter:

public PacketResult receiveDataPacket(RawPacket inPkt) {
    // The connector, the packet came from ("port")
    NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
    // The node that received the packet ("switch")
    Node node = ingressConnector.getNode();

    log.trace("Packet from " + node.getNodeIDString() + " " + ingressConnector.getNodeConnectorIDString());

    // Use DataPacketService to decode the packet.
    Packet pkt = dataPacketService.decodeDataPacket(inPkt);

    if (pkt instanceof Ethernet) {
        Ethernet ethFrame = (Ethernet) pkt;
        Object l3Pkt = ethFrame.getPayload();

        if (l3Pkt instanceof IPv4) {
            IPv4 ipv4Pkt = (IPv4) l3Pkt;
            InetAddress clientAddr = intToInetAddress(ipv4Pkt.getSourceAddress());
            InetAddress dstAddr = intToInetAddress(ipv4Pkt.getDestinationAddress());
            Object l4Datagram = ipv4Pkt.getPayload();

            if (l4Datagram instanceof TCP) {
                TCP tcpDatagram = (TCP) l4Datagram;
                int clientPort = tcpDatagram.getSourcePort();
                int dstPort = tcpDatagram.getDestinationPort();

                if (publicInetAddress.equals(dstAddr) && dstPort == SERVICE_PORT) {
                    log.trace("Received packet for load balanced service");

                    // Select one of the two servers round robin.

                    InetAddress serverInstanceAddr;
                    byte[] serverInstanceMAC;
                    NodeConnector egressConnector;

                    // Synchronize in case there are two incoming requests at the same time.
                    synchronized (this) {
                        if (serverNumber == 0) {
                            log.trace("Server 1 is serving the request");
                            serverInstanceAddr = server1Address;
                            serverInstanceMAC = SERVER1_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER1_CONNECTOR_NAME);
                            serverNumber = 1;
                        } else {
                            log.trace("Server 2 is serving the request");
                            serverInstanceAddr = server2Address;
                            serverInstanceMAC = SERVER2_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER2_CONNECTOR_NAME);
                            serverNumber = 0;
                        }
                    }

                    // Create flow table entry for further incoming packets

                    // Match incoming packets of this TCP connection 
                    // (4 tuple source IP, source port, destination IP, destination port)
                    Match match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800);  // IPv4 ethertype
                    match.setField(MatchType.NW_PROTO, (byte) 6);       // TCP protocol id
                    match.setField(MatchType.NW_SRC, clientAddr);
                    match.setField(MatchType.NW_DST, dstAddr);
                    match.setField(MatchType.TP_SRC, (short) clientPort);
                    match.setField(MatchType.TP_DST, (short) dstPort);

                    // List of actions applied to the packet
                    List<Action> actions = new LinkedList<Action>();

                    // Re-write destination IP to server instance IP
                    actions.add(new SetNwDst(serverInstanceAddr));

                    // Re-write destination MAC to server instance MAC
                    actions.add(new SetDlDst(serverInstanceMAC));

                    // Output packet on port to server instance
                    actions.add(new Output(egressConnector));

                    // Create the flow
                    Flow flow = new Flow(match, actions);

                    // Use FlowProgrammerService to program flow.
                    Status status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Create flow table entry for response packets from server to client

                    // Match outgoing packets of this TCP connection 
                    match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800); 
                    match.setField(MatchType.NW_PROTO, (byte) 6);
                    match.setField(MatchType.NW_SRC, serverInstanceAddr);
                    match.setField(MatchType.NW_DST, clientAddr);
                    match.setField(MatchType.TP_SRC, (short) dstPort);
                    match.setField(MatchType.TP_DST, (short) clientPort);

                    // Re-write the server instance IP address to the public IP address
                    actions = new LinkedList<Action>();
                    actions.add(new SetNwSrc(publicInetAddress));
                    actions.add(new SetDlSrc(SERVICE_MAC));

                    // Output to client port from which packet was received
                    actions.add(new Output(ingressConnector));

                    flow = new Flow(match, actions);
                    status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Forward initial packet to selected server:
                    // re-write the destination addresses of the decoded packet ...
                    ethFrame.setDestinationMACAddress(serverInstanceMAC);
                    ipv4Pkt.setDestinationAddress(serverInstanceAddr);

                    log.trace("Forwarding packet to " + serverInstanceAddr.toString() + " through port " + egressConnector.getNodeConnectorIDString());

                    // ... re-encode it, and send it out on the port of the server instance.
                    RawPacket destPkt = dataPacketService.encodeDataPacket(ethFrame);
                    destPkt.setOutgoingNodeConnector(egressConnector);
                    dataPacketService.transmitDataPacket(destPkt);

                    return PacketResult.CONSUME;
                }
            }
        }
    }

    // We did not process the packet -> let someone else do the job.
    return PacketResult.IGNORED;
}
Our load balancer reacts as follows to packet-in events. First, it uses the Data Packet Service to decode the incoming packet using method decodeDataPacket(inPkt). We are only interested in packets addressed to the public IP address and port number of our load-balanced service. Therefore, we have to check the destination IP address and port number of the received packet. To this end, we iteratively decode the packet layer by layer. First, we check whether we received an Ethernet frame, and get the payload of the frame, which should be an IP packet for a TCP connection. If the payload of the frame is indeed an IPv4 packet, we typecast it to the corresponding IPv4 packet class and use the methods getSourceAddress(...) and getDestinationAddress(...) to retrieve the IP addresses of the client (source) and service (destination). Then, we go up one layer and check for a TCP payload to retrieve the port information in a similar way.
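The IPv4 class hands out addresses as raw int values; a helper along the lines of the intToInetAddress(...) used above converts them to java.net.InetAddress. A self-contained version of such a helper (the implementation in the tutorial archive may differ in detail):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressUtil {
    // Convert a 32-bit IPv4 address (as returned by IPv4.getSourceAddress())
    // into a java.net.InetAddress, most significant byte first.
    static InetAddress intToInetAddress(int i) {
        byte[] b = new byte[] {
            (byte) ((i >> 24) & 0xff), (byte) ((i >> 16) & 0xff),
            (byte) ((i >> 8) & 0xff), (byte) (i & 0xff)
        };
        try {
            return InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            return null; // cannot happen for a 4-byte array
        }
    }
}
```

For example, the int 0x0a000001 maps to the address 10.0.0.1.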

After we have retrieved the IP address and port information from the packet, we check whether it is targeted at our load-balanced service (line 28). If it is not addressed to our service, we ignore the packet and let another handler process it (if any) by returning PacketResult.IGNORED as the result of the packet handler.

If the packet is addressed at our service, we choose one of the two physical service instances in a round-robin fashion (line 38–52). The idea is to send the first request to server 1, second request to server 2, third to server 1 again, etc. Note that we might have multiple packet handlers for different packets executed in parallel (at least, we should not rely on a sequential execution as long as we do not know how OpenDaylight handles requests). Therefore, we synchronize this part of the packet handler, to make sure that only one thread is in this code section at a time.
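Stripped of the OpenDaylight types, the synchronized round-robin choice amounts to the following sketch (serverNumber is the field holding the next server to use; the synchronized keyword guarantees that concurrent packet handlers never pick the same state):

```java
// Minimal model of the synchronized round-robin selection between two servers.
public class RoundRobin {
    private int serverNumber = 0; // next server to use (0 or 1)

    // Returns 1 or 2: the server instance serving the next connection.
    // synchronized: only one thread may read and flip serverNumber at a time.
    synchronized int nextServer() {
        if (serverNumber == 0) {
            serverNumber = 1;
            return 1;
        } else {
            serverNumber = 0;
            return 2;
        }
    }
}
```

Successive calls alternate between server 1 and server 2, which yields the request distribution described above.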

Programming Flows

To forward packets to the selected server instance, we re-write the destination IP address and destination MAC address of each incoming packet of this TCP connection to the addresses of the selected server. Note that a TCP connection is identified by the 4-tuple [source IP, source port, destination IP, destination port]. Therefore, we use this information as match criteria for the flow that performs address re-writing and packet forwarding.
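The role of the 4-tuple as connection identity can be illustrated with a small sketch (hypothetical addresses and ports, not part of the tutorial code): two packets carrying the same 4-tuple belong to the same TCP connection and therefore hit the same flow table entry.

```java
import java.util.Objects;

// A TCP connection is identified by its 4-tuple. Modeling the tuple as a
// value object with equals()/hashCode() mirrors how a match rule treats
// all packets of one connection as equivalent.
public class ConnectionKey {
    final String srcIP, dstIP;
    final int srcPort, dstPort;

    ConnectionKey(String srcIP, int srcPort, String dstIP, int dstPort) {
        this.srcIP = srcIP; this.srcPort = srcPort;
        this.dstIP = dstIP; this.dstPort = dstPort;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof ConnectionKey)) return false;
        ConnectionKey k = (ConnectionKey) o;
        return srcPort == k.srcPort && dstPort == k.dstPort
            && srcIP.equals(k.srcIP) && dstIP.equals(k.dstIP);
    }

    @Override public int hashCode() {
        return Objects.hash(srcIP, srcPort, dstIP, dstPort);
    }
}
```

A new source port (i.e., a new client connection) yields a different key, which is why each connection triggers a fresh packet-in event and gets its own flow entry.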

A flow table entry consists of a match rule and a list of actions. As said, the match rule should identify packets of a certain TCP connection. To this end, we create a new Match object and set the required fields as shown in line 58–64. Since we are matching on a TCP/IPv4 datagram, we must identify this packet type by setting the ethertype (0x0800, meaning IPv4) and the protocol id (6, meaning TCP). Moreover, we set the source and destination IP address and port information of the client and service that identify the individual TCP connection.

Afterwards, we define the actions to be applied to a matched packet of the TCP connection. We set an action for re-writing the IP destination address to the IP address of the selected server instance, as well as the destination MAC address (line 70 and 73). Moreover, we define an output action to forward packets over the switch port of the server instance. In line 43 and line 49, we use the Switch Manager Service to retrieve the corresponding connector of the switch by its name. Note that these names are not simply the port numbers but s1-eth1 and s1-eth2 in my setup using Mininet. If you want to find out the name of a port, you can use the web GUI of the OpenDaylight controller (http://controllerhost:8080/) and inspect the port names of the switch.

Sometimes, it might also be handy to enumerate all connectors of a switch (node) — e.g., to flood a packet — using the following method:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);

Finally, we create the flow with match criteria and actions, and program the switch using the Flow Programmer service in line 82.

In the reverse direction from server to client, we also install a flow that re-writes the source IP address and MAC address of outgoing packets to the address information of the public service (line 90–112).

Forwarding Packets

However, we are not done yet. Although every new packet of the connection will now be forwarded to the right server instance, we also have to forward the received initial packet (the TCP SYN) to the right server. To this end, we modify the destination address information of this packet as shown in line 116–119. Then, we use the Data Packet Service to forward the packet using method transmitDataPacket(...).

In this example, we simply re-used the received packet. However, sometimes you might want to create and send a new packet. To this end, you create the payloads of the packets on the different layers and encode them as a raw packet using the Data Packet Service:

TCP tcp = new TCP();
IPv4 ipv4 = new IPv4();
ipv4.setProtocol((byte) 6);  // TCP
ipv4.setPayload(tcp);
Ethernet ethernet = new Ethernet();
ethernet.setPayload(ipv4);
RawPacket destPkt = dataPacketService.encodeDataPacket(ethernet);


Following the instructions from my last tutorial, you can compile the OSGi bundle using Maven as follows:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

Then you start the OpenDaylight controller (here, I assume you use the release version located in directory ~/opendaylight):

user@host:$ cd ~/opendaylight
user@host:$ ./

Afterwards, to avoid conflicts with our service, you should first stop OpenDaylight’s simple forwarding service and OpenDaylight’s load balancing service (which has nothing to do with our load balancing service) from the OSGi console:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
osgi> stop 187

Both of these services implement packet handlers, and for now we want to make sure that they do not interfere with our handler.

Then, we can install our compiled OSGi bundle (located in /home/user/myctrlapp/target)

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

and start it:

osgi> start 256

You can also change the log level of our bundle to see log output down to the trace level:

osgi> setLogLevel de.frank_durr.myctrlapp.PacketHandler trace

Next, we create a simple Mininet topology with one switch and three hosts:

user@host:$ sudo mn --controller=remote,ip= --topo single,3 --mac --arp

Be sure to use the IP address of your OpenDaylight controller host. The option --mac assigns a MAC address according to the host number to each host (e.g., 00:00:00:00:00:01 for the first host). In our implementation, we use these addresses as hard-coded constants.

Option --arp pre-populates the ARP cache of the hosts. I use host 1 and 2 as the server hosts with the IP addresses and Host 3 runs the client. Therefore, I also set a static ARP table entry on host 3 for the public IP address of the service (

mininet> xterm h3
mininet h3> arp -s 00:00:00:00:00:64

On host 1 and 2 we start two simple servers using netcat listening on port 7777:

mininet> xterm h1
mininet> xterm h2
mininet h1> nc -l 7777
mininet h2> nc -l 7777

Then, we send a message to our service from the client on host 3 using again netcat:

mininet h3> echo "Hello" | nc 7777

Now, you should see the output “Hello” in the xterm of host 1. If you execute the same command again, the output will appear in the xterm of host 2. This shows that requests (TCP connections) are correctly distributed among the two servers.

Where to go from here

Basically, you can now implement your service using reactive flow programming. However, some further services might be helpful. For instance, according to the paradigm of logically centralized control, it might be interesting to query the global topology of the network, locations of hosts, etc. This, I plan to cover in future tutorials.

Developing OSGi Components for OpenDaylight

In this tutorial, I will explain how to develop an OSGi component for OpenDaylight that is implementing custom network control logic. In contrast to the REST interface, which I have explained in one of my last posts, OSGi components can receive packet-in events, which are triggered when a packet without matching flow table entry arrives at a switch. Therefore, in order to do reactive flow programming, OSGi components are the right way to go in OpenDaylight.

Even for experienced Java programmers, the learning curve for developing OSGi components for OpenDaylight is quite steep. OpenDaylight uses powerful development tools and techniques like Maven and OSGi. Moreover, the project structure is quite complex and the number of Java classes overwhelming at first. However, as you will see in this tutorial, the development process is quite straightforward and thanks to Maven very convenient.

In order to explain everything step by step, I will go through the development of a simple OSGi component. This component does nothing special. It basically displays a message when an IPv4 packet is received to show the destination address, data path id, and ingress port. However, you will learn many things that will help you in developing your own control components like:

  • How to setup an OpenDaylight Maven project?
  • How to install, uninstall, start, and stop an OSGi bundle in OpenDaylight at runtime?
  • How to manage the OSGi component dependencies and life-cycle?
  • How to receive packet-in events through data packet listeners?
  • How to decode packets using the OpenDaylight Data Packet Service?

I should note here that I will use the so-called API-driven Service Abstraction Layer (SAL) of OpenDaylight. OpenDaylight also implements a second, alternative API called the Model-driven SAL, which I might cover in a future post.

So let’s get started!

The Big Picture

The figure below shows the architecture of our system. It consists of a number of OSGi bundles, each packaging Java classes, resources, and a manifest file. One of these bundles, the MyControlApp bundle, is the one we develop in this tutorial. Other bundles come from the OpenDaylight project, like the SAL (Service Abstraction Layer) bundle.

Bundles are executed atop the OSGi Framework (Equinox in OpenDaylight). The interesting thing about OSGi is that bundles can be installed and removed at runtime, so you do not have to stop the SDN controller to add or modify control logic.


As you can also see, OSGi bundles are offering services that can be called by other OSGi components. One interesting service that comes with OpenDaylight and that we will use during this tutorial is the Data Packet Service (interface IDataPacketService) to decode data packets.

Although our simple control component is not offering functionality to any other bundle, it is important to understand that in order to receive packet-in events, it has to offer a service implementing the IListenDataPacket interface. Whenever an OpenFlow packet-in event arrives at the controller, the SAL invokes the components that implement the IListenDataPacket interface, among them our bundle.


Before we start developing our component, we should get a running copy of OpenDaylight. Recently, the first release version of OpenDaylight was published. You can get a copy from this URL.

Or you can get the latest version from the OpenDaylight GIT repository and compile it yourself:

user@host:$ git clone
user@host:$ cd ./controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Actually, in order to develop an OpenDaylight OSGi component, you do not need the OpenDaylight source code! As we will see below, we can just import the required components as JARs from the OpenDaylight repository.

During the compile process, you see that Maven downloads many Java packages on the fly. If you have never used Maven before, this can be quite confusing. Haven’t we just downloaded the complete project with git? Actually, Maven can automatically download project dependencies (libraries, plugins) from a remote repository and place them into your local repository so they are available during the build process. Your local repository usually resides in ~/.m2. If you look into this repository after you have compiled OpenDaylight, you will see all the libraries that Maven downloaded:

user@host:$ ls ~/.m2/repository/
antlr                     classworlds          commons-fileupload  dom4j          jline  regexp
aopalliance               com                  commons-httpclient  eclipselink    junit  stax
asm                       commons-beanutils    commons-io          equinoxSDK381  log4j  virgomirror
backport-util-concurrent  commons-cli          commons-lang        geminiweb      net    xerces
biz                       commons-codec        commons-logging     io             orbit  xml-apis
bsh                       commons-collections  commons-net         javax          org    xmlunit
ch                        commons-digester     commons-validator   jfree          oro

For instance, you see that Maven has downloaded the Apache Xerces XML parser. We will come back to this nice feature later when we discuss our project dependencies.

I will refer to the root directory of the controller as ~/controller in the following.

Creating the Maven Project

Now we start developing our OSGi component. Since OpenDaylight is based on Maven, it is a good idea to also use Maven for our own project. So we start by creating a Maven project for our OSGi component. First, create the following project structure. I will refer to the root directory of our component as ~/myctrlapp:


Obviously, Java implementations go into the folder src/main/java. I used the package de.frank_durr.myctrlapp for the implementation of my control component.

Essential to the Maven build process is a so-called Project Object Model (POM) file called pom.xml that you have to create in the folder ~/myctrlapp with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>de.frank_durr</groupId>
    <artifactId>myctrlapp</artifactId>
    <version>0.1</version>
    <packaging>bundle</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.3.6</version>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Import-Package>*</Import-Package>
                        <Export-Package>de.frank_durr.myctrlapp</Export-Package>
                        <Bundle-Activator>de.frank_durr.myctrlapp.Activator</Bundle-Activator>
                    </instructions>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.opendaylight.controller</groupId>
            <artifactId>sal</artifactId>
            <version>0.7.0</version>
        </dependency>
    </dependencies>

    <repositories>
        <!-- OpenDaylight releases -->
        <repository>
            <id>opendaylight-release</id>
            <url>http://nexus.opendaylight.org/content/repositories/opendaylight.release/</url>
        </repository>
        <!-- OpenDaylight snapshots -->
        <repository>
            <id>opendaylight-snapshot</id>
            <url>http://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/</url>
        </repository>
    </repositories>
</project>
First, we define our group id (unique id of our organization) and artifact id (name of our component/project) as well as a version number. The packaging element specifies that an OSGi bundle (JAR file with classes, resources, and manifest file) should be built.

During the Maven build process, plugins are invoked. One very important plugin here is the Bundle plugin from the Apache Felix project that creates our OSGi bundle. The import element specifies every package that should be imported by the bundle. The wildcard * imports “everything referred to by the bundle content, but not contained in the bundle” [Apache Felix], which is reasonable and much less cumbersome than specifying the imports explicitly. Moreover, we export every implementation from our package.

The bundle activator is called during the life-cycle of our bundle when it is started or stopped. Below I show how it is used to register for services used by our component and how to export the interface of our component.

The dependency element specifies other packages on which our component depends. Remember when I said that Maven downloads required libraries (JARs) automatically to your local repository in ~/.m2? Of course, it can only do that if you tell Maven what you need. We basically need the API-driven Service Abstraction Layer (SAL) of OpenDaylight. The OpenDaylight project provides its own repository with the readily compiled components (see the repositories element). Thus, Maven will download the JARs from this remote repository. No need to import all the source code of OpenDaylight into Eclipse! In my example, I use the release version 0.7.0. You can also use a snapshot by changing the version to 0.7.0-SNAPSHOT (or whatever version is available in the snapshot repository; just browse the repository URL given above to find out). If you need further packages, have a look at the central Maven repository.

From this POM file, you can now create an Eclipse project by executing:

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Remember to re-create the Eclipse project using this command, when you make changes to the POM.

Afterwards, you can import the project into Eclipse:

  • Menu Import / General / Existing projects into workspace
  • Select root folder ~/myctrlapp

Implementation of OSGi Component: The Activator

In order to implement our OSGi component, we only need two class files: an OSGi activator registering our component with the OSGi framework and a packet handler implementing the control logic and executing actions whenever a packet-in event is received.

First, we implement the activator by creating the file Activator.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp:

package de.frank_durr.myctrlapp;

import java.util.Dictionary;
import java.util.Hashtable;

import org.apache.felix.dm.Component;
import org.opendaylight.controller.sal.core.ComponentActivatorAbstractBase;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Activator extends ComponentActivatorAbstractBase {

    private static final Logger log = LoggerFactory.getLogger(PacketHandler.class);

    public Object[] getImplementations() {
        log.trace("Getting Implementations");

        Object[] res = { PacketHandler.class };
        return res;
    }

    public void configureInstance(Component c, Object imp, String containerName) {
        log.trace("Configuring instance");

        if (imp.equals(PacketHandler.class)) {

            // Define exported and used services for PacketHandler component.

            Dictionary<String, Object> props = new Hashtable<String, Object>();
            props.put("salListenerName", "mypackethandler");

            // Export IListenDataPacket interface to receive packet-in events.
            c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

            // Need the DataPacketService for encoding, decoding, sending data packets
            c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks("setDataPacketService", "unsetDataPacketService").setRequired(true));
        }
    }
}

We extend the base class ComponentActivatorAbstractBase from the OpenDaylight controller. Developers already familiar with OSGi know that there are two methods, start() and stop(), that are called by the OSGi framework when the bundle is started or stopped, respectively. These two methods are overridden in the class ComponentActivatorAbstractBase to manage the life-cycle of an OpenDaylight component. From there, the two methods getImplementations() and configureInstance() are called.

The method getImplementations() returns the classes implementing components of this bundle. A bundle can implement more than one component, for instance, a packet handler for ARP requests and one for IP packets. However, our bundle just implements one component: the one reacting to packet-in events, which is implemented by our PacketHandler class (the second class described below). So we just return one implementation.

Method configureInstance() configures the component and, in particular, declares exported service interfaces and the services used. Since an OSGi bundle can implement more than one component, it is good style to check which component should be configured (line 26).

Then we declare the services exported by our component. Recall that in order to receive packet-in events, the component has to implement the service interface IListenDataPacket. Therefore, by specifying in line 34 that our class PacketHandler implements this interface, we implicitly register our component as a listener for packet-in events. Moreover, we have to give our listener a name (line 31) using the property salListenerName. If you want to understand in detail what happens during registration, I recommend having a look at the method setListenDataPacket() of class org.opendaylight.controller.sal.implementation.internal.DataPacketService. There you will see that, so far, packet handlers are called sequentially. There might be many components that have registered for packet-in events, and you cannot force OpenDaylight to call your listener before another one gets the event. So the order in which listeners are called is basically unspecified. However, you can create dependency lists using the property salListenerDependency. Moreover, using the property salListenerFilter you can set an org.opendaylight.controller.sal.match.Match object for the listener to filter packets according to header fields. Otherwise, you will receive all packets (if no other listener consumes them before our handler is called; see below).
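The sequential dispatch behaviour described here can be modeled in a few lines (a toy sketch, not the actual DataPacketService implementation): listeners are called one after the other, and a listener returning CONSUME stops the chain.

```java
import java.util.List;
import java.util.function.Function;

// Toy model of packet-in dispatch: each registered listener inspects the
// packet; CONSUME ends the chain, IGNORED passes the packet to the next one.
public class DispatchSketch {
    enum PacketResult { CONSUME, IGNORED }

    static PacketResult dispatch(List<Function<String, PacketResult>> listeners, String pkt) {
        for (Function<String, PacketResult> l : listeners) {
            if (l.apply(pkt) == PacketResult.CONSUME) {
                return PacketResult.CONSUME; // a listener consumed the packet
            }
        }
        return PacketResult.IGNORED; // nobody processed it
    }
}
```

This is why stopping the simple forwarding and load balancer sample bundles matters in the first part of this tutorial: a handler earlier in the chain can consume a packet before ours ever sees it.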

Besides exporting our packet listener implementation, we also use other services. These dependencies are declared in line 37. In our example, we only use one service implementing the IDataPacketService interface. You might say now, “fine, but how do I get the object implementing this service to call it?”. To this end, you define two callback functions as part of your component class (PacketHandler), here called setDataPacketService() and unsetDataPacketService(). These callback functions are called with a reference to the service (see implementation of PacketHandler below).
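The set/unset callback mechanism is ordinary setter injection; a generic sketch (with a stand-in service interface, not the OpenDaylight classes) shows the pattern the framework follows:

```java
// Generic sketch of the set/unset callback pattern used by the dependency
// manager: the framework hands the component a service proxy via the set
// callback and withdraws it again via the unset callback.
public class InjectionSketch {
    interface DataPacketService { String decode(String raw); } // stand-in

    static class Handler {
        private DataPacketService dataPacketService;

        void setDataPacketService(DataPacketService s) {
            dataPacketService = s; // store proxy for later service calls
        }

        void unsetDataPacketService(DataPacketService s) {
            if (dataPacketService == s) { // only clear our own reference
                dataPacketService = null;
            }
        }

        boolean ready() { return dataPacketService != null; }
    }
}
```

The guard in the unset callback mirrors the tutorial code: the reference is only cleared if it is the same service instance that was injected, which is relevant when services are swapped at runtime.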

Implementation of OSGi Component: The Packet Handler

The second part of our implementation is the packet handler, which receives packet-in events (the class that you have configured through the activator above). To this end, we implement the class PacketHandler by creating the file PacketHandler.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp:

package de.frank_durr.myctrlapp;

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.opendaylight.controller.sal.core.Node;
import org.opendaylight.controller.sal.core.NodeConnector;
import org.opendaylight.controller.sal.packet.Ethernet;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.opendaylight.controller.sal.packet.IPv4;
import org.opendaylight.controller.sal.packet.Packet;
import org.opendaylight.controller.sal.packet.PacketResult;
import org.opendaylight.controller.sal.packet.RawPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PacketHandler implements IListenDataPacket {

    private static final Logger log = LoggerFactory.getLogger(PacketHandler.class);
    private IDataPacketService dataPacketService;

    static private InetAddress intToInetAddress(int i) {
        byte b[] = new byte[] { (byte) ((i>>24)&0xff), (byte) ((i>>16)&0xff), (byte) ((i>>8)&0xff), (byte) (i&0xff) };
        InetAddress addr;
        try {
            addr = InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            return null;

        return addr;

     * Sets a reference to the requested DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
    void setDataPacketService(IDataPacketService s) {
        log.trace("Set DataPacketService.");

        dataPacketService = s;

     * Unsets DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
    void unsetDataPacketService(IDataPacketService s) {
        log.trace("Removed DataPacketService.");

        if (dataPacketService == s) {
            dataPacketService = null;

    public PacketResult receiveDataPacket(RawPacket inPkt) {
        log.trace("Received data packet.");

        // The connector, the packet came from ("port")
        NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
        // The node that received the packet ("switch")
        Node node = ingressConnector.getNode();

        // Use DataPacketService to decode the packet.
        Packet l2pkt = dataPacketService.decodeDataPacket(inPkt);

        if (l2pkt instanceof Ethernet) {
            Object l3Pkt = l2pkt.getPayload();
            if (l3Pkt instanceof IPv4) {
                IPv4 ipv4Pkt = (IPv4) l3Pkt;
                int dstAddr = ipv4Pkt.getDestinationAddress();
                InetAddress addr = intToInetAddress(dstAddr);
                System.out.println("Pkt. to " + addr.toString() + " received by node " + node.getNodeIDString() + " on connector " + ingressConnector.getNodeConnectorIDString());
                return PacketResult.KEEP_PROCESSING;
        // We did not process the packet -> let someone else do the job.
        return PacketResult.IGNORED;

As you can see, our handler implements the listener interface IListenDataPacket. This interface declares the function receiveDataPacket(), which is called with the raw packet after a packet-in event from OpenFlow.

In order to parse the raw packet, we use the OpenDaylight Data Packet Service (object dataPacketService). As described for the activator, during the component configuration, we set two callback functions in our packet handler implementation, namely, setDataPacketService() and unsetDataPacketService(). Method setDataPacketService() is called with a reference to the data packet service, which is then used for parsing raw packets. After receiving a raw packet “inPkt”, we call dataPacketService.decodeDataPacket(inPkt) to get a layer 2 frame. Using instanceof, we can check for the class of the returned packet. If it is an Ethernet frame, we go on and get the payload from this frame, which is the layer 3 packet. Again, we check the type, and if it is an IPv4 packet, we dump the destination address.
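The helper intToInetAddress() is worth a closer look: OpenDaylight's IPv4 class returns the destination address as a plain int with the most significant byte first, so the handler shifts out the four bytes and hands them to java.net.InetAddress. The conversion can be tried standalone; this sketch just repeats the helper outside of the handler class:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressDemo {
    // Same conversion as PacketHandler.intToInetAddress(): the int holds the
    // IPv4 address with the most significant byte first.
    static InetAddress intToInetAddress(int i) {
        byte[] b = new byte[] {
            (byte) ((i >> 24) & 0xff), (byte) ((i >> 16) & 0xff),
            (byte) ((i >> 8) & 0xff), (byte) (i & 0xff)
        };
        try {
            return InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            // Cannot happen for a 4-byte array, but the API forces us to handle it.
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(intToInetAddress(0x0a000001).getHostAddress()); // prints "10.0.0.1"
    }
}
```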

Moreover, the example shows how to determine the node (i.e., switch) that received the packet and the connector (i.e., port) on which the packet was received (see the calls to getIncomingNodeConnector() and getNode() in receiveDataPacket()).

Finally, we decide whether the packet should be further processed by another handler, or whether we want to consume the packet by returning a corresponding return value. PacketResult.KEEP_PROCESSING says, our handler has processed the packet, but others should also be allowed to do so. PacketResult.CONSUME means, no other handler after us receives the packet anymore (as described above, handlers are sorted in a list and called sequentially). PacketResult.IGNORED says, packet processing should go on since we did not handle the packet.

Deploying the OSGI Bundle

Now that we have implemented our component, we can first compile and bundle it using Maven:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

If our POM file and code are correct, this should create the bundle (JAR file) ~/myctrlapp/target/myctrlapp-0.1.jar.

This bundle can now be installed in the OSGi framework Equinox of OpenDaylight. First, start the controller:

user@host:$ cd ~/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/
user@host:$ ./

In the OSGi console install the bundle by specifying its URL:

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

We see that the id of our bundle is 256. Using this id, we can start the bundle next:

osgi> start 256

You can check whether it is running by listing all OSGi bundles using the command ss:

osgi> ss
251 ACTIVE org.opendaylight.controller.hosttracker.implementation_0.5.1.SNAPSHOT
252 ACTIVE org.opendaylight.controller.sal-remoterpc-connector_1.0.0.SNAPSHOT
253 ACTIVE org.opendaylight.controller.config-persister-api_0.2.3.SNAPSHOT
256 ACTIVE de.frank_durr.myctrlapp_0.1.0

Similarly, you can stop and uninstall the bundle using the commands stop and uninstall, respectively:

osgi> stop 256
osgi> uninstall 256

Before we test our bundle, we stop two OpenDaylight services, namely, the Simple Forwarding Service and Load Balancing Service:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
osgi> stop 187

Why did we do that? Because these are the two other services that also implement a packet listener. For testing, we want to make sure they are not getting in our way and consuming packets before we can get them.


For testing, we use a simple linear Mininet topology with two switches and two hosts connected at the ends of the line:

user@host:$ sudo mn --controller=remote,ip= --topo linear,2

The given IP is the IP of our controller host.

Now let’s ping host 2 from host 1 and see the output in the OSGi console:

mininet> h1 ping h2

Pkt. to / received by node 00:00:00:00:00:00:00:01 on connector 1
Pkt. to / received by node 00:00:00:00:00:00:00:02 on connector 1

You see that our handler received a packet from both switches with the data path ids 00:00:00:00:00:00:00:01 and 00:00:00:00:00:00:00:02, as well as the ports (1) on which they have been received and the destination IP addresses. So it worked.

Where to go from here?

What I did not show in this tutorial is how to send a packet. If you join me again, you can see that in one of my next tutorials here on this blog.

Securing OpenDaylight’s REST Interfaces

OpenDaylight comes with a set of REST interfaces. For instance, in one of my previous posts, I have introduced OpenDaylight’s REST interface for programming flows. With these interfaces, you can easily outsource your control logic to a remote server other than the server on which the OpenDaylight controller is running. Basically, the controller offers a web service, and the control application invokes this service sending REST requests via HTTP.

Although the concept to offer network services as web services is very nice and lowers the barriers to “program” the network significantly, it also brings up security problems well known from web services. If you do not authenticate clients, any client that can send HTTP requests to your controller can control your network — certainly something you want to avoid!

Therefore, in this post, I will show how to secure OpenDaylight’s REST interfaces.

Authentication in OpenDaylight

The REST interfaces are so-called northbound interfaces between controller and control application. So you can think of the controller as the service and the control application as the client as shown in the figure below.


In order to ensure that the controller only accepts requests from authorized clients, clients have to authenticate themselves. OpenDaylight uses HTTP Basic authentication, which is based on user names and passwords (default: admin, admin). Sounds good: So only a client with the valid password can invoke the service … or is there a problem? In order to see the security threats, we have to take a closer look at the HTTP Basic authentication mechanism.

The following command invokes the Flow Programmer service of OpenDaylight via cURL and prints the HTTP header information of the request:

user@host:$ curl -u admin:admin -H 'Accept: application/xml' -v 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/'
* About to connect() to localhost port 8080 (#0)
* Trying connected
* Server auth using Basic with user 'admin'
> GET /controller/nb/v2/flowprogrammer/default/ HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/ libidn/1.23 librtmp/2.3
> Host: localhost:8080
> Accept: application/xml
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 01:00:00 CET
< Set-Cookie: JSESSIONIDSSO=9426E0F12A0A0C80BE549451707EF339; Path=/
< Set-Cookie: JSESSIONID=DB23D1EE61348E101E6CE8117A04B8D8; Path=/
< Content-Type: application/xml
< Content-Length: 62
< Date: Sun, 12 Jan 2014 16:50:38 GMT
* Connection #0 to host localhost left intact
* Closing connection #0
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list/>

The interesting header field is “Authorization” with its value “Basic YWRtaW46YWRtaW4=”. Here, “YWRtaW46YWRtaW4=” is the user name and password sent from the client to the controller. Although this value looks quite cryptic, it is actually plain text. It is simply the Base64 encoding of the user name and password string “admin:admin”. Base64 re-encodes every three 8-bit bytes as four 6-bit values, each mapped to a printable character, involving no encryption or hashing at all! Basically, it comes from the time when SMTP was restricted to sending 7 bit ASCII characters. Everything else, like binary (8 bit) content, had to be translated to 7 bit characters first, and exactly that is the job of Base64 encoding. You can use paper and pencil to decode it: just interpret the bit pattern of three 8 bit characters as four 6 bit values and look up those values in the Base64 table. Or, if you are lazy, just use one of the many Base64 decoders on the web.
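Or decode it with a couple of lines of Java, using the standard java.util.Base64 class (available since Java 8):

```java
import java.util.Base64;

public class BasicAuthDecode {
    public static void main(String[] args) {
        // The value from the Authorization header above
        String encoded = "YWRtaW46YWRtaW4=";
        String decoded = new String(Base64.getDecoder().decode(encoded));
        System.out.println(decoded); // prints "admin:admin"
    }
}
```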

Now the problem should become obvious. If your network between client and controller is non-trusted and an attacker can eavesdrop on the communication channel, he can read your user name and password.

Securing the REST Interface

Now that we see the problem, also the solution should become obvious. We need a secure channel between client and controller, so an attacker cannot read the header fields of the HTTP request. The HTTPS standard provides exactly that. Moreover, the client can make sure that it really connects to the right controller, and not the controller of an attacker who just wants to intercept our password. So we use HTTPS to encrypt the channel between client and controller and to authenticate the controller, and HTTP Basic authentication to authenticate the client.

So the trick is enabling HTTPS in OpenDaylight, which is turned off by default. Note that above we used the insecure HTTP protocol on port 8080. Now we want to use HTTPS on port 8443 (or 443 if you want to use the official HTTPS port instead of the alternative port).

OpenDaylight uses the Tomcat servlet container to provide its web services. Therefore, the steps to enable HTTPS are very similar to configuring Tomcat.

First, we need a server certificate that the client can use to authenticate the server. Of course, a certificate signed by a trusted certification authority would be best. However, here I will show how to create your own self-signed certificate using the Java keytool:

user@host:$ keytool -genkeypair -v -alias tomcat -storepass changeit -validity 1825 -keysize 1024 -keyalg DSA
What is your first and last name?
What is the name of your organizational unit?
[Unknown]: Institute of Parallel and Distributed Systems
What is the name of your organization?
[Unknown]: University of Stuttgart
What is the name of your City or Locality?
[Unknown]: Stuttgart
What is the name of your State or Province?
[Unknown]: Baden-Wuerttemberg
What is the two-letter country code for this unit?
[Unknown]: de
Is, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de correct?
[no]: yes

Generating 1,024 bit DSA key pair and self-signed certificate (SHA1withDSA) with a validity of 1,825 days
for:, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de
Enter key password for <tomcat>
(RETURN if same as keystore password):
Re-enter new password:
[Storing /home/duerrfk/.keystore]

This creates a certificate valid for five years (1825 days) and stores it in the keystore .keystore in my home directory /home/duerrfk. As first and last name, we use the DNS name of the machine the controller is running on. The rest should be pretty obvious.

With this information, we can now configure the OpenDaylight controller. First, check out and compile OpenDaylight, if you haven’t done so already:

user@host:$ git clone
user@host:$ cd controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Now edit the following file, where “controller” is the root directory of the controller you checked out:

user@host:$ emacs controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/configuration/tomcat-server.xml

Uncomment the following XML element:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreFile="/home/duerrfk/.keystore"
keystorePass="changeit" />
Use the keystore location and password that you used before with the keytool command.

Now you can start OpenDaylight and connect via HTTPS to the controller on port 8443. Use your web browser to try it.

Making Secure Calls

One last step needs to be done before we can call the controller securely from a client. If you did not use a certificate signed by a well-known certification authority (like our self-signed certificate above), you need to present the client with the server certificate it should use for authenticating the controller. If you are using cURL, the required option is “--cacert”:

user@host:$ curl -u admin:admin --cacert ~/cert-duerr-mininet.pem -v -H 'Accept: application/xml' ''

So the last question is, how do we get the server certificate in PEM format? To this end, we can use openssl to call our server and store the returned certificate:

user@host:$ openssl s_client -connect

The PEM certificate is everything between “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” (including these two lines), so we can just copy this to a file. Note that you have to make sure that the call is actually going to the right server (not the server of an attacker). So better call it from the machine where your controller is running to avoid a “chicken and egg” problem.
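If you do not want to copy the block by hand, a small sed one-liner can cut it out of the saved s_client output. The file name s_client.out and the host placeholder below are just examples:

```shell
# Assuming the s_client output was saved to a file first, e.g.:
#   openssl s_client -connect <controller-host>:8443 < /dev/null > s_client.out
# this prints everything from the BEGIN to the END marker, inclusive:
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' s_client.out > cert.pem
```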


Now we can securely outsource our control application to a remote host, for instance, a host in our campus network or a cloud server running in a remote data center.