Fiery IoT: A Tutorial on Implementing IoT Cloud Services with Google Firebase — Part 1

Abstract

In this tutorial, we will show how to implement an IoT service with Google Firebase. The IoT service consists of IoT home gateways of users for connecting sensors, and mobile apps as clients. In particular, we will show how to use the Firebase realtime database for storing sensor events and notifying clients in realtime about events.

In the first part, we will show how to connect the IoT home gateways of users to the realtime database, and how to use Google accounts as the authentication method for the IoT home gateways of individual users to implement a multi-tenancy service with Firebase. For implementing IoT gateways, we will use Node.js plus an Apache web-server.

In part two, we will connect mobile apps that receive realtime notifications of sensor events.

Although we take an IoT service as motivating example, many parts of the tutorial are generic and also applicable to mobile services in general. Thus, this tutorial could also be useful for other Firebase users beyond the IoT.

All code of the tutorial can be found at GitHub.

Motivation: The Detect-Store-Notify Pattern

Google Firebase is a cloud-based application platform that was originally developed for supporting mobile apps. In particular, Firebase includes a so-called realtime database that, besides just storing data, can also sync mobile clients in “realtime”. To this end, apps can register for data change events, and whenever data is updated, the app gets a notification and a snapshot of the changed data items. In this tutorial, we will show that such a realtime database not only facilitates the development of mobile apps in general, but also of IoT applications and services in particular.

We observe that many IoT applications follow a pattern that can be best described as detect-store-notify. First, an event is detected. In IoT scenarios such events are often triggered by changes in the physical world detected by sensors. Two very simple examples are the ringing of a door bell (door bell event) or a temperature sensor detecting a sensor value higher or lower than a user-defined threshold value (temperature event). Since events are typically defined to detect some meaningful change in the physical world like “a person standing at the front door” (door bell event) or “temperature in living room too low” (temperature event), they serve as input to some control application, which automatically triggers actions, or an app implementing a user interface to notify the user. For instance, a control application could automatically turn up the heating after a “temperature low” event, or a notification could be presented on a mobile phone after a door bell event. In any case, we need to forward an event notification to some application.

At the same time, we often want to keep a history of past events, i.e., we want to store sensor events persistently in a database. Then later the user can get an overview of what happened, or we can analyze event histories. In general, in the age of “big data”, there seems to be a trend not to discard data, which later could prove to be useful, but to keep it for later processing, data mining, or machine learning.

A realtime database as included with Firebase covers both, the storage of data as well as sending notifications. Whenever a sensor event is written to the database, a value update is sent out to subscribers. The lean-and-mean tree-based data scheme of Firebase, which is similar to JSON objects, makes it easy to structure your data to allow for targeted subscriptions. For instance, one could add a sub-tree of sensor ids to let applications register to updates of individual sensors. At the same time, we could add events to a sub-tree of sensor types like temperature, door bell, etc. to subscribe to events by the type of sensors:

|
|--sensorid
|  |
|  |--[sensor id 1]
|  |  |
|  |  |--[sensor event 1]
|  |  |  |
|  |  |  |--[sensor data]
|  ...
|
|--sensortype
|  |
|  |--temperature
|  |  |
|  |  |--[sensor id 1]
|  |  |  |
|  |  |  |--[sensor event 1]
|  |  |  |  |
|  |  |  |  |--[sensor data]
|  |  ...
|  |
|  |--doorbell
|  |  |
...

Note that redundant data of the same sensor event is stored several times in the database, once in the sensor id sub-tree and once in the sensor type sub-tree. This denormalized storage of redundant data might seem counter-intuitive for users of relational databases. However, it is required for efficiently retrieving data and subscribing to events in Firebase by selecting a tree node subsuming all of its child nodes. Notifications are then sent out for all changes in a sub-tree rooted at the node of interest.
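
For instance, with the Firebase JavaScript SDK used later in this tutorial, such a denormalized fan-out write could be implemented with a single multi-path update() call. The following is only a sketch; the sensor id, event payload, and path names are made up for illustration:

// Sketch: write the same sensor event to both sub-trees atomically.
// "sensor1", the payload, and the paths are illustrative examples.
var event = { 'value': 21.5, 'time': new Date().toString() };
// Generate one unique event id and reuse it in both locations.
var eventKey = firebase.database().ref('sensorid/sensor1').push().key;
var fanout = {};
fanout['sensorid/sensor1/' + eventKey] = event;
fanout['sensortype/temperature/sensor1/' + eventKey] = event;
firebase.database().ref().update(fanout);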

Of course, you could use dedicated services for both individual functions (storing and notifications), for instance, an MQTT messaging service plus a database like MongoDB. However, setting up and connecting these services and apps is more complex than just using a realtime database offered as a single integrated service. As we will see during this tutorial, the Firebase realtime database offered as a software service by Google makes it very easy to connect an IoT gateway and apps to the realtime database and implement functions for reading, writing, and eventing with just a few lines of code. In addition, Firebase handles user authentication, multi-tenancy, and scalability in an elegant and easy way.

Goal of this Tutorial

In this tutorial, we show step-by-step how to implement a simple IoT service with Google Firebase. The architecture of this IoT service consists of three types of components:

  • Sensors detecting events and sending them to the private IoT home gateway of the owner of the sensors.
  • IoT home gateways installed in the home network of each user (sensor owner). These gateways collect sensor events, and possibly pre-filter and enrich them, for instance by comparing the sensor value to a threshold and adding a timestamp. Then events are forwarded by the IoT home gateway to the Firebase database running in the Google cloud.
  • The Firebase realtime database storing event histories and sending event notifications to the mobile app of the user or other services interested in sensor events.
sensor 1 --|
           |                 sensor events
sensor 2 ----> IoT Gateway ----------------|        Mobile App
           |   of user 1                   |        of user 1
sensor n --|                               |             ^
                                           v             |  notification
                                         Google ---------| 
                                        Firebase  
                                   Realtime Database ----|
                                           ^             |  notification
                                           |             v
sensor 1 --|                               |        Mobile App 
           |                 sensor events |        of user 2
sensor 2 ----> IoT Gateway ----------------|
           |   of user 2
sensor n --|

The IoT service should serve a larger population of users, each one with his own sensors and IoT home gateway, with only one Firebase database (multi-tenancy). In other words, in this tutorial, we take the role of an IoT service provider offering an IoT smart-home service for many customers. Setting up a dedicated Firebase database for each customer would be too costly and would not scale with the number of customers in terms of complexity. Instead, we have one database for all customers/users. Each user should only have access to his own data and events. Consequently, the Firebase database must implement access control to protect datasets of different users.

We will not pay much attention to the sensor part and connecting sensors to the IoT home gateway, and rather focus on the components connected to the Firebase database, i.e., the IoT home gateway and apps. If you are interested in how to connect sensors to the IoT home gateway, you could have a look at our Door Bell 2.0 project, which connects a simple door bell sensor via Bluetooth Low Energy (BLE) to a Node.js IoT gateway. It should be rather straightforward to merge the Node.js code from the Door Bell 2.0 project and the code of the IoT gateway presented in this tutorial.

All code of the tutorial can be found at GitHub.

We first look at how to integrate IoT gateways implemented in Node.js with Firebase, before we consider the synchronization of mobile apps implemented in Android with Firebase.

Setting up Firebase

We want all sensor events to be stored in the Firebase database. Obviously, to this end, we first need to set up a database in Firebase:

  1. Log in to Firebase at https://console.firebase.google.com/
  2. Create a new project. We give this project the name “FieryIoT”. Firebase automatically assigns a project id (say “fieryiot-12345”) and a Web API key (say “abcdefghijklmnopqrstuvwxyz1234567890”), as you can verify by showing the project settings (wheel symbol).
Firebase Settings

We will later use Google user accounts to sign in to Firebase to protect and isolate the data of different users. To this end, we must enable the corresponding authentication method. In the Firebase console, go to “Authentication” and enable the “Google” authentication method. Here, you can also show your web-client id, which is later used during authentication. The web-client id should look like this:

1234567890123-12345678901234567890abcdefghijklmnopqrstuvwxyz.apps.googleusercontent.com

Next, we need to define a schema and access rules for our database. Firebase uses a tree-based structure for the database similar to JSON objects. Our database has the following tree structure:

|
|--sensorevents
   |
   |--[user1 id]
   |  |
   |  |--[event id]
   |     |
   |     |--[event data 1]
   |     |
   |     |--... 
   |
   |--[user2 id]
   |  |
   |  |--[event id]
   |     |
   |     |--[event data 1]
   |     | 
   |     |--...
   |
   |--...

Compared to the example in the motivation, where we used different sub-trees for different sensors and sensor types, this structure is simplified to keep the description simple. However, extensions should be straightforward.

Note that every user has his own branch defined by his user id under the node sensorevents to store his own IoT events (e.g., a door bell event, door unlock events, etc.). Actually, we do not need to define this structure in the Firebase console, unlike in an SQL database where you would need to define the table structure before storing data. Firebase is schemaless; however, we could define validation rules to ensure the consistency of stored data, which we will not do here to keep things simple.

However, we must define security rules to protect data from unauthorized access (reading and writing) by other users than the owner of the sensors. This will also prevent users from adding branches/data in sub-trees of other users, i.e., anywhere outside their own branch.

Set up security rules by opening “Database / Rules” in the Firebase console and adding the following rules:

{
    "rules": {
        "sensorevents" : {
            "$user_id": {
                ".read": "$user_id === auth.uid",
                ".write": "$user_id === auth.uid"
            }
        }
    }
}

These rules allow each user to read and write only his own branch /sensorevents/[user id] as defined by his user id. $user_id is a placeholder for the user id, and auth.uid is a placeholder for the id of an authenticated user. Access rights will be inherited down the hierarchy, i.e., each user has read and write access to all data below the node /sensorevents/[user id]. Users must authenticate as shown below such that Firebase can check these rules online.

Security Rules

Firebase also includes a simulator to check security rules from the Firebase console before making them effective by publishing them. Try it out with the rules defined above: attempt to read and write different branches as an authenticated and a non-authenticated user! Then publish the rules in the console before going to the next step.

Authentication of IoT Home Gateways to Firebase

An IoT home gateway is implemented by a Node.js process. In order to let the IoT home gateway write sensor events to the Firebase database, it needs to authenticate itself to Firebase. Firebase supports different authentication methods. Here, we use the Google account of the user owning the IoT home gateway for authentication. The advantage of this method is that you can serve different users with the same Firebase database (multi-tenancy). Every tenant (IoT home gateway owner with a Google account) only has access to his own branch in the database, i.e., his own dataset.

To implement the Google account authentication method, the user needs to pass on authentication information to the IoT home gateway (Node.js server). We implement this through a web-frontend of the IoT gateway. To this end, the machine running the IoT home gateway actually hosts two servers:

  1. Web-server (Apache server)
  2. IoT home gateway (Node.js server)

The user signs in to his Google account through his web-browser by clicking a button on a web-page downloaded from the web-server. The web-browser interacts with a Google server to sign in to the Google account (JavaScript code executed in the web-browser). After signing in, a credential is transferred to the web-browser. This credential is then passed on to the IoT gateway via the web-server, i.e., the web-server (Apache) acts as a reverse proxy between the external client (browser) and the internal server (IoT home gateway = Node.js server). We pass on the credential through HTTP POST requests from the web-browser to the web-server, and from the web-server to the IoT gateway. For the second step (proxy to IoT gateway), the IoT gateway also implements an HTTP server in Node.js. The credential is then used by the IoT gateway to authenticate to Firebase. Note that all communication from the browser goes through the Apache web-server, so we do not have to deal with any cross-site security problems since to the web-browser this looks like a single server.

In detail, we need to go through the following steps:

  1. Set up a web-page on the web-server for signing-in to Google.
  2. Configure Apache to act as a reverse proxy for passing on the credential to the IoT home gateway.
  3. Implement the HTTP server on the IoT gateway in Node.js to receive the credential from the web-server and authenticate to Firebase.

Web-page for Signing-in to Google

Create a sign-in web-page on the web-server. The following bare-minimum web-page just shows the required sign-in button:

Sign-in Web-page

<!DOCTYPE html>
<html>

<head>

<meta name="google-signin-client_id" content="1234567890123-12345678901234567890abcdefghijklmnopqrstuvwxyz.apps.googleusercontent.com">
<meta name="google-signin-cookiepolicy" content="single_host_origin">
<meta name="google-signin-scope" content="profile email">

<script src="https://apis.google.com/js/platform.js" async defer></script>

<script>

function onSignIn(googleUser) {
    var id_token = googleUser.getAuthResponse().id_token;
    // Send credential to IoT gateway via proxy through HTTP POST request.
    var httpReq = new XMLHttpRequest();
    httpReq.onloadend = function () {
        alert(id_token);
    };
    var url = "/auth/credential";
    httpReq.open("POST", url, true);
    httpReq.setRequestHeader('Content-Type', 'text/plain; charset=UTF-8');
    httpReq.send(id_token);
}

</script>

</head>

<body>

<div class="g-signin2" data-onsuccess="onSignIn" data-theme="dark"></div>

</body>

</html>

The client id “1234567890123-12345678901234567890abcdefghijklmnopqrstuvwxyz.apps.googleusercontent.com” must be replaced by the OAuth client id (web client ID) of the Firebase project created above. You can also visit the following page to find out the ids of all of your Google projects:

https://console.developers.google.com/apis/credentials?project=_

The JavaScript code for signing-in comes from Google (script element referring to https://apis.google.com/js/platform.js). We just need to add a callback function onSignIn() that is called when the user has signed in. This function sends the credential as an HTTP POST request (XMLHttpRequest()) to the web-server. Note the resource /auth/credential used for the POST request. The Apache web-server is configured to forward all requests for resources /auth/* to the IoT gateway (reverse proxy configuration) as shown next.

Configuring Apache for Passing-on Credentials to the IoT Gateway

Enable the required modules of the Apache web-server for reverse proxying:

$ sudo a2enmod proxy proxy_http

Configure the Apache web-server to forward HTTP requests to the URL http://myiotgateway/auth/* to the IoT gateway (Node.js server) listening on port 8080 of the same host that also runs the web-server (localhost from the point of view of the web-server). Note that it makes sense to use HTTPS rather than HTTP between browser and web-server since then the credential will be transferred over an encrypted channel over the network (this is less critical for messages between web-server (proxy) and IoT gateway since they run on the same host and no messages can be observed in the network). You can find many instructions on how to set up Apache with SSL on the web. For the sake of simplicity we will continue with plain HTTP here. It might also be a good idea not to expose the web-frontend to the Internet by setting firewall and/or Apache rules on the IoT gateway host, for instance, to just allow requests from the local area network of the IoT home gateway to minimize the attack surface.

Add the following block to your Apache configuration:

# No need to enable this for *reverse* proxies.
ProxyRequests off

<Proxy *>
Require all denied
Require ip 192.168.1
Require local
</Proxy>

ProxyPass "/auth" "http://localhost:8080/"

The Proxy element only allows requests from the network 192.168.1.0/24 or from 127.0.0.0/8 (localhost). ProxyPass forwards all requests for the partial URL http://myiotgateway/auth/ to the IoT gateway on localhost, port 8080.

IoT Gateway Implementation (Node.js)

The IoT gateway is implemented in Node.js as shown below.

Before you can use this code, you must install the Firebase Node.js code provided by Google with the following command executed from folder iot-gateway (the folder containing the Node.js implementation of the IoT gateway):

$ npm install firebase

You should then see a folder node_modules with the Firebase library code.

In the code, you need to define your Firebase project and database settings in the structure fbconfig. You got the API key and Firebase project id when creating the Firebase project above.

var http = require('http');
var firebase = require('firebase');

const port = 8080;
const host = 'localhost';

var fbconfig = {
    apiKey: "abcdefghijklmnopqrstuvwxyz1234567890",
    authDomain: "fieryiot-12345.firebaseapp.com",
    databaseURL: "https://fieryiot-12345.firebaseio.com"
    //storageBucket: ".appspot.com",
};
firebase.initializeApp(fbconfig);

firebase.auth().onAuthStateChanged(function(user) {
    if (user) {
        console.log("Signed in to Firebase");
    } else {
        console.log("No user signed in");
    }
});

function authenticateToFirebaseGoogleUser(idToken) {
    // Sign in with credential of Google user.
    var credential = firebase.auth.GoogleAuthProvider.credential(idToken);
    firebase.auth().signInWithCredential(credential).catch(
        function(error) {
            console.log("Error signing in to Firebase with user " +
                error.email + ": " + error.message + " (" +
                error.code + ")");
        });
}

var server = http.createServer(function(req, res) {
    if (req.method == 'POST') {
        console.log("POST request");
        var body = '';
        req.on('data', function(data) {
            body += data;
        });
        req.on('end', function() {
            authenticateToFirebaseGoogleUser(body);
        });
        res.statusCode = 200;
        res.setHeader('Content-Type', 'text/plain');
        res.end('Credential received\n');
    } else {
        // Methods other than POST are not allowed.
        // Allowed methods are returned in 'Allow' header field.
        console.log("Unsupported HTTP request: " + req.method);
        res.statusCode = 405;
        res.setHeader('Content-Type', 'text/plain');
        res.setHeader('Allow', 'POST');
        res.end("Method not supported\n");
    }
});
server.listen(port, host);
console.log('HTTP server listening on ' + host + ':' + port);

The IoT gateway receives the credential from the web-server through an HTTP POST request (see the lines following var server = http.createServer(...)). The IoT gateway only handles POST requests. Any other request (GET, OPTIONS, …) is not accepted (HTTP status code 405 “Method Not Allowed”). The handler for the POST request receives the credential from the body of the HTTP POST request and returns a 200 “OK” status code.

The credential is then used for authentication to Firebase as shown in the function authenticateToFirebaseGoogleUser(idToken). The idToken is the data received from the web-server (proxy), which is converted to a credential object with firebase.auth.GoogleAuthProvider.credential(idToken). With the command firebase.auth().signInWithCredential(credential), the authentication with Firebase is triggered using this credential. If the authentication succeeds, the callback function firebase.auth().onAuthStateChanged(function(user)) will be called with the signed-in user.

Now, the IoT home gateway is ready to use the Firebase database for reading and writing data from/to the database. You can start the IoT home gateway like this:

$ node iot-gateway.js

Writing Data to the Database

Typically, sensor events from sensors connected to the IoT home gateway would trigger updates from the IoT home gateway to the Firebase database. As mentioned above, we do not focus on the sensor-to-gateway connection in this tutorial, but rather on the interaction between the IoT home gateway and the Firebase database. Therefore, we simulate sensor updates by a simple timer in the IoT home gateway that periodically triggers updates to the database every 15 s:

// Simulate sensor events through a periodic timer.
function sensorUpdate() {
    console.log("Sensor event");

    var user = firebase.auth().currentUser;
    if (user) {
        // User is signed-in
        var uid = user.uid;
        var databaseRef = firebase.database();
        var newEventRef = databaseRef.ref('sensorevents/' + uid).push();
        var timestamp = new Date().toString();
        newEventRef.set({
            'value': 'foo-sensor-value',
            'time': timestamp
        });
        console.log("Added new item to database");
    }
}
var timerSensorUpdates = setInterval(sensorUpdate, 15000);

The interesting part here is function sensorUpdate(), which writes a sensor event to the database in the sub-tree sensorevents/. Remember the security rule we set up above? There, we defined that an authorized user can write to exactly this sub-tree sensorevents/ defined by his user id. Function push() adds an element with a unique id to this sub-tree and returns a reference to this element. Then, we can set the values of this element using function set() with some key/value pairs. It’s that simple!

If the user is not signed in, firebase.auth().currentUser will be null, so we cannot write to the database since only authorized users can write items to their own branch defined by the user id.

You can also try to add something in another branch outside the user branch. Then, you should receive a “permission denied” error.
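
The rejected write surfaces as an error in the promise returned by set(). A minimal sketch (the foreign path below is made up):

// Sketch: writing outside the own user branch is rejected by the
// security rules; the path "sensorevents/some-other-uid" is made up.
firebase.database().ref('sensorevents/some-other-uid').push().set({
    'value': 'foo'
}).catch(function(error) {
    console.log("Write rejected: " + error.message); // PERMISSION_DENIED
});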

The following image shows the database content after some updates. In the Firebase console, you can watch in realtime how these values are added every 15 s:

Database Content
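
Besides watching the console, you can also observe these updates programmatically. The following minimal sketch subscribes to new events of the signed-in user using the same Firebase JavaScript SDK; the Android app in part two works analogously:

// Sketch: subscribe to sensor events of the signed-in user. The
// child_added callback fires for each existing event and then for
// every newly added event in realtime.
var uid = firebase.auth().currentUser.uid;
firebase.database().ref('sensorevents/' + uid).on('child_added',
    function(snapshot) {
        console.log('New sensor event: ' + JSON.stringify(snapshot.val()));
    });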

Outlook

Stay tuned for part two of the tutorial explaining how to connect apps to receive sensor event updates in realtime!

Door Bell 2.0 — IoT Door Bell

What is Door Bell 2.0?

Door Bell 2.0 (or DoorBell20 for short) is a Bluetooth Low Energy (BLE) appliance to monitor a door bell and send notifications whenever the door bell rings. It turns a conventional door bell into a smart door bell that can be connected to the Internet of Things (IoT), e.g., using the DoorBell20 If This Then That (IFTTT) client. Thus, DoorBell20 is the modern version of a door bell, or, as the name suggests, the door bell version 2.0 for the IoT era.

Full source code and hardware design is available at GitHub.

DoorBell20 consists of two major parts:

  • The DoorBell20 monitoring device, which is connected in parallel to the door bell and wirelessly via BLE to a client running on a remote IoT gateway, e.g., a Raspberry Pi with Bluetooth stick.
  • A DoorBell20 client running on the IoT gateway passing on notifications received via BLE to a remote cloud service. Different clients can be implemented for different IoT cloud services. So far, DoorBell20 includes a client for If This Then That (IFTTT), which makes it very easy to trigger different actions when a door bell event is detected. For instance, a notification can be sent to a mobile phone or trigger an IP camera installed at the door to take pictures.

The following ASCII art shows the big picture of how DoorBell20 works.

                  [IoT Cloud Service]
                  [  (e.g., IFTTT)  ]
                           | ^
                 Internet  | | Door Bell Event Notifications
                           |
                [      IoT Gateway      ]
                [ w/ DoorBell20 Client  ]
                [ (e.g., IFTTT Trigger) ]
                           |  ^
           BLE Connection  |  | Door Bell Event Notifications  
                           |
|___________[DoorBell20 Monitoring Device]_________|
|                                                  |
|____________________[Door Bell]___________________|
|                                                  |
|                                                  |
|                                                 \   Door Bell Push Button
|                                                  \
|                                                  |
|________________(Voltage Source)__________________|
                 (    12 VAC    )

The following images show the DoorBell20 monitoring device, its connection to a door bell, and a door bell event notification displayed by the If This Then That (IFTTT) app on a mobile phone.

DoorBell20 monitoring device

DoorBell20 device connected to door bell.

IFTTT client showing door bell event notification.

The main features of DoorBell20 are:

  • Open-source software and hardware. Source code for the door bell monitoring device and IFTTT client as well as Eagle files (schematic and board layout) are provided.
  • Maker-friendly: using easily available cheap standard components (nRF51822 BLE chip, standard electronic parts), easy to manufacture circuit board, and open-source software and hardware design.
  • Includes a client for the popular and versatile If This Then That (IFTTT) service to facilitate the development of IoT applications integrating DoorBell20.
  • Liberal licensing of software and hardware under the Apache License 2.0 and the CERN Open Hardware License 1.0, respectively.

DoorBell20 Monitoring Device

The following images show the DoorBell20 hardware and schematic:

DoorBell20 monitoring device

DoorBell20 monitoring device

Schematic of DoorBell20 device

The DoorBell20 monitoring device is based on the BLE chip nRF51822 by Nordic Semiconductors. The nRF51822 features an ARM Cortex M0 processor implementing both the application logic and the BLE stack (so-called softdevice). DoorBell20 uses the S110 softdevice version 8.0. See the next sub-section on how to flash the softdevice and the application code. We use a so-called “Bluetooth 4.0” breakout board with an nRF51822 (version 3, variant AA w/ 16 kB of RAM and 256 kB flash memory) and two 2×9 connectors (2 mm pitch), which you can buy over the Internet for about 6 US$ including shipping.

We isolate the 12 VAC door bell circuit from the microcontroller using an opto-isolator. A rectifier and a 5 V voltage regulator power the LED of the opto-isolator whenever the door bell is ringing. A GPIO pin of the nRF51822 connected to the other side of the opto-isolator then detects the event. In addition to the integrated protection mechanisms of the LM2940 voltage regulator (short circuit and thermal overload protection, shutdown during transients), a varistor protects from voltage transients since many door bells are inductive loads inducing voltage spikes when switched off. Since varistors age with every voltage transient, a fuse is added to protect the door bell circuit from a short circuit of the varistor.

The nRF51822 is powered by two AA batteries. No additional voltage regulator is required, which increases the energy efficiency, and the monitoring device is expected to run for years from a pair of AA batteries. Note that we did not implement a reverse polarity protection, so be careful to insert the batteries correctly.

The schematic and circuit board layout (PCB) of the DoorBell20 monitoring device for Eagle as well as the firmware can be found at GitHub. We deliberately used a simple single-sided through-hole design to help makers produce their own boards.

IFTTT DoorBell20 Client

DoorBell20 can be connected to any BLE client running on a remote machine. After receiving a BLE notification about a door bell event, the client can then trigger local actions, and can forward the event to a remote IoT cloud service. DoorBell20 comes with a client for connecting to the popular If This Then That (IFTTT) cloud service.

Whenever a notification for a door bell alarm is received, a web request is sent to the IFTTT Maker Channel triggering an event with a pre-defined name. You can then define your own IFTTT recipes to decide what to do with this event like showing a notification on your smartphone through the IFTTT app, as shown in the following image.
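
For illustration, triggering a Maker Channel event boils down to a single web request. The following Node.js sketch is not the actual DoorBell20 client (which is on GitHub); the event name and key are placeholders:

// Sketch: trigger an IFTTT Maker Channel event from Node.js.
// "door_bell" and YOUR_IFTTT_MAKER_KEY are placeholders.
var https = require('https');
https.get('https://maker.ifttt.com/trigger/door_bell/with/key/' +
          'YOUR_IFTTT_MAKER_KEY', function(res) {
    console.log('IFTTT responded with status ' + res.statusCode);
});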

IFTTT client showing door bell event notification.

For further technical details, please have a look at the documentation and source code provided at GitHub.

Key 2.0 — Bluetooth IoT Door Lock

What is Key 2.0?

Key 2.0 (or Key20 for short) is a Bluetooth IoT Door Lock. It turns a conventional electric door lock into a smart door lock that can be opened using a smartphone without the need for a physical key. Thus, Key20 is the modern version of a physical key, or, as the name suggests, the key version 2.0 for the Internet of Things (IoT) era.

Key20 consists of two parts:

  1. Door lock controller device, which is physically connected to the electric door lock and wirelessly via BLE to the mobile app.
  2. Mobile app implementing the user interface to unlock the door and communicating with the door lock controller through BLE.

You can get a quick impression of how Key20 works by watching the following video:

The following image shows the Key20 door lock controller device and the Key20 app running on a smartphone.

Key 2.0 App and Door Lock Controller Device

The main features of Key20 are:

  • Using state-of-the-art security mechanisms (Elliptic Curve Diffie-Hellman Key Exchange (ECDH), HMAC) to protect against attacks.
  • Open-source software and hardware, including an open implementation of the security mechanisms. No security by obscurity! Source code for the app and door lock controller as well as Eagle files (schematic and board layout) are available on GitHub.
  • Maker-friendly: using easily available cheap standard components (nRF51822 BLE chip, standard electronic parts), easy to manufacture circuit board, and open-source software and hardware design.
  • Works with BLE-enabled Android 4.3 mobile devices (and of course newer versions). Porting to other mobile operating systems like iOS should be straightforward.
  • Liberal licensing of software and hardware under the Apache License 2.0 and the CERN Open Hardware License 1.0, respectively.

Security Concepts

A door lock obviously requires security mechanisms to protect from unauthorized requests to open the door. To this end, Key20 implements the following state-of-the-art security mechanisms.

Authorization of Door Open Requests with HMAC

All door open requests are authorized through a Keyed Hash Message Authentication Code (HMAC). A 16 byte nonce (big random number) is generated by the door lock controller for each door open request as soon as a BLE connection is made to the door lock controller. The nonce is sent to the mobile app. Both the nonce and the shared secret are used by the mobile app to calculate a 512 bit HMAC using the SHA-2 hashing algorithm, which is then truncated to 256 bits (HMAC512-256) and sent to the door lock controller. The door lock controller also calculates an HMAC based on the nonce and the shared secret, and the door is opened only if both HMACs match.
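
The actual implementations run on the nRF51822 (in C) and in the Android app (in Java); the following Node.js sketch merely illustrates the HMAC512-256 computation described above:

// Sketch: HMAC-SHA-512 over the nonce, keyed with the shared secret,
// truncated to 256 bits (HMAC512-256). Both sides compute this value,
// and the door is only opened if the results match.
var crypto = require('crypto');
function doorOpenHmac(sharedSecret, nonce) {
    return crypto.createHmac('sha512', sharedSecret)
                 .update(nonce)
                 .digest()       // 64 bytes = 512 bits
                 .slice(0, 32);  // truncate to 256 bits
}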

The nonce is only valid for one door open request and effectively prevents replay attacks, i.e., an attacker sniffing on the radio channel and replaying the sniffed HMAC later. Note that the BLE radio communication is not encrypted, and it actually does not need to be encrypted since a captured HMAC is useless when re-played.

Moreover, each nonce is only valid for 15 s to prevent man-in-the-middle attacks where an attacker intercepts the HMAC and does not forward it immediately but waits until the (authorized) user, unable to open the door, walks away. Later the attacker would then send the HMAC to the door lock controller to open the door. With a time window of only 15 s (which could be reduced further), such attacks are futile since the authorized user will still be at the door.

Note that the whole authentication procedure does not include heavy-weight asymmetric crypto functions, but only light-weight hashing algorithms, which the door lock device featuring an nRF51822 microcontroller (ARM Cortex M0) can perform very fast in order not to delay door unlocking.

With respect to the random nonce we would like to note the following. First, the nRF51822 chip includes a random number generator for generating random numbers from thermal noise, so nonces should be of high quality, i.e., truly random. An attack by cooling down the Bluetooth chip to reduce randomness due to thermal noise is not relevant here since this requires physical access to the lock controller installed within the building, i.e., the attacker is then already in your house.

Secondly, 128 bit nonces provide reasonable security for our purpose. Assume one door open request per millisecond (a very pessimistic assumption!) and 100 years of operation, i.e., less than n = 2^42 requests to be protected. With 128 bit nonces, we have m = 2^128 possible nonce values. Then the birthday paradox can be used to calculate the probability p of at least one pair of requests sharing the same nonce, or, inversely, no nonces shared by any pair of requests. An approximation of p for n << m is p(n,m) = 1 – e^((-n^2)/(2*m)), which practically evaluates to 0 for n = 2^42 and m = 2^128. Even for n = 2^52 (one request per µs; actually not possible with BLE), p(2^52,2^128) < 3e-8, which is roughly the probability of being hit by lightning (about 5.5e-8).
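
You can verify these numbers with a quick back-of-the-envelope calculation, e.g., in JavaScript:

// Sketch: evaluate p(n,m) = 1 - e^((-n^2)/(2*m)) for the numbers above.
// For small x, 1 - e^(-x) is approximately x.
var n = Math.pow(2, 52);   // one request per µs over 100 years
var m = Math.pow(2, 128);  // possible values of a 128 bit nonce
console.log(n * n / (2 * m));                 // ≈ 2.98e-8 < 3e-8
console.log(1 - Math.exp(-n * n / (2 * m)));  // same result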

Exchanging Keys with Elliptic Curve Diffie Hellman Key Exchange (ECDH)

Obviously, the critical part is the establishment of a shared secret between the door lock controller and the mobile app. Anybody in possession of the shared secret can enter the building, thus, we must ensure that only the lock controller and the Key20 app know the secret. To this end, we use Elliptic Curve Diffie-Hellman (ECDH) key exchange based on Curve 25519. We assume that the door lock controller is installed inside the building that is secured by the lock—if the attacker is already in your home, the door lock is futile anyway. Thus, only the authorized user (owner of the building) has physical access to the door lock controller.

First, the user needs to press a button on the door lock controller device to enter key exchange mode (the red button in the pictures). Then both the mobile app and the door lock controller calculate their own key pairs based on the Elliptic Curve 25519 and exchange their public keys, which anyone may know. Using the public key of the other party and their own private keys, the lock controller and the app can calculate the same shared secret.

Using Curve 25519 and the Curve 25519 assembler implementation optimized for ARM Cortex-M0 from the Micro NaCl project, key pairs and shared secrets can be calculated in sub-seconds on the nRF51822 BLE chip (ARM Cortex M0).

Without further measures, DH is susceptible to man-in-the-middle attacks where an attacker actively manipulates the communication between mobile app and door lock controller. With such attacks, the attacker could exchange public keys with both the lock controller and the app, establishing two shared secrets: one between him and the door lock controller, and one between him and the mobile app. We prevent such attacks with the following mechanism. After key exchange, the mobile app and the door lock device both display a checksum (hash) of their version of the exchanged shared secret. The user visually checks these checksums to verify that they are the same. If they are the same, no man-in-the-middle attack has happened, since the man in the middle cannot calculate the same shared secret as the door lock controller and the mobile app (after all, the private keys of door lock controller and mobile app remain private). Only then does the user confirm the key by pressing buttons on the door lock controller and in the mobile app. Remember that only the authorized user has physical access to the door lock controller since it is installed within the building to be secured by the lock.

The following image shows the mobile app and the door lock controller displaying a shared secret checksum after key exchange. The user can confirm this secret by pushing the green button on the lock controller device and the Confirm Key button of the app.

Key 2.0: key checksum verification after key exchange.

Why not Standard Bluetooth Security?

Actually, Bluetooth 4.2 implements security concepts similar to the mechanisms described above. So it is a valid question why we do not just rely on the security concepts implemented by Bluetooth.

A good overview why Bluetooth might not be as secure as we would like it to be is provided by Francisco Corella. So we refer the interested reader to his page for the technical details and a discussion of Bluetooth security. We also would like to add that many mobile devices still do not implement Bluetooth 4.2 but only Bluetooth 4.0, which is even less secure than Bluetooth 4.2.

So we decided not to rely on Bluetooth security mechanisms, but rather implement all security protocols on the application layer using state-of-the-art security mechanisms as described above.

Bluetooth Door Lock Controller Device

The following image shows the door lock controller and its components.

Key 2.0 Door Lock Controller Device

The Door Lock Controller Device needs to be connected to the electric door lock (2 cables). You can simply replace a manual switch by the door lock controller device.

The door lock controller needs to be placed within Bluetooth radio range of the door and inside the building. Typical radio ranges are about 10 m. Depending on the walls, the distance might be shorter or longer. In our experience, one concrete wall is no problem, but two might block the radio signal.

The main part of the hardware is an nRF51822 BLE chip from Nordic Semiconductors. The nRF51822 features an ARM Cortex M0 micro-controller and a so-called softdevice implementing the Bluetooth stack, which runs together with the application logic on the ARM Cortex M0 processor.

An LCD is used to implement the secure key exchange procedure described above (visual key verification to avoid man-in-the-middle attacks).

For more technical details including schematics, board layout, and source code please visit the Key20 GitHub page.

Android App

The app requires a BLE-enabled mobile device running Android version 4.3 “Jelly Bean” (API level 18) or higher.

The following images show the two major tabs of the app: one for opening the door, and the second for exchanging keys between the app and the door lock controller.

Key 2.0 App: door unlock tab

Key 2.0 App: key exchange tab

The source code is available from the Key20 GitHub page.

ECDH-Curve25519-Mobile: Elliptic Curve Diffie-Hellman Key Exchange for Android Devices with Curve 25519

tl;dr

ECDH-Curve25519-Mobile implements Diffie-Hellman key exchange based on the Elliptic Curve 25519 for Android devices. It is released into the public domain and available through GitHub.

How I came across Curve 25519 … and the problem to be solved

Recently, I had to implement Diffie-Hellman key exchange for an Internet of Things (IoT) application, namely, a smart door lock (more about this later in another post). This system consists of a low-power embedded device featuring an ARM Cortex M0 microcontroller communicating via Bluetooth Low-Energy (BLE) with an Android app.

First, I had some doubts whether compute-intensive asymmetric cryptography could be implemented efficiently on a weak ARM Cortex M0. But then I came across Curve 25519, an elliptic curve proposed by Daniel J. Bernstein for Elliptic Curve Diffie-Hellman (ECDH) key exchange. In addition to the fact that Curve 25519 can be implemented very efficiently, there exists an implementation targeting the ARM Cortex M0 from the Micro NaCl project. So I gave this implementation a try, and it turned out to be really fast.

So I decided to use ECDH with Curve 25519 for my IoT system. Thanks to Micro NaCl, the part for the microcontroller was implemented very quickly. However, I also needed an implementation for Android. My first thought was to use the popular Bouncy/Spongy Castle crypto library. However, it turned out that although they come with a definition of Curve 25519, they use a different elliptic curve representation, namely, the Weierstrass form rather than the Montgomery form used by NaCl. One option would have been to convert between the two representations, but to me it seemed less intuitive to convert the Montgomery curve back and forth when I could stick to one representation.

So the problem was now to find a Curve 25519 implementation for Android using the Montgomery form. And that was not so easy. So I finally decided to take the code from the NaCl project and make it accessible to the Android world. And the result is ECDH-Curve25519-Mobile.

What is ECDH-Curve25519-Mobile?

ECDH-Curve25519-Mobile implements Diffie-Hellman key exchange based on the Elliptic Curve 25519 for Android devices.

ECDH-Curve25519-Mobile is based on the NaCl crypto implementation, more specifically AVRNaCl, written by Michael Hutter and Peter Schwabe, who dedicated their implementation to the public domain. ECDH-Curve25519-Mobile follows their example and also dedicates the code to the public domain using the Unlicense. Actually, the core of ECDH-Curve25519-Mobile is NaCl code, and ECDH-Curve25519-Mobile is just a simple JNI (Java Native Interface) wrapper around it to make it accessible from Java on Android devices. So I gratefully acknowledge the work of the NaCl team and their generous dedication of their code to the public domain!

ECDH-Curve25519-Mobile is a native Android library since NaCl is implemented in C rather than Java. However, it can be easily compiled for all Android platforms like ARM or x86, so this is not a practical limitation compared to a Java implementation. The decision to base ECDH-Curve25519-Mobile on NaCl was motivated not so much by the performance of a native implementation (actually, AVRNaCl leaves some room for performance improvements since it originally targeted 8 bit microcontrollers), but by using an implementation from crypto experts who actually work together with Daniel J. Bernstein, the inventor of Curve 25519.

How to use it?

I do not want to repeat everything already said in the description of ECDH-Curve25519-Mobile available at GitHub. Let me just show you some code to give you an impression that it is really easy to use from within your Android app:

// Note: requires java.security.SecureRandom and the ECDHCurve25519
// JNI wrapper class with its native library loaded as described on
// the project page.

// Create Alice's secret key from a big random number.
SecureRandom random = new SecureRandom();
byte[] alice_secret_key = ECDHCurve25519.generate_secret_key(random);
// Create Alice's public key.
byte[] alice_public_key = ECDHCurve25519.generate_public_key(alice_secret_key);

// Bob is also calculating a key pair.
byte[] bob_secret_key = ECDHCurve25519.generate_secret_key(random);
byte[] bob_public_key = ECDHCurve25519.generate_public_key(bob_secret_key);

// Assume that Alice and Bob have exchanged their public keys.

// Alice is calculating the shared secret.
byte[] alice_shared_secret = ECDHCurve25519.generate_shared_secret(
    alice_secret_key, bob_public_key);

// Bob is also calculating the shared secret.
byte[] bob_shared_secret = ECDHCurve25519.generate_shared_secret(
    bob_secret_key, alice_public_key);

More details can be found on the ECDH-Curve25519-Mobile project page at GitHub. Hope to see you there!

Testing USB-C to USB-A/Micro-USB Cables for Conformance

Many new mobile devices now feature a USB-C connector. In order to connect these devices to USB devices or chargers with a USB-A or Micro-USB connector, you need an adapter or cable with a USB-C plug on one side and a USB-A/Micro-USB connector on the other.

As first discovered by Google engineer Benson Leung, many of these USB-C to USB-A/Micro-USB cables or adapters do not conform to the USB standard and allow USB-C devices to draw excessive power, which might damage the host or charger permanently.

Recently, I bought a Nexus 5x featuring a USB-C connector and faced the problem of figuring out whether my USB-C to USB-A cable conforms to the standard. So I bought a USB-C connector (actually, not as easy to get as I thought) and tested my cable with a multimeter. Of course, this works fine, and fortunately, my cable was OK. Then I thought: Why not build a little device to quickly check cables without a multimeter? Just plug in the cable and see whether it is OK or not.

That’s exactly what I present here: an Arduino-based device to check USB-C to USB-A/Micro-USB cables and adapters for standard conformity. Two images of the board are shown below. It’s not very complex at all, as you will see, and I don’t claim this to be rocket science. It’s just a little practical tool. Everything is completely open source, the code as well as the hardware design (printed circuit board), and you can download both from my GitHub repository.

USB-C Adapter Tester

USB-C Adapter Tester

Background

I don’t want to repeat everything that has already been said elsewhere. However, to keep this page self-contained, I quickly describe the problem in plain words so you can easily understand the solution.

USB-C allows the USB host or charger (called downstream-facing port, DFP) to signal to the powered USB-C device (called upstream-facing port, UFP) how much current it can provide. This is implemented by a defined current flowing from DFP to UFP over the CC (Configuration Channel) line of the USB-C connector. 80 µA ±20 % signals “Default USB Power” (900 mA for “Super Speed” devices), whereas 180 µA ±8 % signals 1.5 A, and 330 µA signals 3.0 A.

So far, so good. A USB-C host or charger will know how much power it can provide and signal the correct value by sending the corresponding current over the CC line. The problem starts with “legacy” devices with USB-A or Micro-USB connector. These connectors don’t have a CC pin, thus, the host or charger cannot signal to the USB-C device how much current it can provide. In this case, the current on the CC line is “generated” by the cable or adapter using a simple resistor RP connecting the 5 V line to the CC line of the UFP. You might remember: R = V/I. So by selecting the right resistor in the cable/adapter, a certain current ICC is flowing through the CC line. Actually, the UFP connects CC through another 5.1k resistor (RD) to ground, so you have to consider the series resistance of RP and RD when calculating ICC. RP = 56k corresponds to about 80 µA (“Default USB Power”), RP = 22k to about 180 µA (1.5 A), and RP = 10k to about 330 µA (3.0 A).
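
The arithmetic behind these resistor values is simple Ohm's law over the series resistance of RP and RD; a small sketch with the values from the text:

// Sketch: CC current for a given pull-up resistor RP, with
// RD = 5.1 kOhm to ground and a 5 V supply: ICC = V / (RP + RD).
function iccMicroAmps(rpOhms) {
    var RD = 5100, V = 5.0;
    return 1e6 * V / (rpOhms + RD);
}
console.log(iccMicroAmps(56000)); // ≈ 82 µA  ("Default USB Power")
console.log(iccMicroAmps(22000)); // ≈ 185 µA (1.5 A)
console.log(iccMicroAmps(10000)); // ≈ 331 µA (3.0 A)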

Note that now the adapter cable rather than the upstream USB host or USB charger defines the maximum current the downstream USB-C device can pull! However, the cable cannot know to which host or charger it will be connected and how much current this host or charger can actually provide. So the only safe choice for RP is a value resulting in 80 µA on the CC line corresponding to “Default USB Power”, i.e., a 56k resistor. Unfortunately, some cable and adapter manufacturers don’t use 56k resistors but lower values like 10k resistors. If your host can just provide the required “Default USB Power”, it might get grilled.

USB-C-Adapter-Tester

Now that we know what to check, we can build our USB-C-Adapter-Tester shown on the images above. This tester consists of a microcontroller (Atmega 328p; the same chip as used by the Arduino UNO) featuring an Analog-to-Digital Converter (ADC). The ADC measures the voltage drop along a 5.1k resistor (actually, two separate 5.1k resistors on different channels of the ADC since USB-C features two CC lines so you can plug in the USB-C cable either way). Knowing the resistance and the voltage drop measured by the ADC, the microcontroller calculates ICC. If ICC is within the specified range (80 µA ±20 %), an LED signaling a “good” cable is turned on from a GPIO pin. If it is outside the range, another LED signaling a “bad” cable is turned on.
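
The decision logic itself is simple. The actual firmware is Arduino code (on GitHub); in essence, it performs the following check:

// Sketch: a cable is "good" iff the measured CC current lies within
// 80 µA ±20 %, i.e., between 64 µA and 96 µA.
function cableOk(iccMicroAmps) {
    return iccMicroAmps >= 64 && iccMicroAmps <= 96;
}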

The cable to be checked also powers the microcontroller from the USB host or charger. The good old Atmega 328p can be powered from 5 V, which is the voltage of USB-A and Micro-USB.

Since the internal voltage reference of the Atmega might not be very precise, I used an external 2.5 V voltage reference diode to provide a reference voltage to the ADC. If you trust the internal 1.1 V voltage reference of the Atmega, you can save this part.

As said, the USB-C connector was a little hard to get, but I finally found one at an eBay shop.

For the implementation of the code, I used the Arduino platform. The device is programmed through a standard 6 pin in-system programmer port.

As soon as you plug in the cable under test, the microcontroller starts measuring the voltage drop, translates it to current, compares it to the specified range, and switches on the corresponding LED signaling a good or bad cable.

If you want to etch the PCB yourself, I provide the Eagle files in the Git repository. Of course, you can also simply use a standard Arduino UNO instead of the shown PCB.

Several cables and adapters were tested with this device. The Micro-USB/USB-C adapter that came with the Nexus 5x phone was OK, as was my axxbiz USB-A/USB-C cable. Some Micro-USB/USB-C adapters were not OK (using a 10k resistor instead of a 56k resistor). Benson Leung tested many more cables if you are interested in what to buy.

I hope your USB cable is OK :)

BLE-V-Monitor: How car batteries join the Internet of Things

The Internet of Things (IoT) envisions a world where virtually everything is connected and able to communicate. Today, I want to present one such IoT application, namely, the BLE-V-Monitor: a battery voltage monitor for vehicles (cars, motorbikes).

BLE-V-Monitor consists of an Arduino-based monitoring device and an Android app. The BLE-V-Monitor device is connected to the car battery to monitor the battery voltage and record voltage histories. The app queries the current voltage and voltage history via Bluetooth Low Energy (BLE) and displays them to the user. Below you can see an image of the circuit board of the BLE-V-Monitor device and two screenshots of the app showing the current voltage, charge status, and voltage history.

BLE-V-Monitor board.

BLE-V-Monitor app: voltage and charge status

BLE-V-Monitor app: minutely voltage history

The main features of BLE-V-Monitor are:

  • Voltage and battery charge status monitoring
  • Recording of minutely, hourly, and daily voltage histories
  • Bluetooth Low Energy (BLE) to transmit voltage samples to smartphones, tablets, Internet gateways, etc.
  • Very low energy consumption
  • Android app for displaying current voltage and voltage histories
  • Open source hardware (CERN Open Hardware Licence v1.2) and software (Apache License 2.0)

Motivation

According to a recent study by ADAC (the largest automobile club in Europe), 46 % of car breakdowns are due to electrical problems, mostly empty or broken batteries. Personally, I know several incidents where a broken or empty battery was the reason for breakdowns of cars or motorbikes. So no question: there is a real problem to be solved.

The major problem with an empty battery is that you might not realize it until you turn the key, or, for those of you with a more modern car, push the engine start button. And then it is already too late! So wouldn’t it be nice if the battery could tell you in advance when it needs to be recharged and let you know its status (weakly charged, fully charged, discharged, etc.)?

That’s where the Internet of Things comes into play: the “thing” is your car battery, which is able to communicate its voltage and charge status using wireless communication technologies.

Let me present some technical details of BLE-V-Monitor to show you how to implement this specific IoT use case. More details including Android and Arduino source code and hardware design (PCB layout) can be found on GitHub:

https://github.com/duerrfk/BLE-V-Monitor

Requirements

The technical design of BLE-V-Monitor was driven by two key requirements:

  1. Keep it as simple as possible: Simple and commonly available hardware; through-hole PCB design to allow for simple etching and soldering.
  2. Very low energy consumption. What is the use of a battery monitor consuming substantial energy itself? To give you an idea that this is not trivial, even considering the fact that a car battery stores a lot of energy (usually more than 40 Ah even for smaller cars): Consider a single standard LED drawing about 15 mA, connected through a resistor to your 12 V car battery. After two hours, this LED and the resistor have consumed 2 h * 15 mA * 12 V = 30 mAh * 12 V of energy. Now, assume starting your motor with a starter motor drawing 50 A on average over a 2 s starting period. In this scenario, starting your motor once consumes 2 s * 50 A * 12 V = 28 mAh * 12 V. Thus, in less than two hours, the LED and its resistor consume about the same energy as starting your car once. I know, this scenario is highly simplified, but it might serve to show that even a small consumer (in our case the BLE-V-Monitor device) is significant if it is running for a long time. Consequently, our goal is to bring the average current consumption of the monitoring device far below 1 mA.
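
The back-of-the-envelope comparison from requirement 2 in code form:

// Sketch: charge drawn by the LED example vs. one engine start.
var ledChargeMah   = 2 * 15;               // LED for 2 h: 30 mAh
var startChargeMah = 2 * 50 * 1000 / 3600; // one 2 s start: ~27.8 mAh
// The LED matches one engine start after roughly:
console.log(startChargeMah / 15);          // ~1.85 h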

Implementation

Technically, BLE-V-Monitor consists of the BLE-V-Monitor device already shown above and a smartphone app for Android.

The BLE-V-Monitor device periodically samples the voltage of the battery, and the app uses Bluetooth Low Energy (BLE) to query the battery voltage when the smartphone is close to the car. Instead of using a smartphone, you could also install some dedicated (fixed) hardware (e.g., a Raspberry Pi with a Bluetooth USB stick in your garage), but since I walk by my car every day and the range of BLE was sufficient to receive the signal even one floor above the garage, I did not consider this option so far.

In order not to lose data while the smartphone is not within BLE range, the BLE-V-Monitor device records minutely, hourly, and daily histories in RAM, which can then be queried by the smartphone.
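To illustrate how such histories can be kept with very little memory, here is a minimal ring-buffer sketch (in Java for readability; the actual device code is Arduino C, and all names here are illustrative, not taken from the repository):

class VoltageHistory {
    private final short[] samples; // voltage samples in millivolts
    private int next = 0;          // next write position
    private int count = 0;         // number of valid entries

    VoltageHistory(int capacity) {
        samples = new short[capacity];
    }

    // Overwrites the oldest sample once the buffer is full.
    void add(short millivolts) {
        samples[next] = millivolts;
        next = (next + 1) % samples.length;
        if (count < samples.length) count++;
    }
}

One instance per resolution (e.g., capacity 60 for the minutely history) is enough; the device then serves the buffer contents to the app on request.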

This approach based on BLE has several advantages: it is cheap, and it is energy efficient. Moreover, clients can be implemented with many existing devices since BLE is commonly available in most consumer devices, in particular, mobile devices and cheap single-board computers like the Raspberry Pi (using a Bluetooth USB stick).

BLE-V-Monitor Device

The BLE-V-Monitor device is based on the Arduino platform. It uses an ATmega328P microcontroller and the BLE module MOD-nRF8001 from Olimex with the Nordic Semiconductor BLE chip nRF8001. The ATmega is programmed via an in-system programmer (ISP) and interfaces with the BLE module through SPI. Overall, if you build this device yourself, the hardware might cost you less than 20 $. And since we rely on a simple and energy-efficient microcontroller and BLE together with small duty cycles, the current consumption can be below 100 microamperes (including everything, like the 3.3 V voltage regulator to power the microcontroller and BLE module from the car battery).

To measure voltage, we use the 10 bit analog/digital converter (ADC) of the ATmega (no extra ADC component required). The measurable voltage range is 0 to 18 V; thus, the resolution is 18 V / 1024 = 17.6 mV, which is fine-grained enough to derive the charge status of the battery (see voltage thresholds below). Note that while the car is running, the car’s alternator provides more than 12 V to charge the battery (about 15 V for my car, as can be seen from the voltage history screenshot). A voltage divider with large resistor values (to save energy) is used to divide the battery voltage. Since we use a 2.5 V reference voltage, 18 V is mapped to 2.5 V by the voltage divider. The 2.5 V reference voltage is provided by the very precise micropower voltage reference diode LM285-2.5, which is powered on demand through a GPIO pin of the ATmega only during sampling to minimize energy consumption. Since the resistors of the voltage divider have large values to save energy, a 100 nF capacitor in parallel to the second resistor of the voltage divider provides a low-impedance source to the ADC (this 100 nF capacitor is much larger than the 14 pF sampling capacitor of the ATmega).
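To make the conversion explicit, here is the math from above as a small sketch (in Java for readability; the device implements this in Arduino C, and the function name is made up for illustration):

// Convert a raw 10 bit ADC reading back to the battery voltage.
// Assumptions from the text: 2.5 V reference, divider maps 18 V to 2.5 V.
static double adcToBatteryVolts(int adcCounts) {
    double vAdc = adcCounts * 2.5 / 1024.0; // voltage at the ADC pin
    return vAdc * (18.0 / 2.5);             // undo the voltage divider
}

For example, a reading of 1023 corresponds to about 17.98 V, and one ADC step corresponds to the 17.6 mV resolution mentioned above.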

An 18 V varistor (not shown on the image; it is an SMD part on the backside of the PCB since I only had an SMD version available) protects against transient voltage spikes above 18 V. Since varistors typically age whenever they shunt excessive voltage, a fuse limits the current to protect against a short circuit of the varistor.

A micropower voltage regulator (LP295x) provides 3.3 V to the ATmega and BLE module. The 100 mA that can be provided by this regulator are more than sufficient to power the ATmega and BLE module while being active, and a very low quiescent current of only 75 microampere ensures efficient operation with small duty cycles.

BLE-V-Monitor App

The BLE-V-Monitor App is implemented for Android (version 4.3 or higher since we need the BLE features of Android). It consists of a tab view with a fragment to display the current voltage, and three more fragments to display minutely, hourly, and daily voltage histories, respectively.

The charge status of a lead–acid car battery can be quite easily derived from its voltage. We use the following voltage levels to estimate the charge status on the client side:

  • 100 % charged (fully charged): about 12.66 V
  • 75 % charged (charged): about 12.35 V
  • 50 % charged (weakly charged): about 12.10 V
  • 25 % charged (discharged): about 11.95 V
  • 0 % charged (over discharged): about 11.7 V
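A minimal sketch of this mapping as it could look on the client side (the method name is an assumption for illustration, not taken from the actual app code):

// Map a measured battery voltage to one of the charge status levels above.
static String chargeStatus(double volts) {
    if (volts >= 12.66) return "fully charged";
    if (volts >= 12.35) return "charged";
    if (volts >= 12.10) return "weakly charged";
    if (volts >= 11.95) return "discharged";
    return "over discharged";
}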

The screenshots above show some examples of the current voltage, charge status, and voltage histories. In the history screenshot you can also identify two periods when the car was running, during which the charging voltage reached about 15 V.

Final Prototype

The following photos show how the BLE-V-Monitor PCB is mounted inside a case and the placement of the monitoring device right in front of the battery of my car (in this photo, the device is already connected to the battery but not yet fixed). Fortunately, older cars have plenty of space and not a lot of useless plastic hiding every part of the engine.

BLE-V-Monitor device with case

BLE-V-Monitor device mounted in car and connected to car battery

The strain relief (a simple knot) might not look very elegant, but it is highly effective.

Obviously, plastic is the better choice for the case since the Bluetooth module is inside. Still, I had some concerns that all the metal of the car would shield the Bluetooth signal too much, but it works surprisingly well. Even one floor above the garage, with the metal engine hood and a concrete ceiling between device and client, I can still receive a weak signal and query the battery status.

Where to go from here?

Obviously, there is some potential to further improve the functionality. Beyond just monitoring the raw voltage and mapping it to a charge status, we could analyse the voltage data to find out whether the battery is still in a healthy condition. For instance, we could look at voltage peaks and analyse the voltage histories to find out how quickly the battery discharges, and how these values change over the lifetime of the battery. To this end, you could send the data to the cloud, although I think you could also implement such simple “small data” analytics on the smartphone or even on the microcontroller of the monitoring device.

However, the battery or car vendor might want to collect the status of all of their batteries in the cloud for other reasons, for instance, to improve maintenance and product quality, or to offer advanced services. With the cloud, everything becomes a service, so why not offer “battery as a service”? Instead of buying the battery, you buy the service of always having enough energy to operate your car. When the performance of your battery degrades over time, the vendor already knows and sends you a new battery well before the old one is completely broken, or invites you to a garage where they exchange the battery for you (this service would be included in the “battery as a service” fees).

I hope you found this little journey to the IoT interesting. Have a good trip, wherever you go!

Raspberry Pi Going Realtime with RT Preempt

[UPDATE 2016-05-13: Added pre-compiled kernel version 4.4.9-rt17 for all Raspberry Pi models (Raspberry Pi Model A(+), Model B(+), Zero, Raspberry Pi 2, Raspberry Pi 3). Added build instructions for Raspberry Pi 2/3.]

A real-time operating system gives you deterministic bounds on delay and delay variation (jitter). Such a real-time operating system is an essential prerequisite for implementing so-called cyber-physical systems, where a computer controls a physical process. Prominent examples are the control of machines and robots in production environments (Industry 4.0), drones, etc.

RT Preempt is a popular patch for the Linux kernel to transform Linux into such a real-time operating system. Moreover, the Raspberry Pi has many nice features to interface with sensors and actuators, like SPI, I2C, and GPIO, so it seems to be a good platform for hosting a controller in a cyber-physical system. Consequently, it is very attractive to install Linux with the RT Preempt patch on the Raspberry Pi.

This is exactly what I do here: I provide detailed instructions on how to install a Linux kernel with the RT Preempt patch on a Raspberry Pi. Basically, I wrote this document to record the process for myself, and it is more or less a collection of information you will find on the web. But anyway, I hope I can save some people some time.

And to save you even more time, I provide a pre-compiled kernel (including kernel modules, firmware, and device tree) for the Raspberry Pi Model A(+), B(+), Raspberry Pi Zero, Raspberry Pi 2 Model B, and Raspberry Pi 3 Model B; the download link is part of the commands below.

To install this pre-compiled kernel, log in to your Raspberry Pi running Raspbian (if you have not installed Raspbian already, you can find an image here: https://www.raspberrypi.org/downloads/raspbian/), and execute the following commands (I recommend backing up your old image since this procedure will overwrite the old kernel):

pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ cd /tmp
pi@raspberry ~$ wget http://download.frank-durr.de/kernel-4.4.9-rt17.tgz
pi@raspberry ~$ tar xzf kernel-4.4.9-rt17.tgz
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/
pi@raspberry ~$ sudo /sbin/reboot

With this patched kernel, I could achieve bounded latency well below 200 microseconds on a fully loaded 700 MHz Raspberry Pi Model B (see results below). This should be safe for tasks with a cycle time of 1 ms.

Since compiling the kernel on the Pi is very slow, I will cross-compile the kernel on a more powerful host. You can distinguish commands executed on the host from commands executed on the Pi by the shell prompt shown in the following listings.

Install Vanilla Raspbian on your Raspberry Pi

Download Raspbian from https://www.raspberrypi.org/downloads/raspbian/ and install it on your SD card.

Download Raspberry Pi Kernel Sources

On your host (where you want to cross-compile the kernel), download the latest kernel sources from GitHub:

user@host ~$ git clone https://github.com/raspberrypi/linux.git
user@host ~$ cd linux

If you like, you can switch to an older kernel version like 4.1:

user@host ~/linux$ git checkout rpi-4.1.y

Patch Kernel with RT Preempt Patch

Next, patch the kernel with the RT Preempt patch. Choose the patch matching your kernel version. To this end, have a look at the Makefile. VERSION, PATCHLEVEL, and SUBLEVEL define the kernel version. At the time of writing this tutorial, the latest kernel was version 4.4.9. Patches for older kernels can be found in folder "older".
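For example, for kernel version 4.4.9, the top of the Makefile contains:

VERSION = 4
PATCHLEVEL = 4
SUBLEVEL = 9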

user@host ~/linux$ wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.4/patch-4.4.9-rt17.patch.gz
user@host ~/linux$ zcat patch-4.4.9-rt17.patch.gz | patch -p1

Install and Configure Tool Chain

For cross-compiling the kernel, you need the tool chain for ARM on your machine:

user@host ~$ git clone https://github.com/raspberrypi/tools.git
user@host ~$ export ARCH=arm
user@host ~$ export CROSS_COMPILE=/home/user/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
user@host ~$ export INSTALL_MOD_PATH=/home/user/rtkernel

Later, when you install the modules, they will go into the directory specified by INSTALL_MOD_PATH.

Configure the kernel

Next, we need to configure the kernel for using RT Preempt.

For Raspberry Pi Model A(+), B(+), Zero, execute the following commands:

user@host ~/linux$ export KERNEL=kernel
user@host ~/linux$ make bcmrpi_defconfig

For Raspberry Pi 2/3 Model B, execute these commands:

user@host ~/linux$ export KERNEL=kernel7
user@host ~/linux$ make bcm2709_defconfig

An alternative way is to export the configuration from a running Raspberry Pi:

pi@raspberry$ sudo modprobe configs
user@host ~/linux$ scp pi@raspberry:/proc/config.gz ./
user@host ~/linux$ zcat config.gz > .config

Then, you can start to configure the kernel:

user@host ~/linux$ make menuconfig

In the kernel configuration, enable the following settings:

  • CONFIG_PREEMPT_RT_FULL: Kernel Features → Preemption Model → Fully Preemptible Kernel (RT)
  • CONFIG_HIGH_RES_TIMERS: General setup → Timers subsystem → High Resolution Timer Support (this should already be enabled in the standard configuration)
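After saving the configuration, you can verify that the .config file contains the corresponding options:

CONFIG_PREEMPT_RT_FULL=y
CONFIG_HIGH_RES_TIMERS=y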

Build the Kernel

Now, it’s time to cross-compile and build the kernel and its modules:

user@host ~/linux$ make zImage
user@host ~/linux$ make modules
user@host ~/linux$ make dtbs
user@host ~/linux$ make modules_install

The last command installs the kernel modules in the directory specified by INSTALL_MOD_PATH above.

Transfer Kernel Image, Modules, and Device Tree Overlay to their Places on Raspberry Pi

We are now ready to transfer everything to the Pi. To this end, you could mount the SD card on your PC. I prefer to transfer everything over the network using a tar archive:

user@host ~/linux$ mkdir $INSTALL_MOD_PATH/boot
user@host ~/linux$ ./scripts/mkknlimg ./arch/arm/boot/zImage $INSTALL_MOD_PATH/boot/$KERNEL.img
user@host ~/linux$ cp ./arch/arm/boot/dts/*.dtb $INSTALL_MOD_PATH/boot/
user@host ~/linux$ cp -r ./arch/arm/boot/dts/overlays $INSTALL_MOD_PATH/boot
user@host ~/linux$ cd $INSTALL_MOD_PATH
user@host ~/linux$ tar czf /tmp/kernel.tgz *
user@host ~/linux$ scp /tmp/kernel.tgz pi@raspberry:/tmp

Then on the Pi, install the real-time kernel (this will overwrite the old kernel image!):

pi@raspberry ~$ cd /tmp
pi@raspberry ~$ tar xzf kernel.tgz
pi@raspberry ~$ sudo rm -r /lib/firmware/
pi@raspberry ~$ sudo rm -r /boot/overlays/
pi@raspberry ~$ cd boot
pi@raspberry ~$ sudo cp -rd * /boot/
pi@raspberry ~$ cd ../lib
pi@raspberry ~$ sudo cp -dr * /lib/

Most people also disable the Low Latency Mode (llm) for the SD card:

pi@raspberry /boot$ sudo nano cmdline.txt

Append the following option to the existing parameters (cmdline.txt is a single line):

sdhci_bcm2708.enable_llm=0

Reboot

pi@raspberry ~$ sudo /sbin/reboot

Latency Evaluation

Of course, you want to know the latency bounds achieved with the RT Preempt patch. To this end, you can use the tool cyclictest with the following test configuration:

  • clock_nanosleep(TIMER_ABSTIME)
  • Cycle interval 500 micro-seconds
  • 100,000 loops
  • 100 % load generated by running the following commands in parallel:
    • On the Pi:
      pi@raspberry ~$ cat /dev/zero > /dev/null
    • From another host:
      user@host ~$ sudo ping -i 0.01 raspberrypi
  • 1 thread (I used a Raspberry Pi model B with only one core)
  • Locked memory
  • Process priority 80

You can download, build, and run cyclictest as follows:

pi@raspberry ~$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/clrkwllms/rt-tests.git
pi@raspberry ~$ cd rt-tests/
pi@raspberry ~/rt-tests$ make all
pi@raspberry ~/rt-tests$ sudo ./cyclictest -m -t1 -p 80 -n -i 500 -l 100000

On a Raspberry Pi model B at 700 MHz, I got the following results:

T: 0 ( 976) P:80 I:500 C: 100000 Min: 23 Act: 40 Avg: 37 Max: 95

With some more tests, the worst-case latency sometimes reached about 166 microseconds. Adding a safety margin, this should be safe for cycle times of 1 ms.

I also observed that with timers other than clock_nanosleep(TIMER_ABSTIME), e.g., the system timers sys_nanosleep and sys_setitimer, the latency was much higher, with maximum values above 1 ms. Thus, for low latencies, I would rely only on clock_nanosleep(TIMER_ABSTIME).

Faros BLE Beacon with Google Eddystone Support

Today I want to introduce Faros, an open-source Bluetooth Low Energy (BLE) beacon supporting Google’s open beacon format Eddystone, implemented for the popular Arduino platform.

With the introduction of Apple’s iBeacon, BLE beacons became very popular as a positioning technology. Now Google has released a new open beacon format called Eddystone (named after a famous lighthouse), which is more versatile. Eddystone supports three frame types for broadcasting data:

  • UID frames broadcasting identifiers, namely a namespace identifier to group a set of beacons, and an instance identifier to identify an individual beacon.
  • URL frames broadcasting a short URL, so you can think of this as a radio-based QR code replacement.
  • Telemetry (TLM) frames to check the health status of beacons. These frames contain the beacon temperature, battery level, uptime, and counters for broadcasted frames to facilitate the management of beacons.

The complete protocol specification is online, so I do not go into much technical detail here.

Faros (named after the Pharos of Alexandria, one of the seven wonders of the ancient world) is an implementation of the Eddystone protocol targeting the popular Arduino platform and using the nRF8001 chip from Nordic Semiconductor. The main features of Faros are:

  • Full support for all Eddystone frame types (UID, URL, TLM)
  • Energy efficiency allowing for runtimes of several years
  • Using popular and powerful hardware platforms: Arduino and nRF8001 BLE chip
  • Simplicity of hardware: easy to build using a commodity Arduino or our Faros board together with the BLE module MOD-nRF8001 from Olimex
  • Liberal licensing: Apache License 2.0 for software, CERN Open Hardware Licence v1.2 for hardware

Faros is hosted at GitHub and includes:

  • Source code for Arduino
  • Faros board schematics and board layouts

Below you see several versions of Faros beacons:

  • Faros software running on a commodity Arduino Pro Micro, powered and programmed through the USB connector. This could also easily be set up on a breadboard.
  • Self-made Faros printed circuit board (50 mm x 70 mm) with an ATmega328P powered by two AA-size batteries. A PDF with the PCB mask is included in the Git repository.
  • Faros printed circuit board (50 mm x 50 mm) manufactured by Seeed Studio using an ATmega328P. 10 PCBs cost about 15 $ including shipping. Gerber files are included in the Git repository.
Faros running on Arduino Pro Micro

Self-made Faros board with ATmega328P

Faros board manufactured by Seeed Studio with ATmega328P

The Faros board further lowers cost and energy consumption, two very important requirements for field deployments. The Faros board just needs the BLE module and an ATmega328P (where the “P” stands for pico-power). It is programmed via ISP, so you do not need USB. The ATmega is put into power-down mode whenever possible, where it consumes only a few microamperes. The nRF8001 wakes up the ATmega whenever it has an event to transmit. Moreover, the watchdog timer wakes up the ATmega periodically to switch the Eddystone frame type. Thus, the beacon can send all three frame types sequentially in a round-robin fashion.
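The round-robin switching boils down to a simple state machine. Here is a sketch (in Java for readability; the firmware implements this in Arduino C inside the watchdog wake-up path, and the names are illustrative only):

enum FrameType { UID, URL, TLM }

// Return the frame type to broadcast after the current one.
static FrameType nextFrameType(FrameType current) {
    switch (current) {
        case UID: return FrameType.URL;
        case URL: return FrameType.TLM;
        default:  return FrameType.UID; // TLM wraps around to UID
    }
}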

The Faros board is kept as simple as possible (through-hole design, no SMD components). It comes in two versions: (1) a single-sided 50 mm x 70 mm layout that is well-suited for self-etching; (2) a double-sided 50 mm x 50 mm layout that can be sent to a PCB manufacturer.

The Faros board also has a 3 mm low-current LED, which can be switched on by a digital pin of the Arduino. It draws 10 mA at 3 V, which is a lot if one targets runtimes of several years! So if you aim at maximum battery lifetime, send only short pulses at long intervals (e.g., one 100 ms pulse every 30 s amounts to about 33 uA average current), or even better: rely on TLM frames (management is their job anyway).

Of course, the Faros board can be run from batteries. The nRF8001 can run down to 1.9 V; the ATmega down to 1.8 V at frequencies <= 4 MHz. The maximum voltage for the nRF8001 is 3.6 V. Thus, one good option is to use two AA- or two AAA-size alkaline batteries. They are cheap. They provide > 1800 mAh, which should suffice for several years of runtime. Two batteries are discharged at about 2.0 V (then the voltage drops rapidly), which nicely fits our desired voltage range of 1.9 – 3.0 V. And at runtimes of several years, no recharging is required (you would rather replace the device than change the batteries). Of course, you can also try other options like coin cells (e.g., one CR 2450 @ 3.0 V, 560 mAh), or one battery (1.5 V) plus a step-up converter (which may waste more than 20 % of the energy for conversion and is probably more expensive than a second AA or AAA battery).

Finally, here are some screenshots from Google’s Eddystone Validator and Nordic’s nRF Master Control Panel showing the content of the Eddystone frames broadcasted by a Faros beacon.

Eddystone data sent by Faros beacon

UID Eddystone frame sent by Faros beacon

URL Eddystone frame sent by Faros beacon

TLM Eddystone frame sent by Faros beacon

All further technical details can be found in the Faros Git repository at GitHub.

Have fun building your own Eddystone beacon with Faros!

Introducing SDN-MQ: A Powerful and Simple-to-Use Northbound Interface for OpenDaylight

One of the essential parts of an SDN controller is the so-called northbound interface through which network control applications implementing control logic interface with the SDN controller. The SDN controller then uses the OpenFlow protocol to program the switches according to the instructions of the control application. Since the northbound interface is the “API to the network”, a well-designed interface is essential for the acceptance and success of the SDN controller.

Ideally, the northbound interface should be powerful and still simple. Powerful means that it should expose all essential functionalities of OpenFlow to the control application. Certainly, the most essential function of SDN is flow programming to define forwarding table entries on the switches. On the one hand, flow programming should include proactive flow programming, where the control application proactively decides to program a flow (e.g., a static flow). On the other hand, the northbound interface should support reactive flow programming, where the control application reacts to packet-in events triggered by packets without matching forwarding table entries.

Simple means that programmers should be able to use technologies that they are familiar with. So in short, the ideal northbound interface should be as simple as possible, but not simpler.

Current Northbound Interfaces and Observed Limitations

OpenDaylight currently offers two kinds of northbound interfaces:

  1. RESTful interfaces using XML/JSON over HTTP.
  2. OSGi, allowing control logic to be implemented as OSGi services.

RESTful interfaces are simple to use since they are based on technologies that many programmers are familiar with and that are used in many web services. Parsing and creating JSON or XML messages and sending or receiving these messages over HTTP is straightforward and well-supported by many libraries. However, due to the request/response nature of REST and HTTP, these interfaces are restricted to proactive flow programming. The very essential feature of reacting to packet-in events is missing.

OSGi interfaces are powerful. Control applications can use any feature of the OpenFlow standard (implemented by the controller). However, they are much more complex than RESTful interfaces since OSGi itself is a complex technology. Moreover, OSGi is targeted at Java, which is nice for integration with the Java-based OpenDaylight controller, but bad if you want to implement your control logic in any other language like C++ or Python.

So none of these interfaces seems to be simple and powerful at the same time.

How SDN can Benefit from Message-oriented Middleware

So how can we have the best of both worlds: a simple and powerful interface? The keyword (or maybe at least one possible keyword) is message-oriented middleware. As shown in the following figure, a message-oriented middleware (MOM) decouples the SDN controller from the control application through message queues for request/response interaction (proactive flow programming) and publish/subscribe topics for event-based interaction (reactive flow programming). So we can program flows through a request-response interface implemented by message queues and react to packet-in events by subscribing to events through message topics.

Message-oriented middleware (MOM) decoupling the SDN controller from control applications

Moreover, messages can be based on simple textual formats like XML or JSON, making message creation and interpretation as simple as for the RESTful interfaces mentioned above, but without their restriction to request/response interaction.

Since a MOM decouples the SDN controller from the control application, the control logic can be implemented in any programming language. SDN controller and application talk to each other using JSON/XML, and the MOM takes care of transporting messages from the application to the SDN controller and vice versa.

This decoupling also allows for the horizontal distribution of control logic by running control applications on several hosts. Such a decoupling “in space” is perfect for scaling out horizontally.

MOMs not only decouple the controller and control application in space but also in time. The receiver does not need to consume a message at the time it is sent. Messages can be buffered by the MOM and delivered when the control application or SDN controller is available and ready to process them. Although a nice feature in general, time decoupling might not be strictly essential for SDN, since usually we want a timely reaction from both controller and application. Still, it might be handy for some delay-tolerant functions.

SDN-MQ: Integrating Message-oriented Middleware and SDN Controller

SDN-MQ integrates a message-oriented middleware with the OpenDaylight controller. In more detail, SDN-MQ is based on the Java Message Service (JMS) standard. The basic features of SDN-MQ are:

  • All messages are consistently based on JSON, making message generation and interpretation straightforward.
  • SDN-MQ supports proactive and reactive flow programming without the need to implement complex OSGi services.
  • SDN-MQ supports message filtering for packet-in events through standard JMS selectors. So the control application can define which packet-in events to receive based on packet header fields like source and destination addresses (see the sketch after this list). According to the publish/subscribe paradigm, multiple control applications can receive packet-in event notifications for the same packet.
  • SDN control logic can be distributed horizontally to different hosts for scaling out control logic.
  • Although SDN-MQ is based on the Java-based JMS standard, JMS servers such as Apache ActiveMQ support further language-independent protocols like STOMP (Streaming Text Oriented Messaging Protocol). Therefore, cross-language control applications implemented in C++, Python, JavaScript, etc. are supported.
  • Besides packet-in events and flow programming, SDN-MQ supports further essential functionality such as packet forwarding/injection via the controller.
  • SDN-MQ is open source and licensed under the Eclipse Public License (similar to OpenDaylight). The full source code is available at GitHub.
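To give you a first impression of what a control application looks like, here is a minimal sketch of subscribing to packet-in events with a JMS selector using ActiveMQ. Note that the topic name and the message property names (nw_src, nw_dst) are assumptions for illustration only; the actual names are documented on the SDN-MQ website:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PacketInSubscriber {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Hypothetical topic name; see the SDN-MQ documentation for the real one.
        Topic packetIn = session.createTopic("org.sdnmq.packetin");

        // Standard JMS selector: receive only packet-in events whose (assumed)
        // message property nw_dst matches our service address.
        MessageConsumer consumer = session.createConsumer(packetIn, "nw_dst = '10.0.0.100'");
        consumer.setMessageListener(message -> {
            try {
                // The payload is a JSON document describing the packet.
                System.out.println(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();
    }
}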

The figure below shows the basic architecture of SDN-MQ. SDN-MQ is implemented as OSGi services executed within the same OSGi framework as the OpenDaylight OSGi services. SDN-MQ uses the OpenDaylight services to provide its service to the control application. So basically, SDN-MQ acts as a bridge between OpenDaylight and the control application.

SDN-MQ architecture

Three services are implemented by SDN-MQ to date:

  • Packet-in service to receive packet-in events including packet filtering based on header fields using JMS selectors.
  • Flow programming to define flow table entries on switches.
  • Packet forwarding to forward either packets received through packet-in events or new packets created by the application.

The JMS middleware transports messages between the SDN-MQ services and the control applications. As JMS middleware, we have used ActiveMQ so far, but any JMS-compliant service should work. If the message-oriented middleware supports other language-independent protocols (such as STOMP), control applications can be implemented in any supported language.

Where to go from here

In my next blog post, I will explain in detail how to use SDN-MQ. Until then, you can find more details and programming examples on the SDN-MQ website at GitHub.

Stay tuned!

Reactive Flow Programming with OpenDaylight

In my last OpenDaylight tutorial, I demonstrated how to implement an OSGi module for OpenDaylight. In this tutorial, I will show how to use such a module for reactive flow programming and packet forwarding.

In detail, you will learn:

  • how to decode incoming packets
  • how to set up flow table entries including packet match rules and actions
  • how to forward packets

Scenario

To make things concrete, we consider a simple scenario in this tutorial: load balancing of a TCP service (e.g., a web service using HTTP over TCP). The basic idea is that TCP connections to a service addressed through a public IP address and port number are distributed among two physical server instances using IP address re-writing performed by an OpenFlow switch. Whenever a client opens a TCP connection to the service, one of the server instances is chosen randomly, and a forwarding rule is installed by the network controller on the ingress switch to forward all incoming packets of this TCP connection to the chosen server instance. In order to make sure that the server instance accepts the packets of the TCP connection, the destination IP address is re-written to the IP address of the chosen server instance, and the destination MAC address is set to the MAC address of the server instance. In the reverse direction from server to client, the switch re-writes the source IP address of the server to the public IP address of the service. Therefore, to the client it looks like the response is coming from the public IP address. Thus, load balancing is transparent to the client.

To keep things simple, I do not consider the routing of packets. Rather, I assume that the clients and the two server instances are connected to the same switch on different ports (see figure below). Moreover, I also simplify MAC address resolution by setting a static ARP table entry at the client host for the public IP address. Since there is no physical server assigned to the public IP address, we just set a fake MAC address (in a real setup, the gateway of the data center would receive the client request, so we would not need an extra MAC address assigned to the public IP address).

Load balancing scenario: client and two server instances connected to the same switch

I assume that you have read the previous tutorial, so I skip some explanations on how to set up an OpenDaylight Maven project, subscribe to services, and further OSGi module basics.

You can find all necessary files of this tutorial in this archive: myctrlapp.tar.gz

The folder myctrlapp contains the Maven project of the OSGi module. You can compile and create the OSGi bundle with the following commands:

user@host:$ tar xzf myctrlapp.tar.gz
user@host:$ cd ~/myctrlapp
user@host:$ mvn package

The corresponding Eclipse project can be created using:

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Registering Required Services and Subscribing to Packet-in Events

For our simple load balancer, we need the following OpenDaylight services:

  • Data Packet Service for decoding incoming packets and encoding and sending outgoing packets.
  • Flow Programmer Service for setting flow table entries on the switch.
  • Switch Manager Service to determine the outport of packets forwarded to the server instances.

As explained in my previous tutorial, we register for OSGi services by implementing the configureInstance(...) method of the Activator class:

public void configureInstance(Component c, Object imp, String containerName) {
    log.trace("Configuring instance");

    if (imp.equals(PacketHandler.class)) {
        // Define exported and used services for PacketHandler component.

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("salListenerName", "mypackethandler");

        // Export IListenDataPacket interface to receive packet-in events.
        c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

        // Need the DataPacketService for encoding, decoding, sending data packets
        c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks(
            "setDataPacketService", "unsetDataPacketService").setRequired(true));

        // Need FlowProgrammerService for programming flows
        c.add(createContainerServiceDependency(containerName).setService(IFlowProgrammerService.class).setCallbacks(
            "setFlowProgrammerService", "unsetFlowProgrammerService").setRequired(true));

        // Need SwitchManager service for enumerating ports of switch
        c.add(createContainerServiceDependency(containerName).setService(ISwitchManager.class).setCallbacks(
            "setSwitchManagerService", "unsetSwitchManagerService").setRequired(true));
    }
}

set... and unset... define names of callback methods. These callback methods are implemented in our PacketHandler class to receive service proxy objects, which can be used to call the services:

/**
 * Sets a reference to the requested DataPacketService
 */
void setDataPacketService(IDataPacketService s) {
    log.trace("Set DataPacketService.");

    dataPacketService = s;
}

/**
 * Unsets DataPacketService
 */
void unsetDataPacketService(IDataPacketService s) {
    log.trace("Removed DataPacketService.");    

    if (dataPacketService == s) {
        dataPacketService = null;
    }
}

/**
 * Sets a reference to the requested FlowProgrammerService
 */
void setFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Set FlowProgrammerService.");

    flowProgrammerService = s;
}

/**
 * Unsets FlowProgrammerService
 */
void unsetFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Removed FlowProgrammerService.");

    if (flowProgrammerService == s) {
        flowProgrammerService = null;
    }
}

/**
 * Sets a reference to the requested SwitchManagerService
 */
void setSwitchManagerService(ISwitchManager s) {
   log.trace("Set SwitchManagerService.");

   switchManager = s;
}

/**
 * Unsets SwitchManagerService
 */
void unsetSwitchManagerService(ISwitchManager s) {
    log.trace("Removed SwitchManagerService.");

    if (switchManager == s) {
        switchManager = null;
    }
}

Moreover, we register for packet-in events in the Activator class. To this end, we must declare that we implement the IListenDataPacket interface, which is exported via the setInterface(...) call shown above. This interface basically consists of one callback method receiveDataPacket(...) for receiving packet-in events, as described next.
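For reference, the class declaration of our handler therefore looks as follows (with the full implementation of receiveDataPacket(...) shown in the next section):

public class PacketHandler implements IListenDataPacket {
    // ... service references, constants, and the receiveDataPacket(...) handler
}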

Handling Packet-in Events

Whenever a packet without matching flow table entry arrives at the switch, it is sent to the controller and the event handler receiveDataPacket(...) of our packet handler class is called with the received packet as parameter:

@Override
public PacketResult receiveDataPacket(RawPacket inPkt) {
    // The connector ("port") the packet came from
    NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
    // The node that received the packet ("switch")
    Node node = ingressConnector.getNode();

    log.trace("Packet from " + node.getNodeIDString() + " " + ingressConnector.getNodeConnectorIDString());

    // Use DataPacketService to decode the packet.
    Packet pkt = dataPacketService.decodeDataPacket(inPkt);

    if (pkt instanceof Ethernet) {
        Ethernet ethFrame = (Ethernet) pkt;
        Object l3Pkt = ethFrame.getPayload();

        if (l3Pkt instanceof IPv4) {
            IPv4 ipv4Pkt = (IPv4) l3Pkt;
            InetAddress clientAddr = intToInetAddress(ipv4Pkt.getSourceAddress());
            InetAddress dstAddr = intToInetAddress(ipv4Pkt.getDestinationAddress());
            Object l4Datagram = ipv4Pkt.getPayload();

            if (l4Datagram instanceof TCP) {
                TCP tcpDatagram = (TCP) l4Datagram;
                int clientPort = tcpDatagram.getSourcePort();
                int dstPort = tcpDatagram.getDestinationPort();

                if (publicInetAddress.equals(dstAddr) && dstPort == SERVICE_PORT) {
                    log.info("Received packet for load balanced service");

                    // Select one of the two servers round robin.

                    InetAddress serverInstanceAddr;
                    byte[] serverInstanceMAC;
                    NodeConnector egressConnector;

                    // Synchronize in case there are two incoming requests at the same time.
                    synchronized (this) {
                        if (serverNumber == 0) {
                            log.info("Server 1 is serving the request");
                            serverInstanceAddr = server1Address;
                            serverInstanceMAC = SERVER1_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER1_CONNECTOR_NAME);
                            serverNumber = 1;
                        } else {
                            log.info("Server 2 is serving the request");
                            serverInstanceAddr = server2Address;
                            serverInstanceMAC = SERVER2_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER2_CONNECTOR_NAME);
                            serverNumber = 0;
                        }
                    }

                    // Create flow table entry for further incoming packets

                    // Match incoming packets of this TCP connection 
                    // (4 tuple source IP, source port, destination IP, destination port)
                    Match match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800);  // IPv4 ethertype
                    match.setField(MatchType.NW_PROTO, (byte) 6);       // TCP protocol id
                    match.setField(MatchType.NW_SRC, clientAddr);
                    match.setField(MatchType.NW_DST, dstAddr);
                    match.setField(MatchType.TP_SRC, (short) clientPort);
                    match.setField(MatchType.TP_DST, (short) dstPort);

                    // List of actions applied to the packet
                    List<Action> actions = new LinkedList<Action>();

                    // Re-write destination IP to server instance IP
                    actions.add(new SetNwDst(serverInstanceAddr));

                    // Re-write destination MAC to server instance MAC
                    actions.add(new SetDlDst(serverInstanceMAC));

                    // Output packet on port to server instance
                    actions.add(new Output(egressConnector));

                    // Create the flow
                    Flow flow = new Flow(match, actions);

                    // Use FlowProgrammerService to program flow.
                    Status status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Create flow table entry for response packets from server to client

                    // Match outgoing packets of this TCP connection 
                    match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800); 
                    match.setField(MatchType.NW_PROTO, (byte) 6);
                    match.setField(MatchType.NW_SRC, serverInstanceAddr);
                    match.setField(MatchType.NW_DST, clientAddr);
                    match.setField(MatchType.TP_SRC, (short) dstPort);
                    match.setField(MatchType.TP_DST, (short) clientPort);

                    // Re-write the server instance IP address to the public IP address
                    actions = new LinkedList<Action>();
                    actions.add(new SetNwSrc(publicInetAddress));
                    actions.add(new SetDlSrc(SERVICE_MAC));

                    // Output to client port from which packet was received
                    actions.add(new Output(ingressConnector));

                    flow = new Flow(match, actions);
                    status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Forward initial packet to selected server

                    log.trace("Forwarding packet to " + serverInstanceAddr.toString() + " through port " + egressConnector.getNodeConnectorIDString());
                    ethFrame.setDestinationMACAddress(serverInstanceMAC);
                    ipv4Pkt.setDestinationAddress(serverInstanceAddr);
                    inPkt.setOutgoingNodeConnector(egressConnector);                       
                    dataPacketService.transmitDataPacket(inPkt);

                    return PacketResult.CONSUME;
                }
            }
        }
    }

    // We did not process the packet -> let someone else do the job.
    return PacketResult.IGNORED;
}

Our load balancer reacts as follows to packet-in events. First, it uses the Data Packet Service to decode the incoming packet using method decodeDataPacket(inPkt). We are only interested in packets addressed to the public IP address and port number of our load-balanced service. Therefore, we have to check the destination IP address and port number of the received packet. To this end, we iteratively decode the packet layer by layer. First, we check whether we received an Ethernet frame, and get the payload of the frame, which should be an IP packet for a TCP connection. If the payload of the frame is indeed an IPv4 packet, we typecast it to the corresponding IPv4 packet class and use the methods getSourceAddress(...) and getDestinationAddress(...) to retrieve the IP addresses of the client (source) and service (destination). Then, we go up one layer and check for a TCP payload to retrieve the port information in a similar way.

After we have retrieved the IP address and port information from the packet, we check whether it is targeted at our load-balanced service (line 28). If it is not addressed to our service, we ignore the packet and let another handler process it (if any) by returning PacketResult.IGNORED as the result of the packet handler.

If the packet is addressed to our service, we choose one of the two physical server instances in a round-robin fashion (lines 38–52). The idea is to send the first request to server 1, the second request to server 2, the third to server 1 again, etc. Note that we might have multiple packet handlers for different packets executing in parallel (at least, we should not rely on sequential execution as long as we do not know how OpenDaylight handles requests). Therefore, we synchronize this part of the packet handler to make sure that only one thread is in this code section at a time.

Programming Flows

To forward the packets of this TCP connection to the selected server instance, we use its IP address and MAC address as target addresses. To this end, we re-write the destination IP address and destination MAC address of each incoming packet of this connection to the selected server’s addresses. Note that a TCP connection is identified by the 4-tuple [source IP, source port, destination IP, destination port]. Therefore, we use this information as match criteria for the flow that performs address re-writing and packet forwarding.

A flow table entry consists of a match rule and a list of actions. As said, the match rule should identify packets of a certain TCP connection. To this end, we create a new Match object and set the required fields as shown in lines 58–64. Since we are matching on a TCP/IPv4 datagram, we must make sure to identify this packet type by setting the ethertype (0x0800 meaning IPv4) and protocol id (6 meaning TCP). Moreover, we set the source and destination IP address and port information of the client and service that identify the individual TCP connection.

Afterwards, we define the actions to be applied to a matched packet of the TCP connection. We set an action for re-writing the IP destination address to the IP address of the selected server instance, as well as the destination MAC address (lines 70 and 73). Moreover, we define an output action to forward packets over the switch port of the server instance. In line 43 and line 49, we use the Switch Manager Service to retrieve the corresponding connector of the switch by its name. Note that these names are not simply the port numbers but s1-eth1 and s1-eth2 in my setup using Mininet. If you want to find out the name of a port, you can use the web GUI of the OpenDaylight controller (http://controllerhost:8080/) and inspect the port names of the switch.

Sometimes, it might also be handy to enumerate all connectors of a switch (node) — e.g., to flood a packet — using the following method:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);

Finally, we create the flow with match criteria and actions, and program the switch using the Flow Programmer service in line 82.

In the reverse direction from server to client, we also install a flow that re-writes the source IP address and MAC address of outgoing packets to the address information of the public service (lines 90–112).

Forwarding Packets

However, we are not done yet. Although every new packet of the connection will now be forwarded to the right server instance, we also have to forward the received initial packet (TCP SYN request) to the right server. To this end, we modify the destination address information of this packet as shown in lines 116–119. Then, we use the Data Packet Service to forward the packet using the method transmitDataPacket(...).

In this example, we simply re-used the received packet. However, sometimes you might want to create and send a new packet. To this end, you create the payloads of the packets on the different layers and encode them as a raw packet using the Data Packet Service:

TCP tcp = new TCP();
tcp.setDestinationPort(tcpDestinationPort);
tcp.setSourcePort(tcpSourcePort);
IPv4 ipv4 = new IPv4();
ipv4.setPayload(tcp);
ipv4.setSourceAddress(ipSourceAddress);
ipv4.setDestinationAddress(ipDestinationAddress);
ipv4.setProtocol((byte) 6);
Ethernet ethernet = new Ethernet();
ethernet.setSourceMACAddress(sourceMAC);
ethernet.setDestinationMACAddress(targetMAC);
ethernet.setEtherType(EtherTypes.IPv4.shortValue());
ethernet.setPayload(ipv4);
RawPacket destPkt = dataPacketService.encodeDataPacket(ethernet);

Testing

Following the instructions from my last tutorial, you can compile the OSGi bundle using Maven as follows:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

Then you start the OpenDaylight controller (here, I assume you use the release version located in directory ~/opendaylight):

user@host:$ cd ~/opendaylight
user@host:$ ./run.sh

Afterwards, to avoid conflicts with our service, you should first stop OpenDaylight’s simple forwarding service and OpenDaylight’s load balancing service (which has nothing to do with our load balancing service) from the OSGi console:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
true
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
true
osgi> stop 187

Both of these services implement packet handlers, and for now we want to make sure that they do not interfere with our handler.

Then, we can install our compiled OSGi bundle (located in /home/user/myctrlapp/target)

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

and start it:

osgi> start 256

You can also change the log level of our bundle to see log output down to the trace level:

osgi> setLogLevel de.frank_durr.myctrlapp.PacketHandler trace

Next, we create a simple Mininet topology with one switch and three hosts:

user@host:$ sudo mn --controller=remote,ip=129.69.210.89 --topo single,3 --mac --arp

Be sure to use the IP address of your OpenDaylight controller host. The option --mac assigns a MAC address according to the host number to each host (e.g., 00:00:00:00:00:01 for the first host). In our implementation, we use these addresses as hard-coded constants.

Option --arp pre-populates the ARP cache of the hosts. I use host 1 and 2 as the server hosts with the IP addresses 10.0.0.1 and 10.0.0.2. Host 3 runs the client. Therefore, I also set a static ARP table entry on host 3 for the public IP address of the service (10.0.0.100):

mininet> xterm h3
mininet h3> arp -s 10.0.0.100 00:00:00:00:00:64

On hosts 1 and 2, we start two simple servers listening on port 7777 using netcat:

mininet> xterm h1
mininet> xterm h2
mininet h1> nc -l 7777
mininet h2> nc -l 7777

Then, we send a message to our service from the client on host 3, again using netcat:

mininet h3> echo "Hello" | nc 10.0.0.100 7777

Now, you should see the output “Hello” in the xterm of host 1. If you execute the same command again, the output will appear in the xterm of host 2. This shows that requests (TCP connections) are correctly distributed among the two servers.

Where to go from here

Basically, you can now implement your service using reactive flow programming. However, some further services might be helpful. For instance, according to the paradigm of logically centralized control, it might be interesting to query the global topology of the network, the locations of hosts, etc. I plan to cover this in future tutorials.