Reactive Flow Programming with OpenDaylight

In my last OpenDaylight tutorial, I demonstrated how to implement an OSGi module for OpenDaylight. In this tutorial, I will show how to use these modules for reactive flow programming and packet forwarding.

In detail, you will learn:

  • how to decode incoming packets
  • how to set up flow table entries including packet match rules and actions
  • how to forward packets

Scenario

To make things concrete, we consider a simple scenario in this tutorial: load balancing of a TCP service (e.g., a web service using HTTP over TCP). The basic idea is that TCP connections to a service, addressed through a public IP address and port number, are distributed among two physical server instances using IP address re-writing performed by an OpenFlow switch. Whenever a client opens a TCP connection to the service, one of the server instances is chosen in a round-robin fashion, and a forwarding rule is installed by the network controller on the ingress switch to forward all incoming packets of this TCP connection to the chosen server instance. To make sure that the server instance accepts the packets of the TCP connection, the destination IP address is re-written to the IP address of the chosen server instance, and the destination MAC address is set to the MAC address of the server instance. In the reverse direction from server to client, the switch re-writes the source IP address of the server to the public IP address of the service. To the client, the response therefore appears to come from the public IP address, so load balancing is transparent to the client.

To keep things simple, I do not consider the routing of packets. Rather, I assume that the clients and the two server instances are connected to the same switch on different ports (see figure below). Moreover, I also simplify MAC address resolution by setting a static ARP table entry at the client host for the public IP address. Since there is no physical server assigned to the public IP address, we just set a fake MAC address (in a real setup, the gateway of the data center would receive the client request, so we would not need an extra MAC address assigned to the public IP address).

load_balancing

I assume that you have read the previous tutorial, so I skip some explanations on how to set up an OpenDaylight Maven project, subscribe to services, and further OSGi module basics.

You can find all necessary files of this tutorial in this archive: myctrlapp.tar.gz

The folder myctrlapp contains the Maven project of the OSGi module. You can compile and create the OSGi bundle with the following commands:

user@host:$ tar xzf myctrlapp.tar.gz
user@host:$ cd ~/myctrlapp
user@host:$ mvn package

The corresponding Eclipse project can be created using

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Registering Required Services and Subscribing to Packet-in Events

For our simple load balancer, we need the following OpenDaylight services:

  • Data Packet Service for decoding incoming packets and encoding and sending outgoing packets.
  • Flow Programmer Service for setting flow table entries on the switch.
  • Switch Manager Service to determine the outport of packets forwarded to the server instances.

As explained in my previous tutorial, we register for OSGi services by implementing the configureInstance(...) method of the Activator class:

public void configureInstance(Component c, Object imp, String containerName) {
    log.trace("Configuring instance");

    if (imp.equals(PacketHandler.class)) {
        // Define exported and used services for PacketHandler component.

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("salListenerName", "mypackethandler");

        // Export IListenDataPacket interface to receive packet-in events.
        c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

        // Need the DataPacketService for encoding, decoding, sending data packets
        c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks(
            "setDataPacketService", "unsetDataPacketService").setRequired(true));

        // Need FlowProgrammerService for programming flows
        c.add(createContainerServiceDependency(containerName).setService(IFlowProgrammerService.class).setCallbacks(
            "setFlowProgrammerService", "unsetFlowProgrammerService").setRequired(true));

        // Need SwitchManager service for enumerating ports of switch
        c.add(createContainerServiceDependency(containerName).setService(ISwitchManager.class).setCallbacks(
            "setSwitchManagerService", "unsetSwitchManagerService").setRequired(true));
    }
}

The set... and unset... strings define the names of callback methods. These callback methods are implemented in our PacketHandler class to receive service proxy objects, which can be used to call the services:

/**
 * Sets a reference to the requested DataPacketService
 */
void setDataPacketService(IDataPacketService s) {
    log.trace("Set DataPacketService.");

    dataPacketService = s;
}

/**
 * Unsets DataPacketService
 */
void unsetDataPacketService(IDataPacketService s) {
    log.trace("Removed DataPacketService.");    

    if (dataPacketService == s) {
        dataPacketService = null;
    }
}

/**
 * Sets a reference to the requested FlowProgrammerService
 */
void setFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Set FlowProgrammerService.");

    flowProgrammerService = s;
}

/**
 * Unsets FlowProgrammerService
 */
void unsetFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Removed FlowProgrammerService.");

    if (flowProgrammerService == s) {
        flowProgrammerService = null;
    }
}

/**
 * Sets a reference to the requested SwitchManagerService
 */
void setSwitchManagerService(ISwitchManager s) {
   log.trace("Set SwitchManagerService.");

   switchManager = s;
}

/**
 * Unsets SwitchManagerService
 */
void unsetSwitchManagerService(ISwitchManager s) {
    log.trace("Removed SwitchManagerService.");

    if (switchManager == s) {
        switchManager = null;
    }
}

Moreover, we register for packet-in events in the Activator class. To this end, we must declare that we implement the IListenDataPacket interface (line 11). This interface basically consists of one callback method, receiveDataPacket(...), for receiving packet-in events as described next.

Handling Packet-in Events

Whenever a packet without matching flow table entry arrives at the switch, it is sent to the controller and the event handler receiveDataPacket(...) of our packet handler class is called with the received packet as parameter:

@Override
public PacketResult receiveDataPacket(RawPacket inPkt) {
    // The connector ("port") the packet came from
    NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
    // The node that received the packet ("switch")
    Node node = ingressConnector.getNode();

    log.trace("Packet from " + node.getNodeIDString() + " " + ingressConnector.getNodeConnectorIDString());

    // Use DataPacketService to decode the packet.
    Packet pkt = dataPacketService.decodeDataPacket(inPkt);

    if (pkt instanceof Ethernet) {
        Ethernet ethFrame = (Ethernet) pkt;
        Object l3Pkt = ethFrame.getPayload();

        if (l3Pkt instanceof IPv4) {
            IPv4 ipv4Pkt = (IPv4) l3Pkt;
            InetAddress clientAddr = intToInetAddress(ipv4Pkt.getSourceAddress());
            InetAddress dstAddr = intToInetAddress(ipv4Pkt.getDestinationAddress());
            Object l4Datagram = ipv4Pkt.getPayload();

            if (l4Datagram instanceof TCP) {
                TCP tcpDatagram = (TCP) l4Datagram;
                int clientPort = tcpDatagram.getSourcePort();
                int dstPort = tcpDatagram.getDestinationPort();

                if (publicInetAddress.equals(dstAddr) && dstPort == SERVICE_PORT) {
                    log.info("Received packet for load balanced service");

                    // Select one of the two servers round robin.

                    InetAddress serverInstanceAddr;
                    byte[] serverInstanceMAC;
                    NodeConnector egressConnector;

                    // Synchronize in case there are two incoming requests at the same time.
                    synchronized (this) {
                        if (serverNumber == 0) {
                            log.info("Server 1 is serving the request");
                            serverInstanceAddr = server1Address;
                            serverInstanceMAC = SERVER1_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER1_CONNECTOR_NAME);
                            serverNumber = 1;
                        } else {
                            log.info("Server 2 is serving the request");
                            serverInstanceAddr = server2Address;
                            serverInstanceMAC = SERVER2_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER2_CONNECTOR_NAME);
                            serverNumber = 0;
                        }
                    }

                    // Create flow table entry for further incoming packets

                    // Match incoming packets of this TCP connection 
                    // (4 tuple source IP, source port, destination IP, destination port)
                    Match match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800);  // IPv4 ethertype
                    match.setField(MatchType.NW_PROTO, (byte) 6);       // TCP protocol id
                    match.setField(MatchType.NW_SRC, clientAddr);
                    match.setField(MatchType.NW_DST, dstAddr);
                    match.setField(MatchType.TP_SRC, (short) clientPort);
                    match.setField(MatchType.TP_DST, (short) dstPort);

                    // List of actions applied to the packet
                    List<Action> actions = new LinkedList<Action>();

                    // Re-write destination IP to server instance IP
                    actions.add(new SetNwDst(serverInstanceAddr));

                    // Re-write destination MAC to server instance MAC
                    actions.add(new SetDlDst(serverInstanceMAC));

                    // Output packet on port to server instance
                    actions.add(new Output(egressConnector));

                    // Create the flow
                    Flow flow = new Flow(match, actions);

                    // Use FlowProgrammerService to program flow.
                    Status status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Create flow table entry for response packets from server to client

                    // Match outgoing packets of this TCP connection 
                    match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800); 
                    match.setField(MatchType.NW_PROTO, (byte) 6);
                    match.setField(MatchType.NW_SRC, serverInstanceAddr);
                    match.setField(MatchType.NW_DST, clientAddr);
                    match.setField(MatchType.TP_SRC, (short) dstPort);
                    match.setField(MatchType.TP_DST, (short) clientPort);

                    // Re-write the server instance IP address to the public IP address
                    actions = new LinkedList<Action>();
                    actions.add(new SetNwSrc(publicInetAddress));
                    actions.add(new SetDlSrc(SERVICE_MAC));

                    // Output to client port from which packet was received
                    actions.add(new Output(ingressConnector));

                    flow = new Flow(match, actions);
                    status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Forward initial packet to selected server

                    log.trace("Forwarding packet to " + serverInstanceAddr.toString() + " through port " + egressConnector.getNodeConnectorIDString());
                    ethFrame.setDestinationMACAddress(serverInstanceMAC);
                    ipv4Pkt.setDestinationAddress(serverInstanceAddr);
                    inPkt.setOutgoingNodeConnector(egressConnector);                       
                    dataPacketService.transmitDataPacket(inPkt);

                    return PacketResult.CONSUME;
                }
            }
        }
    }

    // We did not process the packet -> let someone else do the job.
    return PacketResult.IGNORED;
}

Our load balancer reacts as follows to packet-in events. First, it uses the Data Packet Service to decode the incoming packet using method decodeDataPacket(inPkt). We are only interested in packets addressed to the public IP address and port number of our load-balanced service. Therefore, we have to check the destination IP address and port number of the received packet. To this end, we iteratively decode the packet layer by layer. First, we check whether we received an Ethernet frame, and get the payload of the frame, which should be an IP packet for a TCP connection. If the payload of the frame is indeed an IPv4 packet, we typecast it to the corresponding IPv4 packet class and use the methods getSourceAddress(...) and getDestinationAddress(...) to retrieve the IP addresses of the client (source) and service (destination). Then, we go up one layer and check for a TCP payload to retrieve the port information in a similar way.
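
The helper intToInetAddress(...) used above is not part of the SAL API; it converts the packed int returned by getSourceAddress() into a java.net.InetAddress. A minimal sketch of such a helper (class name illustrative, assuming the int holds the address in network byte order, as delivered by the SAL):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class AddressUtil {

    // Convert an IPv4 address packed into an int (network byte order,
    // most significant byte first) into an InetAddress object.
    static InetAddress intToInetAddress(int i) {
        byte[] b = new byte[] {
            (byte) (i >>> 24), (byte) (i >>> 16),
            (byte) (i >>> 8),  (byte) i
        };
        try {
            return InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            // Cannot happen: the byte array is always 4 bytes long.
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(intToInetAddress(0x0a000001).getHostAddress()); // prints 10.0.0.1
    }
}
```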

After we have retrieved the IP address and port information from the packet, we check whether it is targeted at our load-balanced service (line 28). If it is not addressed to our service, we ignore the packet and let another handler process it (if any) by returning PacketResult.IGNORED as the result of the packet handler.

If the packet is addressed to our service, we choose one of the two physical server instances in a round-robin fashion (line 38–52). The idea is to send the first request to server 1, the second request to server 2, the third to server 1 again, etc. Note that multiple packet handlers might be executed in parallel for different packets (at least, we should not rely on sequential execution as long as we do not know how OpenDaylight handles requests). Therefore, we synchronize this part of the packet handler to make sure that only one thread is in this code section at a time.
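
The synchronized round-robin selection could equivalently be implemented without an explicit lock. A self-contained sketch using AtomicInteger (the RoundRobin class and its names are illustrative, not from the tutorial code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {

    // Thread-safe round-robin selection over n servers without
    // an explicit synchronized block.
    private final AtomicInteger next = new AtomicInteger(0);
    private final int numServers;

    public RoundRobin(int numServers) {
        this.numServers = numServers;
    }

    // Returns 0, 1, ..., numServers-1, 0, 1, ... across all threads.
    public int nextServer() {
        // getAndIncrement is atomic; Math.floorMod maps the counter onto
        // a server index and stays non-negative even if the int overflows.
        return Math.floorMod(next.getAndIncrement(), numServers);
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(2);
        System.out.println(rr.nextServer()); // 0
        System.out.println(rr.nextServer()); // 1
        System.out.println(rr.nextServer()); // 0
    }
}
```

For only two servers, the synchronized flag used in the handler is perfectly adequate; the atomic counter mainly avoids lock contention when many packet-in events arrive concurrently.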

Programming Flows

To forward a packet to the selected server instance, we use its IP address and MAC address as target addresses for every packet of this TCP connection from the client. To this end, we re-write the destination IP address and destination MAC address of each incoming packet of this connection to the addresses of the selected server. Note that a TCP connection is identified by the 4-tuple [source IP, source port, destination IP, destination port]. Therefore, we use this information as match criteria for the flow that performs address re-writing and packet forwarding.

A flow table entry consists of a match rule and a list of actions. As said, the match rule should identify packets of a certain TCP connection. To this end, we create a new Match object and set the required fields as shown in line 58–64. Since we are matching on a TCP/IPv4 datagram, we must identify this packet type by setting the ethertype (0x0800, meaning IPv4) and protocol id (6, meaning TCP). Moreover, we set the source and destination IP address and port information of the client and service that identifies the individual TCP connection.

Afterwards, we define the actions to be applied to a matched packet of the TCP connection. We set an action for re-writing the destination IP address to the IP address of the selected server instance, as well as the destination MAC address (line 70 and 73). Moreover, we define an output action to forward packets over the switch port of the server instance. In line 43 and line 49, we use the Switch Manager Service to retrieve the corresponding connector of the switch by its name. Note that these names are not simply the port numbers but s1-eth1 and s1-eth2 in my setup using Mininet. If you want to find out the name of a port, you can use the web GUI of the OpenDaylight controller (http://controllerhost:8080/) and inspect the port names of the switch.

Sometimes, it might also be handy to enumerate all connectors of a switch (node) — e.g., to flood a packet — using the following method:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);

Finally, we create the flow with match criteria and actions, and program the switch using the Flow Programmer service in line 82.

In the reverse direction from server to client, we also install a flow that re-writes the source IP address and MAC address of outgoing packets to the address information of the public service (line 90–112).

Forwarding Packets

However, we are not done yet. Although every new packet of the connection will now be forwarded to the right server instance, we also have to forward the initially received packet (TCP SYN request) to the right server. To this end, we modify the destination address information of this packet as shown in line 116–119. Then, we use the Data Packet Service to forward the packet using the method transmitDataPacket(...).

In this example, we simply re-used the received packet. However, sometimes you might want to create and send a new packet. To this end, you create the payloads of the packet on the different layers and encode them as a raw packet using the Data Packet Service; the resulting raw packet can then be sent by setting its outgoing node connector and calling transmitDataPacket(...), just as above:

TCP tcp = new TCP();
tcp.setDestinationPort(tcpDestinationPort);
tcp.setSourcePort(tcpSourcePort);
IPv4 ipv4 = new IPv4();
ipv4.setPayload(tcp);
ipv4.setSourceAddress(ipSourceAddress);
ipv4.setDestinationAddress(ipDestinationAddress);
ipv4.setProtocol((byte) 6);
Ethernet ethernet = new Ethernet();
ethernet.setSourceMACAddress(sourceMAC);
ethernet.setDestinationMACAddress(targetMAC);
ethernet.setEtherType(EtherTypes.IPv4.shortValue());
ethernet.setPayload(ipv4);
RawPacket destPkt = dataPacketService.encodeDataPacket(ethernet);

Testing

Following the instructions from my last tutorial, you can compile the OSGi bundle using Maven as follows:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

Then you start the OpenDaylight controller (here, I assume you use the release version located in directory ~/opendaylight):

user@host:$ cd ~/opendaylight
user@host:$ ./run.sh

Afterwards, to avoid conflicts with our service, you should first stop OpenDaylight’s simple forwarding service and OpenDaylight’s load balancing service (which has nothing to do with our load balancing service) from the OSGi console:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
true
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
true
osgi> stop 187

Both of these services implement packet handlers, and for now we want to make sure that they do not interfere with our handler.

Then, we can install our compiled OSGi bundle (located in /home/user/myctrlapp/target):

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

and start it:

osgi> start 256

You can also change the log level of our bundle to see log output down to the trace level:

osgi> setLogLevel de.frank_durr.myctrlapp.PacketHandler trace

Next, we create a simple Mininet topology with one switch and three hosts:

user@host:$ sudo mn --controller=remote,ip=129.69.210.89 --topo single,3 --mac --arp

Be sure to use the IP address of your OpenDaylight controller host. The option --mac assigns each host a MAC address according to its host number (e.g., 00:00:00:00:00:01 for the first host). In our implementation, we use these addresses as hard-coded constants.
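
A sketch of what these hard-coded constants could look like (the names SERVER1_MAC, SERVER2_MAC, and SERVICE_MAC appear in the handler code above; the byte values are assumed from Mininet's --mac numbering and the fake service MAC 00:00:00:00:00:64):

```java
public class MacConstants {
    // Mininet's --mac option numbers hosts as 00:00:00:00:00:0n.
    static final byte[] SERVER1_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x01}; // h1
    static final byte[] SERVER2_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x02}; // h2
    // Fake MAC for the public service IP 10.0.0.100 (decimal 100 = 0x64).
    static final byte[] SERVICE_MAC = {0x00, 0x00, 0x00, 0x00, 0x00, 0x64};

    public static void main(String[] args) {
        System.out.println(SERVER1_MAC[5]); // 1
    }
}
```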

Option --arp pre-populates the ARP caches of the hosts. I use hosts 1 and 2 as the server hosts with the IP addresses 10.0.0.1 and 10.0.0.2. Host 3 runs the client. Therefore, I also set a static ARP table entry on host 3 for the public IP address of the service (10.0.0.100):

mininet> xterm h3
mininet h3> arp -s 10.0.0.100 00:00:00:00:00:64

On hosts 1 and 2, we start two simple servers using netcat, listening on port 7777:

mininet> xterm h1
mininet> xterm h2
mininet h1> nc -l 7777
mininet h2> nc -l 7777

Then, we send a message to our service from the client on host 3, again using netcat:

mininet h3> echo "Hello" | nc 10.0.0.100 7777

Now, you should see the output “Hello” in the xterm of host 1. If you execute the same command again, the output will appear in the xterm of host 2. This shows that requests (TCP connections) are correctly distributed among the two servers.

Where to go from here

Basically, you can now implement your service using reactive flow programming. However, some further services might be helpful. For instance, according to the paradigm of logically centralized control, it might be interesting to query the global topology of the network, locations of hosts, etc. This, I plan to cover in future tutorials.

Developing OSGi Components for OpenDaylight

In this tutorial, I will explain how to develop an OSGi component for OpenDaylight that implements custom network control logic. In contrast to the REST interface, which I explained in one of my previous posts, OSGi components can receive packet-in events, which are triggered when a packet without a matching flow table entry arrives at a switch. Therefore, in order to do reactive flow programming, OSGi components are the right way to go in OpenDaylight.

Even for experienced Java programmers, the learning curve for developing OSGi components for OpenDaylight is quite steep. OpenDaylight uses powerful development tools and techniques like Maven and OSGi. Moreover, the project structure is quite complex and the number of Java classes overwhelming at first. However, as you will see in this tutorial, the development process is quite straightforward and thanks to Maven very convenient.

In order to explain everything step by step, I will go through the development of a simple OSGi component. This component does nothing special: it basically displays a message when an IPv4 packet is received, showing the destination address, data path id, and ingress port. However, you will learn many things that will help you in developing your own control components, such as:

  • How to set up an OpenDaylight Maven project
  • How to install, uninstall, start, and stop an OSGi bundle in OpenDaylight at runtime
  • How to manage the OSGi component dependencies and life-cycle
  • How to receive packet-in events through data packet listeners
  • How to decode packets using the OpenDaylight Data Packet Service

I should note here that I will use the so-called API-driven Service Abstraction Layer (SAL) of OpenDaylight. OpenDaylight implements a second, alternative API called the Model-driven SAL, which I might cover in a future post.

So let’s get started!

The Big Picture

The figure below shows the architecture of our system. It consists of a number of OSGi bundles, each packaging together Java classes, resources, and a manifest file. One of these bundles, called the MyControlApp bundle, is the bundle we develop in this tutorial. Other bundles come from the OpenDaylight project, like the SAL (Service Abstraction Layer) bundle.

Bundles are executed atop the OSGi Framework (Equinox in OpenDaylight). The interesting thing about OSGi is that bundles can be installed and removed at runtime, so you do not have to stop the SDN controller to add or modify control logic.

opendaylight-osgi

As you can also see, OSGi bundles offer services that can be called by other OSGi components. One interesting service that comes with OpenDaylight, and that we will use during this tutorial, is the Data Packet Service (interface IDataPacketService) for decoding data packets.

Although our simple control component does not offer functionality to any other bundle, it is important to understand that, in order to receive packet-in events, it has to offer a service implementing the IListenDataPacket interface. Whenever an OpenFlow packet-in event arrives at the controller, the SAL invokes the components that implement the IListenDataPacket interface, among them our bundle.
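
This listener pattern can be illustrated with a small self-contained analogue (the SimpleDispatcher class below is purely illustrative and not part of OpenDaylight; the real SAL dispatch is more involved and distinguishes further results such as KEEP_PROCESSING):

```java
import java.util.ArrayList;
import java.util.List;

public class SimpleDispatcher {

    // Mirrors the semantics of the SAL's PacketResult: CONSUME stops
    // further processing, IGNORED passes the packet on to the next listener.
    enum Result { CONSUME, IGNORED }

    // Analogue of IListenDataPacket: one callback per packet-in event.
    interface PacketListener {
        Result receive(byte[] packet);
    }

    private final List<PacketListener> listeners = new ArrayList<>();

    void register(PacketListener l) {
        listeners.add(l);
    }

    // Offer the packet to each registered listener until one consumes it.
    Result dispatch(byte[] packet) {
        for (PacketListener l : listeners) {
            if (l.receive(packet) == Result.CONSUME) {
                return Result.CONSUME;
            }
        }
        return Result.IGNORED;
    }

    public static void main(String[] args) {
        SimpleDispatcher d = new SimpleDispatcher();
        d.register(p -> p.length > 0 ? Result.CONSUME : Result.IGNORED);
        System.out.println(d.dispatch(new byte[] {1})); // CONSUME
    }
}
```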

Prerequisites

Before we start developing our component, we should get a running copy of OpenDaylight. Recently, the first release version of OpenDaylight was published. You can get a copy from this URL.

Or you can get the latest version from the OpenDaylight GIT repository and compile it yourself:

user@host:$ git clone https://git.opendaylight.org/gerrit/p/controller.git
user@host:$ cd ./controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Actually, in order to develop an OpenDaylight OSGi component, you do not need the OpenDaylight source code! As we will see below, we can just import the required components as JARs from the OpenDaylight repository.

During the compile process, you see that Maven downloads many Java packages on the fly. If you have never used Maven before, this can be quite confusing. Haven’t we just downloaded the complete project with git? Actually, Maven can automatically download project dependencies (libraries, plugins) from a remote repository and place them into your local repository so they are available during the build process. Your local repository usually resides in ~/.m2. If you look into this repository after you have compiled OpenDaylight, you will see all the libraries that Maven downloaded:

user@host:$ ls ~/.m2/repository/
antlr                     classworlds          commons-fileupload  dom4j          jline  regexp
aopalliance               com                  commons-httpclient  eclipselink    junit  stax
asm                       commons-beanutils    commons-io          equinoxSDK381  log4j  virgomirror
backport-util-concurrent  commons-cli          commons-lang        geminiweb      net    xerces
biz                       commons-codec        commons-logging     io             orbit  xml-apis
bsh                       commons-collections  commons-net         javax          org    xmlunit
ch                        commons-digester     commons-validator   jfree          oro

For instance, you see that Maven has downloaded the Apache Xerces XML parser. We will come back to this nice feature later when we discuss our project dependencies.

I will refer to the root directory of the controller as ~/controller in the following.

Creating the Maven Project

Now we start developing our OSGi component. Since OpenDaylight is based on Maven, it is a good idea to also use Maven for our own project. So we start by creating a Maven project for our OSGi component. First, create the following project structure. I will refer to the root directory of our component as ~/myctrlapp:

myctrlapp
  |--src
       |--main
           |--java
               |--de
                   |--frank_durr
                       |--myctrlapp

Obviously, Java implementations go into the folder src/main/java. I used the package de.frank_durr.myctrlapp for the implementation of my control component.
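
The directory skeleton above can be created in one step (assuming the project root ~/myctrlapp from the tutorial):

```shell
# Create the Maven source tree for the package de.frank_durr.myctrlapp
mkdir -p ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp
```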

Essential to the Maven build process is a so-called Project Object Model (POM) file called pom.xml that you have to create in the folder ~/myctrlapp with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>de.frank_durr</groupId>
    <artifactId>myctrlapp</artifactId>
    <version>0.1</version>
    <packaging>bundle</packaging>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.3.7</version>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Import-Package>
                            *
                        </Import-Package>
                        <Export-Package>
                            de.frank_durr.myctrlapp
                        </Export-Package>
                        <Bundle-Activator>
                            de.frank_durr.myctrlapp.Activator
                        </Bundle-Activator>
                    </instructions>
                    <manifestLocation>${project.basedir}/META-INF</manifestLocation>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.opendaylight.controller</groupId>
            <artifactId>sal</artifactId>
            <version>0.7.0</version>
        </dependency>
    </dependencies>

    <repositories>
        <!-- OpenDaylight releases -->
        <repository>
            <id>opendaylight-mirror</id>
            <name>opendaylight-mirror</name>
            <url>http://nexus.opendaylight.org/content/groups/public/</url>
            <snapshots>
                <enabled>false</enabled>
            </snapshots>
            <releases>
                <enabled>true</enabled>
                <updatePolicy>never</updatePolicy>
            </releases>
        </repository>
        <!-- OpenDaylight snapshots -->
        <repository>
            <id>opendaylight-snapshot</id>
            <name>opendaylight-snapshot</name>
            <url>http://nexus.opendaylight.org/content/repositories/opendaylight.snapshot/</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
            <releases>
                <enabled>false</enabled>
            </releases>
        </repository>
    </repositories>
</project>

First, we define our group id (unique id of our organization) and artifact id (name of our component/project) as well as a version number. The packaging element specifies that an OSGi bundle (JAR file with classes, resources, and manifest file) should be built.

During the Maven build process, plugins are invoked. One very important plugin here is the Bundle plugin from the Apache Felix project, which creates our OSGi bundle. The Import-Package instruction specifies every package that should be imported by the bundle. The wildcard * imports “everything referred to by the bundle content, but not contained in the bundle” [Apache Felix], which is reasonable and much less cumbersome than specifying the imports explicitly. Moreover, we export every implementation from our package.

The bundle activator is called during the life-cycle of our bundle when it is started or stopped. Below I show how it is used to register for services used by our component and how to export the interface of our component.

The dependency element specifies other packages on which our component depends. Remember when I said that Maven will download required libraries (JARs) automatically to your local repository in ~/.m2? Of course, it can only do that if you tell Maven what you need. We basically need the API-driven Service Abstraction Layer (SAL) of OpenDaylight. The OpenDaylight project provides its own repository with the readily compiled components (see the repositories element), so Maven will download the JARs from this remote repository. No need to import all the source code of OpenDaylight into Eclipse! In my example, I use the release version 0.7.0. You can also use a snapshot by changing the version to 0.7.0-SNAPSHOT (or whatever version is available in the snapshot repository; just browse the repository URL given above to find out). If you need further packages, have a look at the central Maven repository.
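
For instance, switching to a snapshot build would only change the version element of the dependency (the exact snapshot version string must exist in the snapshot repository):

```xml
<dependency>
    <groupId>org.opendaylight.controller</groupId>
    <artifactId>sal</artifactId>
    <!-- snapshot version instead of the 0.7.0 release -->
    <version>0.7.0-SNAPSHOT</version>
</dependency>
```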

From this POM file, you can now create an Eclipse project by executing:

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Remember to re-create the Eclipse project using this command whenever you change the POM.

Afterwards, you can import the project into Eclipse:

  • Menu Import / General / Existing projects into workspace
  • Select root folder ~/myctrlapp

Implementation of OSGi Component: The Activator

In order to implement our OSGi component, we only need two class files: an OSGi activator registering our component with the OSGi framework and a packet handler implementing the control logic and executing actions whenever a packet-in event is received.

First, we implement the activator by creating the file Activator.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp (the directory path must match the package name de.frank_durr.myctrlapp):

package de.frank_durr.myctrlapp;
import java.util.Dictionary;
import java.util.Hashtable;

import org.apache.felix.dm.Component;
import org.opendaylight.controller.sal.core.ComponentActivatorAbstractBase;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Activator extends ComponentActivatorAbstractBase {

    private static final Logger log = LoggerFactory.getLogger(Activator.class);

    public Object[] getImplementations() {
        log.trace("Getting Implementations");

        Object[] res = { PacketHandler.class };
        return res;
    }

    public void configureInstance(Component c, Object imp, String containerName) {
        log.trace("Configuring instance");

        if (imp.equals(PacketHandler.class)) {

            // Define exported and used services for PacketHandler component.

            Dictionary<String, Object> props = new Hashtable<String, Object>();
            props.put("salListenerName", "mypackethandler");

            // Export IListenDataPacket interface to receive packet-in events.
            c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

            // Need the DataPacketService for encoding, decoding, sending data packets
            c.add(createContainerServiceDependency(containerName)
                    .setService(IDataPacketService.class)
                    .setCallbacks("setDataPacketService", "unsetDataPacketService")
                    .setRequired(true));

        }
    }
}

We extend the base class ComponentActivatorAbstractBase from the OpenDaylight controller. Developers already familiar with OSGi know that the two methods start() and stop() are called by the OSGi framework when the bundle is started or stopped, respectively. These two methods are overridden in the class ComponentActivatorAbstractBase to manage the life-cycle of an OpenDaylight component. From there, the two methods getImplementations() and configureInstance() are called.

The method getImplementations() returns the classes implementing components of this bundle. A bundle can implement more than one component, for instance, a packet handler for ARP requests and one for IP packets. However, our bundle just implements one component: the one reacting to packet-in events, which is implemented by our PacketHandler class (the second class described below). So we just return one implementation.

Method configureInstance() configures the component and, in particular, declares exported service interfaces and the services used. Since an OSGi bundle can implement more than one component, it is good style to check which component is being configured (the imp.equals(PacketHandler.class) test).

Then we declare the services exported by our component. Recall that in order to receive packet-in events, the component has to implement the service interface IListenDataPacket. Therefore, by specifying in the setInterface() call that our class PacketHandler implements this interface, we implicitly register our component as a listener for packet-in events. Moreover, we have to give our listener a name using the property salListenerName. If you want to understand in detail what happens during registration, I recommend having a look at the method setListenDataPacket() of class org.opendaylight.controller.sal.implementation.internal.DataPacketService. There you will see that, so far, packet handlers are called sequentially. There might be many components that have registered for packet-in events, and you cannot force OpenDaylight to call your listener before another one gets the event. So the order in which listeners are called is basically unspecified. However, you can create dependency lists using the property salListenerDependency. Moreover, using the property salListenerFilter you can set an org.opendaylight.controller.sal.match.Match object for the listener to filter packets according to header fields. Otherwise, you will receive all packets (if no other listener consumes them before our handler is called; see below).

Besides exporting our packet listener implementation, we also use other services. These dependencies are declared by the createContainerServiceDependency() call. In our example, we only use one service, which implements the IDataPacketService interface. You might say now, “fine, but how do I get the object implementing this service to call it?”. To this end, you define two callback functions as part of your component class (PacketHandler), here called setDataPacketService() and unsetDataPacketService(). These callback functions are called with a reference to the service (see the implementation of PacketHandler below).

Implementation of OSGi Component: The Packet Handler

The second part of our implementation is the packet handler, which receives packet-in events (the class that you have configured through the activator above). To this end, we implement the class PacketHandler by creating the following file PacketHandler.java in the directory ~/myctrlapp/src/main/java/de/frank_durr/myctrlapp:

package de.frank_durr.myctrlapp;

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.opendaylight.controller.sal.core.Node;
import org.opendaylight.controller.sal.core.NodeConnector;
import org.opendaylight.controller.sal.packet.Ethernet;
import org.opendaylight.controller.sal.packet.IDataPacketService;
import org.opendaylight.controller.sal.packet.IListenDataPacket;
import org.opendaylight.controller.sal.packet.IPv4;
import org.opendaylight.controller.sal.packet.Packet;
import org.opendaylight.controller.sal.packet.PacketResult;
import org.opendaylight.controller.sal.packet.RawPacket;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PacketHandler implements IListenDataPacket {

    private static final Logger log = LoggerFactory.getLogger(PacketHandler.class);
    private IDataPacketService dataPacketService;

    static private InetAddress intToInetAddress(int i) {
        byte b[] = new byte[] { (byte) ((i>>24)&0xff), (byte) ((i>>16)&0xff), (byte) ((i>>8)&0xff), (byte) (i&0xff) };
        InetAddress addr;
        try {
            addr = InetAddress.getByAddress(b);
        } catch (UnknownHostException e) {
            return null;
        }

        return addr;
    }

    /*
     * Sets a reference to the requested DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
     */
    void setDataPacketService(IDataPacketService s) {
        log.trace("Set DataPacketService.");

        dataPacketService = s;
    }

    /*
     * Unsets DataPacketService
     * See Activator.configureInstance(...):
     * c.add(createContainerServiceDependency(containerName).setService(
     * IDataPacketService.class).setCallbacks(
     * "setDataPacketService", "unsetDataPacketService")
     * .setRequired(true));
     */
    void unsetDataPacketService(IDataPacketService s) {
        log.trace("Removed DataPacketService.");

        if (dataPacketService == s) {
            dataPacketService = null;
        }
    }

    @Override
    public PacketResult receiveDataPacket(RawPacket inPkt) {
        log.trace("Received data packet.");

        // The connector ("port") on which the packet was received
        NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
        // The node that received the packet ("switch")
        Node node = ingressConnector.getNode();

        // Use DataPacketService to decode the packet.
        Packet l2pkt = dataPacketService.decodeDataPacket(inPkt);

        if (l2pkt instanceof Ethernet) {
            Object l3Pkt = l2pkt.getPayload();
            if (l3Pkt instanceof IPv4) {
                IPv4 ipv4Pkt = (IPv4) l3Pkt;
                int dstAddr = ipv4Pkt.getDestinationAddress();
                InetAddress addr = intToInetAddress(dstAddr);
                System.out.println("Pkt. to " + addr.toString() + " received by node " + node.getNodeIDString() + " on connector " + ingressConnector.getNodeConnectorIDString());
                return PacketResult.KEEP_PROCESSING;
            }
        }
        // We did not process the packet -> let someone else do the job.
        return PacketResult.IGNORED;
    }
}

As you can see, our handler implements the listener interface IListenDataPacket. This interface declares the function receiveDataPacket(), which is called with the raw packet after a packet-in event from OpenFlow.

In order to parse the raw packet, we use the OpenDaylight Data Packet Service (object dataPacketService). As described for the activator, during the component configuration, we set two callback functions in our packet handler implementation, namely, setDataPacketService() and unsetDataPacketService(). Method setDataPacketService() is called with a reference to the data packet service, which is then used for parsing raw packets. After receiving a raw packet “inPkt”, we call dataPacketService.decodeDataPacket(inPkt) to get a layer 2 frame. Using instanceof, we can check for the class of the returned packet. If it is an Ethernet frame, we go on and get the payload from this frame, which is the layer 3 packet. Again, we check the type, and if it is an IPv4 packet, we dump the destination address.

Moreover, the example shows how to determine the node (i.e., switch) that received the packet and the connector (i.e., port) on which the packet was received (via getIncomingNodeConnector() and getNode()).

Finally, we decide whether the packet should be further processed by another handler or whether we want to consume the packet, by returning a corresponding return value. PacketResult.KEEP_PROCESSING means our handler has processed the packet, but others should also be allowed to do so. PacketResult.CONSUME means no handler after us receives the packet anymore (as described above, handlers are kept in a list and called sequentially). PacketResult.IGNORED means we did not handle the packet, so packet processing should go on.
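
The sequential dispatch just described can be sketched in a few lines of plain Java. This is a hypothetical model for illustration only: the class DispatchSketch and the string "packet" are made up, and this is not OpenDaylight's actual DataPacketService code.

```java
import java.util.List;
import java.util.function.Function;

public class DispatchSketch {

    enum PacketResult { IGNORED, KEEP_PROCESSING, CONSUME }

    // Walk the listener list in order; CONSUME stops the chain,
    // KEEP_PROCESSING and IGNORED let the next listener see the packet.
    static int dispatch(List<Function<String, PacketResult>> listeners, String pkt) {
        int called = 0;
        for (Function<String, PacketResult> listener : listeners) {
            called++;
            if (listener.apply(pkt) == PacketResult.CONSUME) {
                break; // no listener after this one receives the packet
            }
        }
        return called; // number of listeners that saw the packet
    }

    public static void main(String[] args) {
        List<Function<String, PacketResult>> chain = List.of(
                p -> PacketResult.KEEP_PROCESSING, // processed, but passed on
                p -> PacketResult.CONSUME,         // processed, chain stops here
                p -> PacketResult.IGNORED);        // never reached
        System.out.println(dispatch(chain, "some packet")); // prints 2
    }
}
```

The third listener never runs, which is exactly why we stop the Simple Forwarding and Load Balancing services before testing below.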

Deploying the OSGi Bundle

Now that we have implemented our component, we can first compile and bundle it using Maven:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

If our POM file and code are correct, this should create the bundle (JAR file) ~/myctrlapp/target/myctrlapp-0.1.jar.

This bundle can now be installed in the OSGi framework Equinox of OpenDaylight. First, start the controller:

user@host:$ cd ~/controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/
user@host:$ ./run.sh

In the OSGi console install the bundle by specifying its URL:

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

We see that the id of our bundle is 256. Using this id, we can start the bundle next:

osgi> start 256

You can check whether it is running by listing all OSGi bundles using the command ss:

osgi> ss
...
251 ACTIVE org.opendaylight.controller.hosttracker.implementation_0.5.1.SNAPSHOT
252 ACTIVE org.opendaylight.controller.sal-remoterpc-connector_1.0.0.SNAPSHOT
253 ACTIVE org.opendaylight.controller.config-persister-api_0.2.3.SNAPSHOT
256 ACTIVE de.frank_durr.myctrlapp_0.1.0

Similarly, you can stop and uninstall the bundle using the commands stop and uninstall, respectively:

osgi> stop 256
osgi> uninstall 256

Before we test our bundle, we stop two OpenDaylight services, namely the Simple Forwarding Service and the Load Balancing Service:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
true
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
true
osgi> stop 187

Why did we do that? Because these are the two services that also implement a packet listener. For testing, we want to make sure they are not getting in our way and consuming packets before we can get them.

Testing

For testing, we use a simple linear Mininet topology with two switches and two hosts connected at the ends of the line:

user@host:$ sudo mn --controller=remote,ip=129.69.210.89 --topo linear,2

The given IP is the IP of our controller host.

Now let’s ping host 2 from host 1 and see the output in the OSGi console:

mininet> h1 ping h2

osgi>
Pkt. to /10.0.0.2 received by node 00:00:00:00:00:00:00:01 on connector 1
Pkt. to /10.0.0.1 received by node 00:00:00:00:00:00:00:02 on connector 1

You see that our handler received a packet from both switches with the data path ids 00:00:00:00:00:00:00:01 and 00:00:00:00:00:00:00:02 as well as the ports (1) on which they have been received and the destination IP addresses 10.0.0.2 and 10.0.0.1. So it worked.

Where to go from here?

What I did not show in this tutorial is how to send a packet. If you join me again, you can see that in one of my next tutorials here on this blog.

Securing OpenDaylight’s REST Interfaces

OpenDaylight comes with a set of REST interfaces. For instance, in one of my previous posts, I have introduced OpenDaylight’s REST interface for programming flows. With these interfaces, you can easily outsource your control logic to a remote server other than the server on which the OpenDaylight controller is running. Basically, the controller offers a web service, and the control application invokes this service sending REST requests via HTTP.

Although the concept to offer network services as web services is very nice and lowers the barriers to “program” the network significantly, it also brings up security problems well known from web services. If you do not authenticate clients, any client that can send HTTP requests to your controller can control your network — certainly something you want to avoid!

Therefore, in this post, I will show how to secure OpenDaylight’s REST interfaces.

Authentication in OpenDaylight

The REST interfaces are so-called northbound interfaces between controller and control application. So you can think of the controller as the service and the control application as the client as shown in the figure below.

insecure_network

In order to ensure that the controller only accepts requests from authorized clients, clients have to authenticate themselves. OpenDaylight uses HTTP Basic authentication, which is based on user names and passwords (default: admin, admin). Sounds good: So only a client with the valid password can invoke the service … or is there a problem? In order to see the security threats, we have to take a closer look at the HTTP Basic authentication mechanism.

The following command invokes the Flow Programmer service of OpenDaylight via cURL and prints the HTTP header information of the request:

user@host:$ curl -u admin:admin -H 'Accept: application/xml' -v 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/'
* About to connect() to localhost port 8080 (#0)
* Trying 127.0.0.1... connected
* Server auth using Basic with user 'admin'
> GET /controller/nb/v2/flowprogrammer/default/ HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8080
> Accept: application/xml
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 01:00:00 CET
< Set-Cookie: JSESSIONIDSSO=9426E0F12A0A0C80BE549451707EF339; Path=/
< Set-Cookie: JSESSIONID=DB23D1EE61348E101E6CE8117A04B8D8; Path=/
< Content-Type: application/xml
< Content-Length: 62
< Date: Sun, 12 Jan 2014 16:50:38 GMT
<
* Connection #0 to host localhost left intact
* Closing connection #0
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list/>

The interesting header field is “Authorization” with its value “Basic YWRtaW46YWRtaW4=”. Here, “YWRtaW46YWRtaW4=” is the user name and password sent from the client to the controller. Although this value looks quite cryptic, it is actually plain text. It is simply the Base64 encoding of the user name and password string “admin:admin”. Base64 is a simple translation of 8 bit characters to 6 bit characters involving no encryption or hashing at all! Basically, it comes from the time when SMTP was restricted to sending 7 bit ASCII characters. Everything else, like binary (8 bit) content, had to be translated to 7 bit characters first, and exactly that is the job of Base64 encoding. You can use paper and pencil to decode it: just interpret the bit pattern of three 8 bit characters as four 6 bit characters and look up the values of the 6 bit characters in the Base64 table. Or if you are lazy, just use one of the many Base64 decoders on the web.
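
You can check this yourself with a few lines of Java using the JDK's java.util.Base64 class (the class name BasicAuthValue is just for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthValue {

    // Compute the value of the Authorization header for HTTP Basic authentication.
    static String basicAuth(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.US_ASCII));
    }

    public static void main(String[] args) {
        // Reproduces the header value from the cURL trace above.
        System.out.println(basicAuth("admin", "admin")); // prints: Basic YWRtaW46YWRtaW4=

        // Decoding requires no secret at all: Base64 is an encoding, not encryption.
        byte[] raw = Base64.getDecoder().decode("YWRtaW46YWRtaW4=");
        System.out.println(new String(raw, StandardCharsets.US_ASCII)); // prints: admin:admin
    }
}
```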

Now the problem should become obvious. If the network between client and controller is untrusted and an attacker can eavesdrop on the communication channel, he can read your user name and password.

Securing the REST Interface

Now that we see the problem, also the solution should become obvious. We need a secure channel between client and controller, so an attacker cannot read the header fields of the HTTP request. The HTTPS standard provides exactly that. Moreover, the client can make sure that it really connects to the right controller, and not the controller of an attacker who just wants to intercept our password. So we use HTTPS to encrypt the channel between client and controller and to authenticate the controller, and HTTP Basic authentication to authenticate the client.

So the trick is enabling HTTPS in OpenDaylight, which is turned off by default. Note that above we used the insecure HTTP protocol on port 8080. Now we want to use HTTPS on port 8443 (or 443 if you want to use the official HTTPS port instead of the alternative port).

OpenDaylight uses the Tomcat servlet container to provide its web services. Therefore, the steps to enable HTTPS are very similar to configuring Tomcat.

First, we need a server certificate that the client can use to authenticate the server. Of course, a certificate signed by a trusted certification authority would be best. However, here I will show how to create your own self-signed certificate using the Java keytool:

user@host:$ keytool -genkeypair -v -alias tomcat -storepass changeit -validity 1825 -keysize 1024 -keyalg DSA
What is your first and last name?
[Unknown]: duerr-mininet.informatik.uni-stuttgart.de
What is the name of your organizational unit?
[Unknown]: Institute of Parallel and Distributed Systems
What is the name of your organization?
[Unknown]: University of Stuttgart
What is the name of your City or Locality?
[Unknown]: Stuttgart
What is the name of your State or Province?
[Unknown]: Baden-Wuerttemberg
What is the two-letter country code for this unit?
[Unknown]: de
Is CN=duerr-mininet.informatik.uni-stuttgart.de, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de correct?
[no]: yes

Generating 1,024 bit DSA key pair and self-signed certificate (SHA1withDSA) with a validity of 1,825 days
for: CN=duerr-mininet.informatik.uni-stuttgart.de, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de
Enter key password for <tomcat>
(RETURN if same as keystore password):
Re-enter new password:
[Storing /home/duerrfk/.keystore]

This creates a certificate valid for five years (1825 days) and stores it in the keystore .keystore in my home directory /home/duerrfk. As first and last name, we use the DNS name of the machine the controller is running on. The rest should be pretty obvious.

With this information, we can now configure the OpenDaylight controller. First, check out and compile OpenDaylight, if you haven’t done so already:

user@host:$ git clone https://git.opendaylight.org/gerrit/p/controller.git
user@host:$ cd controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Now edit the following file, where “controller” is the root directory of the controller you checked out:

user@host:$ emacs controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/configuration/tomcat-server.xml

Uncomment the following XML element:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
maxThreads="150" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS"
keystoreFile="/home/duerrfk/.keystore"
keystorePass="changeit"/>

Use the keystore location and password that you used before with the keytool command.

Now you can start OpenDaylight and connect via HTTPS to the controller on port 8443. Use your web browser to try it.

Making Secure Calls

One last step needs to be done before we can call the controller securely from a client. If you did not use a certificate signed by a well-known certification authority (as we did above with our self-signed certificate), you need to present the client with the server certificate it should use for authenticating the controller. If you are using cURL, the required option is “--cacert”:

user@host:$ curl -u admin:admin --cacert ~/cert-duerr-mininet.pem -v -H 'Accept: application/xml' 'https://duerr-mininet.informatik.uni-stuttgart.de:8443/controller/nb/v2/flowprogrammer/default/'

So the last question is, how do we get the server certificate in PEM format? To this end, we can use openssl to call our server and store the returned certificate:

user@host:$ openssl s_client -connect duerr-mininet.informatik.uni-stuttgart.de:8443

The PEM certificate is everything between “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” (including these two lines), so we can just copy this to a file. Note that you have to make sure that the call actually goes to the right server (not the server of an attacker). So better call it from the machine where your controller is running to avoid a “chicken and egg” problem.
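
If you need this more than once, the copy step is easy to script. The following small Java helper is a sketch (the class name and sample input are made up); it simply cuts the first certificate block out of whatever s_client printed:

```java
public class PemCutter {

    static final String BEGIN = "-----BEGIN CERTIFICATE-----";
    static final String END = "-----END CERTIFICATE-----";

    // Return the first PEM certificate block in s (including the
    // BEGIN/END lines), or null if there is none.
    static String extractCertificate(String s) {
        int b = s.indexOf(BEGIN);
        if (b < 0) return null;
        int e = s.indexOf(END, b);
        if (e < 0) return null;
        return s.substring(b, e + END.length());
    }

    public static void main(String[] args) {
        // Abbreviated stand-in for real 'openssl s_client' output.
        String sClientOutput = "depth=0 CN = duerr-mininet...\n"
                + BEGIN + "\nMIICmDCC...base64 data...\n" + END
                + "\nread R BLOCK\n";
        System.out.println(extractCertificate(sClientOutput));
    }
}
```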

Summary

Now we can securely outsource our control application to a remote host, for instance, a host in our campus network or a cloud server running in a remote data center.

OpenDaylight: Programming Flows with the REST Interface and cURL

The SDN controller OpenDaylight comes with a Flow Programmer service, which makes it very easy for applications to program flows by using a REST interface. In this post, I will show how to use this service together with the command line tool cURL to add, modify, and delete flows.

What is so nice about REST?

REST is based on technologies that many programmers already know, in particular, HTTP to transport requests and responses between client (control application) and service (e.g., OpenDaylight’s Flow Programmer), and XML and JSON to describe the parameters of a request and the result. Because of its simplicity (compared to other web service standards like SOAP), REST is very popular and used by many web services in the World Wide Web, e.g., by Twitter or Flickr.

Since REST is based on popular technologies like HTTP, JSON, and XML, you can use it in a more or less straightforward way with most programming languages like Java, Python, C (you name it), and even command line tools like cURL. Therefore, the barrier to using it is very low. I think this very nicely shows how software-defined networking reduces the gap between application (programmer) and network. I am sure that after you have read this post, you will agree.

Having said this, it is also important to understand the limitations of REST in the scope of SDN. REST is based on request/response interaction, i.e., the control application (client) makes a request, and the service executes the request and returns the response. REST is less suited for event-based interaction, where the controller calls the application (strictly speaking, then the controller would be the client and the application the service since client and server are only roles of two communicating entities).

In OpenFlow, there is one important event that signals to the control application that a packet without a matching flow table entry has arrived at a switch (the packet-in event). In this case, a control application implementing the control logic has to decide what to do: drop the packet, forward it, or set up a flow table entry for similar packets. Because of this limitation, the REST interface is limited to proactive flow programming, where the control application proactively programs the flow tables of the switches; reactive flow programming, where the control application reacts to packet-in events, is implemented in OpenDaylight using OSGi components.

Programming Flows with REST

REST is all about resources and their states. REST stands for Representational State Transfer. What does that mean in the context of OpenDaylight’s Flow Programmer service? For the Flow Programmer service, you can think of resources as the whole network, a switch, or a flow. The basic job of the Flow Programmer is to query and change the state of these resources by returning, adding, or deleting flows. The state of a resource can be represented in different formats, namely, XML or JSON.

Adding Flows

That was still very abstract, wasn’t it? According to Einstein, the only way to explain something is by example. So let’s make some examples. First of all, we will add a flow to a switch. We use a very simple linear topology with two switches and two hosts depicted in the following figure.

mininet-topology

You can create this topology in Mininet with the following command assuming the OpenDaylight controller is running on the machine with IP 192.168.1.1:

mn --controller=remote,ip=192.168.1.1 --topo linear,2

We also open X-terminals for both hosts using the following commands:

mininet> xterm h1
mininet> xterm h2

With the command ifconfig in the respective terminal, you can find out the IP addresses of the hosts (h1: 10.0.0.1; h2: 10.0.0.2).

OpenDaylight already implements reactive forwarding logic, so a ping between host h1 and host h2 already works! You can send a ping from h1 to h2 to check this:

h1> ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=0.368 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.050 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.058 ms

Since these flows were programmed reactively using an OSGi module, we cannot see them in the Flow Programmer service! This service only knows about the flows that were created via the Flow Programmer itself.

Now let’s program a flow that blocks all TCP requests to port 80 (web server) targeted at Host 2. You can think of this as an in-network firewall. Instead of blocking the traffic at the host using a software firewall, we already block it on a switch, e.g., at a top-of-rack switch of the rack where h2 is located or even earlier at the core switches of your data center to keep your network free from unwanted traffic.

Before we set up our blocking flow entry, we verify that h1 can send requests on port 80 to h2, using netcat to simulate a web server and client:

h2> nc -lk 10.0.0.2 80
Hello
h1> echo "Hello" | nc 10.0.0.2 80

Here, h2 is listening (option -l) on port 80 for incoming requests (and keeps listening after a connection closes; option -k). h1 sends the string “Hello”, and h2 displays it, showing that the connection works.

I will now show you the cURL command to block TCP segments to port 80 at switch s2, and then explain the details:

controller> curl -u admin:admin -H 'Content-type: application/json' -X PUT -d '{"installInHw":"true", "name":"flow1", "node": {"id":"00:00:00:00:00:00:00:02", "type":"OF"}, "ingressPort":"2", "etherType": "0x800", "protocol": "6", "tpDst": "80", "priority":"65535","actions":["DROP"]}' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

We are using option “-u” to specify the user name and password (both “admin” which you might want to change for obvious reasons). OpenDaylight uses HTTP Basic authentication, so every request to a northbound REST interface must be authenticated with a user name and password.

According to the REST paradigm, resources are added using HTTP PUT requests. Therefore, we set the cURL option “-X PUT” to add the flow. If a resource with the same id already exists, it will be modified.

Moreover, we specify the HTTP content type as “application/json” since we are sending our request in JSON representation. You could as well use XML (Content-type: application/xml); however, here we use the simpler JSON format.

With option “-d”, we set the payload of the HTTP request, which in our case is a JSON document defining the flow to be added. A JSON document consists of a number of key-value pairs separated by colons, so it should be very easy to read the above example. Our new flow gets the name “flow1”, so we can later refer to it to modify or delete it. It will be installed in hardware on the switch — whatever that means for our emulated Mininet ;) The value of the key “node” consists of a JSON structure (marked by “{..}” in JSON) with the keys “id” and “type”. These are the data path id and type (OF = OpenFlow) of the switch. Actually, including the node id, type, and flow name in the document is redundant from my point of view, because they are also part of the URL as you can see. According to the REST paradigm, the URL is the right place to specify the path of the resource. Anyway, it will not work without them, so we better include them.

The rest defines the values of our flow. In order to match packets to port 80 over TCP/IP, you have to specify (1) the ethertype “0x800” to identify IPv4 packets; (2) protocol id 6 to identify TCP segments; (3) the transport layer destination address (also known as port) 80. Moreover, we specify the incoming port and a flow priority. Priority 65535 is the highest priority, so we can be sure that this entry will take effect even if there are other entries that match the same packet (e.g., set up by the reactive forwarding module of OpenDaylight). Finally, we have to specify an action. In this case, the packet will be dropped. Another frequently used action is to output the packet on a certain port (e.g., “OUTPUT=1”). As you can see, the actions field is an array (marked by “[..]” in JSON), so you can specify multiple actions.
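
If you later want to issue the same request from a program instead of cURL, the URL and JSON body are easy to assemble. The sketch below (class and method names made up) only builds the two strings, mirroring the cURL example; actually sending the PUT, e.g. via java.net.HttpURLConnection with an Authorization header, is omitted:

```java
public class FlowRequest {

    // Resource URL of a static flow on a given OpenFlow node.
    static String flowUrl(String base, String nodeId, String flowName) {
        return base + "/controller/nb/v2/flowprogrammer/default/node/OF/"
                + nodeId + "/staticFlow/" + flowName;
    }

    // Same flat JSON document as in the cURL example above:
    // drop TCP segments to port 80 arriving on port 2.
    static String flowBody(String nodeId, String flowName) {
        return "{\"installInHw\":\"true\", \"name\":\"" + flowName + "\", "
                + "\"node\": {\"id\":\"" + nodeId + "\", \"type\":\"OF\"}, "
                + "\"ingressPort\":\"2\", \"etherType\":\"0x800\", \"protocol\":\"6\", "
                + "\"tpDst\":\"80\", \"priority\":\"65535\", \"actions\":[\"DROP\"]}";
    }

    public static void main(String[] args) {
        String node = "00:00:00:00:00:00:00:02";
        System.out.println("PUT " + flowUrl("http://localhost:8080", node, "flow1"));
        System.out.println(flowBody(node, "flow1"));
    }
}
```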

So let’s see whether our flow table entry has some effect by sending another hello message via TCP/IP to port 80:

h1> echo "Hello" | nc 10.0.0.2 80
h1> echo $?
1

The second command returns the exit status of the first command, which is 1, i.e., non-zero, so it failed. Therefore, no packets to port 80 arrive at h2 anymore. You can check whether other ports are still accessible by changing the port number of the netcat commands. Actually, they are, so everything worked as expected and we successfully programmed our first flow.

Querying Flows

Now that we have programmed a flow, we can query for installed flows. According to the REST paradigm, you can query the state of a resource using a GET request. The specific request for querying all flows of the network looks as follows:

controller> curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list><flowConfig><installInHw>true</installInHw><name>flow1</name><node><id>00:00:00:00:00:00:00:02</id><type>OF</type></node><ingressPort>2</ingressPort><priority>65535</priority><etherType>0x800</etherType><protocol>6</protocol><tpDst>80</tpDst><actions>DROP</actions></flowConfig></list>

Note that this time, we are requesting XML as the content type by setting the HTTP header field “Accept” to “application/xml”. We could just as well use JSON again, which is easier to parse and smaller.
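The XML response shown above is easy to process programmatically. The following Python sketch parses the response from the example (here embedded as a string literal) and extracts the flow names and actions:

```python
import xml.etree.ElementTree as ET

# The XML document returned by the GET request above.
xml_response = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    '<list><flowConfig><installInHw>true</installInHw><name>flow1</name>'
    '<node><id>00:00:00:00:00:00:00:02</id><type>OF</type></node>'
    '<ingressPort>2</ingressPort><priority>65535</priority>'
    '<etherType>0x800</etherType><protocol>6</protocol><tpDst>80</tpDst>'
    '<actions>DROP</actions></flowConfig></list>'
)

# Map each flow name to its list of actions.
root = ET.fromstring(xml_response)
flows = {
    cfg.findtext('name'): [a.text for a in cfg.findall('actions')]
    for cfg in root.findall('flowConfig')
}
print(flows)
```

In a script you would of course feed the live output of the GET request into the parser instead of a string literal.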

We can also query a particular switch by specifying its data path id in the URL:

curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02'

Or we can query the definition of a certain flow using the flow’s name in the resource URL:

curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

Deleting Flows

Finally, we can also delete flows using the HTTP DELETE request:

curl -u admin:admin -X DELETE 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

The URL specifies the resource (here a flow) to be deleted. HTTP DELETE requests have no payload — I am just mentioning this because the Floodlight controller required a payload for delete requests, which most HTTP implementations refuse to send; so the OpenDaylight interface is cleaner and more REST’ish in that respect.
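The same DELETE request can be built with nothing but the Python standard library. The sketch below only constructs the request object (including HTTP basic authentication for admin:admin) without sending it, which also illustrates the point that no payload is attached:

```python
import base64
from urllib.request import Request

# Resource URL of the flow, identical to the path used in the curl example.
url = ('http://localhost:8080/controller/nb/v2/flowprogrammer/default'
       '/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1')

req = Request(url, method='DELETE')
req.add_header('Authorization',
               'Basic ' + base64.b64encode(b'admin:admin').decode())

# No payload is attached: the resource is identified purely by the URL.
# urllib.request.urlopen(req) would actually send the request to a
# running controller.
print(req.get_method())
```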

Summary

That’s basically it. I hope you agree that programming flows using REST is really simple. In future posts, I plan to introduce other northbound REST APIs of OpenDaylight and also the OSGi interface for reactive flow programming. If you are interested, I hope to see you again.

If you need further information about the Flow Programmer REST interface, you can visit the OpenDaylight website.

Wireless Connection for Carrera Slot Car Track

This post is about a special networked device: a slot car track and how to connect it via Bluetooth to a PC, laptop, or mobile phone.

At Christmas we had a lot of fun playing with my nephew’s brand new slot car track: a Carrera Digital 132 — so everything I write here applies to this specific slot car track, although I believe it is at least applicable to all other digital models from Carrera. I realized that what used to be analog some (ten) years ago is now digital and has a connection to a so-called “PC Unit”. After searching a little bit, I found out that this is nothing other than a serial connection using TTL-level signals. By connecting a PC with the right software (which you can get for free), you get a lap counter, you can measure lap times, etc.

However, this PC Unit from Carrera has two drawbacks:

  • It costs about 65 EUR — a lot of money for a simple box that just converts TTL-level serial signals to a USB interface.
  • It is wired, i.e., you have to place your PC or laptop close to the track and cannot easily connect wireless devices like smartphones or tablets that do not have a USB port.

Therefore, I decided to connect the slot car track through a cheap Bluetooth-Serial module. With this module, the slot car track will behave as a Bluetooth device implementing a serial profile (“wireless serial port”).

The Circuit

The key component is the HC-06 Bluetooth-Serial module depicted below.

Bluetooth-Serial Module

This module costs about 8 EUR on eBay (including shipping from China). Be sure to get a slave module, so that the PC can act as the master establishing the connection to the slave module. The module takes 3.3 V to 5 V TTL-level signals as input, which is just fine for the PC Unit socket of the slot car track.

The Bluetooth board has an on-board voltage regulator that works with 3.6 V to 6 V. Actually, the PC Unit connector (a PS/2 socket) has a 5 V output (see the figure of the socket below). Although I believe it would provide enough power for the Bluetooth module, I did not want to risk ruining my nephew’s slot car track on Christmas for obvious reasons ;) Therefore, I decided to use a separate 5 V voltage regulator to convert the 14.4 V output of the PC Unit socket to 5 V.

The circuit is very simple as can be seen below.

Circuit

Bluetooth Slot Car Connector

You have to cross-connect the RX and TX pins of the PC Unit socket and the Bluetooth module. As the voltage regulator (IC1), you can use an L7805 or L4940V5, for instance. To stabilize the output, you should add a small capacitor (C1; I used 22 µF, but you can also use higher values without problems).

The PC Unit connector X1 is a PS/2 connector. Below you see its pins labeled with the numbers from the schematic above. Be sure not to mix up 14.4 V and ground, or RX and TX!

PC Unit Connector

The pins of the Bluetooth-Serial module are nicely labeled as can be seen on the photos above.

Configuring the Bluetooth Module

The serial port of the track uses 19200 baud, 8 data bits, no parity bit, and 1 stop bit. By default, the Bluetooth module uses 9600 baud, so we have to change its configuration. To this end, you can use an FTDI breakout board (see below), which converts the TTL-level signals of a serial connection to USB. BTW: you can also use this board to connect the slot car track to the PC or laptop with a cable, although this somehow defeats the purpose of this whole post ;)

FTDI

This board is connected to the Bluetooth module, and as long as no wireless device is connected to the Bluetooth module, you can configure it using AT commands. You need serial terminal software for this, e.g., PuTTY. Be sure to use the connection settings 9600/8/N/1 initially, until you have changed the baud rate.

Depending on the Bluetooth module you got, the AT commands might be slightly different. For my module, you can change the baud rate to 19200 baud with

AT+BAUD5

You have to be fast enough when you type in this command, since the module does not wait for a final line feed! So better copy and paste this line into your serial terminal. The module should respond with “OK19200”. If you are not sure whether your connection works, you can first simply type AT; the module should then respond with “OK”.

You can also change the name of the Bluetooth device using the following command (where “SLOTCAR” is the new name here):

AT+nameSLOTCAR

Of course, you can use any other name than SLOTCAR.

Testing it

To test it, first try to connect your PC or laptop via Bluetooth to the module. The pairing code is 1234 for my module. The module has an LED that flashes while there is no connection, and is permanently on when it is connected to a remote master device. Be sure to configure the serial port of the Bluetooth device on your PC to 19200/8/N/1.

To further test it — and finally use it :) — I downloaded the X-Lap software from Carrera. When you start this software, it tries to connect to the slot car track using all available serial ports. Your Bluetooth serial port should be found automatically.

That’s it. Hope that it works and you have a lot of fun racing :)

 

Panodroid

What is Panodroid?

Since I enjoy taking panorama photos and wanted to watch them on my Android phone, I developed Panodroid: an interactive panorama image viewer for Google’s Android platform targeted at smart phones and tablets. It displays equirectangular panorama images (360 deg x 180 deg spherical panoramas) hosted at Flickr, local images stored on the device, or images from any user-defined URL. The user can rotate the view by 360 degrees horizontally and 180 degrees vertically. Panodroid supports kinetic rotation and tag-based image queries. It can also act as a generic panorama image viewer for other third-party apps (see developer information below).

If you want to get an impression of Panodroid, I recommend to watch this short video:

OpenPanodroid

After a while I realized that I do not have enough time to add further features to Panodroid. Therefore, I decided to make Panodroid open source. This open version is called OpenPanodroid. I hope the community will give OpenPanodroid a hand and add some more features or bug fixes — if there are any ;) I also hope that the source code is helpful for other projects that require a fast, high-quality panorama image viewer, or just to learn how OpenPanodroid converts and renders panoramas (e.g., using OpenGL and native code). I am looking forward to any feedback on how you modified or used Panodroid!

The full source code can be downloaded from github:

git clone git://github.com/duerrfk/OpenPanodroid.git

Getting Panodroid

You can download Panodroid from Google Play using one of the following links, or simply by scanning the QR code below:

panodroid QR code

Supported Image Formats

To display your panorama image with Panodroid, it must fulfill some properties. It must be a full-spherical panorama covering 360 degrees horizontally and 180 degrees vertically. It must be stored as single JPG or PNG image using the equirectangular projection. Such images have an image resolution where the image width is two times the image height. This is a common format for panorama images and should be supported by most applications for creating panorama images.
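The 2:1 aspect-ratio requirement described above is easy to check programmatically. The following small Python sketch (my own illustration, not part of Panodroid) tests whether an image resolution can be a full-spherical equirectangular panorama:

```python
def is_full_spherical(width, height):
    """A 360 x 180 degree equirectangular panorama always has a 2:1
    aspect ratio, since the horizontal axis covers twice the angular
    range of the vertical axis."""
    return width == 2 * height

# Resolutions mentioned in this post:
print(is_full_spherical(6000, 3000))  # full sphere
print(is_full_spherical(1024, 512))   # full sphere (small Flickr size)
print(is_full_spherical(4000, 3000))  # not a full-spherical panorama
```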

Developer Information

Panodroid can be used by other applications to display panorama images. To call the panorama viewer activity, you have to supply a URI pointing at the panorama image (a remote file URL (http://), a local file path (file://), or a content URI (content://)). The following code example shows how to invoke Panodroid:

// URI of the panorama image (can also be a file:// path or content:// URI)
Uri panoUri = Uri.parse("http://www.frank-durr.de/pano-6000.jpg");

// Explicitly address the viewer activity of the Panodroid app.
ComponentName panoViewerComponent =
    new ComponentName("de.frank_durr.panodroid",
        "de.frank_durr.panodroid.PanoViewerActivity");

// Send an ACTION_VIEW intent carrying the image URI to the viewer activity.
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setComponent(panoViewerComponent);
intent.setData(panoUri);

startActivity(intent);

Frequently Asked Questions

  • Question: Why are some panoramas only displayed in poor quality?
    • Answer: On the one hand, display quality depends on the resolution of the input panorama image. In general, images with a resolution of 3000 pixel x 1500 pixel and higher will give good quality on devices with a screen resolution of 800 pixel x 480 pixel (WVGA smart phones). About 6000 pixel x 3000 pixel will give optimal quality for WVGA screens. If you display images from Flickr, please note that many Flickr users do not have a pro account and therefore can only offer images of 1024 pixel x 512 pixel. These images can only be displayed in low quality. You can exclude such low resolution images in the panorama search dialog. Moreover, high resolution images are tagged with a star in the panorama list view. On the other hand, display quality depends on the texture size used for the 3D visualization. The default (1024 pixel x 1024 pixel) should be sufficient for smart phones with WVGA screens. To further increase the display quality, e.g., on tablets with larger screens, you can select a texture size of 2048 pixel x 2048 pixel in the preferences menu. However, this requires more memory and leads to longer image conversion times.
  • Question: What does the star in the panorama list view mean?
    • Answer: The star marks high resolution images that can be displayed with higher quality.
  • Question: Why is image conversion slow on my device?
    • Answer: Images are stored at Flickr as equirectangular images. This format cannot be displayed directly as a 3D image and, therefore, has to be converted into another format (a cubic panorama). This step is costly for mobile devices with (compared to desktop and server CPUs) slow CPUs. We have optimized this step as much as possible using a native implementation of the conversion routine, which reduces the conversion time from several minutes to about 30 seconds on a current mobile phone. Unfortunately, users of older devices might experience longer delays.
  • Question: Why does Panodroid require a network connection?
    • Answer: Because it has to download panorama images from Flickr.
  • Question: Why does Panodroid require a lot of memory (RAM)?
    • Answer: Panodroid was carefully designed to keep memory usage to a minimum. However, as described above, Panodroid has to convert panorama images from the equirectangular format to the cubic format in order to display them as a 3D image. In this step, Panodroid has to keep the original (downloaded) image in memory. Depending on the size of the original image, this can take a lot of RAM. To give you an impression of the required memory: an image of resolution 3000 pixel x 1500 pixel requires 18 MByte, and an image of size 6000 pixel x 3000 pixel requires 72 MByte. For a current PC with gigabytes of memory, this does not sound like much. However, note that your mobile phone might only have about 500 MByte RAM or less, and other applications and the operating system also need some memory.
  • Question: Why can’t I load large panorama images?
    • Answer: Android restricts the amount of RAM available to each app. For instance, for second generation Android devices (~ Android v2.x), the limit is often 24 MB. A panorama image of size 3000 pixel x 1500 pixel consumes about 18 MB. Therefore, with a limit of 24 MB you could not load much larger images than this. Fortunately, there are possibilities to increase the amount of memory available for images. For instance, we were able to load images of size 6000 pixel x 3000 pixel (~ 72 MB) on smart phones running Android 2.x. However, some Android 3.x devices seem to be more restrictive. Some support per-app memory sizes up to 256 MB, some only allow for 48 MB. Since this is a restriction imposed by Android, we cannot get around it. (We won’t downsample images, since this degrades quality too much.)
  • Question: I have stored a panorama image at Flickr. How can it be displayed with Panodroid? It seems that currently it is not found by Panodroid.
    • Answer: In order to be considered by Panodroid, your image must fulfill a few properties. It must be an equirectangular panorama image covering the full sphere, i.e., 360 degrees horizontally and 180 degrees vertically. Such images have an image resolution where the image width is two times the image height. If your image does not fulfill this requirement, it will be ignored. Moreover, you must tag your image with the tag “equirectangular”.
  • Question: How can I create panorama images with Panodroid?
    • Answer: You can’t. Panodroid is a panorama image viewer, not a creator.
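The memory figures given in the FAQ above can be verified with a quick calculation. Assuming each pixel is stored as a 32-bit (4-byte) value — an assumption of mine that is consistent with the numbers in the FAQ — the in-memory size of a decoded bitmap is:

```python
# Assumption: 4 bytes per pixel (32-bit color), consistent with the
# 18 MByte / 72 MByte figures mentioned in the FAQ above.
def bitmap_megabytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1e6

print(bitmap_megabytes(3000, 1500))  # 18.0
print(bitmap_megabytes(6000, 3000))  # 72.0
```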

Legal Notice

Panodroid is a free non-commercial application.

Panodroid uses the Flickr API but is not endorsed or certified by Flickr. The API is used according to Flickr’s terms of use. In particular, every photo is displayed together with the name of the author and the required link to the original photo page at Flickr. However, if you as an author of photos stored at Flickr want your photos to be excluded from Panodroid, you can opt-out from being considered by requests through the Flickr API. Please read the official discussion at Flickr for more details on this issue.