Reactive Flow Programming with OpenDaylight

In my last OpenDaylight tutorial, I demonstrated how to implement an OSGi module for OpenDaylight. In this tutorial, I will show how to use these modules for reactive flow programming and packet forwarding.

In detail, you will learn:

  • how to decode incoming packets
  • how to set up flow table entries including packet match rules and actions
  • how to forward packets

Scenario

To make things concrete, we consider a simple scenario in this tutorial: load balancing of a TCP service (e.g., a web service using HTTP over TCP). The basic idea is that TCP connections to a service addressed through a public IP address and port number are distributed among two physical server instances using IP address re-writing performed by an OpenFlow switch. Whenever a client opens a TCP connection to the service, one of the server instances is chosen (round robin in our implementation), and a forwarding rule is installed by the network controller on the ingress switch to forward all incoming packets of this TCP connection to the chosen server instance. In order to make sure that the server instance accepts the packets of the TCP connection, the destination IP address is re-written to the IP address of the chosen server instance, and the destination MAC address is set to the MAC address of the server instance. In the reverse direction from server to client, the switch re-writes the source IP address of the server to the public IP address of the service. Therefore, to the client it looks like the response is coming from the public IP address. Thus, load balancing is transparent to the client.

To keep things simple, I do not consider the routing of packets. Rather, I assume that the clients and the two server instances are connected to the same switch on different ports (see figure below). Moreover, I also simplify MAC address resolution by setting a static ARP table entry at the client host for the public IP address. Since there is no physical server assigned to the public IP address, we just set a fake MAC address (in a real setup, the gateway of the data center would receive the client request, so we would not need an extra MAC address assigned to the public IP address).

[Figure: load_balancing — load balancing scenario with one switch, two server instances, and a client]

I assume that you have read the previous tutorial, so I skip some explanations on how to set up an OpenDaylight Maven project, subscribe to services, and further OSGi module basics.

You can find all necessary files of this tutorial in this archive: myctrlapp.tar.gz

The folder myctrlapp contains the Maven project of the OSGi module. You can compile and create the OSGi bundle with the following commands:

user@host:$ tar xzf myctrlapp.tar.gz
user@host:$ cd ~/myctrlapp
user@host:$ mvn package

The corresponding Eclipse project can be created using

user@host:$ cd ~/myctrlapp
user@host:$ mvn eclipse:eclipse

Registering Required Services and Subscribing to Packet-in Events

For our simple load balancer, we need the following OpenDaylight services:

  • Data Packet Service for decoding incoming packets and encoding and sending outgoing packets.
  • Flow Programmer Service for setting flow table entries on the switch.
  • Switch Manager Service to determine the outport of packets forwarded to the server instances.

As explained in my previous tutorial, we register for OSGi services by implementing the configureInstance(...) method of the Activator class:

public void configureInstance(Component c, Object imp, String containerName) {
    log.trace("Configuring instance");

    if (imp.equals(PacketHandler.class)) {
        // Define exported and used services for PacketHandler component.

        Dictionary<String, Object> props = new Hashtable<String, Object>();
        props.put("salListenerName", "mypackethandler");

        // Export IListenDataPacket interface to receive packet-in events.
        c.setInterface(new String[] {IListenDataPacket.class.getName()}, props);

        // Need the DataPacketService for encoding, decoding, sending data packets
        c.add(createContainerServiceDependency(containerName).setService(IDataPacketService.class).setCallbacks(
            "setDataPacketService", "unsetDataPacketService").setRequired(true));

        // Need FlowProgrammerService for programming flows
        c.add(createContainerServiceDependency(containerName).setService(IFlowProgrammerService.class).setCallbacks(
            "setFlowProgrammerService", "unsetFlowProgrammerService").setRequired(true));

        // Need SwitchManager service for enumerating ports of switch
        c.add(createContainerServiceDependency(containerName).setService(ISwitchManager.class).setCallbacks(
            "setSwitchManagerService", "unsetSwitchManagerService").setRequired(true));
    }
}

The set... and unset... strings define the names of callback methods. These callback methods are implemented in our PacketHandler class to receive service proxy objects, which can then be used to call the services:

/**
 * Sets a reference to the requested DataPacketService
 */
void setDataPacketService(IDataPacketService s) {
    log.trace("Set DataPacketService.");

    dataPacketService = s;
}

/**
 * Unsets DataPacketService
 */
void unsetDataPacketService(IDataPacketService s) {
    log.trace("Removed DataPacketService.");    

    if (dataPacketService == s) {
        dataPacketService = null;
    }
}

/**
 * Sets a reference to the requested FlowProgrammerService
 */
void setFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Set FlowProgrammerService.");

    flowProgrammerService = s;
}

/**
 * Unsets FlowProgrammerService
 */
void unsetFlowProgrammerService(IFlowProgrammerService s) {
    log.trace("Removed FlowProgrammerService.");

    if (flowProgrammerService == s) {
        flowProgrammerService = null;
    }
}

/**
 * Sets a reference to the requested SwitchManagerService
 */
void setSwitchManagerService(ISwitchManager s) {
   log.trace("Set SwitchManagerService.");

   switchManager = s;
}

/**
 * Unsets SwitchManagerService
 */
void unsetSwitchManagerService(ISwitchManager s) {
    log.trace("Removed SwitchManagerService.");

    if (switchManager == s) {
        switchManager = null;
    }
}
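
The service proxies themselves are kept in member fields of PacketHandler. A minimal sketch (the field names follow the setters above):

// Service proxies set/unset by the callbacks above.
private IDataPacketService dataPacketService = null;         // decoding, encoding, and sending packets
private IFlowProgrammerService flowProgrammerService = null;  // programming flows
private ISwitchManager switchManager = null;                  // enumerating switch ports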

Moreover, we register for packet-in events in the Activator class. To this end, we declare that our component exports the IListenDataPacket interface (the c.setInterface(...) call in the listing above). This interface basically consists of one callback method receiveDataPacket(...) for receiving packet-in events, as described next.
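
For reference, the Activator also announces the PacketHandler component itself. A sketch, assuming the Activator extends OpenDaylight's ComponentActivatorAbstractBase as in the previous tutorial:

@Override
public Object[] getImplementations() {
    // Announce the PacketHandler class as a component implementation.
    Object[] res = { PacketHandler.class };
    return res;
}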

Handling Packet-in Events

Whenever a packet without matching flow table entry arrives at the switch, it is sent to the controller and the event handler receiveDataPacket(...) of our packet handler class is called with the received packet as parameter:

@Override
public PacketResult receiveDataPacket(RawPacket inPkt) {
    // The connector, the packet came from ("port")
    NodeConnector ingressConnector = inPkt.getIncomingNodeConnector();
    // The node that received the packet ("switch")
    Node node = ingressConnector.getNode();

    log.trace("Packet from " + node.getNodeIDString() + " " + ingressConnector.getNodeConnectorIDString());

    // Use DataPacketService to decode the packet.
    Packet pkt = dataPacketService.decodeDataPacket(inPkt);

    if (pkt instanceof Ethernet) {
        Ethernet ethFrame = (Ethernet) pkt;
        Object l3Pkt = ethFrame.getPayload();

        if (l3Pkt instanceof IPv4) {
            IPv4 ipv4Pkt = (IPv4) l3Pkt;
            InetAddress clientAddr = intToInetAddress(ipv4Pkt.getSourceAddress());
            InetAddress dstAddr = intToInetAddress(ipv4Pkt.getDestinationAddress());
            Object l4Datagram = ipv4Pkt.getPayload();

            if (l4Datagram instanceof TCP) {
                TCP tcpDatagram = (TCP) l4Datagram;
                int clientPort = tcpDatagram.getSourcePort();
                int dstPort = tcpDatagram.getDestinationPort();

                if (publicInetAddress.equals(dstAddr) && dstPort == SERVICE_PORT) {
                    log.info("Received packet for load balanced service");

                    // Select one of the two servers round robin.

                    InetAddress serverInstanceAddr;
                    byte[] serverInstanceMAC;
                    NodeConnector egressConnector;

                    // Synchronize in case there are two incoming requests at the same time.
                    synchronized (this) {
                        if (serverNumber == 0) {
                            log.info("Server 1 is serving the request");
                            serverInstanceAddr = server1Address;
                            serverInstanceMAC = SERVER1_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER1_CONNECTOR_NAME);
                            serverNumber = 1;
                        } else {
                            log.info("Server 2 is serving the request");
                            serverInstanceAddr = server2Address;
                            serverInstanceMAC = SERVER2_MAC;
                            egressConnector = switchManager.getNodeConnector(node, SERVER2_CONNECTOR_NAME);
                            serverNumber = 0;
                        }
                    }

                    // Create flow table entry for further incoming packets

                    // Match incoming packets of this TCP connection 
                    // (4 tuple source IP, source port, destination IP, destination port)
                    Match match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800);  // IPv4 ethertype
                    match.setField(MatchType.NW_PROTO, (byte) 6);       // TCP protocol id
                    match.setField(MatchType.NW_SRC, clientAddr);
                    match.setField(MatchType.NW_DST, dstAddr);
                    match.setField(MatchType.TP_SRC, (short) clientPort);
                    match.setField(MatchType.TP_DST, (short) dstPort);

                    // List of actions applied to the packet
                    List<Action> actions = new LinkedList<Action>();

                    // Re-write destination IP to server instance IP
                    actions.add(new SetNwDst(serverInstanceAddr));

                    // Re-write destination MAC to server instance MAC
                    actions.add(new SetDlDst(serverInstanceMAC));

                    // Output packet on port to server instance
                    actions.add(new Output(egressConnector));

                    // Create the flow
                    Flow flow = new Flow(match, actions);

                    // Use FlowProgrammerService to program flow.
                    Status status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Create flow table entry for response packets from server to client

                    // Match outgoing packets of this TCP connection 
                    match = new Match();
                    match.setField(MatchType.DL_TYPE, (short) 0x0800); 
                    match.setField(MatchType.NW_PROTO, (byte) 6);
                    match.setField(MatchType.NW_SRC, serverInstanceAddr);
                    match.setField(MatchType.NW_DST, clientAddr);
                    match.setField(MatchType.TP_SRC, (short) dstPort);
                    match.setField(MatchType.TP_DST, (short) clientPort);

                    // Re-write the server instance IP address to the public IP address
                    actions = new LinkedList<Action>();
                    actions.add(new SetNwSrc(publicInetAddress));
                    actions.add(new SetDlSrc(SERVICE_MAC));

                    // Output to client port from which packet was received
                    actions.add(new Output(ingressConnector));

                    flow = new Flow(match, actions);
                    status = flowProgrammerService.addFlow(node, flow);
                    if (!status.isSuccess()) {
                        log.error("Could not program flow: " + status.getDescription());
                        return PacketResult.CONSUME;
                    }

                    // Forward initial packet to selected server

                    log.trace("Forwarding packet to " + serverInstanceAddr.toString() + " through port " + egressConnector.getNodeConnectorIDString());
                    ethFrame.setDestinationMACAddress(serverInstanceMAC);
                    ipv4Pkt.setDestinationAddress(serverInstanceAddr);
                    inPkt.setOutgoingNodeConnector(egressConnector);                       
                    dataPacketService.transmitDataPacket(inPkt);

                    return PacketResult.CONSUME;
                }
            }
        }
    }

    // We did not process the packet -> let someone else do the job.
    return PacketResult.IGNORED;
}

Our load balancer reacts as follows to packet-in events. First, it uses the Data Packet Service to decode the incoming packet using method decodeDataPacket(inPkt). We are only interested in packets addressed to the public IP address and port number of our load-balanced service. Therefore, we have to check the destination IP address and port number of the received packet. To this end, we iteratively decode the packet layer by layer. First, we check whether we received an Ethernet frame, and get the payload of the frame, which should be an IP packet for a TCP connection. If the payload of the frame is indeed an IPv4 packet, we typecast it to the corresponding IPv4 packet class and use the methods getSourceAddress(...) and getDestinationAddress(...) to retrieve the IP addresses of the client (source) and service (destination). Then, we go up one layer and check for a TCP payload to retrieve the port information in a similar way.

After we have retrieved the IP address and port information from the packet, we check whether it is targeted at our load-balanced service (line 28). If it is not addressed to our service, we ignore the packet and let another handler process it (if any) by returning PacketResult.IGNORED as the result of the packet handler.

If the packet is addressed at our service, we choose one of the two physical service instances in a round-robin fashion (line 38–52). The idea is to send the first request to server 1, second request to server 2, third to server 1 again, etc. Note that we might have multiple packet handlers for different packets executed in parallel (at least, we should not rely on a sequential execution as long as we do not know how OpenDaylight handles requests). Therefore, we synchronize this part of the packet handler, to make sure that only one thread is in this code section at a time.

Programming Flows

To forward the packets of this TCP connection to the selected server instance, we re-write the destination IP address and destination MAC address of each incoming packet of the connection to the addresses of the selected server. Note that a TCP connection is identified by the 4-tuple [source IP, source port, destination IP, destination port]. Therefore, we use this information as match criteria for the flow that performs address re-writing and packet forwarding.

A flow table entry consists of a match rule and a list of actions. As said, the match rule should identify packets of a certain TCP connection. To this end, we create a new Match object, and set the required fields as shown in line 58–64. Since we are matching on a TCP/IPv4 datagram, we must make sure to identify this packet type by setting the ethertype (0x0800 meaning IPv4) and protocol id (6 meaning TCP). Moreover, we set the source and destination IP address and port information of the client and service that identifies the individual TCP connection.

Afterwards, we define the actions to be applied to a matched packet of the TCP connection. We set an action for re-writing the IP destination address to the IP address of the selected server instance, as well as the destination MAC address (lines 70 and 73). Moreover, we define an output action to forward packets over the switch port of the server instance. In lines 43 and 49, we use the Switch Manager Service to retrieve the corresponding connector of the switch by its name. Note that these names are not simply the port numbers but s1-eth1 and s1-eth2 in my setup using Mininet. If you want to find out the name of a port, you can use the web GUI of the OpenDaylight controller (http://controllerhost:8080/) and inspect the port names of the switch.

Sometimes, it might also be handy to enumerate all connectors of a switch (node) — e.g., to flood a packet — using the following method:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);
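
As an illustration (not needed for our load balancer), a naive flood based on this method could look like the following sketch; for simplicity it reuses the received RawPacket, just like we do for the initial packet below:

Set<NodeConnector> ports = switchManager.getUpNodeConnectors(node);
for (NodeConnector connector : ports) {
    // Do not send the packet back out of the port it came in on.
    if (!connector.equals(ingressConnector)) {
        inPkt.setOutgoingNodeConnector(connector);
        dataPacketService.transmitDataPacket(inPkt);
    }
}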

Finally, we create the flow with match criteria and actions, and program the switch using the Flow Programmer service in line 82.

In the reverse direction from server to client, we also install a flow that re-writes the source IP address and MAC address of outgoing packets to the address information of the public service (line 90–112).

Forwarding Packets

However, we are not done yet. Although now every new packet of the connection will be forwarded to the right server instance, we also have to forward the received initial packet (TCP SYN request) to the right server. To this end, we modify the destination address information of this packet as shown in line 116–119. Then, we use the Data Packet Service to forward the packet using method transmitDataPacket(...).

In this example, we simply re-used the received packet. However, sometimes you might want to create and send a new packet. To this end, you create the payloads of the packets on the different layers and encode them as a raw packet using the Data Packet Service:

// Build the packet from the innermost layer outwards: TCP segment, IPv4 packet, Ethernet frame.
TCP tcp = new TCP();
tcp.setDestinationPort(tcpDestinationPort);
tcp.setSourcePort(tcpSourcePort);
IPv4 ipv4 = new IPv4();
ipv4.setPayload(tcp);
ipv4.setSourceAddress(ipSourceAddress);
ipv4.setDestinationAddress(ipDestinationAddress);
ipv4.setProtocol((byte) 6);                            // protocol id 6 = TCP
Ethernet ethernet = new Ethernet();
ethernet.setSourceMACAddress(sourceMAC);
ethernet.setDestinationMACAddress(targetMAC);
ethernet.setEtherType(EtherTypes.IPv4.shortValue());   // ethertype 0x0800 = IPv4
ethernet.setPayload(ipv4);
// Encode the nested packet structure into a raw packet ready for transmission.
RawPacket destPkt = dataPacketService.encodeDataPacket(ethernet);
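
To actually send such a newly created packet, you set the outgoing connector on the encoded raw packet and transmit it, just as we did with the forwarded packet above (here, egressConnector stands for whichever port you want to send it out of):

destPkt.setOutgoingNodeConnector(egressConnector);
dataPacketService.transmitDataPacket(destPkt);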

Testing

Following the instructions from my last tutorial, you can compile the OSGi bundle using Maven as follows:

user@host:$ cd ~/myctrlapp
user@host:$ mvn package

Then you start the OpenDaylight controller (here, I assume you use the release version located in directory ~/opendaylight):

user@host:$ cd ~/opendaylight
user@host:$ ./run.sh

Afterwards, to avoid conflicts with our service, you should first stop OpenDaylight’s simple forwarding service and OpenDaylight’s load balancing service (which has nothing to do with our load balancing service) from the OSGi console:

osgi> ss | grep simple
171 ACTIVE org.opendaylight.controller.samples.simpleforwarding_0.4.1.SNAPSHOT
true
osgi> stop 171
osgi> ss | grep loadbalancer
150 ACTIVE org.opendaylight.controller.samples.loadbalancer.northbound_0.4.1.SNAPSHOT
187 ACTIVE org.opendaylight.controller.samples.loadbalancer_0.5.1.SNAPSHOT
true
osgi> stop 187

Both of these services implement packet handlers, and for now we want to make sure that they do not interfere with our handler.

Then, we can install our compiled OSGi bundle (located in /home/user/myctrlapp/target)

osgi> install file:/home/user/myctrlapp/target/myctrlapp-0.1.jar
Bundle id is 256

and start it:

osgi> start 256

You can also change the log level of our bundle to see log output down to the trace level:

osgi> setLogLevel de.frank_durr.myctrlapp.PacketHandler trace

Next, we create a simple Mininet topology with one switch and three hosts:

user@host:$ sudo mn --controller=remote,ip=129.69.210.89 --topo single,3 --mac --arp

Be sure to use the IP address of your OpenDaylight controller host. The option --mac assigns a MAC address according to the host number to each host (e.g., 00:00:00:00:00:01 for the first host). In our implementation, we use these addresses as hard-coded constants.
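
For reference, the hard-coded constants in PacketHandler might look like the following sketch (the values correspond to the Mininet setup used in this section; adapt them to your own topology):

// Addresses and ports of the load-balanced service and the two server instances (sketch).
private static final byte[] SERVER1_MAC = {0, 0, 0, 0, 0, 0x01};  // MAC of host 1 (--mac option)
private static final byte[] SERVER2_MAC = {0, 0, 0, 0, 0, 0x02};  // MAC of host 2
private static final byte[] SERVICE_MAC = {0, 0, 0, 0, 0, 0x64};  // fake MAC of the public IP (0x64 = 100)
private static final String SERVER1_CONNECTOR_NAME = "s1-eth1";   // switch port towards host 1
private static final String SERVER2_CONNECTOR_NAME = "s1-eth2";   // switch port towards host 2
private static final int SERVICE_PORT = 7777;                     // TCP port of the load-balanced service
// publicInetAddress (10.0.0.100), server1Address (10.0.0.1), and server2Address (10.0.0.2)
// are InetAddress fields initialized accordingly, e.g., via InetAddress.getByName(...).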

Option --arp pre-populates the ARP cache of the hosts. I use host 1 and 2 as the server hosts with the IP addresses 10.0.0.1 and 10.0.0.2. Host 3 runs the client. Therefore, I also set a static ARP table entry on host 3 for the public IP address of the service (10.0.0.100):

mininet> xterm h3
mininet h3> arp -s 10.0.0.100 00:00:00:00:00:64

On host 1 and 2 we start two simple servers using netcat listening on port 7777:

mininet> xterm h1
mininet> xterm h2
mininet h1> nc -l 7777
mininet h2> nc -l 7777

Then, we send a message to our service from the client on host 3 using again netcat:

mininet h3> echo "Hello" | nc 10.0.0.100 7777

Now, you should see the output “Hello” in the xterm of host 1. If you execute the same command again, the output will appear in the xterm of host 2. This shows that requests (TCP connections) are correctly distributed among the two servers.

Where to go from here

Basically, you can now implement your service using reactive flow programming. However, some further services might be helpful. For instance, according to the paradigm of logically centralized control, it might be interesting to query the global topology of the network, locations of hosts, etc. This, I plan to cover in future tutorials.

Securing OpenDaylight’s REST Interfaces

OpenDaylight comes with a set of REST interfaces. For instance, in one of my previous posts, I have introduced OpenDaylight’s REST interface for programming flows. With these interfaces, you can easily outsource your control logic to a remote server other than the server on which the OpenDaylight controller is running. Basically, the controller offers a web service, and the control application invokes this service sending REST requests via HTTP.

Although the concept of offering network services as web services is very nice and significantly lowers the barrier to “program” the network, it also brings up security problems well known from web services. If you do not authenticate clients, any client that can send HTTP requests to your controller can control your network — certainly something you want to avoid!

Therefore, in this post, I will show how to secure OpenDaylight’s REST interfaces.

Authentication in OpenDaylight

The REST interfaces are so-called northbound interfaces between controller and control application. So you can think of the controller as the service and the control application as the client as shown in the figure below.

[Figure: insecure_network — controller (service) and control application (client) communicating over an untrusted network]

In order to ensure that the controller only accepts requests from authorized clients, clients have to authenticate themselves. OpenDaylight uses HTTP Basic authentication, which is based on user names and passwords (default: admin, admin). Sounds good: So only a client with the valid password can invoke the service … or is there a problem? In order to see the security threats, we have to take a closer look at the HTTP Basic authentication mechanism.

The following command invokes the Flow Programmer service of OpenDaylight via cURL and prints the HTTP header information of the request:

user@host:$ curl -u admin:admin -H 'Accept: application/xml' -v 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/'
* About to connect() to localhost port 8080 (#0)
* Trying 127.0.0.1... connected
* Server auth using Basic with user 'admin'
> GET /controller/nb/v2/flowprogrammer/default/ HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:8080
> Accept: application/xml
>
< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Cache-Control: private
< Expires: Thu, 01 Jan 1970 01:00:00 CET
< Set-Cookie: JSESSIONIDSSO=9426E0F12A0A0C80BE549451707EF339; Path=/
< Set-Cookie: JSESSIONID=DB23D1EE61348E101E6CE8117A04B8D8; Path=/
< Content-Type: application/xml
< Content-Length: 62
< Date: Sun, 12 Jan 2014 16:50:38 GMT
<
* Connection #0 to host localhost left intact
* Closing connection #0
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list/>

The interesting header field is “Authorization” with its value “Basic YWRtaW46YWRtaW4=”. Here, “YWRtaW46YWRtaW4=” is the user name and password sent from the client to the controller. Although this value looks quite cryptic, it is actually plain text. This value is simply the Base64 encoding of the user name and password string “admin:admin”. Base64 is a simple translation of 8 bit characters to 6 bit characters involving no encryption or hashing at all! Basically, it comes from the time when SMTP was restricted to sending 7 bit ASCII characters. Everything else like binary (8 bit) content had to be translated to 7 bit characters first, and exactly that’s the job of Base64 encoding. You can use a paper and pencil to decode it. Just interpret the bit pattern of three 8 bit characters as four 6 bit characters and look-up the values of the 6 bit characters in the Base64 table. Or if you are lazy, just use one of the many Base64 decoders in the WWW.
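
If you want to convince yourself, a two-line Java sketch (using java.util.Base64 from Java 8) decodes the header value shown above and prints "admin:admin":

// Decode the value of the Authorization header; prints "admin:admin".
byte[] decoded = java.util.Base64.getDecoder().decode("YWRtaW46YWRtaW4=");
System.out.println(new String(decoded, java.nio.charset.StandardCharsets.US_ASCII));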

Now the problem should become obvious. If your network between client and controller is non-trusted and an attacker can eavesdrop on the communication channel, he can read your user name and password.

Securing the REST Interface

Now that we see the problem, also the solution should become obvious. We need a secure channel between client and controller, so an attacker cannot read the header fields of the HTTP request. The HTTPS standard provides exactly that. Moreover, the client can make sure that it really connects to the right controller, and not the controller of an attacker who just wants to intercept our password. So we use HTTPS to encrypt the channel between client and controller and to authenticate the controller, and HTTP Basic authentication to authenticate the client.

So the trick is enabling HTTPS in OpenDaylight, which is turned off by default. Note that above we used the insecure HTTP protocol on port 8080. Now we want to use HTTPS on port 8443 (or 443 if you want to use the official HTTPS port instead of the alternative port).

OpenDaylight uses the Tomcat servlet container to provide its web services. Therefore, the steps to enable HTTPS are very similar to configuring Tomcat.

First, we need a server certificate that the client can use to authenticate the server. Of course, a certificate signed by a trusted certification authority would be best. However, here I will show how to create your own self-signed certificate using the Java keytool:

user@host:$ keytool -genkeypair -v -alias tomcat -storepass changeit -validity 1825 -keysize 1024 -keyalg DSA
What is your first and last name?
[Unknown]: duerr-mininet.informatik.uni-stuttgart.de
What is the name of your organizational unit?
[Unknown]: Institute of Parallel and Distributed Systems
What is the name of your organization?
[Unknown]: University of Stuttgart
What is the name of your City or Locality?
[Unknown]: Stuttgart
What is the name of your State or Province?
[Unknown]: Baden-Wuerttemberg
What is the two-letter country code for this unit?
[Unknown]: de
Is CN=duerr-mininet.informatik.uni-stuttgart.de, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de correct?
[no]: yes

Generating 1,024 bit DSA key pair and self-signed certificate (SHA1withDSA) with a validity of 1,825 days
for: CN=duerr-mininet.informatik.uni-stuttgart.de, OU=Institute of Parallel and Distributed Systems, O=University of Stuttgart, L=Stuttgart, ST=Baden-Wuerttemberg, C=de
Enter key password for <tomcat>
(RETURN if same as keystore password):
Re-enter new password:
[Storing /home/duerrfk/.keystore]

This creates a certificate valid for five years (1825 days) and stores it in the keystore .keystore in my home directory /home/duerrfk. As first and last name, we use the DNS name of the machine the controller is running on. The rest should be pretty obvious.

With this information, we can now configure the OpenDaylight controller. First, check out and compile OpenDaylight, if you haven’t done so already:

user@host:$ git clone https://git.opendaylight.org/gerrit/p/controller.git
user@host:$ cd controller/opendaylight/distribution/opendaylight/
user@host:$ mvn clean install

Now edit the following file, where “controller” is the root directory of the controller you checked out:

user@host:$ emacs controller/opendaylight/distribution/opendaylight/target/distribution.opendaylight-osgipackage/opendaylight/configuration/tomcat-server.xml

Uncomment the following XML element:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="/home/duerrfk/.keystore"
           keystorePass="changeit"/>

Use the keystore location and password that you used before with the keytool command.

Now you can start OpenDaylight and connect via HTTPS to the controller on port 8443. Use your web browser to try it.

Making Secure Calls

One last step needs to be done before we can call the controller securely from a client. If you did not use a certificate signed by a well-known certification authority (as is the case with our self-signed certificate above), you need to provide the client with the server certificate it should use for authenticating the controller. If you are using cURL, the required option is “--cacert”:

user@host:$ curl -u admin:admin --cacert ~/cert-duerr-mininet.pem -v -H 'Accept: application/xml' 'https://duerr-mininet.informatik.uni-stuttgart.de:8443/controller/nb/v2/flowprogrammer/default/'

So the last question is, how do we get the server certificate in PEM format? To this end, we can use openssl to call our server and store the returned certificate:

user@host:$ openssl s_client -connect duerr-mininet.informatik.uni-stuttgart.de:8443

The PEM certificate is everything between “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” (including these two lines), so we can just copy this to a file. Note that you have to make sure that the call is actually going to the right server (not the server of an attacker). So better call it from the machine where your controller is running to avoid a “chicken and egg” problem.
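
If you do not want to copy the certificate by hand, a command like the following should also work; it lets openssl extract the certificate directly into the PEM file used above (a sketch; as before, run it over a trusted path to the controller):

user@host:$ openssl s_client -connect duerr-mininet.informatik.uni-stuttgart.de:8443 </dev/null | openssl x509 -outform PEM > ~/cert-duerr-mininet.pem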

Summary

Now we can securely outsource our control application to a remote host, for instance, a host in our campus network or a cloud server running in a remote data center.

OpenDaylight: Programming Flows with the REST Interface and cURL

The SDN controller OpenDaylight comes with a Flow Programmer service, which makes it very easy for applications to program flows by using a REST interface. In this post, I will show how to use this service together with the command line tool cURL to add, modify, and delete flows.

What is so nice about REST?

REST is based on technologies that many programmers already know, in particular, HTTP to transport requests and responses between client (control application) and service (e.g., OpenDaylight’s Flow Programmer), and XML and JSON to describe the parameters of a request and the result. Because of its simplicity (compared to other web service standards like SOAP), REST is very popular and used by many web services in the World Wide Web, e.g., by Twitter or Flickr.

Since REST is based on popular technologies like HTTP, JSON, and XML, you can use it in a more or less straightforward way with most programming languages like Java, Python, C (you name it), and even command line tools like cURL. Therefore, the barrier to using it is very low. I think this very nicely shows how software-defined networking reduces the gap between the application (programmer) and the network. I am sure that after you have read this post, you will agree.

Having said this, it is also important to understand the limitations of REST in the scope of SDN. REST is based on request/response interaction, i.e., the control application (client) makes a request, and the service executes the request and returns the response. REST is less suited for event-based interaction, where the controller calls the application (strictly speaking, then the controller would be the client and the application the service since client and server are only roles of two communicating entities).

In OpenFlow, there is one important event that signals to the control application that a packet without matching flow table entry has arrived at a switch (packet-in event). In this case, a control application implementing the control logic has to decide what to do: drop the packet, forward it, or set up a flow table entry for similar packets. Because of this limitation, the REST interface is limited to proactive flow programming, where the control application proactively programs the flow tables of the switches; reactive flow programming, where the control application reacts to packet-in events, is implemented in OpenDaylight using OSGi components.

Programming Flows with REST

REST is all about resources and their states. REST stands for Representational State Transfer. What does that mean in the context of OpenDaylight’s Flow Programmer service? For the Flow Programmer service, you can think of resources as the whole network, a switch, or a flow. The basic job of the Flow Programmer is to query and change the state of these resources by returning, adding, or deleting flows. The state of a resource can be represented in different formats, namely, XML or JSON.

Adding Flows

That was still very abstract, wasn’t it? According to Einstein, the only way to explain something is by example. So let’s look at some examples. First of all, we will add a flow to a switch. We use a very simple linear topology with two switches and two hosts, depicted in the following figure.

[Figure: mininet-topology — linear topology with two switches and two hosts]

You can create this topology in Mininet with the following command assuming the OpenDaylight controller is running on the machine with IP 192.168.1.1:

mn --controller=remote,ip=192.168.1.1 --topo linear,2

We also open X-terminals for both hosts using the following commands:

mininet> xterm h1
mininet> xterm h2

With the command ifconfig in the respective terminal, you can find out the IP addresses of the hosts (h1: 10.0.0.1; h2: 10.0.0.2).

OpenDaylight already implements reactive forwarding logic, so a ping between host h1 and host h2 already works! You can send a ping from h1 to h2 to check this:

h1> ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=0.368 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.050 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.059 ms
64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.058 ms

Since these flows were programmed reactively using an OSGi module, we cannot see them in the Flow Programmer service! This service only knows about the flows that were created via the Flow Programmer itself.

Now let’s program a flow that blocks all TCP requests to port 80 (web server) targeted at Host 2. You can think of this as an in-network firewall. Instead of blocking the traffic at the host using a software firewall, we already block it on a switch, e.g., at a top-of-rack switch of the rack where h2 is located or even earlier at the core switches of your data center to keep your network free from unwanted traffic.

Before we set up our blocking flow entry, we verify that h1 can send requests on port 80 to h2 using netcat to simulate a web server and client:

h2> nc -lk 10.0.0.2 80
Hello
h1> echo "Hello" | nc 10.0.0.2 80

Here, h2 is listening (option -l) on port 80 for incoming requests (and stays listening; option -k). h1 sends the string “Hello”, and h2 displays this string, showing that the connection works.

I will now show you the cURL command to block TCP datagrams to port 80 at switch s2, and then explain the details:

controller> curl -u admin:admin -H 'Content-type: application/json' -X PUT -d '{"installInHw":"true", "name":"flow1", "node": {"id":"00:00:00:00:00:00:00:02", "type":"OF"}, "ingressPort":"2", "etherType": "0x800", "protocol": "6", "tpDst": "80", "priority":"65535","actions":["DROP"]}' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

We are using option “-u” to specify the user name and password (both “admin” which you might want to change for obvious reasons). OpenDaylight uses HTTP Basic authentication, so every request to a northbound REST interface must be authenticated with a user name and password.

According to the REST paradigm, resources are added using HTTP PUT requests. Therefore, we set the cURL option “-X PUT” to add the flow. If a resource with the same id already exists, it will be modified.

Moreover, we specify the HTTP content type as “application/json” since we are sending our request as a JSON representation. You could as well use XML (Content-type: application/xml); however, here we use the simpler JSON format.

With option “-d”, we set the payload of the HTTP request, which in our case is a JSON document defining the flow to be added. A JSON document consists of a number of key value pairs separated by colons, so it should be very easy to read the above example. Our new flow gets the name “flow1”, so we can later refer to it to modify or delete it. It will be installed in hardware on the switch — whatever that means for our emulated Mininet ;) The value of the key “node” consists of a JSON structure (marked by “{..}” in JSON) with the keys “id” and “type”. These are the data path id and type (OF = OpenFlow) of the switch. Actually, adding the node id, type, and flow name to the payload is redundant from my point of view, because this information is also included in the URL as you can see. According to the REST paradigm, the URL is the right place to specify the path of the resource. Anyway, it will not work without it, so we better include it.

The rest defines the values of our flow. In order to match datagrams to port 80 over TCP/IP, you have to specify (1) the ethertype “0x800” to identify IPv4 packets; (2) protocol id 6 to identify TCP datagrams; (3) the transport layer destination port (tpDst) 80. Moreover, we specify the incoming port and a flow priority. Priority 65535 is the highest priority, so we are sure that this entry will be effective if there are other entries that match the same packet (e.g., set up by the reactive forwarding module of OpenDaylight). Finally, we have to specify an action. In this case, the packet will be dropped. Another frequently used action is to output the packet on a certain port (e.g., “OUTPUT=1”). As you can see, the action field is an array (marked by “[..]” in JSON), so you can specify multiple actions.
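
For readability, this is the same JSON payload from the PUT request above, pretty-printed:

{
    "installInHw": "true",
    "name": "flow1",
    "node": {
        "id": "00:00:00:00:00:00:00:02",
        "type": "OF"
    },
    "ingressPort": "2",
    "etherType": "0x800",
    "protocol": "6",
    "tpDst": "80",
    "priority": "65535",
    "actions": ["DROP"]
}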

So let’s see whether our flow table entry has some effect by sending another hello message via TCP/IP to port 80:

h1> echo "Hello" | nc 10.0.0.2 80
h1> echo $?
1

The second command returns the exit status of the first command, which is 1, i.e., non-zero, so it failed. Therefore, packets to port 80 no longer arrive at h2. You can check whether other ports are still accessible by changing the port number of the netcat commands. Actually, they are, so everything worked as expected and we have successfully programmed our first flow.

Querying Flows

Now that we have programmed a flow, we can query for installed flows. According to the REST paradigm, you can query the state of a resource using a GET request. The specific request for querying all flows of the network looks as follows:

controller> curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default'
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><list><flowConfig><installInHw>true</installInHw><name>flow1</name><node><id>00:00:00:00:00:00:00:02</id><type>OF</type></node><ingressPort>2</ingressPort><priority>65535</priority><etherType>0x800</etherType><protocol>6</protocol><tpDst>80</tpDst><actions>DROP</actions></flowConfig></list>

Note that this time, we are using XML as the accepted content type by setting the HTTP header field “Accept” to “application/xml”. We could as well use JSON again, which is easier to parse and smaller.
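
For example, the same query with JSON output looks like this:

controller> curl -u admin:admin -H 'Accept: application/json' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default'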

We can also query a particular switch by specifying its data path id in the URL:

curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02'

Or we can query the definition of a certain flow using the flow’s name in the resource URL:

curl -u admin:admin -H 'Accept: application/xml' 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

Deleting Flows

Finally, we can also delete flows using the HTTP DELETE request:

curl -u admin:admin -X DELETE 'http://localhost:8080/controller/nb/v2/flowprogrammer/default/node/OF/00:00:00:00:00:00:00:02/staticFlow/flow1'

The URL specifies the resource (here, a flow) to be deleted. HTTP DELETE requests have no payload — I am just mentioning this because the Floodlight controller required a payload for delete requests, which most HTTP implementations refuse to send; so the OpenDaylight interface is cleaner and more REST’ish in that respect.

Summary

That’s basically it. I hope you agree that programming flows using REST is really simple. In future posts, I plan to introduce other northbound REST APIs of OpenDaylight and also the OSGi interface for reactive flow programming. If you are interested, I hope to see you again.

If you need further information about the Flow Programmer REST interface, you can visit the OpenDaylight website.