Installation

Installing the DPB service

These instructions assume Ubuntu 18.04; they can likely be adapted to other Linux platforms.

Install Java 9 or later:

sudo apt-get install openjdk-11-jdk

Install other prerequisites:

sudo apt-get install build-essential libjsonp-java junit4 libhttpcore-java \
libhttpclient-java libcommons-logging-java gawk par subversion

To run Ryu controllers, you also need:

sudo apt-get install python-ryu

Ubuntu 16.04 doesn’t have libjsonp-java; if you’re on that release, this procedure should install it under /usr/local/share/java/:

wget 'http://www.java2s.com/Code/JarDownload/javax.json/javax.json-1.0.jar.zip'
unzip javax.json-1.0.jar.zip
sudo mkdir -p /usr/local/share/java
sudo mv javax.json-1.0.jar /usr/local/share/java
sudo ln -s javax.json-1.0.jar /usr/local/share/java/javax.json.jar

Additionally for a DPB server installation, a database connector is required:

sudo mkdir -p /usr/local/share/java
sudo curl 'https://bitbucket.org/xerial/sqlite-jdbc/downloads/sqlite-jdbc-3.23.1.jar' \
     -o /usr/local/share/java/sqlite-jdbc-3.23.1.jar

The DPB and its remaining prerequisites are best checked out from their repositories. We’ll assume you’ll keep working copies in a directory such as ~/works.

Install Jardeps, which provides rules for compiling Java with makefiles:

cd ~/works
svn co 'http://scc-forge.lancaster.ac.uk/svn-repos/misc/jardeps/trunk' jardeps
cd jardeps
make
sudo make install

Install Binodeps, which provides enhanced rules for building C and C++ programs using GNU Make:

mkdir -p ~/works
cd ~/works
svn co http://scc-forge.lancaster.ac.uk/svn-repos/misc/binodeps/trunk/ binodeps
cd binodeps
make
sudo make install

Install DDSLib, a C library for managing various data structures like linked lists, binary heaps, etc.:

cd ~/works
svn co 'http://scc-forge.lancaster.ac.uk/svn-repos/utils/ddslib/branches/stable' ddslib
cd ddslib
cat > ddslib-env.mk <<EOF
CFLAGS += -O2 -g
CFLAGS += -std=gnu11
#CFLAGS += -fgnu89-inline

CPPFLAGS += -D_XOPEN_SOURCE=600
CPPFLAGS += -D_GNU_SOURCE=1
CPPFLAGS += -pedantic -Wall -W -Wno-unused-parameter
CPPFLAGS += -Wno-missing-field-initializers

CXXFLAGS += -O2 -g
CXXFLAGS += -std=gnu++11
EOF
make
sudo make install

Install Reactor, a library for event-driven programming in C:

cd ~/works
svn co 'http://scc-forge.lancaster.ac.uk/svn-repos/utils/react/trunk' react
cd react
cat > react-env.mk <<EOF
ENABLE_CXX=no
CFLAGS += -O2 -g
CFLAGS += -std=gnu11
#CFLAGS += -fgnu89-inline

CPPFLAGS += -D_XOPEN_SOURCE=600
CPPFLAGS += -D_GNU_SOURCE=1
CPPFLAGS += -pedantic -Wall -W -Wno-unused-parameter
CPPFLAGS += -Wno-missing-field-initializers

CXXFLAGS += -O2 -g
CXXFLAGS += -std=gnu++11
EOF
make
sudo make install

Usmux allows a Java process to receive connections over a Unix-domain socket, and fork into the background. The DPB server process makes itself available over such a socket. Install Usmux:

cd ~/works
svn co 'http://scc-forge.lancaster.ac.uk/svn-repos/misc/usmux/branches/export' usmux
cd usmux
cat > usmux-env.mk <<EOF
CFLAGS += -O2 -g
CFLAGS += -std=gnu11
CPPFLAGS += -D_XOPEN_SOURCE=600
CPPFLAGS += -D_GNU_SOURCE=1
CPPFLAGS += -pedantic -Wall -W -Wno-unused-parameter
CPPFLAGS += -Wno-missing-field-initializers
EOF
make
sudo make install

Install Lurest, the Java REST library (noting the correct path for javax.json.jar if you installed it manually):

cd ~/works
svn co 'http://scc-forge.lancaster.ac.uk/svn-repos/utils/lurest/trunk' lurest
cd lurest
cat > lurest-env.mk <<EOF
CLASSPATH += /usr/share/java/httpclient.jar
CLASSPATH += /usr/share/java/httpcore.jar
CLASSPATH += /usr/local/share/java/jardeps-lib.jar
CLASSPATH += /usr/share/java/javax.json.jar
EOF
make
sudo make install

Install DPB (noting correct path for javax.json.jar if you installed it manually):

cd ~/works
git clone git@github.com:DataPlaneBroker/DPB.git
cd DPB
cat > dataplanebroker-env.mk <<EOF
CLASSPATH += /usr/share/java/junit4.jar
CLASSPATH += /usr/share/java/javax.json.jar
CLASSPATH += /usr/share/java/httpcore.jar
CLASSPATH += /usr/share/java/httpclient.jar
CLASSPATH += /usr/share/java/commons-logging.jar
CLASSPATH += /usr/local/share/java/jardeps-lib.jar
CLASSPATH += /usr/local/share/java/usmux_session.jar
CLASSPATH += /usr/local/share/java/sqlite-jdbc-3.23.1.jar
CLASSPATH += /usr/local/share/java/lurest-server-api.jar
CLASSPATH += /usr/local/share/java/lurest-client-api.jar
CLASSPATH += /usr/local/share/java/lurest-service.jar
PROCPATH += /usr/local/share/java/lurest-service.jar
PROCPATH += /usr/local/share/java/lurest-service-proc.jar
EOF
make
sudo make install

That will install a number of jars, three Bash scripts, and two Ryu/Python controllers.
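
A quick way to confirm where things landed (assuming the default /usr/local prefix used elsewhere in this guide):

ls /usr/local/bin/dpb-*
ls /usr/local/share/dataplane-broker/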

Controller for Corsa

If you're setting up the server side of the broker, ensure that a portslicer.py Ryu controller is running somewhere. For example:

/usr/bin/ryu-manager --ofp-listen-host 172.31.31.1 --ofp-tcp-listen-port 6556 \
                     --wsapi-host localhost --wsapi-port 8081 \
                     /usr/local/share/dataplane-broker/portslicer.py

It's safe to run as a cronjob; if it's already up, the latest invocation will just quit, and leave the existing one running. One instance of this controller can operate several Corsa VFCs independently.
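
For example, a crontab entry along these lines (the ten-minute schedule and paths are illustrative) restarts the controller if it ever dies:

*/10 * * * * /usr/bin/ryu-manager --ofp-listen-host 172.31.31.1 --ofp-tcp-listen-port 6556 --wsapi-host localhost --wsapi-port 8081 /usr/local/share/dataplane-broker/portslicer.py > /dev/null 2>&1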

The controller presents two interfaces:

  • 172.31.31.1:6556 is the OpenFlow interface, which bridges within the Corsa will contact when set up. The address you use here appears in configuration below. Make sure the Corsa has a netns that can access this address. We'll use dpb-netns as an example.
  • localhost:8081 is the controller's northbound interface, through which the broker expresses intents to the switch. localhost is good enough if the controller and the broker are running in the same machine.

Details of these interfaces go in the ctrl section of a fabric agent later.

The advantages of the Corsa fabric include the following:

  • The OpenFlow rules are relatively simple, as Corsa's tunnel attachments handle VLAN tagging and detagging. As far as the OF switch is concerned, every circuit comes in on its own ofport.
  • The tunnel attachments also handle bandwidth shaping (on exit from the switch) and metering (on entry).

Tunnel attachments are set up through the Corsa's REST-based management interface.

Controller for OpenFlow 1.3

If you're setting up the server side of the broker, ensure that a tupleslicer.py Ryu controller is running somewhere. For example:

/usr/bin/ryu-manager --ofp-listen-host 172.31.31.1 --ofp-tcp-listen-port 6556 \
                     --wsapi-host localhost --wsapi-port 8082 \
                     /usr/local/share/dataplane-broker/tupleslicer.py

It's safe to run as a cronjob; if it's already up, the latest invocation will just quit, and leave the existing one running. One instance of this controller can operate several OpenFlow switches independently.
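
As with the Corsa controller, a crontab entry such as the following (again, schedule and paths are illustrative) keeps it running:

*/10 * * * * /usr/bin/ryu-manager --ofp-listen-host 172.31.31.1 --ofp-tcp-listen-port 6556 --wsapi-host localhost --wsapi-port 8082 /usr/local/share/dataplane-broker/tupleslicer.py > /dev/null 2>&1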

The controller presents two interfaces:

  • 172.31.31.1:6556 is the OpenFlow interface, which OpenFlow switches will contact when set up. Make sure the switch's control port can access this address.
  • localhost:8082 is the controller's northbound interface, through which the broker expresses intents to the switch.

DPB server configuration

Create server-side configuration as required in ~/.config/dataplanebroker/server.properties (a Java ‘properties’ file, consisting of dotted.name=value pairs). This file must contain a property called agents, whose value is a comma-separated list of prefixes of other properties. For example, if the list contains foo, then there should also be a set of properties whose names begin with “foo.”.

Here’s an example server-side configuration, establishing three persistent switches and one persistent aggregator:

## Two agents are required for each switch, one to implement a generic switch
## abstraction, and one to adapt it to the physical fabric.
agents=london-switch, london-fabric, \
       paris-switch, paris-fabric, \
       athens-switch, athens-fabric, \
       aggr

aggr.name=aggr
aggr.type=persistent-aggregator
aggr.db.service=jdbc:sqlite:/home/initiate/.local/var/dataplane-broker/aggr.sqlite3

## The london switch stores its service status in an Sqlite3 database,
## and operates the fabric provided by the london-fabric agent.
london-switch.name=london
london-switch.type=persistent-switch
london-switch.db.service=jdbc:sqlite:/home/initiate/.local/var/dataplane-broker/london-switch.sqlite3
london-switch.fabric.agent=london-fabric

## The london fabric operates a single VFC in a Corsa, multiplexing
## services across it.  It is told how to operate the Corsa’s REST
## management interface, what OpenFlow configuration to apply to the
## VFC (the controller address, and netns, if it needs to create it),
## and how to configure the VFC’s controller (through a custom REST
## API).
london-fabric.name=london-fabric
london-fabric.type=corsa-dp2x00-sharedbr
london-fabric.description.prefix=dpb:vc:london:
london-fabric.subtype=openflow
london-fabric.rest.inherit=#mycorsa.rest
london-fabric.ctrl.inherit=#mycorsa.ctrl

## Other switches and fabrics follow a similar pattern.
athens-switch.name=athens
athens-switch.type=persistent-switch
athens-switch.db.service=jdbc:sqlite:/home/initiate/.local/var/dataplane-broker/athens-switch.sqlite3
athens-switch.fabric.agent=athens-fabric

athens-fabric.name=athens-fabric
athens-fabric.type=corsa-dp2x00-sharedbr
athens-fabric.description.prefix=dpb:vc:athens:
athens-fabric.subtype=openflow
athens-fabric.rest.inherit=#mycorsa.rest
athens-fabric.ctrl.inherit=#mycorsa.ctrl

paris-switch.name=paris
paris-switch.type=persistent-switch
paris-switch.db.service=jdbc:sqlite:/home/initiate/.local/var/dataplane-broker/paris-switch.sqlite3
paris-switch.fabric.agent=paris-fabric

paris-fabric.name=paris-fabric
paris-fabric.type=corsa-dp2x00-sharedbr
paris-fabric.description.prefix=dpb:vc:paris:
paris-fabric.subtype=openflow
paris-fabric.rest.inherit=#mycorsa.rest
paris-fabric.ctrl.inherit=#mycorsa.ctrl

## These credentials are used to operate a Corsa, and can
## be shared by multiple simulated switch abstractions.
mycorsa.rest.location=https\://10.11.12.13/
mycorsa.rest.cert.file=mycorsa-certificate.pem
mycorsa.rest.authz.file=mycorsa-authz
mycorsa.ctrl.host=172.31.31.1
mycorsa.ctrl.port=6556
mycorsa.ctrl.netns=dpb-netns
mycorsa.ctrl.rest.location=http\://localhost:8081/slicer/

The switch networks each operate a distinct bridge in the same Corsa. The bridges all connect to the same controller (also running on beta.sys). As the networks access the same Corsa, and the same OpenFlow controller, their control parameters are all inherited from the same configuration, mycorsa. mycorsa.ctrl identifies the northbound interface of the OF controller; mycorsa.rest identifies the management API of the Corsa. You will need to enable this on your Corsa, as well as generate credentials to reference from the rest.cert.file and rest.authz.file fields. (These filenames are relative to the file referencing them.)

Each switch network type is corsa-dp2x00-sharedbr, meaning a single bridge is created and used for all services. Each network gives its bridge a distinct description within Corsa management; for example, the paris bridge’s description begins with dpb:vc:paris:. When the network first contacts the Corsa, it ensures that only one bridge has a description with that prefix.

In the corsa-dp2x00-sharedbr implementation, each active service of N end points manifests as N tunnel attachments to N available ofports of the bridge (which can be viewed on the Corsa with show bridge br2 tunnel). When N=2, two OpenFlow rules can be seen to exchange traffic from each of these ofports to the other (with show openflow flow br2). For larger N, an OpenFlow group is created to output to all ofports forming the service (show openflow group br2), learning-switch rules based on this group are added, and unrecognized source MACs are delivered to the controller.

There is no topology information in the configuration file, as that is set up through other dpb-client commands. The configuration file only contains management connectivity information.

SSH authorization

Access to the server is through SSH. The script dpb-ssh-agent is invoked on the server side, with minimal arguments, then the client and server engage in a bidirectional exchange of JSON messages. To allow a user access, take their SSH public key, and append it as a single line to ~/.ssh/authorized_keys on the server account. This line should be augmented with settings that restrict what the client can invoke:

command="/usr/local/bin/dpb-ssh-agent +N -s /tmp/dataplane-broker.sock -m london -m athens -m paris -m aggr",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ecdsa-sha2-nistp256 AAAAE2VjZHf85P+s= foo@example

The example above permits the caller to manage and control all four network abstractions (london, athens, paris and aggr). Management operations include the addition and removal of terminals, and the addition, removal and configuration of trunks (for aggregators only). Control operations include the creation, definition, activation, deactivation and release of services. Use -c to permit only control of a particular network.
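
For example, assuming the client’s public key has been saved on the server as /tmp/alice.pub (a hypothetical path), an entry like the one above can be appended with:

printf 'command="/usr/local/bin/dpb-ssh-agent +N -s /tmp/dataplane-broker.sock -m london -m athens -m paris -m aggr",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty %s\n' \
       "$(cat /tmp/alice.pub)" >> ~/.ssh/authorized_keys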

DPB server invocation

Run the DPB server process through Usmux:

usmux -d -B /tmp/dataplane-broker.sock -- dpb-server -s '%s' -L/usr/share/java -L/usr/local/share/java -lsqlite-jdbc -lhttpcore -lhttpclient -lcommons-logging

Everything after the -- is the actual DPB server command. Usmux creates a Unix-domain socket /tmp/dataplane-broker.sock (which is also given in the SSH configuration above), listens for connections on it, and relays them to the given process.

The -d option runs Usmux in debug mode, i.e., the server process remains in the foreground, and stays connected to the terminal. Without it, the command exits as soon as the server is established and has forked into the background.

To run as a cronjob, remember to escape the % character in the command, as this is special in crontab:

*/10 * * * * usmux -B /tmp/dataplane-broker.sock -- dpb-server -s '\%s' -L/usr/share/java -L/usr/local/share/java -lsqlite-jdbc -lhttpcore -lhttpclient -lcommons-logging > /dev/null 2>&1
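
Alternatively, if you prefer a service manager to cron, a minimal systemd unit along these lines should work. This is only a sketch: it assumes usmux is installed under /usr/local/bin and that dpb-server is on the service's PATH, and note that % is also special in unit files, so it must be doubled.

[Unit]
Description=Data Plane Broker server
After=network.target

[Service]
# Run as the account that holds ~/.config/dataplanebroker/server.properties.
User=initiate
# %% is systemd's escape for a literal %, so dpb-server still sees '%s'.
ExecStart=/usr/local/bin/usmux -d -B /tmp/dataplane-broker.sock -- \
  dpb-server -s '%%s' -L/usr/share/java -L/usr/local/share/java \
  -lsqlite-jdbc -lhttpcore -lhttpclient -lcommons-logging
Restart=on-failure

[Install]
WantedBy=multi-user.target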

DPB client configuration

You’ll need an SSH key to get into remote servers:

ssh-keygen -t ecdsa -f ~/.ssh/dpb_ecdsa

That creates dpb_ecdsa and dpb_ecdsa.pub in your SSH directory. Provide the public key to the server admin to install as noted above.

(You could use an existing key pair, but if it already gives you unrestricted access to the account, installing it this way will take that access away, as the authorized_keys configuration will restrict the key to running only dpb-ssh-agent, and only in one particular way.)

Store client-side configuration in ~/.config/dataplanebroker/client.properties. The syntax is the same as for the server side. However, all the agents are just SSH clients into the server account (initiate@beta.sys in this example). For example:

## Define how to SSH to the server host and account.
beta.host=beta.sys
beta.user=initiate
## (Psst! Not sure if I've implemented ${} yet!)
beta.key-file=${user.home}/.ssh/dpb_ecdsa

## Indicate the configuration prefixes of agents to instantiate.
agents=london, paris, athens, aggr

## Define those agents with corresponding prefixes (e.g., "paris.")...

## This agent provides access to a remote switch via SSH.
paris.type=ssh-switch
paris.ssh.inherit=\#beta

## Identify this switch by this name.  This applies to both the
## command line (i.e., "-n paris"), and the name used at the remote
## site.
paris.name=paris

## Other networks are configured similarly...

athens.type=ssh-switch
athens.ssh.inherit=\#beta
athens.name=athens

london.type=ssh-switch
london.ssh.inherit=\#beta
london.name=london

aggr.type=ssh-switch
aggr.ssh.inherit=\#beta
aggr.name=aggr

Client invocation

Example topology configuration

A client with management access can describe the physical topology of the network to the network abstractions. The following command sets up a simulated chain of switches (athens-paris-london), co-ordinated by aggr:

dpb-client -n london \
           add-terminal openstack phys.25 \
           add-terminal paris phys.13 \
           -n paris \
           add-terminal london phys.29 \
           add-terminal athens phys.29 \
           -n athens \
           add-terminal paris phys.13 \
           add-terminal openstack phys.24 \
           -n aggr \
           add-terminal athens-openstack athens:openstack \
           add-terminal london-openstack london:openstack \
           add-trunk athens:paris paris:athens \
           add-trunk london:paris paris:london \
           provide paris:athens 5000 \
           provide paris:london 5000 \
           open paris:athens 100-399 \
           open paris:london 400-699

From the aggregator’s viewpoint, there are two terminals, aggr:athens-openstack and aggr:london-openstack. (The syntax is network:terminal.) These are mapped to athens:openstack and london:openstack respectively, which in turn map to the interface configurations phys.24 (i.e., port 24) and phys.25. This means that users of the aggregator can connect any two or more VLANs on ports 24 and 25.

The aggregator is also made aware of two trunks, one between terminals athens:paris and paris:athens, and one between london:paris and paris:london. Both of these run between ports 13 and 29, which have been connected physically with a 10G cable. 5G is allocated to each trunk, as are two non-overlapping VLAN ranges, so we are implementing both trunks on a single link (but just for experimental purposes).

Make sure that the referenced physical ports have the right tunnel mode (usually ctag). Otherwise, tunnel attachment will (quietly!) fail when a service is created. If this happens, and then you fix the tunnel mode, restarting the broker server will usually cause the tunnels to be (re-)established properly.

Control operation

Test connectivity:

dpb-client -n aggr dump

Create a service:

dpb-client -n aggr new

Select the service, and define its end points (100Mbps ingress):

dpb-client -n aggr -s 1 -b 100 -e athens-openstack:50 -e london-openstack:80 initiate

This will plot a spanning tree across the switches, and allocate bandwidth from the physical links. Activate the service, creating the forwarding rules in the switches:

dpb-client -n aggr -s 1 activate

De-activate the service:

dpb-client -n aggr -s 1 deactivate

A service can be activated and de-activated repeatedly. Release the service (resulting in its destruction):

dpb-client -n aggr -s 1 release

Watch the service for events:

dpb-client -n aggr -s 1 watch

Install NBI information

Install WIM Driver in RO

DPB RO repo

git clone https://github.com/DataPlaneBroker/RO.git