Data Center Bridging (DCB) for Intel® Network Connections

Overview

DCB for Windows

DCB for Linux

iSCSI Over DCB


Overview

Data Center Bridging is a collection of standards-based extensions to classical Ethernet. It provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single Unified Fabric. In addition to supporting Fibre Channel Over Ethernet (FCoE), it enhances the operation of other business-critical traffic.

Data Center Bridging is a flexible framework that defines the capabilities required for switches and end points to be part of a data center fabric. It includes the following capabilities:

  Priority Flow Control (PFC; IEEE 802.1Qbb)
  Enhanced Transmission Selection (ETS; IEEE 802.1Qaz)
  Congestion Notification (IEEE 802.1Qau)
  DCB Capability Exchange Protocol (DCBX; IEEE 802.1Qaz)

There are two supported versions of DCBX.

Version 1: The specification can be found at http://download.intel.com/technology/eedc/dcb_cep_spec.pdf

This version of DCBX is referenced in Annex F of the FC-BB-5 standard (FCoE) as the version of DCBX used with pre-FIP FCoE implementations.

Version 2: The specification can be found as a link within the following document:  http://www.ieee802.org/1/files/public/docs2008/dcb-baseline-contributions-1108-v1.01.pdf

For more information on DCB, including the DCB Capability Exchange Protocol Specification, go to  http://www.intel.com/technology/eedc/ or http://www.ieee802.org/1/pages/dcbridges.html

For system requirements, see the System Requirements section of this guide.


DCB for Windows

Installation:

From the Intel® CD: Click the FCoE/DCB checkbox to install the Intel® Ethernet FCoE Protocol Driver and DCB. The MSI Installer installs all FCoE and DCB components including the Base Driver.

If you have a switch configured to use DCBX, the Windows DCB service (from Intel) will (by default) automatically communicate with the switch and setup the DCB configuration. 

Configuration:

Many DCB functions can be configured or revised using Intel® PROSet for Windows Device Manager, from the Data Center tab. You can use Intel® PROSet for Windows Device Manager to view and change the DCB settings for each adapter.

Click here for instructions on installing and using Intel® PROSet for Windows Device Manager.


DCB for Linux

Background

Requirements

Functionality

How To Build a DCB Capable System

Options

Setup

Operation

Testing

dcbtool Overview

dcbtool Options

Commands

FAQ

Known Issues

License

Support

Background

In the 2.4.x kernel, qdiscs were introduced. The rationale behind this effort was to provide QoS in software, as hardware did not provide the necessary interfaces to support it. In 2.6.23, Intel pushed the notion of multiqueue support into the qdisc layer. This provides a mechanism to map the software queues in the qdisc structure into multiple hardware queues in underlying devices. In the case of Intel adapters, this mechanism is leveraged to map qdisc queues onto the queues within our hardware controllers.

Within the Data Center, the perception is that traditional Ethernet:

  1. has high latency
  2. is prone to losing frames, rendering it unacceptable for storage applications

To address these concerns, Intel and a number of industry leaders have been working on enhancements to classical Ethernet. Specifically, within the IEEE 802.1 standards body, several task forces are working on enhancements to address these concerns. The applicable standards are listed below:

Enhanced Transmission Selection
        IEEE 802.1Qaz
Lossless Traffic Class
        Priority Flow Control: IEEE 802.1Qbb
        Congestion Notification: IEEE 802.1Qau
DCB Capability exchange protocol: IEEE 802.1Qaz

This software release represents Intel's implementation of these efforts. Note that many of these standards have not yet been ratified; this is a pre-standards release, so users are advised to check SourceForge often. While we have worked with some of the major ecosystem vendors to validate this release, many vendors still have solutions in development. As those solutions become available and the standards are ratified, we will work with ecosystem partners and the standards body to ensure that the Intel solution works as expected.


Requirements

The DCB solution requires a Linux kernel of version 2.6.29 or later with DCB support enabled, an Intel adapter supported by the ixgbe driver, and the dcbd and dcbtool utilities described below.

Functionality

dcbd

dcbtool


How To Build a DCB-Capable System

Linux kernel install

  1. Requires a 2.6.29 or later kernel.
  2. Untar and make the kernel. Listed below are the required kernel options (a corresponding .config check follows this list):

    Required configuration options:

    From 'make menuconfig':
    1. In Networking support -> Networking options, enable Data Center Bridging

    2. In Networking support -> Networking options -> QoS and/or fair queuing, enable:

      Hardware Multiqueue-aware Multi Band Queuing (MULTIQ)
      Multi Band Priority Queueing (PRIO)
      Elementary classification (BASIC)
      Universal 32bit comparisons w/ hashing (U32)
      Extended Matches, and make sure the U32 key option is selected
      Actions -> SKB Editing

    3. To enable ixgbe driver support for DCB, in Device Drivers -> Network device support -> Ethernet (10000 Mbit), enable Intel® 10GbE PCI Express adapters support and the Data Center Bridging (DCB) Support option.
  3. Build the kernel.
  4. Create a link from /usr/include/linux to
    /usr/src/kernels/linux-2.x.xx.x/include/linux. Use the following command:
    ln -s /usr/src/kernels/linux-2.x.xx.x/include/linux /usr/include/linux.
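
The menuconfig selections in step 2 map to kernel configuration symbols such as CONFIG_DCB, CONFIG_NET_SCH_MULTIQ, CONFIG_NET_SCH_PRIO, CONFIG_NET_CLS_BASIC, CONFIG_NET_CLS_U32, CONFIG_NET_EMATCH, CONFIG_NET_EMATCH_U32, CONFIG_NET_ACT_SKBEDIT, CONFIG_IXGBE and CONFIG_IXGBE_DCB. As a sketch, the built kernel's .config can be checked for them as follows (exact symbol names may vary between kernel versions):

# grep -E 'CONFIG_DCB=|CONFIG_NET_SCH_MULTIQ|CONFIG_NET_SCH_PRIO|CONFIG_NET_CLS_BASIC|CONFIG_NET_CLS_U32|CONFIG_NET_EMATCH|CONFIG_NET_ACT_SKBEDIT|CONFIG_IXGBE' /usr/src/kernels/linux-2.x.xx.x/.config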

dcbd Application Install

  1. Download iproute2 from the web. Listed below is a link for iproute:

    http://devresources.linux-foundation.org/dev/iproute2/download/

    Please ensure that you use the version that corresponds to the kernel version you are using. Follow the build/installation instructions in the README included with the tarball. Typically, the commands ./configure; make; make install should work.
  2. Download the latest version of the dcbd-x.y.z tarball from the e1000 project on SourceForge and untar it. Go into the dcbd-x.y.z directory and run the following commands:

        'make clean; make; make install'

    This will build and copy 'dcbd' and 'dcbtool' to /usr/sbin, create the '/etc/sysconfig/dcbd' directory (the default location of the dcbd.conf file), and set up dcbd to run as a system service using the chkconfig program. Verify that the dcbd service is working as expected with the 'service dcbd status' command. If the service is not running, issue the command 'service dcbd start'. A consolidated command sketch follows these steps.

    dcbd will create the dcbd.conf file if it does not exist.

    For development purposes, 'dcbd' can be run directly from the build directory.
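
    As a consolidated sketch, the build, install and verification steps above look roughly like the following (dcbd-x.y.z is the placeholder tarball name used above, and a gzip-compressed tarball is assumed):

    # tar xzf dcbd-x.y.z.tar.gz
    # cd dcbd-x.y.z
    # make clean; make; make install
    # service dcbd status
    # service dcbd start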


Options

dcbd has the following command line options (a brief usage example follows this list):
-h  show usage information
-f  configfile: use the specified file as the config file instead of the default file - /etc/sysconfig/dcbd/dcbd.conf
-d  run dcbd as a daemon
-v  show dcbd version
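
For example (a sketch; the configuration file path shown is simply the default location noted above, passed explicitly for illustration):

# dcbd -v
# dcbd -d -f /etc/sysconfig/dcbd/dcbd.conf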

Setup

  1. Load the ixgbe module.
  2. Verify that the dcbd service is functional.
    If dcbd was installed, run "service dcbd status" to check it and "service dcbd start" to start it.
    Alternatively, run "dcbd -d" from the command line to start it.
  3. Enable DCB on the selected ixgbe port: dcbtool sc ethX dcb on
  4. The dcbtool command can be used to query and change the DCB configuration (for example, assigning different bandwidth percentages to different queues). Use 'dcbtool -h' to see a list of options. A consolidated command sketch follows this list.
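
A minimal sketch of the setup sequence above, using the ethX interface name convention from this guide:

# modprobe ixgbe
# service dcbd status
# service dcbd start      (only if the status command shows the service is not running)
# dcbtool sc ethX dcb on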

Operation

dcbd and dcbtool can be used to configure a DCB capable driver, such as the ixgbe driver, which supports the rtnetlink DCB interface. Once the DCB features are configured, the next step is to classify traffic to be identified with an 802.1p priority and the associated DCB features. This can be done by using the 'tc' command to set up the qdisc and filters that cause network traffic to be transmitted on different queues.

The skbedit action mechanism can be used in a tc filter to classify traffic patterns to a specific queue_mapping value from 0-7. The ixgbe driver will place traffic with a given queue_mapping value onto the corresponding hardware queue and tag the outgoing frames with the corresponding 802.1p priority value.

Set up the multi-queue qdisc for the selected interface:

# tc qdisc add dev ethX root handle 1: multiq

Setting the queue_mapping in a TC filter allows the ixgbe driver to classify a packet into a queue. Here are some examples of how to filter traffic into various queues using u32 filters that match on the destination port:

# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 80 \
0xffff action skbedit queue_mapping 0

# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 53 \
0xffff action skbedit queue_mapping 1

# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 5001 \
0xffff action skbedit queue_mapping 2

# tc filter add dev ethX protocol ip parent 1: u32 match ip dport 20 \
0xffff action skbedit queue_mapping 7

Here is an example that sets up a filter based on EtherType. In this example the EtherType is 0x8906 (35078 decimal), the FCoE EtherType.

# tc filter add dev ethX protocol 802_3 parent 1: handle 0xfc0e basic match \
'cmp(u16 at 12 layer 1 mask 0xffff eq 35078)' action skbedit queue_mapping 3
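
To confirm that the qdisc and filters were installed as intended, the standard tc show commands can be used (output format varies with the iproute2 version):

# tc qdisc show dev ethX
# tc filter show dev ethX parent 1: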


Testing

To test in a back-to-back setup, use the following tc commands to set up the qdisc and filters for TCP ports 5000 through 5007. Then use a tool, such as iperf, to generate UDP or TCP traffic on ports 5000-5007; a short traffic-generation and statistics example follows the filter commands below.

Statistics for each queue of the ixgbe driver can be checked using the ethtool utility: ethtool -S ethX

# tc qdisc add dev ethX root handle 1: multiq

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5000 0xffff action skbedit queue_mapping 0

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5000 0xffff action skbedit queue_mapping 0

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5001 0xffff action skbedit queue_mapping 1

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5001 0xffff action skbedit queue_mapping 1

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5002 0xffff action skbedit queue_mapping 2

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5002 0xffff action skbedit queue_mapping 2

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5003 0xffff action skbedit queue_mapping 3

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5003 0xffff action skbedit queue_mapping 3

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5004 0xffff action skbedit queue_mapping 4

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5004 0xffff action skbedit queue_mapping 4

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5005 0xffff action skbedit queue_mapping 5

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5005 0xffff action skbedit queue_mapping 5

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5006 0xffff action skbedit queue_mapping 6

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5006 0xffff action skbedit queue_mapping 6

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip dport 5007 0xffff action skbedit queue_mapping 7

# tc filter add dev ethX protocol ip parent 1: \
u32 match ip sport 5007 0xffff action skbedit queue_mapping 7
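
As a sketch, traffic can then be generated with iperf and the per-queue counters inspected with ethtool. The example below assumes the classic iperf client/server syntax and a second back-to-back system, shown as the placeholder <server IP>; the tx_queue_* counter names are typical of the ixgbe driver but may differ between driver versions.

On the receiving system:

# iperf -s -p 5002

On the transmitting system:

# iperf -c <server IP> -p 5002

# ethtool -S ethX | grep tx_queue_2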


dcbtool Overview

dcbtool is used to query and set the DCB settings of a DCB capable Ethernet interface. It connects to the client interface of dcbd to perform these operations. dcbtool will operate in interactive mode if it is executed without a command. In interactive mode, dcbtool also functions as an event listener and will print out events received from dcbd as they arrive.


Synopsis

dcbtool -h

dcbtool -v

dcbtool [-rR]

dcbtool [-rR] [command] [command arguments]

Options

-h    shows the dcbtool usage message

-v    shows dcbtool version information

-r    displays the raw dcbd client interface messages as well as the readable output.

-R    displays only the raw dcbd client interface messages


Commands

help         shows the dcbtool usage message
ping         test command. The dcbd daemon responds with "PONG" if the client interface is operational.
license      displays dcbtool license information
quit         exit from interactive mode
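
For example, running dcbtool without a command enters interactive mode, where these commands can be issued directly (an illustrative session; the prompt string shown is assumed):

# dcbtool
dcbtool> ping
PONG
dcbtool> quit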
 

The following commands interact with the dcbd daemon to manage the daemon and DCB features on DCB capable interfaces.

dcbd general configuration commands:

 
<gc|go> dcbx    gets the configured or operational version of the DCB capabilities exchange protocol. If they differ, the configured version will take effect (and become the operational version) after dcbd is restarted.

sc dcbx v:[1|2]    sets the version of the DCB capabilities exchange protocol which will be used the next time dcbd is started.
    Information about version 1 can be found at:
    <http://download.intel.com/technology/eedc/dcb_cep_spec.pdf>
    Information about version 2 can be found at:
    <http://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discovery-protocol-1108-v1.01.pdf>
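
For example, to check the configured and operational DCBX versions and to select version 2 for the next time dcbd is started (a sketch based on the syntax above):

dcbtool gc dcbx
dcbtool go dcbx
dcbtool sc dcbx v:2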

DCB per-interface commands

 
gc <ifname> <feature>    gets configuration of feature on interface ifname.
go <ifname> <feature>    gets operational status of feature on interface ifname.
gp <ifname> <feature>    gets peer configuration of feature on interface ifname.
sc <ifname> <feature> <args>    sets the configuration of feature on interface ifname.


Feature may be one of the following:

dcb              DCB state of the port
pg               priority groups
pfc              priority flow control
app:<subtype>    application specific data
ll:<subtype>     logical link status

Subtype can be:

0|fcoe    Fibre Channel over Ethernet (FCoE)

Args can include:

e:<0|1> controls feature enable
a:<0|1> controls whether the feature is advertised via DCBX to the peer
w:<0|1> controls whether the feature is willing to change its operational configuration based on what is received from the peer
[feature specific args] arguments specific to a DCB feature

Feature specific arguments for dcb:

on/off    enables or disables DCB for the interface. The go and gp commands are not needed for the dcb feature. Also, the enable, advertise, and willing parameters are not required.

Feature specific arguments for pg:

pgid:xxxxxxxx Priority Group ID for the 8 priorities. From left to right (priorities 0-7), x is the corresponding Priority Group ID value, which can be 0-7 for Priority Groups with bandwidth allocations or f (Priority Group ID 15) for the unrestricted Priority Group.
pgpct:x,x,x,x,x,x,x,x Priority Group percentage of link bandwidth. From left to right (Priority Groups 0-7), x is the percentage of link bandwidth allocated to the corresponding Priority Group. The total bandwidth must equal 100%.
uppct:x,x,x,x,x,x,x,x Priority percentage of Priority Group bandwidth. From left to right (priorities 0-7), x is the percentage of Priority Group bandwidth allocated to the corresponding priority. The sum of percentages for priorities which belong to the same Priority Group must total 100% (except for Priority Group 15).
strict:xxxxxxxx Strict priority setting. From left to right (priorities 0-7), x is 0 or 1. 1 indicates that the priority may utilize all of the bandwidth allocated to its Priority Group.
up2tc:xxxxxxxx Priority to traffic class mapping. From left to right (priorities 0-7), x is the traffic class (0-7) to which the priority is mapped.
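
As an illustration of how these arguments fit together (a hypothetical layout, not a recommended configuration), placing priorities 0-1 in Priority Group 0, 2-3 in group 1, 4-5 in group 2, 6 in group 3 and 7 in the unrestricted group, with link bandwidth split evenly across the four groups and evenly between the two priorities within each group, could look like:

dcbtool sc ethX pg pgid:0011223f pgpct:25,25,25,25,0,0,0,0 uppct:50,50,50,50,50,50,100,0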

Feature specific arguments for pfc:

pfcup:xxxxxxxx Enable/disable priority flow control. From left to right (priorities 0-7), x is 0 or 1. 1 indicates that the corresponding priority is configured to transmit priority pause.

Feature specific arguments for app:<subtype>:

appcfg:xx xx is a hexadecimal value representing an 8-bit bitmap in which bits set to 1 indicate the priority that frames for the application specified by subtype should use. The lowest order bit maps to priority 0.

Feature specific arguments for ll:<subtype>:

status:[0|1]

For testing purposes, the logical link status may be set to 0 or 1. This setting is not persisted in the configuration file.

Examples

Enable DCB on interface eth2

dcbtool sc eth2 dcb on

Assign priorities 0-3 to Priority Group 0, priorities 4-6 to Priority Group 1 and priority 7 to the unrestricted priority. Also, allocate 25% of link bandwidth to Priority Group 0 and 75% to group 1.

dcbtool sc eth2 pg pgid:0000111f pgpct:25,75,0,0,0,0,0,0

Enable transmit of Priority Flow Control for priority 3 and assign FCoE to priority 3.

dcbtool sc eth2 pfc pfcup:00010000
dcbtool sc eth2 app:0 appcfg:08
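
The following additional examples are sketches based on the command syntax described above.

For testing purposes, set the logical link status for the FCoE subtype down and then back up (this setting is not persisted in the configuration file):

dcbtool sc eth2 ll:0 status:0
dcbtool sc eth2 ll:0 status:1

Query the operational and peer Priority Flow Control configuration on the same interface:

dcbtool go eth2 pfc
dcbtool gp eth2 pfc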


FAQ

How did Intel verify its DCB solution?

Answer - The Intel solution is continually evolving as the relevant standards become solidified and more vendors introduce DCB capable systems. That said, we initially used test automation to verify the DCB state machine. As the state machine became more robust and DCB capable hardware became available, we began to test back-to-back with our adapters. Finally, we introduced DCB capable switches into our test bed.


Known Issues

In kernels prior to 2.6.26, TSO will be disabled when the driver is put into DCB mode.

A TX unit hang may be observed when link strict priority is set and a large amount of traffic is transmitted on the link strict priority.


License

dcbd and dcbtool - DCB daemon and command line utility for DCB configuration
Copyright(c) 2007-2010 Intel Corporation.

Portions of dcbd and dcbtool (basically program framework) are based on:

hostapd-0.5.7
Copyright (c) 2004-2007, Jouni Malinen <j@w1.fi>

This program is free software; you can redistribute it and/or modify it under the terms and conditions of the GNU General Public License, version 2, as published by the Free Software Foundation.

This program is distributed in the hope it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.

The full GNU General Public License is included in this distribution in the file called "COPYING".


Support

Contact Information:
e1000-eedc Mailing List <e1000-eedc@lists.sourceforge.net>
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497

