
CERTIFICATE

This is to certify that the Seminar report titled OpenFlow, being submitted by Mr. C. DHRUVA (08K81A0569) in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science & Engineering, is a record of bonafide work carried out by him. The results of the investigation enclosed in this report have been verified and found satisfactory.

GUIDE (A.PRAKASH)

HOD (R.S. MURALI NATH)

ACKNOWLEDGEMENT
The satisfaction and euphoria that accompany the successful completion of any task would be incomplete without mention of the people who made it possible and whose encouragement and guidance have crowned my efforts with success. I extend my deep sense of gratitude to the Principal, St. Martins Engineering College, Dhulapally, for permitting me to undertake this Seminar. I am indebted to my guide, A. Prakash, Associate Professor, Computer Science and Engineering, St. Martins Engineering College, Dhulapally, for his constant support and guidance throughout my Seminar. I am indebted to Mr. R.S. Murali Nath, HOD, Computer Science & Engineering, St. Martins Engineering College, Dhulapally, for his support and guidance throughout my Seminar. Finally, I express thanks to one and all who helped me in successfully completing this Seminar. Furthermore, I would like to thank my family and friends for their moral support and encouragement.

C.DHRUVA (08K81A0569)

A SEMINAR REPORT ON OpenFlow


submitted in partial fulfillment of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY
IN

COMPUTER SCIENCE & ENGINEERING

By C. DHRUVA (08K81A0569) Under the Guidance of A. PRAKASH, Associate Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING, St. MARTINS ENGINEERING COLLEGE (Affiliated to JNTU, Hyderabad), DHULAPALLY (V), QUTBULLAPUR (M), SECUNDERABAD, 2008-2012

ABSTRACT

This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow table and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors to add OpenFlow to their switch products for deployment in college campus backbones and wiring closets. We believe that OpenFlow is a pragmatic compromise: on one hand, it allows researchers to run experiments on heterogeneous switches in a uniform way at line rate and with high port density; on the other hand, vendors do not need to expose the internal workings of their switches. In addition to allowing researchers to evaluate their ideas in real-world traffic settings, OpenFlow could serve as a useful campus component in proposed large-scale testbeds like GENI. Two buildings at Stanford University will soon run OpenFlow networks, using commercial Ethernet switches and routers. We will work to encourage deployment at other schools, and we encourage you to consider deploying OpenFlow in your university network too.

LIST OF CONTENTS

1) INTRODUCTION
2) SWITCH COMPONENTS
3) GLOSSARY
4) OpenFlow TABLES
   4.1 Flow Table
   4.2 Group Table
   4.3 Match Fields
   4.4 Matching
   4.5 Counters
   4.6 Instructions
   4.7 Action Set
   4.8 Action List
   4.9 Actions
5) OpenFlow CHANNEL
6) THE OpenFlow SWITCH
7) THE OpenFlow CONSORTIUM
8) DEPLOYING OpenFlow SWITCHES
9) CONCLUSION
10) REFERENCES

LIST OF FIGURES

1. Idealized OpenFlow Switch
2. OpenFlow Processing Pipeline
3. OpenFlow's flow identification fields
4. Flowchart detailing packet flow through an OpenFlow switch
5. Flowchart showing how match fields are parsed for matching

1. INTRODUCTION
Networks have become part of the critical infrastructure of our businesses, homes and schools. This success has been both a blessing and a curse for networking researchers; their work is more relevant, but their chance of making an impact is more remote. The reduction in real-world impact of any given network innovation is because of the enormous installed base of equipment and protocols, and the reluctance to experiment with production traffic, which have created an exceedingly high barrier to entry for new ideas. Today, there is almost no practical way to experiment with new network protocols (e.g., new routing protocols, or alternatives to IP) in sufficiently realistic settings (e.g., at scale, carrying real traffic) to gain the confidence needed for their widespread deployment. The result is that most new ideas from the networking research community go untried and untested; hence the commonly held belief that the network infrastructure has ossified.

Having recognized the problem, the networking community is hard at work developing programmable networks, such as GENI [1], a proposed nationwide research facility for experimenting with new network architectures and distributed systems. These programmable networks call for programmable switches and routers that (using virtualization) can process packets for multiple isolated experimental networks simultaneously. For example, in GENI it is envisaged that a researcher will be allocated a slice of resources across the whole network, consisting of a portion of network links, packet processing elements (e.g. routers) and end-hosts; researchers program their slices to behave as they wish. A slice could extend across the backbone, into access networks, into college campuses and industrial research labs, and include wiring closets, wireless networks, and sensor networks. Virtualized programmable networks could lower the barrier to entry for new ideas, increasing the rate of innovation in the network infrastructure.
But the plans for nationwide facilities are ambitious (and costly), and it will take years for them to be deployed.

This report focuses on a shorter-term question closer to home: As researchers, how can we run experiments in our campus networks? If we can figure out how, we can start soon and extend the technique to other campuses to benefit the whole community. To meet this challenge, several questions need answering, including: In the early days, how will college network administrators get comfortable putting experimental equipment (switches, routers, access points, etc.) into their network? How will researchers control a portion of their local network in a way that does not disrupt others who depend on it? And exactly what functionality is needed in network switches to enable experiments? Our goal here is to propose a new switch feature that can help extend programmability into the wiring closet of college campuses.

One approach, which we do not take, is to persuade commercial name-brand equipment vendors to provide an open, programmable, virtualized platform on their switches and routers so that researchers can deploy new protocols, while network administrators can take comfort that the equipment is well supported. This outcome is very unlikely in the short term. Commercial switches and routers do not typically provide an open software platform, let alone provide a means to virtualize either their hardware or software. The practice of commercial networking is that the standardized external interfaces are narrow (i.e., just packet forwarding), and all of the switch's internal flexibility is hidden. The internals differ from vendor to vendor, with no standard platform for researchers to experiment with new ideas. Further, network equipment vendors are understandably nervous about opening up interfaces inside their boxes: they have spent years deploying and tuning fragile distributed protocols and algorithms, and they fear that new experiments will bring networks crashing down. And, of course, open platforms lower the barrier to entry for new competitors.

A few open software platforms already exist, but they do not have the performance or port density we need. The simplest example is a PC with several network interfaces and an operating system. All well-known operating systems support routing of packets between interfaces, and open-source implementations of routing protocols exist (e.g., as part of the Linux distribution, or from XORP [2]); in most cases it is possible to modify the operating system to process packets in almost any manner (e.g., using Click [3]). The problem, of course, is performance: a PC can support neither the number of ports needed for a college wiring closet (a fanout of 100+ ports is needed per box) nor the packet-processing performance (wiring closet switches process over 100 Gbit/s of data, whereas a typical PC struggles to exceed 1 Gbit/s; and the gap between the two is widening).

Existing platforms with specialized hardware for line-rate processing are not quite suitable for college wiring closets either. For example, an ATCA-based virtualized programmable router called the Supercharged PlanetLab Platform [4] is under development at Washington University; it can use network processors to process packets from many interfaces simultaneously at line rate. This approach is promising in the long term, but for the time being it is targeted at large switching centers and is too expensive for widespread deployment in college wiring closets. At the other extreme is NetFPGA [5], targeted for use in teaching and research labs. NetFPGA is a low-cost PCI card with a user-programmable FPGA for processing packets, and four ports of Gigabit Ethernet. NetFPGA is limited to just four network interfaces, insufficient for use in a wiring closet. Thus, the commercial solutions are too closed and inflexible, and the research solutions either have insufficient performance or fanout, or are too expensive.
It seems unlikely that the research solutions, with their complete generality, can overcome their performance or cost limitations. A more promising approach is to compromise on generality and to seek a degree of switch flexibility that is:

- Amenable to high-performance and low-cost implementations.
- Capable of supporting a broad range of research.
- Assured to isolate experimental traffic from production traffic.
- Consistent with vendors' need for closed platforms.

FIG 1: Idealized OpenFlow Switch. The Flow Table is controlled by a remote controller via the Secure Channel.

2. SWITCH COMPONENTS
An OpenFlow Switch consists of one or more flow tables and a group table, which perform packet lookups and forwarding, and an OpenFlow channel to an external controller. The controller manages the switch via the OpenFlow protocol. Using this protocol, the controller can add, update, and delete flow entries, both reactively (in response to packets) and proactively. Each flow table in the switch contains a set of flow entries; each flow entry consists of match fields, counters, and a set of instructions to apply to matching packets. Matching starts at the first flow table and may continue to additional flow tables. Flow entries match packets in priority order, with the first matching entry in each table being used. If a matching entry is found, the instructions associated with the specific flow entry are executed. If no match is found in a flow table, the outcome depends on switch configuration: the packet may be forwarded to the controller over the OpenFlow channel, dropped, or may continue to the next flow table. Instructions associated with each flow entry describe packet forwarding, packet modification, group table processing, and pipeline processing. Pipeline processing instructions allow packets to be sent to subsequent tables for further processing and allow information, in the form of metadata, to be communicated between tables. Table pipeline processing stops when the instruction set associated with a matching flow entry does not specify a next table; at this point the packet is usually modified and forwarded. Flow entries may forward to a port. This is usually a physical port, but it may also be a virtual port defined by the switch or a reserved virtual port defined by this specification. Reserved virtual ports may specify generic forwarding actions such as sending to the controller, flooding, or forwarding using non-OpenFlow methods, such as normal switch processing, while switch-defined virtual ports may specify link aggregation groups, tunnels

or loopback interfaces. Flow entries may also point to a group, which specifies additional processing. Groups represent sets of actions for flooding, as well as more complex forwarding semantics (e.g. multipath, fast reroute, and link aggregation). As a general layer of indirection, groups also enable multiple flows to forward to a single identifier (e.g. IP forwarding to a common next hop). This abstraction allows common output actions across flows to be changed efficiently. The group table contains group entries; each group entry contains a list of action buckets with specific semantics dependent on the group type. The actions in one or more action buckets are applied to packets sent to the group. Switch designers are free to implement the internals in any way convenient, provided that correct match and instruction semantics are preserved. For example, while a flow may use an "all" group to forward to multiple ports, a switch designer may choose to implement this as a single bitmask within the hardware forwarding table. Another example is matching: the pipeline exposed by an OpenFlow switch may be physically implemented with a different number of hardware tables.
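The components described above (flow entries with match fields, counters, and instructions; group entries with an identifier, type, counters, and action buckets) can be sketched as plain data structures. This is an illustrative simplification, not the OpenFlow wire format; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    """One entry in a flow table (simplified model)."""
    match_fields: dict          # e.g. {"in_port": 1, "eth_type": 0x0800}
    priority: int               # higher wins when several entries match
    counters: dict = field(default_factory=lambda: {"packets": 0, "bytes": 0})
    instructions: list = field(default_factory=list)

@dataclass
class GroupEntry:
    """One entry in the group table (simplified model)."""
    group_id: int               # 32-bit unsigned identifier
    group_type: str             # e.g. "all" or "select"
    counters: dict = field(default_factory=dict)
    action_buckets: list = field(default_factory=list)

# A switch holds numbered flow tables plus a single group table
flow_tables = [[] for _ in range(3)]    # tables 0..2
group_table = {}                        # group_id -> GroupEntry
```

A controller-driven implementation would add, update, and delete `FlowEntry` objects in these tables in response to OpenFlow protocol messages.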

3. GLOSSARY
Byte: an 8-bit octet.

Packet: an Ethernet frame, including header and payload.

Pipeline: the set of linked tables that provide matching, forwarding, and packet modifications in an OpenFlow switch.

Port: where packets enter and exit the OpenFlow pipeline. May be a physical port, a virtual port defined by the switch, or a virtual port defined by the OpenFlow protocol. Reserved virtual ports are ports reserved by this specification. Switch-defined virtual ports are higher-level abstractions that may be defined in the switch using nonOpenFlow methods (e.g. link aggregation groups, tunnels, loopback interfaces).

Match Field: a field against which a packet is matched, including packet headers, the ingress port, and the metadata value.

Metadata: a maskable register value that is used to carry information from one table to the next.

Instruction: an operation that either contains a set of actions to add to the action set, contains a list of actions to apply immediately to the packet, or modifies pipeline processing.

Action: an operation that forwards the packet to a port or modifies the packet, such as decrementing the TTL field. Actions may be specified as part of the instruction set

associated with a flow entry or in an action bucket associated with a group entry.

Action Set: a set of actions associated with the packet that are accumulated while the packet is processed by each table and that are executed when the instruction set instructs the packet to exit the processing pipeline.

Group: a list of action buckets and some means of choosing one or more of those buckets to apply on a per-packet basis.

Action Bucket: a set of actions and associated parameters, defined for groups.

Tag: a header that can be inserted into or removed from a packet via push and pop actions.

Outermost Tag: the tag that appears closest to the beginning of a packet.

4. OpenFlow TABLES
4.1 Flow Table
A flow table consists of flow entries.

Match Fields | Counters | Instructions

Table 1: Main components of a flow entry in a flow table.

Each flow table entry contains:

- Match fields: to match against packets. These consist of the ingress port and packet headers, and optionally metadata specified by a previous table.
- Counters: to update for matching packets.
- Instructions: to modify the action set or pipeline processing.

4.1.1 Pipeline Processing


OpenFlow-compliant switches come in two types: OpenFlow-only and OpenFlow-hybrid. OpenFlow-only switches support only OpenFlow operation; in those switches all packets are processed by the OpenFlow pipeline and cannot be processed otherwise. OpenFlow-hybrid switches support both OpenFlow operation and normal Ethernet switching operation, i.e. traditional L2 Ethernet switching, VLAN isolation, L3 routing, ACL and QoS processing. Those switches should provide a classification mechanism outside of OpenFlow that routes traffic to either the OpenFlow pipeline or the normal pipeline. For example, a switch may use the VLAN tag or input port of the packet to decide which pipeline to use, or it may direct all packets to the OpenFlow pipeline. This classification mechanism is outside the scope of this specification. An OpenFlow-hybrid switch may also allow a packet to go from the OpenFlow pipeline to the normal pipeline through the NORMAL and FLOOD virtual ports (see 4.9).

The OpenFlow pipeline of every OpenFlow switch contains multiple flow tables, each flow table containing multiple flow entries. The OpenFlow pipeline processing defines how packets interact with those flow tables (see Figure 2). An OpenFlow switch with only a single flow table is valid; in this case pipeline processing is greatly simplified.

FIG 2: OpenFlow Processing Pipeline

The flow tables of an OpenFlow switch are sequentially numbered, starting at 0. Pipeline processing always starts at the first flow table: the packet is first matched against entries of flow table 0. Other flow tables may be used depending on the outcome of the match in the first table. If the packet matches a flow entry in a flow table, the corresponding instruction set is executed. The instructions in the flow entry may explicitly direct the packet to another flow table (using the Goto-Table instruction, see 4.6), where the same process is repeated. A flow entry can only direct a packet to a flow table number greater than its own; in other words, pipeline processing can only go forward, not backward. Consequently, the flow entries of the last table of the pipeline cannot include the Goto-Table instruction. If the matching flow entry does not direct packets to another flow table, pipeline processing stops at this table. When pipeline processing stops, the packet is processed with its associated action set and usually forwarded.

If the packet does not match a flow entry in a flow table, this is a table miss. The behavior on a table miss depends on the table configuration; the default is to send the packet to the controller over the control channel via a packet-in message, and another option is to drop the packet. A table can also specify that on a table miss packet processing should continue; in this case the next sequentially numbered table processes the packet.
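The table-walk described above can be sketched as a small loop. This is a hedged simplification, not a real implementation: packets are modelled as dicts, matching is exact-field only, and the instruction encoding and the `on_miss`/`on_done` callbacks are hypothetical.

```python
def lookup(table, packet):
    """Return the highest-priority entry whose match fields all equal the
    corresponding packet fields (no wildcards in this sketch)."""
    matches = [e for e in table
               if all(packet.get(k) == v for k, v in e["match"].items())]
    return max(matches, key=lambda e: e["priority"]) if matches else None

def pipeline_process(packet, flow_tables, on_miss, on_done):
    """Walk the tables from table 0, accumulating an action set; Goto-Table
    may only move forward. On a miss, delegate to on_miss (drop, send to
    controller, or continue, per table configuration)."""
    action_set = {}
    table_id = 0
    while table_id is not None:
        entry = lookup(flow_tables[table_id], packet)
        if entry is None:                      # table miss
            return on_miss(packet, table_id)
        next_table = None
        for kind, arg in entry["instructions"]:
            if kind == "write_actions":
                action_set.update(arg)         # merge, overwriting same type
            elif kind == "clear_actions":
                action_set.clear()
            elif kind == "goto_table":
                assert arg > table_id          # pipeline goes forward only
                next_table = arg
        table_id = next_table
    return on_done(packet, action_set)         # execute accumulated action set
```

The forward-only assertion mirrors the rule that a flow entry can only direct a packet to a higher-numbered table.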

4.2 Group Table


A group table consists of group entries. The ability for a flow to point to a group enables OpenFlow to represent additional methods of forwarding (e.g. select and all). Each group entry contains:
Group Identifier | Group Type | Counters | Action Buckets

Table 2: A group entry consists of a group identifier, a group type, counters, and a list of action buckets.

- Group identifier: a 32-bit unsigned integer uniquely identifying the group.
- Group type: to determine group semantics.
- Counters: updated when packets are processed by a group.
- Action buckets: an ordered list of action buckets, where each action bucket contains a set of actions to execute and associated parameters.
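The bucket semantics can be sketched for the two group types the text names ("all" and "select"). This is an illustrative simplification: actions are modelled as functions, packets as lists, and other group types are omitted.

```python
import random

def apply_bucket(bucket, packet):
    """Apply each action in a bucket to the packet (actions are modelled
    here as plain functions; a hypothetical simplification)."""
    for action in bucket:
        packet = action(packet)
    return packet

def apply_group(group, packet):
    """Sketch of group semantics: an 'all' group applies every bucket to a
    copy of the packet (e.g. flooding); a 'select' group picks one bucket
    (e.g. multipath). Returns the list of resulting packet copies."""
    if group["type"] == "all":
        return [apply_bucket(b, list(packet)) for b in group["buckets"]]
    if group["type"] == "select":
        return [apply_bucket(random.choice(group["buckets"]), list(packet))]
    raise ValueError("group type not covered by this sketch")
```

Because a group is a layer of indirection, many flow entries can point at one `group_id`, and repointing the group's buckets changes the forwarding of all of those flows at once.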

4.3 Match Fields


The table below shows the match fields an incoming packet is compared against. Each entry contains a specific value, or ANY, which matches any value. If the switch supports arbitrary bitmasks on the Ethernet source and/or destination fields, or on the IP source and/or destination fields, these masks can specify matches more precisely. The fields in the OpenFlow tuple are listed in the following table, and details on the properties of each field are described in the table after that. In addition to packet headers, matches can also be performed against the ingress port and metadata fields. Metadata may be used to pass information between tables in a switch.

FIG 3: OpenFlow's flow identification fields.

4.4 Matching

FIG 4: Flowchart detailing packet flow through an OpenFlow switch.

On receipt of a packet, an OpenFlow Switch performs the functions shown in the figure above. The switch starts by performing a table lookup in the first flow table and, based on pipeline processing, may perform table lookups in other flow tables. The match fields used for table lookups depend on the packet type, as shown in the next figure. A packet matches a flow table entry if the values in the match fields used for the lookup match those defined in the flow table. If a flow table field has a value of ANY, it matches all possible values in the header.

To handle the various Ethernet framing types, matching on the Ethernet type is based on the packet frame content. In general, the Ethernet type matched by OpenFlow is the one describing what OpenFlow considers to be the payload of the packet. If the packet has VLAN tags, the Ethernet type matched is the one found after all the VLAN tags. An exception to that rule is packets with MPLS tags, where OpenFlow cannot determine the Ethernet type of the MPLS payload of the packet. If the packet is an Ethernet II frame, the Ethernet type of the Ethernet header (after all VLAN tags) is matched against the flow's Ethernet type. If the packet is an 802.3 frame with an 802.2 LLC header, a SNAP header, and an Organizationally Unique Identifier (OUI) of 0x000000, the SNAP protocol id is matched against the flow's Ethernet type. A flow entry that specifies an Ethernet type of 0x05FF matches all 802.3 frames without a SNAP header, and those with SNAP headers that do not have an OUI of 0x000000.
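The Ethernet-type selection rule above can be sketched as a small decision function. This is a hedged model, not real frame parsing: the input is assumed to be the frame bytes starting at the type/length field after all VLAN tags, and the function name is hypothetical.

```python
def ethertype_for_match(frame_after_vlans):
    """Pick the Ethernet type OpenFlow matches against, given the bytes of
    a frame starting at the type/length field (after any VLAN tags)."""
    type_or_len = int.from_bytes(frame_after_vlans[0:2], "big")
    if type_or_len >= 0x0600:                 # Ethernet II frame
        return type_or_len
    # 802.3 frame: look for an 802.2 LLC + SNAP header with OUI 0x000000
    llc = frame_after_vlans[2:5]
    if llc == b"\xaa\xaa\x03":                # SNAP: DSAP=SSAP=0xAA, ctrl=0x03
        oui = frame_after_vlans[5:8]
        if oui == b"\x00\x00\x00":
            return int.from_bytes(frame_after_vlans[8:10], "big")  # SNAP proto id
    # 802.3 without SNAP, or SNAP with non-zero OUI: matched by 0x05FF
    return 0x05FF
```

A flow entry with Ethernet type 0x05FF would thus match the frames that fall through to the last branch.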

FIG 5: Flowchart showing how match fields are parsed for matching.

Table 3: Field lengths and the way they must be applied to flow entries.

The switch should apply the instruction set and update the associated counters of only the highest-priority flow entry matching the packet. If there are multiple matching flow entries with the same highest priority, the selected flow entry is explicitly undefined. This case can only arise when a controller writer never sets the CHECK_OVERLAP bit on flow mod messages and adds overlapping entries. IP fragments must be reassembled before pipeline processing if the switch configuration contains the OFPC_FRAG_REASM flag. This version of the specification does not define the expected behavior when a switch receives a malformed or corrupted packet.
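The per-field comparison described above (a specific value, the ANY wildcard, or an optional bitmask on address fields) can be sketched as follows. The `ANY` sentinel and function name are illustrative, not part of the protocol encoding.

```python
ANY = object()   # wildcard sentinel: matches any value, per the text above

def field_matches(rule_value, packet_value, mask=None):
    """Compare one match field against one packet field. ANY matches
    everything; an optional bitmask (supported on Ethernet/IP address
    fields) restricts the comparison to the masked bits."""
    if rule_value is ANY:
        return True
    if mask is not None:
        return (packet_value & mask) == (rule_value & mask)
    return packet_value == rule_value

# e.g. match any IPv4 source in 192.168.1.0/24 using a /24 bitmask:
rule_ip = 0xC0A80100     # 192.168.1.0
netmask = 0xFFFFFF00     # /24
```

A full entry matches only when every one of its fields matches; among all matching entries, the highest-priority one is selected.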

4.5 Counters
Counters may be maintained for each table, flow, port, queue, group, and bucket. OpenFlow-compliant counters may be implemented in software and maintained by polling hardware counters with more limited ranges. The next table contains the set of counters defined by the OpenFlow specification. Duration refers to the amount of time the flow has been installed in the switch. The Receive Errors field is the total of all receive and collision errors defined in that table, as well as any others not called out in the table. Counters wrap around with no overflow indicator. If a specific numeric counter is not available in the switch, its value should be set to -1.

Table 4: List of counters.

4.6 Instructions
Each flow entry contains a set of instructions that are executed when a packet matches the entry. These instructions result in changes to the packet, action set, and/or pipeline processing. Supported instructions include:

- Apply-Actions action(s): Applies the specified action(s) immediately, without any change to the action set. This instruction may be used to modify the packet between two tables or to execute multiple actions of the same type. The actions are specified as an action list.
- Clear-Actions: Clears all the actions in the action set immediately.
- Write-Actions action(s): Merges the specified action(s) into the current action set. If an action of the given type exists in the current set, it is overwritten; otherwise it is added.
- Write-Metadata metadata / mask: Writes the masked metadata value into the metadata field. The mask specifies which bits of the metadata register should be modified, i.e. new metadata = (old metadata & ~mask) | (value & mask).

- Goto-Table next-table-id: Indicates the next table in the processing pipeline. The table-id must be greater than the current table-id. The flow entries of the last table of the pipeline cannot include this instruction.

The instruction set associated with a flow entry contains a maximum of one instruction of each type. The instructions of the set execute in the order specified by the list above. In practice, the only constraints are that the Clear-Actions instruction is executed before the Write-Actions instruction, and that Goto-Table is executed last. A switch may reject a flow entry if it is unable to execute the instructions associated with it; in this case, the switch must return an unsupported flow error. Flow tables may not support every match and every instruction.
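The Write-Metadata masked update is a one-line bitwise operation: bits set in the mask are taken from the new value, and all other bits keep their old contents.

```python
def write_metadata(old_metadata, value, mask):
    """Write-Metadata semantics: new = (old & ~mask) | (value & mask).
    Only the bits selected by mask are overwritten."""
    return (old_metadata & ~mask) | (value & mask)
```

For example, `write_metadata(0xAABBCCDD, 0x42, 0xFF)` overwrites only the low byte, yielding 0xAABBCC42.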

4.7 Action Set


An action set is associated with each packet. This set is empty by default. A flow entry can modify the action set using a Write-Actions instruction or a Clear-Actions instruction associated with a particular match. The action set is carried between flow tables. When an instruction set does not contain a Goto-Table instruction, pipeline processing stops and the actions in the action set are executed. An action set contains a maximum of one action of each type. When multiple actions of the same type are required, e.g. pushing or popping multiple MPLS labels, the Apply-Actions instruction may be used. The actions in an action set are applied in the order specified below, regardless of the order in which they were added to the set. If an action set contains a group action, the actions in the appropriate action bucket of the group are also applied in the order specified below. The switch may support arbitrary action execution order through the action list of the Apply-Actions instruction. The order is:

- Copy TTL inwards: apply copy TTL inward actions to the packet
- Pop: apply all tag pop actions to the packet
- Push: apply all tag push actions to the packet
- Copy TTL outwards: apply copy TTL outwards actions to the packet
- Decrement TTL: apply the decrement TTL action to the packet
- Set: apply all set-field actions to the packet
- QoS: apply all QoS actions, such as set queue, to the packet

- Group: if a group action is specified, apply the actions of the relevant group bucket(s) in the order specified by this list
- Output: if no group action is specified, forward the packet on the port specified by the output action

The output action in the action set is executed last. If both an output action and a group action are specified in an action set, the output action is ignored and the group action takes precedence. If neither an output action nor a group action is specified in an action set, the packet is dropped. The execution of groups is recursive; a group bucket may specify another group, in which case the execution of actions traverses all the groups specified by the group configuration.
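The fixed execution order, the group-over-output precedence, and the implicit drop can be sketched as follows. The action-type names in `ACTION_ORDER` are shorthand for the list above; the function returns an execution trace rather than forwarding real packets.

```python
# Fixed execution order of the action set, following the list above
ACTION_ORDER = ["copy_ttl_in", "pop", "push", "copy_ttl_out",
                "dec_ttl", "set_field", "qos", "group", "output"]

def execute_action_set(action_set):
    """Return the actions in the order they would execute, regardless of
    insertion order. A group action takes precedence over output; with
    neither present, the packet is dropped."""
    if "group" in action_set:
        # output is ignored when a group action is present
        action_set = {k: v for k, v in action_set.items() if k != "output"}
    ordered = [k for k in ACTION_ORDER if k in action_set]
    if "group" not in ordered and "output" not in ordered:
        return "drop"              # no output and no group: implicit drop
    return ordered
```

Note that insertion order into the set is irrelevant: a push action always runs before a set-field action, and output always runs last.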

4.8 Action List


The Apply-Actions instruction and the Packet-out message include an action list. The semantics of the action list are identical to those in the OpenFlow 1.0 specification. The actions of an action list are executed in the order specified by the list and are applied immediately to the packet. Execution starts with the first action in the list, and each action is executed on the packet in sequence. The effect of those actions is cumulative: if the action list contains two Push VLAN actions, two VLAN headers are added to the packet. If the action list contains an output action, a copy of the packet is forwarded in its current state to the desired port. If the list contains a group action, the relevant group buckets process a copy of the packet in its current state. After the execution of the action list in an Apply-Actions instruction, pipeline execution continues on the modified packet. The action set of the packet is unchanged by the execution of the action list.
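The cumulative, in-order semantics can be sketched with a toy packet model. Packets are modelled as lists of headers and only two action kinds are shown; this is an illustrative simplification of the action-list behavior, not a protocol implementation.

```python
def run_action_list(packet, actions):
    """Action-list semantics: actions apply immediately and in order, and
    their effects accumulate. Output actions forward a copy of the packet
    in its current state. Packet is modelled as a list of headers."""
    outputs = []
    for kind, arg in actions:
        if kind == "push_vlan":
            # new VLAN tag becomes the outermost VLAN tag, inserted
            # immediately after the Ethernet header
            packet = packet[:1] + [("vlan", arg)] + packet[1:]
        elif kind == "output":
            outputs.append((arg, list(packet)))  # copy in current state
    return packet, outputs
```

Running two Push VLAN actions therefore leaves two VLAN headers on the packet, and an output placed between them would forward a copy carrying only the first tag.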

4.9 Actions
A switch is not required to support all action types, just those marked "Required Actions" below. When connecting to the controller, a switch indicates which of the optional actions it supports.

Required Action: Output. The Output action forwards a packet to a specified port. OpenFlow switches must support forwarding to physical ports and switch-defined virtual ports. Standard ports are defined as physical ports, switch-defined virtual ports, and the LOCAL port if supported (excluding other reserved virtual ports). OpenFlow switches must also support forwarding to the following reserved virtual ports:

- ALL: Send the packet out all standard ports, but not to the ingress port or to ports that are configured OFPPC_NO_FWD.
- CONTROLLER: Encapsulate and send the packet to the controller.
- TABLE: Submit the packet to the first flow table so that the packet can be processed through the regular OpenFlow pipeline. Only valid in the action set of a packet-out message.
- IN_PORT: Send the packet out the ingress port.

Optional Action: Output. The switch may optionally support forwarding to the following reserved virtual ports:

- LOCAL: Send the packet to the switch's local networking stack. The local port enables remote entities to interact with the switch via the OpenFlow network, rather than via a separate control network. With a suitable set of default rules it can be used to implement an in-band controller connection.
- NORMAL: Process the packet using the traditional non-OpenFlow pipeline of the switch. If the switch cannot forward packets from the OpenFlow pipeline to the normal pipeline, it must indicate that it does not support this action.

- FLOOD: Flood the packet using the normal pipeline of the switch. In general, send the packet out all standard ports, but not to the ingress port or to ports that are in the OFPPS_BLOCKED state. The switch may also use the packet VLAN ID to select which ports to flood to.

OpenFlow-only switches do not support output actions to the NORMAL port and FLOOD port, while OpenFlow-hybrid switches may support them. Forwarding packets to the FLOOD port depends on the switch implementation and configuration, while forwarding using a group of type "all" enables the controller to implement flooding more flexibly.

Optional Action: Set-Queue. The set-queue action sets the queue id for a packet. When the packet is forwarded to a port using the output action, the queue id determines which queue attached to that port is used for forwarding the packet. Forwarding behavior is dictated by the configuration of the queue and is used to provide basic Quality-of-Service (QoS) support.

Required Action: Drop. There is no explicit action to represent drops. Instead, packets whose action sets have no output action should be dropped. This result could come from empty instruction sets or empty action buckets in the processing pipeline, or from executing a Clear-Actions instruction.

Required Action: Group. Process the packet through the specified group. The exact interpretation depends on the group type.

Optional Action: Push-Tag/Pop-Tag. Switches may support the ability to push/pop tags as shown in Table 5. To aid integration with existing networks, we suggest that the ability to push/pop VLAN tags be supported. The ordering of header fields/tags is:

Newly pushed tags should always be inserted as the outermost tag in this ordering. When a new VLAN tag is pushed, it should be the outermost VLAN tag, inserted immediately after the Ethernet header. Likewise, when a new MPLS tag is pushed, it should be the outermost MPLS tag, inserted as a shim header after any VLAN tags.

Table 5: Push/pop tag actions.

Optional Action: Set-Field. The various Set-Field actions modify the values of the respective header fields in the packet. While not strictly required, the actions shown in the table below greatly increase the usefulness of an OpenFlow implementation. To aid integration with existing networks, we suggest that VLAN modification actions be supported. Set-Field actions should always be applied to the outermost possible header (e.g. a Set VLAN ID action always sets the ID of the outermost VLAN tag).

Table 6: Set-Field actions.

5. OpenFlow CHANNEL
The OpenFlow channel is the interface that connects each OpenFlow switch to a controller. Through this interface, the controller configures and manages the switch, receives events from the switch, and sends packets out through the switch. The interface between the data path and the OpenFlow channel is implementation-specific; however, all OpenFlow channel messages must be formatted according to the OpenFlow protocol. The OpenFlow channel is usually encrypted using TLS, but may be run directly over TCP. Support for multiple simultaneous controllers is currently undefined.
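Every message on the channel carries the common OpenFlow header: a version byte, a message type, the total message length and a transaction id, 8 bytes in all, in network byte order. As a small sketch (the helper name `hello` is ours), here is how an OFPT_HELLO message, which carries no body beyond the header in its minimal form, can be built and parsed; 0x01 is the OpenFlow 1.0 version byte and type 0 is OFPT_HELLO:

```python
import struct

# Common OpenFlow message header: version (1 B), type (1 B),
# length (2 B), transaction id (4 B), all big-endian.
OFP_HEADER = struct.Struct("!BBHI")

OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0

def hello(xid):
    """Build a minimal OFPT_HELLO message (header only, no body)."""
    return OFP_HEADER.pack(OFP_VERSION_1_0, OFPT_HELLO, OFP_HEADER.size, xid)

msg = hello(7)
version, msg_type, length, xid = OFP_HEADER.unpack(msg)
print(version, msg_type, length, xid)  # 1 0 8 7
```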

6. THE OpenFlow SWITCH


The basic idea is simple: we exploit the fact that most modern Ethernet switches and routers contain flow tables (typically built from TCAMs) that run at line-rate to implement firewalls, NAT and QoS, and to collect statistics. OpenFlow provides an open protocol to program the flow table in different switches and routers, so that a network administrator can partition traffic into production and research flows. The data path of an OpenFlow Switch consists of a Flow Table and an action associated with each flow entry. The set of actions supported by an OpenFlow Switch is extensible, but below we describe a minimum requirement for all switches; for high performance and low cost, the data path must have a carefully prescribed degree of flexibility.

An OpenFlow Switch consists of at least three parts: (1) a Flow Table, with an action associated with each flow entry, to tell the switch how to process the flow; (2) a Secure Channel that connects the switch to a remote control process (called the controller), allowing commands and packets to be sent between the controller and the switch using (3) the OpenFlow Protocol, which provides an open and standard way for a controller to communicate with a switch.

Dedicated OpenFlow switches: A dedicated OpenFlow Switch is a dumb data-path element that forwards packets between ports, as defined by a remote controller. Each flow entry has a simple action associatedated with it. Forward this flow's packets to a given port (or ports): this allows packets to be routed through the network and, in most switches, is expected to take place at line-rate. Encapsulate and forward this flow's packets to a controller: the packet is delivered to the Secure Channel, where it is encapsulated and sent to a controller. This is typically used for the first packet in a new flow, so the controller can decide whether the flow should be added to the Flow Table; in some experiments, it could be used to forward all packets to a controller for processing.

Drop this flow's packets: this can be used for security, to curb denial-of-service attacks, or to reduce spurious broadcast discovery traffic from end-hosts.

An entry in the Flow Table has three fields: (1) a packet header that defines the flow; (2) an action, which defines how the packets should be processed; and (3) statistics, which keep track of the number of packets and bytes for each flow, and the time since the last packet matched the flow (to help with the removal of inactive flows).

OpenFlow-enabled switches: Some commercial switches, routers and access points will be enhanced with the OpenFlow feature by adding the Flow Table, Secure Channel and OpenFlow Protocol.

Table 7: The header fields matched in a Type 0 OpenFlow switch.
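The three fields of a flow entry, and the table-miss behavior of sending unmatched packets to the controller, can be sketched as follows. This is a minimal illustration under our own naming (the `FlowEntry` class and the dict-based packet model are not from the specification); a `None` field value plays the role of a wildcard, and an entry with a drop action would simply carry no output port.

```python
import time

class FlowEntry:
    """One flow-table entry: header (match), action, and statistics."""
    def __init__(self, header, action):
        self.header = header        # e.g. {"in_port": 1, "eth_dst": "aa:bb"}
        self.action = action        # e.g. ("forward", 2) or ("drop",)
        self.packets = 0            # statistics: packet count,
        self.bytes = 0              # byte count,
        self.last_match = None      # and time of last match (to expire idle flows)

    def matches(self, pkt):
        # None acts as a wildcard; other values must match exactly.
        return all(v is None or pkt.get(k) == v for k, v in self.header.items())

def lookup(table, pkt, size):
    """Match a packet, update the entry's statistics, return its action."""
    for entry in table:
        if entry.matches(pkt):
            entry.packets += 1
            entry.bytes += size
            entry.last_match = time.time()
            return entry.action
    return ("controller",)          # table miss: encapsulate and send to controller

table = [FlowEntry({"in_port": 1, "eth_dst": "aa:bb"}, ("forward", 2))]
print(lookup(table, {"in_port": 1, "eth_dst": "aa:bb"}, 64))  # ('forward', 2)
print(lookup(table, {"in_port": 3, "eth_dst": "cc:dd"}, 64))  # ('controller',)
```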

Additional features: If a switch supports the header formats and the four basic actions mentioned above (and detailed in the OpenFlow Switch Specification), then we call it a Type 0 switch. We expect that many switches will support additional actions, for example to rewrite portions of the packet header (e.g., for NAT, or to obfuscate addresses on intermediate links), and to map packets to a priority class. Likewise, some Flow Tables will be able to match on arbitrary fields in the packet header, enabling experiments with new non-IP protocols. As a particular set of features emerges, we will define a Type 1 switch.

Controllers: A controller adds and removes flow entries from the Flow Table on behalf of experiments. For example, a static controller might be a simple application running on a PC that statically establishes flows to interconnect a set of test computers for the duration of an experiment. In this case the flows resemble VLANs in current networks, providing a simple mechanism to isolate experimental traffic from the production network. Viewed this way, OpenFlow is a generalization of VLANs.

FIG 6: Example of a network of OpenFlow- enabled commercial switches and routers.

6.1 Experiments in a Production Network


Imagine that a researcher, Amy, wants to test a new routing protocol in the network she uses every day, with her experimental traffic flowing alongside the production traffic of other users. We want the network to have two additional properties: (1) packets belonging to users other than Amy should be routed using a standard and tested routing protocol running in a switch or router from a name-brand vendor; and (2) Amy should only be able to add flow entries for her own traffic, or for any traffic her network administrator has allowed her to control.
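The second property amounts to a policy check in the controller: before installing a researcher's flow entry, verify that the entry's match falls inside the slice of traffic the administrator delegated to that researcher. A minimal sketch, with an entirely made-up delegation table and source-IP-only matching:

```python
# Hypothetical delegation table: each user may only install entries whose
# source IP falls inside a subnet the administrator assigned to them.
ALLOWED = {"amy": [{"ip_src": "10.0.1.0/24"}]}

def in_subnet(ip, cidr):
    """Return True if dotted-quad `ip` lies inside `cidr` (e.g. 10.0.1.0/24)."""
    net, bits = cidr.split("/")
    mask = (0xFFFFFFFF << (32 - int(bits))) & 0xFFFFFFFF
    to_int = lambda a: int.from_bytes(bytes(map(int, a.split("."))), "big")
    return to_int(ip) & mask == to_int(net) & mask

def may_install(user, match):
    """Accept a flow entry only if its match is within the user's slice."""
    return any(in_subnet(match["ip_src"], rule["ip_src"])
               for rule in ALLOWED.get(user, []))

print(may_install("amy", {"ip_src": "10.0.1.5"}))  # True: her own traffic
print(may_install("amy", {"ip_src": "10.0.2.5"}))  # False: outside her slice
```

A real controller would check every match field (ports, MAC addresses, VLANs) against the delegation, not just the source address, but the principle is the same.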

6.2 Processing packets rather than flows.


For example, an intrusion detection system that inspects every packet, an explicit congestion control mechanism, or modifying the contents of packets, such as when converting packets from one protocol format to another. There are two basic ways to process packets in an OpenFlow-enabled network. The first, and simplest, is to force all of a flow's packets to pass through a controller. To do this, the controller does not add a new flow entry into the Flow Table; it just allows the switch to default to forwarding every packet to the controller. This has the advantage of flexibility, at the cost of performance: it might provide a useful way to test the functionality of a new protocol, but is unlikely to be of much interest for deployment in a large network. The second way is to route packets to a programmable switch that does packet processing, for example a NetFPGA-based programmable router. The advantage is that packets can be processed at line-rate in a user-definable way. The figure below shows an example of how this could be done, in which the OpenFlow-enabled switch operates essentially as a patch panel to allow the packets to reach the NetFPGA. In some cases, the NetFPGA board (a PCI board that plugs into a Linux PC) might be placed in the wiring closet alongside the OpenFlow-enabled switch, or (more likely) in a laboratory.

FIG 7: Example of processing packets through an external line-rate packet-processing device, such as a programmable NetFPGA router.
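The first approach, defaulting every packet to the controller, can be sketched as follows. This is only an illustration with names of our own choosing: the switch finds no matching flow entry, so each packet takes the table-miss path up to a controller function that processes it arbitrarily (here, a toy protocol conversion) and sends it back out.

```python
# Toy model of the "all packets through the controller" approach.
# A flow table entry is a (header_dict, action) pair; the table is empty,
# so every packet defaults to the controller (flexible, but slow).

def switch_forward(flow_table, pkt, controller):
    for header, action in flow_table:
        if all(pkt.get(k) == v for k, v in header.items()):
            return action
    return controller(pkt)   # no entry matched: punt the packet to the controller

def controller(pkt):
    # Arbitrary per-packet processing, e.g. converting to a new protocol format.
    pkt["proto"] = "new-proto"
    return ("packet_out", pkt)

result = switch_forward([], {"proto": "ip", "dst": 4}, controller)
print(result)  # ('packet_out', {'proto': 'new-proto', 'dst': 4})
```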

7. THE OpenFlow CONSORTIUM


The OpenFlow Consortium aims to popularize OpenFlow and maintain the OpenFlow Switch Specification. The Consortium is a group of researchers and network administrators at universities and colleges who believe their research mission will be enhanced if OpenFlow-enabled switches are installed in their network. Membership is open and free for anyone at a school, college, university, or government agency worldwide. The OpenFlow Consortium welcomes individual members who are not employed by companies that manufacture or sell Ethernet switches, routers or wireless access points (because we want to keep the consortium free of vendor influence).

8. DEPLOYING OpenFlow SWITCHES


There is an interesting market opportunity for network equipment vendors to sell OpenFlow-enabled switches to the research community: every building in thousands of colleges and universities contains wiring closets with Ethernet switches and routers, and wireless access points are spread across campus. We are working with several switch and router manufacturers who are adding the OpenFlow feature to their products by implementing a Flow Table in existing hardware; i.e., no hardware change is needed. The switches run the Secure Channel software on their existing processor. Large OpenFlow networks are being deployed in the Computer Science and Electrical Engineering departments at Stanford University, where switches running OpenFlow will replace the networks in two buildings. Eventually, all traffic will run over the OpenFlow network, with production traffic and experimental traffic isolated on different VLANs under the control of network administrators. Researchers will control their own traffic, and be able to add and remove flow entries.

9. CONCLUSION
OpenFlow is a pragmatic compromise that allows researchers to run experiments on heterogeneous switches and routers in a uniform way, without requiring vendors to expose the internal workings of their products, or researchers to write vendor-specific control software. If OpenFlow networks succeed on these campuses, OpenFlow will gradually catch on at other universities, increasing the number of networks that support experiments. A new generation of control software may emerge, allowing researchers to reuse controllers and experiments and to build on the work of others. And over time, tunnels may interconnect the islands of OpenFlow networks at different universities into overlay networks, perhaps supplemented by new OpenFlow networks running in the backbone networks that connect universities to each other.

10. REFERENCES
[1] Global Environment for Network Innovations. Web site: http://geni.net.
[2] Mark Handley, Orion Hodson, and Eddie Kohler. XORP: An Open Platform for Network Research. ACM SIGCOMM Hot Topics in Networking, 2002.
[3] Eddie Kohler, Robert Morris, Benjie Chen, John Jannotti, and M. Frans Kaashoek. The Click Modular Router. ACM Transactions on Computer Systems 18(3), August 2000, pages 263-297.
[4] J. Turner, P. Crowley, J. Dehart, A. Freestone, B. Heller, F. Kuhms, S. Kumar, J. Lockwood, J. Lu, M. Wilson, C. Wiseman, and D. Zar. Supercharging PlanetLab - A High Performance, Multi-Application, Overlay Network Platform. ACM SIGCOMM '07, August 2007, Kyoto, Japan.
[5] NetFPGA: Programmable Networking Hardware. Web site: http://netfpga.org.
[6] The OpenFlow Switch Specification. Available at http://OpenFlowSwitch.org.
[7] Martin Casado, Michael J. Freedman, Justin Pettit, Jianying Luo, Nick McKeown, and Scott Shenker. Ethane: Taking Control of the Enterprise. ACM SIGCOMM '07, August 2007, Kyoto, Japan.
[8] Natasha Gude, Teemu Koponen, Justin Pettit, Ben Pfaff, Martin Casado, Nick McKeown, and Scott Shenker. NOX: Towards an Operating System for Networks. In submission. Also: http://nicira.com/docs/nox-nodis.pdf.
