Cooperative Control Wireless LAN Architecture
Aerohive Networks has responded by pioneering a new WLAN architecture called the
Cooperative Control architecture. It is a controller-less architecture that eliminates the
downsides of controllers while providing the management, mobility, scalability, resiliency,
and security that enterprises require in their wireless infrastructure.
The diagram that follows outlines the building blocks of the cooperative control
architecture. It is implemented using two types of products:
• Cooperative Control Access Points (HiveAPs) that have dual radios that support
simultaneous use of the 2.4 GHz and 5 GHz spectrums for wireless access and/or
wireless mesh connectivity. HiveAPs implement robust security features such as
WPA/WPA2-Enterprise, WPA/WPA2-Personal, de facto standards such as
Opportunistic Key Caching, Private PSK, integrated WIPS, stateful firewall policies,
and L2-L4 denial-of-service (DoS) prevention. Each HiveAP’s SLA capabilities are
based on advanced QoS policies, Dynamic Airtime Scheduling, and Airtime Boost,
all configured through an easy-to-use management application. A single-radio
HiveAP is also available.
• Policy enforcement at the edge: the ability to enforce granular, user-based QoS,
security, and access policies at the edge of the network where the user
connects.
By comparing the logical network planes of the most common networking devices –
such as routers and switches – with that of HiveAPs, you can see striking similarities.
For example:
• They all have the ability to use a centralized management platform for configuration,
monitoring, and troubleshooting, and because the management platform itself is not
in the data path, it can be taken offline without affecting the functionality of the
network.
• Each class of network device implements a distributed control plane that uses control
protocols (e.g. OSPF, spanning tree, etc.) to share information between devices that
allows them to coordinate with each other to ensure the network functions properly
and continuously adapts to changes. With this knowledge of network state provided
by the distributed control plane, each individual device is then able to implement a
distributed data plane allowing each one to quickly make decisions on how traffic
should be processed and forwarded using the optimal path.
This architecture has proven to be the winning architecture for switched and routed
networks for many years because it is scalable, high performance, and resilient while still
allowing for central management. As an example, the Internet uses this architecture.
Aerohive’s cooperative control architecture is the first architecture to bring these proven
network benefits to WLANs. The following chart shows the architectural parallels
between cooperative control and the proven architecture used in switched and routed
networks.
Extending the proven architecture used in switched and routed infrastructures to WLANs
through the use of distributed control and data planes is especially important as
enterprises require greater levels of availability, increased performance with 802.11n, and
seek to improve productivity in their regional and branch offices. Distributing the control
and data planes (e.g., removing controllers) eliminates single points of failure and
performance bottlenecks from the entire wireless network, allowing the remote site
deployment to be as simple and as functional as the campus deployment.
• HiveAP®: The product brand name for Aerohive’s CC-AP (Cooperative Control
Access Point). HiveAPs coordinate with each other using cooperative control
protocols to provide critical functions including seamless mobility, automatic radio
resource management (RRM), policy-based security, and best-path forwarding.
• HiveOS®: The firmware developed by Aerohive Networks that runs on HiveAPs.
• HiveManager®: A centralized wireless network management system (WNMS) that
enables sophisticated identity-based policy management, simplified device
configuration, HiveOS updates, and monitoring and troubleshooting of HiveAPs within
a cooperative control WLAN infrastructure. HiveManager is available as an
appliance, a virtual appliance, or a SaaS offering called HiveManager Online™.
• Hive: A Hive is a group of HiveAPs that share a common name and secret key that
permit them to securely communicate with each other using cooperative control
protocols. Within a Hive, clients can seamlessly roam among HiveAPs across layer 2
and layer 3 boundaries, while preserving their security state, QoS settings, IP settings,
and data connections.
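The whitepaper does not specify how hive membership is proven, but the pattern it describes, a shared name and secret key that let HiveAPs trust each other, can be sketched as a challenge-response keyed by the shared secret. This is an illustrative model only (the function names and use of HMAC-SHA256 are assumptions, not Aerohive's actual protocol):

```python
import hmac, hashlib, os

def hive_auth_response(hive_name: str, secret: bytes, challenge: bytes) -> bytes:
    """Prove hive membership by keying an HMAC with the shared hive secret."""
    return hmac.new(secret, hive_name.encode() + challenge, hashlib.sha256).digest()

def verify_neighbor(hive_name: str, secret: bytes, challenge: bytes,
                    response: bytes) -> bool:
    """A HiveAP accepts a neighbor only if it can answer the challenge."""
    expected = hive_auth_response(hive_name, secret, challenge)
    return hmac.compare_digest(expected, response)

# Only an AP configured with the same hive name and secret can respond correctly.
secret = b"example-shared-secret"
challenge = os.urandom(16)
resp = hive_auth_response("corp-hive", secret, challenge)
assert verify_neighbor("corp-hive", secret, challenge, resp)
assert not verify_neighbor("corp-hive", b"wrong-secret", challenge, resp)
```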
• GuestManager™: A guest management platform that provides a simple web
interface for allowing administrators, such as receptionists or lobby ambassadors, to
create temporary user accounts that provide guests with access to the wireless
network.
• Wired Backhaul Link: An Ethernet connection from a HiveAP to the primary wired
network, typically called the distribution system (DS) in wireless standards, which is
used to bridge traffic between the wireless and wired LANs.
• Wireless Backhaul Link: Wireless connections between HiveAPs that are used to
create a wireless mesh and to provide wireless connections that transport primarily
control and data traffic.
• Bridge Link: An Ethernet connection from a HiveAP that allows a wired device or
network segment to be bridged over the WLAN onto the primary wired LAN.
• Wireless Access Link: The wireless connection between a wireless client and a
HiveAP.
• Portal: A HiveAP that is directly connected to the wired LAN via Ethernet that provides
default MAC routes to mesh points within the Hive. This role is dynamically chosen. If
the wired link is unplugged, then the HiveAP can dynamically become a mesh point.
• Mesh Point: A HiveAP that is connected to the Hive via wireless backhaul links and
does not use a wired link for backhaul. This role is also dynamically chosen. If a wired
link is plugged in, the HiveAP dynamically becomes a portal, if permitted by the
configuration.
• Cooperative Control Signaling: The control-plane communication between HiveAPs
using Cooperative Control Protocols
Cooperative Control
By utilizing cooperative control, HiveAPs cooperate with neighboring HiveAPs to support
control functions such as radio resource management, Layer 2/3 roaming, client load
balancing, and wireless mesh networking, eliminating the need for a centralized
controller.
• AMRP (Aerohive Mobility Routing Protocol) – Provides HiveAPs with the ability to
perform automatic neighbor discovery, MAC-layer best-path forwarding through a
wireless mesh, dynamic and stateful rerouting of traffic in the event of a failure, and
predictive identity information and key distribution to neighboring HiveAPs. This
provides clients with fast/secure roaming capabilities between HiveAPs while
maintaining their authentication state, encryption keys, firewall sessions, and QoS
enforcement settings.
• ACSP (Aerohive Channel Selection Protocol) – Used by HiveAPs to analyze the RF
environment on each channel within a regulatory domain and to work in conjunction
with each other to determine the best channel and power settings for wireless access
and mesh. ACSP minimizes co-channel and adjacent channel interference in order
to provide optimized application performance.
• DNXP (Dynamic Network Extension Protocol) – Dynamically creates tunnels on an as-
needed basis between HiveAPs in different subnets, giving clients the ability to
seamlessly roam between subnets while preserving their IP address settings.
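The channel-selection behavior ACSP provides, each AP picking a channel its immediate neighbors are not using, is essentially distributed graph coloring. A minimal centralized sketch of that idea (the function and topology names are illustrative, not Aerohive's implementation):

```python
def assign_channels(neighbors: dict, channels: list) -> dict:
    """Greedy channel assignment: each AP avoids any channel already
    claimed by an immediate neighbor, falling back to the least-contended
    channel when every choice conflicts."""
    assignment = {}
    for ap in sorted(neighbors):          # deterministic order for the sketch
        in_use = {assignment[n] for n in neighbors[ap] if n in assignment}
        free = [ch for ch in channels if ch not in in_use]
        if free:
            assignment[ap] = free[0]
        else:
            assignment[ap] = min(channels, key=lambda ch: sum(
                assignment.get(n) == ch for n in neighbors[ap]))
    return assignment

# Three APs that all hear each other land on the three non-overlapping
# 2.4 GHz channels, minimizing co-channel interference.
topology = {"ap1": ["ap2", "ap3"], "ap2": ["ap1", "ap3"], "ap3": ["ap1", "ap2"]}
assert assign_channels(topology, [1, 6, 11]) == {"ap1": 1, "ap2": 6, "ap3": 11}
```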
Once the neighbor relationships have been established between HiveAPs in a Hive, they
will run cooperative control protocols across wired and wireless links to provide
fast/secure roaming, radio resource management, and resiliency. If HiveAPs discover
neighboring HiveAPs that are in a different subnet, as long as the HiveAPs are configured
with same hive name and hive shared secret settings, they will exchange IP information
with each other and establish communications over the routed network infrastructure to
provide cooperative control functionality across layer 3 boundaries. The beauty of
cooperative control protocols is that they do not need to be configured, greatly
decreasing the operational cost and complexity of deploying a modern wireless solution.
Fast/secure roaming is most often defined as roaming that occurs in just a few tens of
milliseconds. Fast/secure roaming becomes very important when using real-time
applications like voice and video, where an interruption in a connection can cause
dead air, pops, or even dropped sessions.
With traditional autonomous APs that exist without knowledge of each other, fast/secure
roaming using IEEE 802.1X/EAP for authentication is not possible. This is because during
authentication, the RADIUS server, wireless client, and AP exchange user authentication
information and derive encryption keys between themselves. If the wireless client moves
to another AP, the new AP does not have any of the keys that were created on the
previous AP, and so the wireless client will have to repeat the entire authentication and
key derivation process again. During this process, existing sessions on the client that are
time sensitive will be terminated, such as voice, video, or file transfers.
Aerohive Networks has solved the problem that exists with autonomous AP solutions using
AMRP. Whether connected via the wired LAN or wireless mesh, HiveAPs cooperate with
each other using AMRP to predictively exchange client authentication state, identity
information, and encryption key information with neighboring HiveAPs, allowing clients to
perform fast/secure roaming. The following diagram lists the steps taken by the HiveAPs
for fast/secure roaming.
Step 2 – The RADIUS server transfers the PMK to the HiveAP so that the client and
HiveAP can build an encrypted connection between each other.
Step 3 – Using AMRP, the HiveAP proactively distributes encryption keys, identity
information, SIP voice session state information, firewall, and QoS policy
information to all neighboring HiveAPs. This, along with the de facto standard
Opportunistic Key Caching (OKC), permits clients to roam between HiveAPs
without having to repeat the 802.1X/EAP authentication process, enabling
fast/secure roaming.
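The predictive key distribution described above can be sketched as a cache of master keys pushed to neighbors ahead of a roam. The class and method names below are illustrative assumptions, not HiveOS code; the point is that the roam-in check succeeds without a new RADIUS round trip:

```python
class HiveAP:
    """Sketch of AMRP-style predictive key distribution: after a full
    802.1X exchange, the serving AP pushes the client's master key to its
    neighbors, so a roam needs only the fast 4-way handshake."""
    def __init__(self, name: str):
        self.name = name
        self.neighbors = []
        self.pmk_cache = {}              # client -> pairwise master key

    def full_authentication(self, client: str, pmk: bytes):
        self.pmk_cache[client] = pmk
        for n in self.neighbors:         # predictive distribution
            n.pmk_cache[client] = pmk

    def roam_in(self, client: str) -> bool:
        # True -> fast roam (key cached); False -> full 802.1X/EAP needed
        return client in self.pmk_cache

ap1, ap2 = HiveAP("ap1"), HiveAP("ap2")
ap1.neighbors = [ap2]
ap1.full_authentication("laptop-42", b"\x01" * 32)
assert ap2.roam_in("laptop-42")          # roams without repeating 802.1X/EAP
assert not ap2.roam_in("unknown-client")
```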
Note: For security reasons, the key and identity information sent between HiveAPs is
encrypted with AES and is stored only in memory on the HiveAP. This way, the keys are
removed from the system with all user identity information when a HiveAP is powered off.
Furthermore, administrators do not have access to view the keys. These security measures
prevent the keys from being obtained if the wired network is analyzed or if a HiveAP is
stolen.
Along with the key information that is distributed among neighboring HiveAPs, AMRP also
distributes the user’s identity information so that HiveAPs can enforce the identity-based
firewall access policies and QoS settings as the user roams between HiveAPs.
When layer 3 roaming is enabled, HiveAPs can automatically discover their layer 3
neighbors (neighboring HiveAPs on different subnets) by scanning radio channels. If
HiveAPs are within radio range of each other, are in the same hive, have layer 3 roaming
enabled, and are in different IP networks, the HiveAPs will build layer 3 neighbor
relationships with each other over the routed Ethernet network. HiveAPs will then
distribute tunnel and client information to their layer 3 neighbors. This way, when the user
roams across layer 3 boundaries, the tunnels can be built without delay.
In situations where HiveAPs cannot discover each other automatically over the air,
possibly due to being on opposite sides of an RF obstacle, you can manually configure
layer 3 neighbors for HiveAPs using HiveManager.
The following diagram shows the basic steps performed by HiveAPs as clients roam within
their subnet and across subnet boundaries.
Step 1 – The client performs seamless, fast/secure layer 2 roaming within subnet A.
Step 2 – After the client successfully roams to HiveAP 2, HiveAP 2 will send an
encrypted control packet over the Ethernet infrastructure to HiveAP neighbors in
the neighboring subnet. The control packet contains, as a minimum, the client’s
identity, security and QoS information, SIP call state, and the client’s originating
subnet.
Step 3 – Because the client’s identity and key information, including SIP call state,
is proactively synchronized between neighboring HiveAPs, when the client roams
to HiveAP3, HiveAP3 has all the information it needs to enforce policies and to
tunnel permitted traffic, over the GRE tunnel, to a portal HiveAP in the client’s
original subnet. This behavior allows the client to maintain its IP address and
active sessions as it roams. Predictively, HiveAP3 forwards the wireless client’s
roaming information to HiveAP4 in anticipation of any further roaming.
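The roaming decision the steps above describe, forward locally on a layer 2 roam, tunnel home when sessions must survive a layer 3 roam, can be condensed into a small decision function. Everything here (function name, string return values) is an illustrative sketch, not the actual DNXP logic:

```python
def handle_roam(client_ip: str, home_subnet: str, new_subnet: str,
                has_active_sessions: bool) -> str:
    """Decide how a roaming client's traffic is handled; subnets are
    plain prefix strings (e.g. '10.1.0.0/16') for this sketch."""
    if home_subnet == new_subnet:
        return "local-forwarding"        # layer 2 roam, nothing special
    if has_active_sessions:
        # keep the client's IP valid by tunneling back to a portal at home
        return f"tunnel {client_ip} -> portal in {home_subnet}"
    # nothing to preserve: re-home the client in the new subnet immediately
    return f"reassociate in {new_subnet}"

assert handle_roam("10.1.0.7", "10.1.0.0/16", "10.1.0.0/16", True) == "local-forwarding"
assert handle_roam("10.1.0.7", "10.1.0.0/16", "10.2.0.0/16", True).startswith("tunnel")
assert handle_roam("10.1.0.7", "10.1.0.0/16", "10.2.0.0/16", False).startswith("reassociate")
```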
The ability for a client to maintain its IP, QoS, firewall, and security settings while roaming
across subnet boundaries ensures that client application sessions do not get dropped
while roaming. Based on a configurable idle time or number of packets per minute,
HiveAPs can be set to disassociate these wireless clients so that they can reconnect and
receive an IP address in their new subnet allowing traffic to be locally forwarded. If a
client roams across subnet boundaries when it does not have any active sessions in
process, it can be immediately transitioned to the new subnet, eliminating the need to
tunnel traffic.
In summary, with HiveAPs and cooperative control, wireless clients have the ability to
perform fast/secure roaming between HiveAPs within the same or between different
subnets without impacting client data or voice connections.
For each radio in access mode, ACSP will select a channel and power level to maximize
coverage while minimizing interference with its neighbors. This is accomplished by
ensuring that HiveAPs use different channels than their immediate neighbors, and that
they adjust their power to minimize co-channel interference with other, more distant,
HiveAPs. For radios in backhaul (mesh) mode, ACSP ensures that they use the same
channel throughout the mesh, while still minimizing interference with the access links.
To maintain optimal performance, ACSP constantly checks the radio power settings and
can automatically decrease radio power based on communication from neighboring
APs to give the maximum coverage possible while minimizing interference. This behavior
is highly beneficial in a failure state or when an AP is taken off line, where neighboring
APs can automatically adjust their power to the optimum state, essentially taking into
account the missing AP. ACSP can also be scheduled to recalibrate the radio channels
during a configurable daily time window and when a specified number of clients are
associated. This helps ensure that radio channels do not switch while the WLAN is being
utilized, preventing a disruption of service for wireless clients.
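The recalibration gate described above, change channels only inside the configured daily window and only when few enough clients would be disturbed, amounts to a simple predicate. The function below is a sketch under those two conditions (names and parameters are assumptions, not HiveOS configuration syntax):

```python
def may_recalibrate(hour: int, window: tuple, clients: int,
                    max_clients: int) -> bool:
    """Allow a channel change only inside the configured daily window
    and when no more than `max_clients` clients are associated."""
    start, end = window
    if start <= end:
        in_window = start <= hour < end
    else:                                 # window wraps past midnight
        in_window = hour >= start or hour < end
    return in_window and clients <= max_clients

assert may_recalibrate(3, (2, 5), clients=0, max_clients=2)       # quiet overnight window
assert not may_recalibrate(14, (2, 5), clients=0, max_clients=2)  # outside the window
assert not may_recalibrate(3, (2, 5), clients=10, max_clients=2)  # too many clients
assert may_recalibrate(23, (22, 4), clients=1, max_clients=2)     # window wraps midnight
```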
HiveAPs can make decisions to offload stations from one radio to another within the
same HiveAP (Bandsteering) based on client capabilities and/or to offload stations to a
HiveAP that is better suited to handle the load in the immediate area. Transitioning
clients between radios and between APs is done without breaking the application
session.
The user profile attribute assigned to a user stays with the user as he/she roams
throughout the WLAN. Administrators can configure HiveAPs to use the same set of
policies for a user profile throughout a WLAN or they can make adjustments based on
specific HiveAPs. For example, clients associating to the same SSID in different locations
can be assigned to different VLANs and subnets. As a user roams between the two
locations, the attribute is used by the HiveAPs to identify the set of policies for the client
at the new location including: VLAN, firewall policy, QoS policy, and layer 3 roaming
policy. Depending on the policies in place for each location, the client’s traffic can be
tunneled back to a HiveAP in the client’s original location (if it’s on a different subnet),
which is useful for seamlessly maintaining active IP sessions, or the client can be forced to
obtain new IP settings for the VLAN in the new location.
Advanced QoS techniques are required in order to ensure optimal performance for high-
priority applications, such as voice and video, without adversely affecting the
performance of lower-priority traffic such as web-based applications and email, when
traffic is sent from the wired network to the WLAN. Each HiveAP has a sophisticated and
granular QoS engine, which provides linear and unlimited scalability of QoS services
across the WLAN system.
Most modern WLAN systems support the Wi-Fi Alliance WMM certification for QoS, which
was based on the IEEE 802.11e amendment. With WMM, traffic can be classified into
one of four access categories (ACs), which are bound to queues, for transmission onto
the wireless network. Higher-priority ACs have different (better) arbitration values than
lower-priority ACs, so higher-priority traffic experiences less delay in transitioning from
queue to wireless medium.
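The four access categories and the priority-to-AC binding come from the standard 802.11e/WMM user-priority mapping, which can be written out directly (the function name is illustrative; the mapping itself is the standard one):

```python
def wmm_access_category(priority: int) -> str:
    """Map an IEEE 802.1p user priority (0-7) to its WMM access
    category, per the standard 802.11e UP-to-AC mapping."""
    mapping = {
        1: "AC_BK", 2: "AC_BK",   # background
        0: "AC_BE", 3: "AC_BE",   # best effort
        4: "AC_VI", 5: "AC_VI",   # video
        6: "AC_VO", 7: "AC_VO",   # voice
    }
    return mapping[priority]

assert wmm_access_category(7) == "AC_VO"   # voice gets the best arbitration
assert wmm_access_category(0) == "AC_BE"   # default traffic is best effort
```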
Though WMM does a reasonable job of ensuring that each traffic type gets an
appropriate amount of access to the wireless medium, if voice and video (high-priority)
traffic are being used on the WLAN, it is likely that there will be momentary periods of
congestion for lower priority queues. When queues get full, even momentarily, packets
get dropped.
Dropped packets may not sound like a big deal, especially if they use TCP (since they will
be retransmitted); however, because TCP uses a built-in congestion avoidance algorithm
that cuts TCP window sizes in half when a packet is dropped, TCP performance can be
severely affected by L2 frames that are dropped. Though applications classified in the
higher-priority queues may not be affected, applications in lower-priority queues
including email, database, accounting, inventory, workforce automation, or SaaS
applications may suffer significantly.
In order to provide highly-efficient voice and video transmission and to alleviate the
problems that occur when packets are dropped because of momentary congestion of
WMM queues, Aerohive has augmented WMM by implementing advanced
classification, policing (rate limiting), queuing, and packet scheduling mechanisms within
each HiveAP.
The following diagram shows a simple example and workflow of the QoS engines within a
HiveAP.
Diagram 6 shows an example of how traffic arriving from the wired LAN is processed by a
HiveAP to ensure highly-effective QoS to the WLAN.
Step 2 – When packets arrive from an Ethernet uplink, a wireless uplink, or an access
connection, the traffic is assigned to its appropriate user profile, which defines the QoS
policy.
Step 3 – The QoS packet classifier categorizes traffic into eight queues per user based on
QoS classification policies. Classification policies can be configured to map traffic to
queues based on MAC OUI (Organization Unique Identifier), network service, SSID and
interface, or priority markings on incoming packets using IEEE 802.1p or DSCP (DiffServ
Code Point).
Step 4 – The QoS traffic policer can then enforce QoS policy by performing rate-limiting
and marking. Traffic can be rate-limited per user profile, per user, and per user queue.
Step 5 – The marker is responsible for marking packets with DSCP and/or 802.11e for traffic
destined to the WLAN and with DSCP and/or 802.1p for traffic destined for the wired LAN.
Step 6 – Traffic to the WLAN is queued in eight queues per user and waits to be scheduled
for transmission by the scheduler.
Step 7 – The QoS packet scheduling engine uses two scheduling types for determining
how packets are sent from the user queues to the WMM hardware queues for
transmission onto the wireless medium.
• Strict priority – Packets in queues that are scheduled with strict priority are sent
ahead of packets in all other queues. Strict priority is typically configured for user
queues that are assigned to low latency traffic such as voice.
• Weighted round robin – The scheduler can allocate the amount of airtime or
bandwidth that can be transmitted by the user of a wireless client device based
on weights specified for a user profile, individual users within a user profile, and
the eight queues per user. Based on weighted preferences, the scheduler moves
packets from the individual user queues to the appropriate WMM access
categories for transmission onto the wireless network.
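The two scheduling types above can be sketched together: strict-priority queues drain first, then a weighted round robin pass distributes the remaining budget. This is an illustrative model only (queue names, weights, and the packet "budget" are assumptions, not HiveOS internals):

```python
from collections import deque

def schedule(queues: dict, weights: dict, strict: list, budget: int) -> list:
    """Drain per-user queues into a transmission order: strict-priority
    queues first, then weighted round robin over the rest."""
    order = []
    for q in strict:                          # e.g. voice: always sent first
        while queues[q] and len(order) < budget:
            order.append(queues[q].popleft())
    while len(order) < budget and any(queues[q] for q in weights):
        for q, w in weights.items():          # WRR: up to `w` packets per pass
            for _ in range(w):
                if queues[q] and len(order) < budget:
                    order.append(queues[q].popleft())
    return order

queues = {"voice": deque(["v1"]),
          "video": deque(["d1", "d2", "d3"]),
          "mail":  deque(["m1", "m2"])}
out = schedule(queues, weights={"video": 2, "mail": 1}, strict=["voice"], budget=5)
assert out == ["v1", "d1", "d2", "m1", "d3"]   # voice first, then 2:1 video/mail
```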
Because a QoS packet scheduling engine is built into every HiveAP, HiveAPs have the
ability to closely monitor the availability of the WMM access categories and to instantly
react to changing network conditions. The QoS packet scheduling engine only transmits
to WMM access categories when they are available, queuing packets in eight queues
per user in the meantime. This behavior prevents dropped packets and jitter, which
adversely affect time-sensitive applications such as voice. It also prevents TCP
performance degradation caused by contention window back-off algorithms that are
invoked when TCP packets are dropped.
Step 8 – Finally, WMM functionality transmits L2 frames from its four access categories
based on the availability of the wireless medium. Packets from higher-priority access
categories are transmitted with a smaller random back-off window to allow transmission
onto the wireless medium with less delay.
The benefits of Dynamic Airtime Scheduling are compelling both to the IT organization
and to the users of the WLAN, as it enables clients connected at higher data rates, in a
mixed data rate environment, to achieve up to 10 times more throughput than they
would get with traditional WLAN infrastructures - without penalizing low-speed clients.
This means that users see faster download times and improved application performance,
and it means that low-speed clients don’t destroy the performance of the WLAN for the
rest of the users. This allows IT to implement a phased upgrade to 802.11n and
immediately reap the benefits of the new 802.11n infrastructure, even if it takes years to
upgrade all of the clients. And, because a user connecting at the fringe of the WLAN
can no longer consume all of the airtime, the network impact of a bad client or a weak
coverage area is diminished. This allows IT to reduce their infrastructure investment,
saving IT time and increasing user satisfaction.
With bandwidth-based QoS scheduling, the AP calculates the bandwidth used by clients
based on the size and number of frames transmitted to or from a client. Bandwidth-
based scheduling does not take into account the time it takes for a frame to be
transmitted over the air. Clients connected at different data rates take different
amounts of airtime to transmit the same amount of data. By enabling Dynamic Airtime
Scheduling, the scheduler allocates airtime, instead of bandwidth, to each type of user,
user profile, and user queue, which can be given weighted preferences based on QoS
policy settings. When traffic is transmitted to or from a client, the HiveAP calculates the
airtime utilization based on intricate knowledge of the clients, user queues, per-packet
client data rates, and frame transmission times, ensuring that the appropriate amount of
airtime is provided to clients based on their QoS policy settings.
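The arithmetic behind airtime-based scheduling is worth making explicit: the same frame costs a slow client far more airtime than a fast one. A minimal sketch (PHY preamble and contention overhead are deliberately ignored, so the numbers are idealized):

```python
def airtime_ms(frame_bytes: int, data_rate_mbps: float) -> float:
    """Airtime consumed by one frame: payload bits divided by the
    client's PHY data rate (protocol overhead ignored in this sketch)."""
    return frame_bytes * 8 / (data_rate_mbps * 1000)

# A 1500-byte frame costs a 6 Mbps client ~33x the airtime of a 200 Mbps
# 802.11n client -- which is why bandwidth-fair scheduling lets slow
# clients starve fast ones, and airtime-fair scheduling does not.
slow = airtime_ms(1500, 6)       # 2.0 ms on the air
fast = airtime_ms(1500, 200)     # 0.06 ms on the air
assert round(slow / fast) == 33
```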
The following test, which is an excerpt from the whitepaper, simulates a typical WLAN’s
transition to 802.11n – where 802.11n APs are servicing a mixture of 802.11n clients and
legacy (a/b/g) clients. These tests show how Dynamic Airtime Scheduling prevents a
high performance 802.11n WLAN from being hampered by clients connected at lower
data rates – even if some of those slow clients are slow 802.11n clients. Depending on
distance from the AP, position of the antenna, or even RF interference, the data rate of a
connected 802.11n client can range from 6.5 to 300 Mbps (depending on supported
features), so some 802.11n clients will be slower than other 802.11n clients. It is even
possible that some 802.11n clients could actually be slower than 802.11a/g clients.
The graph to the right shows the same test with a HiveAP using Aerohive’s Dynamic
Airtime Scheduling. The transfer time at the 270 Mbps data rate is approximately 10
seconds – about 10 times faster than the 110 seconds seen in the previous test. Likewise,
the rest of the transfer times improved significantly. The 802.11n client at 108 Mbps was
over 6 times faster, the 802.11n client at 54 Mbps was over 3 times faster, the 802.11a
client at 54 Mbps was 2.5 times faster, the 802.11a client at 12 Mbps was 30% faster, and
the 802.11n client at 6 Mbps decreased slightly (10%).
If, for any reason, SLAs are not being met, actions can be taken, such as logging or use
of the Airtime Boost feature. Airtime Boost is a feature that works in concert with
Dynamic Airtime Scheduling that provides additional airtime to a client that is not
meeting its SLA.
The ability to assign and guarantee specific levels of throughput to individual users or
groups of users within a Wi-Fi network is another significant step, pioneered by Aerohive
Networks, toward making a Wi-Fi network as deterministic as Ethernet.
• Encrypted and/or Hashed Passwords and Shared Keys in Command Line Interface
If the bootstrap configuration is not set or if someone knows the admin name and
password and is able to gain access to the HiveOS command line interface, all
passwords and shared keys are hashed and/or encrypted so they cannot be
obtained.
• HiveAP RADIUS Users Stored in DRAM Are Removed When HiveAP is Powered Off
If the HiveAP is configured as a RADIUS server and uses a local user database, this
database can optionally be stored in DRAM, which is cleared when a HiveAP is
powered off or rebooted.
• Bootstrap Configuration
If the hardware reset button functionality is enabled, you can set a bootstrap
configuration on a HiveAP so that when the HiveAP boots, after a hardware reset, it
will be configured with:
– A strong admin name and password for serial console or SSH access;
– disabled wireless interfaces;
– disabled console access;
– and optionally disabled Ethernet interfaces.
In the unlikely and unfortunate event that a HiveAP is ever stolen, Aerohive has provided
these security mechanisms to prevent loss of secure data and to prevent the misuse of
the HiveAP.
Along with searching for rogue APs over the airwaves, HiveAPs can be configured to use
on-network rogue AP detection functionality to probe VLANs using cooperative control
messages to detect if rogue APs are physically attached to the corporate switched
network. Once rogue APs are found, the administrator can use rogue mitigation
functionality (e.g. deauthentication) to prevent clients from associating with the rogue
APs. The administrator can then use HiveManager to locate the position of rogue APs on
a topology map so they can be removed.
The following diagram shows a list of rogue APs in HiveManager, Aerohive’s WLAN
management system. In the picture, you can see the detailed information displayed
about the rogue APs along with a link to a topology map displaying the location of the
rogue AP on the map if it was discovered by at least three HiveAPs.
Monitoring one or more clients in real-time within HiveManager is a snap with the Client
Monitor tool. You simply select the client you want to monitor, select one or more APs to
monitor in regard to this client, and start the process. Many clients can be monitored at
once, and everything that happens with monitored clients is logged to the display and
can be exported.
Aerohive has integrated HiveOS with Wireshark, one of the industry’s leading protocol
analyzers, for the purposes of remote troubleshooting and performance and security
analysis. Integration with other protocol analyzers is on the way, and with this kind of
integration, administrators can now see what the AP sees without disconnecting clients.
Each radio can capture data and send it to the analyzer (over the air or over the
Ethernet) while clients continue to operate normally. This revolutionary step enables
quick and easy monitoring across a distributed enterprise, or even large campus
installations, without an on-site visit.
HiveAPs apply security policies to traffic based on the identity of a user or by the SSID
being used. Security policies can enforce MAC address filters, MAC (layer 2) firewall
policies, IP (layer-3/layer-4) stateful inspection firewall polices, and can prevent a
number of wireless MAC-layer and IP-layer denial of service (DoS) attacks. Because
each HiveAP is responsible for processing the security policies for its own traffic, the
distributed processing power of all HiveAPs in the WLAN system is harnessed. Given the
high-end processing power of each HiveAP, collective processing power far exceeds a
centralized, controller-based model, allowing virtually unlimited scalability.
HiveAPs decrypt wireless frames at the network edge (before they are transmitted on to
the wired network), making it possible to use the security systems currently in place in the
wired network to enforce security policies on wireless traffic as well. This way, wireless
and wired traffic alike can be forced to flow through the corporate security systems,
including firewalls, antivirus gateways, intrusion detection and prevention systems, and
network access control (NAC) devices.
HiveAPs can authenticate wireless users using a local user database on the HiveAP or
with external domain authentication to Directory Services (Active Directory, eDirectory,
OpenDirectory, LDAP, or LDAP/S). Because the 802.1X/EAP key processing and
distribution is performed in HiveAPs for wireless clients, these processes are offloaded from
the corporate RADIUS servers, preserving their performance. With the ability to use any
HiveAP as a RADIUS server, network administrators have the flexibility to design fail-safe
RADIUS server implementations anywhere within a WLAN. This is especially useful in the
branch office, where the 802.1X/EAP authentication process can occur locally on a
HiveAP without the need to traverse a WAN link.
Administrators can customize the captive web portal by designing their own web pages.
After a user passes the registration process, their traffic is enforced by the HiveAP based
on the user profile assigned to the SSID or based on a user profile assigned by an
attribute returned from GuestManager or a RADIUS server.
Identity-Based Tunnels
Along with firewall policies, QoS policies, and VLAN assignment, when a client associates
with an SSID and gets assigned to a user profile, the client’s traffic can be directed to be
tunneled to a HiveAP in a pre-defined location or subnet.
The following diagram provides a common use-case for identity-based tunnels. An SSID
called Guest-WiFi is configured on HiveAPs in the internal network. When a guest client
associates with the SSID, they are assigned to the Guest-DMZ user profile that has a policy
to tunnel all the guest client’s frames (layer 2 traffic) via L2-GRE to one of the HiveAPs in
the DMZ. The client obtains its IP address from a DHCP server accessible from the DMZ,
which can optionally be running on a DMZ HiveAP.
When a guest attempts to access the network, the guest traffic is restricted by the
captive web portal on the local HiveAP until the guest has authenticated with a user
account created using GuestManager or a RADIUS server, or has completed a
web-based self-registration form. After being granted access, the local HiveAP enforces
the guest-specific firewall policy, QoS policy, and tunnel policy configured in the Guest-
DMZ user profile, and the permitted traffic is directed over an L2-GRE tunnel to a HiveAP
in the DMZ, which permits access to the Internet. The guest traffic never appears on the
internal network outside of the L2-GRE tunnel; the client is essentially an extended
member of the remote network.
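The L2-GRE encapsulation described above can be illustrated with a minimal sketch. This is not Aerohive's implementation; it simply shows how a full layer 2 frame is wrapped in a basic GRE header so it can be carried across the internal network to the DMZ. The function names are hypothetical.

```python
import struct

# GRE protocol type for "Transparent Ethernet Bridging": carrying a
# complete layer 2 frame inside GRE uses EtherType 0x6558.
TEB = 0x6558

def gre_encap(l2_frame: bytes) -> bytes:
    """Wrap a raw Ethernet frame in a minimal 4-byte GRE header
    (no checksum, key, or sequence flags set)."""
    gre_header = struct.pack("!HH", 0x0000, TEB)  # flags+version, protocol
    return gre_header + l2_frame

def gre_decap(packet: bytes) -> bytes:
    """Strip the GRE header, recovering the original layer 2 frame."""
    _flags_ver, proto = struct.unpack("!HH", packet[:4])
    assert proto == TEB, "not an encapsulated Ethernet frame"
    return packet[4:]
```

Because the entire frame, MAC header included, survives the round trip, the DMZ HiveAP can place the guest's traffic onto its local segment exactly as received.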
While this diagram shows an example of using identity-based tunnels for guest access,
identity-based tunnels can be used for a variety of other scenarios as well. Using identity-
based tunnels, network resources can be accessed from any location in the wireless
network as if they were physically there. For extra scalability, HiveAPs can use round
robin to build tunnels to a set of HiveAPs at the destination, and tunnels only exist while
they are in use by clients.
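The round-robin tunnel behavior mentioned above can be sketched as follows. This is an illustrative model only; the class and method names are assumptions, not Aerohive's API. It captures the two properties the text describes: destinations are chosen in rotation, and a tunnel exists only while clients are using it.

```python
import itertools

class TunnelSelector:
    """Round-robin selection of destination HiveAPs for identity-based
    tunnels; tunnels are tracked only while clients use them."""

    def __init__(self, dmz_hiveaps):
        self._cycle = itertools.cycle(dmz_hiveaps)
        self.active_tunnels = {}   # destination -> number of attached clients

    def attach_client(self):
        """Pick the next destination in rotation and count the client."""
        dest = next(self._cycle)
        self.active_tunnels[dest] = self.active_tunnels.get(dest, 0) + 1
        return dest

    def detach_client(self, dest):
        """Release a client; tear the tunnel down when it goes idle."""
        self.active_tunnels[dest] -= 1
        if self.active_tunnels[dest] == 0:
            del self.active_tunnels[dest]
```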
The Aerohive guest access solution provides the flexibility of using features integrated into
each HiveAP while allowing seamless interoperability with third-party guest access
solutions.
Even more difficult is the fact that many legacy and SOHO-class wireless devices still
used in the enterprise do not support 802.1X/EAP or the latest WPA2 standard with
Opportunistic Key Caching (OKC), which is required for fast/secure roaming
between APs. The next best option has traditionally been to use a preshared key (PSK)
for these devices; however, classic PSK trades away many of the advantages of 802.1X/EAP,
such as the ability to revoke keys for wireless devices that are lost, stolen, or
compromised, and the extra security of having unique keys per user or client device.
To draw on the strengths of both preshared key and 802.1X/EAP mechanisms without
incurring the significant shortcomings of either, Aerohive has introduced a new approach
to WLAN authentication: Private PSKs (PPSKs). PPSKs are unique preshared keys created
for individual users on the same SSID. They offer the key uniqueness and policy flexibility
that 802.1X/EAP provides with the deployment simplicity of preshared keys.
The following diagram is a simple example showing a WLAN with traditional PSKs versus
that of a WLAN using Aerohive’s PPSK functionality. With the traditional approach, all of
the client devices use the same PSK and all receive the same access rights because the
clients cannot be distinguished from each other.
On the other hand, with PPSK, as shown on the right, every user is assigned his/her own
unique or “private” PSK, which can be manually created or automatically generated by
HiveManager and sent to the user via email, printout, or SMS. Every PPSK can also be
used to identify the user’s access policy, including their VLAN, firewall policy, QoS policy,
tunnel policy, access schedule, and key validity period. Because the keys are unique,
keys from one user cannot be used to derive keys for other users. Furthermore, if a
device is lost, stolen, or compromised, the individual user’s key can be revoked from the
network, preventing unauthorized access from any wireless device using that key. As for
the client users, the configuration is the same as using a standard PSK.
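The per-user key property described above can be sketched with the standard WPA/WPA2 passphrase-to-key derivation, in which the pairwise master key (PMK) is PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID, for 4096 iterations. The `PPSKTable` class around it is a hypothetical illustration of issuing and revoking private PSKs, not Aerohive's implementation.

```python
import hashlib
import secrets

def pmk_from_psk(passphrase: str, ssid: str) -> bytes:
    """Standard WPA/WPA2 derivation of the 256-bit pairwise master key
    from a passphrase and SSID (PBKDF2-HMAC-SHA1, 4096 iterations)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

class PPSKTable:
    """Illustrative per-user private PSKs on a single SSID."""

    def __init__(self, ssid):
        self.ssid = ssid
        self.keys = {}   # PMK -> (user, user profile)

    def issue(self, user, user_profile):
        passphrase = secrets.token_urlsafe(12)   # auto-generated PPSK
        pmk = pmk_from_psk(passphrase, self.ssid)
        self.keys[pmk] = (user, user_profile)
        return passphrase        # delivered to the user by email/print/SMS

    def revoke(self, passphrase):
        """Remove one user's key without affecting any other user."""
        self.keys.pop(pmk_from_psk(passphrase, self.ssid), None)
```

Because each passphrase derives a distinct PMK, revoking one user's key leaves every other user's access untouched, which is exactly the property classic shared PSKs lack.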
Aerohive’s wireless VPN solution has been designed from the ground up to take
advantage of layer 2 (L2) and layer 3 (L3) tunneling, making it easier to deploy a global
VPN. Typical L3 IPsec VPN solutions require planning and implementation of unique IP
subnets for the remote offices, DHCP relays/IP helpers on remote routers, and IP routing,
access control lists (ACLs), or firewall policies to send traffic through the VPN.
Aerohive’s wireless VPN does not have these complexities because it uses GRE to
encapsulate layer 2 traffic, and IPSec to authenticate and encrypt the traffic so that it
can be securely passed through the Internet. To the client, it will appear as though they
are physically connected to the L2-switched network at the corporate office. For the IT
administrator, planning and implementation complexities of typical L3 VPN solutions are
alleviated because the same subnet can be shared among many remote sites.
To deploy an Aerohive wireless VPN solution, HiveAPs configured as VPN servers are
installed in the corporate network, typically in a DMZ. HiveAPs configured as VPN clients
are installed at remote sites, where they can obtain IP address settings via DHCP and
automatically establish an L2 VPN tunnel to the HiveAP acting as a VPN server at
corporate headquarters. From that point on, traffic from corporate devices at the
remote site, including DHCP, is encapsulated in GRE and sent encrypted through a VPN
tunnel to the HiveAP VPN servers at headquarters. Traffic is then decrypted,
decapsulated, and transmitted onto the corporate LAN with its full L2 MAC header and
VLAN tag intact, just as if the traffic had originated from the corporate office. This gives IT
administrators the ability to allocate and share corporate IP subnets among many
remote sites without having to create unique IP subnets for each office. It also alleviates
the need to configure IP routing at the corporate site or branch offices to route traffic
through the VPN. HiveAPs use MAC layer routing to determine whether traffic should be
forwarded locally or sent through the VPN.
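The MAC-layer forwarding decision a remote-site HiveAP makes can be sketched as a simple learned-address table: frames for destinations seen on local interfaces stay local, everything else goes through the VPN. The class and method names below are assumptions for illustration, not Aerohive's implementation.

```python
class MacForwarder:
    """Illustrative MAC-layer routing decision for a branch HiveAP."""

    def __init__(self):
        self.local_macs = set()

    def learn(self, src_mac):
        """Record a MAC address seen on a local branch interface."""
        self.local_macs.add(src_mac)

    def route(self, dst_mac):
        """Return 'local' if the destination was learned locally,
        otherwise 'vpn' to send the frame through the tunnel."""
        return "local" if dst_mac in self.local_macs else "vpn"
```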
An example of Aerohive’s wireless VPN solution is displayed below, which shows the
network components of a branch and a teleworker’s home office. Each remote site
establishes L2 VPN connections to corporate headquarters, which securely extends the
corporate network to corporate workers in remote sites. Corporate devices obtain IP
addressing through the VPN as if they were physically on the corporate network. In branch
offices, wireless devices access the corporate network via a secure SSID profile available
on branch HiveAPs. Likewise, corporate wired and wireless devices have direct access
to the branch network through a bridged Ethernet connection on a branch HiveAP.
Wireless Mesh
Using cooperative control protocols that operate over the wired and wireless network
segments, HiveAPs can establish wireless mesh connections with neighboring hive
members. Because HiveAPs have two radios – one that supports 2.4 GHz channels and
the other that supports 5 GHz channels – the administrator has the option to specify the
use of one radio for wireless access and the other for wireless mesh.
Wireless mesh can be used where wired connectivity is not feasible or is difficult to
deploy, such as in historic buildings or stairwells where wireless access is required for
VoWiFi solutions or location-based services. Wireless mesh can also be used where a
network needs to be rapidly deployed, such as for a conference or a disaster recovery
situation. Even if wired connectivity exists, wireless mesh can be used to augment the
wired network. This gives HiveAPs extra resiliency by enabling them to route around
failures such as an Ethernet link that has been accidentally disconnected or an
access switch that has failed or been powered down.
All a HiveAP needs is power, and the cooperative control protocol suite does the rest.
The HiveAPs automatically build wireless mesh connections with each other to provide
wireless coverage that is not limited to Ethernet’s 100-meter maximum twisted pair length.
Over the wireless mesh, Aerohive’s cooperative control protocols are used to provide
best path forwarding, fast/secure roaming, optimal radio channel and power selection
for wireless connections, and high availability with dynamic and stateful re-routing of
traffic in the event of a failure.
Because cooperative control protocols have been designed to scale to support very
large wireless mesh networks, they prevent flooding by limiting the scope of broadcasts
for the distribution of routes and roaming cache information. This, in combination with
QoS, DoS prevention, and firewall policy enforcement at the HiveAP, keeps unnecessary
traffic off the mesh, ensuring optimized WLAN performance through the mesh.
Another powerful capability delivered by Aerohive’s mesh technology is the ability to use
the mesh to bridge between two LANs. By configuring an Ethernet interface on a HiveAP
to function as a wireless bridge, the HiveAP will learn the MAC address of devices on a
local LAN segment and distribute that information throughout the Hive to allow traffic to
be forwarded to and from those devices over the wireless network and to the primary
wired LAN. This is especially useful for connecting devices such as video surveillance
cameras that are in a building, parking lot, or outdoor area where running Ethernet
cabling is too costly, or not possible, but power is available.
In order to determine the best paths through a network, HiveAPs run AMRP over both
wired and wireless mesh connections. This allows the routing algorithms to determine the
least path cost based on a number of metrics, including: data rates, hops, and
interference. If a wired or wireless uplink fails, or interference affects the wireless
performance, a better route can be selected and propagated through the WLAN. This
allows HiveAPs to dynamically select a new best path for seamless re-routing and
forwarding of traffic.
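The best-path selection described above can be sketched as a shortest-path computation over link costs. AMRP's actual metric and messaging are proprietary; the sketch below simply uses Dijkstra's algorithm, with each link cost standing in for an AMRP-style metric combining data rate, hop count, and interference. Removing a failed link and recomputing models the re-routing behavior.

```python
import heapq

def best_path(links, src, dst):
    """Least-cost path over a graph of HiveAP links.
    `links` maps node -> {neighbor: cost}. Returns the node list
    from src to dst."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue   # stale heap entry
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

If the low-cost link fails, dropping it from `links` and calling `best_path` again yields the next-best route, which is the essence of the dynamic re-routing the text describes.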
To support large-scale WLAN installations, such as large corporate campuses, AMRP has
been designed to limit the messages and routing information within self-contained areas.
This limits the number of route table entries that a single HiveAP needs to maintain. The
following diagram highlights the differences between the data forwarding paths that are
observed with the best path forwarding of Aerohive’s cooperative control architecture
versus that found in a typical controller-based architecture.
In a typical controller-based architecture, traffic between APs must first be tunneled to
a controller, which may be in a different location. Because of the indirect paths and
round-trip times between the APs and the controllers, extra latency and jitter are
introduced, which can have adverse effects on WLAN performance and voice quality.
This is especially problematic if the
path to the controller or the controller itself is heavily utilized. In contrast, the Aerohive
cooperative control architecture using AMRP allows for best path forwarding between
devices over the LAN and over wireless mesh, preventing extra latency and jitter as
traffic passes between devices. This is essential for achieving high performance and
exceptional voice quality.
High Availability
The Aerohive Networks cooperative control architecture is a perfect fit for organizations
that require mission-critical reliability with no complex planning and minimal budget.
Aerohive’s high availability features come standard within the HiveAPs and provide
many levels of resiliency and redundancy.
Using Smart PoE, if a HiveAP needs more power than is currently available, it first
disables its redundant Eth1 interface to free up some power. If it needs more power, it
switches its radios from 3x3 to 2x3, which requires less power still. In rare cases when
further power conservation is necessary, the HiveAP reduces the speed on its Eth0
interface from 10/100/1000 Mbps to 10/100 Mbps.
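The Smart PoE step-down sequence can be sketched as an ordered list of conservation actions applied until the power budget fits. The per-step savings figure and the function names are invented for illustration; they are not Aerohive specifications.

```python
# Ordered power-conservation steps, mirroring the sequence above.
STEPS = [
    ("disable_eth1",       "disable the redundant Eth1 interface"),
    ("radios_2x3",         "switch radios from 3x3 to 2x3"),
    ("eth0_fast_ethernet", "drop Eth0 from 10/100/1000 to 10/100 Mbps"),
]

def conserve(available_w, required_w, savings_w=2.0):
    """Apply steps in order until required power fits the available
    budget; returns the list of actions taken. The 2 W per-step
    savings is an assumed placeholder value."""
    actions = []
    for action, _desc in STEPS:
        if required_w <= available_w:
            break
        actions.append(action)
        required_w -= savings_w
    return actions
```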
For extra power and resiliency, both gigabit Ethernet ports on the HiveAP 340 have been
equipped with Smart PoE technology, which allows both ports to draw power at the
same time. Even if both of the links are connected to legacy PoE (802.3af) switches, the
HiveAPs can operate at full capacity.
The following diagram shows a HiveAP connected to two different switches, having its
eth0 and eth1 interfaces configured as redundant. In the first case, the traffic is flowing
through eth0, and power is obtained from both eth0 and eth1 using Smart PoE. In the
second case, the switch fails or the eth0 link is pulled. The HiveAP draws its full power
from eth1, and the data traffic also continues out eth1. Failover between redundant
interfaces occurs in less than one second without any interruption of service.
The resiliency inherent in HiveAPs, provided by the cooperative control protocol suite,
allows the network to continue to operate even in the event of multiple inline failures.
Because there is no central control point, no single device failure can bring down an
entire wireless network. The diagram below shows
HiveAPs and the traffic from wireless clients when all HiveAPs are up and operational,
and then when network failures occur and HiveAPs are taken offline.
Because HiveManager is not in the data path, HiveAPs continue to function with their
entire feature set even if the management platform is offline. Without the overhead of
control and data forwarding that exists in controller-based management solutions, the
HiveManager platform scales to support the management and monitoring of thousands
of HiveAPs from a single console.
Though each HiveAP can be configured using a robust command line interface (CLI), for
anything more than a handful of HiveAPs it is recommended that the HiveManager
platform be used. HiveManager simplifies management and monitoring of HiveAPs using a
combination of topology views and floor plans, list views with filter and sort capabilities,
and WLAN policies which provide a unified configuration for all non-device-specific
configuration options.
The following picture shows an example of the monitor list view for HiveAPs. At a glance,
you can see the operational status, AP types, software versions, number of clients
associated, IP addresses, and power/channel selections to name just a few.
Versions of HiveManager
HiveManager is available in several versions, including 1U standard and 2U high-capacity
appliances, a virtual appliance (VMware virtual machine), and a software-as-a-
service (SaaS) offering called HiveManager Online. Each mode of delivery has its own
unique benefits, and there are two operational modes: Express and Enterprise.
HiveManager Express has a streamlined graphical user interface (GUI) for those
organizations that prioritize ease of use above most other features and that have a
uniform company-wide access policy. HiveManager Enterprise is a full-featured
implementation of HiveManager, offering a tremendous level of configuration and
monitoring flexibility and sophistication.
HiveAPs can be located anywhere in the network as long as they have the ability to
reach the HiveManager via an IP address. HiveAPs can even communicate to the
HiveManager through NAT (Network Address Translation) or NAPT (Network Address Port
Translation). Administrators can access the HiveManager GUI from a web browser, and
the GUI is based on HTML, AJAX, and Flash, allowing administrators to experience a real-
time management platform securely and efficiently from any PC.
On HiveManager 1U and 2U appliances, there are both MGT and LAN interfaces,
allowing the separation of HiveManager administration from the management of
HiveAPs. By default, the MGT interface is used for both the HiveManager administration
and HiveAP management. However, the administrator can dedicate the LAN interface
for HiveAP management and use the MGT interface solely for HiveManager
administration.
HiveAPs then use CAPWAP to contact the HiveManager, authenticate, and then build
an encrypted tunnel. HiveManager then displays a list of the discovered HiveAPs. The
administrator simply assigns WLAN Policies to HiveAPs and accepts them as devices to
manage. After that, the administrator can use one-button configuration updates to
send the configuration to one or more HiveAPs, or to schedule the configuration updates
for a later time. Likewise, if the operating system needs updating, the administrator can
select one or more HiveAPs to immediately send or schedule operating system updates.
Auto-Provisioning
By default, when HiveAPs locate the HiveManager, they will be displayed in
HiveManager in the New HiveAPs Automatically Discovered list. This gives the
administrator control of when and how a HiveAP should be managed. However, the
management process can be automated by using the auto-provisioning feature in
HiveManager. Using auto-provisioning, the administrator can import or manually enter a
list of serial numbers for HiveAPs that will be deployed, and then pre-define a WLAN
policy, operating system version, radio profiles, topology map, and VLAN for the HiveAPs.
When a HiveAP with one of the entered serial numbers is discovered, HiveManager will
automatically push configuration and/or operating system updates to each newly-
discovered HiveAP with no administrator interaction required.
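The auto-provisioning workflow above can be sketched as a lookup of pre-registered serial numbers against newly discovered devices. The class, field names, and record structure are assumptions for illustration, not HiveManager's data model.

```python
class AutoProvisioner:
    """Illustrative auto-provisioning: serial numbers registered ahead
    of deployment get configuration pushed on discovery with no
    administrator interaction."""

    def __init__(self):
        self.preprovisioned = {}   # serial number -> provisioning record

    def register(self, serial, wlan_policy, os_version):
        self.preprovisioned[serial] = {
            "wlan_policy": wlan_policy,
            "os_version": os_version,
        }

    def on_discovered(self, serial, push):
        """Called when a HiveAP checks in. Returns True if it was
        auto-provisioned; otherwise it stays in the discovered list
        awaiting manual assignment."""
        record = self.preprovisioned.get(serial)
        if record:
            push(serial, record)
            return True
        return False
```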
HiveManager allows an administrator to view all of the active clients in the WLAN,
showing their IP addresses, MAC addresses, hostnames, user names (if RADIUS
authentication is used), SSIDs, session start times, signal strength values, the HiveAPs
that clients are associated with, and a variety of other parameters. This information is
stored within HiveManager and can be exported as a CSV file for advanced
troubleshooting and network forensics. To make it easier to identify clients,
administrators can add comments and modify user details for clients, and these
annotations remain persistent across logins. This is especially useful when testing and
troubleshooting.
A wide variety of information can be exported for reporting purposes and pre-defined
and custom reporting features are standard in HiveManager. To be proactive,
administrators can configure email notifications so that they can be immediately
informed of alarms on the WLAN.
More detailed information about each client is available by selecting the client from the
list. This information includes the SSID and APs the client has been associated with.
If a set of HiveAPs require different configuration settings in areas of the network, you can
use one of two different methods to accomplish the configuration. The first method is to
simply clone an existing WLAN Policy, make the necessary changes, and apply the new
WLAN policy to one or more HiveAPs. The second method is to use HiveAP classification.
When HiveManager updates the configuration policy on HiveAPs, it can identify HiveAPs
by location on a map, HiveAP classification tag, or specifically chosen HiveAPs. In this
example, the HiveAPs in area 1 and area 2 are identified by the topology map they are
located on within HiveManager. When you define VLAN or IP address objects, you can
specify how they will be applied to HiveAPs based on HiveAP classification criteria. As an
example, the following picture shows how you can define a VLAN object from within
HiveManager that will be assigned to a user profile. You can see that the object
Employee-VLAN will be configured as: VLAN 2 on HiveAPs located on the Area1
topology map and VLAN 7 for HiveAPs on the Area2 topology map.
You can also use HiveAP classification settings for IP objects. In a similar fashion, you can
define the RADIUS server IP address to be 10.1.1.55 in Area 1 and 10.6.1.55 in Area 2. This
allows an administrator to push one configuration policy to all HiveAPs while allowing the
objects within the policy to be applied differently to individual HiveAPs based on
classification information.
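A classification-aware configuration object can be sketched as a value that resolves per HiveAP at push time. The values below mirror the Employee-VLAN and RADIUS server examples in the text; the class itself is an illustrative assumption, not HiveManager's internal representation.

```python
class ClassifiedObject:
    """One named configuration object whose value depends on the
    HiveAP's classification (here, its topology map)."""

    def __init__(self, name, values_by_map, default=None):
        self.name = name
        self.values_by_map = values_by_map
        self.default = default

    def resolve(self, topology_map):
        """Return the value this object takes on a HiveAP located on
        the given topology map."""
        return self.values_by_map.get(topology_map, self.default)

# The examples from the text: one policy, different resolved values.
employee_vlan = ClassifiedObject("Employee-VLAN", {"Area1": 2, "Area2": 7})
radius_server = ClassifiedObject("RADIUS-Server",
                                 {"Area1": "10.1.1.55", "Area2": "10.6.1.55"})
```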
In this example, when a user authenticates with SSID Corp_WiFi, the RADIUS server returns
the user profile attribute number 100 for that specific user, and the HiveAP uses that value
to bind the user to a user profile (ensuring the appropriate policies are enforced for that
user). This mapping of user profile attribute numbers into specific user profiles can be
different in different parts of the network, allowing for either global user policy
enforcement or location-specific user policy enforcement. In this example, in some
locations, a user is assigned different VLANs and QoS policies, but the same firewall
policy and layer 3 roaming policies. Likewise, other users assigned to different user profile
attributes may follow the same physical path, but can be assigned to different VLANs
and policies. This allows administrators to create access policies based on identity and
location.
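The attribute-to-profile binding just described can be sketched as a per-location lookup table. The attribute number 100 comes from the example in the text; the location names, VLANs, and policy values below are invented for illustration.

```python
# Hypothetical per-location mappings from the RADIUS-returned user
# profile attribute number to a local user profile. The same attribute
# can bind to different VLAN/QoS values in different locations while
# keeping a common firewall policy, as the text describes.
PROFILE_MAPS = {
    "HQ":     {100: {"vlan": 10, "qos": "voice",  "firewall": "corp"}},
    "Branch": {100: {"vlan": 20, "qos": "silver", "firewall": "corp"}},
}

def bind_user_profile(location, attribute):
    """Resolve the RADIUS attribute number to the user profile a
    HiveAP at this location would enforce."""
    return PROFILE_MAPS[location].get(attribute)
```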
Guest Manager
Aerohive's GuestManager is a guest account management and authentication server
that completes Aerohive’s robust and secure guest access solution. In combination with
HiveAPs, the GuestManager enables guest users to be simply and securely authorized for
access to a guest network. Aerohive GuestManager makes it simple for authorized
employees to create guest accounts while keeping the guest network secure by
authenticating and reporting on all guest users.
Credentials
The simple distribution of unique guest credentials is one of the key operational benefits
of GuestManager. Credentials can be randomly generated or specified during
registration to provide a unique login identity for the guest. Once the user credentials
are created, the credentials can be emailed or printed on letter-sized paper or a label
along with instructions on how to access the network.
Delivered as an Appliance
GuestManager is delivered as a standalone appliance or an instance on HiveManager
to ease deployment and maintenance of guest services. The GuestManager appliance
has been “hardened” (secured) against malicious attack, and vulnerabilities are fixed as
the system is upgraded.
Conclusion
Supporting near-100% uptime for mission-critical, real-time, and high-throughput
applications demands an architecture that can provide unprecedented availability and
security while still being simple to deploy, scale, and manage. Eliminating bottlenecks,
single points of failure, and scalability limitations is the next step in Wi-Fi’s continuing
evolution. The future of Wi-Fi is distributed intelligence and its name is Cooperative
Control.