High Density Tests & Comparative Study conducted on Ubiquiti Access Point
UAP-AC-SHD with Cisco-3802i, Meraki-MR52, Aruba-AP325, Mist-AP41,
Ruckus-R720
(version 1.3)
Submitted to
Ubiquiti
By
Alethea Communications Technologies
info@alethea.in
Contents
1. Introduction
2. Executive Summary
3. Rankings
3.1 Ratings @ Single Client
3.2 Ratings @ 32 Clients
3.3 Ratings @ 100 Clients
3.4 Aggregate Ratings & Ranking
4. Test Setup
4.1 Access Point Configuration
4.1.1 Firmware Used
4.1.2 Power Calibration
4.1.3 Settings & Configuration
4.2 Client Load: Single Client Location
4.3 Client Load: 32 Clients Location
4.4 Client Load: 100 Clients Location
5. Detailed Results
5.1 Throughput Downlink
5.1.1 Test Method
5.1.1.1 Client Load Single
5.1.1.1.1 Android Test Results
5.1.1.1.2 iOS Test Results
5.1.1.1.3 Windows Test Results
5.1.1.1.4 Ubuntu Test Results
5.1.1.1.5 Ratings for Single Client
5.1.1.2 Client Load 32
5.1.1.2.1 Test Results
5.1.1.3 Client Load 100
5.1.1.3.1 Test Results
5.1.2 TCP_DL Comparison between Client Load 1, 32, 100
5.2 Throughput Uplink
5.2.1 Test Method
5.2.1.1 Client Load Single
5.2.1.1.1 Android Test Results
5.2.1.1.2 iOS Test Results
5.2.1.1.3 Windows Test Results
5.2.1.1.4 Ubuntu Test Results
5.2.1.1.5 Ratings for Single Client
5.2.1.2 Client Load 32
5.2.1.2.1 Test Results
5.2.1.3 Client Load 100
5.2.1.3.1 Test Results
5.2.2 TCP_UL Comparison between Client Load 1, 32, 100
5.3 Latency
5.3.1 Test Method
5.3.1.1 Client Load Single
5.3.1.1.1 Android Test Results
Appendix
Results Table
a) Client Load Single
b) Client Load 32
c) Client Load 100
Revision History
IMPORTANT INFORMATION:
After the first version of this report was published, the R720 was found to be incompatible
with the PoE+ switch we used, so the tests for the R720 were repeated. This report version
(1.3) carries the revised data from the Ruckus retest.
Please feel free to reach out to info@alethea.in for further information and clarification.
1. Introduction
Alethea took up the task of evaluating end-user Wi-Fi experience with market-leading 4x4
enterprise-grade access points: UAP-AC-SHD, Cisco-3802i, Meraki-MR52, Aruba-AP325,
Mist-AP41 and Ruckus-R720.
To evaluate Wi-Fi experience, several popular application types were chosen and Key
Performance Indicators (KPIs) were measured for each of them. The traffic types include
video streaming, Voice over IP, latency, mixed traffic, and uplink and downlink
throughput.
The tests were conducted at three client load levels: a single client, 32 clients and 100
clients. This helped us study how each access point scales with increasing load.
Single-client results were captured on all OSs to establish baseline and peak performance
numbers, and to serve as a starting point for seeing the impact of scaling on each OS.
Real-life users use different devices: phones, tablets, PCs and laptops. We therefore
mixed the clients between Apple iOS devices, Mac devices, Android phones/tablets, Windows
and Linux. Device capabilities were also mixed, from low end to high end, with MIMO
configurations ranging from 1x1 to 3x3, and both Wave 1 and Wave 2 devices present.
All environment parameters and configurations were kept identical. All access points were
loaded with the latest firmware as of February 2019.
This report captures the findings and the results. We analyzed which AP did better than
the others at a given client load, and the impact of scaling on each AP.
2. Executive Summary
A. TCP downlink throughput tests at client load levels of 1, 32 and 100 clients:
a. Cisco-3802i performs best in downlink throughput, followed by UAP-AC-SHD.
b. Ruckus-R720, Meraki-MR52 and Aruba-AP325 perform well and closely follow
UAP-AC-SHD.
c. Mist-AP41 is fine up to 32 clients but degrades heavily at 100 clients.
B. TCP uplink throughput tests at client load levels of 1, 32 and 100 clients:
a. UAP-AC-SHD performs best in uplink throughput, closely followed by Mist-AP41.
b. Ruckus-R720, Meraki-MR52 and Aruba-AP325 performed almost identically at all
client load levels.
c. Cisco-3802i does not perform well and shows very low performance at the
100-client load level.
C. Latency tests carried out at client load levels of 1, 32 and 100 clients:
a. Cisco-3802i and Ruckus-R720 are almost on par and do better than all the others
in the latency tests.
b. Mist-AP41 is fine up to 32 clients but degrades heavily at 100 clients, with
client disconnections.
D. Video streaming tests carried out at client load levels of 32 and 100 clients:
a. Cisco-3802i does best in the video tests, followed by UAP-AC-SHD.
b. Ruckus-R720 performed well in the video tests, but behind UAP-AC-SHD.
c. Meraki-MR52 and Aruba-AP325 show similar performance in the video tests.
d. Mist-AP41 does not scale to 100 clients in video streaming.
E. Mixed traffic test carried out at a client load level of 100 clients:
a. Ruckus-R720 holds on to both VoIP and video quality better than the others.
b. UAP-AC-SHD performs well in mixed traffic, but behind Ruckus-R720.
c. Meraki-MR52 performs well in VoIP but does not keep up in video performance.
3. Rankings
3.1 Ratings @ Single Client
              TCP DL   TCP UL   Latency
UAP-AC-SHD    5        5        4
Cisco-3802i   5        3        3
Meraki-MR52   4        3        4
Aruba-AP325   4        3        3
Mist-AP41     1        1        5
Ruckus-R720   1        4        5
3.2 Ratings @ 32 Clients
              TCP DL   TCP UL   Video   Latency
UAP-AC-SHD    2        4        2       2
Cisco-3802i   5        1        5       3
Meraki-MR52   1        1        2       3
Aruba-AP325   1        1        2       3
Mist-AP41     3        5        1       3
Ruckus-R720   2        2        1       4
3.3 Ratings @ 100 Clients
              TCP DL   TCP UL   Video   Latency   Mixed Traffic
UAP-AC-SHD    3        5        4       4         4
Cisco-3802i   5        1        5       4         2
Meraki-MR52   2        5        3       3         4
Aruba-AP325   2        2        4       1         3
Mist-AP41     1        5        1       0         2
Ruckus-R720   2        3        4       4         5
3.4 Aggregate Ratings & Ranking
Summary of results
              1 Client   32 Clients   100 Clients   Total Score   Rank
UAP-AC-SHD    14         10           20            44            1
Cisco-3802i   11         14           17            42            2
Meraki-MR52   11         7            17            35            4
Aruba-AP325   10         7            12            29            5
Mist-AP41     7          12           9             28            6
Ruckus-R720   10         9            18            37            3
The rating method is as follows:
- For each test case, the KPI measured (e.g. data rate in Mbps for throughput) is noted
for each access point.
- The difference between the lowest and highest value is divided into five equal
sub-ranges.
- Each sub-range corresponds to a grade on a five-grade scale (5 is highest and 1 is
lowest).
- If there are multiple KPIs for a test, a rating is computed for each KPI and the
average is taken to get the rating of the test case.
- The ratings are aggregated from all the tests to arrive at an overall score.
- Based on the score, rankings are given.
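The sub-range grading rule above can be sketched in a few lines. This is our own sketch,
not Alethea's tooling; it assumes a higher-is-better KPI such as throughput (for latency
the report applies the scale in reverse):

```python
def rate(values):
    """Map each access point's KPI value to a grade from 1 to 5.

    The span between the lowest and highest value is split into five
    equal sub-ranges; a value in the k-th sub-range gets grade k
    (5 = best, 1 = worst). Assumes a higher-is-better KPI.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / 5 or 1  # guard against all values being equal
    # The highest value lands exactly on the top edge, so clamp to 5.
    return [min(int((v - lo) / width) + 1, 5) for v in values]

# Fed the single-client TCP_DL averages from section 5.1, this
# reproduces the report's ratings of 5, 5, 4, 4, 1, 1:
print(rate([545.09, 534.07, 521.95, 513.13, 459.63, 460.29]))
# → [5, 5, 4, 4, 1, 1]
```

Applied to the single-client downlink averages, the rule reproduces the 5/5/4/4/1/1
ratings the report assigns to the six APs.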
4. Test Setup
The test setup consists of 100 clients of different makes, which connect over Wi-Fi to
the SSID broadcast by the access point.
The access point is powered by a PoE+ gigabit switch. The same switch is connected to the
traffic generator, establishing a gigabit link behind the access point, between the
access point and the traffic generator.
All tests are designed so that traffic originates and terminates in the traffic
generator; it does not go out to the internet. All servers required for the different
applications are hosted on the internal traffic generator.
Note: The PoE+ switch used by default for all access points is a D-Link DGS-1008P. For
the Ruckus R720 retest, the PoE switch was changed to a Cisco WS-C3850-48P.
Extensive tests were done with all the access points to measure the impact of changing
the switch. Only the Ruckus performance varied with the switch; no difference in
performance was recorded for any other access point. Hence, it was verified that the
performance numbers of the other APs are unchanged and remain comparable.
4.1 Access Point Configuration
4.1.1 Firmware Used
i) UAP-AC-SHD - 4.0.21.9965
ii) Cisco-3802i - 8.8.111.0 (Mobility Express)
iii) Meraki-MR52 - MR 25.13
iv) Aruba-AP325 - 8.3.0.5_68279
v) Mist-AP41 - 0.3.15151
vi) Ruckus-R720 - 200.7.10.2.339
4.1.2 Power Calibration
Note: The specific AP power levels derived using this method are listed for each test in
the Appendix, along with the test results.
4.1.3 Settings & Configuration
All APs are freshly loaded with the latest firmware, and default configurations are used
unless stated otherwise.
The following additional changes were made on each access point, where not already part
of the default configuration:
a) Only the 5 GHz band is enabled; the 2.4 GHz band is disabled.
b) The channel with the least interference is selected manually. Channel 149 was
chosen for every access point.
c) Bandwidth setting:
i) For single-client tests, the channel bandwidth is set to 80 MHz.
ii) For the 32- and 100-client tests, it is set to 40 MHz.
d) All APs are ceiling-mounted in the center of the test lab.
e) All APs are configured with security type WPA2-PSK.
f) All APs are configured to handle 100 clients and very high throughput, wherever
such a setting is available.
Note: All tests were done with a low external channel occupancy of 2 to 3 %.
4.2 Client Load: Single Client Location
The power level of the AP is set based on the Ubuntu client RSSI, ensuring that the
Ubuntu client RSSI is the same across all APs.
4.3 Client Load: 32 Clients Location
The clients used in the 32-client test are 8 clients of each OS type, as listed below.
The power level of the AP was set based on the average RSSI of the Ubuntu clients,
ensuring that the average Ubuntu-client RSSI is the same across all APs.
4.4 Client Load: 100 Clients Location
The clients used in the 100-client test are 25 clients of each OS type, as listed below.
The power level of the AP was set based on the average RSSI of the Ubuntu clients,
ensuring that the average Ubuntu-client RSSI is the same across all APs.
5. Detailed Results
5.1 Throughput Downlink
5.1.1 Test Method
We used iPerf3 to measure throughput, since iPerf is one of the most popular and trusted
tools.
First, we start an iPerf3 server on each device. Once all the servers are up, we start
iPerf3 clients on the endpoint, one communicating with each server.
Upon completion of the iPerf3 traffic, each device has a reading of the throughput it
achieved. Cumulative throughput can be calculated by adding the readings from all the
devices. However, when testing with a larger number of clients, the iPerf3 sessions do
not all start at exactly the same time. Due to client load, some clients may start early
and finish early, and some may start late and finish late. At the beginning and end of
the test there is therefore a period when not all clients are running. Clients running
during this period record high throughput because only a few of them are active, so
simply adding up all the readings can inflate the cumulative throughput.
We therefore measure throughput over small intervals. We ignore the initial and final
intervals, when not all clients are running, and take the average over the intervals
when the clients are running concurrently.
The iPerf3 duration is 180 seconds. Only the intervals in which at least 80 % of the
clients are participating are used in the computation.
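A minimal sketch of this interval-based aggregation. The per-second sample format and
all names here are our own assumptions, not Alethea's actual tooling:

```python
from collections import defaultdict

def cumulative_throughput(samples, active_share=0.8):
    """Aggregate per-client iPerf3 interval readings into one number.

    samples maps each client to a list of (second, mbps) readings.
    Per-second throughput is summed across clients, but only the seconds
    in which at least `active_share` of all clients reported a reading
    (i.e. were actually running) count toward the average; the ramp-up
    and ramp-down seconds at the edges of the test are discarded.
    """
    per_second = defaultdict(list)
    for readings in samples.values():
        for second, mbps in readings:
            per_second[second].append(mbps)
    need = active_share * len(samples)
    valid = [sum(v) for v in per_second.values() if len(v) >= need]
    return sum(valid) / len(valid) if valid else 0.0

# Two clients whose sessions only partially overlap: seconds 0 and 3,
# where one client runs alone, are excluded from the average.
print(cumulative_throughput({
    "client-a": [(0, 10), (1, 10), (2, 10)],
    "client-b": [(1, 5), (2, 5), (3, 5)],
}))  # → 15.0
```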
Points to Note
● UAP-AC-SHD exceeds the goal
● Cisco-3802i, Meraki-MR52 and Aruba-AP325 also exceed the goal, but are slightly
behind UAP-AC-SHD
● Ruckus-R720 and Mist-AP41 meet the expectations
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               545.09      534.07       521.95       513.13       459.63     460.29
Rating               5           5            4            4            1          1
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               229.92      256.87       219.07       217.01       233.56     224.99
Rating               2           5            1            1            3          2
Points to Note
● Cisco-3802i exceeds the goal
● Mist-AP41 meets the expectation
● All the other APs performed almost identically and do not meet the expectation
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               142.89      200.71       134.33       136.54       98.43      135.72
Rating               3           5            2            2            1          2
Points to Note:
● Cisco-3802i performed very well and exceeds the goal
● UAP-AC-SHD meets the expectations
● Ruckus-R720, Meraki-MR52 and Aruba-AP325 are slightly behind UAP-AC-SHD and do
not meet the expectations
● Mist-AP41 does not meet the expectations because of its lag with Android clients
5.2 Throughput Uplink
5.2.1 Test Method
The test method is the same as for downlink throughput; only the traffic direction is
reversed.
Points to Note
● All APs meet the expectations and perform almost identically with Windows OS
● Ruckus-R720 doesn't meet the expectations
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               485.28      455.85       453.33       462.12       428.38     467.30
Rating               5           3            3            3            1          4
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               88.92       52.56        55.30        49.94        99.89      61.40
Rating               4           1            1            1            5          2
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Throughput
(Mbps)               47.66       3.46         39.66        16.42        44.02      22.45
Rating               5           1            5            2            5          3
5.3 Latency
Various improvements have been introduced in Wi-Fi technology to increase the most
popular measure, throughput. At times, however, these can negatively affect latency;
MU-MIMO, for example, is known to degrade latency while improving throughput. Latency
plays an important role in user experience, e.g. in VoIP and browsing. So, as a
balancing factor against throughput, latency should also be measured.
Packet Loss = 0 %
Points to Note
● Cisco-3802i is best, with the lowest latency; Mist-AP41 is slightly behind
Cisco-3802i
Points to Note
Latency (ms) @ Windows client, with packet loss = 0 % on all APs:
UAP-AC-SHD = 2, Cisco-3802i = 6, Meraki-MR52 = 3, Aruba-AP325 = 2, Mist-AP41 = 4,
Ruckus-R720 = 3
Points to Note
● UAP-AC-SHD and Aruba-AP325 are best, with the lowest latency on the Windows
device
Points to Note
● Meraki-MR52 is best, with the lowest latency on the Ubuntu device
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Latency
(ms)                 8.04        7.58         8.33         8.59         7.03       7.36
Rating               4           3            4            3            5          5
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Latency
(ms)                 7.12        5.41         7.06         7.11         5.07       5.92
Rating               2           3            3            3            3          4
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average Latency
(ms)                 12          8.23         13.11        14.26        Fail       10.40
Rating               4           4            3            1            0          4
Points to Note:
● Cisco-3802i latency is best, followed by Ruckus-R720
● All the other APs' results are almost similar
● Disconnections were observed on Mist-AP41
For video streaming, a media server runs on the endpoint. Clients access the video URL
through the Chrome browser, with all requests made concurrently. When a client makes its
request, HTTP streaming of the video starts from the endpoint.
We mark each client pass or fail based on buffering time. We also capture the total
completion time per client in our report.
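The report does not state the exact buffering threshold, so the pass/fail decision can
only be illustrated. A hypothetical player model, with thresholds and names of our own
choosing: video chunks of fixed playback length arrive over HTTP, and any time the next
chunk lands after the playback buffer has drained counts as a stall:

```python
def stream_verdict(arrivals, chunk_seconds=1.0, max_stall=2.0):
    """Judge one client's video session from chunk download-finish times.

    arrivals[i] is the wall-clock second at which chunk i finished
    downloading; each chunk holds `chunk_seconds` of video. Whenever a
    chunk arrives after the previous one has finished playing, the gap
    is counted as buffering time. `max_stall` is an illustrative
    threshold, not the report's actual pass/fail limit.
    """
    play_end = 0.0  # moment the last scheduled chunk finishes playing
    stalled = 0.0
    for t in arrivals:
        if t > play_end:            # buffer ran dry waiting for this chunk
            stalled += t - play_end
            play_end = t
        play_end += chunk_seconds
    completion = arrivals[-1]       # total completion time, also reported
    return ("pass" if stalled <= max_stall else "fail", stalled, completion)

# Chunks arriving faster than real time: one short initial stall, pass.
print(stream_verdict([0.5, 1.0, 1.5, 2.0]))  # → ('pass', 0.5, 2.0)
```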
Number of clients (out of 8 per OS) passing video streaming at 1000 Kbps and 1500 Kbps:
              1000 Kbps                            1500 Kbps                            Pass
              Android  iOS  Windows  Ubuntu  Tot   Android  iOS  Windows  Ubuntu  Tot   %
UAP-AC-SHD    8        8    7        6       29    8        7    3        2       20    76.56
Cisco-3802i   8        8    7        7       30    8        8    6        3       25    85.93
Meraki-MR52   8        8    6        7       29    7        7    2        3       19    75
Aruba-AP325   8        8    3        7       26    8        6    3        4       21    73.43
Mist-AP41     6        8    8        8       30    5        1    7        3       16    71.87
Ruckus-R720   5        8    8        4       25    5        6    8        3       21    71.87
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average pass
percentage           76.56       85.93        75           73.43        71.87      71.87
Rating               2           5            2            2            1          1
Number of clients (out of 25 per OS) passing video streaming at 1000 Kbps and 1500 Kbps:
              1000 Kbps                            1500 Kbps                            Pass
              Android  iOS  Windows  Ubuntu  Tot   Android  iOS  Windows  Ubuntu  Tot   %
UAP-AC-SHD    17       18   23       25      83    17       16   12       17      58    70.5
Cisco-3802i   20       24   24       25      93    20       14   24       22      80    86.5
Meraki-MR52   19       23   8        16      66    17       22   2        5       46    56
Aruba-AP325   21       15   18       17      71    16       10   14       5       45    58
Mist-AP41     0        5    6        4       15    1        4    4        2       11    13
Ruckus-R720   20       21   19       23      83    12       19   9        12      52    67.50
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Average pass
percentage           70.50       86.50        56           58           13         67.50
Rating               4           5            3            4            1          4
Points to Note:
● With 100 clients too, Cisco-3802i leads all the APs
● UAP-AC-SHD takes second place; Ruckus-R720, Aruba-AP325 and Meraki-MR52 meet the
expectations
● Mist-AP41 does not meet the expectations
We evaluate the traffic at each client based on the criteria for each traffic type.
Video, TCP DL and TCP UL are measured in the same way as described in the earlier
sections. VoIP performance is measured using MOS. 12 VoIP calls are set up between 24
clients, each call lasting 280 seconds.
We use SIP for VoIP testing, with a SIP server running on the endpoint. When the test
begins, all clients register with the SIP server; after registration they are in a
waiting state. For a test of n calls (n = 12 in the current test), n clients act as
callers and call the other n clients. After a call is connected, the receiver plays a
predefined audio file to the caller. On completion of the call, the caller records
various call statistics: Tx packets, Rx packets, jitter, latency, packet loss and more.
From these statistics, we compute MOS. Though based on standards, it is a proprietary
way of combining the various metrics and making sense of them. VoIP MOS is a quality
indicator between 1 and 5, with 5 being the highest quality and 1 the lowest. More
details of how we compute MOS can be found at
https://alethea.in/voip-performance-wifi/
Calls that do not complete or get disconnected are given a MOS of 0. Effective MOS is
calculated as the average MOS over all the calls, and it is the figure compared between
the different access points for benchmarking.
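The effective-MOS rule is simple enough to state as code. This is our own sketch; using
`None` to mark a failed call is our convention, not Alethea's:

```python
def effective_mos(call_scores):
    """Average MOS over all attempted calls, counting failed or
    disconnected calls (marked None here) as MOS 0, per the report's
    method."""
    return sum(0.0 if m is None else m for m in call_scores) / len(call_scores)

# Consistent with the Ruckus-R720 mixed-traffic result below: 11
# completed calls averaging MOS 3.55 out of 12 attempts gives an
# effective MOS of about 3.25.
print(round(effective_mos([3.55] * 11 + [None]), 2))  # → 3.25
```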
              Calls       Pass-call   Effective   Aggregate    Aggregate    Video passes   Verdict
              completed   MOS         MOS         TCP DL       TCP UL       (1 Mbps)
UAP-AC-SHD    11          2.08        1.90        35.61 Mbps   23.61 Mbps   4 (16.67 %)    Meets Goal
Cisco-3802i   -           1.01        0.76        15.4 Mbps    5.06 Mbps    4 (16.67 %)    Does not meet Goal
Meraki-MR52   12          2.92        2.92        25.41 Mbps   26.83 Mbps   0 (0 %)        Meets Goal
Ruckus-R720   11          3.55        3.25        29.15 Mbps   37.10 Mbps   7 (29.17 %)    Exceeds Goal
Rating on a scale of 1 to 5
(5 - High performance, 1 - Low performance)
                     UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Rating               4           2            4            3            2          5
Appendix
Results Table
a) Client Load Single
Bandwidth: 80 MHz
RSSI as measured at client: -44 to -45 dBm
External Channel Occupancy: 2 to 3 %

                                       UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Throughput_DL
Iperf TCP_DL @ ios client (Mbps)       555.81      605.45       573.19       551.08       468.52     393.62
Iperf TCP_DL @ windows client (Mbps)   417.98      344.29       398.43       362.79       315.68     327.41
Iperf TCP_DL @ ubuntu client (Mbps)    587.94      606.76       544.92       565.6        548.7      605.20
Average                                545.09      534.07       521.95       513.13       459.63     460.29
Throughput_UL
Iperf TCP_UL @ ios client (Mbps)       565.37      555.08       595.92       585.11       435.81     625.84
Iperf TCP_UL @ windows client (Mbps)   368.12      369.54       364.61       365.62       377.02     304.18
Iperf TCP_UL @ ubuntu client (Mbps)    483.94      399.34       413.44       442.36       401.69     403.42
Average                                485.28      455.85       453.33       462.12       428.38     467.30
Latency
Latency (ms) @ android client          13.58       9.93         14.04        13.94        10.16      10.53
Latency (ms) @ ios client              11.37       9.41         11.94        12.11        8.68       11.09
Latency (ms) @ windows client          2           6            3            2            4          3
Latency (ms) @ ubuntu client           5.21        4.99         4.34         6.32         5.27       4.84
b) Client Load 32
Bandwidth: 40 MHz
RSSI as measured at client: -43 to -44 dBm
External Channel Occupancy: 2 to 3 %
c) Client Load 100
Bandwidth: 40 MHz
RSSI as measured at client: -43 to -45 dBm
External Channel Occupancy: 2 to 3 %
                                       UAP-AC-SHD  Cisco-3802i  Meraki-MR52  Aruba-AP325  Mist-AP41  Ruckus-R720
Throughput_DL
Iperf TCP_DL @ 25 Android clients      25.19       14.68        29.5         23.01        2.24       48.96
Iperf TCP_DL @ 25 ios clients          18.3        19.65        32.08        12.3         18.84      35.58
Iperf TCP_DL @ 25 windows clients      18.99       57.51        4.28         17.52        21.32      10.77
Iperf TCP_DL @ 25 ubuntu clients       80.36       108.87       68.47        83.71        56.03      40.41
Iperf TCP_DL @ 100 Mixed Clients       142.84      200.71       134.33       136.54       98.43      135.72
Throughput_UL
Iperf TCP_UL @ 25 Android clients      10.63       0.11         11.48        0.09         1.77       9.53
Iperf TCP_UL @ 25 ios clients          0.57        0            2.16         0.06         3.97       0.99
Iperf TCP_UL @ 25 windows clients      1.62        0.63         0.53         3.61         2.22       2.82
Iperf TCP_UL @ 25 ubuntu clients       34.83       2.71         25.5         12.67        36.06      9.12
Iperf TCP_UL @ 100 Mixed Clients       47.66       3.46         39.66        16.42        44.02      22.45
Latency (Mist-AP41: Fail, disconnections observed)
Latency (ms) @ 25 Android clients      13.871      11.159       14.027       16.26        Fail       15.05
Packet Loss (%)                        0           0.12         0            0            Fail       0
Latency (ms) @ 25 ios clients          23.449      14.895       27.627       30.512       Fail       17.473
Packet Loss (%)                        0.022       0.133        0.022        0.519        Fail       0.024
Latency (ms) @ 25 windows clients      6.382       1.833        4.52         3.958        Fail       3.182
Packet Loss (%)                        0           0            0            0            Fail       0
Latency (ms) @ 25 ubuntu clients       4.28        4.791        6.017        5.976        Fail       5.327
Packet Loss (%)                        0           0            0            0            Fail       0
Latency (ms) @ 100 Mixed Clients       12          8.23         13.11        14.26        Fail       10.40
Total Packet Loss (%)                  0.005       0.063        0.006        0.13         Fail       0.005
Video_1Mbps
1000 Kbps @ 25 Android clients         17          20           19           21           0          20
1000 Kbps @ 25 ios clients             18          24           23           15           5          21
1000 Kbps @ 25 windows clients         23          24           8            18           6          19
1000 Kbps @ 25 ubuntu clients          25          25           16           17           4          23
1000 Kbps @ 100 Mixed Clients          83          93           66           71           15         83
Video_1.5Mbps
1500 Kbps @ 25 Android clients         17          20           17           16           1          12
1500 Kbps @ 25 ios clients             16          14           22           10           4          19
1500 Kbps @ 25 windows clients         12          24           2            14           4          9
1500 Kbps @ 25 ubuntu clients          17          22           5            5            2          12
1500 Kbps @ 100 Mixed Clients          58          80           46           45           11         52
Mixed Traffic
Effective MOS (12 calls between
24 clients)                            1.9         0.76         2.92         1.06         0.08       3.25
Iperf TCP_DL @ 24 Mixed Clients        35.61       15.41        25.41        15.51        7.59       29.15
Iperf TCP_UL @ 24 Mixed Clients        23.61       5.06         26.83        12.53        28.11      37.10
1 Mbps Video @ 24 Mixed Clients        4           4            0            5            2          7
Revision History
Version   Date         Changes
1.3       30th April   1. Added Ruckus data after retest with a compatible PoE+ switch.
                       2. Added PoE switch details in Section 4 and updated Section 4.1
                          with Ruckus R720 information.