
2017 IEEE 26th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises

A comparison of QoS parameters of WebRTC videoconference with conference bridge placed in private and public cloud

Robert R. Chodorek, Grzegorz Rzym, Krzysztof Wajda
Department of Telecommunications
The AGH University of Science and Technology
Kraków, Poland
e-mail: {chodorek,rzym}@agh.edu.pl, wajda@kt.agh.edu.pl

Agnieszka Chodorek
Department of Information Technology
Kielce University of Technology
Kielce, Poland
e-mail: a.chodorek@tu.kielce.pl

Abstract— This paper is devoted to the analysis of QoS parameters of WebRTC traffic, presented for a centralized videoconferencing system used for collaborative work. The system consists of a videoconference application, built according to the WebRTC architecture, and a telemetric system, built as an IoT environment. Tests were carried out for three locations of the conference bridge: in a private cloud (the OpenStack cloud), in a public cloud (the AWS cloud) and in the local network. Results show that a conference bridge in a cloud is a good solution for WebRTC, and that the additional transmission of telemetric data does not affect the QoS parameters of the WebRTC media stream.

Keywords- cloud, conference bridge, IoT, QoS, WebRTC

I. INTRODUCTION

The motivation behind the introduction of Web Real-Time Communications (WebRTC) is to allow the usage of typical web browsers for the transmission of multimedia streams, such as voice, video and gaming, and also to support remote collaboration among users. Typically, transmission of media information is based on dedicated systems, thus preventing flexible transmission in newer installations. Standardization of WebRTC is carried out by the IETF RTCWeb WG [1], supported by collaboration with the W3C [2].

The outcome of the IETF RTCWeb WG is summarized in basic RFCs defining WebRTC use cases [3], or reporting used video codecs [4] and audio codecs [5], but significant work is still ongoing and already documented in working drafts, for example a draft defining data channels in WebRTC [6]. A brief description of WebRTC features and architecture can be found in [7], [8], [9].

One of the challenges of modern ICT systems is the integration of WebRTC and the Internet of Things (IoT) [10]. A significant position among the proposed solutions is held by the usage of the MQ Telemetry Transport (MQTT) signaling protocol, implemented in IoT devices (small sensors and mobile devices), for WebRTC applications [11][12]. This integration is a complex process and requires new methods for rapid prototyping of new applications [13].

In this paper, an analysis of a system for collaborative work that combines a videoconferencing system (built according to the WebRTC architecture) with an IoT system is presented. The paper is focused on a comparative analysis of Quality of Service (QoS) parameters, measured for three different locations of the conference bridge: in a private cloud, in a public cloud, and in the local network. The impact of the location of terminals and the impact of the presence of telemetric traffic are also taken into account.

The rest of this paper is organized as follows. Section 2 describes the concept of integration of WebRTC with IoT. Section 3 presents the test environment, while Section 4 discusses the results of the carried-out experiments. Section 5 summarizes our experiences.

II. THE SYSTEM FOR REMOTE COLLABORATION

During interactive remote collaboration, conference systems are often used. Because collaborative work requires an exchange of various types of data, collaborative systems must enable sharing of the many resources available in the end systems. Nowadays, in many cases (e.g. in telemedicine), it is necessary to share data from multiple devices, located near end users, that are built according to the IoT paradigm [14]. As a result, collaborative services must provide teleconferencing services and data transfer from the IoT devices.

The developed conference system uses WebRTC terminals (web browsers with full support for WebRTC) and the Kurento Media Server [15] as a conference bridge. Kurento can work both on a single server and in a computing cloud (private or public) [16]. To transfer data from IoT devices (in the following text, data coming from IoT devices will also be referred to as telemetry data) between the WebRTC terminals (via the Kurento conference bridge), the WebRTC data channel (DataChannel in Fig. 1) was used. The data from the IoT devices are integrated and aggregated at the WebRTC terminal by a local broker. The broker was written by the Authors with the use of the mosca [17] library. Some improvements (such as the implementation of the QoS 2 level of the MQTT protocol [18]) were introduced. The broker was run on the WebRTC terminals. As the conference bridge, the Kurento Media Server (original software, without our fixes) was used. The conference application was written by the Authors (with the use of libraries for the broker and Kurento). As is

978-1-5386-1759-5/17 $31.00 © 2017 IEEE    DOI 10.1109/WETICE.2017.59
depicted in Figure 1, aggregated data are transmitted through the Kurento bridge to the WebRTC terminals, as the second (in addition to the media stream) WebRTC stream.

[Figure 1: the WebRTC protocol stack of a terminal (RTCPeerConnection over SRTP, DataChannel over SCTP, both over DTLS with ICE/STUN/TURN over UDP; signaling over WebSocket and HTTP 1.x/2 over optional TLS over TCP) and the media and telemetric streams exchanged between the stationary and mobile terminals via the web server and the Kurento Media Server.]
Figure 1. WebRTC architecture [19] and data and telemetric streams
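The broker's role described above — collecting MQTT messages from several IoT devices and forwarding them as one stream over the WebRTC data channel — can be sketched as follows. This is a minimal illustration, not the Authors' mosca-based implementation; the class and method names are assumptions:

```javascript
// Minimal sketch of the aggregation step performed by the local broker.
// The real system is a mosca-based MQTT broker feeding an RTCDataChannel.
class TelemetryAggregator {
  constructor() {
    this.pending = [];
  }

  // Called for every MQTT PUBLISH received from an IoT device.
  onMessage(topic, payload) {
    this.pending.push({ topic, payload, ts: Date.now() });
  }

  // Drain the buffer into a single JSON batch for the data channel.
  flush() {
    const batch = JSON.stringify(this.pending);
    this.pending = [];
    return batch;
  }
}

// In the browser, the batch would be handed to a data channel, e.g.:
//   dataChannel.send(aggregator.flush());
```

Batching keeps the telemetric stream as a single DataChannel flow, which matches the paper's setup of one media stream plus one data stream per terminal pair.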


The conference system has been implemented using the JavaScript scripting language. The client part of the system runs on a WebRTC terminal. Currently, the full implementation of the WebRTC functions in a web browser is the one developed by Google (the Google Chrome browser). The part of the conferencing system that runs on the web server implements the application logic, which communicates with the Kurento gateway using the Kurento protocol [15].

III. TEST ENVIRONMENT

Experiments were carried out using the test environment depicted in Figure 2. Two WebRTC terminals (a stationary computer and a mobile computer) were used. Both terminals used the Google Chrome browser. Tests were carried out using two locations of the mobile WebRTC terminal:
• nearby location (less than 500 m from the stationary terminal),
• distant location (about 150 km from the stationary terminal).

The stationary WebRTC terminal was connected to a local network that uses Gigabit Ethernet technology. The mobile WebRTC terminal, in both locations, was connected using wireless LTE technology. The Kurento Media Server, which acted as the conference bridge, was placed in two cloud environments:
• a public cloud, Amazon Web Services (AWS) [20] (Fig. 2a); a single instance of the m3.medium virtual machine [21] was used,
• a private cloud, built using the OpenStack cloud operating system [22] (Fig. 2b).

For the sake of comparison, the case of the Kurento Media Server connected to the local network was also analyzed (Fig. 2c). Tests of the application were carried out with the use of physical, autonomous IoT devices which send data through an 802.11 network using the MQTT protocol (in all devices, the MQTT client is implemented based on mosquitto [23]).

Figure 2. The test environment: a) public cloud, b) private cloud, c) local network

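The MQTT protocol used by the IoT devices frames every message as a PUBLISH packet whose fixed header carries the QoS level examined later in Section IV. The MQTT 3.1.1 wire format can be sketched as below; this is purely illustrative of the protocol, since the devices themselves use the mosquitto client library:

```javascript
// Encode an MQTT 3.1.1 PUBLISH packet (illustration of the wire format;
// the actual IoT devices rely on the mosquitto client library).
function encodePublish(topic, payload, qos = 0, packetId = 1) {
  const topicBytes = Buffer.from(topic, 'utf8');
  const payloadBytes = Buffer.from(payload, 'utf8');
  // Remaining length: 2-byte topic length + topic
  // (+ 2-byte packet identifier if QoS > 0) + payload.
  let remaining = 2 + topicBytes.length + (qos > 0 ? 2 : 0) + payloadBytes.length;
  const bytes = [];
  bytes.push(0x30 | (qos << 1));          // fixed header: PUBLISH, QoS in bits 2-1
  // "Remaining Length" is a variable-length integer, 7 bits per byte.
  do {
    let digit = remaining % 128;
    remaining = Math.floor(remaining / 128);
    if (remaining > 0) digit |= 0x80;
    bytes.push(digit);
  } while (remaining > 0);
  bytes.push(topicBytes.length >> 8, topicBytes.length & 0xff);
  const parts = [Buffer.from(bytes), topicBytes];
  if (qos > 0) parts.push(Buffer.from([packetId >> 8, packetId & 0xff]));
  parts.push(payloadBytes);
  return Buffer.concat(parts);
}
```

Note that only QoS 1 and 2 add a packet identifier, which is the hook for the acknowledged (and exactly-once) delivery levels tested in Section IV-F.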
IV. RESULTS

In this section we describe the traces that were collected during the experiments and present the QoS parameters (throughput, delay and error rate) calculated on the basis of these traces.

A. Traces

The collected traces consist of both audio samples and video frames transmitted in the same media stream with the use of the RTP protocol. Because of problems with NAT (Network Address Translation) traversal, and the resulting necessity of using as small a number of ports as possible, WebRTC broke with the existing practice of transmitting audio and video information in separate streams. Audio with its associated video constitutes one common media stream, identified by the same port number. Demultiplexing of the audio and video substreams is done through the Synchronization Source (SSRC) identifier, conveyed in each RTP packet.

B. Throughput

Figure 4 shows the instantaneous throughput of a media stream, calculated for the first 60 seconds of transmissions between WebRTC terminals via the Kurento Media Server. Transmissions were carried out in two stages:
• from a sender to the Kurento Media Server (in Figure 4 marked in blue),
• from the Kurento Media Server to a receiver (in Figure 4 marked in red).

During the first seconds of each transmission, a gradual increase of the throughput of the media stream is observed. This behavior is typical for TCP-friendly systems, which react to congestion like TCP. These systems either directly implement TCP's congestion control mechanism or emulate it (usually using the so-called TCP throughput equation). The implementation of WebRTC in the Google Chrome browser emulates TCP-like congestion control with the use of the TCP throughput equation taken from the TCP-Friendly Rate Control (TFRC) protocol.

After about 20-30 seconds, the transmission stabilized at a level of around 600 kbps. Transmission coming from the stationary WebRTC terminal remained stable until the very end, although minor changes of throughput were observed. Transmission coming from the mobile WebRTC terminal, after a brief period of relative stability, fluctuated from 300 to about 600-700 kbps.

The largest fluctuations of instantaneous throughput are observed for the mobile sender, and the smallest for the stationary one, which sends packets to the Kurento server located in the same local network. The fluctuations are caused, primarily, by instantaneous changes of the network load. The TFRC mechanism is very sensitive to network load, and the appearance of additional traffic can significantly change the send-side bandwidth estimation and, as a result, the sending rate.

Figure 3. An effect of network load. Sender: the stationary computer. Location of Kurento media server: in local network

Figure 3 shows a transmission coming from the stationary computer, with Kurento located in the same local network. The collapse of the sending rate observed in the picture was caused by a transmission of bulk data. As is depicted in Figure 4, the media streams retransmitted by the Kurento server followed the streams coming from the sending WebRTC terminal, although in some parts of the time diagrams minor changes of the transmission rate can be observed.

Table 1 presents the mean throughput (in kbps) of the media stream transmitted between the WebRTC terminals via the Kurento conference bridge. Throughput was calculated over the stable part of the transmission (the first 30 seconds of each transmission were omitted). As shown in Table 1, the mean throughput of the media stream sent to Kurento was always a little larger than the throughput of the same stream after retransmission by the conference bridge. Usually this difference was about 1% or less. Only in the case of the mobile WebRTC terminal in the distant location sending data to the bridge located in the public cloud was this difference about two times larger (about 2%).

The largest mean throughput (596.0 kbps) was observed for the stationary sender sending the media stream to the conference bridge located in the same network. The smallest (510.5 kbps) was observed for the stationary sender sending the media stream to the private cloud. Generally, the location of the conference bridge in the private cloud gave the worst performance, and the location in the local network the best (about 15% growth of throughput). Usage of the public cloud improved performance by 6%.

C. Impact of a mobile terminal location

Generally, the impact of the location of a mobile terminal on the mean throughput of the transmitted media stream is negligible (Tab. 1). The same tendency is seen in the case of both end-to-end delay (Tab. 2) and transmission errors (Tab. 3).

D. Delays

Delays were calculated as the difference between the time of sending a given RTP packet and the time of reception of this packet. To measure delays, the end systems must be synchronized.
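The SSRC-based demultiplexing described in subsection A can be illustrated by parsing the fixed RTP header: the SSRC occupies bytes 8-11, and a receiver maps each SSRC to the audio or the video substream. A minimal sketch (the SSRC values and the routing table are hypothetical, learned from signaling in a real session):

```javascript
// Extract the SSRC from a raw RTP packet (RFC 3550 fixed header:
// byte 0 = V/P/X/CC, byte 1 = M/PT, bytes 2-3 = sequence number,
// bytes 4-7 = timestamp, bytes 8-11 = SSRC, big-endian).
function ssrcOf(rtpPacket) {
  return ((rtpPacket[8] << 24) | (rtpPacket[9] << 16) |
          (rtpPacket[10] << 8) | rtpPacket[11]) >>> 0;
}

// Route packets of the common media stream to per-substream handlers.
// The SSRC-to-substream mapping here is made up for illustration.
const substreamBySsrc = new Map([
  [0x11111111, 'audio'],
  [0x22222222, 'video'],
]);

function demux(rtpPacket) {
  return substreamBySsrc.get(ssrcOf(rtpPacket)) ?? 'unknown';
}
```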

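The TCP throughput equation that TFRC uses, mentioned above as the basis of Chrome's send-rate estimation, can be evaluated directly (following RFC 5348; the parameter values in the test are illustrative, not measurements from this paper):

```javascript
// TFRC throughput equation (RFC 5348): average allowed sending rate
// X in bytes/s, given:
//   s - segment size [bytes]
//   R - round-trip time [s]
//   p - loss event rate (0 < p <= 1)
//   b - packets acknowledged per ACK (typically 1)
// The retransmission timeout tRTO is approximated as 4*R, as the RFC suggests.
function tfrcRate(s, R, p, b = 1) {
  const tRTO = 4 * R;
  const denom = R * Math.sqrt((2 * b * p) / 3) +
                tRTO * 3 * Math.sqrt((3 * b * p) / 8) * p * (1 + 32 * p * p);
  return s / denom;
}
```

As the equation shows, the estimated rate falls as the loss event rate p grows, which is why the additional bulk traffic in Figure 3 collapses the sending rate.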
TABLE I. MEAN THROUGHPUT (IN KBPS) OF MEDIA STREAM

  Sender                     Receiver                   to Kurento     from Kurento

  Conference bridge in private cloud
  stationary                 mobile (nearby location)   514.6 ± 4.4    510.8 ± 3.4
  stationary                 mobile (distant location)  514.3 ± 4.8    510.5 ± 3.9
  mobile (nearby location)   stationary                 562.6 ± 0.4    558.0 ± 0.8
  mobile (distant location)  stationary                 563.8 ± 1.2    558.0 ± 3.2

  Conference bridge in public cloud
  stationary                 mobile (nearby location)   545.6 ± 1.2    543.2 ± 1.8
  stationary                 mobile (distant location)  545.3 ± 1.4    543.1 ± 1.9
  mobile (nearby location)   stationary                 580.8 ± 1.2    577.7 ± 1.5
  mobile (distant location)  stationary                 567.3 ± 1.5    553.7 ± 3.0

  Conference bridge in a local network
  stationary                 mobile (nearby location)   596.0 ± 0.7    593.8 ± 0.4
  stationary                 mobile (distant location)  595.4 ± 0.8    593.7 ± 0.6
  mobile (nearby location)   stationary                 592.2 ± 2.8    591.5 ± 1.7
  mobile (distant location)  stationary                 591.4 ± 3.1    591.0 ± 1.8

Figure 4. Instantaneous throughput of a media (audio and associated video) stream. Transmission: a, c, d) from the stationary computer to the mobile one, b) from the mobile computer to the stationary one. Location of Kurento media server: a, b) in private cloud, c) in public cloud, d) in a local network. Location of mobile computer: nearby location

Mean delays and delay variances, computed for the media streams, are shown in Table 2. The smallest mean delays (from 80 to 85 milliseconds) were achieved in the case of the Kurento server (conference bridge) located in the private cloud. The greatest delays (from 279 to 285 milliseconds) were observed when the conference bridge was located in the public cloud. Delays measured when the conference bridge was located in the local network (from 267 to 272 milliseconds) were close to the delays achieved in a conferencing system with the bridge in the public cloud, although slightly smaller (1% to 5%).

In the case of the conference bridge located in either the private cloud or the local network, propagation delays, transmission delays, and queuing delays are comparable. The large difference between the mean delays measured for these two locations of the Kurento server results from the processing delay. Processing in the private cloud was much faster than that performed by a dedicated server. Processing in the public cloud was even faster, but, due to the location in a distant network, the propagation, transmission, and queuing delays are too large to be compensated by the higher processing speed.

As one might expect, the fluctuations of end-to-end delay were smallest in the case of both local locations of the conference bridge (private cloud and local network). The variances calculated for the public cloud were one order of magnitude larger (0.018 s to 0.019 s) than in the uncongested local network (0.002 s to 0.003 s).
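The per-stream statistics reported in Table 2 (mean delay and delay variance) are standard sample statistics over the per-packet one-way delays; a sketch, assuming per-packet send and receive timestamps from synchronized clocks:

```javascript
// Compute the mean and (population) variance of per-packet one-way delays,
// each delay being receive time minus send time on synchronized clocks.
function delayStats(sendTimes, recvTimes) {
  const delays = recvTimes.map((t, i) => t - sendTimes[i]);
  const mean = delays.reduce((acc, d) => acc + d, 0) / delays.length;
  const variance =
    delays.reduce((acc, d) => acc + (d - mean) ** 2, 0) / delays.length;
  return { mean, variance };
}
```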

TABLE II. END-TO-END DELAY

  Sender                     Receiver                   Mean delay [s]   Variance

  Conference bridge in private cloud
  stationary                 mobile (nearby location)   0.080 ± 0.001    0.002
  stationary                 mobile (distant location)  0.081 ± 0.001    0.003
  mobile (nearby location)   stationary                 0.085 ± 0.001    0.003
  mobile (distant location)  stationary                 0.084 ± 0.001    0.003

  Conference bridge in public cloud
  stationary                 mobile (nearby location)   0.279 ± 0.005    0.018
  stationary                 mobile (distant location)  0.282 ± 0.005    0.019
  mobile (nearby location)   stationary                 0.280 ± 0.005    0.018
  mobile (distant location)  stationary                 0.285 ± 0.005    0.019

  Conference bridge in a local network
  stationary                 mobile (nearby location)   0.267 ± 0.001    0.002
  stationary                 mobile (distant location)  0.269 ± 0.001    0.003
  mobile (nearby location)   stationary                 0.271 ± 0.001    0.002
  mobile (distant location)  stationary                 0.272 ± 0.001    0.003

TABLE III. TRANSMISSION ERRORS

  Sender                     Receiver                   Number of packets   %

  Conference bridge in private cloud
  stationary                 mobile (nearby location)    1   0.00
  stationary                 mobile (distant location)   0   0.00
  mobile (nearby location)   stationary                  1   0.00
  mobile (distant location)  stationary                  1   0.00

  Conference bridge in public cloud
  stationary                 mobile (nearby location)   15   0.07
  stationary                 mobile (distant location)  16   0.07
  mobile (nearby location)   stationary                 17   0.08
  mobile (distant location)  stationary                 15   0.07

  Conference bridge in a local network
  stationary                 mobile (nearby location)    0   0.00
  stationary                 mobile (distant location)   0   0.00
  mobile (nearby location)   stationary                  1   0.00
  mobile (distant location)  stationary                  1   0.00
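The packet error rate used throughout these tables is simply the share of errored packets in the whole transmission, expressed as a rounded percentage. A sketch (the packet counts in the test are assumptions chosen for illustration, not values reported by the Authors):

```javascript
// Packet error rate (PER) as a percentage of errored packets,
// rounded to two decimal places as in Table 3.
function packetErrorRate(erroredPackets, totalPackets) {
  return Math.round((erroredPackets / totalPackets) * 100 * 100) / 100;
}
```

Because a 60-second media stream carries thousands of packets, even the fifteen to twenty errors seen via the public cloud translate to only hundredths of a percent.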
E. Error rate

Table 3 reports the packet errors observed during transmissions between the WebRTC terminals via the conference bridge built with the use of Kurento. If the Kurento server was located in the private cloud or in the local network, only single errors were observed (if any). Thus, the packet error rate (PER) was always equal to zero percent. If the Kurento server was located in the public cloud, between fifteen and twenty errors occurred during each transmission, and the PER reached 0.08%.

F. Impact of telemetric traffic

Telemetric traffic was generated by N IoT devices (N = 5 or N = 10), aggregated by a local broker and sent to the conference bridge using the WebRTC data channel. Both channels (the media channel and the data channel) were used at the same time. Tests of the impact of telemetric traffic on the QoS of the media stream were carried out using the three levels of QoS assurance of the MQTT protocol, numbered from 0 to 2.

The results showed that the impact of telemetric traffic on the QoS of the associated media stream is very small or even negligible. The inelastic traffic coming from the IoT devices was too small to seriously affect the QoS of the inelastic media stream. The relatively small amount of transmitted data could neither change the throughput of the media stream nor increase the queuing delays enough to change the end-to-end delay.

Table 4 shows the packet errors observed during simultaneous transmission of the media stream and the telemetric (data) stream. The data collected in this table are comparable with the data presented in Table 3 (number of IoT devices N = 0). If the Kurento server was located in the same network as the stationary computer, 0 or 1 errors were observed. If the Kurento server was located in the public cloud, between fifteen and twenty errors were observed. The minor increase in the number of errors observed during transmission via the public cloud - from 15-17 (N = 0) to 16-19 (N = 10) - was not able to change the overall packet error rate. The calculated PER is still in the range of 0.07-0.08 percent.

Note that during the transmission of telemetric data, no errors were detected at the level of IoT messages.

V. CONCLUSIONS

In this paper, QoS parameters measured for media traffic are presented. The media traffic was generated by a WebRTC-based videoconferencing system integrated with IoT devices. The traffic load of the network and, mainly, the processing delay have the greatest impact on the transmission delay, and transmission errors were largest when the transmission between the stationary terminal and the Kurento server was carried out over the public Internet. The impact of both the placement of the mobile terminal and the presence of additional IoT traffic was negligible.

TABLE IV. IMPACT OF TELEMETRIC (IOT DEVICES) TRAFFIC ON PACKET ERRORS

Sender                     Receiver                   5 IoT: QoS 0 / 1 / 2                 10 IoT: QoS 0 / 1 / 2

Conference bridge in private cloud

stationary mobile (nearby location) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)

stationary mobile (distant location) 0 (0.00%) 0 (0.00%) 0 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)

mobile (nearby location) stationary 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 0 (0.00%) 0 (0.00%)

mobile (distant location) stationary 0 (0.00%) 0 (0.00%) 0 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)

Conference bridge in public cloud

stationary mobile (nearby location) 16 (0.07%) 15 (0.07%) 15 (0.07%) 18 (0.08%) 16 (0.07%) 16 (0.07%)

stationary mobile (distant location) 16 (0.07%) 16 (0.07%) 16 (0.07%) 18 (0.08%) 16 (0.07%) 16 (0.07%)

mobile (nearby location) stationary 17 (0.08%) 16 (0.07%) 16 (0.07%) 19 (0.08%) 17 (0.08%) 17 (0.08%)

mobile (distant location) stationary 17 (0.08%) 15 (0.07%) 15 (0.07%) 19 (0.08%) 16 (0.07%) 16 (0.07%)

Conference bridge in a local network

stationary mobile (nearby location) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)

stationary mobile (distant location) 0 (0.00%) 0 (0.00%) 0 (0.00%) 1 (0.00%) 0 (0.00%) 0 (0.00%)

mobile (nearby location) stationary 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)

mobile (distant location) stationary 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%) 1 (0.00%)
WebRTC represents a new solution for direct communication among users, which can be seen as a revisiting of the peer-to-peer concept. Besides defining the built-in features and aspects of WebRTC, there is significant interest in studying the cooperation of this technology with other important or novel solutions, such as conferencing tools, VoIP applications or IoT.

ACKNOWLEDGMENT

The research reported in this paper was supported by contract 11.11.230.018.

REFERENCES

[1] https://datatracker.ietf.org/wg/rtcweb/charter/
[2] http://www.w3.org/TR/webrtc/
[3] Ch. Holmberg, G. Eriksson and S. Hakansson, "Web Real-Time Communication Use Cases and Requirements," RFC 7478, Oct. 2015.
[4] A. Roach, "WebRTC Video Processing and Codec Requirements," RFC 7742, March 2016.
[5] J.M. Valin and C. Bran, "WebRTC Audio Codec and Processing Requirements," RFC 7874, May 2016.
[6] R. Jesup, S. Loreto and M. Tuexen, "WebRTC Data Channels," draft-ietf-rtcweb-data-channel-13, Oct. 2015.
[7] S. Loreto and S. P. Romano, "Real-Time Communications in the Web: Issues, Achievements, and Ongoing Standardization Efforts," IEEE Internet Computing, 2012.
[8] A. Amirante, T. Castaldi, L. Miniero and S. P. Romano, "On the seamless interaction between WebRTC browsers and SIP-based conferencing systems," IEEE Communications Magazine, vol. 51, no. 4, 2013.
[9] A. Johnston, J. Yoakum and K. Singh, "Taking on WebRTC in an enterprise," IEEE Communications Magazine, vol. 51, no. 4, 2013.
[10] P. Bernier, "How IoT and WebRTC Can Change the World," http://www.realtimecommunicationsworld.com/topics/realtimecommunicationsworld/articles/400358-how-iot-webrtc-change-world.htm
[11] R. Bharath, P. Vaish and P. Rajalakshmi, "Implementation of diagnostically driven compression algorithms via WebRTC for IoT enabled tele-sonography," IECBES 2016, Kuala Lumpur, pp. 204-209, 2016.
[12] T. Sandholm, B. Magnusson and B. A. Johnsson, "An On-Demand WebRTC and IoT Device Tunneling Service for Hospitals," FiCloud 2014, Barcelona, pp. 53-60, 2014.
[13] J. Janak and H. Schulzrinne, "Framework for rapid prototyping of distributed IoT applications powered by WebRTC," in Principles, Systems and Applications of IP Telecommunications (IPTComm), pp. 1-7, 2016.
[14] D. Gachet, M. de Buenaga, F. Aparicio and V. Padrón, "Integrating Internet of Things and Cloud Computing for Health Services Provisioning: The Virtual Cloud Carer Project," IMIS 2012, Palermo, pp. 918-921, 2012.
[15] "What's Kurento – Kurento," http://www.kurento.org/whats-kurento
[16] https://webrtchacks.com/webrtcmedia-servers-in-the-cloud/
[17] https://github.com/mcollina/mosca
[18] http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html
[19] I. Grigorik, High Performance Browser Networking, O'Reilly Media, 2013.
[20] https://aws.amazon.com/
[21] https://aws.amazon.com/ec2/instance-types/
[22] https://www.openstack.org
[23] http://mosquitto.org/

