based on which the proposed remote production framework was implemented.

In addition to introducing the remote production framework for live teleportation, we also comprehensively and systematically analyze a wide range of network and application factors that may affect the quality of remote production operations in the 5G environment. All these experiments were carried out on the real-life 5G network infrastructure built at the 5G Innovation Centre (5GIC) hosted at the University of Surrey, including 5G RAN, 5G mobile edge and core network components. We consider the following dimensions of analysis: (1) Bandwidth requirements against different content resolution levels, ranging from basic quality to ultra-high definition (UHD) level. We show through our experiments that today's 5G capabilities can only support the low end of the resolution spectrum. (2) Different transport-layer congestion control algorithms for streaming content frames over the 5G uplink radio. (3) The testing of both outdoor and indoor environment scenarios with varied 5G radio connection qualities. (4) Varied configurations on the object-capturing side, including the number of required sensor cameras and the number of human figures to be simultaneously teleported.

It is important to note that holographic teleportation is still in its infancy, and such applications will continue to evolve with much higher network/computing requirements and challenges. This paper aims to conduct a thorough feasibility analysis of what today's 5G networks can support for the current early generation of holographic teleportation applications. Such a study will also shed light on how future beyond-5G network technologies can enable the delivery of next-generation teleportation services, in particular by identifying a set of key factors that need to be included in future feasibility studies. The rest of the paper is organized as follows. In Section II we present our literature review. Section III provides a detailed overview of the holographic teleportation application system with 5G-enabled remote production, followed by Section IV, which provides an in-depth analysis of 5G capabilities for supporting remote production operations. In Section V we conclude the paper.

II. LITERATURE REVIEW

A. Application Performance in Cellular Network

Measuring and understanding the user-perceived performance of practical cellular networks is the premise of deploying and optimizing modern multimedia applications. In the 4G era, numerous efforts were made to examine the achievable throughput and latency in different scenarios [2], [3], [4], [5]. With the help of advanced wireless techniques and an evolved network architecture, streaming high-resolution video (e.g., 4K video) on the downlink achieved unprecedented success in industry. However, due to the high asymmetry of link conditions in cellular networks, the sluggish uplink (e.g., even less than 1 Mbps [4]) limits application usage in two ways. First, only application control messages (e.g., HTTP GET) can be observed in industrial networks, while video streaming can hardly be supported due to insufficient throughput [5]. Second, with the increasing popularity of cloud computing, mobile users with limited processing ability can offload their high computation cost to the cloud, where high-performance devices reside. However, this is feasible only when the network round trip is shorter than the time the client would spend computing the task locally. In other words, the 4G uplink is too poor to realize this benefit fully. In contrast, measured 5G performance reveals that commercial cellular networks are now approaching the Gbps era. To name a few, Narayanan et al. measured the uplink performance across three operators, with throughput varying between 30 Mbps and 60 Mbps [6]; Xu et al. [7] conducted a systematic study of an operational 5G network and found an uplink throughput of 35-68 Mbps; and Liu et al. provided a profound diagnosis of the uplink, with measured throughput ranging from 0.81 Mbps to 217.97 Mbps [8]. Meanwhile, the RTT measured from the mobile device to the first hop at the edge is around 20 ms [6], [7].

Additionally, such boosted network resources and the evolved network infrastructure provide fertile ground for point-to-multipoint transmission techniques that enhance the performance and efficiency of holographic video delivery. For instance, by comparing commercial 5G networks in different countries and the adoption of broadcast/multicast in 3GPP standardization, Gomez-Barquero et al. [9], [10] pointed out that the convergence of broadcast/multicast in 5G New Radio (NR), a service-enabled 5G Core, and non-3GPP dual connectivity brings unprecedented advantages for mixed-reality video in terms of service quality, stability and automation. These advantages have been demonstrated on a practical 5G testbed with multi-link and multicast/broadcast-enabled network support for immersive video [11]. Similarly, Ge et al. [12] showed that, with the assistance of a MEC server, high-quality video streaming via 5G satellite backhaul can achieve satisfactory, assured performance.

On the other hand, as transport-layer protocols play a critical role in delivering video applications, diagnosing transport-layer behavior in mobile networks has attracted extensive attention in academia. Three main problems have been revealed in traditional mobile networks. First, the link utilization of transport protocols is low: applying multiple connections on the 5G downlink can triple the throughput of a single connection [5]. Second, unavoidable RTT and loss variations affect the stability of transport-layer algorithms [5], [6]; as a result, application uploading throughput (e.g., for panoramic video) shows poor stability [7]. Third, different congestion control algorithms and their parameters (e.g., CUBIC [13] and BBR [14]) exhibit different patterns/behaviors, with distinct advantages and disadvantages when facing network variations [15], [16]. For instance, the TCP buffer size and unoptimized management policies on mobile phones result in degraded application throughput in 5G networks [8].
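The offload feasibility condition discussed above reduces to a one-line predicate: offloading pays off only when the network round trip plus the remote compute time beats local computation. A minimal sketch, with illustrative timing numbers (not measurements from the paper):

```python
def should_offload(rtt_s: float, t_remote_s: float, t_local_s: float) -> bool:
    """Offloading to the cloud/edge pays off only when the network round
    trip plus the remote compute time beats computing locally."""
    return rtt_s + t_remote_s < t_local_s

# Illustrative numbers: a 4G-era 200 ms RTT erases the gain of a fast
# edge server, while a ~20 ms 5G edge RTT does not.
assert not should_offload(rtt_s=0.200, t_remote_s=0.050, t_local_s=0.150)
assert should_offload(rtt_s=0.020, t_remote_s=0.050, t_local_s=0.150)
```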
Authorized licensed use limited to: LAKIREDDY BALI REDDY COLLEGE OF ENGINEERING. Downloaded on January 09,2024 at 05:36:54 UTC from IEEE Xplore. Restrictions apply.
QIAN et al.: REMOTE PRODUCTION FOR LIVE HOLOGRAPHIC TELEPORTATION APPLICATIONS 453
B. Holographic Applications

Holographic applications [1], [17]-[22] are the most recent advances in immersive applications, creating another dimension for immersive experiences. To generate holograms, the research literature focuses on two main approaches: image-based [19], [21] and volumetric-based solutions [1], [18], [22]. Image-based holograms [19], [21] require capturing objects as an array of images at different angles and tilts. This accumulates a massive number of concurrent images, which leads to significant demands on storage and network transmission bandwidth [17], [21]. Different compression mechanisms [21] have been proposed to reduce the overhead of image-based holograms by leveraging the fact that individual images across the image array exhibit only minimal differences. Volumetric-based holograms [1], [18], [22] store objects as a collection of "point clouds" in three-dimensional space, such that each point has its own X, Y and Z coordinates. The 3D point cloud format allows representing an object as a set of 3D volume pixels, or voxels [25]. To capture volumetric videos, multiple RGB-D cameras with depth sensors (e.g., Microsoft Azure Kinect [26], Kinect v2 [27], Intel RealSense [28], and other LIDAR scanners [29]) are used to acquire 3D data from different angles, which is then merged to produce the entire scene. The whole processing pipeline involves complex dynamic preprocessing and rendering of actual images from different viewing angles at the local endpoint, gathering multiple point cloud objects and rendering them simultaneously [1], [22], [23], where frames from different cameras must be synchronized to unified coordinates and noise-filtered in order to build the output hologram. In volumetric-based holograms, each voxel is transmitted only once, and the volume is independent of the number of angles or tilts [17], [18]. In addition to point clouds, 3D meshes [30] have been studied extensively for representing volumetric video. However, point clouds offer better rendering performance, more flexibility and a simpler representation of 3D objects, as 3D mesh algorithms are more complex, employing sets of triangles, quadrangles or general polygon meshes to represent the geometry of 3D models [18], [30]. Nevertheless, representing holograms as volumetric media requires complex compression schemes [31], such as octree-based approaches [32], which reduce the volume at the cost of computation. MPEG defines two representations of point cloud compression (PCC), namely video-based PCC (V-PCC) [31] and geometry-based PCC (G-PCC) [33]. V-PCC decomposes the 3D space into two separate video sequences that capture the geometry and texture information, while G-PCC encodes the content directly in 3D space by utilizing data structures (e.g., an octree [34]) to describe point locations. Most volumetric video decoding relies on software, as dedicated hardware support for volumetric hologram decoding is still limited [18].

III. HOLOGRAPHIC TELEPORTATION SYSTEM OVERVIEW

In this section, we describe our holographic teleportation system, which uses multiple Azure Kinect sensor cameras to capture scenes from multiple viewpoints and stream aggregated 6DoF video content to the user in real time. The rest of the section is organized as follows. Section III-A presents the system overview of our holographic teleportation, Section III-B describes the network video stream specification, frame structure and signaling protocol of the teleportation system, and Section III-C discusses the use cases and applications of the holographic teleportation system in 5G networks.

A. Holographic Teleportation System

We leverage and substantially extend LiveScan3D [1], an open-source holographic application with a user-friendly interface, which allows us to capture 3D objects using multiple sensor cameras, then aggregate and produce a 3D hologram at the server before streaming it as 6DoF video content to the user. It is worth mentioning that, although we implemented our design based on LiveScan3D, our holographic teleportation is generically applicable to other holographic-type communication platforms.

Fig. 2. Holographic teleportation system overview.

Figure 2 shows an overview of our holographic teleportation system. To enable the 6DoF feature, a remote object is surrounded by multiple sensor cameras in order to be captured into a 3D hologram. Each sensor camera is locally connected to a computer acting as a client, whose responsibility is to capture sensor camera data, process local point clouds and produce continuous frames before sending them to the server. Each client has a globally unique identifier (Guid), as IP addresses are not sufficient for identifying separate clients, especially when clients are in the same subnet or share a common public IP address when operating over the public Internet. In our teleportation system, we use the Microsoft Azure Kinect camera as the sensor [26], although our system is portable to other types of sensor cameras such as Kinect v2 [27]. The Microsoft Azure Kinect sensor camera allows us to enable a variety of frame rates and resolutions from both the RGB and depth sensors. Compared with Kinect v2, the Azure Kinect at a synthetic RGBD resolution of Ultra HD (i.e., 3840x2160) can generate more than three million points per frame, assuming one frame is produced every 33.33 milliseconds; that is at least three times more points than Kinect v2, which results in better volumetric quality [26], [27].
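The octree-based volume reduction cited in Section II-B starts from uniform spatial partitioning of the cloud. The following is a minimal voxel-grid sketch of that first partitioning level, for illustration only; it is not the actual MPEG G-PCC algorithm:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Group (x, y, z) points into a uniform voxel grid and keep one
    centroid per occupied voxel -- the first level of the octree-style
    spatial partitioning used by point cloud codecs."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # One representative point (the centroid) replaces each voxel's contents.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in buckets.values()]

cloud = [(0.01, 0.02, 0.00), (0.02, 0.01, 0.01),  # fall in the same 5 cm voxel
         (0.30, 0.30, 0.30)]                      # a different voxel
assert len(voxel_downsample(cloud, voxel_size=0.05)) == 2
```

A real octree recurses this subdivision adaptively, but the storage saving comes from the same idea: nearby points collapse into a shared spatial cell.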
454 IEEE TRANSACTIONS ON BROADCASTING, VOL. 68, NO. 2, JUNE 2022
TABLE I
Number of Point Clouds vs. Azure Kinect Resolution

Fig. 4. Bandwidth requirement for different Azure Kinect camera resolutions at 30 FPS.

UEs, including frame type (one byte), data length (four bytes), compression (four bytes), timestamp (four bytes) and an optional QoE/QoS control signal (four bytes). While the first 13 bytes have been discussed in [22], we add one additional (optional) header field, which is used by the server for QoE and QoS control signals. In a real network environment, there are always trade-offs between different QoE metrics such as resolution quality, playback disruptions and frame synchronization. Even a single QoE metric (e.g., Frames Per Second (FPS)) is affected by multiple QoS factors (e.g., bandwidth, latency, packet loss, etc.). To balance the trade-off, the holographic teleportation system may need to sacrifice one for another (e.g., sacrifice FPS to reduce playback latency). Based on the current network characteristics and application performance, the QoE and QoS control signal header field in each frame request helps the server instruct each client to, for example, begin stochastically dropping frames in order to improve the playback experience of the user.

As the client captures and converts RGBD images to points in 3D space, for each point being transmitted we use nine bytes in the payload: one for each of R, G and B, and two for each of the X, Y and Z dimensions. Table I and Figure 4 show the number of point clouds in each frame as well as the corresponding bandwidth requirement for different Azure Kinect sensor camera resolutions. In the efficiency mode, the client projects the color resolution into the depth space. For example, given the narrow field-of-view unbinned depth mode, each frame contains approximately 267,000 points, resulting in 19.2 Mb (or 2.4 MB) per frame. Given frames produced by the Azure Kinect sensor camera at its maximum rate of 30 Frames Per Second (FPS), the required network bandwidth is 576 Mbps. In the quality mode, the client projects the depth resolution into the color resolution space. For example, in order to achieve Ultra HD resolution, which generates 3.8 million points per frame, the maximum data rate required would be approximately 8 Gbps. Note, however, that this is the theoretical bandwidth for streaming the whole scene. Since we focus on projecting human bodies only, the required bandwidth falls significantly depending on how many people are captured by the sensor camera at the same time, as well as their position, posture and movement relative to the sensor cameras. For streaming one single person, the maximum data rate required at Ultra HD resolution and 30 FPS is around 2.9 Gbps. Furthermore, the holographic teleportation system utilizes the Zstandard library [44] to compress data before sending it over the network. The data compression level is controlled by the server via the signaling setting frame at the beginning of the session. The data compression, applied at the client in real time, further reduces the data rate without losing frame quality. Note that frames across multiple sensor cameras may exhibit only minimal differences, so more sophisticated compression schemes could be applied to capture and exploit the spatial-temporal frame coherency and further reduce the volume [17]. As a trade-off, this comes at the cost of computation [17]. However, exploring different data compression mechanisms is out of the scope of this paper. In addition to the point cloud data, the holographic teleportation system includes one byte of source id in the payload to indicate frames from different sources or different network locations of the teleported objects. Some additional data is also transmitted for the skeleton and joint information, which can be neglected due to its small size compared to the point clouds.

The server sends latest-frame requests to the clients every short, adjustable period of time, while the clients receive the frame requests and respond with a list of their latest produced frames. With network bandwidth limitations and possibly high latencies, frames might not be transmitted to the server as fast as they are captured. On this occasion, new frames are queued in the client buffer awaiting the next frame request. When the buffer is full, the client starts dropping old frames to create space for newer captured frames. Note that sending frames upon explicit client request has a couple of advantages. On the one hand, it enables potential frame-granularity dynamics (e.g., different resolutions per frame) according to varying network conditions and client processing ability. Moreover, blindly pushing frames from multiple cameras via a shared 5G link leads to imbalanced throughput, so the frame requests allow the server to synchronize the frame rate between different sensor cameras. On the other hand, this frame-per-request behavior comes at the cost of scalability
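The per-point and per-frame byte budgets above can be made concrete with Python's `struct` module. This is an illustrative sketch, not the system's actual wire format: field widths follow the description in this section, but network byte order, signed 16-bit coordinates, and zlib (standing in for Zstandard) are assumptions.

```python
import struct
import zlib  # stand-in for the Zstandard codec the system actually uses

# Frame header as described in Section III: frame type (1 byte), data
# length (4), compression (4), timestamp (4), optional QoE/QoS control
# signal (4) -- 17 bytes in total.
HEADER = struct.Struct("!Biiii")
# Each transmitted point: one byte per R, G, B and two bytes per X, Y, Z.
POINT = struct.Struct("!BBBhhh")

def encode_frame(points, frame_type=1, level=6, timestamp=0, qoe=0):
    """Pack a list of (r, g, b, x, y, z) tuples into header + compressed payload."""
    raw = b"".join(POINT.pack(*p) for p in points)
    payload = zlib.compress(raw, level)
    return HEADER.pack(frame_type, len(payload), level, timestamp, qoe) + payload

# Bandwidth arithmetic for the uncompressed narrow field-of-view unbinned case:
points_per_frame = 267_000
bits_per_frame = points_per_frame * POINT.size * 8   # ~19.2 Mb per frame
mbps_at_30fps = bits_per_frame * 30 / 1e6

assert POINT.size == 9 and HEADER.size == 17
assert 575 <= mbps_at_30fps <= 578                    # ~576 Mbps, as in the text
assert 3_800_000 * POINT.size * 8 * 30 / 1e9 > 8      # Ultra HD: ~8.2 Gbps

frame = encode_frame([(255, 0, 0, 120, -45, 980)])
assert len(frame) > HEADER.size
```

Multiplying the 9-byte point record by the point count and the 30 FPS capture rate reproduces the 576 Mbps and ~8 Gbps figures quoted above.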
Fig. 11. Frame delay of FHD resolution level.
Fig. 12. FPS comparison between different resolution levels.
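The client-side buffering described in Section III (queue newly captured frames; when the buffer fills, drop the oldest so the freshest content is kept for the server's next frame request) maps directly onto a bounded deque. A minimal sketch:

```python
from collections import deque

class FrameBuffer:
    """Client-side frame queue: new frames are appended and, when the
    buffer is full, the oldest frames are dropped so the freshest content
    is always available for the server's next latest-frame request."""

    def __init__(self, capacity: int):
        # deque(maxlen=...) silently evicts from the left on overflow.
        self.frames = deque(maxlen=capacity)

    def capture(self, frame):
        self.frames.append(frame)

    def on_frame_request(self):
        """Hand every buffered frame to the server and clear the queue."""
        batch = list(self.frames)
        self.frames.clear()
        return batch

buf = FrameBuffer(capacity=3)
for i in range(5):            # network too slow: 5 captured, room for only 3
    buf.capture(f"frame-{i}")
assert buf.on_frame_request() == ["frame-2", "frame-3", "frame-4"]
assert buf.on_frame_request() == []
```

Dropping from the old end rather than refusing new frames is what keeps the rendered hologram close to live time when the uplink stalls.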
TABLE II
FPS and Throughput Comparison Between BBR and CUBIC
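Table II contrasts BBR and CUBIC. On Linux, the congestion control algorithm can be selected per connection via the `TCP_CONGESTION` socket option, which is how such A/B comparisons are typically scripted. A minimal, Linux-oriented sketch; whether `bbr` is available depends on the kernel, and the helper degrades gracefully elsewhere:

```python
import socket

def set_congestion_control(sock: socket.socket, algo: str):
    """Request a kernel congestion control algorithm (e.g., 'bbr' or
    'cubic') for one connection. Returns the algorithm actually in
    effect, or None where TCP_CONGESTION is unavailable (non-Linux)."""
    if not hasattr(socket, "TCP_CONGESTION"):
        return None
    try:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo.encode())
    except OSError:
        pass  # algorithm not compiled into / enabled in this kernel
    raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    return raw.split(b"\x00", 1)[0].decode()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
in_effect = set_congestion_control(s, "cubic")
assert in_effect is None or isinstance(in_effect, str)
s.close()
```

Running the same upload once with `"cubic"` and once with `"bbr"` on otherwise identical sockets is the per-connection analogue of the system-wide `sysctl net.ipv4.tcp_congestion_control` switch.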
Fig. 19. Throughput comparison of three locations.
Fig. 20. FPS comparison of three locations.
frame delay, multiple cameras working in NFOV_2x2 mode over the 5G uplink can still support a much better frame delay level (e.g., 160.4 ms at the 95th percentile with 3 cameras). Furthermore, as applying coordinated multiple connections can increase bandwidth utilization [6] and mitigate the cwnd degradation caused by network fluctuations [52], further improvement in frame delay and FPS is foreseeable if the multiple connections can be well scheduled and managed.

V. CONCLUSION

With the help of the enhanced network capability and evolved mobile edge architecture in 5G, XR-based holographic teleportation becomes a promising media service type. In this paper, a novel 5G MEC framework for supporting remote production operations of live holographic teleportation services is presented and evaluated. By offloading complex media processing tasks to a remote edge server with powerful processing ability, the local 5G client is able to focus on collecting data from the cameras and leverage the 5G uplink to transfer the raw data to the network edge, significantly reducing the required capital investment. Extensive evaluations have been conducted on a real-life 5G test network based on carefully identified scenarios, with detailed analysis of the achievable throughput and FPS, and of the impact of the transport-layer protocol, radio conditions and application settings in different scenarios. The results indicate that low resolution levels (e.g., HD) can be well supported with satisfactory performance via 5G radio, with stable FPS and throughput, and low frame delay by virtue of the TCP BBR algorithm. Additionally, by understanding the performance under different numbers of persons and cameras and different indoor/outdoor client locations, diversified holographic application scenarios can be designed in the prospering 5G industry.

ACKNOWLEDGMENT

The authors also acknowledge the support from 5GIC industry members (http://www.surrey.ac.uk/5gic) for this work, and also the colleagues in the HOLOTWIN Project (D01-285/06.10.2020) funded by the Ministry of Education and Science of Bulgaria.

REFERENCES

[1] M. Kowalski, J. Naruniec, and M. Daniluk, "LiveScan3D: A fast and inexpensive 3D data acquisition system for multiple Kinect V2 sensors," in Proc. Int. Conf. 3D Vis. (3DV), Lyon, France, 2015, pp. 318–325.
[2] J. Huang et al., "An in-depth study of LTE: Effect of network protocol and application behavior on performance," ACM SIGCOMM Comput. Commun. Rev., vol. 43, no. 4, pp. 363–374, 2013.
[3] J. Hyun, Y. Won, J.-H. Yoo, and J. W.-K. Hong, "Measurement and analysis of application-level crowd-sourced LTE and LTE-A networks," in Proc. IEEE NetSoft Conf. Workshops (NetSoft), 2016, pp. 269–276.
[4] Y. Guo, F. Qian, Q. A. Chen, Z. M. Mao, and S. Sen, "Understanding on-device bufferbloat for cellular upload," in Proc. Internet Meas. Conf., 2016, pp. 303–317.
[5] F. Li, X. Jiang, J. W. Chung, and M. Claypool, "Who is the king of the hill? Traffic analysis over a 4G network," in Proc. IEEE Int. Conf. Commun. (ICC), 2018, pp. 1–6.
[6] A. Narayanan et al., "A first look at commercial 5G performance on smartphones," in Proc. Web Conf., 2020, pp. 894–905.
[7] D. Xu et al., "Understanding operational 5G: A first measurement study on its coverage, performance and energy consumption," in Proc. Annu. Conf. ACM Spec. Interest Group Data Commun. Appl. Technol. Archit. Protocols Comput. Commun., 2020, pp. 479–494.
[8] T. Liu, J. Pan, and Y. Tian, "Detect the bottleneck of commercial 5G in China," in Proc. IEEE 6th Int. Conf. Comput. Commun. (ICCC), 2020, pp. 941–945.
[9] D. Gomez-Barquero et al., "IEEE transactions on broadcasting special issue on: 5G for broadband multimedia systems and broadcasting," IEEE Trans. Broadcast., vol. 65, no. 2, pp. 351–355, Jun. 2019.
[10] D. Gomez-Barquero et al., "IEEE transactions on broadcasting special issue on: Convergence of broadcast and broadband in the 5G era," IEEE Trans. Broadcast., vol. 66, no. 2, pp. 383–389, Jun. 2020.
[11] D. Mi et al., "Demonstrating immersive media delivery on 5G broadcast and multicast testing networks," IEEE Trans. Broadcast., vol. 66, no. 2, pp. 555–570, Jun. 2020.
[12] C. Ge et al., "QoE-assured live streaming via satellite backhaul in 5G networks," IEEE Trans. Broadcast., vol. 65, no. 2, pp. 381–391, Jun. 2019.
[13] S. Ha, I. Rhee, and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP variant," ACM SIGOPS Oper. Syst. Rev., vol. 42, no. 5, pp. 64–74, 2008.
[14] N. Cardwell, Y. Cheng, C. S. Gunn, S. H. Yeganeh, and V. Jacobson, "BBR: Congestion-based congestion control," Commun. ACM, vol. 60, no. 2, pp. 58–66, Feb. 2017.
[15] D. Scholz, B. Jaeger, L. Schwaighofer, D. Raumer, F. Geyer, and G. Carle, "Towards a deeper understanding of TCP BBR congestion control," in Proc. IFIP Netw. Conf. (IFIP Networking) Workshops, 2018, pp. 109–117.
[16] E. Atxutegi, F. Liberal, H. K. Haile, K. J. Grinnemo, A. Brunstrom, and A. Arvidsson, "On the use of TCP BBR in cellular networks," IEEE Commun. Mag., vol. 56, no. 3, pp. 172–179, Mar. 2018.
[17] A. Clemm, M. T. Vega, H. K. Ravuri, T. Wauters, and F. D. Turck, "Toward truly immersive holographic-type communication: Challenges and solutions," IEEE Commun. Mag., vol. 58, no. 1, pp. 93–99, Jan. 2020.
[18] F. Qian, B. Han, J. Pair, and V. Gopalakrishnan, "Toward practical volumetric video streaming on commodity smartphones," in Proc. 20th Int. Workshop Mobile Comput. Syst. Appl., 2019, pp. 135–140.
[19] J. Karafin and B. Bevensee, "25-2: On the support of light field and holographic video display technology," in SID Symp. Dig. Tech. Papers, vol. 49, 2018, pp. 318–321.
[20] J. V. D. Hooft et al., "Towards 6DoF virtual reality video streaming: Status and challenges," IEEE ComSoc MMTC Commun. Front., vol. 14, no. 5, pp. 30–37, 2019.
[21] P. A. Kara, A. Cserkaszky, M. G. Martini, A. Barsi, L. Bokor, and T. Balogh, "Evaluation of the concept of dynamic adaptive streaming of light field video," IEEE Trans. Broadcast., vol. 64, no. 2, pp. 407–421, Jun. 2018.
[22] I. Selinis, N. Wang, B. Da, D. Yu, and R. Tafazolli, "On the Internet-scale streaming of holographic-type content with assured user quality of experiences," in Proc. IFIP Netw. Conf. (Networking), 2020, pp. 136–144.
[23] S. Anmulwar, N. Wang, A. Pack, V. S. H. Huynh, J. Yang, and R. Tafazolli, "Frame synchronisation for multi-source holographic teleportation applications—An edge computing based approach," in Proc. IEEE Int. Symp. Pers. Indoor Mobile Radio Commun. (IEEE PIMRC), 2021, pp. 1–6.
[24] T. Taleb, Z. Nadir, H. Flinck, and J. Song, "Extremely interactive and low-latency services in 5G and beyond mobile systems," IEEE Commun. Stand. Mag., vol. 5, no. 2, pp. 114–119, Jun. 2021.
[25] G. E. Favalora, "Volumetric 3D displays and application infrastructure," Computer, vol. 38, no. 8, pp. 37–44, Aug. 2005.
[26] "Microsoft Azure Kinect DK." [Online]. Available: https://docs.microsoft.com/en-us/azure/Kinect-dk/ (Accessed: Aug. 20, 2021).
[27] "Kinect V2." [Online]. Available: https://developer.microsoft.com/en-us/windows/kinect/ (Accessed: Aug. 20, 2021).
[28] "Intel RealSense Technology." [Online]. Available: https://www.intel.co.uk/content/www/uk/en/architecture-and-technology/realsense-overview.html (Accessed: Aug. 20, 2021).
[29] H. Qiu, F. Ahmad, F. Bai, M. Gruteser, and R. Govindan, "AVR: Augmented vehicular reality," in Proc. MobiSys, 2018, pp. 81–95.
[30] A. Maglo, G. Lavoue, F. Dupont, and C. Hudelot, "3D mesh compression: Survey, comparisons, and emerging trends," ACM Comput. Surv., vol. 47, no. 3, pp. 1–41, 2015.
[31] S. Schwarz et al., "Emerging MPEG standards for point cloud compression," IEEE J. Emerg. Sel. Topics Circuits Syst., vol. 9, no. 1, pp. 133–148, Mar. 2019.
[32] R. Schnabel and R. Klein, "Octree-based point-cloud compression," in Proc. Eur. Symp. Point-Based Graph., 2006, pp. 111–121.
[33] D. Graziosi, O. Nakagami, S. Kuma, A. Zaghetto, T. Suzuki, and A. Tabatabai, "An overview of ongoing point cloud compression standardization activities: Video-based (V-PCC) and geometry-based (G-PCC)," APSIPA Trans. Signal Inf. Process., vol. 9, pp. 1–13, Apr. 2020.
[34] D. Meagher, "Geometric modeling using octree encoding," Comput. Graph. Image Process., vol. 19, pp. 129–147, Jun. 1982.
[35] S. Abdelwahab, B. Hamdaoui, M. Guizani, and T. Znati, "Network function virtualization in 5G," IEEE Commun. Mag., vol. 54, no. 4, pp. 84–91, Apr. 2016.
[36] S. Sun, M. Kadoch, L. Gong, and B. Rong, "Integrating network function virtualization with SDR and SDN for 4G/5G networks," IEEE Netw., vol. 29, no. 3, pp. 54–59, May/Jun. 2015.
[37] Y. C. Hu, M. Patel, D. Sabella, N. Sprecher, and V. Young, "Mobile edge computing—A key technology towards 5G," ETSI, Sophia Antipolis, France, ETSI Rep. 11, 2015.
[38] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, "Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges," IEEE Commun. Mag., vol. 55, no. 4, pp. 54–61, Apr. 2017.
[39] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.
[40] N. Elmqaddem, "Augmented reality and virtual reality in education. Myth or reality?" Int. J. Emerg. Technol. Learn., vol. 14, no. 3, pp. 234–242, 2019.
[41] M. Hu, X. Luo, J. Chen, Y. C. Lee, Y. Zhou, and D. Wu, "Virtual reality: A survey of enabling technologies and its applications in IoT," J. Netw. Comput. Appl., vol. 178, Mar. 2021, Art. no. 102970.
[42] Y.-G. Kim and W.-J. Kim, "Implementation of augmented reality system for smartphone advertisements," Int. J. Multimedia Ubiquitous Eng., vol. 9, no. 2, pp. 385–392, 2014.
[43] A. Langley et al., "The QUIC transport protocol: Design and Internet-scale deployment," in Proc. Conf. ACM Spec. Interest Group Data Commun., 2017, pp. 183–196.
[44] "ZSTD Compression Library." [Online]. Available: https://facebook.github.io/zstd/ (Accessed: Aug. 20, 2021).
[45] D. L. Mills, "Internet time synchronization: The network time protocol," IEEE Trans. Commun., vol. 39, no. 10, pp. 1482–1493, Oct. 1991.
[46] S. Subramanyam, J. Li, I. Viola, and P. Cesar, "Comparing the quality of highly realistic digital humans in 3DoF and 6DoF: A volumetric video case study," in Proc. IEEE Conf. Virtual Reality 3D User Interfaces (VR), 2020, pp. 127–136.
[47] G. Liu, Y. Huang, Z. Chen, L. Liu, Q. Wang, and N. Li, "5G deployment: Standalone vs. non-standalone from the operator perspective," IEEE Commun. Mag., vol. 58, no. 11, pp. 83–89, Nov. 2020.
[48] "iPerf3." [Online]. Available: https://iperf.fr/iperf-download.php (Accessed: Aug. 20, 2021).
[49] "SS Tool." [Online]. Available: https://man7.org/linux/man-pages/man8/ss.8.html (Accessed: Aug. 20, 2021).
[50] "NGINX." [Online]. Available: https://www.nginx.com/
[51] X. Xie and X. Zhang, "POI360: Panoramic mobile video telephony over LTE cellular networks," in Proc. 13th Int. Conf. Emerg. Netw. Exp. Technol., 2017, pp. 336–349.
[52] P. Qian, N. Wang, and R. Tafazolli, "Achieving robust mobile Web content delivery performance based on multiple coordinated QUIC connections," IEEE Access, vol. 6, pp. 11313–11328, 2018.
[53] M. Radenkovic and V. S. H. Huynh, "Cognitive caching at the edges for mobile social community networks: A multi-agent deep reinforcement learning approach," IEEE Access, vol. 8, pp. 179561–179574, 2020.
[54] V. S. H. Huynh and M. Radenkovic, "Leveraging social behaviour of users mobility and interests for improving QoS and energy efficiency of content services in mobile community edge-clouds," in Proc. CLOSER, 2020, pp. 382–389.

Vu San Ha Huynh received the B.Sc. (Hons.) degree in computer science from the University of Nottingham in 2016, and the Ph.D. degree from the University of Nottingham in 2021. He has authored in venues including IEEE ACCESS, ACM MobiCom, CLOSER, the International Conference on Pervasive and Parallel Computing, Communication and Sensors, IEEE WiMob, IEEE FMEC, and IEEE IWCMC. His research interests include intelligence in the network transport layer, mobile edge cloud and fog computing, distributed data caching services, mobile heterogeneous opportunistic networks, large-scale Internet service architectures, Internet of Things, and network science.

Ning Wang (Senior Member, IEEE) received the Ph.D. degree in electronic engineering from the Centre for Communication Systems Research, University of Surrey, where he has been working as a Professor. He is currently leading a research team focusing on 5G and beyond networking and applications. He has published over 150 research papers at prestigious international conferences and journals. His main research areas include future networks, multimedia networking, network management and control, and QoS/QoE assurances.

Sweta Anmulwar is currently pursuing the Ph.D. degree in frame synchronisation for multisource holographic teleportation applications with the University of Surrey, U.K. She is a Research SDN Software Developer. She is a key member in the design and development of various software projects, such as SDN Online Lab, Smart City, and Traffic Generators. She has experience in computer networking, software development, and wireless communication. She has conducted various tutorials on SDN at IEEE conferences and a faculty development programme on "SDN and OpenFlow".

De Mi (Member, IEEE) received the B.Eng. degree from the Beijing Institute of Technology, China, the M.Sc. degree from Imperial College London, and the Ph.D. degree from the University of Surrey, U.K., where he is currently with the Institute for Communication Systems, home of the 5G Innovation Centre and 6G Innovation Centre, University of Surrey. He is also a U.K. Engineering and Physical Sciences Research Council Ph.D. Academic Supervisor, and a Visiting Scholar with Shanghai Jiao Tong University, China. His research interests include B5G/6G radio access techniques, radio access network architecture and protocols, hypercomplex signal processing, and next-generation broadcast and multicast communications.