
Movable Dynamic Data Detection and Visualization for Digital Twin City

Ahyun Lee, Juwan Kim, Insung Jang


City & Geospatial ICT Research Section,
Electronics and Telecommunications Research Institute (ETRI), South Korea
{ahyun, juwan, e4dol2}@etri.re.kr

Abstract

A digital twin city can simulate urban phenomena or designs based on a digital model similar to the real city. Building a digital twin model of a city requires static data, such as terrain, buildings, and roads, and dynamic data, such as weather, temperature, and humidity. Moving objects such as vehicles and pedestrians can be reproduced with movable dynamic data that carries 3D position, direction, and time. In this paper, we propose a platform with movable dynamic data detection, reconstruction, and visualization steps. The proposed platform can visualize past urban appearance and road situations in a 3D model at the time and space desired by the user.

Keywords: GIS, digital twin

1. Introduction

A digital twin city is constructed from static and dynamic data. The static data includes 3D geospatial data such as terrain, buildings, roads, and facilities, and the dynamic data includes weather, temperature, humidity, wind, etc. A digital twin city built in a form similar to reality makes it possible to simulate various urban phenomena and designs. However, dynamic data that can move is also required for a digital twin model that resembles a real city. Singapore's elderly monitoring system acquired pedestrian paths by attaching movement sensors to the pedestrians [1]. This approach is limited because movement sensors would have to be attached to every person, car, and bicycle to build general dynamic data for a whole city.

In this paper, we propose a digital twin city platform. The proposed platform detects and visualizes 3D movable dynamic data. Figure 1 shows an example of the visualization results of movable dynamic data: the red cars are 3D dynamic objects that have been reconstructed and visualized with position, time, and 3D pose. In this paper, we describe the structure of the proposed platform and the role of each step.

Fig. 1. Visualization results of the dynamic data in the proposed platform.

2. The proposed platform
The proposed platform consists of movable dynamic data acquisition, detection, reconstruction, and visualization steps. Figure 2 shows the data flow diagram of the proposed platform. We built a testbed to implement the proposed platform. Image data acquisition (1) is performed with four CCTV cameras installed in the testbed area. The CCTV videos are collected in real time by a network video recorder through wireless transmitters. The image data storage server (2) collects image data whose timestamps are synchronized across the cameras. This minimizes the 3D pose error of the reconstructed movable dynamic data caused by time differences between cameras.

Fig. 2. Data flow diagram of the proposed platform: (1) image data acquisition, (2) image data storage server, (3) car / pedestrian detection, (4) car / pedestrian 3D pose estimation, (5) dynamic data storage and management server, and (6) data visualization, with image data (D1) and dynamic data (D2) stores.
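The paper does not specify how the timestamp alignment in step (2) is performed; the following is a minimal sketch, assuming a hypothetical Frame record and a configurable tolerance, of how frames from the four CCTVs could be grouped around a common timestamp so that detections from different cameras refer to nearly the same moment.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Frame:
    camera_id: str      # e.g. "cctv-01" (hypothetical naming)
    timestamp: float    # seconds since epoch, from the recorder clock
    image_path: str     # location of the stored image

def group_synchronized_frames(
    frames_by_camera: Dict[str, List[Frame]],
    reference_camera: str,
    tolerance: float = 0.05,   # assumed 50 ms alignment window
) -> List[Dict[str, Frame]]:
    """For each frame of the reference camera, pick the closest-in-time
    frame from every other camera, keeping the group only if all cameras
    fall within the tolerance."""
    groups = []
    for ref in frames_by_camera[reference_camera]:
        group = {reference_camera: ref}
        complete = True
        for cam, frames in frames_by_camera.items():
            if cam == reference_camera:
                continue
            best: Optional[Frame] = min(
                frames,
                key=lambda f: abs(f.timestamp - ref.timestamp),
                default=None)
            if best is None or abs(best.timestamp - ref.timestamp) > tolerance:
                complete = False
                break
            group[cam] = best
        if complete:
            groups.append(group)
    return groups
```

A tighter tolerance yields fewer usable frame groups but a smaller residual time difference, which is what bounds the 3D pose error mentioned above.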

Cars and pedestrians are detected in all images stored in (2). The detected data consist of the camera ID, time, and bounding box (x, y, width, height). Car / pedestrian detection (3) uses YOLOv4 [2] to detect the dynamic objects in all images from every CCTV. Figure 3 shows the detection results for images from our testbed.

Fig. 3. Car / pedestrian detection results of the testbed images: yellow boxes are cars and the purple box is a person.
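YOLOv4 is the detector cited above [2], but the exact output format used by the platform is not given. The record below is therefore only an illustrative layout for the stated fields (camera ID, time, bounding box), with the raw detector output assumed to be a list of class/box/score dictionaries produced by whatever inference wrapper is actually used.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Object classes that count as movable dynamic data.
TARGET_CLASSES = {"car", "person"}

@dataclass
class Detection:
    camera_id: str                    # which CCTV produced the frame
    timestamp: float                  # frame time in seconds
    class_name: str                   # "car" or "person"
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) in pixels
    confidence: float

def collect_detections(camera_id: str, timestamp: float,
                       raw_results: List[dict]) -> List[Detection]:
    """Convert raw detector output into (camera ID, time, bounding box)
    records as described in the paper.  `raw_results` is assumed to look
    like [{"class": "car", "bbox": (x, y, w, h), "score": 0.9}, ...]."""
    detections = []
    for r in raw_results:
        if r["class"] in TARGET_CLASSES and r["score"] >= 0.5:
            detections.append(Detection(camera_id, timestamp,
                                         r["class"], tuple(r["bbox"]),
                                         r["score"]))
    return detections
```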

Car / pedestrian 3D pose estimation (4) reconstructs the 3D positions and directions of the detection results from (3). In this paper, the 3D pose estimation step has not been implemented yet; therefore, the detection results were visualized on the geospatial platform [3] in 2D, and the directions of the cars in Fig. 1 were adjusted by the user. The reconstructed 3D pose of a car in (4) includes a local ID assigned by tracking each movable dynamic object across consecutive frames of the same camera. The local ID is unique only within consecutive frames of one camera.
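The tracking method used to assign local IDs is not described in the paper. As one plausible sketch, boxes in consecutive frames of the same camera can be associated greedily by intersection over union, with unmatched boxes receiving fresh IDs; the threshold value here is an assumption.

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two (x, y, w, h) boxes."""
    ax1, ay1, aw, ah = a
    bx1, by1, bw, bh = b
    ix = max(0, min(ax1 + aw, bx1 + bw) - max(ax1, bx1))
    iy = max(0, min(ay1 + ah, by1 + bh) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

class LocalIdTracker:
    """Assigns per-camera local IDs by matching each new box to the
    previous frame's box with the highest overlap (greedy matching).
    IDs are only meaningful within consecutive frames of one camera."""
    def __init__(self, iou_threshold: float = 0.3):
        self.iou_threshold = iou_threshold
        self.next_id = 0
        self.previous: Dict[int, Box] = {}   # local ID -> last seen box

    def update(self, boxes: List[Box]) -> List[int]:
        assigned: List[int] = []
        unmatched = dict(self.previous)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for local_id, prev_box in unmatched.items():
                overlap = iou(box, prev_box)
                if overlap >= best_iou:
                    best_id, best_iou = local_id, overlap
            if best_id is None:
                best_id = self.next_id          # new object enters the view
                self.next_id += 1
            else:
                del unmatched[best_id]          # matched: consume the old box
            assigned.append(best_id)
        self.previous = dict(zip(assigned, boxes))
        return assigned
```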
The dynamic data storage and management server (5) stores and manages only the data reconstructed in (4); however, it can access the corresponding target image data (D1) using the camera ID and time information. Data visualization (6) visualizes the reconstructed data along with 3D terrain and buildings. Since the reconstructed data has time and location information, past movable dynamic data can be reproduced for a specific time and space. Figure 4 shows the globe-based geospatial platform [3].

Fig. 4. The geospatial platform of data visualization (6).
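The query interface of the storage and management server (5) is not detailed beyond the stored fields. The sketch below, using a hypothetical in-memory record list, shows how reconstructed dynamic data could be filtered by time window and geographic bounding box so that the visualization step (6) can replay a user-selected time and space.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DynamicRecord:
    camera_id: str
    local_id: int
    timestamp: float     # seconds since epoch
    latitude: float
    longitude: float
    altitude: float
    heading_deg: float   # direction of travel

def query_records(records: List[DynamicRecord],
                  t_start: float, t_end: float,
                  lat_min: float, lat_max: float,
                  lon_min: float, lon_max: float) -> List[DynamicRecord]:
    """Return reconstructed movable dynamic data inside the requested time
    window and bounding box, sorted by time so the visualization step can
    replay it frame by frame."""
    hits = [r for r in records
            if t_start <= r.timestamp <= t_end
            and lat_min <= r.latitude <= lat_max
            and lon_min <= r.longitude <= lon_max]
    return sorted(hits, key=lambda r: r.timestamp)
```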
The proposed geospatial platform was implemented with the Unity3D game engine, which supports multiple platforms. The Unity3D game engine makes it possible to debug in various execution environments, so it is suitable for developing various types of applications. The geospatial platform in Fig. 4 generates a 3D terrain model using a DEM and aerial images, so its accuracy at the road level is low. For visualization of the testbed area, road and terrain images were collected using a drone, and point cloud model data was created as shown in Fig. 5. In the future, we will create mesh-based 3D terrain, building, and facility models from the point cloud model.

Fig. 5. Point cloud data of the testbed area.
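How the DEM is turned into a terrain mesh is not described in the paper. As a rough illustration only, a regular DEM height grid can be converted into mesh vertices and triangles as follows; the grid layout and cell size are assumptions, and the result stands in for the kind of terrain model the platform renders together with the dynamic data.

```python
from typing import List, Tuple

def dem_to_mesh(heights: List[List[float]], cell_size: float
                ) -> Tuple[List[Tuple[float, float, float]],
                           List[Tuple[int, int, int]]]:
    """Build a simple triangle mesh from a regular DEM height grid.
    heights[row][col] is the elevation at that grid point; cell_size is
    the ground spacing between neighbouring points in metres."""
    rows, cols = len(heights), len(heights[0])
    vertices = [(c * cell_size, r * cell_size, heights[r][c])
                for r in range(rows) for c in range(cols)]
    triangles = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            # two triangles per grid cell
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, triangles
```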

3. Conclusions and future works

In this paper, we proposed a framework for the detection, reconstruction, and visualization of movable dynamic data for a digital twin city. We selected a testbed area and installed four CCTV cameras to establish an experimental environment capable of detecting movable dynamic data in real time. In future work, we will detect vehicles and pedestrians in real time and estimate their latitudes, longitudes, altitudes, and directions in 3D. A vehicle may overturn in an accident, so the rotation values about all three axes will be considered in the 3D reconstruction. In addition, we plan to build a 3D mesh model from the acquired point cloud model data, which will visualize the appearance of a digital twin city close to reality.

Acknowledgment

This work was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government [20ZR1200, DNA-based national intelligence core technology development].

References

[1] L. T. Tam, A. C. Valera, H. P. Tan, and C. Koh, “Online detection of behavioral change using unobtrusive eldercare monitoring system,” in Proceedings of the 11th International Conference on Queueing Theory and Network Applications, pp. 1-8, Dec. 2016.
[2] A. Bochkovskiy, C. Y. Wang, and H. Y. M. Liao, “YOLOv4: Optimal Speed and Accuracy of Object Detection,” arXiv preprint arXiv:2004.10934, 2020.
[3] A. Lee and I. Jang, “Spatial Information Platform with VWorld for Improving User Experience in Limited Web Environment,” Electronics, 8(12), 1411, 2019.

