
So how do they do it?

As is well known, only supercomputers can handle the rendering process of a whole city, so the team combined supercomputing with existing VR technology to produce this masterpiece.

And to make this project happen, HLRS developed Herrenberg’s digital twin with the Fraunhofer Institute, the University of Stuttgart, and Kommunikationsbüro Ulmer, starting with a concept they called "space syntax".

Just as the human skeleton provides a scaffolding for all of the other systems and
functions of the human body,
space syntax produces a 2D outline of physical grids in a city, offering a
framework for performing spatial analysis,
such as predicting the likely paths that car or pedestrian traffic might take to move from one point to another.
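To give a rough feel for that kind of analysis, here is a minimal Python sketch (not the HLRS tooling) that treats a few hypothetical street segments as a weighted graph and asks for the shortest, and therefore most likely, route between two points; the intersection names and distances below are invented for illustration.

    # Toy route prediction over a street network, using the networkx library
    import networkx as nx

    streets = nx.Graph()
    # Hypothetical intersections and segment lengths in metres
    streets.add_edge("Marktplatz", "Bahnhofstrasse", weight=250)
    streets.add_edge("Bahnhofstrasse", "Stadtpark", weight=400)
    streets.add_edge("Marktplatz", "Stadtpark", weight=700)

    # The 650 m route via Bahnhofstrasse beats the 700 m direct segment
    route = nx.shortest_path(streets, "Marktplatz", "Stadtpark", weight="weight")
    print(route)  # ['Marktplatz', 'Bahnhofstrasse', 'Stadtpark']

Real space syntax works on the full street grid and looks at much more than shortest paths, but the graph-based idea is the same.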

With the grids in place, the team then added Herrenberg’s geographic information system (GIS) data and traffic control systems data to incorporate topography, road geometry, and detailed traffic information.
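The GIS side often comes down to vector data such as GeoJSON. Purely as an illustration (the real Herrenberg dataset and pipeline are far more detailed), here is a self-contained Python sketch that reads one made-up road feature:

    import json

    # One hypothetical road feature, in the shape a city GIS export might use
    road_geojson = """{
      "type": "Feature",
      "properties": {"name": "Hypothetical Road", "speed_limit_kmh": 30},
      "geometry": {"type": "LineString",
                   "coordinates": [[8.868, 48.596], [8.870, 48.597]]}
    }"""

    road = json.loads(road_geojson)
    coords = road["geometry"]["coordinates"]
    print(road["properties"]["name"], "has", len(coords), "vertices")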

The interesting thing is that they even digitalised the wind. Can you believe it? The team used an open-source fluid dynamics code to create realistic models of how wind and emissions move throughout the city.
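The real simulations run a full fluid dynamics code on a supercomputer, but the basic idea of letting the wind carry emissions across the city can be shown with a toy advection-diffusion loop on a small grid; everything below (grid size, wind speed, diffusion value) is an illustrative assumption, not the team's model.

    import numpy as np

    # Toy 2D grid: a puff of emissions released at one point
    n = 50
    c = np.zeros((n, n))
    c[25, 10] = 1.0

    wind_u = 1        # cells per step, uniform wind blowing in +x (assumed)
    diffusion = 0.05  # small diffusion coefficient for mixing

    for _ in range(20):
        c = np.roll(c, wind_u, axis=1)  # advection: the wind shifts the puff
        # diffusion: simple 5-point stencil, periodic boundaries for simplicity
        c += diffusion * (np.roll(c, 1, axis=0) + np.roll(c, -1, axis=0)
                          + np.roll(c, 1, axis=1) + np.roll(c, -1, axis=1) - 4 * c)

    print("peak concentration is now in column", int(np.argmax(c.max(axis=0))))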

How does it actually differ from the regular VR we usually know?

The HLRS team developed an app to invite city residents to share feedback. Citizens could see the area in digital form even before it was finished, and that brings a higher level of acceptance.
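Just to picture the idea (the app's actual data model isn't described here), a piece of citizen feedback could be as simple as a comment pinned to a spot in the twin; the record and field names below are made up.

    from dataclasses import dataclass

    @dataclass
    class CitizenFeedback:
        # Hypothetical record: a comment pinned to a point in the city model
        latitude: float
        longitude: float
        comment: str
        rating: int  # e.g. 1 (negative) to 5 (positive)

    fb = CitizenFeedback(48.5949, 8.8706, "The new square feels too windy.", 2)
    print(fb)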

The team also plans to explore how artificial intelligence (AI) could better represent the wide range of factors that affect how residents emotionally experience their city.
