
2IV35: Visualization

Construction of an interactive application that implements several visualization techniques for a real-time simulation of uid ow

D.J.W.M.G. Dingen S.J. van den Elzen

(0580528) (0573626)

Department of Mathematics and Computer Science Eindhoven University of Technology January 7, 2009

Abstract

This document describes the methods we used to construct an interactive application that implements several visualization techniques for a real-time simulation of fluid flow, for the course 2IV35: Visualization at Eindhoven University of Technology. The visualization techniques used, color mapping, glyphs, streamlines, slices, stream surfaces, and image-based flow visualization, are described in detail in separate sections.

Contents

1  Introduction
2  Skeleton compilation
   2.1  Division in components
   2.2  Graphical User Interface
   2.3  GL triangle strip color interpolation bug fix
   2.4  Compile instructions
        2.4.1  Qt 4.4.3
        2.4.2  FFTW 2.1.5
        2.4.3  Real time visualization simulation
3  Camera, picking & controls
   3.1  Quaternion based camera
   3.2  OpenGL picking
   3.3  Controls
4  Color mapping
   4.1  Legend
   4.2  Computation
        4.2.1  Scalar data set computation
        4.2.2  Scaling
        4.2.3  Clamping
   4.3  Color maps
        4.3.1  Rainbow color maps
        4.3.2  Grayscale color maps
        4.3.3  Alternating color map
        4.3.4  Black-body color map
        4.3.5  User defined color map
        4.3.6  Overlay alternating color map
5  Glyphs
   5.1  Scalar and vector field data
   5.2  Glyph distribution methods
   5.3  Glyph interpolation methods
   5.4  Glyph parametrization
   5.5  Clamping
   5.6  Lighting
   5.7  Implemented glyphs
        5.7.1  Hedgehogs 2D/3D
        5.7.2  Arrows 3D
        5.7.3  Cones 3D
        5.7.4  Cones silhouette 3D
6  Gradient
   6.1  Gradient computation methods
   6.2  Noise reduction methods
   6.3  Gradient application
7  Streamlines
   7.1  Computation of the streamline
        7.1.1  Euler integration method
        7.1.2  Runge-Kutta second and fourth order integration method
        7.1.3  Integration direction
        7.1.4  A value for Δt
        7.1.5  Integration stop criterion
   7.2  Seed points feeding strategy
        7.2.1  Interpolation
   7.3  Tapering effect
8  Slices
   8.1  Ring buffer
   8.2  Transparency
        8.2.1  Separate
        8.2.2  Blended
        8.2.3  First and Last
        8.2.4  Composed
9  Streamsurfaces
   9.1  Stream surface seed curves
   9.2  Integration method
   9.3  Surface creation, splitting and joining
10 Image-Based Flow Visualization
   10.1  Method
   10.2  Injected noise texture
   10.3  Parameters
   10.4  Notices


1 Introduction

In this document the design decisions and the different visualization techniques used are described and explained. These visualization techniques are implemented in a single program, an interactive real-time simulation of fluid flow. The construction of this program is the practical assignment for the course 2IV35: Visualization: "Construct an interactive application that implements several visualization techniques for a real-time simulation of fluid flow." In the upcoming sections the different visualization techniques are explained; trade-offs are discussed, design decisions are explained and general remarks are given.

The assignment consists of several different steps, which are shown in Figure 1. We implemented all the steps on the right side of the tree, including the bonus step: image-based flow visualization. In Section 2 the steps taken to make the provided skeleton compile with our tools are described. In Section 4 the color mapping technique is described and explained. Section 5 explains the vector glyph technique. In Section 6 the different manners of gradient computation are explained. In Section 7 the streamline technique is explained. Section 8 describes the slices technique with all the different blending techniques. In Section 9 the different design decisions with respect to streamsurfaces are described. Last but not least, Section 10 describes the different aspects of image-based flow visualization.

Some aspects of the left assignment tree have been implemented as well. For example, by adding a circle as a seed curve for streamsurfaces, streamtubes are implemented. A height plot is also implemented, because we made everything three dimensional; even adding force to the simulation with the mouse can be done from every camera angle. All steps were successfully implemented and have led to a better understanding of the different visualization techniques.

Figure 1: Assignment tree; Dark outlined steps are implemented, faded steps are omitted.


2 Skeleton compilation

We chose to adapt the given skeleton to make it compile with the following tools:

    GNU Compiler Collection (GCC) 4.3.2                         http://gcc.gnu.org/
    Qt 4.4.3                                                    http://trolltech.com/products/qt
    Fastest Fourier Transform in the West (FFTW) library 2.1.5  http://www.fftw.org/
    OpenGL Utility Toolkit (GLUT)                               http://www.opengl.org/resources/libraries/glut/

Qt is a cross-platform application framework for C++. The main reason to adapt the skeleton to work with this setup is to provide cross-platform compilation: with this setup we can easily compile the constructed code to run on Linux, Mac and Windows. Another great advantage of Qt is that it has an extensive library of graphical user interface components, which can easily be adapted to serve our needs. The steps that were necessary to make the provided skeleton code work with the above mentioned components are explained in the next sections.

2.1 Division in components

The first adaptation made to the provided skeleton is the division of the different components into separate header and source files. The initialization of the main window is no longer woven into the glFluids component; the code concerning glFluids and the main window is split into two different components. The glFluids code is promoted to a QWidget. A control panel QWidget is also created to handle user interface actions; in this widget all simulation parameters can be controlled. In the main window component the glFluidsWidget and the cPanelWidget are included (see the listing below).
    #ifndef __MAINWINDOW_H__
    #define __MAINWINDOW_H__

    #include <QMainWindow>
    #include <QStatusBar>

    #include "ui_mainwindow.h"
    #include "glfluidswidget.h"
    #include "cpanelwidget.h"

    class CMainWindow : public QMainWindow, private Ui::MainWindow
    {
        Q_OBJECT

    public:
        CMainWindow(QWidget *parent = 0);

    private:
        CGLFluidsWidget *m_pGLFluidsWidget;
        CCPanelWidget   *m_pCPanelWidget;
        QStatusBar      *m_pStatusBar;

    private slots:
    };

    #endif // __MAINWINDOW_H__

2.2 Graphical User Interface


The non-graphical user interface is adapted to a graphical user interface using Qt components. This interface maps all the previous key bindings to graphical components. Operations like direction coloring and setting the fluid viscosity can be performed in a graphical way. Figure 2 shows the graphical user interface along with the control panel and the glFluids OpenGL widget.

Figure 2: First version of the OpenGL fluids widget with graphical user interface.

During the completion of the assignment steps, the graphical user interface is expanded with more components as needed.

2.3 GL triangle strip color interpolation bug fix

In the original application there was a bug which did not show the matter visualization in a proper way: ugly triangles appeared on the left side of the matter in the simulation. This happened because the colors of the drawn triangle strip were not interpolated in the right way. We fixed this by setting the right colors for the triangle strip such that they are interpolated correctly. Figure 3 shows the bug in the original application on the left and the fixed application on the right. Notice how in the right image the left boundary of the fluid flow simulation does not have the triangles that are present in the left image.

Figure 3: (a) Triangle strip color interpolation bug. (b) Fixed application.

2.4 Compile instructions


This section first explains how to build and install the requirements for smoke, which are Qt 4.4.3 and the FFTW 2.1.5 library. After that, the build procedure for the real-time visualization simulation program itself is given. In this section it is assumed that the code is built for Linux. The build procedures for Windows and Mac OS X are almost the same, but will not be discussed here. The main difference when compiling on the different platforms is how the external libraries are included and linked. This is solved by explicitly linking the libraries in the project file, which results in the following code:
    unix {
        LIBS = -lrfftw -lfftw -lglut
    }
    mac {
        LIBS = -framework OpenGL \
               -framework GLUT \
               -L/sw/lib \
               -lrfftw \
               -lfftw
        INCLUDEPATH += /sw/include
    }
    win32 {
        LIBS = -lglut32 -lglu32 -lopengl32 -mwindows -Llib -lFFTW
        INCLUDEPATH += ./include
    }

On each platform the program can easily be built without adjustments by loading the project file into Qt Creator (freely downloadable from http://trolltech.com/developer/qt-creator) and pressing the build-all button. At code level, platform dependent differences are taken into account (for example mouse handling) but will not be discussed here.

2.4.1 Qt 4.4.3

Qt 4.4.3 can be downloaded at ftp://ftp.trolltech.com/qt/source. The file needed is called qt-x11-opensource-src-4.4.3.tar.gz. Save this file to a known directory, say /qt. After the file has been downloaded, open up a terminal and run the following commands (please note that for the last command you need to enter your password):
    user@host:~$ cd /qt
    user@host:/qt$ tar -zxvf qt-x11-opensource-src-4.4.3.tar.gz
    user@host:/qt$ cd qt-x11-opensource-src-4.4.3
    user@host:/qt/qt-x11-opensource-src-4.4.3$ ./configure --prefix=/usr
    user@host:/qt/qt-x11-opensource-src-4.4.3$ make
    user@host:/qt/qt-x11-opensource-src-4.4.3$ sudo make install

After this, Qt 4.4.3 is installed. This can be verified by running qmake -v, which outputs something like:
    user@host:~$ qmake -v
    QMake version 2.01a
    Using Qt version 4.4.3 in /usr

2.4.2 FFTW 2.1.5

FFTW 2.1.5 can be downloaded at http://www.fftw.org/fftw-2.1.5.tar.gz. The file needed is called fftw-2.1.5.tar.gz. Save this file to a known directory, say /fftw. After this file has been downloaded, open up a terminal and run the following commands (please note that for the last command you need to enter your password):


    user@host:~$ cd /fftw
    user@host:/fftw$ tar -zxvf fftw-2.1.5.tar.gz
    user@host:/fftw$ cd fftw-2.1.5
    user@host:/fftw/fftw-2.1.5$ ./configure --prefix=/usr/local
    user@host:/fftw/fftw-2.1.5$ make
    user@host:/fftw/fftw-2.1.5$ sudo make install

After this, FFTW 2.1.5 is installed.

2.4.3 Real time visualization simulation

If both Qt 4.4.3 and FFTW 2.1.5 are installed, the visualization simulation can be compiled. Assuming the source code of the program is placed in /src, run the following commands in a terminal:
    user@host:~$ cd /src
    user@host:/src$ qmake smoke.pro -config release
    user@host:/src$ make

Smoke is now compiled. To run smoke, run the executable found in the source directory by typing in a terminal (from the /src directory):
    user@host:/src$ ./smoke


3 Camera, picking & controls

To allow three dimensional movement around the fluid flow simulation, a quaternion based camera class is implemented. This free rotation and translation will later become very useful when visualizing slices (see Section 8) and streamsurfaces (see Section 9). The free camera movement introduced some problems: adding force to the fluid flow simulation with the mouse was not as easy as before, because now the z-axis has to be taken into account. Because the mouse coordinates no longer map one-to-one onto the fluid flow coordinates, some calculation has to be done to determine where the force has to be added. This problem is solved by implementing a procedure which calculates the right coordinates, using the OpenGL picking mechanism.

3.1 Quaternion based camera

The implemented camera class, which allows for free movement around the fluid flow simulation, is based on quaternions. The details of how the quaternions are implemented are beyond the scope of this assignment and are therefore left out. The user is able to move freely by using the W, A, S, D keys and by holding the Ctrl key while moving the mouse.
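Although the full camera class is omitted here, the core quaternion operations it relies on can be sketched in a few lines. The names below (`Quat`, `qmul`, `fromAxisAngle`, `rotate`) are illustrative, not the names used in the actual camera class:

```cpp
#include <cmath>

// Minimal unit quaternion (w, x, y, z) as used for camera rotation.
struct Quat {
    double w, x, y, z;
};

// Hamilton product: composes two rotations into one.
Quat qmul(const Quat& a, const Quat& b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Build a rotation of `angle` radians around a unit axis (ax, ay, az).
Quat fromAxisAngle(double ax, double ay, double az, double angle) {
    double s = std::sin(angle / 2.0);
    return { std::cos(angle / 2.0), ax * s, ay * s, az * s };
}

// Rotate vector v in place by quaternion q: v' = q * (0, v) * conj(q).
void rotate(const Quat& q, double v[3]) {
    Quat p  = { 0.0, v[0], v[1], v[2] };
    Quat qc = { q.w, -q.x, -q.y, -q.z };
    Quat r  = qmul(qmul(q, p), qc);
    v[0] = r.x; v[1] = r.y; v[2] = r.z;
}
```

Composing two rotations is a single `qmul`, which is why a quaternion camera avoids the gimbal lock problems of Euler angle cameras.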

3.2 OpenGL picking

The problem of no longer being able to add force to the flow simulation, caused by the free camera, is solved by implementing a picking procedure. This picking procedure calculates the exact position on the flow simulation plane to which the mouse points. An extra plane is added directly behind the simulation to enable picking when glyphs are drawn. The extra plane is made transparent, so it is not visible. This extra plane is added because it is possible to click in an empty spot between the glyphs; the plane makes sure the picking procedure always returns a position in the simulation when force is added to the fluid flow.
    void CGLFluidsWidget::startPicking()
    {
        GLint viewport[4];

        glSelectBuffer(PICKBUFSIZE, m_selectBuf);
        glRenderMode(GL_SELECT);

        glMatrixMode(GL_PROJECTION);
        glPushMatrix();
        glLoadIdentity();
        glGetIntegerv(GL_VIEWPORT, viewport);
        gluPickMatrix(m_mouseX, viewport[3] - m_mouseY, 1, 1, viewport);
        gluPerspective(FIELD_OF_VIEW, 1.0 * viewport[2] / viewport[3], 0.1, 10000);

        glMatrixMode(GL_MODELVIEW);
        glInitNames();
        glLoadIdentity();
        m_pCam->Apply();

        // Push one name for picking
        glPushName(1);

        // Draw to hit
        visualize();
    }

    void CGLFluidsWidget::processHits(GLint hits)
    {
        double modelview[16], projection[16];
        int viewport[4];
        float z;

        glGetDoublev(GL_PROJECTION_MATRIX, projection);
        glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
        glGetIntegerv(GL_VIEWPORT, viewport);

        if (hits > 0)
        {
            glReadPixels(m_mouseX, viewport[3] - m_mouseY, 1, 1,
                         GL_DEPTH_COMPONENT, GL_FLOAT, &z);
            gluUnProject(m_mouseX, viewport[3] - m_mouseY, z,
                         modelview, projection, viewport,
                         &m_objx, &m_objy, &m_objz);
        }
    }

    bool CGLFluidsWidget::stopPicking()
    {
        int hits;

        // restoring the original projection matrix
        glMatrixMode(GL_PROJECTION);
        glPopMatrix();
        glMatrixMode(GL_MODELVIEW);

        // returning to normal rendering mode
        hits = glRenderMode(GL_RENDER);
        if (hits > 0)
        {
            processHits(hits);
            return true;
        }
        return false;
    }

3.3 Controls

Several different controls are bound to different actions. An overview of all user keys is given below.

    Control                   Action                                         Dependency
    W                         Zoom in                                        Flow simulation has the focus
    S                         Zoom out                                       Flow simulation has the focus
    A                         Move camera left                               Flow simulation has the focus
    D                         Move camera right                              Flow simulation has the focus
    Ctrl + left mouse button  Rotate camera                                  Flow simulation has the focus
    Alt + P                   Pause the simulation
    Right mouse button        Place user defined seed point for streamline   Drawing streamlines enabled
    Space                     Place a seed curve for a stream surface        Drawing streamsurfaces enabled



4 Color mapping

In this step of the assignment we implemented several different colormapping techniques. The colormapping techniques are applied to the three data sets in our application: the fluid density, the fluid velocity magnitude and the force field magnitude. The user is able to choose between different colormaps. As stated in the assignment, three colormaps have to be implemented: a rainbow colormap, a grayscale colormap and another of our choice. We implemented more than three colormaps to give the user the opportunity to choose the right colormap for the right kind of data. More on these choices is described in the next sections. The implemented colormaps are:

    Black-white colormap: grayscale colormap with 255 colors ranging from black to white
    Grayscale colormap: user defined number of color bands ranging from black to white
    Rainbow colormap: rainbow (red, orange, yellow, green, blue, indigo, violet) colormap with 255 colors
    Bands colormap: rainbow colormap with a user defined number of colors
    Alternating colormap: blue-yellow bands which alternate when values change, given a certain sensitivity
    Black-body colormap: 255 colors ranging from black through red and yellow to white
    User defined colormap: interpolated colormap between two user defined colors; the interpolation is also based on the user's preference, with a choice between RGB, Hue, Saturation and Value interpolation

All the above mentioned colormaps can be overlayed with an alternating colormap for which the sensitivity is user defined. This has the great advantage that alternations in values can be spotted much faster. More on this advantage and the overlayed alternating colormap is described in Section 4.3.6. All implemented colormaps can be applied to three different scalar data sets, namely the fluid density rho, the fluid velocity magnitude |v| and the force field magnitude |f|. More on the computation of these scalar data sets is described in Section 4.2.1.

4.1 Legend

We chose to implement a vertical color legend bar. This, as opposed to a horizontal color legend bar, makes it intuitive for the user that colors higher in the legend represent higher data values. The legend is drawn in the same OpenGL window in which the real time fluid simulation is rendered. Because of this, the cursor coordinates have to be adapted to the new situation. At the left side of the simulation we want to apply a correction which is equal to 2 times the legend width; at the right side we do not want a correction. To achieve this we interpolate the coordinates, which results in the following interpolation code:
    static float correction = 2 * m_legendWidth;
    if (mx < correction)
        return;
    else {
        mx -= correction * (1 - (mx - correction) / (m_winWidth - 2 * m_legendWidth));
    }



4.2 Computation

4.2.1 Scalar data set computation

For applying the colormap to the three data sets, the values of the data sets have to be computed. The first data set, the fluid density rho, is given by the external FFTW library. The second and third data sets, the fluid velocity magnitude |v| and the force field magnitude |f|, have to be computed on the fly. These magnitudes are computed by taking the vector lengths (see Formulas 1 and 2). After computation all scalar data values are multiplied by a factor 10 because the raw data values are really small.

    |v| = sqrt(vx^2 + vy^2)    (1)
    |f| = sqrt(fx^2 + fy^2)    (2)

4.2.2 Scaling

Both scaling and clamping are implemented for all different color maps. When the scaling option is selected, the entire actual range of the data values at the current time is mapped onto the selected color map. The minimum and maximum data values are tracked by the application, therefore every occurring data value is shown in the visualization, for both scaling and clamping. As a consequence, the displayed values in the legend are updated in real time and therefore change dynamically. This consequence is also described on page 130 of [Tel]: "(...) if we do not know f(t) for all values t before we start the visualization, we cannot compute the absolute scalar range [fmin, fmax] (...). In such situations, a better solution might be to normalize the scalar range separately for every time frame f(t). Of course, this implies drawing different color legends for every time frame as well." An effect of the updating legend is that it is hard to interpret the data in real time. A solution to this problem is to pause the simulation (which is of course possible in our simulation); the current values of the legend are then correct for the time frame shown, and can easily be interpreted. An example of the implemented scaling and clamping features is shown in Figure 4. In the left image the data values are clamped to the range [0..1]; in the right image the exact same data values are scaled, such that the highest value corresponds to the color highest in the legend. The shown data values in the right image apply to the range [0.0236323, 0.539868].

Figure 4: Data values clamped (a) and scaled (b).



4.2.3 Clamping

The user is able to clamp the displayed data values for all different color maps. A minimum and a maximum value can be defined; only data values within this range are displayed using the selected color map. Data values that are not within the defined range, less than the minimum and/or larger than the maximum, are clamped to the minimum and maximum respectively. The clamping option can be used, for example, to show only the extremely high (or low) values, by specifying a small range with both a high (or low) minimum and maximum value. Figure 5 shows the clamping option applied to the range [0.9, 0.95]. The left figure shows the entire range [0..1]; in the right image only high values within the range [0.9, 0.95] are shown, values outside this range are clamped.
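The clamping step itself is a simple range restriction before the color lookup; a sketch with a hypothetical function name:

```cpp
// Clamp a data value to the user-defined [minVal, maxVal] range before the
// color lookup; out-of-range values collapse onto the range boundaries.
float clampValue(float value, float minVal, float maxVal) {
    if (value < minVal) return minVal;
    if (value > maxVal) return maxVal;
    return value;
}
```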

Figure 5: (a) Data values, no clamping. (b) Clamping set to the range [0.9, 0.95]; only high values are shown.

4.3 Color maps

4.3.1 Rainbow color maps

Figure 6: (a) Rainbow color map, (b) grayscale color map and (c) black-body color map [BI07].

According to several research papers [BI07], [Hea96] the rainbow colormap (see Figure 6) is one of the worst colormaps one can use. Still, the visualization community widely uses the rainbow colormap.



Not only does the rainbow color map confuse viewers through its lack of perceptual ordering and obscure data through its inability to present small details, it also actively misleads the viewer by introducing artifacts into the visualization [BI07]. The biggest problem with the rainbow colormap is that it is not perceptually ordered. A grayscale colormap, for example, is perceptually ordered: it is immediately clear that lighter shades of gray represent higher values. For a rainbow colormap this greater-than relationship is not immediately clear. To interpret the data, one has to know the precise order of the colors of a rainbow; if this is not the case, data is misinterpreted. To solve this problem a legend could be introduced, but this leads to unnecessary distraction. Not only is the perceptual ordering a problem, the rainbow colormap also introduces artifacts. Because of the sudden changes in colors, the user may think that values suddenly differ greatly, while this is not the case at all. This effect is shown in the upper images of Figure 7; the lower image shows the black-body colormap applied to the same data values, which does not introduce artifacts. This negative effect, known as banding, becomes even stronger as the number of displayed colors in the rainbow colormap is decreased. Although we know that the rainbow colormap is, for the above mentioned reasons, one of the worst colormaps to use, we did implement it (as it is part of the assignment), but with the option to show a legend next to it, so the data can be interpreted in the intended way.
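For reference, a rainbow colormap can be built from piecewise-linear tent functions per channel. The sketch below is one common formulation in the spirit of [Tel]; the constant `dx` and the exact breakpoints are one possible choice, not necessarily the ones used in our program:

```cpp
#include <cmath>
#include <algorithm>

// Piecewise-linear rainbow: value in [0,1] is stretched onto [dx, 6-dx] and
// each channel is a tent function over part of that interval, giving the
// familiar blue -> green -> red progression.
void rainbow(float value, float* r, float* g, float* b) {
    const float dx = 0.8f;
    value = std::min(std::max(value, 0.0f), 1.0f);
    value = (6.0f - 2.0f * dx) * value + dx;
    *r = std::max(0.0f, (3.0f - std::fabs(value - 4.0f) - std::fabs(value - 5.0f)) / 2.0f);
    *g = std::max(0.0f, (4.0f - std::fabs(value - 2.0f) - std::fabs(value - 4.0f)) / 2.0f);
    *b = std::max(0.0f, (3.0f - std::fabs(value - 1.0f) - std::fabs(value - 2.0f)) / 2.0f);
}
```

The hue sweep of this mapping is exactly what causes the banding artifacts discussed above: equal steps in the data produce perceptually unequal steps in color.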

Figure 7: (a-b) Negative banding effect of the rainbow colormap. (c) The black-body colormap applied to the same data values as in (a-b); here the banding effect is absent.



4.3.2 Grayscale color maps

Two grayscale color maps are implemented, namely the RGB interpolated black-white colormap and the grayscale color bands map. The RGB interpolated black-white colormap has 255 shades of gray. According to the book [Tel] a usable colormap has to have 64 to 255 different shades; this is to prevent color banding. In our implementation we give the user the option to set the number of different shades for the grayscale color bands map. We experienced that indeed above 64 shades of gray there is no real color banding anymore. Also, because the implemented program is a smoke simulation, realism is preferred, and color banding breaks the realistic look of the simulation. Figure 8 shows the grayscale color bands map with different numbers of shades of gray.

Figure 8: Grayscale colormap with an increasing number of shades of gray: (a) 4, (b) 16, (c) 32 and (d) 256 shades respectively.

4.3.3 Alternating color map

As an extra option in our program we chose to implement an alternating color map. An alternating color map is used to emphasise relatively large differences in data values. Where the data has more or less the same value the color stays the same, and when the value becomes relatively smaller or greater the current color changes to another color. These two colors alternate, and therefore the color map is called an alternating color map. The sensitivity can be set by the user; the value is initialised to 0.01, as this in practice gave the best results for our data set. For the alternating colormap we chose the colors blue and yellow to deal with the problem of colorblindness. Colorblindness affects 10 percent of the male population, and is therefore


a great problem in visualization [BI07]. Colorblind people tend to have difficulty seeing the difference between the colors green and red. We avoided this problem by taking the alternating colors blue and yellow. Figure 9 shows the alternating colormap applied to matter (left image) and arrow glyphs (right image).
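One straightforward way to realize such a map is sketched below. Note this is an illustrative simplification: it alternates on absolute bands of width `sensitivity`, while the description above triggers on relative changes between values; the names are ours:

```cpp
#include <cmath>

// Alternating blue/yellow color map: the color flips each time the data value
// crosses a multiple of `sensitivity` (0.01 by default in our program).
// Blue and yellow are used instead of red/green so the map stays readable
// for colorblind viewers.
void alternating(float value, float sensitivity, float* r, float* g, float* b) {
    int band = (int)std::floor(value / sensitivity);
    if (band % 2 == 0) { *r = 0.0f; *g = 0.0f; *b = 1.0f; }  // blue band
    else               { *r = 1.0f; *g = 1.0f; *b = 0.0f; }  // yellow band
}
```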

Figure 9: Alternating blue-yellow colormap, sensitivity set to 0.01. (a) Drawn using matter. (b) Drawn using three dimensional arrow glyphs.

4.3.4 Black-body color map

In our visualization simulation we included the option to apply a black-body color map (see Figures 6 and 7). The black-body colormap is the best choice if nothing is known about the data or task [BI07], because of its perceptual ordering and its use of color to avoid contrast effects. In our case we do know something about the data, but not enough to select a typical best fitting colormap. The only thing we know is that the data received from the FFTW library is non-discrete but continuous, and resembles a fluid flow. Our opinion is that for our fluid simulation the black-body color map is the best choice.

4.3.5 User defined color map

To let our simulation support people with less common types of colorblindness (for example violet colorblindness), we implemented the user defined color map. For this color map the user can set from which color to which color the map interpolates. The user is also able to select the interpolation function: the colors can be interpolated via RGB, Hue, Saturation or Value. Figure 10 shows a user defined colormap from blue to brown, interpolated with the four different options.

Figure 10: User defined colormap from blue to brown, interpolated via (a) RGB, (b) Hue, (c) Saturation and (d) Value respectively.



4.3.6 Overlay alternating color map

On top of all the implemented color maps we implemented the option to have an overlayed alternating color map. This alternating color map is laid on top of the currently selected colormap. This way the actual values of the data can be read from the currently selected color map, while large changes in values can easily be spotted because of the alternating overlay. The sensitivity of the alternating overlay can be set by the user. Because in the smoke simulation there are no great differences between neighbouring values (for example, you will not encounter a data value of 10 next to a data value of 100), the alternating colormap and the overlayed alternating colormap are not really useful in this particular situation. There are no great differences between neighbouring values because of the way the data values are computed (by the external FFTW library): force is simply added to the current value in the selected area and the data value gradually rises there, so no great differences in value occur. The overlayed alternating colormap becomes more useful when the tracked scaling option is enabled. The right image of Figure 11 shows the overlayed alternating colormap on top of the black-body colormap. Nevertheless this option is implemented to show that in some situations it can be a good choice, and experimentation has indeed shown that in our case the alternating colormap is not a good choice.

Figure 11: (a) The black-body colormap applied to the data set, (b) the overlayed alternating colormap on top of the black-body colormap, making it easier to see groupings of values that are more or less the same. (c-d) The same applied to the rainbow colormap.


5 Glyphs

In the third step of the assignment glyphs are implemented using several different glyph techniques. Glyphs are icons that represent the values of a vector field; these values are encoded in a single glyph, for example through its color, width and height. The shape and appearance of the glyphs can also be varied in many ways, for example 2D/3D arrows, 2D/3D hedgehogs, or triangles. Glyphs can be used to visualize one scalar field and one vector field at the same time. Glyphs are implemented to visualize the three data sets: the fluid density, the fluid velocity magnitude and the force field magnitude.

5.1 Scalar and vector field data

In a single glyph both a vector field value and a scalar field value are encoded. The user is able to set the scalar field to the density rho, the fluid velocity magnitude |v|, or the force field magnitude |f|. The vector field can be set to either the fluid velocity v or the force field f. The fluid velocity magnitude and force field magnitude are computed by taking the vector length as described in 4.2.1. The vector field direction and magnitude are visualized by the orientation and length of the glyph. The scalar field value is visualized by the color of the glyph; the length and thickness of the glyph can also encode the scalar field. All these fields can be adjusted by the user.

5.2 Glyph distribution methods

Part of the assignment to implement glyphs is to implement a mechanism to specify where to draw the glyphs. We implemented three different ways of placing the glyphs:

Uniform
Random
Uniform user defined

The uniform method aligns the glyphs on a regular grid. Because this regular grid causes some interpretation problems, as described in [Tel], random placement is implemented. In regular grids with highly dense areas, the perception of the diagonal orientation of the vector glyphs is weakened because of the uniformity of the sampling points. This problem can be solved by sub-sampling the data set using a randomly distributed (instead of regularly arranged) set of points. The user is able to define how many random glyphs are drawn. The left image of Figure 12 shows an image that is hard to interpret because of a highly dense area; the right image shows the solution to this problem, randomly placed glyphs. The number of uniform glyphs can also be set, by selecting the Uniform user defined option. Initially this is set to 50 × 50 glyphs, but it can be adjusted by the user. In Figure 13 the number of glyphs is set to 300 × 300, making the glyphs look like matter. The user is able to break the symmetry by setting the number of horizontal glyphs to a different value than the number of vertical glyphs. Because sample points do not always coincide with actual grid points, and thus have no value, different interpolation methods are implemented as described in 5.3.
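The two automatic placement strategies can be sketched as follows; the function names and the cell-centered grid layout are our own assumptions, not the application's actual code.

```cpp
#include <cstdlib>
#include <vector>

struct Point2 { double x, y; };

// Uniform placement: nx-by-ny glyph sample points on a regular grid over a
// [0,w] x [0,h] domain (cell-centered).
std::vector<Point2> uniformSamples(int nx, int ny, double w, double h) {
    std::vector<Point2> pts;
    for (int j = 0; j < ny; ++j)
        for (int i = 0; i < nx; ++i)
            pts.push_back({ (i + 0.5) * w / nx, (j + 0.5) * h / ny });
    return pts;
}

// Random placement: sub-sample the domain with randomly distributed points,
// breaking the regularity that weakens the perception of diagonal flow.
std::vector<Point2> randomSamples(int n, double w, double h, unsigned seed) {
    std::srand(seed);
    std::vector<Point2> pts;
    for (int i = 0; i < n; ++i)
        pts.push_back({ w * (std::rand() / (double)RAND_MAX),
                        h * (std::rand() / (double)RAND_MAX) });
    return pts;
}
```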


Figure 12: (a) Highly dense area and (b) the solution, randomly placed glyphs.

Figure 13: User defined uniform 300 × 300 glyphs.

5.3 Glyph interpolation methods

The data values for the glyphs are computed by two different interpolation methods; the user is able to select the interpolation method that suits him best. The methods to choose from are nearest neighbor interpolation and bilinear interpolation. The nearest neighbor method speaks for itself: to compute the value of a point P, it takes the value of the nearest grid point Q and gives P the same value as Q. More formally, it is checked in which quadrant of the cell the point lies, and the data value of the grid point belonging to that quadrant is taken. For example, in Figure 14, if a value needs to be found for point P, it is checked which is the nearest neighbor of P, in this case Q12. Point P is then given the exact same value as point Q12. Bilinear interpolation takes the four corner values around a point into consideration and interpolates these pairwise; the resulting values are then interpolated again to find the value for the given point. For example, in Figure 14 a value for point P has to be found. Q11, Q21, Q12 and Q22 are taken into consideration. First Q12 and Q22 are interpolated to find a value for R2. Next Q11 and Q21 are interpolated to find the value for R1. Finally R2 and R1 are interpolated to find the value for P.


Figure 14: Example of bilinear interpolation [Wik08]

In formula form bilinear interpolation is computed by (where f(x, y) gives the value at point (x, y)):

f(x,y) = \frac{f(Q_{11})(x_2-x)(y_2-y) + f(Q_{21})(x-x_1)(y_2-y) + f(Q_{12})(x_2-x)(y-y_1) + f(Q_{22})(x-x_1)(y-y_1)}{(x_2-x_1)(y_2-y_1)} \qquad (3)

Of course bilinear interpolation gives better results, but because of the many interpolations it is also much slower than the nearest neighbor interpolation method. In practice we do not see much difference between the two interpolation methods, and therefore we have chosen nearest neighbor interpolation as the default.
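Both interpolation methods can be sketched as a self-contained illustration of Formula 3 and the quadrant test; the `Cell` layout and names are our own assumptions, not the application's actual code.

```cpp
#include <cmath>

// Scalar samples at the four cell corners: Q11 = (x1,y1), Q21 = (x2,y1),
// Q12 = (x1,y2), Q22 = (x2,y2).
struct Cell {
    double x1, x2, y1, y2;
    double q11, q21, q12, q22;
};

// Nearest neighbor: take the value of the corner in whose quadrant (x,y) lies.
double nearestNeighbor(const Cell& c, double x, double y) {
    int qx = (x - c.x1 < c.x2 - x) ? 0 : 1; // 0 -> x1 side, 1 -> x2 side
    int qy = (y - c.y1 < c.y2 - y) ? 0 : 1;
    if (qx == 0) return (qy == 0) ? c.q11 : c.q12;
    return (qy == 0) ? c.q21 : c.q22;
}

// Bilinear: interpolate along x twice, then along y (equivalent to Formula 3).
double bilinear(const Cell& c, double x, double y) {
    double tx = (x - c.x1) / (c.x2 - c.x1);
    double ty = (y - c.y1) / (c.y2 - c.y1);
    double r1 = c.q11 + tx * (c.q21 - c.q11); // value at (x, y1)
    double r2 = c.q12 + tx * (c.q22 - c.q12); // value at (x, y2)
    return r1 + ty * (r2 - r1);
}
```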

5.4 Glyph parametrization

The glyphs can be fine tuned in a number of ways. The coloring of the glyphs can be set to color map coloring or direction coloring. If direction coloring is selected the glyph is colored depending on its direction. If color map coloring is selected the glyph gets a color depending on the value of the selected data set at that specific point. All previously described color maps can be applied to glyphs.

5.5 Clamping

Glyphs can also be clamped if their length is longer than the grid cell size. If this happens glyphs will overlap each other, making it difficult to interpret the data. There are three different ways to set the clamping:

Uniform: All glyphs are made the same length. Independent of the data value, each glyph is scaled to a uniform length.


Normal: All glyphs are scaled depending on their data value. This could mean that glyphs overlap each other.

Clamped: Glyphs are scaled depending on their data value, but are clamped to the grid cell size if they would exceed it.

An example of the clamping set to Normal and to Clamped is shown in Figure 15.
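The three clamping options amount to a small length function; this is an illustrative sketch, not the application's code.

```cpp
#include <algorithm>

enum ClampMode { UNIFORM, NORMAL, CLAMPED };

// Glyph length from the data magnitude under the three clamping options.
double glyphLength(double magnitude, double scale, double cellSize, ClampMode mode) {
    switch (mode) {
    case UNIFORM: return cellSize;                        // same length for all glyphs
    case NORMAL:  return magnitude * scale;               // may overlap neighbours
    case CLAMPED: return std::min(magnitude * scale, cellSize); // capped at cell size
    }
    return 0.0;
}
```

The logarithmic variant mentioned below would replace the `NORMAL` branch with a length that grows ever more slowly as the magnitude increases.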

Figure 15: (a) Clamping option set to Normal. (b) Clamped, making it easier to interpret the data.

It would be interesting to clamp the glyphs with a logarithmic scale, making the vector length scale smaller as the data value increases. This is not implemented because of time constraints.

5.6 Lighting

To make the interpretation of the glyphs easier for the user, lighting is applied. Because of the applied lighting, the position and direction of the three dimensional glyphs are easier to see. The applied lighting is directional (a sunlight effect rather than a point light) and always comes from one direction, independent of the camera position. Bidirectional lighting is also applied, to visualize the back side of polygons. This becomes important later on when visualizing stream surfaces.

5.7 Implemented glyphs

There are a number of glyphs which the user can select to apply to the different data sets. A selection can be made of Hedgehogs 2D or 3D, Cones 3D, Cones silhouette 3D and Arrows 3D. Different glyphs are useful in different types of situations; each glyph has its advantages and disadvantages. Below the different types of glyphs are explained and the advantages and disadvantages of each glyph type are clarified. Glyphs are implemented using an openGL display list. The great advantage of a display list is that the glyphs are precompiled and rotations and transformations can be done on the model coordinates. After rotating and transforming, the list is called and the glyph is displayed in world coordinates, speeding up the rendering significantly.


5.7.1 Hedgehogs 2D/3D

Two dimensional hedgehogs are basically just short lines. Within these glyphs several different variables can be encoded: the color, length and direction can be used to encode the data values. Hedgehogs have the advantage that they are simple to implement and fast to render. There are however some obvious problems with hedgehogs. Because the hedgehogs are 2D they are hard to interpret, since no lighting model can be applied to them. This problem is easily solved by making 3D hedgehogs on which for example Gouraud shading can be applied; therefore three dimensional hedgehogs are also implemented. Also, because of the simplicity of a hedgehog it is not clear in which direction the glyph is pointing. This problem is solved by extending the hedgehog with an arrow point in the direction the data is flowing, implemented in the form of three dimensional arrows (yet these arrow points introduce new problems, as described in 5.7.2). Other problems are less easily solved: the visual impression and interpretation of the data depend heavily on the spatial distribution of the glyphs, and using many glyphs gives unwieldy pictures. To solve these problems several different glyph distribution methods are implemented, as described in Section 5.2.

5.7.2 Arrows 3D

As stated in the previous paragraphs, arrow glyphs are hedgehogs with an arrow point attached. This solves the problem of not seeing in which direction the data flows; with the arrow point this becomes clear. Yet another problem is introduced: as the glyph is scaled, how big should one make the arrow point? Should it also scale, or remain the same length all the time? We chose to keep the arrow head always the same length, but this preference may be different for other users. To solve the problem of scaling the arrow head, three dimensional cones are introduced.

5.7.3 Cones 3D

Three dimensional cones are basically the arrow point without the arrow tube. This solves the scaling problem because now the arrow point is the entire glyph, and its length and scale are according to the data to be visualised. To try to solve the problem of cluttering of glyphs, which results in unwieldy pictures, we implemented three dimensional cone silhouette glyphs.

5.7.4 Cones silhouette 3D

Three dimensional cone silhouette glyphs (see the left image of Figure 16) are the same as 3D cone glyphs, but only the wire-frame of the cone is rendered instead of the whole surface. In theory this seemed a good idea to solve the problem of cluttering of glyphs; in practice it turned out rather useless (see Figure 16). Rendering only the wire-frame of a glyph makes the picture even harder to interpret. Because lighting has little effect on a single wire-frame (as opposed to the surface of solid cones), it cannot easily be seen which wire belongs to which glyph, making it extremely hard to interpret the data. This makes cone silhouette glyphs a poor choice of glyph. Clamped 3D arrow glyphs (right image of Figure 16) turned out to be the best choice of glyphs if rendering speed is less important; otherwise 2D hedgehogs are preferred.


Figure 16: (a) Three dimensional solid cones and (b) three dimensional cone silhouette glyphs. (c) The best choice of glyphs, clamped 3D arrow glyphs, applied to the same data set.


6 Gradient

The fourth step of the construction of the fluid visualisation concerns the understanding and implementation of scalar field operators. One scalar field operator in particular has to be implemented into the fluid flow simulation: the gradient operator. By implementing scalar field operators several different quantities on scalar fields can be computed. The visualization of these computations enables the user to do various kinds of analysis on the scalar data field. The gradient of a scalar field is a vector which shows, at every point, the direction in which the quantity of the data varies the most at that point; in other words, the direction of maximal change. The length of the vector encodes the amount of change per unit length in that direction.

6.1 Gradient computation methods

To compute the gradient, denoted as \nabla, of a function f, the partial derivatives of the scalar quantity are computed:

\nabla f = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right) \qquad (4)

Because we do not have a continuous function, but a discrete data value in each point, the formula as stated above (Formula 4) cannot be directly applied to compute the gradient. In our fluid simulation each point in the data field has a discrete scalar value; therefore the partial derivative of this scalar is computed by the finite differences method. To compute the derivative of a discrete value, three different methods can be applied. Forward computation (Formula 5) takes the point itself and the next point into consideration, backward computation (Formula 7) takes the previous point and the current point into account, and central computation (Formula 6) considers both the previous and the next point.

\text{forward:} \quad \partial x = x_{i+1,j,k} - x_{i,j,k} \qquad (5)

\text{central:} \quad \partial x = \tfrac{1}{2} (x_{i+1,j,k} - x_{i-1,j,k}) \qquad (6)

\text{backward:} \quad \partial x = x_{i,j,k} - x_{i-1,j,k} \qquad (7)

All three methods are implemented in the fluid flow simulation and are a user preference. For both the x and y direction the partial derivative of the fluid density rho and of the fluid velocity magnitude |v| are computed and combined into a vector (\partial/\partial x, \partial/\partial y). This vector is visualized in the data field using the glyphs described in Section 5. Which gradient (rho or |v|) is visualized can be selected in the graphical user interface. The gradient of the velocity points in the direction of fastest increase of the velocity magnitude in that point; the relation between the velocity and the gradient of the velocity is given by the superposition of all local velocities.
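The three finite difference schemes of Formulas 5-7 can be sketched for one grid direction as follows (an illustration with grid spacing 1 and a row-major field, not the application's actual code; the y derivative is analogous).

```cpp
enum Scheme { FORWARD, CENTRAL, BACKWARD };

// Partial derivative d(rho)/dx at grid point (i, j) of a row-major
// n-by-n scalar field, by the three finite difference schemes.
double ddx(const double* rho, int n, int i, int j, Scheme s) {
    switch (s) {
    case FORWARD:  return rho[j * n + i + 1] - rho[j * n + i];
    case CENTRAL:  return 0.5 * (rho[j * n + i + 1] - rho[j * n + i - 1]);
    case BACKWARD: return rho[j * n + i] - rho[j * n + i - 1];
    }
    return 0.0;
}
```

Note that the forward and backward schemes are one-sided, so they differ at points where the field curves; the central scheme averages the two.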


6.2 Noise reduction methods

A different approach to compute the first order derivatives of discrete data is to use the so-called Sobel operator [IS73] or the Prewitt operator [PM66]. Both techniques are originally meant for stabilizing edge detection, but have the pleasant effect of reducing the presence of noise in the data. Both techniques are implemented to see if the noise reduction had a positive effect on the interpretation of the gradient data. The first order derivatives are computed by Formulas 8-11. Both techniques turned out to have little effect on our data set, which is to be expected, because there is little noise in our data.

Sobel operator:

\frac{\partial I}{\partial x}(i,j) = I_{i+1,j-1} + 2I_{i+1,j} + I_{i+1,j+1} - I_{i-1,j-1} - 2I_{i-1,j} - I_{i-1,j+1} \qquad (8)

\frac{\partial I}{\partial y}(i,j) = I_{i+1,j+1} + 2I_{i,j+1} + I_{i-1,j+1} - I_{i+1,j-1} - 2I_{i,j-1} - I_{i-1,j-1} \qquad (9)

Prewitt operator:

\frac{\partial I}{\partial x}(i,j) = I_{i+1,j-1} + I_{i+1,j} + I_{i+1,j+1} - I_{i-1,j-1} - I_{i-1,j} - I_{i-1,j+1} \qquad (10)

\frac{\partial I}{\partial y}(i,j) = I_{i+1,j+1} + I_{i,j+1} + I_{i-1,j+1} - I_{i+1,j-1} - I_{i,j-1} - I_{i-1,j-1} \qquad (11)
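As an illustration, the x component of the Sobel operator (Formula 8) on a row-major grid might look like this; the indexing convention (i = column, j = row) is our own sketch, not the application's code. The Prewitt variant simply drops the factor 2.

```cpp
// Sobel x derivative at an interior pixel (i, j) of a row-major n-by-n field.
// The 2x weight on the center row is what distinguishes Sobel from Prewitt.
double sobelX(const double* I, int n, int i, int j) {
    return I[(j - 1) * n + i + 1] + 2.0 * I[j * n + i + 1] + I[(j + 1) * n + i + 1]
         - I[(j - 1) * n + i - 1] - 2.0 * I[j * n + i - 1] - I[(j + 1) * n + i - 1];
}
```

On a linear ramp I(i, j) = i the operator returns 8 everywhere in the interior, i.e. 8 times the unit slope, so a normalization factor is needed if absolute gradient magnitudes matter.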

Figure 17: (a) Three dimensional arrow glyphs and (b) the gradient visualization using the Sobel operator, applied to the same data set.

6.3 Gradient application

As suggested in [Tel], an example application of the gradient is the computation of the normal vector of a surface. A surface normal is given by the formula:

n = \left( -\frac{\partial f}{\partial x}, -\frac{\partial f}{\partial y}, 1 \right) \qquad (12)


The gradient vector has the same direction as this normal vector, though not the same length. The normal can be computed from the gradient by normalizing the vector:

n = \left( \frac{-\partial f/\partial x}{\sqrt{(\partial f/\partial x)^2 + (\partial f/\partial y)^2 + 1}},\; \frac{-\partial f/\partial y}{\sqrt{(\partial f/\partial x)^2 + (\partial f/\partial y)^2 + 1}},\; 1 \right) \qquad (13)

Note that the z component of the normal to the surface is always set to 1, as stated in [Tel]. The z component could also be computed by taking the partial derivative in the z direction; this becomes possible once slices are introduced in the next section. It is not done here because at the time of this implementation slices did not yet exist.


7 Streamlines

In the fifth stage of the construction of the flow visualization program, stream objects were implemented; in particular a special case of stream objects, streamlines. Streamlines are lines visualised in the flow field. These lines follow the trajectory of one particle: a streamline visualizes the path a particle would follow if it were released at the start of the streamline. The starting points of the streamlines are called seedpoints. The streamline is the curved path over a given time interval T of an imaginary particle passing through a given start location, or seed, in a stationary vector field over some domain D [Tel].

7.1 Computation of the streamline

The streamlines are constructed by computing the integral of the field v, because the trajectory through the fluid velocity field v must be visualised over a given time interval. This integration can be done by either the Euler integration method or the Runge-Kutta integration method.

7.1.1 Euler integration method

The Euler integration is given by the following formula:

\int_{t=0}^{T} v(p)\, dt \approx \sum_{i=0}^{T/\Delta t} v(p_i)\, \Delta t \qquad (14)

where p_i = p_{i-1} + v_{i-1} \Delta t.

7.1.2 Runge-Kutta second and fourth order integration method

Another method for computing the streamline, instead of the Euler integration method, is the Runge-Kutta method. The Runge-Kutta method approximates the vector field v between two sample points along a stream object with the average value (v(p_i) + v(p_{i+1}))/2. The great advantage of the Runge-Kutta method over the Euler method is the accuracy of the streamlines for the same timestep Δt. This means that to maintain the same accuracy of the streamlines the timestep can be increased, which in turn means that the process of computing the streamlines is much faster. Alternatively the timestep can be kept the same as with the Euler method, which gives much more accurate results. So there is (as is often the case) a trade off between accuracy and time. The Runge-Kutta method relies on an approximation to the Taylor polynomial. Two methods of computing the integral by approximating the Taylor polynomial are implemented: the second order Taylor polynomial (Runge-Kutta 2nd order) and the fourth order Taylor polynomial (Runge-Kutta 4th order). These again have the time versus accuracy trade off: the second order Runge-Kutta is slightly faster but less accurate than the fourth order Runge-Kutta method.
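The difference between the Euler and second order Runge-Kutta schemes can be sketched as a single integration step; the midpoint form of RK2 is used here and the field is passed as a callable. This is an illustrative sketch under those assumptions, not the application's code.

```cpp
struct Vec2 { double x, y; };

// One Euler step: p_i = p_{i-1} + v(p_{i-1}) * dt (Formula 14).
template <class Field>
Vec2 eulerStep(const Field& v, Vec2 p, double dt) {
    Vec2 k = v(p);
    return { p.x + dt * k.x, p.y + dt * k.y };
}

// One second order Runge-Kutta (midpoint) step: sample the field halfway
// along the Euler step and advance the full step with that velocity.
template <class Field>
Vec2 rk2Step(const Field& v, Vec2 p, double dt) {
    Vec2 k1 = v(p);
    Vec2 mid{ p.x + 0.5 * dt * k1.x, p.y + 0.5 * dt * k1.y };
    Vec2 k2 = v(mid);
    return { p.x + dt * k2.x, p.y + dt * k2.y };
}
```

For the field v(p) = (1, p_x), whose exact streamline through the origin is y = x²/2, a single Euler step of size 1 lands on (1, 0) while the RK2 step lands on the exact point (1, 0.5), illustrating the accuracy gain per step.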


Runge-Kutta second order [PTVF02]:

k_1 = h\, f(x_n, y_n) \qquad (15)
k_2 = h\, f(x_n + \tfrac{1}{2} h,\; y_n + \tfrac{1}{2} k_1) \qquad (16)
y_{n+1} = y_n + k_2 + O(h^3) \qquad (17)

Runge-Kutta fourth order [PTVF02]:

k_1 = h\, f(x_n, y_n) \qquad (18)
k_2 = h\, f(x_n + \tfrac{1}{2} h,\; y_n + \tfrac{1}{2} k_1) \qquad (19)
k_3 = h\, f(x_n + \tfrac{1}{2} h,\; y_n + \tfrac{1}{2} k_2) \qquad (20)
k_4 = h\, f(x_n + h,\; y_n + k_3) \qquad (21)
y_{n+1} = y_n + \tfrac{1}{6} k_1 + \tfrac{1}{3} k_2 + \tfrac{1}{3} k_3 + \tfrac{1}{6} k_4 + O(h^5) \qquad (22)

7.1.3 Integration direction

For the Euler integration method (see Formula 14) and the Runge-Kutta method (see Formulas 17 and 22) there are a number of different ways to compute the integral. There are three ways of computing the integral, of which the last gives the best results:

Implicit integration (backward integration)
Explicit integration (forward integration)
Bilinear integration (central integration)

Implicit integration takes the current point and the point Δt before the current point into account to calculate the integral. Explicit integration takes the current point and the point Δt after this point into account. Bilinear integration takes both the point Δt before and the point Δt after the current point into account to calculate the integral in the current point. The choice of the integration method is left to the user; initially the method is set to central integration because this method gives the best results in practice.

7.1.4 A value for Δt

Now there is the problem of finding the best value for Δt. This value depends on the data set cell sizes, vector field magnitude, vector field variation, desired streamline length, and


desired computation speed. In the ideal case Δt should be adapted to the situation and vary over time, even within the same streamline, because the values differ locally as the streamline proceeds. There is even a fourth order Runge-Kutta method with a self-adjusting step size [WJE00]. For uniform spatial sampling Δt is adjusted, depending on the spatial integration step and the vector magnitude:
if (m_streamLineUniformSpatialSampling) {
    dt = m_streamLineSpatialIntegrationStep / mag;
}

There are however some guidelines on how to choose a right value for Δt. According to Telea [Tel] it is a good idea to set the spatial integration step to around 1/3 of a cell size: "In practice, spatial integration steps of around one-third of a cell size should yield good results for most vector fields." [Tel] To give the user some freedom, the integration timesteps for all three methods (Euler, Runge-Kutta second order, Runge-Kutta fourth order) are user inputs. Initially the spatial integration step is set to 1/3 of the cell size, as advised by Telea. Since our cell size is 20, the spatial integration step is initialized at 20/3 ≈ 6.667. According to [Tel], Δt times the vector length should equal the spatial integration step. Therefore Δt is calculated by the following formula:

\Delta t = \frac{\tfrac{1}{3} \cdot \text{cellsize}}{\text{length}(vector)} \qquad (23)
The spatial integration step is important for the density based streamline method: setting this value too small could lead to overlapping streamlines because cells are stepped over.

7.1.5 Integration stop criterion

The length of the streamline depends on the actual vector field values; therefore a maximal time criterion alone is usually not very useful. A better approach is to integrate until a certain streamline length is reached. The integration is stopped not only when the maximum time or length is reached, but also when the actual velocity value is (close to) zero. As suggested by [Tel] this constant is set to 0.0001. A minimum length for a streamline can also be useful: streamlines which are very short are filtered out. By combining the minimum length with the maximal length, certain ranges of streamlines can be visualized. The maximal integration time, maximal streamline length, minimal streamline length and the width of the streamline can all be adapted by the user. Initially these values are set to 1500, 1000, 120 and 3 respectively, because for this fluid flow simulation these turned out to give the best results.
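Putting the stop criteria together, a streamline trace could look like the following sketch (Euler stepping for brevity; the names and structure are our own assumptions, not the application's code).

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Trace a streamline from 'seed' until the maximum integration time or the
// maximum arc length is reached, or the field is (close to) zero.
template <class Field>
std::vector<Pt> traceStreamline(const Field& v, Pt seed, double dt,
                                double maxTime, double maxLength) {
    const double EPS = 0.0001;      // near-zero velocity threshold from [Tel]
    std::vector<Pt> line{ seed };
    double t = 0.0, length = 0.0;
    Pt p = seed;
    while (t < maxTime && length < maxLength) {
        Pt vp = v(p);
        double mag = std::sqrt(vp.x * vp.x + vp.y * vp.y);
        if (mag < EPS) break;       // stop: field value (close to) zero
        Pt q{ p.x + dt * vp.x, p.y + dt * vp.y };
        length += mag * dt;         // accumulate arc length
        t += dt;
        line.push_back(q);
        p = q;
    }
    return line;                    // short lines can be filtered afterwards
}
```

The minimum-length filter mentioned above would simply discard the returned polyline if its accumulated length is below the user-set threshold.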

7.2 Seedpoints feeding strategy

Not only the integration method, the choice for Δt and the stop criterion have a great influence on the visualized streamlines; the location and number of seedpoints is equally important. Seedpoints can be placed regularly in the domain, but this leads to cluttering. The advantage of regularly placed seedpoints is that you know for sure the whole domain is covered. The downside is that the generated picture may not be useful anymore because of the


cluttering. A solution to the cluttering problem is to trace a streamline until it gets arbitrarily close to itself or to an already traced streamline. For the fluid flow simulation three different methods of placing the seedpoints are implemented, all of which take this cluttering solution into account.

Random: Seedpoints are placed randomly in the vector field. The number of placed seedpoints is initially set to 10 but can be adapted by the user. The streamlines generated by the seedpoints are interactive: if the flow simulation is running, the streamlines are updated in real time. Figure 18 shows the matter and the corresponding streamlines generated from 200 seedpoints.

Figure 18: (a) Matter fluid flow visualization with (b) the corresponding streamlines generated from 200 randomly placed seedpoints

User defined: By choosing this option in the graphical user interface the user is able to place the seedpoints wherever he/she wants, by right clicking on the desired place in the flow simulation. These user defined streamlines are also updated in real time.

Density based: For evenly spaced streamlines of arbitrary density the method and algorithm of [Lef97] is implemented. This method generates evenly distributed streamlines. Because the computation takes significantly more time than the Random and the User defined seedpoint placement, this method is not updated in real time, and the flow simulation has to be paused. This method is slower than the random method, but gives far better results. An example showing the difference between placing the seedpoints on a regular grid and the density based method of Lefer et al. [Lef97] is shown in Figure 19.

7.2.1 Interpolation

Because seedpoints can be placed anywhere within the domain, a seedpoint does not always coincide with a grid point. The user is not limited in placing the seedpoints


Figure 19: Long streamlines with seed points placed on a regular grid (left); the same flow field computed using the Lefer streamline placement method (right) [Lef97]

because of this. If the seedpoint does not lie on a grid point, the value at the selected seedpoint is computed by either nearest neighbor interpolation or bilinear interpolation (as described in Section 5.3), using the four surrounding cell points. An example of bilinear interpolation is shown in Figure 14.

7.3 Tapering effect

Because the streamlines are traced until they come close to themselves or to another already traced streamline, disparities in density appear in the resulting image, which leads to visual artifacts. Turk [TB96] suggested to taper the ends of the streamlines by decreasing the thickness of the lines as they get closer to another one. The thickness of the streamlines, to create the tapering effect, can be computed by the thickness coefficient as suggested in [Lef97]:

\text{thicknessCoef} = \begin{cases} 1.0 & d \ge d_{sep} \\ \dfrac{d - d_{test}}{d_{sep} - d_{test}} & d < d_{sep} \end{cases}, \qquad \text{thicknessCoef} \in [0, 1] \qquad (24)

where d is the distance to the closest streamline. This tapering effect, and the visual artifacts that appear without it, are shown in Figure 20. Because of time constraints the tapering effect is implemented differently, but it generates similar results: the tapering is implemented by increasing and decreasing the line width of a streamline at its beginning and end. The length over which the line width increases and decreases is initially 10 timesteps and can be adapted by the user. If the actual number of timesteps is smaller than the user specified number, half the timesteps of the streamline are used for the tapering effect. Initially the taper effect is enabled, to prevent the appearance of visual artifacts. Figure 21 shows the fluid flow visualization without and with the tapering effect, generated with the density based method.
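Lefer's thickness coefficient (Formula 24) itself is a small function; a sketch, assuming the distance d to the closest streamline has already been computed.

```cpp
// Thickness coefficient of Formula 24 [Lef97]: full width while the distance
// d to the closest streamline is at least dsep, tapering linearly to zero as
// d drops toward dtest (for d < dtest the line would be terminated anyway).
double thicknessCoef(double d, double dsep, double dtest) {
    if (d >= dsep) return 1.0;
    return (d - dtest) / (dsep - dtest);
}
```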


Figure 20: Streamlines computed without and with the tapering effect [Lef97]

Figure 21: Density based streamlines generated without (a) and with (b) the tapering effect


8 Slices

The goal of the sixth stage of the construction of the fluid flow simulation was to implement and understand the time-dependent slices technique. Slices are used to visualize a higher dimension of the fluid flow simulation; in this specific application, slices are used to visualize the time dimension of the fluid flow. This is achieved by stacking several two dimensional planar grids on top of each other along the z-axis, so that the simulation now becomes a cube instead of a plane (see Figure 22). Through slices, volumetric data sets can be visualized; here they are used to show how the different data values change over time.

Figure 22: One hundred separate slices stacked on top of each other along the z-axis.

8.1 Ring buffer

The slices technique is achieved by implementing a ring buffer structure. In this buffer the two dimensional planar grids are stored; each place in the buffer represents a different moment in time, and every consecutive buffer place stores the data values of the planar grid one moment later in time. The planar grids in this buffer are then drawn on top of each other, equally spaced along the z-axis. The two dimensional planar grids are stored as a slice frame struct in the ring buffer. Each slice frame struct has its own data and a display list, which is rendered once (precompiled) and then called to speed up the rendering process. Each slice frame structure has a pointer to the next slice frame to be rendered.
// Slice frame structure
typedef struct SliceFrame {
    fftw_real *m_pVx,  *m_pVy;   // (vx, vy)   = velocity field at the current moment
    fftw_real *m_pFx,  *m_pFy;   // (fx, fy)   = user-controlled simulation forces, steered with the mouse
    fftw_real *m_pRho;           // smoke density at the current moment (rho)
    fftw_real *m_pVx0, *m_pVy0;  // (vx0, vy0) = velocity field at the previous moment
    fftw_real *m_pRho0;          // smoke density at the previous moment (rho0)
    GLuint m_displayList;        // holds the display list for this slice to speed up rendering
} *SliceFramePtr;
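The ring buffer logic itself can be sketched independently of the per-slice data; in this illustrative sketch an `int` stands in for a `SliceFrame`, and the names are our own assumptions rather than the application's code.

```cpp
#include <vector>

// Fixed-capacity ring buffer of slice frames: pushing a new time step
// overwrites the oldest slice instead of allocating a new one.
struct SliceRing {
    std::vector<int> frames;  // stand-in for the per-slice data
    int head;                 // index of the most recent slice

    explicit SliceRing(int n) : frames(n, 0), head(0) {}

    void push(int frame) {    // store the newest time step
        head = (head + 1) % (int)frames.size();
        frames[head] = frame;
    }

    int at(int age) const {   // age 0 = newest slice, 1 = one step older, ...
        int n = (int)frames.size();
        return frames[((head - age) % n + n) % n];
    }
};
```

Drawing the stack then amounts to iterating `age` from the oldest slice down to 0, which also matches the back-to-front order needed for blending later on.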


The number of planes can be set by the user through the graphical user interface. This value is initially set to 100 because this gave the best results: with 100 timesteps the effect of the change of data values over time can be seen. More time frames increase this effect but also need more processor power; with 100 time frames the fluid flow simulation still runs smoothly, which is why it was chosen as the initial value. The slices technique is implemented for every technique implemented in the previous stages of the construction. A box is drawn around the slice frames to show the area that contains all slice frames.

8.2 Transparency

In the previous steps the two dimensional grid is opaque. A consequence of this is that when the slices are stacked on top of each other, only the first slice is visible; the rest of the slices are hidden underneath it. Therefore several different drawing methods are implemented to solve this problem: separate, blended, first and last, and composed. Below these drawing methods are explained in detail. The different methods can be selected as desired; the previously selected method will still apply to the earlier time frames until they disappear over time. The slice separation distance (the distance between the different slice frames) can be adjusted by the user and is initially set to the width of a cell.

8.2.1 Separate

Slices are drawn along the z-axis without making them transparent. A consequence of this is that only the first slice is visible; by rotating the camera around the cube the outer sides of the other slices can also be shown. Slices are drawn on top of each other such that the first frame is the current time frame and each one after that is 1 time frame later.

8.2.2 Blended

The slices are drawn along the z-axis with blending. This means that the color of the currently drawn pixel is combined with the color value already in the frame buffer. If we denote the color of the currently drawn pixel by src and the color of the pixel in the frame buffer by dst, then the final color of the pixel is:

dst = s_f \cdot src + d_f \cdot dst \qquad (25)

The source and destination weight factors sf and df can be varied in the range [0..1]; these weight factors are called the blending factors. For our fluid flow simulation several different blending factors are implemented and can be selected by the user. The alpha value for slices depends on which drawing method is selected: if matter is drawn the alpha value depends on the rho data value of the fluid flow, and if glyphs are drawn the alpha depends on the velocity of the fluid flow (see the listing below). These functions are fast because they are linear, and in our experience they generate reasonable results.
void CGLFluidsWidget::setGlyphAlpha(POINT2D pos)
{
    // Alpha values depend on the data, but the user can override
    // this value with a global, user-defined value
    if (m_useGlobalAlphaForSlices) {
        // Override the alpha value
        m_currentColor[3] = m_sliceGlobalAlpha;
    } else {
        // Derive the alpha value from the velocity magnitude
        int mPrevVecField = m_vectorField;
        // Make sure we are sampling the velocity vector field
        m_vectorField = VECTOR_FIELD_VELOCITY;
        m_currentColor[3] = 0.5 + (VELOCITY_DATA_SCALE *
            VECTOR3D_Length(&(getInterpolatedDirection(pos))));
        m_vectorField = mPrevVecField;
    }
    glColor4fv(m_currentColor);
}

void CGLFluidsWidget::setMatterAlpha(int idx)
{
    // Alpha values depend on the data, but the user can override
    // this value with a global, user-defined value
    if (m_useGlobalAlphaForSlices) {
        // Override the alpha value
        m_currentColor[3] = m_sliceGlobalAlpha;
    } else {
        // Derive the alpha value from the rho data value
        m_currentColor[3] = 0.0 + (RHO_DATA_SCALE *
            m_pCurrentSliceFrame->m_pRho[idx]);
    }
    glColor4fv(m_currentColor);
}

The implemented blending factor pairs are (shown in Figure 23):

(sf, df) = (SRC_ALPHA, DST_COLOR)
(sf, df) = (SRC_COLOR, DST_COLOR)
(sf, df) = (SRC_ALPHA, ONE_MINUS_SRC_COLOR)
(sf, df) = (SRC_ALPHA, DST_ALPHA)

Besides these blending factors, a global alpha value for each slice is implemented. The global alpha value can be set by the user and is initially set to 0.1. Setting the global alpha value means that every slice gets a transparency of (1 − global alpha value). The great advantage of the global alpha value is that it solves the problem of not being able to look through the slices. Because of the blending and the global alpha value, slices are drawn in reverse order: last slice first, from back to front. An example with the global alpha feature enabled and disabled is shown in Figure 24. Note that when a white background is used for blending, the colors add up to 1 very quickly, which makes the visualization completely white. Therefore a white background should be avoided when blending is enabled. The background color can be adjusted by the user, and is initialized to black.
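The back-to-front compositing with a global alpha can be sketched on the CPU side as follows. This is an illustrative sketch, not the tool's code: the struct and function names are invented, and for readability it uses the standard (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) "over" factor pair rather than one of the four pairs listed above.

```cpp
#include <vector>

// One blend step per Equation (25): dst = sf*src + df*dst, with the
// standard "over" factors sf = src.a and df = 1 - src.a.
struct RGBA { float r, g, b, a; };

RGBA blendOver(const RGBA& src, const RGBA& dst)
{
    float sf = src.a;          // source factor: SRC_ALPHA
    float df = 1.0f - src.a;   // destination factor: ONE_MINUS_SRC_ALPHA
    return { sf * src.r + df * dst.r,
             sf * src.g + df * dst.g,
             sf * src.b + df * dst.b,
             sf * src.a + df * dst.a };
}

// Composite the slice colors back to front (last slice first) onto a
// background, overriding each slice's alpha with the global value.
RGBA composeSlices(std::vector<RGBA> slices, float globalAlpha, RGBA background)
{
    RGBA dst = background;
    for (auto it = slices.rbegin(); it != slices.rend(); ++it) {
        RGBA src = *it;
        src.a = globalAlpha;   // the global alpha override described above
        dst = blendOver(src, dst);
    }
    return dst;
}
```

With a global alpha of 1 the front slice hides everything behind it; with 0 only the background survives, which mirrors why small values around 0.1 let all slices shine through.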


Figure 23: Different blending factors. (a) SRC_ALPHA, DST_COLOR. (b) SRC_COLOR, DST_COLOR. (c) SRC_ALPHA, ONE_MINUS_SRC_COLOR. (d) SRC_ALPHA, DST_ALPHA.

Figure 24: (a) Global alpha blending of 0.05 enabled. (b) No global alpha blending.


8.2.3 First and Last

Only the first and last slice of the fluid flow simulation are drawn (see Figure 25). This will become very useful later on when visualizing stream surfaces (see Section 9).

Figure 25: Only the first and last slice of the fluid flow simulation are visualized.

8.2.4 Composed

For the composed drawing method several different projection methods are implemented: the maximum intensity projection function, minimum intensity projection function, average intensity function, distance to value function, and isosurface function. These projection methods are all mentioned and explained in [Tel] (Figure 26 shows all the implemented projection functions applied to the same data set). They were first implemented per pixel, but this turned out to be too slow. Because the projection functions are a valuable addition to our visualization program, we decided to keep them but apply them per grid point to increase rendering speed. As a result, the output of the projection functions is somewhat more pixelated than the rest of the fluid flow simulation.

Maximum intensity projection function For every grid point p, the maximum intensity projection function shoots a ray r through all the slices and finds the maximum value along this ray. This value is then projected onto the first slice and visualized. Using the parametrized ray notation this projection function is expressed as [Tel]:

I(p) = f( max_{t ∈ [0,T]} s(t) )    (26)


Minimum intensity projection function The minimum intensity projection function is much like the maximum intensity projection function. The only difference is that this function projects the minimum value found along the ray:

I(p) = f( min_{t ∈ [0,T]} s(t) )    (27)

Average intensity function For every grid point p, the average intensity function shoots a ray through all the slices and averages all scalar data values found along this ray. This value is then projected onto the first slice and visualized:

I(p) = f( (1/T) ∫_{t=0}^{T} s(t) dt )    (28)

Distance to value function This projection function differs from the above-mentioned projections in the sense that it does not project a certain value, but a distance. For every grid point a ray is shot to the first point where the scalar value is at least a specified value σ. This distance is then projected onto the first slice and visualized.

I(p) = f( min_{t ∈ [0,T], s(t) ≥ σ} t )    (29)

Isosurface function The isosurface projection function is used to construct an isosurface. For every grid point a ray is shot; if the value σ is found along this ray, the projected grid point gets the according color. If no value σ is found, a background color I0 is assigned:

I(p) = f(σ) if there is a t ∈ [0,T] with s(t) = σ, and I(p) = I0 otherwise.    (30)
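The five projection functions all reduce the samples s(t) along one ray to a single value. A CPU-side sketch over a sampled ray could look as follows; the names are illustrative, not taken from the actual tool, and the distance is measured in sample steps rather than world units.

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Sampled scalar values s(t) along one ray through the slice stack.
using Ray = std::vector<float>;

float maximumIntensity(const Ray& s)               // Equation (26)
{
    return *std::max_element(s.begin(), s.end());
}

float minimumIntensity(const Ray& s)               // Equation (27)
{
    return *std::min_element(s.begin(), s.end());
}

float averageIntensity(const Ray& s)               // Equation (28)
{
    return std::accumulate(s.begin(), s.end(), 0.0f) / s.size();
}

// Sample index of the first value >= sigma; returns the ray length
// when no such value exists.                       Equation (29)
std::size_t distanceToValue(const Ray& s, float sigma)
{
    for (std::size_t t = 0; t < s.size(); ++t)
        if (s[t] >= sigma) return t;
    return s.size();
}

// Isosurface hit test: does any sample reach sigma? Equation (30)
bool isosurfaceHit(const Ray& s, float sigma)
{
    return distanceToValue(s, sigma) < s.size();
}
```

Applying each reducer per grid point instead of per pixel, as described above, trades resolution for speed.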

For the isosurface function a Phong lighting model is implemented [Tel]:

I(p, v, L) = Il (camb + cdiff max(L · n, 0) + cspec max(r · v, 0)^α)    (31)

For calculating the normal vector n, the gradient (as described in Section 6) is used.
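Equation (31) can be sketched directly in code. This is an illustrative sketch with invented names; it assumes all vectors are normalized, and the specular exponent α was lost in the source text and is reconstructed here as a parameter of the standard Phong model.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Phong intensity per Equation (31): ambient, diffuse and specular
// terms scaled by the light intensity Il. L is the light direction,
// n the (gradient-based) normal, r the reflected light direction and
// v the view direction.
float phong(float Il, const Vec3& n, const Vec3& L,
            const Vec3& r, const Vec3& v,
            float camb, float cdiff, float cspec, float alpha)
{
    float diffuse  = std::max(dot(L, n), 0.0f);
    float specular = std::pow(std::max(dot(r, v), 0.0f), alpha);
    return Il * (camb + cdiff * diffuse + cspec * specular);
}
```

The max(·, 0) clamps keep back-facing light and reflections from darkening the surface.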


Figure 26: The different composed projection functions: (a) maximum intensity projection function, (b) minimum intensity projection function, (c) average intensity function, (d) distance to value function, and (e) isosurface function. All applied to the same data set.

9 Streamsurfaces

The aim of the seventh step of the construction of the fluid flow simulation is the implementation and understanding of stream surfaces. Stream tubes and stream ribbons are special cases of stream surfaces. Stream surfaces can be seen as generalized streamlines (from Section 7): if streamlines are generated over time and visualized through the slices (from Section 8), and a surface is placed over these streamlines, a stream surface is created. More formally: "Given a seed curve Γ, a stream surface S is a surface that contains Γ and is everywhere tangent to the vector field." [Tel] The advantage of stream surfaces over streamlines and stream ribbons is that stream surfaces are easier to follow visually. An example stream surface created with our visualization tool is shown in Figure 27.

Figure 27: Streamsurface with supporting streamlines.

9.1 Stream surface seed curves

A stream surface seed curve is similar to the seed point of a streamline. The seed curve is swept through the three-dimensional volumetric data set (the slices) to create the stream surface. For the seed curve several different shapes can be chosen. A special case of shape is the point, which creates a single streamline and thus is a seed point; but a single seed point cannot create a surface and is therefore not useful in this context. For the fluid flow simulation three different seed curves are implemented: a line, a quad, and a circle. The circle is another special case of seed curve, namely one that creates a stream tube. The different seed curves can be selected by the user. Initially the seed curve is a line, because unlike a quad or circle this is not a special case of stream surface. Figure 28 shows the different seed curves applied to the same data set at the same seed point. For all the seed curves several different parameters can be set: for example the seed curve radius (line length in the case of a line), the number of sample points on the line, and the number of sample points on the circle. From every sample point on the seed curve a streamline is traced. Too few sample points on the seed curve will create a non-smooth surface, but too many will significantly slow down the process, and the result may become too dense.

Figure 28: The different implemented seed curves: (a) line, (b) circle, (c) quad. Applied to the same data set at the same seed point.

The placement of a seed curve is another important aspect. Just as with the seed points of streamlines, wrong placement of the seed curve generates an uninteresting stream surface. Because the placement is important, the user can freely move within the three-dimensional volumetric data set and place the seed curves where desired.
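Sampling a seed curve into streamline seed points can be sketched as follows for the line and circle cases (the quad is analogous). The names and the axis-aligned orientation are illustrative assumptions, not the tool's actual interface.

```cpp
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// n sample points spread uniformly over a line segment of the given
// length, centered on `center` and running along the x-axis.
std::vector<Point3> sampleLineSeedCurve(Point3 center, float length, int n)
{
    std::vector<Point3> seeds;
    for (int i = 0; i < n; ++i) {
        float t = (n > 1) ? static_cast<float>(i) / (n - 1) : 0.5f;
        seeds.push_back({ center.x - 0.5f * length + t * length,
                          center.y, center.z });
    }
    return seeds;
}

// n sample points on a circle of the given radius in the xy-plane;
// tracing a streamline from each sample yields a stream tube.
std::vector<Point3> sampleCircleSeedCurve(Point3 center, float radius, int n)
{
    std::vector<Point3> seeds;
    for (int i = 0; i < n; ++i) {
        float phi = 2.0f * 3.14159265f * i / n;
        seeds.push_back({ center.x + radius * std::cos(phi),
                          center.y + radius * std::sin(phi),
                          center.z });
    }
    return seeds;
}
```

The sample count here is exactly the smoothness/speed trade-off discussed above: each returned point becomes one supporting streamline.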

9.2 Integration method

The computation of the streamlines supporting the stream surface is based on the streamline integration methods of Section 7. The user is able to choose between Euler integration and second- or fourth-order Runge-Kutta. Again, as with the streamlines, the different parameters can be set: the time step, the spatial integration step, the maximal time, and the minimal and maximal length of the streamlines. The direction of the integration (forward, backward, or central) is again user-settable.
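A single step of the Euler and second-order Runge-Kutta (midpoint) integrators can be sketched as below; the fourth-order variant follows the same pattern with four field evaluations. The vector field is passed as a callback so the sketch stays independent of the simulation; the names are illustrative.

```cpp
#include <functional>

struct V2 { float x, y; };
using Field = std::function<V2(V2)>;

// Forward Euler: step along the field direction at the current point.
V2 eulerStep(const Field& v, V2 p, float h)
{
    V2 d = v(p);
    return { p.x + h * d.x, p.y + h * d.y };
}

// Runge-Kutta 2 (midpoint): evaluate the field halfway along the
// Euler step and use that direction for the full step.
V2 rk2Step(const Field& v, V2 p, float h)
{
    V2 d1 = v(p);
    V2 mid = { p.x + 0.5f * h * d1.x, p.y + 0.5f * h * d1.y };
    V2 d2 = v(mid);
    return { p.x + h * d2.x, p.y + h * d2.y };
}
```

Backward integration simply uses a negative step h, which matches the forward/backward/central option mentioned above.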

9.3 Surface creation, splitting and joining

The creation of the surface is realized by connecting consecutive streamlines. While tracing the streamlines, every time step a point is placed on each of the streamlines. These points are then connected through a quad mesh between each pair of consecutive streamlines. It may happen that the distance between two streamlines becomes very small or very large. In the case that the distance becomes very small, it is wise to stop tracing one of the two streamlines to speed up the process, because such small polygons would not be drawn anyway. This has no influence on the other streamlines; they simply keep being traced. By stopping the tracing of one of the streamlines the performance is increased and the computational cost is reduced. If the distance between two consecutive streamlines becomes very large, there are two options: split the surface into two different surfaces from a certain point (see Figure 29a), or keep the surface as a whole. Since split surfaces are not always desired, the splitting of stream surfaces is a user parameter. When the joining option of stream surfaces is also enabled, it is possible for holes to appear in the surface (see Figure 29b). These occur if the distance between two consecutive streamlines first becomes very large and the surface is split, and later becomes very small again, so that the lines rejoin. Note that streamlines cannot merge with other streamlines: streamlines only join streamlines from which they first split. The drawing of the supporting streamlines of a stream surface can be enabled or disabled. This option is added because if many streamlines are traced with a small distance between consecutive lines, the picture becomes unclear. By not drawing the supporting streamlines the surface visualization becomes clear again.
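The stop/split decision described above boils down to a distance classification per pair of neighboring streamlines. A minimal sketch, with invented names and thresholds, could look like:

```cpp
#include <cmath>

// Decision for one pair of neighboring streamline points while the
// surface is advanced: stop one line when the pair gets too close
// (polygons too small to draw), split the surface when it drifts too
// far apart, otherwise emit a connecting quad.
enum class PairAction { Connect, StopOne, Split };

PairAction classifyPair(float dx, float dy, float dz,
                        float minDist, float maxDist)
{
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (d < minDist) return PairAction::StopOne;
    if (d > maxDist) return PairAction::Split;
    return PairAction::Connect;
}
```

Re-running the same test at a later time step on a previously split pair, and finding it below maxDist again, is what triggers the joining (and hence the holes) described above.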

Figure 29: (a) Splitting of a stream surface. (b) Splitting and joining of a stream surface, creating holes.

10 Image-Based Flow Visualization

The final step of the construction of the fluid flow simulation was the implementation of image-based flow visualization. The image-based flow visualization method was introduced by van Wijk and Telea [vW02, TvW99]. Image-based flow visualization is a method for texture-based vector visualization. Unlike line integral convolution (LIC), this method creates animated flow textures in real time. Again, adding force with the mouse can be done in real time. Image-based flow visualization is even somewhat simpler to implement than the LIC method. Our implementation is mainly based on the method described in [Tel]. Figure 30 shows an image-based flow visualization generated by our visualization program.

Figure 30: (a) Matter visualization. (b) Corresponding image-based flow visualization.

10.1 Method

Figure 31: Pipeline of image-based flow visualization [vW02]

Image-based flow visualization differs from other visualization techniques such as vector glyphs and streamlines in that it uses textures to visualize the vector field. The main idea is that the vector field grid is warped, and a noise texture is then injected into the warped grid. Examples of warped vector field grids applied to our simulation are shown in Figure 32. The noise injection texture is then blended with a certain factor α with the previous frame buffer texture. In the next time frame the process described above is repeated, and an animated image-based flow is visualized. The image-based flow visualization pipeline is shown in Figure 31. The value of α is initially set to 0.2 as advised by [Tel]: good values in practice are α ∈ [0, 0.2]. In practice it turned out that a value of α = 0.05 gave the best results for our fluid flow simulation; for this value the flow is smooth and therefore best visualized. Integration is done by the Euler integration method (see Section 7).
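The warp step amounts to moving every grid vertex one Euler step along the velocity field. A CPU-side sketch with illustrative names follows; in the real pipeline the warped mesh is rendered with the previous frame as its texture, which this sketch omits.

```cpp
#include <cstddef>
#include <vector>

struct P2 { float x, y; };

// One IBFV advection step: displace each grid vertex by dt times the
// velocity sampled at that vertex (forward Euler). The warped mesh is
// what the previous frame's texture is mapped onto.
std::vector<P2> warpGrid(const std::vector<P2>& grid,
                         const std::vector<P2>& velocity, float dt)
{
    std::vector<P2> warped(grid.size());
    for (std::size_t i = 0; i < grid.size(); ++i) {
        warped[i].x = grid[i].x + dt * velocity[i].x;
        warped[i].y = grid[i].y + dt * velocity[i].y;
    }
    return warped;
}
```

With a large dt or strong forces, vertices can leave the domain or overlap, which is exactly the boundary effect visible in Figure 32b.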

Figure 32: Warped vector field meshes. Note that in (b) the warp pulls the mesh out of its boundaries. Also, the force is so great that points overlap other points. This could be avoided, but then the flow effect would also decrease, and it is therefore accepted.

10.2 Injected noise texture

The visualized image depends heavily on the injected noise texture, so this noise texture should be chosen wisely. To achieve a high spatial contrast, neighboring pixels should have different colors. This is done by generating a random texture of black and white dots. Then there is the question of how big to make the black and white dots; this size is important because it should be correlated with the velocity magnitude. The spot size can be set by the user and is initialized to 2, as suggested in [Tel]: in practice, using a dot size d ∈ [2, 10] pixels gives good results. The noise textures N(x, t) are all time-dependent because they are generated from the original stationary noise texture N(x) by:

N(x, t) = f((t + N(x)) mod 1)    (32)

where f : R+ → [0, 1] is a periodic function with period 1. The number of noise sample textures is initially set to 32 because that is typically enough to capture the periodic behaviour of the function f. As suggested by [Tel], f is set to a simple step function:


// Periodic function
float CGLFluidsWidget::ibfv_f(int t)
{
    return (t > 127) ? 1 : 0;
}

This step function is called the sawtooth function in [vW02]. Another suggestion is to use a sinusoidal function; we tested this, but it gave worse results than the sawtooth function.
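The construction of the time-dependent noise frames from Equation (32) can be sketched as follows. This is an illustrative sketch: it discretizes the phase to an 8-bit range [0, 256) instead of [0, 1), so the step function above (threshold 127) applies directly; the names are invented.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Stationary noise N(x): one random phase per texel in [0, 256).
std::vector<int> makePhases(std::size_t texels, unsigned seed)
{
    std::srand(seed);
    std::vector<int> phase(texels);
    for (auto& p : phase) p = std::rand() % 256;
    return phase;
}

// Periodic step function f with period 256 (the 8-bit analogue of the
// period-1 function in Equation (32)).
float stepF(int t) { return (t % 256 > 127) ? 1.0f : 0.0f; }

// Noise frame t per Equation (32): every texel blinks black/white
// with period 256, offset by its own random phase.
std::vector<float> noiseFrame(const std::vector<int>& phase, int t)
{
    std::vector<float> frame(phase.size());
    for (std::size_t i = 0; i < phase.size(); ++i)
        frame[i] = stepF(t + phase[i]);
    return frame;
}
```

Precomputing a small cycle of such frames (32 in our case) and looping over them gives the same animation without per-frame regeneration.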

10.3 Parameters

The work texture and frame buffer are both set to 512 × 512 pixels, but can be adjusted by the user. The noise textures (also adjustable) are initially set to 256 × 256 pixels. The size of the noise texture is made smaller to save memory and increase performance; to fill the entire area of the frame buffer, OpenGL stretches and replicates the noise texture. All parameters have a huge impact on the generated visualizations. Pictures generated with one variable changed and the rest of the parameters kept constant, to show the effect, are shown in Figures 35, 36 and 37.

10.4 Notices

Choosing textures bigger than 512 × 512 gave problems: when setting the texture to a size above 512 × 512, random video memory was shown on the screen (Figure 34) for unknown reasons. Also, choosing a different simulation time step created unexplainable artifacts and slow rendering. When creating textures with OpenGL the filtering method has to be set, either to linear or to nearest. Nearest takes the value of the closest texel, while linear interpolates between texels. With linear filtering the noise texture is smoother than with nearest. For the noise and work textures it is better to take the nearest method, because the noise shouldn't be smooth; therefore the nearest method is chosen for generating the noise and work textures. Figure 33 shows the two different filtering methods applied to the generated noise and work textures. Because of time constraints the insertion of dye [vW02] is not implemented; this would add great value to the image-based flow visualization.
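The nearest-versus-linear filtering difference can be illustrated with 1D texture sampling. This is a sketch, not the OpenGL implementation: coordinates are in texel units, the names are invented, and edge handling is simplified to clamping.

```cpp
#include <cstddef>
#include <vector>

// Nearest filtering: snap to the closest texel, keeping hard edges
// (what we want for the noise and work textures).
float sampleNearest(const std::vector<float>& tex, float u)
{
    std::size_t i = static_cast<std::size_t>(u + 0.5f);
    if (i >= tex.size()) i = tex.size() - 1;  // clamp at the edge
    return tex[i];
}

// Linear filtering: blend the two surrounding texels, smoothing the
// black/white noise dots into gray.
float sampleLinear(const std::vector<float>& tex, float u)
{
    std::size_t i0 = static_cast<std::size_t>(u);
    std::size_t i1 = (i0 + 1 < tex.size()) ? i0 + 1 : i0;
    float frac = u - static_cast<float>(i0);
    return (1.0f - frac) * tex[i0] + frac * tex[i1];
}
```

Halfway between a black and a white texel, nearest still returns 0 or 1 while linear returns gray, which is exactly the smoothing visible in Figure 33b.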


Figure 33: (a) Nearest filtering method applied to noise texture generation. (b) Linear filtering method applied to noise texture generation. Notice that (a) gives better results than (b).

Figure 34: Random video memory

Figure 35: The alpha factor has a huge impact on the generated visualizations. This figure shows alpha factors of 0.6, 0.4, 0.2 and 0.05, respectively.


Figure 36: The work and frame buffer texture sizes also have a huge impact on the generated visualizations. Values of 10 × 10, 50 × 50, 100 × 100 and 512 × 512 were used to generate these pictures (the noise texture size is kept constant at 512 × 512). Notice how artifacts occur in the rightmost image because the force is relatively big.

Figure 37: Different values of the noise texture size: 10 × 10, 50 × 50, 128 × 128 and 512 × 512 (the work texture is kept constant at 512 × 512).


References
[BI07] David Borland and Russell M. Taylor II. Rainbow color map (still) considered harmful. IEEE Comput. Graph. Appl., 27(2):14–17, 2007.
[Hea96] Christopher G. Healey. Choosing effective colours for data visualization. Technical report, Vancouver, BC, Canada, 1996.
[IS73] I. Sobel and G. Feldman. A 3 x 3 isotropic gradient operator for image processing. Pattern Classification and Scene Analysis, pages 271–273, 1973.
[Lef97] Wilfrid Lefer. Creating evenly-spaced streamlines of arbitrary density. pages 43–56, 1997.
[PM66] J.M.S. Prewitt and M.L. Mendelsohn. The analysis of cell images. Annals NY Acad. Sci., 128:1035–1053, 1966.
[PTVF02] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C++: The Art of Scientific Computing. February 2002.
[TB96] Greg Turk and David Banks. Image-guided streamline placement. pages 453–460, 1996.
[Tel] Alexandru C. Telea. Data Visualization.
[TvW99] Alexandru Telea and Jarke J. van Wijk. Simplified representation of vector fields. pages 35–42, 1999.
[vW02] Jarke J. van Wijk. Image based flow visualization. pages 745–754, 2002.
[Wik08] Wikipedia. Bilinear interpolation. http://en.wikipedia.org/wiki/Bilinear_interpolation, 2008.
[WJE00] Rüdiger Westermann, Christopher Johnson, and Thomas Ertl. A level-set method for flow visualization. pages 147–154, 2000.
