
Interactive Framework of Ubiquitous Volume Rendering of Oral and Maxillofacial Data for Visualization and Pre-surgical Planning

Introduction:
Oral and maxillofacial surgery mainly treats dentofacial deformities and injuries through procedures such as osteotomy, tooth extraction, implant insertion, and TMJ reconstruction. All of these surgeries require a precise evaluation of complex dentofacial abnormalities of the craniofacial skeleton. There is growing interest in feasible pre-surgical planning, which depends both on an accurate diagnosis of the skeletal and dental deformity and on the prediction of the new facial appearance [1].
With the rapid advancement of computed tomography (CT) scanning, and in particular cone-beam computed tomography (CBCT), over the past decade, 3D medical imaging and
visualization have made significant progress in clinical applications [2]. 3D visualization provides clinicians with a more reliable and objective diagnostic process that can be incorporated into surgical planning and simulation. Moreover, advanced imaging and visualization methods assist physicians in interpreting the data and investigating its internal structures. These techniques generate a 3D volume of the facial anatomy and pathology. To extract informative 2D images directly from this 3D scalar data (the volume), a family of methods generally referred to as "volume rendering" has been proposed [3].
Recently, the pervasiveness of Graphics Processing Units (GPUs) has enabled the use of direct volume rendering (DVR) to visualize 3D dentofacial images at interactive frame rates. The essential benefit of DVR is that it permits interactive exploration of complex 3D volumes, including the images produced by CBCT, without prior processing or segmentation. As a result, DVR can noticeably enhance the visualization capabilities of various medical imaging devices such as interventional radiology systems and surgical navigation systems [4].
Despite the interactivity of DVR technology, generating useful and perceptually effective images for all visualization tasks remains a challenge. The simple color and opacity transfer functions applied in most DVR systems lack the discriminating power to separate specific types of tissue with comparable intensities [5]. Difficulty often arises in accurately recognizing the depth and shape of the facial anatomy, especially while adjusting the rendering parameters. Furthermore, unduly cluttered images and unpleasant occlusion patterns that hide the structure of interest are common when rendering 3D images. More expressive transfer functions allow better separation of tissues, and specific strategies exist for enhancing the perception of shape and depth [6]. In many cases, simple analytics combined with information from the imaging modality can notably improve the rendering quality for particular clinical tasks. Regrettably, only a few of these techniques are incorporated into the pre-surgical planning systems currently in widespread use in medical research.
In this paper, we present a customized dynamic shading approach combined with a 2D transfer function. The primary aim is to implement a GPU-based volume rendering framework offering various user-friendly features, especially for the pre-surgical simulation of maxillofacial surgery. Our main contribution is a framework that exposes the conceptual characteristics of the volume rendering pipeline in medical visualization, building on the existing pipeline while encapsulating its elaborate technical details. The framework can also be integrated into platforms such as Unity to enable rapid prototyping of visualization techniques for dental-specific clinical applications.

Related Work:
Direct Volume Rendering (DVR) is a well-known and efficient volumetric data rendering
technology [7]. As a result, DVR has become a more widely used tool in modern medicine
for 3D medical image interpretation and visualization. In DVR, the 3D volume data, such as a CBCT/CT scan, is sampled on a rectilinear grid and treated as a block of semi-transparent medium of variable density. A transfer function maps the scalar value of the medical data at each point to optical properties such as color and opacity, and the effects of these optical properties are integrated to render the medical image. Max [8] offered the first formal DVR formulation, and the GPU-accelerated pipeline described by Kruger and Westermann [9] played a prime role in developing modern hardware-accelerated ray marching.
Moreover, several visualization libraries have been developed specifically for 3D medical imaging, among which the Visualization Toolkit (VTK) is a state-of-the-art library used in various medical imaging platforms [10]. Advances in hardware infrastructure have driven VTK to improve its volume rendering capabilities: new sets of mappers were introduced alongside OpenGL and GPU support to modernize the rendering engine for better performance, and many functionalities have been added, such as cropping, export of shader algorithms, depth peeling, clipping, blending, and masking.
VTK currently supports two primary DVR techniques for rendering 3D medical data: ray casting and 2D texture mapping [11]. Ray casting is more widely used because of its simplicity and its ability to produce high-quality rendered images. Recently, VTK replaced its ray-cast mapper with a GPU-based mapper built on the OpenGL2 module for better texture management and optimized rendering. It also allows developers to modify the ray casting technique for various medical applications [12].
Kruger and Westermann [9] presented a ray casting technique for a single volume that permits the user to select different modes such as iso-surface rendering, maximum intensity projection, and compositing. Krissian et al. [13] presented a GPU-based ray caster in a VTK class that can render two volumes simultaneously, but it does not fully follow the standard VTK visualization pipeline. Bozorgi and Lindseth [14] extended this work with the additional capability of various operations such as clipping, cropping, and transformations. That work primarily enabled users to employ GPUs to design shaders for multiple general purposes; hence, the multi-volume ray-cast technique was implemented in shaders. The authors note that losing depth information is the main side effect of this approach, which can easily lead to misinterpretation of the rendered image.
Similarly, Drouin and Collins [4] proposed a programmable ray integration shading model, a GPU-based ray casting technique that incorporates a ray integration algorithm and combines it with all other VTK-supported graphical primitives. This work provides additional functionality, such as custom OpenGL Shading Language (GLSL) code and support for multiple volumes, for different medical applications. Although flexible and usable in various medical applications, it is computationally expensive. The authors suggest adding a multi-dimensional transfer function to simplify the separation of relevant anatomical structures.
Chaudhary et al. [15] presented an OpenGL modernization of the VTK volume rendering classes, creating a multi-functional, cross-platform, high-performing renderer for multiple types and formats of medical data. This work delivers a new visualization subsystem based on the VTK pipeline, providing flexible systems to address a wide variety of medical use cases. Its primary objective is to support open-source software systems such as 3D Slicer and ParaView, and it provides a unique shader composition approach that enhances the flexibility of various visualization features. Currently, VTK uses three independent 1D transfer functions for volume rendering; increasing the number of parameters can improve the discrimination of structures for better medical image visualization. Therefore, 2D transfer functions are advantageous for increasing the quality of volume rendering.
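To illustrate why a second transfer function axis helps, the following sketch builds a small 2D transfer function over intensity and gradient magnitude, a common pairing for separating material boundaries from homogeneous tissue. The table contents, bin count, and color ramps are illustrative assumptions, not part of VTK or of the framework described here.

```python
import numpy as np

def make_2d_transfer_function(bins=64):
    """Build a (bins x bins x 4) RGBA lookup table.

    Rows index normalized intensity, columns index normalized gradient
    magnitude. Opacity grows with gradient magnitude, so boundaries
    (e.g. a bone/soft-tissue interface) become visible while flat
    regions of the same intensity stay nearly transparent.
    """
    tf = np.zeros((bins, bins, 4), dtype=np.float32)
    intensity = np.linspace(0.0, 1.0, bins)[:, None]
    gradient = np.linspace(0.0, 1.0, bins)[None, :]
    tf[..., 0] = intensity        # red ramps with intensity
    tf[..., 1] = 0.5 * intensity  # green, weaker ramp
    tf[..., 2] = 1.0 - intensity  # blue fades with intensity
    tf[..., 3] = gradient         # opacity follows gradient magnitude
    return tf

def classify(tf, intensity, grad_mag):
    """Map a normalized (intensity, gradient magnitude) pair to RGBA."""
    bins = tf.shape[0]
    i = min(int(intensity * (bins - 1)), bins - 1)
    j = min(int(grad_mag * (bins - 1)), bins - 1)
    return tf[i, j]

tf = make_2d_transfer_function()
# Two voxels with the SAME intensity: a 1D transfer function could not
# distinguish them, but the gradient axis can.
flat = classify(tf, 0.8, 0.05)  # homogeneous region: low opacity
edge = classify(tf, 0.8, 0.90)  # tissue boundary: high opacity
```

The key point is that `flat` and `edge` receive identical colors (same intensity row) but very different opacities, which is exactly the discrimination a 1D intensity-only transfer function cannot provide.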
Furthermore, various open-source platforms rely internally on VTK (e.g., 3D Slicer, MITK, ParaView, Tomviz) and support the implementation of custom application modules. Recently, Wheeler et al. [16] presented a method to integrate VTK volume rendering capabilities into Unity for the development of medical image visualization. It also gives both developers and researchers more flexibility and ease of use when working with the volume rendering features of VTK. However, some VTK features have yet to be implemented in Unity.
In this paper, we propose a multi-pass ray-cast implementation on top of the existing VTK volume rendering class, enabling dynamic fragment shaders and a 2D transfer function technique. The primary goal of this framework is to provide a wide variety of volume rendering capabilities for better real-time 3D medical interaction. The method builds on the work of Chaudhary et al. [15], which provides cross-platform, ubiquitous volume rendering techniques in VTK. Additionally, various operational effects intended mainly for pre-surgical simulations are added. Finally, the framework can be integrated into platforms such as Unity to facilitate easy use of volume visualization in clinical applications of maxillofacial surgery.

Methods:
Ray-casting Method:
Several strategies for accelerating DVR computation have been proposed, out of which ray-
casting techniques explicitly take advantage of diverse hardware architectures. The ray-
casting technique provides high image quality at reasonable frame rates and can be derived
from the volume rendering integral. Furthermore, the approach is parallel in nature and has
the potential to be GPU optimized.
This approach describes an emission-absorption optical model obtained by integrating along the direction of light flow, starting from the point s = s0 and ending at s = D. Each voxel of the volume is treated as a light source I_0 that is partially absorbed by the other voxels on the way to the virtual camera. With absorption coefficient k and emission source term q, the rendering equation reads

I(D) = I_0 \, e^{-\int_{s_0}^{D} k(t)\,dt} + \int_{s_0}^{D} q(s)\, e^{-\int_{s}^{D} k(t)\,dt}\, ds    (1)

The rendering integral can then be approximated by discretization, summing the emissive contributions of volume samples taken at regular intervals along a ray cast from each pixel of the image plane:

I(D) = \sum_{i=0}^{n} C_i \prod_{j=i+1}^{n} T_j    (2)

where C_i is the color of the i-th sample point, T_j is its transparency, and n is the maximal number of sample points. The rendering equation thus attempts to simulate the physical phenomena of light emission and absorption for realistic rendering in 3D medical imaging.
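The discretized sum in Eq. (2) can be checked numerically. The sketch below, using made-up sample colors and transparencies, evaluates the sum-product form directly and then via the iterative front-to-back accumulation that ray casters actually use; the two agree.

```python
def composite_direct(colors, transparencies):
    """Evaluate Eq. (2) literally: I(D) = sum_i C_i * prod_{j>i} T_j."""
    n = len(colors)
    intensity = 0.0
    for i in range(n):
        attenuation = 1.0
        for j in range(i + 1, n):
            attenuation *= transparencies[j]
        intensity += colors[i] * attenuation
    return intensity

def composite_front_to_back(colors, transparencies):
    """Accumulate the same sum iteratively, walking from the sample
    nearest the camera (highest index) toward the farthest. This form
    allows early termination once accumulated transparency is ~0."""
    intensity, transparency = 0.0, 1.0
    for i in reversed(range(len(colors))):
        intensity += transparency * colors[i]
        transparency *= transparencies[i]
        if transparency < 1e-4:  # early ray termination
            break
    return intensity

# Illustrative per-sample values (index 0 = farthest from the camera).
colors = [0.2, 0.5, 0.9]
transparencies = [0.5, 0.6, 0.7]
direct = composite_direct(colors, transparencies)
iterative = composite_front_to_back(colors, transparencies)
```

The iterative form is the one implemented in fragment shaders, since it needs only two running accumulators per ray.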

Fig 1: The emission-absorption model, in which the active emission q at every point s on the ray, multiplied by the absorption along the distance from s to D, is integrated.
Fig 2: An illustration of ray-casting method.
Figure 2 shows the process of the single-volume ray-casting method. Initially, a ray is generated for every frame-buffer pixel. Traversing the ray yields sample points, each of which is assigned an RGBA value by a look-up in the transfer function. After the color is illuminated by the shaders, the sample's contribution is accumulated. A new point on the ray is sampled at the given step size until the end of the volume is reached, and the resulting sum is written into the corresponding pixel of the frame buffer. If the accumulated opacity exceeds a given threshold near 1, the accumulation stops early. The quality of the ray-casting result depends on the choice of shaders and transfer function.
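The traversal loop described above can be sketched as follows. This is a minimal illustration, assuming a scalar field along a single ray, a simple 1D transfer function, and an arbitrary ray extent and step size; it is not the VTK implementation itself.

```python
import numpy as np

def transfer_function(scalar):
    """Map a normalized scalar in [0, 1] to (r, g, b, alpha).
    The ramps here are illustrative assumptions."""
    return np.array([scalar, scalar * 0.5, 1.0 - scalar, scalar ** 2])

def cast_ray(field, step=0.5, ray_length=10.0, opacity_threshold=0.99):
    """March along one ray through `field` (a callable of the ray
    parameter t), compositing front to back with early termination."""
    color = np.zeros(3)
    alpha = 0.0
    t = 0.0
    while t <= ray_length:
        rgba = transfer_function(field(t))
        # standard front-to-back 'over' compositing
        color += (1.0 - alpha) * rgba[3] * rgba[:3]
        alpha += (1.0 - alpha) * rgba[3]
        if alpha > opacity_threshold:  # early ray termination
            break
        t += step
    return color, alpha

# A Gaussian 'blob' of density along the ray stands in for the volume.
dense = lambda t: np.exp(-((t - 5.0) ** 2))
color, alpha = cast_ray(dense)
```

In a GPU implementation, this loop runs in the fragment shader, one instance per frame-buffer pixel, which is what makes ray casting so amenable to hardware acceleration.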
VTK Implementation:
References:
1. Buchart, C., San Vicente, G., Amundarain, A., & Borro, D. (2009, July). Hybrid visualization for
maxillofacial surgery planning and simulation. In 2009 13th International Conference Information
Visualisation (pp. 266-273). IEEE.
2. Hornak, J. P. (2000). Medical imaging technology. Kirk‐Othmer Encyclopedia of Chemical
Technology.
3. Zhang, Q., Eagleson, R., & Peters, T. M. (2011). Volume visualization: a technical overview with a
focus on medical applications. Journal of digital imaging, 24(4), 640-664.
4. Drouin, S., & Collins, D. L. (2018). PRISM: An open source framework for the interactive design of
GPU volume rendering shaders. PloS one, 13(3), e0193636.
5. Viola, I., Kanitsar, A., & Groller, M. E. (2005). Importance-driven feature enhancement in volume
visualization. IEEE Transactions on Visualization and Computer Graphics, 11(4), 408-418.
6. Ljung, P., Krüger, J., Groller, E., Hadwiger, M., Hansen, C. D., & Ynnerman, A. (2016, June). State of
the art in transfer functions for direct volume rendering. In Computer Graphics Forum (Vol. 35, No. 3,
pp. 669-691).
7. Wilson, O., Van Gelder, A., & Wilhelms, J. (1994). Direct volume rendering via 3D textures. UCSC-CRL-94-19, Jack Baskin School of Engineering, University of California at Santa Cruz.
8. Max, N. (1995). Optical models for direct volume rendering. IEEE Transactions on Visualization and
Computer Graphics, 1(2), 99-108.
9. Kruger, J., & Westermann, R. (2003, October). Acceleration techniques for GPU-based volume
rendering. In IEEE Visualization, 2003. VIS 2003. (pp. 287-292). IEEE.
10. VTK. http://www.vtk.org/
11. Schroeder, W. J., Avila, L. S., & Hoffman, W. (2000). Visualizing with VTK: a tutorial. IEEE
Computer graphics and applications, 20(5), 20-27.
12. O'Leary, P., Jhaveri, S., Chaudhary, A., Sherman, W., Martin, K., Lonie, D., ... & McKenzie, S. (2017,
March). Enhancements to VTK enabling scientific visualization in immersive environments. In 2017
IEEE Virtual Reality (VR) (pp. 186-194). IEEE.
13. Krissian, K., Falcón-Torres, C., Arencibia, S., Bogunovic, H., Pozo, J. M., Villa-Uriol, M. C., &
Frangi, A. (2012). GPU volume ray casting of two volumes within VTK. VTK J.
14. Bozorgi, M., & Lindseth, F. (2015). GPU-based multi-volume ray casting within VTK for medical
applications. International journal of computer assisted radiology and surgery, 10(3), 293-300.
15. Chaudhary, A., Jhaveri, S. J., Sanchez, A., Avila, L. S., Martin, K. M., Vacanti, A., ... & Schroeder, W.
(2019). Cross-platform ubiquitous volume rendering using programmable shaders in VTK for scientific
and medical visualization. IEEE computer graphics and applications, 39(1), 26-43.
16. Wheeler, G., Deng, S., Toussaint, N., Pushparajah, K., Schnabel, J. A., Simpson, J. M., & Gomez, A.
(2018). Virtual interaction and visualisation of 3D medical imaging data with VTK and
Unity. Healthcare technology letters, 5(5), 148-153.
