

A Remote Real-time PACS-based Platform for Medical Imaging Telemedicine

Rouzbeh Maani*a,b, Sergio Camorlinga a,b,c, Rasit Eskicioglu a,b
a Computer Science Dept., University of Manitoba, Winnipeg, MB, CANADA;
b TRLabs, 100-135 Innovation Drive, Winnipeg, MB, CANADA;
c Radiology Dept., Faculty of Medicine, University of Manitoba, Winnipeg, MB, CANADA.
* maani@cs.umanitoba.ca

Medical Imaging 2009: Advanced PACS-based Imaging Informatics and Therapeutic Applications, edited by Khan M. Siddiqui, Brent J. Liu, Proceedings of SPIE Vol. 7264, 72640Q · © 2009 SPIE · CCC code: 1605-7422/09/$18 · doi: 10.1117/12.811621

ABSTRACT

This paper describes a remote real-time PACS-based telemedicine platform for clinical and diagnostic services delivered at the different care settings where physicians, specialists and scientists work. The platform aims to provide a PACS-based telemedicine framework for different medical image services such as segmentation, registration and, in particular, high-quality 3D visualization. The proposed approach offers services that are not only widely accessible and real-time, but also secure and cost-effective. In addition, the proposed platform provides a real-time, ubiquitous, collaborative, interactive meeting environment supporting 3D visualization for consultations, a capability that has not been well addressed by current PACS-based applications. With this capability, physicians and specialists can consult with each other from separate places, which is especially helpful for settings where there is no specialist on site or where the number of specialists is not enough to handle all the available cases. Furthermore, the proposed platform can be used as a rich resource for clinical research studies as well as for academic purposes.
Keywords: real-time, collaborative, platform, telemedicine, medical imaging, PACS, RIS, 3D visualization

1. INTRODUCTION
Medical imaging systems have provided a framework for physicians. They automate diagnostic procedures, making them faster, more reliable and less error-prone. The typical diagnostic procedure based on medical imaging traditionally includes different stages and involves a range of diverse specialties. In a simple scenario, after a doctor prescribes a medical image study for a patient, a case study is produced by the image acquisition modalities (e.g. CT, MRI, US). A radiologist then reads the case study images and adds appropriate comments to the acquired images. Eventually, the report, including the image(s) along with the comments, is sent back to the prescribing doctor.
The emergence of modern technologies, and particularly medical imaging systems, has introduced new features to this scenario. For instance, several clinical workflow communications are now carried out via computer networks, and medical data are usually stored and handled by computerized systems such as PACS (Picture Archiving and Communication System) and RIS (Radiology Information System). These technologies accelerate the process of medical-imaging-based diagnosis and, as a result, bring savings in cost and time.
Medical imaging systems have been developed piece by piece over their lifetime. Improvements range from the development of advanced image acquisition equipment to the utilization of high performance technologies such as parallel processing for different image processing operations (e.g. visualization, segmentation, registration). Many attempts have been made in this area, and they have resulted in spectacular improvements in medical imaging services.
Along with all of these improvements, many have tried to make medical imaging services widespread and easy to access. An example is UCIPE [1], which aims to provide ubiquitous medical imaging software. Noting that medical image processing tools are bound to high-cost hardware and incapable of being shared by common medical terminals, UCIPE provides a system with different image processing algorithms via web services. However, UCIPE is neither collaborative nor real-time.
Another effort with roughly the same goal is proposed by IDEAS [2]. IDEAS is a group in e-Health that defines projects in order to provide easy-to-access medical imaging applications. The group has presented a general architecture to support a large variety of Telehealth services. Telecare and Teleradiology are the main applications that have been developed and tested on top of the proposed platform. Nonetheless, the framework does not provide collaborative or real-time support facilities.
There have also been efforts focused on adding real-time response and collaboration support. For example, CoMed [5] allows medical specialists to share patient records and to communicate with each other over the Internet. CoMed consists of a multimedia medical database containing relevant information about laryngeal diseases and a real-time collaboration system including a teleconferencing system, a whiteboard and a chat system. Nevertheless, real-time responses are limited to services with low bandwidth demands.
TeleInViVo [3][4] is another application that has tried to provide widespread and easy-to-access medical imaging services. TeleInViVo was originally designed to serve as a 3D ultrasound visualization tool for medical applications. The key features of the system are fast volumetric visualization algorithms and efficient network collaboration tools for remote data analysis and consultation. In addition to ultrasound images, it also accepts other forms of medical imaging data such as MRI and PET. The application permits users in geographically distributed locations to exchange and collaboratively interact with a shared data set. However, TeleInViVo provides real-time services only for operations that need low bandwidth.
TeleMed [6] attempts to provide a real-time collaborative web-based medical system. It uses a media-rich graphical patient record to allow multiple physicians, possibly located remotely across a wide-area network, to consult on a patient record. Consultations can take place interactively in real-time, or offline using textual or audio annotations combined with graphical markers in the record. As in CoMed, TeleMed real-time operations are limited to special services that do not need high-bandwidth communication.
The utilization of parallel and distributed computing has also been a trend in medical imaging systems. For example, Shastra
[7][8] is a collaborative distributed environment by which experts in a cooperating group can communicate and interact
across a network to solve problems. The Shastra environment consists of a group of applications called tools. Some tools
are responsible for managing the distributed environment, some are responsible for maintaining collaborative sessions,
some provide specific communication services and some provide scientific manipulation functionalities. In spite of all its
advantages, Shastra does not use the Internet and therefore cannot be broadly accessed.
A telemedicine system for remote computer-supported cooperative medical imaging diagnosis is presented in [9]. The
novel component of the proposed system is a Computer-Supported Cooperative Work (CSCW) distributed architecture,
which contains a collaborative toolkit to add audio-conferencing, tele-pointing, window sharing as well as application
synchronization facilities.
DISCIR [10] is another example of a distributed computing system that enables the use of high performance computing components. The system provides a parallel computing environment for 3D image processing from any standard computer connected to the network. A prototype system for remote medical image processing based on the DISCIR architecture is DISMEDI, which provides tools for 3D segmentation, 3D and multi-planar projection, navigation, measuring and histograms. All these tools remotely access a low-cost computing server connected to the hospital network. However, DISCIR does not provide real-time services.
Mayer and Meinzer [11] introduced an approach to parallelize some of the more frequently used image processing
operators on distributed memory architectures. In fact, they came up with a client/server model that is designed for
interactive work with volumetric data. Besides implementing some algorithms such as volume visualization, they have
provided some innovations to support an interactive, real-time environment. Nonetheless, the approach is not widely
accessible.
There are also some tendencies toward using Grid computing in medical imaging applications. However, some issues
related to security (e.g. authentication, secure transfer, secure storage of data, authorization and access control,
anonymization, traceability, etc.) [12] make the applicability of Grid computing in medical imaging applications more
challenging. EDG [12] middleware and testbed is a part of the European DataGrid IST project aiming at providing a
basic grid infrastructure for testing grid-enabled medical applications. Another example is VIVE [13], a grid-enabled
interactive analysis tool for 3D medical images, which facilitates diagnosis, surgical planning, therapy evaluation, and
remote 3D examination. VIVE provides an interactive 3D environment with a simple web-based user interface.



Despite all the developments in medical imaging systems, some challenges remain to be addressed. In this paper we introduce an architectural platform to address some of the current key issues; namely, real-time response, collaboration, security and privacy. Section 2 discusses the challenges we confront and the objectives we try to achieve with the proposed platform. Section 3 discusses the current medical imaging model and its relevant drawbacks. The architecture of the proposed platform and its general workflow are described in sections 4 and 5, respectively. Section 6 gives an initial assessment of the platform, and finally the conclusion is presented in section 7.

2. CHALLENGES AND OBJECTIVES


As mentioned before, there has been a major trend towards making medical imaging systems more accessible. In fact, one of the medical technologies best known for its clinical value is Telemedicine. A substantial objective of Telemedicine is to enable healthcare delivery at any time and anywhere, irrespective of geographical location [14]. However, there are many concerns that should be addressed by applications using telemedicine. Issues such as quality of service, security and privacy, reasonable response time, reliability, cost, and workflow integration are the main technical objectives for those applications in order to achieve proper clinical utilization [15-18].
The utilization of high performance computing techniques in medical image processing has also been popular. The fusion of the two techniques, namely Telemedicine and high performance computing, is considered valuable. However, the cost of high performance computing technologies on the one hand and the performance barrier of the Internet on the other hand have made this integration challenging and disputable.
Given the challenges in telemedicine and high performance computing, as well as those arising from their combination, the proposed PACS-based telemedicine platform aims at providing a framework for medical image processing and visualization services that is widespread, easy to access, real-time, secure, reliable and cost-effective. In addition to the common functions provided by telemedicine applications (i.e. supplying widespread, ubiquitous, all-time facilities), the proposed platform addresses the following characteristics:
1. Real-time services: Although many medical imaging operations can be performed quickly, some of them, such as registration or segmentation, are naturally slow because they are computationally intensive, and the problem escalates when those operations run on a typical desktop computer. One goal of the proposed platform is to offer a real-time service for such time-consuming operations. In this regard, we utilize two methods. First, a cluster of high performance computers is employed in which the operations can be executed in parallel so that the results are provided in real time. Second, for sending the results to the end users, a technique called "screen to be seen" is applied. Since a public, general-purpose network, namely the Internet, is used as the main connection medium, some approach is needed to alleviate its sluggishness. In the "screen to be seen" technique, only the pixel information that should be shown on the users' desktop screens is extracted and sent via the network (a minimal code sketch of this idea follows this list).
2. Remote collaborative interactive 3D-supportive meeting environment: One of the areas that has not been addressed well by current medical applications is a remote, collaborative, interactive, 3D-supportive meeting capability that lets physicians consult with each other over long distances. One of the major objectives of the presented platform is to deliver such an environment, which is highly beneficial for medical teams (e.g. surgery teams that include people with diverse specialties). Using the framework, different specialists can hold a remote virtual meeting from their own offices, share the same 3D image view simultaneously, and see the changes carried out by the different participants as they occur.
3. Security and Privacy: Security and privacy have always been a major issue for telemedicine applications, especially where a shared channel like the Internet is the main communication medium. The proposed platform not only uses common cryptographic approaches for transmitting data, but also employs a distinctive idea to prevent direct access to the raw data by the end users. With the proposed "screen to be seen" technique, only post-processed data, rather than the raw data, is shown on the end users' machines, so the raw patient data never leaves the processing center and privacy and security are better protected.
4. Cost-effectiveness: Cost-effectiveness is one of the key concerns for any telemedicine application. The reason is clear: as long as a telemedicine application is costly, it cannot be used broadly. The proposed platform provides an affordable solution by sharing expensive computing resources that can be used wherever an Internet connection exists. In addition, users avoid the cost of special hardware for performing different medical image processing activities by using the services provided via the platform. In short, end users do not need any special computing or processing unit; they can use inexpensive, typical PCs and still obtain real-time services for medical imaging activities.
5. Ability for research and academic activities: In addition to providing different medical image processing capabilities for physicians, the proposed platform can be used as a rich resource for medical research. Because the platform is shared among many specialists with different purposes and cases, it can be viewed as a rich source of information on which a variety of medical research can be performed. Furthermore, the ability to connect different specialists via an interactive, collaborative, 3D-supportive environment is beneficial not only for clinicians and researchers, but also for students (for academic purposes) and even patients themselves (to understand more about their cases).
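To make the "screen to be seen" idea concrete, the following minimal sketch (in Python, with the rendering step replaced by a synthetic framebuffer) shows the server-side encoding and client-side decoding of a single screen frame. The function names, the 1000 x 800 screen size and the use of zlib are illustrative assumptions, not the platform's actual implementation.

```python
# A minimal sketch of the "screen to be seen" idea: the processing centre renders the
# 3D volume, then ships only the compressed 2D framebuffer that the user's screen will
# display, never the raw image volume.  The rendering step is faked with a synthetic
# framebuffer; names and sizes are illustrative only.
import zlib
import numpy as np

def encode_screen(framebuffer: np.ndarray) -> bytes:
    """Server side: compress the rendered 2D framebuffer (H x W x 3, uint8) for transmission."""
    assert framebuffer.dtype == np.uint8
    return zlib.compress(framebuffer.tobytes(), 6)

def decode_screen(payload: bytes, shape: tuple) -> np.ndarray:
    """Client side: rebuild the pixel array; the client never receives raw patient data."""
    return np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)

if __name__ == "__main__":
    # Stand-in for the output of the rendering pipeline: a smooth 1000 x 800 RGB screen.
    yy, xx = np.indices((800, 1000), dtype=np.uint16)
    screen = ((xx + yy) % 256).astype(np.uint8)
    screen = np.stack([screen] * 3, axis=-1)

    payload = encode_screen(screen)
    restored = decode_screen(payload, screen.shape)
    assert np.array_equal(screen, restored)
    print(f"raw screen: {screen.nbytes} bytes, shipped: {len(payload)} bytes")
```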

3. MEDICAL IMAGING TECHNOLOGIES


Medical imaging systems generally consist of three main parts: Modalities, information management systems and
clinical workstations (Figure 1). Modalities are scanner devices responsible for acquiring medical images. Some popular
modalities include MRI, CT, Ultrasound, etc. The core of the medical imaging system consists of information
management components like the Radiology Information System (RIS) and Picture Archiving and Communication
System (PACS). RIS is responsible for managing the patient records while PACS organizes the medical images acquired
by modalities. These two components usually work together to service the different requests issued by the users. There
are also workstations connected to the PACS and RIS components by which the clinicians can search and access the
patient records and related medical image examinations.

[Figure 1]
Figure 1: The general components of medical imaging systems. It includes modalities on one side, clinical workstations on the other side, and RIS and PACS in the core.
Along with the medical imaging system components mentioned above, there are complementary technologies that try to improve the efficiency, throughput, accuracy, speed and accessibility of medical imaging systems. Some of the most remarkable technologies are [19]:
• Computer-aided diagnosis technologies
• Telemedicine
These technologies attempt to streamline medical imaging procedures and make them simpler for clinicians. Some of these technologies aim to provide processing facilities while others facilitate access to the system. For instance, owing to Telemedicine technologies, medical images can be sent automatically to remote workstations (beyond the territory of hospitals or radiology centers) through a wide area network (WAN).
A key success factor for medical imaging technologies is presenting more information in a way that does not overwhelm the clinician with too much data [19]. As a result of this important consideration, methods such as 3D visualization techniques entered the medical imaging field. In fact, the development of techniques such as 3D visualization, together with segmentation and registration tools, has been helping clinicians to understand abnormalities more accurately. These methods are packaged in medical imaging software (e.g. computer-aided diagnosis software) and yield savings in diagnostic time.
By amalgamating three prominent technologies (medical imaging systems, computer-aided diagnosis and telemedicine), clinicians can gain the benefits of all at the same time. As a result, they are equipped with more precise, reliable, accessible and efficient diagnostic tools on the one hand and can accelerate the diagnosis process on the other. In the integrated model, the clinicians' workstations are equipped with medical imaging software (i.e. computer-aided diagnosis software) with which they can process the medical data, for instance to visualize the images. Thanks to Telemedicine technologies, the workstations can communicate with the medical imaging systems and the data can be transferred to them automatically. Therefore, the clinicians can get the requested data and start processing it on their workstations using the installed software. Figure 2 illustrates this procedure.
This model (Figure 2) is the popular trend in current medical imaging systems. The systems usually use the Internet as the main connection medium since it is inexpensive, easy to access and ubiquitous. Some approaches have suggested the use of high speed, dedicated networks, but because of their cost and limited ease of access these approaches have not become popular. However, in spite of all its advantages, the current model has some drawbacks and shortcomings, the most important of which are slowness, potential security and privacy threats, and the lack of well-equipped collaboration tools.
The first weak point of this model is its sluggishness for large amounts of data. In recent years, due to new developments in medical imaging equipment, modern modalities can acquire a large number of very high quality images at high resolution. As a result, each medical examination includes sets of images that may aggregate to megabytes and sometimes gigabytes of data. Table 1 shows the typical average amount of data for each examination produced by CT and MRI scanners according to a study carried out at a hospital center [20].

[Figure 2]
Figure 2: The integrated model in the current medical imaging systems. The clinician asks for a certain case study, the relevant images are sent to the workstation and then the clinician can start processing the data by using the medical imaging processing software.
Given the large amount of data acquired by modern modalities, if the existing model uses an inexpensive, general-purpose communication medium like the Internet, the system will be slow in moving the data across. On the other hand, a fast dedicated network is too expensive, which is why the general trend is to use the Internet and bear the lethargy associated with it.
Table 1: The typical average amount of data for each examination produced by CT and MRI scanners according to a study
carried out at a hospital center [20].
Case No. | Scanner type | Image resolution | Bits per pixel | Image size (Bytes) | Average number of images per exam | Total average size of each exam (Bytes)
1 | CT-Adult | 1024x1024 | 16 | 2,097,152 | 100 | 209,715,200
2 | CT-Adult | 512x512 | 8 | 262,144 | 100 | 26,214,400
3 | MRI | 512x512 | 16 | 524,288 | 48 | 25,165,824



The next drawback of this model is security. If we use the Internet, we face various security threats. Even if we transfer the data to the workstations properly and securely, the real patient data will then reside somewhere other than a safe place like the medical imaging centers, which are usually regarded as private and secure.
Finally, the mentioned model lacks collaboration support. Contemporary diagnostic practice tends to rely on collaboration, since the process of diagnosis usually involves different specialties. In spite of all the advantages of this model, in many cases it is not operational when a group of specialists wants to interactively and collaboratively discuss a particular case; they usually need to gather together in order to discuss it. Although some Telemedicine applications have tried to facilitate remote collaborative meetings (e.g. teleconferencing), they have not been equipped with real-time image processing services such as 3D visualization. The main reason is the inherent slowness of the Internet, which prevents high-bandwidth operations such as 3D visualization. There are, however, approaches such as TeleInViVo [3][4] which preload the images before starting a collaborative remote meeting, so that the collaboration can start once the preload is completed.

4. THE PROPOSED PLATFORM


In this section the main architecture of the platform is presented and the features that address the mentioned issues of the current model are discussed. In comparison with the current model (Figure 1), two new types of components are added in the core of the proposed platform: High Performance Computing (HPC) components and utility components. HPC components are responsible for providing real-time medical imaging services (i.e. real-time processing), while utility components support the distinguishing features of the platform, such as security and collaboration (Figure 3).

[Figure 3]
Figure 3: The general components of the proposed platform. It includes modalities on one side and clinical workstations on the other side. The core is composed of RIS, PACS, HPC components and utility components.
In reality, these two types of components are not usually located in the same place as PACS and RIS. They are located in a processing center, a facility equipped with a cluster of high performance computers that run in parallel to process the requested operations in real time. This processing center is connected with a high bandwidth link to the hospitals or radiology centers where RIS and PACS are located. On the other side, the processing center is accessible via the Internet to the clinical workstations, and the workstations can connect to the system from anywhere (Figure 4).
The general and more detailed architecture of the platform is depicted in Figure 5. Note that for the sake of simplicity some components (e.g. security components) have been omitted. The platform has three main parts. The first part is an interface that resides at radiology centers (or hospitals), where PACS/RIS systems are usually located. The second part is the processing center, which consists of a cluster of powerful computers providing different medical image processing and visualization services. Finally, the last part is an interface located on the end users' machines, which may reside at small clinics, physicians' offices or homes.
There is no direct connection between end users and the PACS/RIS systems; all requests are sent to and processed at the processing center. The platform uses high speed links as the communication channel between the PACS/RIS systems and the processing center, while it employs the Internet as the main medium to connect the processing center to the end users. In this way, the platform takes advantage of the Internet as a ubiquitous medium for easy and inexpensive access.



[Figure 4]
Figure 4: The processing center location. It is connected to PACS/RIS systems with a high-bandwidth link on one side and the Internet on the other side.
The heart of the platform is located in the processing center. The components in the processing center generally either
fall into the HPC or utility categories. The main utility components depicted in Figure 5 are:
1. Management Unit: this unit is the core component of the platform and is responsible for leading and managing all the system's components (both HPC and utility). It receives control messages from different components and issues the appropriate series of control messages in response. The messages may come from the Interfaces, the Load Distributor, the Encoding/Compression unit, as well as other supporting units (e.g. the security unit) that are not shown in Figure 5.
2. Interfaces: these components are in charge of communication with the outside world, namely the hospital/radiology centers and the end users. They are connected to and managed by the Management Unit.
3. Image Pool: this database acts like the cache memory in a PC. The medical images related to the requested examination are sent from the RIS/PACS systems (in hospitals/radiology centers) and stored in the image pool to be processed later by the HPC components.
4. Load Distributor: it is responsible for fetching the images from the image pool and distributing them to the HPC units. It supervises the computing procedure and, after completion, sends control messages to the Management Unit.
5. Encoding/Compression: this unit is one of the most consequential parts of the platform. It enables us to use the Internet, which is naturally a slow medium, and yet offer real-time 3D visualization services. After the final image is produced and ready to be shown to the end user, the platform uses a technique called "screen to be seen", in which only the pixel information that should be shown on the users' screens is extracted and sent. This approach yields several benefits. First, the volume of data to be sent is drastically reduced. Instead of sending the whole 3D data volume or a set of 2D images, as in the current model, only the 2D pixel information of the screen, captured from the 3D image, is sent, and thereby real-time delivery is possible. Second, the real data remains at the processing center and there is no way for the users to access it; the only data delivered to the end users is the post-processed pixel information. Capturing the "screen to be seen" is the duty of this unit.
On the other hand, the HPC components are categorized into two classes:
1. Processing Units: these units are responsible for processing the data sent by the Load Distributor. The Load Distributor divides the bulky data into smaller parts and gives each unit a part that can be processed in real time. After each unit completes its work, all results must be composited together.
2. Compositing Units: as mentioned above, the Load Distributor divides the whole data set into smaller parts and sends them to the processing units. The outputs for these small pieces of processed data must then be combined to obtain the final result. Compositing these smaller processed parts is the duty of the Compositing Units (a small sketch of this split-process-composite flow follows below).
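The following minimal sketch (Python with the multiprocessing module, standing in for the platform's HPC cluster) illustrates the split-process-composite flow just described: the data is sliced into slabs, each slab is processed in parallel, and the partial results are composited into a final image. The maximum-intensity projection used as the per-slab operation, and all names and sizes, are illustrative assumptions rather than the platform's actual implementation.

```python
# A sketch of the Load Distributor / Processing Units / Compositing Unit flow:
# slice a volume into slabs, process the slabs in parallel, then merge the results.
# The per-slab work here is a simple maximum-intensity projection along the z axis.
from multiprocessing import Pool
import numpy as np

def process_slab(slab: np.ndarray) -> np.ndarray:
    """Processing Unit: reduce one slab of slices to a partial 2D projection."""
    return slab.max(axis=0)

def composite(partials) -> np.ndarray:
    """Compositing Unit: combine the partial projections into the final image."""
    return np.maximum.reduce(partials)

if __name__ == "__main__":
    # Stand-in for a case study volume (128 slices of 256 x 256 pixels).
    volume = np.random.randint(0, 4096, size=(128, 256, 256), dtype=np.uint16)
    slabs = np.array_split(volume, 4, axis=0)      # Load Distributor: 4 slabs, 4 units
    with Pool(processes=4) as pool:
        partials = pool.map(process_slab, slabs)   # Processing Units run in parallel
    final_image = composite(partials)              # Compositing Unit merges the parts
    assert np.array_equal(final_image, volume.max(axis=0))
    print("composited projection:", final_image.shape)
```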

[Figure 5]
Figure 5: The general architecture of the platform. It is connected to PACS/RIS systems with a high-bandwidth link on one side and the Internet on the other side via Interfaces. In the core, it consists of HPC and utility components.

5. GENERAL WORKFLOW
This section explains the general workflow of the platform. In order to connect and obtain the services, the users carry out three steps. In the first step, all the participants in a collaborative meeting connect to the system and establish a remote collaboration session. The second step is searching for a specific case to be investigated and discussed. Finally, in the third step, the users can collaboratively perform different processing operations.
5.1 Collaboration Establishment
The first step of a remote collaborative session is the connection of all participants to the system. Each participant connects to the system with a user name and password. The requests are received by the Interface and sent to the Management Unit. Using the security components, the system first authenticates the users and then determines whether or not they are authorized to participate in that session. Based on the authorization assessment, the appropriate message is produced by the Management Unit and sent back to each user. At this stage, one user is set as the leader of the session. After the completion of this step, the system knows the number of participants in the session and reserves adequate resources for them to have a real-time collaborative session.
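A purely illustrative sketch of this session-establishment step is shown below. The credential store, the leader-selection rule (first authorized user) and the resource-reservation formula are hypothetical stand-ins for the platform's security and management components, not its real API.

```python
# Illustrative only: a toy in-memory session manager mirroring the workflow above
# (authenticate, authorize, pick a leader, reserve resources per participant).
# Credentials, names and the reservation rule are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional

USERS = {"dr_smith": "pw1", "dr_jones": "pw2"}   # stand-in credential store
AUTHORIZED = {"dr_smith", "dr_jones"}            # users allowed into this session

@dataclass
class Session:
    participants: List[str] = field(default_factory=list)
    leader: Optional[str] = None
    reserved_units: int = 0

    def join(self, username: str, password: str) -> str:
        if USERS.get(username) != password:
            return "authentication failed"
        if username not in AUTHORIZED:
            return "not authorized for this session"
        self.participants.append(username)
        if self.leader is None:
            self.leader = username                        # first participant leads
        self.reserved_units = 2 * len(self.participants)  # toy resource reservation
        return f"joined; leader={self.leader}, reserved {self.reserved_units} compute units"

session = Session()
print(session.join("dr_smith", "pw1"))
print(session.join("dr_jones", "pw2"))
```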



5.2 Setup an Examination
Once all the participants are successfully connected to the system, the leader of the session can start searching for a specific case or a specific patient's record. According to the DICOM Query/Retrieve service class, the query can be made by providing any of the following information:
1. Patient Attributes: the patient information such as: Patient Name, Patient ID, Patient Birth Date, etc.
2. Study Attributes: the study information such as: Study Date/Time, Accession Number, Study ID, etc.
3. Series Attributes: the series information such as: Institution Name, Station Name, Performed Procedure Start
Date/Time, etc.
4. Instance Attributes: the instance information such as: Image Type, Acquisition Date/Time, Content Date, Content Description, Content Creator's Name, etc.
When the session leader sends the query, it is processed by the Management Unit. The query first searches the local database; if the data is not found there, an appropriate message is created by the Management Unit and sent to the hospitals/radiology centers. If the data is not found in the hospitals/radiology centers either, a "not found" message is sent back to the users; otherwise, the data is retrieved into the local database and the information associated with it is sent to the end users. At the end of this stage, the case is ready to be analyzed and discussed by the users.
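As an illustration of how such a query could be issued, the sketch below uses the open-source pydicom/pynetdicom libraries to send a DICOM C-FIND with patient and study attributes like those listed above. The PACS host, port and AE titles are hypothetical placeholders, and this is not the platform's own query code.

```python
# A sketch of a DICOM C-FIND query using pynetdicom (not part of the original platform).
# The PACS address, port and AE titles below are hypothetical placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="PLATFORM_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientName = "DOE^JANE"          # patient attribute
query.StudyDate = "20090101-20090131"   # study attribute (date range)
query.AccessionNumber = ""              # empty fields ask the PACS to return them
query.StudyInstanceUID = ""

assoc = ae.associate("pacs.example.org", 11112, ae_title="HOSPITAL_PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00 / 0xFF01 are "pending" statuses that carry a matching record
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID, identifier.get("AccessionNumber", ""))
    assoc.release()
else:
    print("could not reach the PACS")
```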
5.3 Consultation and Image Manipulation
At this step the data is ready for different image processing services in general and 3D visualization in particular. Whenever a user requests a service (e.g. 3D visualization, rotation of the 3D image, zoom in/out, etc.), the request is processed by the Management Unit and an appropriate message is sent to the Load Distributor. The processing is done in real time by the parallel computing units. The 2D pixel information is then extracted, encoded and compressed before being sent back to all participants.

6. ASSESSMENT
An important feature of the platform is its real-time processing. In practice, the overall system performance is affected by two transmission times. The first occurs when a set of images is sent from a RIS/PACS system to the processing center. This transmission happens only once per case study, in the second step of the general workflow (Setup an Examination). The second important transmission time occurs in the third step of the general workflow, when the participants in a session request processing services and all view the appropriate output simultaneously. This transmission includes the extracted pixel information of the screen, which is encoded and compressed before being sent to the end users. Figure 6 illustrates the key difference between this concept and the current telemedicine model.

[Figure 6]
Figure 6: The concept of the current Telemedicine model versus the proposed platform. a) In the current Telemedicine model the raw data is sent to all participants. b) In the proposed platform, in the first step the raw images are sent to the processing center and in the second step only the pixel data which should be shown is sent to the end users.



Since a very high-bandwidth link is used between the RIS/PACS systems and the processing center, large amounts of data can be transmitted quickly. For the average volume of data per examination acquired by different modalities presented in Table 1, the corresponding transmission times for several types of underlying networks [21] are shown in Table 2 (a small worked calculation follows Table 2). With an appropriate network, the images and data associated with a case study can be transferred to the processing center quickly.
Table 2: The transmission time for each case study based on several network links. High-bandwidth networks include T3 45
Mbps, OC1 52 Mbps, OC3 155 Mbps and OC12 622 Mbps.
Case No. | Average size of each examination (MB) | T3 45 Mbps transmission time (seconds) | OC1 52 Mbps transmission time (seconds) | OC3 155 Mbps transmission time (seconds) | OC12 622 Mbps transmission time (seconds)
1 | 210 | 43 | 37 | 11 | 2
2 | 26 | 5 | 4 | 1 | Less than 1
3 | 25 | 5 | 4 | 1 | Less than 1
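As a rough cross-check of these figures, the short calculation below divides each exam size from Table 1 by the link bandwidth. It ignores protocol overhead and unit rounding, so the raw results come out somewhat lower than the values reported in Table 2.

```python
# Back-of-the-envelope transmission times: exam size (MB) * 8 / link bandwidth (Mbps).
# Protocol overhead is ignored, so these are optimistic lower bounds.
EXAMS_MB = {1: 210, 2: 26, 3: 25}                        # average exam sizes from Table 1
LINKS_MBPS = {"T3": 45, "OC1": 52, "OC3": 155, "OC12": 622}

for case, size_mb in EXAMS_MB.items():
    times = {name: (size_mb * 8) / mbps for name, mbps in LINKS_MBPS.items()}
    row = ", ".join(f"{name}: {t:.1f} s" for name, t in times.items())
    print(f"Case {case} ({size_mb} MB): {row}")
```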

The next important transmission time is the time needed to send the extracted pixel data (the "screen to be seen") to the users. The size of the data to be sent depends on the size of the screen and the number of bits representing each pixel. Table 3 shows the size of the data in Bytes for 512x512 and 1024x1024 screen sizes with 8, 16 and 24 bits per pixel.
Table 3: The size of data (in Bytes) for different screen-size and bit-size per pixel.
Bits per pixel | 512 x 512 screen | 1024 x 1024 screen
8 bits | 262,144 | 1,048,576
16 bits | 524,288 | 2,097,152
24 bits | 786,432 | 3,145,728

Two techniques help us reduce the size of the data that needs to be sent to the users: first, we use compression to reduce the data volume, and second, we send only the pixel changes on the screen rather than the whole image (a small sketch of this delta-plus-compression idea follows the experiment description below). These methods enable a real-time response for the requested services. A user was asked to work with a 3D visualized image at two motion speeds:
1. Very fast: The user continuously rotates the object as fast as possible, producing abrupt changes on the screen.
2. Fast: The user moves the object incessantly. Here, the user is not expected to rotate the object at the maximum speed, but should not stop moving the object.
In this experiment a screen size of 1000 x 800 was considered. Each experiment took about 3 to 5 minutes and was repeated 3 times. The goal of the experiment was to determine how much data may need to be transferred per second for abrupt and continuous changes at that screen size. Three loss ratios for the compression step were applied (0%, 25% and 35%). Tables 4 and 5 show the maximum, average and standard deviation of the data volume after the pixel data is extracted and compressed for the two motion speeds (very fast and fast). In these tables, the amount of data sent each second is measured in each sample, and the 'number of samples' column shows the number of samples taken for each row. Figures 7 and 8 show the experimental results.
In these experiments we tried to evaluate the worst possible cases; in reality, a lower amount of data may need to be sent, since physicians usually do not rotate the objects continuously and incessantly.
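The sketch below illustrates, on synthetic 1000 x 800 frames, the two bandwidth-saving techniques named above: only the difference between consecutive screen frames is transmitted, and that difference is then compressed. The XOR-based delta and the zlib codec are illustrative assumptions; they are not the codec used in these experiments.

```python
# Illustrative delta-plus-compression sketch (not the experiments' actual codec):
# transmit only the per-pixel change between consecutive frames, compressed.
import zlib
import numpy as np

def delta_payload(previous: np.ndarray, current: np.ndarray) -> bytes:
    """Encode only the per-pixel change between two frames and compress it."""
    diff = np.bitwise_xor(previous, current)      # zeros wherever the screen is unchanged
    return zlib.compress(diff.tobytes(), 6)

def apply_delta(previous: np.ndarray, payload: bytes) -> np.ndarray:
    """Client side: reconstruct the new frame from the previous one plus the delta."""
    diff = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(previous.shape)
    return np.bitwise_xor(previous, diff)

# Two synthetic 1000 x 800 grey-scale frames in which only a small region changes.
frame_a = np.zeros((800, 1000), dtype=np.uint8)
frame_b = frame_a.copy()
frame_b[300:500, 400:600] = 128                   # a rotated object changes part of the screen

payload = delta_payload(frame_a, frame_b)
assert np.array_equal(apply_delta(frame_a, payload), frame_b)
print(f"full frame: {frame_b.nbytes} bytes, delta sent: {len(payload)} bytes")
```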
Table 4: Maximum, average and standard deviation of the volume of data after extraction and compression for loss ratio of
0%, 25% and 35% for very fast changes (Experiment 1).
Loss ratio | Maximum volume (Bytes/second) | Average volume (Bytes/second) | Standard deviation (Bytes/second) | Number of samples
0% loss | 2,293,921 | 1,081,499 | 282,726 | 824
25% loss | 873,061 | 543,431 | 107,639 | 468
35% loss | 1,248,890 | 405,141 | 124,345 | 904



Table 5: Maximum, average and standard deviation of the volume of data after extraction and compression for loss ratio of
0%, 25% and 35% for fast changes (Experiment 2).
Loss ratio | Maximum volume (Bytes/second) | Average volume (Bytes/second) | Standard deviation (Bytes/second) | Number of samples
0% loss | 1,564,438 | 623,856 | 134,857 | 646
25% loss | 1,098,626 | 318,475 | 83,969 | 605
35% loss | 311,084 | 227,508 | 33,916 | 508

[Figure 7]

Figure 7: Experiment 1 results. The user rotates the object as fast as possible and makes abrupt changes in the screen. The
image pixel data is extracted and compressed in three loss ratios (0%, 25% and 35%) and the data volume to be sent is
measured.

[Figure 8]

Figure 8: Experiment 2 results. The user rotates the object incessantly. The image pixel data is extracted and compressed in
three loss ratios (0%, 25% and 35%) and the data volume to be sent is measured.
In Experiment 1, which involves fast, abrupt image changes, the average data volume sent is about 1.1 MB/s with a standard deviation of about 283 KB/s, and it may rise up to 2.3 MB/s. Therefore, in order to have less than one second of delay, a link of about 1-2 MB/s (8-16 Mbits/s) may be needed. However, the bandwidth demand can be alleviated with lossy compression techniques.
For Experiment 2, in which the user moves and rotates the image object continuously, the average data volume sent is about 624 KB/s with a standard deviation of about 135 KB/s, and it may increase up to 1.6 MB/s. Thus, in order to have less than one second of delay, a bandwidth of 600-700 KB/s (4.8-5.6 Mbits/s) may be needed, which is now available in many home service areas as high speed Internet. In the worst case, the delay sometimes reaches about 2 seconds, which is still bearable for some real-time uses. A lower bandwidth connection is also possible; however, we may either lose quality (e.g. through higher lossy compression) or bear longer delays. Therefore, the whole procedure may take about 1-2 seconds, since the processing center time is usually in the order of milliseconds. It should be noted that the experiments are designed to assess the worst case scenarios, and better performance will be reached in commonplace medical usage scenarios. However, further research is needed to assess the best and average commonplace medical usage scenarios.

7. CONCLUSION
This paper introduces a platform for medical imaging systems that is not only widespread, easy to access and cost-effective, like other telemedicine applications, but also addresses other important challenges, including 3D visualization and secure access. Another advantage of the platform is the provisioning of a real-time, collaborative, interactive meeting capability. By sending just the "screen to be seen" to the end users, the platform enables us to provide a real-time and secure application over the Internet.
We believe the proposed platform model is a good solution to address some common issues of current telemedicine applications. While parallel and high performance computing provides fast, real-time processing, sending the 2D screen captured from the 3D images, the "screen to be seen", enables the users to receive the results expeditiously and in real time via their inexpensive, ubiquitous Internet connections. This technique also makes the medical imaging systems more secure, since the real data remains in the processing center and only the pixel information of the post-processed data is sent to the end users.
The platform is also a cost-effective solution, since it shares the high performance equipment, which may be expensive, among a broad range of users.
Above all, a real-time, collaborative, interactive, virtual meeting capability is beneficial for surgery teams, specialists and physicians in their consultations, as well as for medical students and researchers in their academic and research discussions.

REFERENCES
[1] Sun A., Jin H., Zheng R., He R., Zhang Q., Guo W., Wu S., "UCIPE: Ubiquitous Context-Based Image Processing Engine for Medical Image Grid," UIC 2007, LNCS 4611, 888-897 (2007).
[2] Blanquer I., Hernandez V., Traver V., Naranjo J.C., Fernandez C., Garcia G., Meseguer J.M., Cervera J., "Technical Report: Integrated Distributed Environment for Application Service in e-Health," IDEAS in e-Health, IST-2001-34614, (2001).
[3] Coleman J., Goettsch A., Savchenko A., Kollmann H., Wang K., Klement E., Bono P., "TeleInViVo: Towards Collaborative Volume Visualization Environments," Computers & Graphics, vol. 20, 801-811 (1996).
[4] Jarvis S., Barton R., Coleman J., "TeleInViVo: a Volume Visualization Tool with Applications in Multiple Fields," OCEANS '99 MTS/IEEE. Riding the Crest into the 21st Century, vol. 1, 474-480 (1999).
[5] Sung M.Y., Kim M.S., Kim E.J., Yoo J.H., Sung M.W., "CoMed: a Real-time Collaborative Medicine System," International Journal of Medical Informatics, vol. 57, 117-126 (2000).
[6] Kilman D.G., Forslund D.W., "An International Collaboratory based on Virtual Patient Records," Communications of the ACM (CACM), vol. 40, 111-117 (1997).
[7] Anupam V., Bajaj C., Schikore D., Schikore M., "Distributed and Collaborative Visualization," Computer, vol. 27, 37-43 (1994).
[8] Anupam V., Bajaj C., "Shastra: Multimedia Collaborative Design Environment," IEEE Multimedia, vol. 1, 39-49 (1994).
[9] Gomez E.J., Del Pozo F., Quiles J.A., Arredondo M.T., Rahms H., Sanz M., Cano P., "A Telemedicine System for Remote Cooperative Medical Imaging Diagnosis," Computer Methods and Programs in Biomedicine, vol. 49, 37-48 (1996).
[10] De Alfonso C., Blanquer I., Hernandez V., "Providing with High Performance 3D Medical Image Processing on a Distributed Environment," Proceedings of the First European HEALTHGRID Conference, 72-79 (2003).
[11] Mayer A., Meinzer H.P., "High Performance Medical Image Processing in Client/Server-Environments," Computer Methods and Programs in Biomedicine, vol. 58, no. 3, 207-217 (1999).
[12] Montagnat J., Bellet F., Benoit-Cattin H., Breton V., Brunie L., Duque H., Legre Y., Magnin I.E., Maigne L., Miguet S., Pierson J., Seitz M., Tweed T., "Medical Images Simulation, Storage, and Processing on the European DataGrid Testbed," Journal of Grid Computing, vol. 2, 387-400 (2004).
[13] Marovic B., Jovanovic Z., "Web-based Grid-enabled Interaction with 3D Medical Data," Future Generation Computer Systems, vol. 22, 385-392 (2006).
[14] Graschew G., Roelofs T.A., Schlag P.M., "Digital Medicine in the Virtual Hospital of the Future," International Journal of Computer Assisted Radiology and Surgery, vol. 1, 119-135 (2006).
[15] Singh G., "Telemedicine: Issues and Implications," Technology and Health Care, vol. 10, 1-10 (2002).
[16] Bellazzi R., Montani S., Riva A., Stefanelli M., "Web-based Telemedicine Systems for Home-care: Technical Issues and Experiences," Computer Methods and Programs in Biomedicine, vol. 64, 175-187 (2001).
[17] Picot J., "Meeting the Need for Educational Standards in the Practice of Telemedicine and Telehealth," Journal of Telemedicine and Telecare, vol. 6, 59-62 (2000).
[18] Bashshur R.L., "Telemedicine Effects: Cost, Quality, and Access," Journal of Medical Systems, vol. 19, 81-91 (1995).
[19] Dreyer K.J., Hirschorn D.S., Thrall J.H., Mehta A., "PACS: A Guide to the Digital Revolution," 2nd Edition, Springer-Verlag, New York, NY, (2006).
[20] Otukile M., Camorlinga S., Rueda J., "Winnipeg Hospitals Network and Traffic Flow Analysis," Technical Report, TRLabs Winnipeg, (2002).
[21] The Education Center on Computational Science and Engineering: http://www.edcenter.sdsu.edu/repository/calc_filetranstime.html
