A Remote Real-Time PACS Based Platform For Medical
Article in Proceedings of SPIE - The International Society for Optical Engineering · February 2009
DOI: 10.1117/12.811621
3 authors:
Rasit Eskicioglu
University of Manitoba
ABSTRACT
This paper describes a remote real-time PACS-based telemedicine platform for clinical and diagnostic services delivered at different care settings attended by physicians, specialists and scientists. The platform aims to provide a PACS-based telemedicine framework for medical image services such as segmentation, registration and, in particular, high-quality 3D visualization. The proposed approach offers services that are not only widely accessible and real-time, but also secure and cost-effective. In addition, the platform can host a real-time, ubiquitous, collaborative, interactive meeting environment supporting 3D visualization for consultations, a capability that current PACS-based applications do not address well. Using this capability, physicians and specialists can consult with each other from separate locations, which is especially helpful for settings where there is no specialist, or where the number of specialists is not enough to handle all the available cases. Furthermore, the proposed platform can serve as a rich resource for clinical research studies as well as for academic purposes.
Keywords: real-time, collaborative, platform, telemedicine, medical imaging, PACS, RIS, 3D visualization
1. INTRODUCTION
Medical imaging systems provide a framework that helps physicians automate diagnostic procedures, making them faster, more reliable and less error-prone. The typical medical-imaging-based diagnostic procedure traditionally includes several stages and involves a range of specialties. In a simple scenario, after a doctor prescribes a medical image study for a patient, a case study is produced by the image acquisition modalities (e.g. CT, MRI, US). A radiologist then reads the case study images and adds appropriate comments to the acquired images. Eventually, the report, including the image(s) along with the comments, is sent back to the prescribing doctor.
The emergence of modern technologies and particularly medical imaging systems has introduced new features to the
mentioned scenario. For instance, currently several clinical workflow communications are carried out via computer
networks and usually medical data are stored and handled by computerized systems such as PACS (Picture Archival and
Communication System) and RIS (Radiology Information System). These technologies accelerate the process of
medical-imaging-based diagnosis and as a result bring savings in cost and time.
Medical imaging systems have been developed piecewise over their lifetime. Improvements range from the development of advanced image acquisition equipment to the use of high-performance technologies such as parallel processing for image processing operations (e.g. visualization, segmentation, registration). Many efforts in this area have resulted in spectacular improvements in medical imaging services.
Alongside these improvements, many have tried to make medical imaging services widespread and easy to access. An example is UCIPE [1], which aims to provide ubiquitous medical imaging software. Noting that medical image processing tools are bound to high-cost hardware and cannot be shared by common medical terminals, UCIPE exposes different image processing algorithms via web services. However, UCIPE is neither collaborative nor real-time.
Another project with a similar goal is proposed by IDEAS [2], a group in e-Health that defines projects to provide easy-to-access medical imaging applications. The group has presented a general
maani@cs.umanitoba.ca
Medical Imaging 2009: Advanced PACS-based Imaging Informatics and Therapeutic Applications
edited by Khan M. Siddiqui, Brent J. Liu, Proceedings of SPIE Vol. 7264, 72640Q · © 2009 SPIE
Figure 1: The general components of medical imaging systems. It includes modalities on one side, clinical workstations on the other side, and RIS and PACS in the core.
Along with the medical imaging system components mentioned above, there are complementary technologies that try to improve the efficiency, throughput, accuracy, speed and accessibility of medical imaging systems. Some of the most remarkable are [19]:
• Computer-aided diagnosis technologies
• Telemedicine
These technologies attempt to streamline medical imaging procedures and make them simpler for clinicians. Some aim to provide processing facilities, while others facilitate access to the system. For instance, thanks to telemedicine technologies, medical images can be sent automatically to remote workstations (beyond the territory of hospitals or radiology centers) through a wide area network (WAN).
A key success factor for medical imaging technologies is presenting more information without overwhelming the clinician with too much data [19]. As a result, methods such as 3D visualization techniques entered the medical imaging field. In fact, techniques such as 3D visualization, together with segmentation and registration tools, have been helping clinicians understand abnormalities more easily.
Figure 2: The integrated model in current medical imaging systems. The clinician asks for a certain case study, the relevant images are sent to the workstation, and the clinician can then start processing the data using the medical image processing software.
Given the large amount of data acquired by modern modalities, under the existing model an inexpensive, general communication medium like the Internet makes the system slow at moving data across. A fast dedicated communication network, on the other hand, is too expensive; that is why the general trend is to use the Internet and tolerate the latency associated with it.
Table 1: The typical average amount of data per examination produced by CT and MRI scanners, according to a study carried out at a hospital center [20].

Case | Scanner type | Image resolution | Bits per pixel | Image size (Bytes) | Avg. images per exam | Total avg. size per exam (Bytes)
1 | CT-Adult | 1024x1024 | 16 | 2,097,152 | 100 | 209,715,200
2 | CT-Adult | 512x512 | 8 | 262,144 | 100 | 26,214,400
3 | MRI | 512x512 | 16 | 524,288 | 48 | 25,165,824
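The per-exam sizes in Table 1 follow directly from resolution, bit depth and image count. The following sketch (an illustration, not part of the platform; the function name is ours) reproduces the table's arithmetic:

```python
def exam_size_bytes(width: int, height: int, bits_per_pixel: int,
                    images_per_exam: int) -> int:
    """Total uncompressed size of one examination, in bytes."""
    image_size = width * height * bits_per_pixel // 8  # bytes per image
    return image_size * images_per_exam

# The three cases from Table 1:
cases = [
    ("CT-Adult", 1024, 1024, 16, 100),
    ("CT-Adult", 512, 512, 8, 100),
    ("MRI", 512, 512, 16, 48),
]
for name, w, h, bits, n in cases:
    print(f"{name}: {exam_size_bytes(w, h, bits, n):,} bytes")
```

Moving the roughly 200 MB of Case 1 to every participant over an ordinary Internet link is what makes the existing model slow.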
Figure 3: The general components of the proposed platform. It includes modalities on one side and clinical workstations on the other side. The core is composed of RIS, PACS, HPC components and utility components.
In reality, these two groups of components are not usually located in the same place as PACS and RIS. They reside in a processing center: a facility equipped with a cluster of high-performance computers that run in parallel to process the requested operations in real-time. The processing center is connected with a high-bandwidth link to the hospitals or radiology centers where RIS and PACS are located. On the other side, the processing center is accessible via the Internet to the clinical workstations, which can connect to the system from anywhere (Figure 4).
The general and more detailed architecture of the platform is depicted in Figure 5. Note that for the sake of simplicity we have removed some components (e.g. security components). The platform has three main parts. The first part is an interface that resides at radiology centers (or hospitals), where PACS/RIS systems are usually located. The second part is the processing center, a cluster of powerful computers that provides different medical image processing and visualization services. Finally, the last part is an interface located on the end-users' machines, which may reside at small clinics, physicians' offices or homes.
There is no direct connection between end-users and the PACS/RIS systems; all requests are sent to and processed at the processing center. The platform uses high-speed links as the communication channel between PACS/RIS systems and the processing center, while it employs the Internet as the main medium to connect the processing center to the end-users. The platform thus takes advantage of the Internet as a ubiquitous medium for easy and inexpensive access.
Figure 4: The processing center location. It is connected to PACS/RIS systems with a high-bandwidth link on one side and the Internet on the other side.
The heart of the platform is located in the processing center. The components in the processing center generally either
fall into the HPC or utility categories. The main utility components depicted in Figure 5 are:
1. Management Unit: this unit is the core component of the platform, responsible for leading and managing all of the system's components (both HPC and utility). It receives control messages from different components and issues the appropriate series of control messages in response. The messages may come from the Interfaces, the Load Distributor and the Encoding/Compression unit, as well as other supporting units (e.g. the security unit) that are not shown in Figure 5.
2. Interfaces: these components are in charge of communicating with the outside world, namely the hospital/radiology centers and the end-users. They are connected to, and managed by, the Management Unit.
3. Image Pool: this database plays a role similar to cache memory in a PC. The medical images related to the requested examination are sent from the RIS/PACS systems (in hospitals/radiology centers) and held in the image pool to be processed later by the HPC components.
4. Load Distributor: it is responsible for fetching the images from the image pool and distributing them to the HPC units. It supervises the computing procedure and, after completion, sends control messages to the Management Unit.
5. Encoding/Compression: this unit is one of the most consequential parts of the platform. It enables the use of the Internet, which is inherently a slow medium, while still offering real-time 3D visualization services. After the final image is produced and ready to be shown to the end-user, the platform uses a technique called "screen to be seen": only the pixel information that will actually appear on the users' screens is extracted and sent. This approach brings several benefits. First, the volume of data to be sent is drastically reduced: instead of sending the whole 3D data volume or a bunch of 2D images, as in the current model, only the 2D pixel information of the screen, captured from the 3D image, is sent, and thereby real-time delivery is possible. Second, the real data remains at the processing center and there is no way for the users to access it; the only data delivered to the end users is the post-processed pixel information. Capturing the "screen to be seen" is the duty of this unit.
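The "screen to be seen" idea can be sketched as follows. This is a hedged illustration, not the platform's implementation: the renderer here is a dummy stand-in for the HPC visualization pipeline, and the function names are ours.

```python
import zlib

def extract_screen(render, width, height):
    """Render server-side and return only the on-screen pixel bytes."""
    return render(width, height)

def encode_for_transport(framebuffer):
    """Compress the extracted screen before it crosses the Internet."""
    return zlib.compress(framebuffer)

# Dummy renderer producing a blank 512x512, 8-bit screen:
render = lambda w, h: bytes(w * h)
screen = extract_screen(render, 512, 512)   # 262,144 bytes on screen
payload = encode_for_transport(screen)      # far smaller than a 3D volume
```

Whatever the size of the underlying 3D volume, the transmitted payload is bounded by the screen resolution, which is what makes real-time delivery over the Internet feasible.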
On the other hand, the HPC components are categorized into two classes:
1. Processing Units: these units are responsible for processing the data sent by the Load Distributor. In fact, the Load Distributor divides the bulky data into smaller parts and gives each unit a part that can be processed in real-time. After each unit completes its work, all the results must be composed together.
2. Compositing Units: these units combine the partial results produced by the Processing Units into the final output.
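The Load Distributor / Processing Units interaction can be sketched as below. This is an illustration under our own assumptions: the chunking policy, the thread-based workers and the per-element operation all stand in for the platform's actual parallel pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for the work one Processing Unit performs on its part."""
    return [v * 2 for v in chunk]

def distribute_and_compose(data, n_units):
    """Split `data` across n_units workers, then compose the results."""
    size = (len(data) + n_units - 1) // n_units
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_units) as pool:
        parts = list(pool.map(process_chunk, chunks))
    return [v for part in parts for v in part]  # the compositing step
```

The final flattening step plays the role of the Compositing Unit: partial results arrive in chunk order and are merged into one output.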
Figure 5: The general architecture of the platform. It is connected to PACS/RIS systems with a high-bandwidth link on one side and to the Internet on the other side via Interfaces. In the core, it consists of HPC and utility components.
5. GENERAL WORKFLOW
This section explains the general workflow of the platform. In order to connect and obtain the services, the users carry out three steps. In the first step, all the participants in a collaborative meeting connect to the system and establish a remote collaboration session. The second step is searching for a specific case to be investigated and discussed. Finally, in the third step, the users collaboratively perform different processing operations.
5.1 Collaboration Establishment
The first step of a remote collaborative session is the connection of all participants to the system. In this step, each participant connects to the system with a user name and password. The requests are received by the Interface and sent to the Management Unit. Using the security components, the system first authenticates the users and then determines whether they are authorized to participate in that session. Based on this authorization assessment, the appropriate message is produced by the Management Unit and sent back to each user. At this stage, one user is set as the leader of the session. After this step completes, the system knows the number of participants in the session and reserves adequate resources for them to have a real-time collaborative session.
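The collaboration-establishment step can be sketched as follows. The class, its fields and the leader-selection rule (first admitted user leads) are illustrative assumptions consistent with the description above, not the platform's actual API.

```python
class Session:
    def __init__(self, session_id, authorized_users, credentials):
        self.session_id = session_id
        self.authorized = set(authorized_users)
        self.credentials = credentials      # {username: password}
        self.participants = []
        self.leader = None

    def join(self, username, password):
        # 1) authenticate: does the user/password pair match?
        if self.credentials.get(username) != password:
            return "authentication failed"
        # 2) authorize: may this user participate in this session?
        if username not in self.authorized:
            return "not authorized for this session"
        self.participants.append(username)
        if self.leader is None:             # first admitted user leads
            self.leader = username
        return "joined"

s = Session("case-42", ["alice", "bob"], {"alice": "pw1", "bob": "pw2"})
s.join("alice", "pw1")
s.join("bob", "pw2")
```

Once all joins complete, the system knows the participant count and can reserve resources for the session accordingly.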
6. ASSESSMENT
An important feature of the platform is real-time processing. In practice, the whole system's performance is affected by two transmission times. The first occurs when a set of images is sent from a RIS/PACS system to the processing center. This transmission happens only once per case study, in the second step of the general workflow (Setup an Examination). The next important transmission time occurs in the third step of the general workflow, when the participants in a session request processing services and all view the appropriate output simultaneously. This transmission carries the extracted pixel information of the screen, which is encoded and compressed before being sent to the end users. Figure 6 illustrates the key difference between this concept and the current telemedicine model.
Figure 6: The concept of the current telemedicine model versus the proposed platform. a) In the current telemedicine model the raw data is sent to all participants. b) In the proposed platform, in the first step the raw images are sent to the processing center, and in the second step only the pixel data to be shown is sent to the end users.
The next important transmission time is the time for sending the extracted pixel data ("screen to be seen") to the users.
The size of data to be sent depends on the size of the screen and the number of bits for representing each pixel. Table 3
shows the size of data in Bytes for 512x512 and 1024x1024 screen sizes and 8 and 16 bit length for each pixel.
Table 3: The size of data (in Bytes) for different screen sizes and bit depths per pixel.

Bits per pixel | 512 x 512 | 1024 x 1024
8 bits | 262,144 | 1,048,576
16 bits | 524,288 | 2,097,152
24 bits | 786,432 | 3,145,728
Two techniques help us reduce the size of the data that needs to be sent to the users: first, we use compression techniques to reduce the data volume; second, we send only the pixel changes on the screen rather than the whole image. These methods enable a real-time response for the requested services. A user was asked to work with a 3D visualized image at two motion speeds:
1. Very fast: The user continuously rotates the object and performs the rotation as fast as possible, making abrupt
changes in the screen by that operation.
2. Fast: The user moves the object incessantly. Here, the user is not supposed to rotate the object with the
maximum speed, but the user should not stop moving the object.
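The two reduction techniques can be sketched as below. This is an illustration only: zlib is lossless, so it demonstrates the mechanism but not the 25%/35% loss ratios used in the experiments, which would require a lossy image codec; the function names and delta format are our assumptions.

```python
import zlib

def frame_delta(prev, curr):
    """(index, value) pairs for the pixels that changed between frames."""
    return [(i, curr[i]) for i in range(len(curr)) if curr[i] != prev[i]]

def encode_frame(prev, curr):
    """Serialize and compress the changed pixels of an 8-bit screen."""
    delta = frame_delta(prev, curr)
    raw = b"".join(i.to_bytes(4, "big") + bytes([v]) for i, v in delta)
    return zlib.compress(raw)

# A 1000x800 8-bit screen where only a small region changed:
prev = bytes(1000 * 800)
curr = bytearray(prev)
curr[:5000] = b"\xff" * 5000                 # ~0.6% of the screen changed
payload = encode_frame(prev, bytes(curr))
# payload is far smaller than the 800,000-byte full frame
```

When the user rotates the object abruptly, most pixels change and the delta approaches the full frame; the experiments below measure exactly this worst-case behavior.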
In this experiment a screen size of 1000 x 800 was used. Each experiment took about 3 to 5 minutes and was repeated 3 times. The goal was to determine how much data may need to be transferred per second under abrupt and continuous changes at that screen size. Three loss ratios were applied at the compression step (0%, 25% and 35%). Tables 4 and 5 show the maximum, average and standard deviation of the data volume after the pixel data is extracted and compressed for the two motion speeds (very fast and fast). In these tables, the amount of data sent each second is measured in each sample; the 'number of samples' column shows the number of samples taken for each row. Figures 7 and 8 show the experimental results.
In these experiments we tried to evaluate the worst possible cases; in reality, less data may need to be sent, since physicians usually do not rotate the objects continuously and incessantly.
Table 4: Maximum, average and standard deviation of the volume of data after extraction and compression for loss ratios of 0%, 25% and 35% for very fast changes (Experiment 1).

Loss ratio | Maximum volume (Bytes/s) | Average volume (Bytes/s) | Standard deviation | Number of samples
0% | 2,293,921 | 1,081,499 | 282,726 | 824
25% | 873,061 | 543,431 | 107,639 | 468
35% | 1,248,890 | 405,141 | 124,345 | 904
Figure 7: Experiment 1 results. The user rotates the object as fast as possible, making abrupt changes on the screen. The image pixel data is extracted and compressed at three loss ratios (0%, 25% and 35%) and the data volume to be sent is measured.
Figure 8: Experiment 2 results. The user rotates the object incessantly. The image pixel data is extracted and compressed at three loss ratios (0%, 25% and 35%) and the data volume to be sent is measured.
In Experiment 1, which involves fast, abrupt image changes, the average data volume sent is about 1.1 MB/s with a standard deviation of about 283 KB/s, and it may rise up to 2.3 MB/s. Therefore, in order to have less than one second of delay we may need a link of about 1-2 MB/s (8-16 Mbit/s). However, we can alleviate the bandwidth demand with lossy compression techniques.
For Experiment 2, in which the user moves and rotates an image object continuously, the average sent data volume is about 624 KB/s with a standard deviation of about 135 KB/s, and it may increase up to 1.6 MB/s. Thus, in order to have less than one second of delay we may need a 600-700 KB/s (4.8-5.6 Mbit/s) bandwidth, which is now available in many home service areas as high-speed Internet. In the worst case, the delay sometimes reaches about 2 seconds, which is still bearable for some real-time uses. A lower-bandwidth connection is also possible; however, we may either lose quality (e.g. through higher lossy compression) or bear longer delays. The whole procedure may therefore take about 1-2 seconds, since the processing-center time is usually on the order of milliseconds. It should be noted that the experiments are designed to evaluate the worst possible cases.
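The delay figures above follow from dividing the measured data volume by the link bandwidth. A back-of-the-envelope check, with the volumes taken from Table 4 and the 16 Mbit/s link speed as an illustrative assumption:

```python
def transfer_delay_s(volume_bytes, bandwidth_bits_per_s):
    """Seconds needed to move volume_bytes over the given link."""
    return volume_bytes * 8 / bandwidth_bits_per_s

# Experiment 1 at 0% loss over a 16 Mbit/s link:
avg_delay = transfer_delay_s(1_081_499, 16_000_000)   # ~0.54 s
peak_delay = transfer_delay_s(2_293_921, 16_000_000)  # ~1.15 s
```

This matches the claim in the text: the average case stays well under one second, while the peak volume pushes the delay just past it.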
7. CONCLUSION
This paper introduces a platform for medical imaging systems that is not only widespread, easy to access and cost-effective, like other telemedicine applications, but also addresses other important challenges, including 3D visualization and secure access. Another advantage of the platform is the provision of a real-time, collaborative, interactive meeting capability. By sending only the "screen to be seen" to the end-users, the platform enables a real-time and secure application over the Internet.
We believe the proposed platform model is a good solution to some common issues of current telemedicine applications. While parallel and high-performance computing provides fast, real-time processing, sending the 2D captured screen of the 3D images, the "screen to be seen", enables the users to receive results expeditiously and in real time over their inexpensive, ubiquitous Internet connections. This technique also makes medical imaging systems more secure, since the real data remains in the processing center and only the pixel information of the post-processed data is sent to the end users.
The platform is also a cost-effective solution, since it shares the high-performance equipment, which may be expensive, among a broad range of users.
Above all, a real-time, collaborative, interactive, virtual meeting capability is beneficial for surgery teams, specialists and physicians in their consultations, as well as for medical students and researchers in their academic and research discussions.
REFERENCES
[1] Sun A., Jin H., Zheng R., He R., Zhang Q., Guo W., Wu S., "UCIPE: Ubiquitous Context-Based Image Processing Engine for Medical Image Grid," UIC 2007, LNCS 4611, 888-897 (2007).
[2] Blanquer I., Hernandez V., Traver V., Naranjo J.C., Fernandez C., Garcia G., Meseguer J.M., Cervera J., "Technical Report: Integrated Distributed Environment for Application Service in e-Health," IDEAS in e-Health, IST-2001-34614 (2001).
[3] Coleman J., Goettsch A., Savchenko A., Kollmann H., Wang K., Klement E., Bono P., "TeleInViVo: Towards Collaborative Volume Visualization Environments," Computers & Graphics, vol. 20, 801-811 (1996).
[4] Jarvis S., Barton R., Coleman J., "TeleInViVo: a Volume Visualization Tool with Applications in Multiple Fields," OCEANS '99 MTS/IEEE. Riding the Crest into the 21st Century, vol. 1, 474-480 (1999).
[5] Sung M.Y., Kim M.S., Kim E.J., Yoo J.H., Sung M.W., "CoMed: a Real-time Collaborative Medicine System," International Journal of Medical Informatics, vol. 57, 117-126 (2000).
[6] Kilman D.G., Forslund D.W., "An International Collaboratory based on Virtual Patient Records," Communications of the ACM (CACM), vol. 40, 111-117 (1997).
[7] Anupam V., Bajaj C., Schikore D., Schikore M., "Distributed and Collaborative Visualization," Computer, vol. 27, 37-43 (1994).
[8] Anupam V., Bajaj C., "Shastra: Multimedia Collaborative Design Environment," IEEE Multimedia, vol. 1, 39-49 (1994).
[9] Gomez E.J., Del Pozo F., Quiles J.A., Arredondo M.T., Rahms H., Sanz M., Cano P., "A Telemedicine System for Remote Cooperative Medical Imaging Diagnosis," Computer Methods and Programs in Biomedicine, vol. 49, 37-48 (1996).
[10] De Alfonso C., Blanquer I., Hernandez V., "Providing with High Performance 3D Medical Image Processing on a Distributed Environment," Proceedings of the First European HealthGrid Conference, 72-79 (2003).
[11] Mayer A., Meinzer H.P., "High Performance Medical Image Processing in Client/Server-Environments," Computer Methods and Programs in Biomedicine, vol. 58, no. 3, 207-217 (1999).
[12] Montagnat J., Bellet F., Benoit-Cattin H., Breton V., Brunie L., Duque H., Legre Y., Magnin I.E., Maigne L., Miguet S., Pierson J., Seitz M., Tweed T., "Medical Images Simulation, Storage, and Processing on the European DataGrid Testbed," Journal of Grid Computing, vol. 2, 387-400 (2004).
[13] Marovic B., Jovanovic Z., "Web-based Grid-enabled Interaction with 3D Medical Data," Future Generation Computer Systems, vol. 22, 385-392 (2006).
[14] Graschew G., Roelofs T.A., Schlag P.M., "Digital Medicine in the Virtual Hospital of the Future," International Journal of Computer Assisted Radiology and Surgery, vol. 1, 119-135 (2006).
[15] Singh G., "Telemedicine: Issues and Implications," Technology and Health Care, vol. 10, 1-10 (2002).
[16] Bellazzi R., Montani S., Riva A., Stefanelli M., "Web-based Telemedicine Systems for Home-care: Technical Issues and Experiences," Computer Methods and Programs in Biomedicine, vol. 64, 175-187 (2001).
[17] Picot J., "Meeting the Need for Educational Standards in the Practice of Telemedicine and Telehealth," Journal of Telemedicine and Telecare, vol. 6, 59-62 (2000).
[18] Bashshur R.L., "Telemedicine Effects: Cost, Quality, and Access," Journal of Medical Systems, vol. 19, 81-91 (1995).
[19] Dreyer K.J., Hirschorn D.S., Thrall J.H., Mehta A., "PACS: A Guide to the Digital Revolution," 2nd Edition, Springer-Verlag, New York, NY (2006).
[20] Otukile M., Camorlinga S., Rueda J., "Winnipeg Hospitals Network and Traffic Flow Analysis," Technical Report, TRLabs Winnipeg (2002).
[21] The Education Center on Computational Science and Engineering: http://www.edcenter.sdsu.edu/repository/calc_filetranstime.html