
Name: Putri Ulul Azmi
Student ID (NIM): P07120317037
Program: Diploma IV Nursing, Year 4

3D and 4D Medical Image Registration Combined with Image Segmentation and Visualization
INTRODUCTION
Image registration, segmentation, and visualization are three major components of
medical image processing. Unlike common digital pictures, three-dimensional (3D) digital medical images are reconstructed in three dimensions, often with minor artifacts and with limited spatial resolution and gray scale. Because of these limitations, image filtering
is often performed before the images are viewed and further processed (Behrenbruch,
Petroudi, Bond, et al., 2004). Different 3D imaging modalities usually provide
complementary medical information about patient anatomy or physiology. Four-
dimensional (4D) medical imaging is an emerging technology that aims to represent patient
motions over time. Image registration has become increasingly important in combining
these 3D/4D images and providing comprehensive patient information for radiological
diagnosis and treatment. 3D images have been utilized clinically since computed
tomography (CT) was invented (Hounsfield, 1973). Later on, magnetic resonance imaging
(MRI), positron emission tomography (PET), and single photon emission computed
tomography (SPECT) have been developed, providing 3D imaging modalities that
complement CT. Among the most recent advances in clinical imaging, helical multislice CT
provides improved image resolution and the capacity for 4D imaging (Pan, Lee, Rietzel, &
Chen, 2004; Ueda, Mori, Minami et al., 2006). Other advances include mega-voltage CT
(MVCT), cone-beam CT (CBCT), functional MRI, open field MRI, time-of-flight PET,
motion-corrected PET, various angiography, and combined modality imaging, such as
PET/CT (Beyer, Townsend, Brun et al., 2000), and SPECT/CT (O’Connor & Kemp, 2006).
Some preclinical imaging techniques have also been developed, including parallel multichannel MRI (Bohurka, 2004), Overhauser enhanced MRI (Krishna, English, Yamada et al., 2002), and electron paramagnetic resonance imaging (EPRI) (Matsumoto, Subramanian, Devasahayam et al., 2006).
BACKGROUND
3D/4D Medical Imaging
A 3D medical image contains a sequence of parallel two-dimensional (2D) images
representing anatomic or physiologic information in 3D space. The smallest element of a
3D image is a cubic volume called a voxel. A 4D medical image contains a temporal series of
3D images. With a subsecond time resolution, it can be used for monitoring
respiratory/cardiac motion (Keall, Mageras, Malter et al., 2006).
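To make these definitions concrete, a 4D image can be held in memory as a four-axis array, one axis for time and three for space. The sketch below uses NumPy with hypothetical dimensions, since the article does not prescribe any particular data layout:

```python
import numpy as np

# Hypothetical 4D image: 10 respiratory phases, each a 64 x 256 x 256 volume.
phases, nz, ny, nx = 10, 64, 256, 256
image_4d = np.zeros((phases, nz, ny, nx), dtype=np.int16)

phase_0 = image_4d[0]          # one 3D image: a single point in the breathing cycle
voxel = phase_0[32, 128, 128]  # one voxel: the smallest cubic element of a 3D image
```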
Image Segmentation and Visualization
Medical image segmentation defines regions of interest that are used to account for image changes, study image deformation, and assist image registration. Many segmentation methods have been developed, including thresholding, region growing, clustering, atlas-guided methods, and level sets (Pham, Xu, & Prince, 2000; Suri, Liu, Singh et al., 2002). Atlas-
guided methods are based on a standard anatomical atlas, which serves as an initial point
for adapting to any specific image. Level sets, also called active contours, are
geometrically deformable models, used for fast shape recovery. Atlas-based level sets
have been applied clinically for treatment planning (Lu, Olivera, Chen et al., 2006a; Ragan,
Starkschall, McNutt et al., 2005) and are closely related to image registration (Vemuri, Ye, & Chen et al., 2003). Figure 1 shows automatic contours. Depending on how the 3D image is segmented, the approach can be either 2D-based or 3D-based (Suri, Liu, Reden, & Laxminarayan, 2002).
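As a toy illustration of the two simplest methods named above, here is a sketch combining thresholding with a seeded connected-component step (a crude stand-in for region growing), using NumPy and SciPy; the intensity window and seed are hypothetical:

```python
import numpy as np
from scipy import ndimage

def segment_from_seed(volume, seed, low, high):
    """Threshold a 3D volume, then keep only the connected region that
    contains the seed voxel (a simple region-growing surrogate)."""
    mask = (volume >= low) & (volume <= high)  # thresholding step
    labels, _ = ndimage.label(mask)            # 3D connected components
    seed_label = labels[seed]
    if seed_label == 0:
        raise ValueError("seed voxel lies outside the intensity window")
    return labels == seed_label                # boolean region-of-interest mask

# Hypothetical use: grow a region around a seed voxel's own intensity.
vol = np.random.randint(-1000, 1000, size=(32, 64, 64)).astype(np.int16)
seed = (16, 32, 32)
roi = segment_from_seed(vol, seed, vol[seed] - 50, vol[seed] + 50)
```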
Rigid Image Registration
Rigid image registration assumes a motionless patient, such that the underlying anatomy is identical in the different imaging modalities to be aligned. Three approaches to rigid registration are coordinate-based, extrinsic-based, and intrinsic-based (Maintz & Viergever,
1998). Coordinate-based registration is performed by calibrating the coordinate system to
produce “co-registered” images. Multimodality scanners, such as PET/CT and SPECT/CT,
are typical examples.
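For the intrinsic (intensity-based) flavor of rigid registration, a minimal sketch using the open-source SimpleITK toolkit (an assumed tool, not one named in the article) might look as follows; the file names are placeholders:

```python
import SimpleITK as sitk

fixed  = sitk.ReadImage("ct.nii.gz",  sitk.sitkFloat32)   # placeholder file names
moving = sitk.ReadImage("mri.nii.gz", sitk.sitkFloat32)

# Six-variable rigid transform: three rotations plus three translations.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)

rigid = reg.Execute(fixed, moving)
aligned = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
```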

Deformable Image Registration


Deformable image registration contains a nonrigid transformation model that specifies
the way to deform one image to match another. Rigid registration is almost always performed first to determine an initial position, using a rigid transformation with six variables (three translations and three rotations). For a nonrigid transformation, the number of variables will
increase dramatically, up to three times the number of voxels. Common deformable
transformations are spline-based with control points, the elastic model driven by image
similarity, the viscous fluid model with region growth, the finite element model using rigidity
classification, the optical flow with motion estimation, and free-form deformation (Chi,
Liang, & Yan, 2006; Crum, Hartkens, & Hill, 2004; Lu, Olivera, Chen et al., 2006b).
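To make the explosion in variable count concrete, the following sketch (again SimpleITK, and again an assumption rather than the article's own method) swaps the six-variable rigid transform for a spline-based free-form deformation; even a modest 8 x 8 x 8 control-point grid yields roughly 4,000 parameters:

```python
import SimpleITK as sitk

fixed  = sitk.ReadImage("phase0.nii.gz", sitk.sitkFloat32)  # placeholder 4D-CT phases
moving = sitk.ReadImage("phase5.nii.gz", sitk.sitkFloat32)

# B-spline transform with control points: 3 * (8 + 3)^3 = 3993 parameters
# for a cubic spline on an 8 x 8 x 8 mesh, versus 6 for the rigid case.
bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMeanSquares()                      # similarity-driven deformation
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)  # suited to many variables
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(bspline, inPlace=True)

deformable = reg.Execute(fixed, moving)
print("free variables:", len(deformable.GetParameters()))
```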
A Challenge from 3D/4D Conformal Radiotherapy: Deformable Image Registration
Broadened Concept of 4D Medical Imaging
The 4D imaging concept has been
broadened to cover various time resolutions. The common 4D image has subsecond
temporal resolution (Pan et al., 2004), while a series of 3D images, reflecting patient
changes over a longer time span, should also qualify as a 4D image with sufficient
resolution to assess slower changes, including tumor growth/shrinkage and weight
gain/loss during a course of treatment.
Challenges in Deformable Image Registration
In deformable image registration, the underlying anatomy changes; therefore, voxel mapping among images is a challenge. First, the deformable
transformation handles a large number of positioning variables that must be determined for
every voxel within the anatomy. This can be abstracted as a multivariable optimization
problem in mathematics, limiting the performance of deformable image registration for
many years (Crum, 2004). Second, deformable registration is extremely difficult to validate, as there is no absolute reference for the locations of corresponding voxels. Therefore, the accuracy and reliability of deformable registration should be evaluated on a case-specific basis (Sanchez Castro, Pollo, Meuli et al., 2006; Wang, Dong, O’Daniel et al.,
2005).
Gaps between Frontier Research and Clinical Practice
Despite the advances in 3D and 4D imaging and image registration, 2D-based rigid registration techniques are predominantly used in the clinic, although automatic rigid registration methods exist in most commercial treatment planning software. Two reasons
are primarily responsible for this disconnect: First, the user must visually verify the final
registration using the 2D-based visualization tools available for image fusion in most
commercial software. Second, most clinical images have some degree of pre-existing
deformation, so automatic rigid registration can prove unreliable, whereas manual methods
allow the user to perform local organ registration. Some recent commercial software has
recognized this problem and provides the option of selecting the region-of-interest to ease
the deformation problem. This method, however, is only a partial solution to cope with
image changes.

FUTURE TRENDS
3D rigid image registration will dominate clinical practice and will remain essential as
more specialized complementary 3D imaging modalities become clinically relevant.
Although the simplicity of automatic image registration is more attractive, manual image
registration with 2D/3D visualization is irreplaceable because it permits incorporation of
medical knowledge for verification and adjustment of the automatic registration results. As
awareness of the problems of the patient motion and anatomic changes increases, further
research on 4D imaging and deformable registration will be stimulated to meet the clinical
demands. Motion correction in the PET/CT and SPECT/CT will continue to improve the
“coregistration” of these images. Interdisciplinary approaches are expected to offer further
improvements for the difficult registration problem. With advances in hybrid registration
algorithms and parallel computing, more progress is expected, resulting in improved
accuracy and performance.

CONCLUSION
Higher dimensional deformable image registration has become a focus of clinical
research. The accuracy, reliability, and performance of 3D/4D image registration have
been improved with assistance of image segmentation and visualization.

The ABC Approach and the Feminization of HIV/AIDS in Sub-Saharan Africa
INTRODUCTION
The growing prevalence of HIV/AIDS infections among women in African nations
south of the Sahara is a complex and pressing public health concern. In this article, we
examine how HIV/AIDS prevention campaigns construct women as the new face of
HIV/AIDS in Sub-Saharan Africa. We do so by providing a feminist analysis of the US
Government’s Abstain, Be faithful, and use Condoms correctly and consistently (ABC)
health campaign. President Bush’s Emergency Plan for AIDS Relief is the largest
commitment ever made by a single nation towards an international health initiative—a
five-year, $15 billion approach to combating HIV/AIDS. The centerpiece of the prevention
component of this plan is the ABC approach (Office of the United States Global AIDS
Coordinator, 2005). Abstinence, according to this theory, should take precedence for
people who are not in a relationship. Those who are in a relationship should remain faithful
to their partners. And if the first two strategies fail for any reason, condoms should be used
to prevent the transmission of HIV. Global AIDS Coordinator Randall Tobias endorsed a
provision in U.S. law requiring that at least one-third of all U.S. assistance to prevent
HIV/AIDS globally be reserved for “abstinence-until-marriage” programs. In effect, this
makes “abstinence-until-marriage” advocacy the single most important HIV/AIDS
prevention intervention of the U.S. government.

BACKGROUND
Women in Sub-Saharan Africa have become the new face of HIV/AIDS. While calling
attention to women may help to end their silent suffering, if not done sensitively, it may
unwittingly reproduce a discourse that depicts Africa in largely pessimistic terms. Media
images of Black children with emaciated bodies, impoverished communities facing
environmental and epidemic catastrophes, and bare-breasted women standing beside
grass huts are imprinted on the collective consciousness of citizens in the West. The
internet provides a global forum for disseminating “afropessimism” through a broad range
of communication channels, including televised and printed media reports, news outlets,
medical journals, Web sites, press releases, and policy documents, among others.

Abstinence
The feminization of AIDS focuses efforts on protecting “vulnerable” women and their
children. Female sexuality is constructed around purity, self-restraint, and the denial of
sexual pleasure, with chastity and morality as the underlying logics (Cheng, 2005). Thus,
the health message for women is to abstain from premarital and extramarital sex
—“Abstinence is the only sure way to prevent sexual transmission of AIDS and other
sexually transmitted diseases.”
Be Faithful
The “be faithful” message privileges mutually faithful monogamous relationships in the
context of marriage as the expected standard of human sexual activity (Collins, Alagiri, &
Summers, 2002). However, mounting evidence suggests that married monogamous
women are among the groups at greatest risk of infection. For instance, in Kenya and
Zambia, data reveal higher rates of infection among young married women (age 15 to 19)
than among their sexually active, unmarried (female) peers. These studies found that the
rate of HIV infections in husbands was higher than in the boyfriends of sexually active
single teenage women. Women in marital relationships were also more frequently exposed
to unprotected sex (UNAIDS/WHO, 2004). Women with no economic independence feel
constrained to adopt whatever behavior is necessary to protect their marital status,
including overlooking their partner’s infidelities (Gupta, 2000).
condoms
Consistent and correct use of condoms is the third component of the ABC health
campaign. This message is generally targeted at heterosexual women, and suggests that
women should act assertively to control the course of their sexual encounters to ensure
that the male partner uses a condom (Gavey, McPhillips, & Doherty, 2001). This message
may, however, be problematic for several reasons. First, the discourse of condom use is
couched in Western notions of individualism and personal responsibility. This creates
contradictions in gender roles and personal identity, such as a woman’s desire to be a faithful and committed partner. It also contradicts the cultural significations of sexual intercourse
as an expression of monogamy, commitment, love, and trust. This demand for condom use
also calls for women to enact assertive sexual behaviors that may go against a feminine identity premised on acquiescence, irrespective of a woman’s desire not to have unprotected sex. Some women
think of sex as something that happens to them, rather than something they choose. Thus,
if a woman’s desire to acquiesce overrides her desire to be assertive, then condoms
provide little support. For instance, a study in Zambia found that fewer than 25% of the
women interviewed believed that a married woman could refuse to have sex with her
husband, even if he had been demonstrably unfaithful and was infected. Only 11% thought
that a woman could ask her husband to use a condom in these circumstances (UNIFEM,
2001).

DISCUSSION
While the discourse surrounding the ABC approach appropriates a feminist tone in its
concern for women, it is in some ways limited in its promotion of legitimate debate on
gender relations. According to Kofi Annan, former UN Secretary General, the ABC
approach requests individual change without enacting the societal change that would
facilitate women’s agency. The driving forces for HIV transmission in southern Africa are
linked to structural inequities such as poverty, the economic and social dependence of
women on men, and a fear of discrimination that prevents people from openly discussing
their status. Women are not able to disclose to their partners that they may have been
exposed to HIV in case they are vilified, deserted, and left destitute. Society’s inequalities
also put them at risk through the lack of access to AIDS treatment, coercion by older men,
and men having several partners (UN Office for the Coordination of Humanitarian Affairs, 2004).
This broader sociopolitical context contributes to the AIDS crisis in Africa. Ecological
degradation, migratory labor systems, rural poverty, and civil wars are the primary threats
to African lives (Geshekter, 1995).

Acoustic Feature Analysis for Hypernasality Detection in Children


INTRODUCTION
In the treatment of children with repaired Cleft Lip and Palate (CLP), problems such as hyponasality and hypernasality, which are related to vocal emission and resonance, might appear. According to the report presented in Castellanos et al. (2006), hypernasality is found far more frequently than hyponasality (90% vs. 10%). Interest in hypernasality detection stems from the fact that its occurrence points to problems of an anatomical and neurological sort that are also related to the peripheral nervous system (Cairns, Hansen, & Riski, 1996). The presence of hypernasality, understood as the leak of nasal air and compensatory articulations, leads to low intelligibility of speech. This decline in the subject’s communication capabilities may end up producing behavioral and social changes. In velopharyngeal incompetence, the distortion of the acoustic production leads to a nasalized voice. Moreover, when air loss or nasal leak is massive, articulatory mechanisms are compromised. The patient cannot speak clearly and intelligibly, and therefore replaces the velopharyngeal sphincter with a glottal articulation that allows for clearer speech: /p/, /t/, /k/, /b/, /d/, /g/ become glottal stops, while sounds like /ch/, /s/, /t/, /j/ are accompanied by hoarseness (Habbaby, 2002).
Although the hard palate may have been repaired surgically, it might not provide the velopharyngeal competence necessary for normal speech production. Even if the palate is potentially capable after surgery, previous speech habits might have developed compensatory articulations or physiologic compensations that aimed to approximate intelligibility but that increase the number of pathological patterns in speech. As a result, compensatory articulations generally persist, even after technical or surgical intervention that was expected to produce complete velopharyngeal closure. Thus, they have to be corrected before the performance of the velopharyngeal sphincter can be improved through language therapy.

BACKGROUND
Nasalization and Nasal Emission
Nasalization is defined as the coupling between the nasal cavity and the rest of the vocal tract, while nasal emission refers to abnormal air loss through the nasal route. This abnormal leakage reduces intra-oral pressure, causing distortion in consonants. When the air loss turns into an audible blowing noise, the nasal emission is more obstructive and speech is seriously affected. Nasality, commonly called hypernasality, refers to low speech quality, which results from inappropriate coupling of the resonance system to the vocal tract. In contrast to nasal emission, nasality does not involve large flows of nasal air, so there is no significant change in intra-oral air pressure. For this pathology, identification studies based on signal modeling (specialized diagnosis) can be related to acoustic features by using pattern recognition techniques.
Acoustic Features and Multivariate Analysis
Acoustic features can be split into two categories according to the acoustic properties to be measured. The first category is based on additive noise and includes: the Harmonic to Noise Ratio (HNR); the Normalized Noise Energy (NNE); the Glottal Noise Excitation (GNE), a noise estimate based on the assumption that glottal pulses resulting from collisions of the vocal folds lead to a synchronous excitation of the different frequency bands; the Normalized Error Prediction (NEP), which can be expressed as the relationship between the geometric and arithmetic means of the spectral model; and the Turbulence Noise Index (TNI). Other acoustic features are associated with frequency modulation noise, among them pitch, or the fundamental period of the signal, and jitter, defined as the average percentage variation between two consecutive pitch values. In addition, features associated with parametric models of speech generation are considered, among them cepstral coefficients derived from linear prediction analysis (LPC, Linear Prediction Coefficients), cepstral coefficients over a warped frequency scale (MFCC, Mel-Frequency Cepstrum Coefficients), and RASTA (Relative Spectral Transform) coefficients (Castellanos, Castrillón, & Guijarro, 2004).
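To illustrate how a few of these features might be computed in practice, here is a hedged sketch using the librosa library (not used in the article); the pitch range and the simplified jitter formula are assumptions:

```python
import numpy as np
import librosa

y, sr = librosa.load("papa.wav", sr=None)  # placeholder recording of /papá/

# Pitch (F0) track over voiced frames; a range of 80-400 Hz is assumed for children.
f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=80, fmax=400, sr=sr)
f0 = f0[~np.isnan(f0)]
pitch_mean, pitch_std = f0.mean(), f0.std()

# Jitter: average percentage variation between consecutive pitch periods
# (a simplified rendering of the definition given above).
periods = 1.0 / f0
jitter = 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

# 13 Mel-Frequency Cepstrum Coefficients, averaged over analysis frames.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)

features = np.concatenate([[pitch_mean, pitch_std, jitter], mfcc])
```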

EXPERIMENTAL BACKGROUND
Database
The sample consists of 68 children. The classes are balanced (34 patients with normal voice and 34 with hypernasality) and were evaluated by specialists. Each recording comprises five Spanish words: /coco/, /gato/, /jugo/, /mano/, and /papá/. Signals were acquired under low-noise conditions using a dynamic, unidirectional (cardioid) microphone. The signal range is (−1, 1).

INITIAL FEATURE SPACE


The complete set of considered features is: pitch (F0) (mean value and standard
deviation) (Manfredi, D’Aniello, Bruscaglioni, & Ismaelli, 2000; Sepúlveda, Castellanos, &
Quintero, 2004), Jitter, Shimmer (Childers, 2000), TNI (Hadjitodorov & P.Mitev, 2002),
HNR (Yumoto & Gould, 1982), NNE (Kasuya, Ogawa, Mashima, & Ebihara, 1988), and
Mel Frequency Cepstrum Coefficients (MFCC) (13 coefficients) (Huang, Acero, & Hon,
2001). Depending on the analysis word, the aforementioned features can be extracted for
one or two voiced segments, which implies that each register is represented by 20 or 40 features.
Data Preprocessing
The main goal of data preprocessing is to reduce or even eliminate the influence of measurement errors, among them systematic errors during acquisition, occasional failures of the measurement instruments, and so on. It is also used to check the homogeneity of the different statistical properties of the phenomena under analysis. Data preprocessing consists of analyzing abnormal values for each feature and assessing their normality (Peña, 2002).
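A minimal sketch of such a screening step is given below; the interquartile-range (Tukey) criterion is an assumption, since the article does not state which abnormal-value test was applied, and the 10% discard threshold is taken from the Results section:

```python
import numpy as np

def outlier_fraction(x, k=1.5):
    """Fraction of values outside the Tukey fences (IQR rule, assumed here)."""
    q1, q3 = np.percentile(x, [25, 75])
    low, high = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return np.mean((x < low) | (x > high))

def screen_features(X, max_outliers=0.10):
    """Keep only feature columns with fewer than 10% abnormal values."""
    keep = [j for j in range(X.shape[1]) if outlier_fraction(X[:, j]) < max_outliers]
    return X[:, keep], keep
```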
Classification
The employed classifier is Bayesian. Five classifiers of this kind are used, one for each of the words previously mentioned, each analyzing the error between the classes (hypernasal and normal/control). Besides, leave-one-out cross-validation was conducted to observe the variation in the classifiers’ parameters and their generalization capability (Webb, 2002).
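A sketch of this evaluation, using a Gaussian naive Bayes classifier from scikit-learn as a stand-in (the article says only that the classifier is Bayesian) with leave-one-out cross-validation, might look as follows:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: 68 children x 20 features for one analysis word;
# labels 0 = normal/control, 1 = hypernasal (34 per class).
X = np.random.randn(68, 20)
y = np.repeat([0, 1], 34)

# One Bayesian classifier per word, validated with leave-one-out:
# each child is held out once and classified by a model trained on the rest.
scores = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```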

RESULTS
Acoustic feature effectiveness can be measured according to classification
performance. Next, the results are displayed for each of the stages. Abnormal-value detection is conducted for each feature, which clarifies the quality of the measurements. Features with 10% or more outliers in either population sample are discarded. The acoustic feature results are shown in Table 1. The average reduction for the word set is between 30% and 35%.

Adoption of ICT in an Australian Rural Division of General Practice


INTRODUCTION
Many information technology (IT) products have been developed to support medical
general practitioners (GPs) in all aspects of their work (GPSRG, 1998), and much research
and development in this area has already been done. It is apparent, however, that GPs are
not making as much use of these systems as they could. Our research showed that there
is still reluctance, particularly among rural general practitioners, to fully implement
information and communication technologies (ICT) in primary health care in rural Australia
(Everitt & Tatnall, 2003). While a simple analysis of the statistics of the numbers of
computers in medical practice shows that there are computers in most general practices, it
is not so clear how, or even whether, they are being used. Rural GPs, however, operate
very much in the mode of small business (Burgess & Trethowan, 2002). Some national
research shows that GPs use ICT mainly for administrative and some clinical functions, but
that much less use is made of online functions (NHIMAC, 1999; GPCG, 2001). This is even
more pronounced for rural GPs. While one might expect that highly educated professionals such as most GPs would be at the forefront of the information management revolution, our research has shown that this is not entirely the case (Everitt & Tatnall, 2003). It appears that the slow uptake has continued to some degree in all areas of
medical general practice, despite continued support and promotion of computer use
(Tatnall et al., 2004). The Commonwealth Department of Health and Aged Care, along with
the General Practice Computing Group, also report that general practitioners in Australia
are still being encouraged via their Divisions of General Practice to adopt electronic
information systems to enhance clinical and practice management. However, Richards
(1999) notes, “The adoption of computers by Australian general practitioners has been
slow in comparison with other English speaking countries.” It is clear that over a long
period of time, much research and development have been done on the use of medical
ICT, and many products have been developed.

METHODOLOGICAL FRAMEWORK FOR THE RESEARCH


Many factors and entities are involved in determining how GPs adopt and use ICT,
and any approach that ignores the inherent complexity of this sociotechnical situation is
unlikely to produce useful answers. This chapter will argue that a qualitative approach
using the Actor-Network Theory (ANT) allows for the views of all subjects to be fully
documented and explained. This approach was necessary because the complexity of
health delivery in Australia has increased the need to manage information in medical
practices, leading to a multistakeholder environment involving human and nonhuman
entities. Our research aimed to explain the patterns of computer use by rural GPs in an
Australian Division of General Practice, and to draw out the factors that contribute to
patterns of computer use. In undertaking this research, we compared two theoretical
approaches that seek to explain the adoption and use of computers by GPs: Innovation
Translation (from the Actor-Network Theory) and the better-known theory of Innovation
Diffusion.

RESULTS AND CASE STUDY


A significant difficulty experienced in collecting data for this project related to making
contact and establishing working relationships with individual GPs, who typically must be
accessed via their practice manager. As GPs have such a busy schedule, making time to
take part in a research project can be a big time sacrifice and, as they are aware of this,
the practice managers are often reluctant to facilitate access. We have learned to regard
the practice managers as some of the key contacts in the research project, and as
important people who were not at first acknowledged as being important to the study. They
act as gatekeepers of the information, so to speak, and are important contacts in reviewing
how IT works in medical practices. Another issue arose around doing qualitative research
in an industry that is only just coming to grips with research of this type and the processes
involved.
Continually having to justify the premise for qualitative research certainly keeps the
researcher on his or her toes and ensures that the source of funding is given credit. The
study also provided a chance to sell qualitative practice in the health sector and contribute
in a very positive manner to the development of research methodology in this sector. The
following data relate to a rural Division of General Practice not far from Melbourne,
Australia, where this study was conducted. The data come from 98 participating GPs
(approximately two-thirds of those currently practicing in this division), of whom 89 were
male and nine were female. Of these GPs, 42 practiced full-time and 56 part-time. The
most common practice size comprised four GPs (there were 10 of these practices)
followed by four larger practices (with more than four GPs) and two solo medical practices.
There were 55 GPs in the age range 25 to 35, 27 from 36 to 45, and 16 older than 45.
Each practice made some use of computers, and all had been doing so for at least 12
months, with about 60% using them for both clinical and administrative purposes and the
remainder using them only for practice administration.
Advances and Trends in Tissue Engineering of Heart Valves
INTRODUCTION
Improvements in health care and treatment of diseases have led to an increase in life
expectancy in developed countries. However, this achievement has also inadvertently
increased the prevalence of chronic illnesses such as cardiovascular disease, adding to
the growing burden of health care cost globally. Unfortunately, this trend is expected to
escalate in the foreseeable future. Cardiovascular disease remains one of the main
problems in contemporary health care worldwide, accounting for approximately one third of
the world’s total deaths (Poole-Wilson, 2005). This article focuses on a subgroup of
cardiovascular disease known as valvular heart disease whereby abnormalities or
malfunctions of the heart valves are detected. It is estimated that 93,000 valvular surgeries
were conducted in the United States in 2002 (American Stroke Association & American
Heart Association, 2005) and valve replacement surgeries accounted for 75% of the
surgeries performed for valvular defects in Australia and, of those, 56% were for aortic valves
(Davies & Senes, 2003).

BACKGROUND
Currently, there are two types of artificial heart valves used in valve replacement
surgeries: mechanical and tissue valves. However, these prostheses are not without
limitations. Mechanical valves are usually made from pyrolytic carbon attached to a PET-covered metal frame, such as titanium. Although more durable than tissue valves, patients
implanted with mechanical valves are subjected to long-term complications such as
thromboembolism, leading to life-long administration of anti-coagulants (Bloomfield,
Wheatley, Prescott, & Miller, 1991; Oxenham et al., 2003). Alternatively, tissue valves
created from biological tissues from human or animal (porcine or bovine) may be used.
While tissue valves do not require long-term anticoagulants, they undergo progressive
deterioration such as calcification and tearing of cusps, leading to structural failure
(Hammermeister, Sethi, Henderson, Oprian, Kim, & Rahimtoola, 1993; Schoen & Levy,
2005). Moreover, these clinically used prostheses are incapable of growth or remodelling.
Hence, extensive research and development is being conducted worldwide to explore the
potential of an emerging field, Tissue Engineering (TE), as a solution for addressing the
shortcomings of current prosthesis used in valve replacement surgeries.

RAPID PROTOTYPING TECHNOLOGIES


Recently, an increased interest has been generated for Rapid Prototyping (RP)
techniques as powerful tools for fabrication of scaffolds. RP techniques may be able to
address some of the limitations encountered in the conventional techniques. There are
three types of RP techniques discussed in this article: fused deposition modeling (FDM),
3D printing, and bioprinting. FDM is a material deposition process which uses a computer-
aided design (CAD) model to generate 3D scaffolds (Masood, Singh, & Morsi, 2005). The
scaffolds are generated through the extrusion of thin rods of molten polymer using a
computer-controlled XYZ robotic dispenser (Figure 1). The layers of polymer are deposited
in an interconnected manner, thus improving the mechanical stability of the scaffold. FDM
enables complex yet accurate characteristics to be reconstructed from CT scans, which
leads to the ability to create scaffolds customised to patients’ needs. Scaffolds
demonstrating the complex geometry of the aortic valve which incorporated the exact
dimensions of the sinuses of Valsalva (required to preserve the flow characteristics of the
valve) were successfully manufactured using FDM (Figure 2) (Morsi & Birchall, 2005). This
technique offers a high degree of control over the shape, pore interconnectivity, and
porosity of scaffolds as individual process parameters can be defined and improved (Ang
et al., 2006). A high resolution of 250 µm can be achieved with FDM. An added
advantage of the FDM technique is that the process does not utilise toxic solvents and
porogens for the manufacturing of scaffolds (Leong, Cheah, & Chua, 2003; Yang, Leong,
Du, & Chua, 2002). The flexibility of this technique lends itself to produce scaffolds of
varying designs and complexity, thus expanding its application to other areas of TE aside
from heart valves.

CURRENT RESEARCH AND FUTURE TRENDS


RP is a technology that is developing rapidly, diverging from its traditional role in
engineering to being used as a powerful tool in medicine, for example, in the area of
prosthetics, surgical planning, and medical instrumentation. The transition from a
technology mainly used in engineering to medicine was eased by the fact that computed
tomography (CT) techniques used to scan various parts of the body operate via similar
layer-based technologies as RP. Data obtained from the CT scans can be interpreted and
translated to the RP system accurately. RP techniques have been used in hip
replacements and maxillofacial prostheses where models are created so that the resulting
implant can be fitted onto the patient precisely (Eggbeer, Bibb, & Evans, 2006; Sanghera,
Naique, Papaharilaou, & Amis, 2001; Sykes, Parrott, Owen, & Snaddon, 2004). Complex
and challenging tasks, such as sculpting an ear cast, have also turned to RP so that an
accurate model of the patient’s ear can be generated in a time-efficient manner (Al Mardini,
Ercoli, & Graser, 2005). Moreover, these techniques have been implemented to aid
surgeons in their surgical planning for reconstruction procedures and act as guides during
the procedure so that optimal outcomes, both clinically and aesthetically, were obtained, as
well as for education purposes such as improving the understanding of trainees (Muller,
Krishnan, Uhl, & Mast, 2003; Poukens,Haex, & Riediger, 2003; Toso et al., 2005).

CONCLUSION
RP is being increasingly used in medical research due to its flexibility in creating a
variety of complex shapes which can be custom made to the patient’s specifications. This
ensures that implants will be optimally positioned in individual patients, thus improving the
quality of treatment. Manufacturing scaffolds using RP technology also enables the transformation from design to a 3D model in a time-efficient and cost-effective manner.
Additionally, the operation of RP systems requires minimal human resources as most of
the processes are automated. The ultimate goal is to refine existing RP technology so that
it is applicable to medical research whereby scaffolds of heart valves fabricated from
biocompatible and biodegradable polymer can be achieved using direct RP techniques.
Such an achievement will not only produce heart valve scaffolds that are anatomically
correct but ones that will degrade as extracellular matrix and tissue gradually take over the scaffold, bringing it a step closer to the generation of a true living tissue.

Advances in Bone Tissue Engineering to Increase the Feasibility of Engineered Implants


INTRODUCTION
Millions of patients experience bone loss as a result of degenerative disease, trauma,
or surgery (Xu, Othman, Hong, Peptan, & Magin, 2005). Healthy bone tissue constantly
regenerates itself and remodels its architecture to meet the mechanical demands imposed
on it, as described by Wolff’s “Law of Bone Remodeling” (Wolff, 1986). However, this
capacity is severely limited when there is insufficient blood supply, mechanical instability,
or competition with highly proliferating tissues (Pinheiro & Gerbei, 2006). Furthermore,
severe bone losses can be detrimental to individuals, because they reduce the bone’s
ability to remodel, repair, and regenerate itself (Luo et al., 2005; Nordin & Franklin, 2001),
ultimately resulting in the deterioration of a patient’s health, and, in some instances, death
(Luo et al., 2005). Because the repercussions of bone loss are severe, it is important to
replace lost bone in patients. The current gold standard for specific-site structural and
functional bone defect repair is autologous bone grafts (Mauney, Volloch, & Kaplan, 2005)
or autografts. While autografts do not present the problem of immune rejection, since the
bone tissue is being transplanted from another region of the patient’s own body (Rahaman
& Mao, 2005), they present certain complications such as significant donor site morbidity
(death of tissue remaining in the region from which the donor tissue was removed),
infection, malformation, and subsequent loss of graft function (Mauney et al., 2005).

BACKGROUND
One basic scheme of the bone tissue engineering process currently employed is
illustrated in Figure 1. Briefly, mesenchymal stem cells are obtained from the patient,
generally from the bone marrow (Stock & Vacanti, 2001). After a period of cellular
expansion, the cells are seeded on biodegradable and biocompatible scaffolds (Stock &
Vacanti, 2001). Poly-DL-lactic-coglycolic acid (PLGA), gelatin, and collagen scaffolds are
frequently employed as surfaces for bone tissue development (Wu, Shaw, Lin, Lee, &
Yang, 2006; Xu et al., 2005; Zhang et al., 2006). These scaffolds are supplemented with
bone differentiation promoting factors such as bone-morphogenic protein, dexamethasone,
and ascorbate-2-phosphate that enable the stem cells to differentiate into osteoblasts
(bone-forming cells) (Kim et al., 2005). After a substantial period of culturing, implantation of
the scaffold into the patient occurs, leading to bone restoration (Xu et al., 2005). Although
this process has the potential to treat bone loss, it is far from optimal. Formation of
engineered bone tissue currently takes several weeks (at least 3 to 4 weeks), resulting in
extensive waiting periods for patients (Cartmell et al., 2005). Since time is of the essence
for patients with bone loss, reducing the culture time of stem cells is necessary for implants
to be effective. In addition, a portion of the engineered tissue is destroyed during invasive
histological assessment conducted to confirm the formation of bone tissue. This form of
assessment can further increase patient waiting periods, as the portion of engineered
tissue used for testing is no longer available for implantation. A need exists for a bone
tissue engineering process that overcomes these problems.

DEVELOPMENT OF A FEASIBLE BONE TISSUE ENGINEERING PROCESS


The development of a feasible bone tissue engineering process calls for a
combination of a noninvasive stimulation device to reduce stem cell culture time, and a
noninvasive method for monitoring the growth of the engineered constructs. Of the various
methods available to reduce the culture time of engineered bone tissue, ultrasound (US)
stimulation may be the most promising one. This is because US is well known to be a
noninvasive technique (Buckwalter & Brown, 2004), which is very relevant to bone tissue
engineering because it ensures the integrity of the engineered constructs, which are
generally small and delicate. Moreover, previous studies have indicated that low-intensity
pulsed US, administered with a dose as short as 20 minutes per day, activated ossification
in vitro via a direct effect on osteoblasts and ossifying cartilage, after other animal and
clinical studies showed that low-intensity US accelerated bone healing in vivo (Xu et al.,
2005).

FUTURE TRENDS
Our recent work (Moinnes et al., 2006) made a giant stride in enhancing current bone
tissue engineering processes, and also lends support to the conjecture that a combination
of stem cell stimulation and noninvasive construct monitoring will increase the efficacy of
bone tissue engineering processes. Yet, our work still leaves room for advancement.
Notably, the optimal ultrasound parameters have yet to be determined. For instance, the
optimal duration or frequency of ultrasound treatment and the critical stage in the cell
differentiation process where it would have the greatest effect must be established;
knowledge of this type could significantly reduce the net duration of ultrasound
administration. Also, future researchers can study and quantify the effect of various
ultrasound operating frequencies on accelerating bone tissue formation in the engineered
constructs, and therefore deduce the optimal one for this type of tissue engineering; this
would require constructing specifically customized US transducers which would have
variable frequencies, since they are not available in the market. Additionally, the optimal
MRM parameters for tissue development monitoring must also be determined. These
examples of future research directions demonstrate the need for greater knowledge on the
optimization of enhanced bone tissue engineering processes.

CONCLUSION
The specific techniques described in this work are not expected to be the ultimate
ones that are to be utilized in future bone tissue engineering processes. Several major
parameters are to be modified when it comes to the actual application of such techniques
in a clinical setting, in order to completely substitute for the traditional autografts and
allografts, which are widely employed today. Those parameters are mainly dependent on
time and cost, among other constraints. Thus, it is undeniable that we are currently in the
very early stages of what is sure to be a long path towards restoring bone tissue in humans
using tissue engineering. Much time and research is still needed to progress from these
simple experimental preparations, where all environmental and physical conditions can be
utterly controlled, to human and clinical settings where unexpected physiological
fluctuations introduce uncontrollable variations. Notwithstanding, these experiments are a
good starting point on the path to complete bone restoration via tissue engineering.

Agent-Based Patient Scheduling


INTRODUCTION
Agent-oriented software engineering has, by many researchers, been dubbed the new
paradigm in software development, and from its original concepts in the early ‘80s, agents
and agent systems are now active research areas in computer science. This evolution
offers a promising approach to the development of patient scheduling systems.
Coordinating and processing a vast amount of complex variables, such a system should be
designed to stock and schedule a wide range of resources based on the patients’ health
condition and availability, drawing on the advantageous data control and optimization
abilities of agent technologies. This article presents the design of a working agent-based
patient scheduling system prototype.

BACKGROUND
Agent Systems Definitions
A software agent is an autonomous entity capable of performing actions and
interactions typically based on notions of beliefs and goals. In addition to autonomy and
pro-activeness (Wooldridge & Ciancarini, 2001), typical characteristics of agents are
anthropomorphism, situatedness, and social ability. Agent systems can consist of just one
such agent or a collection of agents performing different tasks based on individual or
common goals. Due to the individualistic characteristics described above, an agent system
can collectively draw on further advantages, including mobility, dynamic sizing, and
complex cooperation through negotiation.
Historical Context
The term agent can be traced back to the Actor Model first presented by Hewitt,
Bishop and Steiger (1973). This early concept simply defined agents as entities with a
memory address and computational behavior to help solve common tasks. In the late ‘80s,
the Belief-Desire-Intention (BDI) model was proposed (Bratman, Israel, & Pollack, 1988).
The model represented a novel approach of giving human properties to digital agents.
Through available information about the environment (beliefs), the agents are given a set
of certain possible actions (desires) which are activated based on agent goals (intentions).
With sophisticated communication, agents can interact to cooperatively achieve global
tasks and goals. But this coordination needs more than sufficient shared semantics. It also
requires planning and scheduling techniques to govern the order and partition of tasks.
Roughly, there are two general frameworks developed over the last decades to deal with
these challenges; namely, the partial global planning (PGP) algorithms, and the joint
intentions framework. PGP (Durfee & Lesser, 1991) was an early attempt at planning in a
distributed dynamic environment. By sharing and communicating intentions globally, the
framework allowed agents to make optimal decisions locally.
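As a toy rendering of the BDI vocabulary just introduced (an illustrative sketch only; the BDI literature defines far richer deliberation semantics than this), an agent's perceive-deliberate-act loop might be written as:

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # information about the environment
    desires: list = field(default_factory=list)     # possible actions/options
    intentions: list = field(default_factory=list)  # options the agent commits to

    def perceive(self, observation: dict):
        self.beliefs.update(observation)            # revise beliefs

    def deliberate(self):
        # Commit to the desires whose preconditions hold under current beliefs.
        self.intentions = [d for d in self.desires if d["when"](self.beliefs)]

    def act(self):
        for goal in self.intentions:
            goal["do"](self.beliefs)                # execute committed actions

# Hypothetical use in a hospital setting:
agent = BDIAgent(desires=[{"when": lambda b: b.get("room_free", False),
                           "do":   lambda b: print("schedule examination")}])
agent.perceive({"room_free": True})
agent.deliberate()
agent.act()
```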
Application to Patient Scheduling Systems
Decision support systems, and patient scheduling systems in particular, have become
an increasingly important factor in many hospitals and medical institutions (Manansang &
Helm, 1996). The primary goal of patient scheduling systems is to treat as many patients
as possible in the shortest possible time (Bartelt, Lamersdorf, Paulussen, & Heinzl, 2002).
The examination and treatment process for patients involves a high degree of uncertainty
regarding time spans and the resulting diagnosis, thus patient scheduling systems have
been deemed complex (ibid.). Modern patient scheduling system design focuses on
patients, rather than specific tasks or resources (Guo et al., 2004). Hence, patient scheduling systems exhibit many of the same characteristics as those recognized in the agents and agent systems literature. Characteristics like entity-focused design and high complexity and abstraction levels are well-founded identifiers in the agent-oriented literature. Some earlier proposals exist; most noteworthy is the MedPage project (e.g., Bartelt,
Wagner, & West, 2002), which is an ongoing attempt to introduce agent planning and
scheduling systems at German hospitals.

AGENT SCHEDULING SYSTEM DESIGN


Following the theories presented, this article proposes a patient scheduling technique
founded on software agents. Using well-established optimization theories from various
fields of science, including optimal decision processing and game theory, this section will
present an agent system labeled AgentMedic to effectively schedule patients in a medical
institution. The following four subsections will describe this system in detail. First, we define
the three distinct agent types used in the system, focusing on their tasks and goals.
Second, the communication and utility data flow between these agents are introduced. The
third subsection defines the optimal decision functions used to determine the relevant
value of patients and their position in the treatment cycle. And lastly, the optimization and
scheduling processes are presented.
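As a rough preview of how utility-driven scheduling can work, the hypothetical sketch below ranks patient agents by a made-up utility function and lets a resource agent serve them in that order; the article names these components but does not give formulas, so everything here is an assumption:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class PatientRequest:
    neg_utility: float                      # heapq is a min-heap, so store -utility
    patient_id: str = field(compare=False)

def utility(severity: float, waiting_hours: float) -> float:
    """Hypothetical decision function; the real system would weigh health
    condition, availability, and resource constraints."""
    return severity + 0.1 * waiting_hours

queue = []
for pid, sev, wait in [("p1", 3.0, 2.0), ("p2", 5.0, 0.0), ("p3", 2.0, 20.0)]:
    heapq.heappush(queue, PatientRequest(-utility(sev, wait), pid))

# Resource agent: assign each free examination slot to the
# highest-utility patient agent currently waiting.
while queue:
    nxt = heapq.heappop(queue)
    print(f"schedule {nxt.patient_id} (utility {-nxt.neg_utility:.1f})")
```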

Agent Specifications
When choosing the types of agents needed in such a system, it is convenient to
remember the typical agent characteristics presented earlier in this article. The design
should allow for the agents to make use of their anthropomorphic and pro-active nature, so
as to represent a live entity in the best possible way (Foner, 1993). Furthermore,
autonomous entities requiring both social and flexible behavior must, in particular, be
considered for agent abstraction (Wooldridge & Ciancarini, 2001).

CONCLUSION
Agent-based systems are an emerging approach to dealing with modern complex information systems. Furthermore, agents are well suited to handling large numbers of variables, using negotiation to deal with complex planning procedures. As such, this article
presents an agent-based patient scheduling system based on planning and optimization
algorithms from game theory. While the developed prototype from Hovland (2006) shows
that such an approach is feasible, it is still a long way from real-life application. The
presented approach demonstrates the core scheduling operations of such a system, but
assumes some simplifications, especially with regard to the utility functions of the patients.

Analytics: Unpacking AIDS Policy in the Black Community


INTRODUCTION
With advances in information technology analytics applications, society stands to
benefit greatly from health care innovation. The ability to link physicians, hospitals,
pharmacies, clinics, and patients to health information networks and clinical and financial
data management and analyses can prove to be invaluable in the diagnosis and treatment
of chronic episodes of illnesses such as AIDS/HIV. This access to data is a necessity in
order for hospitals and physicians to provide the highest level of safety and quality of care.
Providing the correct diagnosis and procedures is critical for the patient’s utmost care. With
the high costs associated with AIDS/HIV procedures, medications, and physician
consultants, the integration of IT can offset these costs and improve the efficiency of the
organizations. Factors such as cost of care and length of stay continue to drive health
service delivery, resource availability, and quality of care. Business analytics (BA), often
termed.

BACKGROUND
According to Data Bulletin (2003), between 1991 and 2003, per capita spending on
health care in the United States rose almost 95%, with little improvement in national health
metrics. Among policymakers, well-regarded media outlets, and others (Kovak, 2005), there
is widespread disagreement about a final solution to the problem of rising health care
costs. Moreover, there is equally widespread agreement that one element must be a large-
scale, systemic change in the uses of information technology for health care management
and delivery. Comprehensive IT systems have improved efficiency and productivity in
virtually every major industry, with the conspicuous exception of health care, based on
recent RAND reports (Fonkych & Taylor, 2005). Used primarily for administrative tasks
such as billing and scheduling, IT offers great promise for use in Electronic Medical Record
Systems (EMR-S) or as a clinical diagnostic aid. The AIDS/HIV epidemic continues to have
a riveting impact on the United States. In order to slow the epidemic, analytics enables the
field to improve upon its understanding of the dynamics behind the disease. There are an
estimated 800,000 to 900,000 people currently living with AIDS/HIV in the United States,
with approximately 40,000 new AIDS/HIV infections occurring in the United States every
year. More recently, gender has become a significant factor to pay attention to when
identifying new cases each year. For several years, men dominated the estimates of new
infections; women, in general, are now also significantly affected, and Black women, in
particular. Adapted from the Centers for Disease Control and Prevention (CDC), Figure 1 shows that 70% of new HIV infections each year occur among men, although women are also significantly affected and account for the other 30%.

BUSINESS ANALYTICS: HOW IT (UN)INFORMS US


Business analytics (BA) can better inform hospitals, insurers, and providers of two
critical factors in the wake of managed care and capitation: cost of care (COC) and length
of stay (LOS) for HIV/AIDS cases. Factors such as a patient’s age, race, gender, and high-
risk behaviors are significantly influential in determining COC and LOS. Moreover,
geographic location has emerged as a critical factor in the equation. Recent reports by
Newsweek (2006), UNAIDS (2004), and the Centers for Disease Control and Prevention
(2005) confirm the notion that HIV/AIDS cases have spiked disproportionately in rural
United States (particularly the South) and sub-Saharan African countries (particularly
Kenya, Tanzania, and South Africa) where people of color comprise significant proportions
of the total population. Using stepwise regression models, SAS Enterprise Guide software
(see www.sas.com), and a de-identified patient dataset of nearly 90,000 hospital
encounters, the author sought to determine how many African American persons had
HIV/AIDS. Table 1 shows that roughly 75% of the cases in the dataset represented Blacks,
while 23% were whites. Figure 3 plots these figures along the normal curve to illustrate the
vast differences in the number of cases between the two groups.
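For readers without SAS at hand, a rough Python equivalent of forward stepwise selection (one common stepwise variant; the exact criteria used in the study are not reported) could be sketched with statsmodels:

```python
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X: pd.DataFrame, y: pd.Series, enter_p: float = 0.05):
    """Forward stepwise selection on p-values: repeatedly add the candidate
    predictor with the smallest p-value until none clears the entry threshold."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter_p:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical use on encounter data with columns such as age, gender,
# region, and length of stay (los):
# df = pd.read_csv("encounters.csv")
# print(forward_stepwise(df[["age", "gender", "region"]], df["los"]))
```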

FUTURE TRENDS AND CONCLUSION


Business analytics are applications and methodologies that can augment our
understanding of a phenomenon in question. In this case, BA was used to analyze a
de-identified dataset related to medical encounters or visits to a teaching hospital. BA offers
statistical analyses, forecasting, and scorecards to describe the AIDS/HIV cases
discovered in the dataset. However, BA is only as sound as the data provided to its users.
Oftentimes, this will require that health care researchers analyze more than one dataset or
examine findings from other reference disciplines to enrich their interpretation of results.
The goal here is to offer the reader the opportunity to step back and reflect on the
discourse at hand and recognize that chronic diseases such as AIDS/HIV in vulnerable
populations (e.g., Blacks in rural South or in significantly vulnerable scenarios) are not
monolithic. A deeper understanding of broader public health frameworks can shed auxiliary
viewpoints on the data, thereby affecting one’s interpretation of the results.

Impact of RFID Technology on Health Care Organizations


INTRODUCTION
Radio Frequency Identification (RFID) technology has been considered the “next
revolution in supply chain management” (Srivastava, 2004, p. 60). Current research and
development related to RFID focuses on the manufacturing and retail sectors with the aim
of improving supply chain efficiency. After the manufacturing and retail sectors, health care
is considered to be the next sector for RFID (Ericson, 2004). RFID technology holds considerable potential for the health sector, especially with respect to asset management optimization. In fact, health expenses have increased substantially in
Organisation for Economic Co-operation and Development (OECD) countries in recent
years. In Canada, the public health budget amounted to $91.4 billion (CAD) for the year
2005–2006 compared to $79.9 billion in 2003–2004 (CIHI, 2005). Moreover, the health
care industry has been the focus of intense public policy attention. In order to curb this
upward trend, the public health sector in Canada is subject to strict budget constraints.
Among the different alternatives for reducing expenditures, the improvement of asset
management within the different health institutions appears to be worthwhile. RFID
technology seems to be a viable alternative to help hospitals effectively manage and locate
medical equipment and other assets, track files, capture charges, detect and deter
counterfeit products, and maintain and manage materials. In other words, health care
organizations would benefit particularly from RFID applications.

CURRENT CONTEXT OF THE HEALTH CARE SECTOR


The health care sector has been investing ever more money in information technology
(IT) to reduce operating costs and improve patient safety and medical services, and RFID
is expected to become critical to health care organizations in achieving these goals. The IT
implementation trend in health care works toward common IT platforms, which allow
patient and product information to be exchanged. For many observers, the adoption and
use of IT-related technologies, especially RFID, by health care organizations could boost
the effectiveness and efficiency of this information-intensive sector. However, the health
care sector has been relatively slow to embrace the full potential of IT initiatives. In
general, the implementation of IT in hospitals has not been particularly successful (Aarts,
Doorewaard, & Berg, 2004; Hersh, 2004; Pare, 2002). The major impediments appear to
be linked more to organizational issues than to technological problems (Southon, Sauer, &
Dampney, 1997). Among the many factors slowing down the implementation of IT
initiatives, previous studies have identified the complexity of health care organizations
(Glouberman & Mintzberg, 2001; Glouberman & Zimmerman, 2002), their inappropriate
organizational structure (Mintzberg, 1979), and integration issues (Christensen, Bohmer, &
Kenagy, 2000; Kumar, Subramanian, & Strandholm, 2002).
Characteristics of the Health Care Supply Chain
In response to all these constraints, some visionaries recognize supply chain processes as a key strategic factor supporting patient service and are already taking action to improve them. A study within certain hospital departments made it possible to identify the
priorities (Landry & Beaulieu, 1999). The priorities of administrative departments are
generally the review and improvement of the supply chain process, IT system
modernization, and system integration on a common platform used by other health care
organizations. In fact, Beaulieu, Duhamel, and Martin (2004) state that the integration of
procurement activities would improve efficiency by eliminating nonvalue-added activities;
this would allow health care organizations to concentrate on more strategic activities
(Landry & Beaulieu, 1999). According to some researchers, better resource monitoring and
allocation will reduce costs throughout the restocking chain (Blouin, Beaulieu, & Landry,
2001; Perrin, 1994). In addition, the procurement activities represent a large proportion of
health expenses. In a hospital, for instance, they range from 30% to 46% of all expenses
(Bourgeon et al., 2001; Poulin, 2004). Moreover, the health sector currently loses up to
15% of its assets due to inappropriate and inefficient monitoring procedures (Nabelsi,
2007). The larger the hospital, the bigger these problems are (Nabelsi, 2007).

RFID APPLICATIONS IN HEALTH CARE ORGANIZATIONS


The market potential is interesting: according to a recent study, the worldwide market
for RFID tags (active, passive and semi-active) and systems in the health sector will rise
from $90 million in 2006 to $2.1 billion in 2016 (Harrop & Das, 2006). The sale and
implementation of a complete RFID platform solution would obviously represent much
higher revenues for the participating solution providers. The application of RFID in
hospitals has received a great deal of attention over the last few years, with a “boom” in
early 2003 due to the rapid spread of Severe Acute Respiratory Syndrome (SARS) in
Taiwan (Li, Liu, Chen, Wu, Huang, & Chen, 2004; Wang, Chen, Ong, Liu, & Chuang,
2006). This emergent technology played a significant role in the global fight to contain
SARS: some hospitals tested an RFID infrastructure system that tracked the movement of
patients, medical professionals, and visitors in order to trace and identify when and where
people may have been in contact with patients infected by SARS (Li et al., 2004; Wang et
al., 2006). The system provides real-time information for tracing detailed patient medical
records and reading bio-information such as pulse, temperature, and respiratory rate (Li et
al., 2004).
new projects including several that will use RFID for supply chain management
applications. Tracking assets has become a potential area for improving hospitals’
performance (Anonymous, 2006a). In addition, the use of RFID technology for equipment
tracking in the health care supply chain can lead to a tremendous reduction in inventory
levels and better collaboration among supply chain players. For example, RFID can reduce
the time staff members spend looking for equipment they need, thereby improving the
utilization rate of equipment and cutting down on missing equipment (Ostbye et al., 2003).
Furthermore, the greatest use of RFID in the health care sector will be item-level labeling
of medical equipment, drugs, and other products, together with the infrastructure and
services to support it throughout the supply chain. RFID will also be used in health care
facilities to protect products against counterfeiting. The primary purpose will be to prevent
counterfeiting by establishing the full history of a given package at all times, known as its
pedigree.
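As an illustration of the pedigree idea, the sketch below shows one way such a custody record could be structured in code. It is a minimal sketch only: the class names, fields, sample EPC, and the completeness check are hypothetical assumptions, not part of any standard cited in this chapter.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CustodyEvent:
    """One hand-off in a package's chain of custody."""
    holder: str          # e.g., manufacturer, distributor, hospital pharmacy
    timestamp: datetime
    location: str

@dataclass
class Pedigree:
    """Full custody history for one tagged package (illustrative only)."""
    epc: str                                    # Electronic Product Code on the tag
    events: list[CustodyEvent] = field(default_factory=list)

    def record(self, holder: str, location: str) -> None:
        self.events.append(CustodyEvent(holder, datetime.now(), location))

    def matches_chain(self, expected_chain: list[str]) -> bool:
        """Flag possible counterfeits: holders must match the expected chain."""
        return [e.holder for e in self.events] == expected_chain

ped = Pedigree(epc="urn:epc:id:sgtin:0614141.107346.2017")  # hypothetical EPC
ped.record("Manufacturer", "Plant A")
ped.record("Distributor", "DC 12")
ped.record("Hospital pharmacy", "Site 1")
print(ped.matches_chain(["Manufacturer", "Distributor", "Hospital pharmacy"]))  # True
```

A package whose recorded holders deviate from the expected chain would be flagged for inspection, which is the anti-counterfeiting role the pedigree is meant to play.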
Research Design
Since the main objective of this case study is to gain a better understanding of the
potential for RFID technology in the context of warehousing activities in one specific supply
chain, the research design corresponds to an exploratory research initiative. This
appears appropriate since it enables researchers to examine a new area of interest. Moreover, a
“case study is a research strategy which focuses on understanding the dynamics present
within single settings” (Eisenhardt, 1989, p. 534). This research strategy is consistent with
the goal of our study. The field research was conducted in 12 consecutive steps. This
methodology was developed at the ePoly research center (see Figure 1, adapted from
Lefebvre, Lefebvre, Bendavid, Fosso Wamba, & Boeck, 2005). The first five steps
correspond to an initial phase which can be broadly termed the “opportunity-seeking”
phase.
• Step 1: All health care organizations are highly motivated to find the best alternatives to
give patients better care and reduce time-consuming human administrative activities.
• Steps 2 and 3: Benefits are observed in several areas of opportunity related to product
information which is to be exchanged with partners and/or departments within the same
organization.
• Steps 4 and 5: Reflect the current context in terms of supply chain structures and
dynamics, and existing intra- and inter-organizational business processes. The
implementation of integrated IT platforms allows systems to be integrated and facilitates
the exchange of information.
The second phase constitutes “scenario building” to evaluate RFID opportunities; this
phase incorporates steps 6 to 10.
• Steps 6 and 7: Several scenarios are evaluated regarding the implementation of RFID in
applications such as tracking patients, equipment, and errors.
• Step 8: Business and technological concerns are evaluated. RFID implementation
provides increased integration with suppliers, as information is captured in the RFID tags.
This minimizes documentation exchanges, dramatically improves the quality of the
information transferred, and reduces the need for internal controls or manual activities
virtually to zero.
• Step 9: Business processes are redesigned to integrate RFID technology. By including
enterprise resource planning (ERP) and middleware integration, steps are taken toward
process automation; some of those steps have already been accomplished in some health
care organizations. A free flow of information, the identification of impacts on human
resources and supply chain alignment are assets in redesigning and simplifying the
business process activities.
• Step 10: Several scenarios are simulated. Suggestions are received from supply chain
departments regarding replenishment needs in the ER, product validation at the time of
receipt, the tracking of mobile beds and chairs, and so forth.
The final phase is to “validate the scenarios” retained from the second phase in a
controlled environment (Proof of Concept or PoC; step 11) and then in a real-life setting
(step 12).
• Step 11: PoC in laboratory. Product receiving and/or replenishment are reproduced in a
similar physical and technological environment to that found in the health care
organization. The main goal is to demonstrate the feasibility of RFID technology and
assess the ERP and middleware integration, process automation, information flow, and
human resources impact for all supply chain members. The equipment is acquired,
calibrated, and configured, and the business rules are identified and configured in various
middleware applications and integrated with the ERP engine. Finally, dry-run tests are
conducted to validate the IT and process integration at the level of all supply chain
members.
• Step 12: Based on the analysis of the PoC results, the decision to run beta tests in a
real-life setting is made; the application is deployed in the health care organization and
adopted by the staff involved.
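As an aside on step 11, the business rules configured in RFID middleware typically include simple read-filtering logic applied before events reach the ERP engine. The sketch below is a hypothetical example of one such rule; the function name, window length, and sample reads are invented for illustration and are not taken from the case organization.

```python
def smooth_reads(raw_reads, window_s=2.0):
    """Deduplicate dock-door tag reads within a sliding time window.

    raw_reads -- iterable of (epc, unix_timestamp) tuples from the reader
    Yields each EPC at most once per window, a typical middleware business
    rule used before read events are forwarded to the ERP engine.
    """
    last_seen = {}
    for epc, ts in raw_reads:
        # Forward the read only if this tag has not been seen recently.
        if epc not in last_seen or ts - last_seen[epc] >= window_s:
            last_seen[epc] = ts
            yield (epc, ts)

reads = [("EPC-1", 0.0), ("EPC-1", 0.4), ("EPC-2", 0.5), ("EPC-1", 2.5)]
print(list(smooth_reads(reads)))   # EPC-1 at 0.0, EPC-2 at 0.5, EPC-1 at 2.5
```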
Research Sites
One hospital was involved in the research design. This hospital is briefly described in
the following paragraph.
Hospital Profile
The nonprofit hospital is a regional health care facility and is one of the major
hospitals affiliated with the Université de Montréal; it is one of the largest hospitals in
Quebec (North America). The hospital currently employs 4,000 people and has 554 beds.
It has an operating budget totaling $231 million, of which $66 million in charges is
associated with supply chain activities. The hospital is a university center that provides
specialized and ultraspecialized services to a regional and supraregional clientele. The
hospital uses the systems, applications, products (SAP) platform to integrate and centralize
patients’ personal data such as patient indices from different sources, scheduling and visit
information, and patient movements (admission, transfer, and discharge information), and
to gather activity volumes related to clinical data from various sources, all of which are
valuable for the management of finances and controls. Supply chain activity is a target for
integration with external suppliers through the global healthcare exchange (GHX) platform,
an e-supply chain that automates the ordering process and electronic invoicing. Probably
the most complex IT integration process will start with the exchange of a single “Electronic
Patient File” with health care organizations all over Canada.
Data Collection
The case study methodology allowed us to employ multiple data collection methods to
gather information from one or a few entities (Benbasat, Goldstein, & Mead, 1987). Data
collection for the case study approach was based on:
1. Various focus groups were conducted in the university-based research center with
health care professionals (5) and IT experts (3). The main objective of these focus
groups was to reach a consensus on strategic intent with respect to the use of
RFID technology in one product value chain (steps 1, 2, and 3), and to evaluate
different scenarios and retain the “preferred” or “as could be” scenario (steps 7, 8,
and 9). Each consecutive step of the methodology illustrated in Figure 1 was
evaluated, validated, and agreed upon with members of the focus groups.
2. On-site observations at the research site were performed in order to carry out the
process mapping required for Steps 5, 6, and 9. The analysis of the current
inter-organizational business processes allowed the researchers to understand the
supply chain dynamics and the business environment.
3. Semistructured interviews were conducted with seven participants at the research site.
The participants in the case study were the department head responsible for the
logistics and distribution division (1) and some warehousemen (6). The purpose was
to document and obtain more detailed information, resolve potential inaccuracies in
the mapping of existing business processes, and ensure that our observations and
the results of the mapping were valid and representative of the normal flow of
operations (Steps 5 and 6). The researchers acted as observers, interviewers, and
facilitators (for focus groups). They also formulated and presented the detailed
scenarios that were developed from the empirical evidence gathered in the nonprofit
hospital.

RESULTS AND DISCUSSION


The health care professionals involved in this study are aware of the potential
advantages derived from the implementation of RFID technology. Their initial motivations
were focused on the reduction of manual interventions by employees for the overall
“receiving” and “verifying” processes and the increased accuracy of validation of controls
such as validate delivery order (DO) against purchase order(PO), and quantities of items
received vs. ordered. Within the scope of this article, our discussion will mainly focus on
the empirical results obtained from steps 5, 6, and 9 (Figure 2). These three steps
correspond to the mapping of current business processes and redesigned processes
integrating RFID technology. Proponents of the process-based approach believe that the
process view is “a more dynamic description of how an organization acts” (Magal, Feng, &
Essex, 2001, p. 2). Moreover, this process view provides a structured approach which
allows one to focus on value creation. The process view is also used increasingly often to
evaluate the impact of information technologies (Subramaniam & Shaw, 2004).
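To make the motivation above concrete, here is a minimal sketch of the kind of DO/PO quantity check the professionals wanted to automate once dock-door RFID reads are available. The EPC layout (a product code followed by a serial number), the function name, and the sample codes are hypothetical assumptions, not a description of the hospital's actual system.

```python
def validate_receipt(po_lines, rfid_reads):
    """Compare RFID reads at the receiving dock against the purchase order.

    po_lines   -- dict mapping product code -> quantity ordered
    rfid_reads -- list of EPCs read at the dock door; the product code is
                  assumed (hypothetically) to be everything before the serial
    Returns per-product discrepancies (received minus ordered).
    """
    received = {}
    for epc in rfid_reads:
        product = epc.rsplit(".", 1)[0]          # strip the serial number
        received[product] = received.get(product, 0) + 1
    products = set(po_lines) | set(received)
    return {p: received.get(p, 0) - po_lines.get(p, 0) for p in products}

po = {"sgtin:0614141.107346": 2}
reads = ["sgtin:0614141.107346.2017", "sgtin:0614141.107346.2018"]
print(validate_receipt(po, reads))   # {'sgtin:0614141.107346': 0}: quantities match
```

A nonzero entry would signal an over- or under-shipment at the moment of receipt, replacing the manual count the staff currently performs.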

Implementing RFID Technology in Hospital Environments

INTRODUCTION
A promising approach for facilitating cost containment and reducing the need for
complex manual processes in the healthcare space, RFID (Radio Frequency Identification)
technology enables data transport via radio waves to support the automatic detection,
monitoring, and electronic tracking of objects ranging from physicians, nurses, patients,
and clinical staff to walkers, wheelchairs, syringes, heart valves, laboratory samples,
stents, intravenous pumps, catheters, test tubes, and surgical instruments (Karthikeyan &
Nesterenko, 2005). RFID implementations streamline hospital applications and work in
concert with WLANs (wireless local area networks) and mobile devices such as cellular
phones and personal digital assistants (PDAs). RFID technology also safeguards the
integrity of the drug supply by automatically tracing the movement of medications from the
manufacturer to the hospital patient. This article begins with a discussion of RFID
development and RFID technical fundamentals. In the sections that follow, the work of
standards organizations in the RFID space is introduced, and capabilities of RFID solutions
in reducing costs and improving the quality of healthcare are described. RFID initiatives
and the security and privacy challenges associated with them are then explored. Finally,
trends in the use of RFID-augmented wireless sensor networks (WSNs)
in the healthcare sector are introduced.

BACKGROUND
RFID technology traces its origins to 1901, when Guglielmo Marconi first transmitted
radio signals across the Atlantic Ocean and demonstrated the potential of radio waves in
facilitating data transport via the wireless telegraph. During the 1930s, Robert Watson-Watt
developed radar, illustrating the use of radio waves in locating physical objects.
Initially used in World War II in military aircraft in what is now called the first passive RFID
system, radar technology enabled identification of incoming aircraft by sending out pulses
of radio energy and detecting echoes (Want, 2004). Libraries have used RFID technology
for electronic surveillance and theft control since the 1960s. Present-day RFID solutions
track objects ranging from tools at construction sites and airline baggage, to dental molds
and dental implants. RFID systems monitor the temperature of perishable fruit, meat, and
dairy products in transit, in order to ensure that these goods are safe for consumption, and
facilitate the detection of package tampering and product recalls (Want, 2005). The U.S.
Department of Defense (DoD) mandates the use of RFID tags as replacements for
barcodes for tracking goods (Ho, Moh, Walker, Hamada, & Su, 2005), and requires
suppliers to use RFID tags in equipment and clothing shipped to military personnel. RFID
technology is widely used by major retailers that include Home Depot and Wal-Mart in the
U.S., and Marks and Spencer in the United Kingdom to track inventory. In the
transportation and education sectors, credit cards that incorporate RFID technology enable
automatic transactions at gas stations and toll plazas and at university bookstores,
libraries, and cafeterias. RFID systems also facilitate building access, port security, vehicle
registration, and supply chain management; verification of the identity of pre-authorized
vehicles and their drivers at security checkpoints; and reduction in the circulation of
counterfeit goods and paper currency (Garfinkel, Jules, & Pappu, 2005).
RFID Technical Fundamentals
RFID systems consist of RFID tags, or transponders, and interrogators, or readers.
Classified as passive, semiactive, or active, an RFID tag is an extremely small device
containing a microchip (also called a silicon chip or integrated circuit) that, at a minimum,
holds digital data in the form of an EPC (Electronic Product Code). RFID tags are affixed to
or incorporated into objects, including persons and products (Weinstein, 2005). An RFID
tag is also equipped with an antenna enabling automatic receipt of and response to a query
from an RFID interrogator via radio waves (Myung & Lee, 2006). The RFID
communications process involves the exchange of an electromagnetic query and
response, thereby eliminating any dependency on direct line-of-sight connections.
Subsequent to transmission of the EPC from the RFID tag to the RFID interrogator, the
tagged object can be monitored and traced. Passive RFID tags are inexpensive but limited
in terms of the functions supported (Weinstein, 2005). Lacking a battery, a passive RFID
tag harvests energy from incoming radio waves when it is within range of an RFID
interrogator in order to transmit a response. A passive RFID tag contains the EPC in the
form of a 96-bit data string associated with a distinct object, plus several bits of memory
for storing data describing the tagged object. When multiple passive RFID tags transmit
EPCs concurrently in response to RFID interrogators, collisions occur, disrupting the
information flow. Designed to support passive RFID tag operations, the adaptive binary
splitting (ABS) collision arbitration protocol diminishes the occurrence of collisions,
significantly reducing delay and communications overhead in the transmission process
(Myung & Lee, 2006).
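For illustration, the sketch below simulates basic binary tree splitting, the idea that ABS refines (ABS additionally reuses slot counters across inventory rounds to cut down on redundant queries). It is a hedged sketch, not the protocol specification: on a collision, every tag in the colliding group draws a random bit, the 0-group retries first, and the 1-group waits. Function and variable names are invented for the example.

```python
import random

def binary_split_inventory(tags):
    """Identify all tags with a simplified binary-splitting scheme.

    Returns (identified_tags, number_of_query_slots). Empty groups count
    as idle slots, as they would on the air interface.
    """
    identified = []
    slots = 0
    pending = [list(tags)]              # stack of tag groups still to resolve
    while pending:
        group = pending.pop()
        slots += 1
        if not group:
            continue                    # idle slot: no tag answered
        if len(group) == 1:
            identified.append(group[0]) # exactly one reply: tag read
            continue
        # Collision: each tag draws a random bit and joins a subgroup.
        bits = {t: random.randint(0, 1) for t in group}
        pending.append([t for t in group if bits[t] == 1])  # resolved later
        pending.append([t for t in group if bits[t] == 0])  # resolved first
    return identified, slots

random.seed(7)
tags = [f"EPC-{i:03d}" for i in range(8)]
ids, slots = binary_split_inventory(tags)
print(f"read {len(ids)} tags in {slots} query slots")
```

Running the sketch shows why collision arbitration matters: the number of query slots grows with the number of collisions, which is exactly the overhead ABS is designed to reduce.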
Privacy and Security Considerations
RFID technology plays a vital role in monitoring the health and safety of patients in
hospitals and medical centers. Nonetheless, the ability to obtain real-time detailed
information as a consequence of RFID deployment also raises concerns about security
(Nath, Reynolds, & Want, 2006). Possible abuse of RFID tracking capabilities also raises
questions about potential violations of personal privacy (Ohkubo, Suzuki, & Kinoshita,
2005). Generally, patient-related information collected by RFID systems in the healthcare
space is extremely sensitive; it contains personal information that is protected by the
Health Insurance Portability and Accountability Act (HIPAA) and requires the enforcement
of strict privacy controls (Karygiannis et al., 2006). Data obtained from RFID tags
embedded in medical implants and patients’ wristbands for one purpose may be covertly
used for monitoring individuals without their knowledge or consent. Data obtained from
RFID tags embedded in consumer items such as shoes and clothing can potentially be
used by employers to surreptitiously monitor the work of employees, or by terrorists to
target attacks against specific political and ethnic groups. As a consequence of RFID
system abuse and its potentially adverse impact on data confidentiality and privacy, groups
such as CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbers) have
launched protest campaigns against manufacturers and retailers worldwide.
Managing Operating Room Cost in Cardiac Surgery with Three Process Interventions
INTRODUCTION
Health care providers in both public and private sectors are facing increasing pressure
to improve their cost efficiency and productivity. The increasing cost of new technological
solutions has driven the application of operations management techniques developed
for industrial and service processes. Meyer’s (2004) review of existing research shows
that, on average, operating rooms (ORs) operate only at 68% capacity. Using OR time
efficiently is especially challenging when long operations are scheduled to fixed OR block
time. This situation is typical in open heart surgeries where a high variability in the length of
required OR time combined with four and a half hour average OR time duration makes
scheduling two operations during a normal eight-hour workday difficult. The objective of
this chapter is to analyze the effect that three process interventions have on the OR cost in
OR performing open heart surgeries. The investigated process interventions are four days
OR week (4D), the better accuracy of operating room time forecast (F), and doing
anesthesia induction outside the OR (I).
These interventions emerged from a practical organizational context. This chapter is
organized as follows. First, we provide a review of the existing literature on measures of
OR utilization and the three investigated interventions. Based on the existing literature, we
construct a simulation model to test the interventions’ effects on OR utilization.
Conclusions are then presented, and practical implications and new contributions to the
existing theory of OR management are discussed.

BACKGROUND
Operating room efficiency is typically defined as raw utilization, that is, the percentage of
time that patients are in the operating room during resource hours (Donham, Mazzei &
Jones, 1996). This definition of OR efficiency, however, does not take into account the
cost of overused time, which emerges when operations run long. Thus, a more valid
measure of OR efficiency is a weighted sum of underused and overused OR time (Dexter,
2003). Estimates of the cost of overused relative to underused OR time vary in the
literature from 1.75 (Dexter, Traub & Macario, 2003) to 4 (Dexter, Yue & Dow, 2006).
Besides this relative cost, the total OR cost depends on the substitutive tasks available
during underused OR time. Therefore, when evaluating the effect of various process
improvements on OR cost, results have to be calculated with case-specific relative costs
for operating time, underused time, and overused time. In the next section, we consider
the investigated interventions from the point of view of the existing literature.
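As a worked illustration of this weighted measure, the sketch below computes the inefficiency cost of a single OR day. The function name and the unit cost of an underused hour are assumptions made for the example; the overused-to-underused ratio of 1.75 is taken from the low end of the range cited above.

```python
def or_inefficiency_cost(scheduled_hours, actual_hours,
                         underused_cost=1.0, overused_ratio=1.75):
    """Weighted sum of underused and overused OR time for one OR day.

    underused_cost -- cost of one idle (underused) OR hour, used as the unit
    overused_ratio -- relative cost of an overused hour; the literature cited
                      above reports values from 1.75 to 4
    """
    underused = max(0.0, scheduled_hours - actual_hours)
    overused = max(0.0, actual_hours - scheduled_hours)
    return underused * underused_cost + overused * underused_cost * overused_ratio

# An 8-hour block that runs 9.5 hours costs more than one ending an hour early:
print(or_inefficiency_cost(8.0, 9.5))   # 1.5 overused hours -> 2.625 cost units
print(or_inefficiency_cost(8.0, 7.0))   # 1.0 underused hour -> 1.0 cost unit
```

The asymmetry is the point of the measure: with any ratio above 1, running over is penalized more heavily than finishing early, which is why raw utilization alone is a misleading efficiency target.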

RESEARCH QUESTIONS AND METHODOLOGY


The research questions of this chapter are:
1. What is the effect of a four-day OR week, better accuracy of the operating room
time forecast, and performing anesthesia induction outside the OR, as well as their
combinations, on OR cost per patient?
2. How does the relative cost for operating time, slack time, and overtime impact the
effects of the interventions?
The effects of the three process interventions and their combinations are tested with a
discrete-event simulation model of the open heart surgery patient process. Discrete-event
simulation enables the evaluation of alternative productivity improvement proposals while
maintaining the dynamic nature of the open heart patient queue. The simulation model
was constructed based on the Kuopio University Hospital (KUH) open heart surgery
process.
Simulation Model for Evaluation of Production Improvement Proposals
A discrete event simulation model was created to capture the most important
elements of the operating theater scheduling system in the case organization (Figure 1).
Weekly emergency patient arrivals are modeled by a Poisson(7.15) distribution. Each
of these arrivals is randomly assigned to a weekday. When the assigned weekday is
reached, the arriving emergency patient is placed in either an operating room or an
emergency patient queue. The OR time as well as OR time forecast are assigned by
randomly selecting from the historical emergency patient data. Ten operating rooms per
week are reserved for the emergency patients, leaving on average 2.85 rooms per week
for elective patients. Spare emergency surgery capacity is used through a specific buffer
(the fill buffer), which models the possibility of calling patients from the elective queue at
short notice. For each emergency operation, the buffer is checked for the possibility of a
second operation during the day; the sum of the forecasted operating room times must be
less than the length of the workday minus the planned slack. The buffer is refilled weekly
from the elective queue.
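A compressed, hypothetical sketch of this scheduling logic is given below. The slack value, the (forecast, actual) case data, and the simplification of treating the ten weekly emergency rooms as independent day-slots are invented for illustration; the actual model in the chapter is considerably richer.

```python
import math
import random

WORKDAY_H = 8.0   # normal OR workday length (hours)
SLACK_H = 1.0     # planned slack per room-day (assumed value)

def poisson(lam):
    """One Poisson-distributed draw (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_week(emergency_history, fill_buffer):
    """One simulated week of the scheduling logic described above.

    emergency_history -- (forecast_h, actual_h) pairs standing in for the
                         historical emergency patient data
    fill_buffer       -- elective (forecast_h, actual_h) cases callable at
                         short notice, consumed in order
    Returns total overtime and idle time accumulated over the week.
    """
    overtime = idle = 0.0
    rooms = [[] for _ in range(10)]     # ten emergency room-days per week
    for _ in range(poisson(7.15)):      # emergency arrivals this week
        rooms[random.randrange(10)].append(random.choice(emergency_history))
    for day in rooms:
        # Fill from the buffer while the forecasts fit the workday less slack.
        while fill_buffer:
            forecast = sum(f for f, _ in day) + fill_buffer[0][0]
            if forecast > WORKDAY_H - SLACK_H:
                break
            day.append(fill_buffer.pop(0))
        used = sum(actual for _, actual in day)
        overtime += max(0.0, used - WORKDAY_H)
        idle += max(0.0, WORKDAY_H - used)
    return overtime, idle

random.seed(3)
history = [(4.5, 5.0), (3.5, 3.0), (5.0, 6.5)]      # hypothetical cases
buffer_cases = [(3.0, 3.2), (2.5, 2.4), (3.5, 4.0)]
print(simulate_week(history, list(buffer_cases)))
```

Feeding the weekly overtime and idle totals into a weighted cost measure such as the one sketched in the Background section is how intervention scenarios can then be compared.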

FUTURE TRENDS
Recent studies highlight the role of process management methods in operating units.
Flexible process and space solutions (Friedman et al., 2006), effective scheduling
(Spangler et al., 2004), and personnel incentives are also becoming common in cardiac
surgery units. Due to the complexity and multidimensionality of an operating unit,
simulation and other computer-based tools will increasingly be utilized when evaluating
organizational and process changes.

CONCLUSION
This chapter shows that performing anesthesia induction outside the OR improves
cost-efficiency. A four-day OR week and better forecasting accuracy both have a positive
but relatively smaller impact on OR cost per patient. Forecast accuracy becomes more
crucial when the penalty for overtime increases. The benefits of implementing a four-day
week and outside-OR induction are less dependent on the weights for overtime. The
results also show that the effect of all three interventions is more limited if an organization
can reallocate its resources to other tasks during slack time.
For health care managers, this study implies that optimal case-specific planning slack
times for scheduling can be determined with a discrete-event simulation model, and that
performing anesthesia induction outside the OR reduces OR cost per patient.
