
Recommendations and quality criteria for

hospital information systems

(Version 1 – December 2002)

Prof. Dr. Bart Van den Bosch


Prof. Erwin Bellon
André De Deurwaerder
Mark Vanautgaerden
Dr. Marc Bangels
Contents

1 INTRODUCTION ........................................................................................................................................ 4

2 ARCHITECTURE AND INTEGRATION OF THE HIS ........................................................................ 6

2.1 INTRODUCTION ........................................................................................................................................... 6


2.2 DEFINITIONS ............................................................................................................................................... 7
2.2.1 Connected systems............................................................................................................................. 7
2.2.2 Integrated systems ............................................................................................................................. 9
2.3 SPLITTING OF THE HIS: CONTENT OPTIONS ............................................................................................... 12
2.3.1 Splitting by user group .................................................................................................................... 13
2.3.2 Splitting by department ................................................................................................................... 14
2.4 TECHNICAL OPTIONS FOR DATA EXCHANGE .............................................................................................. 16
2.4.1 Syntax and standards for data exchange......................................................................................... 16
2.4.2 Operational security of the connection ........................................................................................... 18
2.4.3 Central node.................................................................................................................................... 20
2.4.4 Data replication: content options.................................................................................................... 22
2.5 RESULTS SERVER ...................................................................................................................................... 25
2.5.1 Implications for access control ....................................................................................................... 25
2.6 COMPONENT INTEGRATION ....................................................................................................................... 26
2.6.1 Back-end components...................................................................................................................... 28
2.6.2 Front-end components..................................................................................................................... 28
2.6.3 Cooperating applications as an alternative to front-end components – CCOW ............................ 29

3 CONTENT-RELATED STRUCTURE .................................................................................................... 30

3.1 DATA ORGANIZATION ............................................................................................................................... 30


3.1.1 Patient number ................................................................................................................................ 30
3.1.2 Codings ........................................................................................................................................... 32
3.1.3 Structuring....................................................................................................................................... 34
3.2 FUNCTIONALITY ....................................................................................................................................... 34
3.2.1 Appointments management.............................................................................................................. 34
3.2.2 Results management........................................................................................................................ 36
3.2.3 Request and registration system...................................................................................................... 36
3.2.4 Medication prescription and administration................................................................................... 38
3.2.5 Patient movements........................................................................................................................... 38
3.2.6 Invoicing.......................................................................................................................................... 39
3.2.7 Problem list and progress notes ...................................................................................................... 39

4 AVAILABILITY OF THE COMPUTER SYSTEM............................................................................... 40

4.1 CAUSES OF UNAVAILABILITY .................................................................................................................... 40


4.2 TECHNIQUES TO LIMIT UNAVAILABILITY .................................................................................................. 41
4.2.1 Backup............................................................................................................................................. 41
4.2.2 Hardware redundancy..................................................................................................................... 43
4.2.3 Maintenance contracts..................................................... 44
4.2.4 Cluster ............................................................................................................................................. 44

4.2.5 Replication server ........................................................................................................................... 45
4.2.6 Equipping of computer rooms ......................................................................................................... 46
4.2.7 UPS, emergency power for infrastructure outside the computer rooms ......................................... 46
4.3 REDUNDANCY CRITERIA FOR A HOSPITAL ................................................................................................. 46

5 ARCHIVING .............................................................................................................................................. 48

5.1 PHYSICAL PROBLEMS ................................................................................................................................ 48


5.2 LOGICAL PROBLEMS ................................................................................................................................. 48

6 SECURITY ................................................................................................................................................. 50

6.1 ACCESS CONTROL ..................................................................................................................................... 50


6.2 AUTHENTICATION ..................................................................................................................................... 50
6.2.1 Basic techniques.............................................................................................................................. 50
6.2.2 Discipline in authentication ............................................................................................................ 51
6.2.3 What authentication techniques should be used for an HIS? .......................................................... 54
6.2.4 Individual names ............................................................................................................................. 55
6.3 AUTHORIZATION ....................................................................................................................................... 55
6.4 MANAGEMENT OF USERS .......................................................................................................................... 56
6.5 AUTOMATIC BLOCKING OF A SESSION IN THE CASE OF INACTIVITY ........................................................... 57
6.6 AUDITING ................................................................................................................................................. 57
6.7 ENCRYPTION AND DIGITAL SIGNATURE..................................................................................................... 58
6.7.1 Symmetric encryption algorithms.................................................................................................... 58
6.7.2 Public-private key algorithms ......................................................................................................... 59
6.7.3 Certificates ...................................................................................................................................... 59
6.7.4 Hashing techniques ......................................................................................................................... 60
6.8 VIRUSES, WORMS AND TROJANS ............................................................................................................... 60
6.9 PHYSICAL ACCESS TO COMPUTER SYSTEMS .............................................................................................. 61
6.10 THEFT OF LAPTOPS ............................................................................................................................... 61
6.11 ACCESS FOR HARDWARE SUPPORT........................................................................................................ 61
6.12 INTERNET ACCESS ................................................................................................................................ 62
6.13 VIRTUAL PRIVATE NETWORKS .............................................................................................................. 63
6.14 ACCESS CONTROL AT APPLICATION LEVEL ........................................................................................... 63
6.14.1 Authentication............................................................................................................................. 63
6.14.2 Authorization .............................................................................................................................. 63
6.14.3 Audit............................................................................................................................................ 66

7 KEY ISSUES IN PROJECT MANAGEMENT....................................................................................... 69

8 PACS – ‘PICTURE ARCHIVING AND COMMUNICATION SYSTEMS’........................................ 72

8.1 INDIVIDUAL ASPECTS OF A PACS ............................................................................................................. 72


8.1.1 Archiving of images......................................................................................................................... 72
8.1.2 Distribution of radiological images throughout the hospital network ............................................ 77
8.1.3 Connecting the imaging devices...................................................................................................... 77
8.1.4 Viewing for primary diagnosis ........................................................................................................ 79
8.1.5 Viewing throughout the clinic ......................................................................................................... 82
8.2 INTEGRATION OF PACS INTO THE WHOLE IT SOLUTION ........................................................................... 83
8.2.1 ‘Back-office’ integration of PACS into the overarching information system .................................. 83

8.2.2 ‘Front-office’ integration of PACS in an overarching user interface ............................................. 86
8.2.3 Information exchange with the imaging devices ............................................................................. 89
8.2.4 Correction of errors in the PACS.................................................................................................... 91

9 TELEMATICS ........................................................................................................................................... 93

9.1 SECURITY IN TELEMATICS ......................................................................................................................... 93


9.1.1 Defining responsibilities ................................................................................................................. 93
9.1.2 Securing of communications ........................................................................................................... 94
9.1.3 Authentication of users.................................................................................................................... 94
9.1.4 Confinement of business logic to the client software....................................................................... 95
9.1.5 Detailed access rules and external doctors..................................................................................... 96
9.2 TECHNOLOGY FOR TELE-INTERACTION BETWEEN PERSONS ...................................................................... 96
9.2.1 Video conferencing ........................................................................ 97
9.2.2 Data conferencing ........................................................................................................................... 98
9.2.3 ‘Delayed time’ interaction by exchanging files............................................................................... 99
9.3 TELELINKING OF AND TELELINKING TO MEDICAL FILES .......................................................................... 100
9.3.1 Exchange of information between independent medical files........................................................ 101
9.3.2 Interactive tele-access to the file within a hospital ....................................................................... 103
9.3.3 The sharing of a central file between independent care providers................................................ 106

1 Introduction

The aim of this text is to make a number of recommendations and to provide an initial
stimulus for the establishment of quality criteria for hospital information systems (HIS). A
hospital information system is a very complex system, in which every aspect of IT is
involved. It is impossible to discuss all these aspects in one text: we shall therefore confine
ourselves to certain part-aspects that, in our opinion, are to a greater or lesser extent specific
to the hospital sector.

Strictly speaking there is no such thing as medical informatics; there is only the application of
IT within the medical sector. Some IT techniques arise more frequently in this sector than
they do in others, and some problems (e.g. image storage and distribution) receive a different
emphasis than they do in most business applications. Many hospitals have a management
structure that is fundamentally different from those encountered in most businesses:
although the various medical departments work closely together, they enjoy considerable
autonomy. Often each department of a hospital can pursue its own IT strategy, whereas in
the business world this strategy is usually determined centrally. This has an impact on the
problems that arise and on their potential solutions. In this text we have tried to identify
some key issues and to propose a generic approach that makes due allowance for this
specific situation.

We shall be confining ourselves to those technical and architectural decisions that cannot be
made purely and simply by technical experts but in which the management also have to be
involved because the decisions have an impact on the functioning of the hospital, the
(electronic) interactions between departments, the security strategy, the access control, the
extensibility of the system and other factors.

In addition, we are focusing primarily on the problems and choices that have to be made in
the implementation and integration of the clinical systems. These are heavily influenced by
the fact that they require very strict authentication, and it is precisely these applications
that are subject to unusually complex access control requirements. For
administrative systems, the situation is not only simpler but also less specific, so that more
expertise will be found within the internal management and among any external staff who
handle the implementation.

We do not wish to define any minimum standards for the contents of the medical file. For
that we refer the reader to the Royal Decree of May 1999, which lays down such
minima. It is not up to the authors of this text to define priorities for the further
development of the medical file. What is optimally and pragmatically feasible for one
hospital is very difficult to implement for another. We do make reference, however, to
integrations that, in our opinion, are desirable or even indispensable if a decision is made to
develop or purchase specific modules.

The text tries to distinguish between measures that are strictly required and those that are
merely desirable or recommendable. We have decided not to produce a list of strict
standards. A list of standards could, of course, lead to a strict scoring of hospital information
systems, but it struck us as being well nigh impossible to produce a single meaningful
relative scoring system for technical, substantive, integrational, safety-related and
managerial aspects of an HIS without first having exhaustively listed and described all the
criteria. As the latter struck us as impossible, we have confined ourselves to a qualitative
description.

The text is not, therefore, an exhaustive treatment of all the possible choices that must be
made when implementing an HIS. It is more a collection of ‘best practices’ and some
practical suggestions for the solution of problems. The aim is to impart some structure to
frequently arising problems and thus help the management in the drawing up of the IT
master plan and the implementation of the HIS.

Recommendations printed in this style are both extremely important and strictly necessary.

Recommendations in this style are less important and can be regarded as secondary.

2 Architecture and integration of the HIS

2.1 Introduction

The architecture of the HIS partly determines the possibilities and limitations of that system.
Various aspects of an HIS are affected by the architecture:

• Access control

• Migration of applications and systems

• Application integration and workflow monitoring

• Data storage

• etc.

The architecture is not determined purely by technical factors but also by how the system has
grown historically and (above all) by the structure and level of integration of the
management. To build an integrated IT system requires an unequivocal vision of this
architecture and unequivocal leadership to then implement this vision. In many hospitals
however, (medical) policy is decentralized. There is usually, it is true, a central policy for the
administrative and logistical support, but the medical policy is run by the various
departments (specialisms) in a fully decentralized manner. As a result we often see an
integrated patient management system and an integrated invoicing system, but also differing
medical subsystems across which the patient’s medical file is divided: the internist works
with a different package to the surgeon, the neurologist has something different again, and
so on. Services such as Pharmacy, Radiology or other function measurements generally use
very sector-specific software. The system then grows through the acquisition or expansion
of modules without a global architecture being thought out in advance. Integration has to be
realized retrospectively and is, as a result, more costly and more difficult. Because the
departments are also financially independent and have their own budgets, you still have to
negotiate with the different departments to convince them to release the necessary budgets
for integration.

The fact that the management structures have developed the way they have is of course to do
with the way in which the government finances the hospitals. This is now unfortunately a
reality, and when delineating the architecture of the HIS we have to take this into account: it
makes no sense to propose a level of integration that demands management decisions that
cannot be taken because there is no management structure with the authority to take them.
In other words, the level of integration of the IT system is restricted by the level of
integration of the management.

If allowance is made for this factor when delineating the global IT master plan, you should
also be able to adjust the expectations of the users. By informing the departments about the
consequences of their choice of one or other subsystem, you can still if necessary keep a
number of integration options open. Even if you are working with different subsystems
there will still be a number of integration options available to you. First we describe a
number of basic architectures and their respective pros and cons. We then describe a number
of possible integration and connection techniques.

In the IT master plan, a high-level architecture must be chosen in which the integration of
the various subsystems is described. The plan describes the information flows and the
integration techniques at a high level. If the choices made exclude specific options then these
must be stated.

2.2 Definitions

We shall first discuss a number of conceptually basic architectures and introduce a few terms
to facilitate further discussion. The classification that we offer is certainly not absolute: there
are different types of architectures, but between them lies a continuum. An HIS will usually
be a combination of these types. Nor are the names of these types standardized: the
terminology encountered in other texts may therefore be different.

2.2.1 Connected systems

A connected system consists of separate subsystems that are linked together by data
connections. In this way data is transported from one system to another. This can be done in
two ways: by on-line retrieval or by data replication.

In an on-line retrieval setup, system A will retrieve the data it needs from system B as and when it
needs it. The data will only be processed at A – it will not be stored there. If A needs the
same data again later then A will retrieve it again. A therefore acts as a client and B as the
server.

In data replication, A will retrieve data from B at regular intervals and then store it locally.
Alternatively, B can send its data to A either regularly or continuously. B’s data is therefore
stored redundantly on A.

Data replication has the advantage of efficiency: the data is locally available on the second
system, which makes interrogation much faster. On the other hand, there is a need for copy
management (see p. 22).
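The contrast between the two connection styles can be sketched in a few lines. The Python sketch below uses hypothetical names ('LabSystem' as system B, in-memory data) purely for illustration; it is not an actual HIS interface.

```python
# Minimal sketch of the two connection styles between subsystems.
# 'LabSystem' (system B) and the client classes are hypothetical.

class LabSystem:
    """System B: owns the authoritative data."""
    def __init__(self):
        self._results = {"pat-001": ["Hb 13.2 g/dl"]}

    def get_results(self, patient_id):
        return list(self._results.get(patient_id, []))

class OnlineRetrievalClient:
    """System A, on-line retrieval: fetches on every use, stores nothing."""
    def __init__(self, source):
        self.source = source

    def show_results(self, patient_id):
        # The data is processed here but never stored locally;
        # a later request simply fetches it again from B.
        return self.source.get_results(patient_id)

class ReplicatingClient:
    """System A, data replication: copies B's data into a local store."""
    def __init__(self, source):
        self.source = source
        self.local_copy = {}

    def synchronize(self, patient_id):
        # Run at regular intervals, or triggered by B pushing updates.
        self.local_copy[patient_id] = self.source.get_results(patient_id)

    def show_results(self, patient_id):
        # Fast local read; correctness depends on copy management.
        return self.local_copy.get(patient_id, [])

lab = LabSystem()
online = OnlineRetrievalClient(lab)
replica = ReplicatingClient(lab)
replica.synchronize("pat-001")
print(online.show_results("pat-001"))   # fetched from B on demand
print(replica.show_results("pat-001"))  # local copy, possibly stale
```

The replicating client answers from its local store even if B is down, which is precisely the efficiency advantage described above, at the price of keeping the copy synchronized.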

If different subsystems re-implement the same functionality we refer to it as function
replication. To take an example: suppose that the Internal Medicine and Surgery
departments have different IT systems. At both departments a Radiology request module
has been implemented at the request of the Radiology department. This is an example of
function replication. It means that the logic behind the request module must be created in
two different systems. Function replication is expensive and is rarely performed, but
sometimes it is necessary. Sometimes also it is not enough just to replicate the data to
another system – you also have to add programs to make it possible to view and/or
manipulate this data.

Given that different systems are being connected, a distinction can also be made here
between ad hoc connections and connections via a central node. Ad hoc connections do not
need much defining: in this case, two subsystems are being connected to each other. If a
central node is being used, then all the subsystems are connected to this node. The node also
usually controls the traffic between the subsystems and acts as a sort of triage station.
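The triage role of such a node can be sketched as a minimal publish/subscribe router. The 'ADT' message type and the subscribing departments below are illustrative assumptions, not a real interface engine.

```python
# Sketch of a central node ('triage station') routing messages
# between subsystems; message types and names are illustrative.

class CentralNode:
    def __init__(self):
        self._routes = {}   # message type -> list of subscriber callbacks

    def subscribe(self, message_type, handler):
        self._routes.setdefault(message_type, []).append(handler)

    def publish(self, message_type, payload):
        # The node decides where each message goes; subsystems
        # talk only to the node, never to each other directly.
        for handler in self._routes.get(message_type, []):
            handler(payload)

received = []
node = CentralNode()
# Radiology and Pharmacy both want to hear about admissions:
node.subscribe("ADT", lambda msg: received.append(("radiology", msg)))
node.subscribe("ADT", lambda msg: received.append(("pharmacy", msg)))
node.publish("ADT", {"patient": "pat-001", "event": "admit"})
```

Adding a new subsystem then means one new connection to the node instead of a connection to every existing subsystem.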

Illustration 1: Ad hoc connected systems (subsystems such as Radiology, Internal Medicine,
Dermatology, Surgery and Pharmacy, linked pairwise by data connections using an ad hoc
protocol, HL/7 or another standard)
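As an indication of what such a data connection carries, the sketch below parses an HL/7 version 2 style message: plain text, one segment per line, fields separated by '|'. The sample admission message and the minimal parser are fabricated for illustration only, not a conformant HL/7 implementation.

```python
# A fabricated ADT^A01 (patient admission) message in the
# pipe-delimited HL/7 v2 style, and a minimal parser for it.

sample = (
    "MSH|^~\\&|HIS|HOSP|RIS|RAD|200212011200||ADT^A01|00001|P|2.3\r"
    "PID|1||12345^^^HOSP||Doe^John\r"
    "PV1|1|I|WARD1^101^A"
)

def parse_hl7(message):
    """Split a message into segments keyed by their identifier."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_hl7(sample)
print(msg["MSH"][8])      # message type: ADT^A01
print(msg["PID"][5])      # patient name: Doe^John
```

Because the format is simple structured text, an interface engine can inspect a field such as the message type and route the message to the interested subsystems.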

2.2.2 Integrated systems

In an integrated system, the user sees a single system whose subapplications interact with one
another. The degree and complexity of this interaction can vary substantially. An integrated
system can be either monolithic or component-oriented.

In a monolithic system, one manufacturer makes everything and the degree of integration is
usually very high. The subapplications have seamless interfaces and the user has the feeling
of working with one big application. The main disadvantage is that there is rarely one
manufacturer who is in a position to provide a satisfactory solution for all the subsystems. A
further drawback is vendor lock-in: the hospital is heavily dependent on that manufacturer
and the applications that this manufacturer supports.

In a component-oriented system, different manufacturers make the various subapplications,
which interact with one another by using each other’s functions. The user will usually note
differences in style between the subapplications because they all have a different origin. The
system does, however, behave as a single unit.

Illustration 2: Connecting via a central node (subsystems such as Radiology, Internal
Medicine, Dermatology, Surgery and Pharmacy connected through an interface engine
(HL/7) or message broker)

Difference between function replication and component orientation

The difference is that in a component-oriented system one component implements the
function, but the function is used in different subsystems. The consistent implementation of
the function throughout the whole system is guaranteed. The program code that implements
the function logic only exists at one location; if it has to be adapted then this also only needs
to be performed at one location.

In function replication, the function logic will be re-implemented each time in the target
package. There are accordingly different implementations of the same function logic and
usually in different technologies also. This is expensive both to develop and maintain. Each
adaptation must also be implemented by all the different parties. There is a risk that not all
the subsystems will implement the logic in exactly the same way.

Back-end and front-end components

Component integration can be performed at both the front end and the back end.

Front-end components are separate components that are assembled into an application at
the front end (client), each possibly having its own back-end system. So, for
example, you could have a component for the presentation of radiological exams and
radiological reports, which retrieves this data from its own radiology database but which is
still built in as a ‘whole’ component into the user interface of the application that presents the
medical file as a whole.

Illustration 3: Front-end components (a medical file application built from components such
as patient admission/discharge/transfer, an appointment planner and an RX viewer, each
drawing on its own back end: the administrative patient database, the appointment server
and the RX system)

Back-end components are usually implemented in a middle tier in a so-called 3-tier
architecture. The third tier usually takes care of the persistence of the data. This architecture
is very often used in Internet applications, where you do not have (or wish to have) any
control over the front-end systems and you therefore implement all the business logic in a
middle tier. The middle tier provides services to various front-end applications. The great
advantage of this is that the business logic only needs to be maintained at one location. You
can also change it without having to make adjustments to the client applications as long as
you do not change the interface to the clients. For access via the Internet, for example,
another front-end can be built that is different to the one that is used within the healthcare
institution in question (see p.105).
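The principle can be sketched as follows. The appointment service, its single business rule and all names are hypothetical, chosen only to show the business logic living once in the middle tier while two different front ends call the same interface.

```python
# Sketch of a 3-tier split: business logic lives once, in the
# middle tier; multiple front ends call the same service.

class AppointmentStore:
    """Third tier: persistence (here just an in-memory list)."""
    def __init__(self):
        self.rows = []

class AppointmentService:
    """Middle tier: all business rules live here, once."""
    def __init__(self, store):
        self.store = store

    def book(self, patient_id, slot):
        # Business rule enforced centrally for every front end:
        if any(r["slot"] == slot for r in self.store.rows):
            raise ValueError("slot already taken")
        self.store.rows.append({"patient": patient_id, "slot": slot})
        return True

# First tier: two different front ends sharing the same middle tier.
service = AppointmentService(AppointmentStore())

def intranet_client(svc):           # front end used inside the hospital
    return svc.book("pat-001", "2002-12-01T09:00")

def internet_client(svc):           # different front end, same interface
    try:
        return svc.book("pat-002", "2002-12-01T09:00")
    except ValueError:
        return False

print(intranet_client(service))
print(internet_client(service))
```

If the double-booking rule changes, only the middle tier is modified; neither client needs to be redeployed as long as the interface stays the same.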

The above terms represent a number of different types. Within each of these types there are
different variants.

Illustration 4: Back-end components (front-end applications, including a wireless
application, implemented on application servers of the J2EE, .NET or servlet type; several
middle-tier servers offer services such as RX request, RX reporting, appointments, patient
administration and medication prescription, backed by the RX system, the patient
administration system and the pharmacy system)

2.3 Splitting of the HIS: content options

The management must make a number of integration choices in its IT master plan. As stated
above, the architecture of the system helps to determine what will be easy or difficult to
realize with the system. The management must therefore make due allowance for the
limitations of the architecture of the HIS while seeking to adapt the architecture to the
objectives that it wishes to achieve.

Unless a monolithic system from one manufacturer is chosen, the management will always
have to decide how the system is going to be split into manageable parts that will serve as
the building-blocks of the HIS. This choice is best included in the IT master plan, along with
the approach adopted for connecting these building-blocks with each other and with the
systems already implemented in accordance with one of the techniques mentioned above.

The different parts of the HIS can then be purchased or developed independently of one
another.

How the system should be functionally divided is a separate issue from the technical choices
that you make when integrating the different components. We shall now discuss a number
of basic choices that can be made in the splitting of the system and shall then examine some
integration techniques.

2.3.1 Splitting by user group

The system can be split by group of users. In other words, each group has its own ‘file’. We
sometimes refer to a medical file and a nursing file. The former is used by the doctors, the
latter by the nursing staff. Sometimes a third file is added: the social file.

Illustration 5: Splitting by user group (departments such as Cardiology, General Internal
Medicine, Dermatology, Abdominal Surgery and Anaesthesiology all work in a shared
medical file and a shared nursing file, on top of the administrative system)

Advantages

The main advantage of this approach is that it is usually possible to roll out just one nursing
file throughout the hospital, while it is more difficult to achieve consensus about one medical
file (package) because of the major technical differences between the specializations. As a
result, it is sometimes easier to set to work on the computerization of the nursing side first.

Disadvantages

It is, however, a very great disadvantage to accommodate the nurses and the doctors in two
or more different systems that must then be integrated with each other. As information has
to be transferred in large volumes and with a high frequency between nurses and doctors, a
large number of (data) connections between the two systems is required. As this is quite
expensive it only happens to a limited extent and there is therefore a loss of functionality.
Without heavy integration or connection efforts, it is difficult to implement workflow in such
a system.

If you opt for a split by user group, the connection between the medical and nursing systems
will have to be very efficient, because most workflows cross the boundary between these
systems from time to time. In such cases it is best to perform the integration at database
level, with one system directly interrogating the database of the other (client-server), or to
opt for extensive data replication with redundant storage of the data required in the
workflow(s).

2.3.2 Splitting by department

Some prefer to set up a patient file by specialism (group). Nurses and doctors from the same
department work in one system but have different packages for each specialism: Internal
Medicine, Gynaecology, Surgery and so on.

[Figure: the departments General Internal Medicine, Cardiology, Dermatology, Abdominal Surgery and Anaesthesiology each work in their own system (file packages A, B and C and an anaesthesia/pre-op monitoring system) on top of the administrative system]

Illustration 6 : Splitting per specialism

Advantages

Specificity. One advantage of this approach is that a very specific system can be chosen for
each department. For the more ‘technical’ departments in particular, a generalized system
can be insufficient. In practice, however, a split of this kind is usually made not for this
reason, but because each department has a fully autonomous management and selects a
system without consultation.

Manageability. You have the advantage of manageability, which is especially useful if you
do not wish to integrate too tightly. Each of the systems can be adapted or migrated
individually without disrupting the others too much.

Disadvantages

Fragmentation. In an approach of this kind, the patient’s medical file is stored in a
fragmented way. Usually the service providers of one department do not have access to the
package of the other department.

Expensive integration. The integration is expensive and, if the systems are strongly
integrated, some of the advantages mentioned above are lost. If, for example, you want to
implement order entry then function replication will very soon be required.

[Figure: as in Illustration 6, the departments use specialism-specific systems (file packages A, B and C and an anaesthesia/pre-op monitoring system), but with one hospital-wide nursing file between these packages and the administrative system]

Illustration 7 : Combination of splitting by group and by specialism

Several user interfaces. If people are working in different departments they are confronted
with different user interfaces. This can be a problem not just for interns undergoing training
at a training hospital but also for nurses who are on rotation.

Most HIS are combinations of the above splits (see Illustration 7).

2.4 Technical options for data exchange

When designing/planning data exchange between different systems, the following points
must be defined for each of the connections:

• Syntax for data exchange

• The measures taken to guarantee the operational reliability and correct functioning of
the connection

- Does data replication resume after the downtime of one of the systems?

- Transaction monitoring

• Performance requirements

2.4.1 Syntax and standards for data exchange

HL/7

A great many standards have been defined for data exchange in medical informatics, but few
are actually being followed by manufacturers on a large scale. One of the few that has a
worldwide commercial following is HL/7 (Health Level 7). Even with HL/7, however,
‘plug-and-play’ in practice often amounts to little more than ‘plug-and-pray’: there are both
minor and major dialect differences, which mean that setting up an HL/7 connection can
quickly demand several man-weeks. For the exchange of diagnostic images and related
information the DICOM standard is generally accepted (see p. 77).

There is a difference between a syntax (and a protocol) for communications and a standard.
The syntax defines the manner in which data in the message must be coded, the protocol is
the technique for getting the data to the communications partner, while the standard defines
the meaning. The HL/7 standard is ‘extensible’, i.e. the message can be supplemented with
user segments in which data can be placed that is not (yet) included in the standard. In fact,
in that case HL/7 is used purely as a technique for communicating data regarding which you
still have to conclude separate agreements (which is, of course, an advantage if such a
technique is already needed).
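The extensibility described above can be sketched with a toy pipe-delimited message in the HL/7 v2 style. All segment contents here are invented for illustration, including the ‘ZLO’ user segment, which stands for the kind of locally agreed data that the standard itself does not cover:

```python
# Minimal sketch of an HL7 v2-style pipe-delimited message. Field values
# and the ZLO segment are invented for this example.
message = "\r".join([
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|200212011200||ORU^R01|MSG00001|P|2.3",
    "PID|1||123456^^^HOSP||Doe^John||19600101|M",
    "OBX|1|NM|GLUC^Glucose||5.4|mmol/L|3.9-6.1|N",
    # Z-segments carry locally agreed data not (yet) in the standard.
    "ZLO|1|internal-priority=high",
])

def segments(msg: str) -> dict:
    """Index segments by their identifier (the first field of each line)."""
    result = {}
    for line in msg.split("\r"):
        fields = line.split("|")
        result.setdefault(fields[0], []).append(fields)
    return result

parsed = segments(message)
print(parsed["ZLO"][0][2])   # internal-priority=high
```

Both partners still have to agree separately on what the ZLO fields mean, which is exactly the point made above: the standard transports the data, but the local agreement defines it.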

XML

A promising new syntax is XML and a related communication protocol is SOAP. These will
become more important, but they are not currently replacing any of the standards. Just
because two systems support XML does not mean that they can exchange messages with one
another (and process them). XML defines only the syntax of the message but does not as yet
define the structure and the meaning of the elements in this structure.
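The point that well-formed XML alone does not guarantee interoperability can be illustrated with two invented messages for the same lab result. Both parse without error, but a receiver written for one structure finds nothing in the other:

```python
import xml.etree.ElementTree as ET

# Two well-formed XML messages for the same lab result, as two systems
# might define them independently (element names are invented here).
msg_a = "<result><test>glucose</test><value unit='mmol/L'>5.4</value></result>"
msg_b = "<labObservation code='GLUC' uom='mmol/L'>5.4</labObservation>"

a, b = ET.fromstring(msg_a), ET.fromstring(msg_b)

# Both parse without error ...
print(a.find("value").text, b.text)   # 5.4 5.4
# ... but a receiver written against schema A finds nothing in message B.
print(b.find("value"))                # None
```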

XML is generally supported by the whole computer industry. XML will therefore also
become the syntax on top of which the standards are implemented. Later versions of HL/7
will be XML-based. The meaning and structure of the message and the items it contains
are determined by the HL/7 consortium.

When two or more systems are linked, you must consider whether using a standard format
is worth the trouble. All too often, people automatically assume that a standard must be
used. Sometimes, however, there are reasons for taking a more direct route and creating an
ad hoc link (which can still be based on XML), certainly if the HL/7 messages would consist
almost entirely of ‘private elements’ anyway.

Performance

If a standard such as HL/7 is used then the data that you want to send must first be fully
converted to HL/7 format. The data is then forwarded via the network to the receiving
system or the central node (see below). The receiving system must then ‘unpack’ the data
and convert it to the local format. All this creates overhead, which is not always acceptable
in a transactional system.

If the structure of the information that you want to transfer is relatively simple, is familiar,
and will change little over time, then it can often be much simpler to set up an ad hoc format
and data replication procedure, which is much closer to the source and target system. For
example, if a source and target system both use a relational database for data storage then
you can usually replicate data between databases more efficiently by getting a program to
retrieve the necessary data directly from one system and inserting it into the other. The
procedure for the latter is best left to the manufacturer of the receiving system.
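The direct database-to-database approach described above might be sketched as follows, using SQLite in memory as a stand-in for the two relational databases. The table, its columns and the high-water-mark scheme are all invented for the example:

```python
import sqlite3

# Ad hoc replication sketch: copy new rows straight from a source database
# into a target database, tracking the last replicated id.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
schema = "CREATE TABLE results(id INTEGER PRIMARY KEY, patient TEXT, value REAL)"
src.execute(schema)
dst.execute(schema)
src.executemany("INSERT INTO results VALUES (?,?,?)",
                [(1, "123456", 5.4), (2, "123457", 6.2)])

def replicate(last_seen: int) -> int:
    """Copy rows newer than last_seen; return the new high-water mark."""
    rows = src.execute(
        "SELECT id, patient, value FROM results WHERE id > ?", (last_seen,)
    ).fetchall()
    dst.executemany("INSERT INTO results VALUES (?,?,?)", rows)
    dst.commit()
    return max((r[0] for r in rows), default=last_seen)

mark = replicate(0)
print(mark, dst.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 2 2
```

There is no message format at all here: the data never leaves its relational representation, which is where the performance gain over a full conversion to and from a standard comes from.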

You must check in advance whether the standard that you wish to use involves excessive
overhead, as a result of which the performance requirements cannot be achieved.

2.4.2 Operational security of the connection

For each connection that you create between two systems, you must allow for the fact that
the sender, the recipient and the process that handles the data replication can each fail
temporarily, and that it must be possible to restart the data replication after the defect has
been rectified.

2.4.2.1 INDEPENDENCE OF SENDER, RECIPIENT AND TRANSFER PROCESS

The connection must be set up in such a way that the sender, recipient or transfer process can
fail without the other two being prevented from functioning.

This is precisely one of the advantages of splitting systems: defects and downtime remain
restricted to just one part.

This splitting can be achieved by, among other methods, buffering messages on both sides.
The sender writes the data to be transferred into a local buffer. The transfer process retrieves
the data from this buffer and then sends it to the buffer of the recipient. The recipient reads
the data out of this buffer to process it. If the recipient is not operational then the sender can
still continue working and adding data to its buffer. If only the processing at the recipient’s
end is down, the transfer process can still deliver data into the recipient’s buffer. If the
recipient is completely unavailable then the data to be sent remains buffered at the sender.
The buffers must be sufficiently large to store the data that accumulates during downtime.
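The double-buffering scheme described above can be sketched with two in-memory queues (the buffer structures and function names are invented for the example):

```python
from collections import deque

# Sketch of double buffering: sender and recipient each own a local buffer;
# a transfer process moves data between them, so any one of the three can
# be down without stopping the other two.
send_buffer: deque = deque()
recv_buffer: deque = deque()

def sender_writes(msg: str) -> None:
    send_buffer.append(msg)        # the sender only touches its own buffer

def transfer(recipient_up: bool) -> None:
    while send_buffer and recipient_up:
        recv_buffer.append(send_buffer.popleft())

def recipient_processes() -> list:
    processed = []
    while recv_buffer:
        processed.append(recv_buffer.popleft())
    return processed

sender_writes("lab result 1")
transfer(recipient_up=False)       # recipient down: data stays buffered
sender_writes("lab result 2")      # the sender keeps working regardless
transfer(recipient_up=True)        # after restart, the backlog is delivered
delivered = recipient_processes()
print(delivered)                   # ['lab result 1', 'lab result 2']
```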

2.4.2.2 GUARANTEED DELIVERY

The connection must guarantee the delivery of each message.

If the medical file is spread over different systems, the non-delivery of a message can cause
important medical information to be lost. Usually a message will only be deleted from the
send buffer or be checked off as having been sent after confirmation of receipt.

Idempotent operations

After the recipient has gone down, the sender is often uncertain whether its last message
did in fact arrive but no confirmation of receipt was returned, or whether the message never
arrived at all. Once everything is operational again, the sender can ask the recipient which
message was the last to arrive, but this makes the interaction between sender and recipient
rather complex.

A simpler way is to work with idempotent operations. These are operations that can be
performed several times in succession but have no additional effect after the first time:
whether you perform an idempotent operation once or several times, you get exactly the
same result. If, in data communications, you ensure that each message may safely be
delivered twice or even more often, then on restart you simply resend all messages for
which no confirmation of receipt has been obtained. The recipient may thus receive the
same message several times, but without any additional ‘effect’. The sender accordingly
does not need to check what the recipient has already processed, which keeps the
connection simple.
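One common way to make message handling idempotent is to give every message a unique identifier and apply it as an “upsert”, so that a duplicate delivery overwrites a copy with identical content. The message layout here is invented:

```python
# Sketch of idempotent message handling: applying a message keyed on its
# unique identifier twice leaves exactly the same end state as applying
# it once, so resending after a restart is harmless.
store: dict = {}

def apply_message(msg: dict) -> None:
    store[msg["id"]] = msg["payload"]   # same id twice -> same end state

msg = {"id": "MSG00001", "payload": "glucose=5.4"}
apply_message(msg)
apply_message(msg)                      # resent after restart: no effect
print(len(store))                       # 1
```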

Most commercial messaging engines have built-in mechanisms for guaranteed delivery (see
below).

2.4.2.3 RESTART PROCEDURE

For each connection a restart procedure must be worked out and described that prevents any
messages from getting lost. The systems must be dimensioned in such a way that they can
clear the backlog at sufficient speed.

In addition to the measures necessary to counteract the loss of messages, you must also
ensure that the systems that handle the data replication have been sufficiently dimensioned.
If, for example, the system has been down for 4 hours due to a breakdown you must be able
to rapidly clear the messages that have stacked up during those 4 hours. As long as the
system is still processing ‘old’ messages that were generated during the breakdown, the new
messages cannot be sent and the recipient remains ‘behind’ the sender. For the recipient the
effective downtime = the period of the breakdown + the time required to clear the backlog.
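As a worked example of this formula, suppose (the rates are invented for illustration) that messages arrive at 1,000 per hour and the connection can process 5,000 per hour. While the backlog is being cleared, new messages keep arriving, so the effective clearing speed is the difference between the two rates:

```python
# Illustrative backlog calculation; all rates are invented for the example.
downtime_h = 4          # breakdown duration in hours
arrival_rate = 1000     # messages generated per hour
capacity = 5000         # messages the connection can process per hour

backlog = downtime_h * arrival_rate
# While clearing the backlog, new messages keep arriving at arrival_rate,
# so the effective clearing speed is capacity - arrival_rate.
catch_up_h = backlog / (capacity - arrival_rate)
print(catch_up_h)                  # 1.0
print(downtime_h + catch_up_h)     # effective downtime: 5.0 hours
```

The example also shows why dimensioning matters: if the capacity were only slightly above the arrival rate, the catch-up time would dwarf the breakdown itself.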

2.4.2.4 TRANSACTIONS

Implementing transactions across messaging systems is very complex. You can set up
simple data replication yourself, but in order to have transactions working on top of a
messaging system it is best to use specialized software. During the transaction, locks remain
open on the database, which in their turn block other (local or distributed) transactions. If
the transactions are being processed with insufficient speed this causes a vicious circle that
brings the systems to a standstill. The slower the transactions, the faster this phenomenon
arises. Transactions via messages are usually much slower than transactions between
databases, which in their turn are slower than transactions within a database.

Before you set up an architecture in which distributed transactions are required, you must
make sure that these transactions can be handled with a speed that is sufficient not to
jeopardize the operational functioning of the participating systems.

2.4.3 Central node

If you have to create a lot of connections between different systems, you can opt to route all
the connections via a central node (see Illustration 2 p. 9). Each of the systems is connected to
the node.

Advantages

• The number of connections that has to be maintained falls drastically.

• The node can also perform routing: a sender needs to send the message only once to the
node, which can deliver the message to more than one place. In some commercial
products it is possible to specify quite complex routing rules.

• Single point of management: via the node you can monitor all the connections.

• Protocol conversion: the sender sends the message to the node in a particular protocol.
The node can convert the message to another protocol before sending it to a recipient.

• Maintainability: one system to set up buffers, one system of guaranteed delivery, and so
on.

Disadvantages

• Single point of failure: when the node is down, all the connections fail.

• On each occasion, two connections are required to link systems: from sender to node and
from node to recipient. As a result the transmission time increases still further (see also
page 17). For on-line queries this way of working is often too slow.

• Cost. If you want just a few connections this is usually not worth the trouble: you need a
separate machine and a software licence (if you are using a commercial product), you
have to learn how to use the package, maintain it, etc.

2.4.3.1 INTERFACE ENGINE

People in the medical world have been using so-called HL/7 interface engines for a long
time. An HL/7 interface engine is a central node for HL/7 messages. Usually this software
is installed on a separate server. All the systems send their HL/7 messages to the interface
engine. The interface engine buffers them and ensures that they are delivered.

Some interface engines allow the implementation of (limited) business logic. This is used
primarily for routing and filtering. By using rules you can specify which messages must be
sent to which recipients. Usually you can still make changes to a message before forwarding
it to a recipient. So, for example, you can define filters. Suppose that a medical system is
sending data to another medical system and that an administrative system has to be
informed of the fact that an exam has been performed but not of the actual result. In that
case you can define a setting on the interface engine that will specify that the other medical
system will receive the complete message but that the administrative system will only
receive a filtered copy (i.e. without the result) of the original message.
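The routing-and-filtering setting described in this example might be sketched as follows; the routing table, destination names and message layout are all invented:

```python
# Sketch of routing with a per-destination filter rule on a central node:
# the other medical system receives the full message, the administrative
# system a copy without the result field.
def strip_result(msg: dict) -> dict:
    filtered = dict(msg)
    filtered.pop("result", None)
    return filtered

ROUTES = [
    ("medical_system_B", lambda m: m),    # full message
    ("admin_system",     strip_result),   # filtered copy
]

def route(msg: dict) -> dict:
    """Deliver one incoming message to every destination, transformed."""
    return {dest: transform(msg) for dest, transform in ROUTES}

out = route({"patient": "123456", "exam": "glucose", "result": 5.4})
print("result" in out["admin_system"])    # False
print(out["medical_system_B"]["result"])  # 5.4
```

A commercial interface engine expresses the same idea through its own rule language rather than code, but the principle (one inbound message, several transformed outbound copies) is the same.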

Many interface engines can also handle other protocols and can convert HL/7 messages
from one version to another.

2.4.3.2 MESSAGE BROKER

You can also opt for a general message broker. The market for EAI (enterprise application
integration) has produced several general integration products. The difference from the
HL/7 interface engines is that these do not directly support the HL/7 protocol, but only
more general, application-agnostic protocols.

These products have the advantage that they can also be used for the integration of non-
medical applications where HL/7 is unknown. As they cover a larger market they are
generally more sophisticated too, but they are also usually more expensive.

As a central node is an especially important and vulnerable component of the architecture,
the following options must be specified in the IT plan:

• Securing of the node: who has access, who defines the routing rules, etc.?

• Performance requirements: routing time of messages, recovery time required to process
messages after restart, etc.

• Operational security: which mechanisms handle guaranteed delivery, uptime (is
duplication of the server necessary?), will idempotent operations be used or not?

2.4.4 Data replication: content options

Separate from the abovementioned technical options, there are content-related decisions
that have to be made purely because you want to replicate data, regardless of the technical
option chosen.

If the various systems that receive replicated data are under separate management and
pursue their own policy, then a number of agreements have to be made with regard to copy
management. Who manages the copies of the replicated data? If department A forwards
data to department B:

• Who is then responsible for the access control to the replicated data?

• Who is responsible for the correct presentation and visualization of the data?

• Is department B allowed to change the data?

• Is department B allowed to apply interpretation rules to A’s data (outside the control of
A)? We are not referring here to crude changes to the basic data, but to its interpretation
in software, whether or not in combination with B’s own data.

• Is B permitted to forward the data in its turn to C?

Presentation and visualization

In data replication, the data is sent to another database, but this still does not guarantee that
it will be presented there correctly. For example, you may forward lab results to a system
that then presents these to its users without normal values or with superseded normal
values. As the head of the laboratory is responsible for the reporting, this person must check
whether the data is being correctly presented on the other system.

Some data is propagated as ‘raw data’ and cannot be visualized without further processing.
Examples are ECG traces, radiological images or the results of image processing. For results
of this kind you always need specialized viewers. The department that produced the data
must check whether the recipient department can visualize the data. With complex data, it
is worth considering whether, instead of function replication (see page 85), the producing
department should make available a visualization component that the recipient department
can build in.

2.4.4.1 MASTER-SLAVE

It is best to create a master-slave relationship between the sender and the recipient. All
changes in the propagated data are made via and by the master, which propagates the
changed data to all the slaves.

Master-slave relationship in this context means that only the sender has the right to add,
change or delete data of that type. The recipient may only read the data. If the recipient is
an autonomously managed system then this will have to be made mandatory through
procedural agreements: technically, there is no way of preventing the recipient from
changing the data.

For most data types it is a simple matter to define a so-called authentic source, which then
acts as the master.

Even so, you may sometimes want to amend data from a system that you are not the master
of. If this has to be done then the best way of doing it is to have the master define a function
via which the other systems can amend data on the master. The amended data is then
propagated again from the master to all the slave systems. This has the following
advantages:

• the master contains the necessary business logic to check whether the change is
permitted and to keep the data internally consistent;

• all the other applications (slaves) also have the changed data sent to them;

• the system that requested the change does not need to know which other systems have to
be informed of this or how;

• there are no inconsistencies between the copies on the different systems.
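A master-defined change function of this kind might be sketched as follows; the system names, the data key and the validation rule are invented for the example:

```python
# Sketch of a master-defined change function: slaves never edit their
# copies directly; they request a change from the master, which validates
# it and propagates the new value to every slave.
master: dict = {}
slaves = {"nursing": {}, "billing": {}}

def request_change(key: str, value: str) -> bool:
    if not value:                   # the master's business logic decides
        return False                # whether the change is permitted
    master[key] = value
    for copy in slaves.values():    # every slave receives the same update
        copy[key] = value
    return True

request_change("patient:123456:name", "Doe, John")
print(slaves["nursing"] == slaves["billing"] == master)   # True
```

Because every change flows through the single function on the master, the copies can never diverge, which is exactly the list of advantages above in miniature.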

2.4.4.2 ROUTING RULES

Even if the different subsystems are managed decentrally it is best if the routing rules and
filters are centrally managed on the central node. Administrator access and passwords for
the central node must be protected with special care, as all (medical) information passes via
this server.

2.4.4.3 SYNCHRONIZING OF DIFFERENT DATA COPIES

If, after the data on the master has been sent, changes still have to be made to it, then
synchronization procedures and messages must also be defined. A potential problem in this
context is that if a large part of the content of the message changes then it is not always clear
whether it is a new message or simply an old message that has been changed.

Each message must have a unique identifier to which reference can be made when passing on
any subsequent changes.

2.4.4.4 IMPLICATIONS OF DATA REPLICATION ON ACCESS CONTROL

Whether you use a central node or work with direct connections, different copies of medical
results will end up on different servers. This increases the risk of unauthorized access and
makes control more difficult. We can mention the following key issues:

Uniformity of the access control rules

Are the same access control rules respected in all systems? It does not make any sense to be
very strict in one system but then to allow people to access the data just by logging on to
another system.

Uniformity in the allocation of access rights

The access control rules can be implemented simultaneously in two systems, but there could
be a difference in the allocation of the privileges in the two systems. Suppose that in the two
systems only ‘administrators’ have certain access rights, but that in one of the two systems it
is possible to very rapidly obtain the rank of administrator – that system is therefore a
security risk.

Uniformity in the follow-up

In all systems there will be some form of follow-up (audit) of the accesses made. It is best to
do this in the same way – if technically possible – for all copies.

If medical data on different servers is copied, the access control across these different
systems must be made as uniform as possible as regards rules, allocation of privileges and
audit.

2.5 Results server

A results server is, in a certain sense, a special case of the architecture with a central node.
All the systems connect with the results server but only ‘results’ are passed on, and the
results server stores them instead of forwarding them. The other systems ask for the data
on-line as and when they need it.

Just as with the node you need fewer connections, but in this case you centrally manage the
results themselves rather than the message stream.

Advantages

• Much less redundancy. The results are only present at two places: in the producing
system and in the results server. That also means that you need less storage space from a
global perspective.

• Access control to the replicated data is more manageable because it can be centralized
(see below).

Disadvantages

• Performance. The replicated data is no longer updated locally in the various systems and
must now be retrieved on-line from the results server. This is slower.

• Single point of failure.

If you have a results server then you had best avoid a situation where subsystems store the
data redundantly again on this server.

2.5.1 Implications for access control

Centralized access control

As all the results end up in one results server, you only need to define the access control
rules at one location to achieve a uniform implementation. In order to implement more
refined access controls you will, however, need more information than just the results. You
may, for example, need the location where the patient is hospitalized, the department where
he or she has been registered, the appointments, etc. in order to evaluate whether access to
the file is to be permitted or not. This means that this data – along with the results
themselves – must be replicated to the results server. In addition, all the users must be
individually known in the results server, as also must the group to which they belong or the
roles that they can assume. This is, of course, technically possible but is not always feasible
because its implementation is labour-intensive.

Decentralized access control

The other extreme is to implement the access control rules on the querying systems and to
allow these to retrieve data from the results server without the results server performing an
additional check. The other systems are then fully trusted by the results server. Sometimes
the querying systems are allowed to query the data on the results server but with only one
log-in name each. In that case the results server can only maintain a limited log of what has
been queried: you only know what data has been sent to which application server, and not
to which user it has ultimately gone.

If you opt for the latter arrangement, it is of the greatest importance that the log-in
name/password combination with which the application servers go to the results server is
well protected. As the results server does not perform any further checks, the divulgence of
this log-in name/password combination is a major security risk.

Usually the access control on a results server will be defined partially centrally and partially
decentrally.

As far as possible, define the access control rules centrally. We suggest working with unique
log-ins on the results server and, where an application server retrieves results with a group
log-in, to determine what security measures this server must implement in the area of
authentication, authorization and logging.

Combination of results server and central node

For order entry a results server is not a solution: the requests must ultimately flow into the
application server of the department that has to execute the request. For this reason it can be
useful to combine the central node with a results server. The results server can then receive
the results either directly or via the node. Results can also be queried via the node or directly
in client-server mode. The latter is, of course, faster.

2.6 Component integration

Component integration is the most modern method of integration and also offers the most
possibilities, but it is also technologically the most difficult. It is only really possible with the
more modern technological resources, and older applications will not easily lend themselves
to integration as a component. Here we discuss only the technical options that have to be
chosen by the management.

Standardization on one technology

If you move over to a component-oriented system then you should try as far as possible to
standardize on the basis of one component technology. There are bridges between
component technologies, but these make integration more difficult and usually this results in
a loss of functionality and performance. It looks as if the market is going to be dominated by
two technologies: .NET from Microsoft, and Java, which is supported by a number of
suppliers both large and small. It is obviously not always possible to avoid using another
technology, but you must (1) define one technology as the preferred one from the
organization’s perspective and (2) in exceptional cases, decide on a case-by-case basis
whether the advantages of the ‘alien’ component outweigh the additional cost involved in
building bridges and maintaining the knowledge of the other technology required for this
particular case.

Whether you opt for front-end or back-end component integration, it is best to stipulate in
the IT master plan the technologies that the hospital wants and can support so that these
can serve as guidelines for all component suppliers. You must also determine what basic
interfaces the components must be able to satisfy if they are to be built in successfully.

You must also stipulate in advance which functions the subcomponents must delegate to the
overarching front-end application in which the components are integrated or to the back-end
framework. It is best to allow the subcomponents to deal only with the specific
functionality and internal access control. The following functions are best housed in the
overarching application or within the structures of the framework:

• Authentication, so that this is uniform (where appropriate via LDAP).

• Access to the component itself: only authorized users should get to see the component.
Access to the various functions within the component must, of course, be implemented by
this component itself.

• Logging of use.

Components are usually smaller than subsystems. You are therefore getting a system to
which many different manufacturers will have contributed building-blocks. Each of these
components has its own migration path but, above all, its own migration speed. A problem
arises if you want to upgrade the component framework to a subsequent version. At that
moment all the components that are used within that framework must be able to work with
this version. This cannot be taken for granted, as manufacturers are sometimes not inclined
to upgrade old systems. Others, on the other hand, sometimes do not want to support ‘old’
versions of the framework. In this way you can get into a blind alley in which some
components require at least a certain version of the framework and others can no longer run
on it. This can block the evolution of the system.

When purchasing components you must contractually ensure that the manufacturer will
keep pace with future versions of the component framework.

2.6.1 Back-end components

Components that have to interact with one another a great deal are best implemented on the
same type of framework.

There are bridges between the different component frameworks, but using them usually
involves a loss in performance. Implementing transactions across components in two or
more different frameworks is also no easy matter.

Delegate as much as possible of the user administration and access control to the functions
provided for this purpose in the framework itself.

This makes a uniform application of the rules simpler and thus enables you to obtain (within
that framework) a centralization of them that facilitates maintenance and control.

2.6.2 Front-end components

If you create an application for the medical file with the help of front-end components then
the overarching application in which the components are integrated plays an important role.
The above remarks remain applicable. The more functions the components share with each
other, the less function replication is required and the more consistent the interface and its
use become.

In the case of front-end components, you had best also define the visual integration (look
and feel, integration in one window or one window per component, etc.). Allow components
to share as many functions with each other as possible (especially basic functions such as
patient and user identification, looking up patients, users, materials, etc.).

2.6.3 Cooperating applications as an alternative to front-end components – CCOW

‘By synchronizing and coordinating applications so that they automatically follow the user’s context,
the CCOW Standard serves as the basis for ensuring secure and consistent access to patient
information from heterogeneous sources. The benefits include applications that are easier to use,
increased utilization of electronically available information, and an increase in patient safety.’1

CCOW is not a component framework, but rather a technology with which you can
synchronize different applications that are open simultaneously. CCOW uses the concepts of
‘patient’ and ‘episode’ and will, if the user selects a patient and a certain episode in one
application, also inform the other open applications about them so that they can select the
same patient and episode. CCOW is now managed by the HL/7 consortium, which is
extending the standard so that applications can be synchronized with one another even
more closely. CCOW supplies synchronization, but not integration.

1 From ‘Overview of HL7’s CCOW Standard’ by Robert Seliger, Co-Chair, CCOW Technical
Committee, www.hl7.org

3 Content-related structure

In this chapter we primarily discuss key issues relating to the manner in which the content of
the medical file is structured. This covers, in particular, the degree and manner of coding
and structuring. As regards functionality, no judgment is made on which functions must or
must not be present: the choice of which functions to automate (and which not) says little
about what makes a better medical file, and in a number of cases non-automation is just as
good an option. Functionality is only discussed insofar as recommendations are made about
the way in which a certain function is automated.

3.1 Data organization

3.1.1 Patient number

3.1.1.1 MANAGEMENT

The patient number is the central anchoring point – the index – for all patient-related
information in the applications.

Choose a meaningless patient number.

The purpose of a patient number is that it remains constant over time. Anything that has
any significance whatsoever with regard to the patient (date of birth, gender, etc.), the
organization (serial number per year, year of first contact, etc.), external references (Social
Information System card number, health insurer, etc.) and which is included in this number
can change or be recorded incorrectly and thus give rise to the patient number having to be
changed. This must be avoided as far as possible. All this information must be an attribute
of the patient identification that is related to this meaningless number.
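The principle can be sketched as follows. The names and structures are hypothetical; a real registry would of course live in a database:

```python
import itertools

class PatientRegistry:
    """Sketch: meaningless, stable patient numbers. All meaningful
    data (birth date, gender, SIS number, ...) are attributes that
    can be corrected without ever changing the number itself."""

    def __init__(self):
        self._seq = itertools.count(1)
        self.records = {}

    def register(self, **attributes):
        number = next(self._seq)          # carries no meaning at all
        self.records[number] = dict(attributes)
        return number

    def correct(self, number, **attributes):
        # Fixing a wrong birth date does not touch the identifier.
        self.records[number].update(attributes)

reg = PatientRegistry()
pid = reg.register(name="J. Peeters", birth_date="1960-01-01")
reg.correct(pid, birth_date="1961-01-01")
```

Because nothing in the number depends on the attributes, correcting an attribute never forces a change of patient number.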

If there is the slightest doubt as to whether some information belongs to a patient file, a
new patient number must be created. There must be a procedure for merging two patient
numbers and their related files, which can undo this precautionary duplication once it has
been proved that the same patient is involved.

It is impossible to automatically ‘unsplit’ wrongly merged patient files or information about
different patients in the same file. Correcting these errors requires manual interventions by
users who are familiar with the (medical) content. This is usually a time-consuming task that
can only be performed by the attending physician. In addition, it is better to have less
information because a file was split unnecessarily than to have incorrect information because
wrongly merged files for different patients were retained.
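A merge procedure that keeps later manual correction feasible can be sketched as follows. The data structures are purely illustrative; the essential point is that every item keeps a tag with the patient number under which it was originally filed:

```python
def new_file(number, items):
    # Every item is tagged with the number it was first filed under.
    return [(number, item) for item in items]

def merge(files, keep, drop):
    """Merge patient file 'drop' into 'keep'. The source tag on each
    item is what makes a later manual 'unsplit' feasible at all;
    once the tags are gone, the origin of the data is lost."""
    files[keep] = files[keep] + files[drop]
    del files[drop]

files = {
    1001: new_file(1001, ["lab result A"]),
    1002: new_file(1002, ["lab result B"]),
}
merge(files, keep=1001, drop=1002)
```

Even with such tags, deciding which items truly belong to which patient remains a manual, content-driven task.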

3.1.1.2 UNIQUENESS OF THE PATIENT NUMBER

The decision as to whether or not you can use a unique patient number in the hospital is
unnecessary if only one (central) application is available.

Unique patient number throughout the entire hospital

If there are several (linked) patient applications, you should opt for one application in
which the basic file of the patient identifications is managed. This basic file is then copied
to all the other patient applications. In the other patient applications no changes,
additions, etc. to the copy data are permitted.

See p. 15 for techniques for linking applications.

In the application in which the patient identifications are managed it must be possible to
enter all the patients.

It is important that procedures are agreed for determining who can enter and change patient
data in the central patient application and when.

Make due allowance for exceptional situations such as weekends, nights, etc. when new
patients have to be included in the system and the persons who normally perform this
identification are unavailable.

Make due allowance for situations in which the central patient application is down while the
other applications that receive a copy of this data remain available.

Each patient application therefore receives a copy of each patient record and can thus contain
(usually only administrative) data of patients for whom no medical data is available in these
applications. Allowance must be made for this in the authorization rules of this application.
In some cases, even administrative data contains information that should not be available in
this application.

Several patient numbers across the entire hospital

You can decide to generate the patient number and to have the corresponding patient data
managed in each application separately. In this case the patient number is split on the basis
of a splitting of the application itself (per department, function, etc., see p.12). The
advantage of this is that you do not have any administrative patient data in applications that
also do not have other data about this patient. This makes any special authorization rules
superfluous. Also, unavailability of the central application is not a problem.

If a link has to be established between patient data right across the applications then a
relationship must be established between the patient numbers of the various applications.

Possible links that must be considered are those with the invoicing system, the results server,
etc.

Possible techniques for this relationship are the inclusion of a common attribute in the
various applications, such as the SIS card number (what about patients without an SIS
card?), combination of name, address, birth date (what about incorrect data or variants in
spelling, etc.?)
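A sketch of such a linkage, with all names and attributes hypothetical, might look like this; note how matching on a normalized name plus birth date still leaves unmatched records that require manual follow-up:

```python
def normalize(name):
    # Crude normalization; spelling variants remain a known weakness.
    return "".join(ch for ch in name.lower() if ch.isalpha())

def link_patients(app_a, app_b):
    """Relate the patient numbers of two applications through a
    combination of attributes (here: name + birth date)."""
    index = {(normalize(p["name"]), p["birth"]): number
             for number, p in app_b.items()}
    links, unmatched = {}, []
    for number, p in app_a.items():
        key = (normalize(p["name"]), p["birth"])
        if key in index:
            links[number] = index[key]
        else:
            unmatched.append(number)     # must be resolved manually
    return links, unmatched

app_a = {1: {"name": "De Smet, An", "birth": "1970-03-02"},
         2: {"name": "Peeters, Jan", "birth": "1955-11-30"}}
app_b = {"X9": {"name": "DE SMET AN", "birth": "1970-03-02"}}
links, unmatched = link_patients(app_a, app_b)
```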

3.1.2 Codings

Codings are predefined lists of concepts in which each concept is identified by an unambiguous
and unique alphanumeric combination of characters to which one or more definitions are linked.
There are internationally recognized and standardized codings, but they can also be local lists
2.

Data that you want to process automatically must be coded.

If you allow data to be entered in free text, then different names, spellings, typos, etc. will be
used for the same concept, which the computer will not identify as the concept concerned.
These errors, variants, etc. are committed both by different users and by the same user over
time. Coding is therefore necessary even if there is only one user who thinks he is acting
consistently 3. The calculation of numbers (statistical processing), the use of the system for
warnings (interactions between medication and diagnoses, etc.), the calculation of derived
values, etc. require the computer to be able to use a code.
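The effect described in footnote 3 can be made concrete with a small sketch:

```python
from collections import Counter

# Free-text answers to the same yes/no question (cf. footnote 3):
free_text = ["yes", "y", "OK", "+", "no", "n", "-"]
distinct = len(Counter(free_text))   # 7 distinct values, 2 concepts

# The same answers entered against a two-value code list:
CODES = {"Y": "yes", "N": "no"}
coded = ["Y", "Y", "Y", "Y", "N", "N", "N"]
counts = Counter(coded)              # now the computer can count
```

With free text, the computer sees seven unrelated values; with codes, statistical processing and automatic warnings become possible.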

Use a structured coding system for code lists that have a lot of codes.

2 The minimal example is a question that can have an affirmative or a negative answer. This
can be coded by "yes" and "no".

3 When using free text in the above-mentioned example, one could obtain "yes", "y", "OK",
"+", ... and "no", "n", "-", ... as a result.

In a structured coding system the code consists of:

• different axes that are separate from one another (e.g. SNOMED). Each axis has its own
significance and is in turn its own coding system that can be interpreted separately
from the other parts of the code

or

• a hierarchical structuring in which each level must be interpreted on the basis of the
previous level (e.g. ICD-9).

If you do not have a structured code then you will have to create families of codes to be able
to answer certain questions with the computer. This can be a very laborious task involving
continuous maintenance and updating.
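The difference can be sketched with hypothetical codes. With a hierarchical code (ICD-9-like), a family is simply a shared prefix; with an unstructured list, the family must be maintained by hand:

```python
# Hypothetical hierarchical codes (ICD-9-like): each further digit
# refines the previous level, so a code family is simply a prefix.
problems = {"P1": "250.01", "P2": "250.42", "P3": "401.9"}

def in_family(code, prefix):
    return code.startswith(prefix)

diabetes_related = sorted(p for p, code in problems.items()
                          if in_family(code, "250"))

# With an unstructured code list the same question needs a
# hand-maintained family that must be updated with every new code:
DIABETES_FAMILY = {"D17", "D22", "D85"}   # laborious to keep current
```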

There are various techniques for inputting the correct code. Each of these techniques has its
application in a specific context:

• hierarchical search systems in which the codes are presented per level and you scroll
through the various sublists as if travelling along a path. When each level is chosen the
sublist for the following level appears;

• search systems based on the definition of codes (exact, phonetic, etc. searching);

• personalized and restricted lists of the most commonly used codes. Most users will, in
the majority of cases to be coded, only use a restricted list of codes. If these can be
presented separately, this speeds up the searching, and you reach the complete lists by
selecting the ‘other’ option;

• natural language identification systems in which the user enters the text in free format
and the system then suggests codes that are potentially applicable.
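The third technique, the personalized restricted list, can be sketched as follows (codes hypothetical; a real system would keep the usage history per user):

```python
from collections import Counter

def shortlist(history, n=3):
    """Offer the user's most frequently used codes first; everything
    else sits behind an 'other' option opening the complete list."""
    top = [code for code, _ in Counter(history).most_common(n)]
    return top + ["other"]

# Hypothetical per-user history of previously entered codes:
history = ["B12", "B12", "X01", "B12", "X01", "K55", "Z99"]
menu = shortlist(history)
```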

Code first those systems that serve as input for other systems.

The best candidates for quick coding are medication prescriptions, problem lists (diagnoses
and operations) and activities.

The user needs more time to input codes than free text. The advantage of coding is shown
primarily when more than one system can use the coded information. For that reason it is
recommended to invest primarily in having the basic functionality coded. If the user notes
that the coding can be used fruitfully in other systems because there are advantages in
interaction, detection, reporting, etc., then s/he will not be so inclined to consider the extra
time invested in coding compared with free text as a waste of time.

3.1.3 Structuring

Structuring is the division of information into smaller parts in which each part has a
predefined significance and a distinctive place with regard to the other information.

If you want to exchange information with other systems then you have to structure this
information. Structuring can also be necessary to ensure that information is always entered
in the same way (e.g. for reasons of consistency or training).

Another reason for structuring can be the desire to re-use information. The structuring of a
discharge letter into, for example, ‘medical history’, ‘current problems’, ‘findings’ and
‘decision’ makes it possible to automatically copy some of these sections into a subsequent
letter or to fill them in from a problem list.
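A sketch of such reuse, with a hypothetical section structure:

```python
# Hypothetical section structure for a structured discharge letter.
# Because the sections are separate fields instead of one block of
# free text, parts of them can be reused automatically.
letter_1 = {
    "history":  "Appendectomy 1998.",
    "problems": "Type 2 diabetes.",
    "findings": "HbA1c elevated.",
    "decision": "Adjust medication.",
}

def start_followup_letter(previous):
    # Pre-fill the stable sections; findings and decision are new.
    return {
        "history":  previous["history"],
        "problems": previous["problems"],
        "findings": "",
        "decision": "",
    }

letter_2 = start_followup_letter(letter_1)
```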

Specify which data have to be exchanged with other systems. Structure this data and code
the individual data elements as far as possible.

Interesting candidates for exchange between systems are letters reporting technical
investigations (or examinations) and discharge letters. These must in the first instance be
structured so that parts of them can be re-used easily and automatically in another letter. It
also facilitates the integration with GP packages when forwarding them to GPs. The
structuring of a problem list (see below) can also be useful for the construction of the letters.

3.2 Functionality

3.2.1 Appointments management

A rough distinction can be made between two kinds of appointments management systems:

• slot based. In this version, empty spaces are provided in the system in advance into
which an appointment can then be fitted. A patient’s name can be entered into each
space, after which that space is ‘taken’. You specify in advance the time and duration of
each slot and the number of slots that are available, together with their degree of
accessibility (can only be booked by specified users, or only if certain conditions are met,
such as information that is entered when searching for the slot (e.g. ‘urgent’, ‘first visit’)
or the status of the appointments book (more than 80% full, less than 2 days before the
time of the appointment, etc.)). The disadvantage of this procedure is that you have to
decide in advance how long the duration of each slot will be (without knowing which
patient is being seen or for which problems). The advantage is that you can search for
free spaces very quickly.

• rule based. In this version you list only the restrictions/possibilities in advance (e.g. each
working day between 8.00 a.m. and noon and 2.00 p.m. and 6.00 p.m., Saturday between
10.00 a.m. and 11.30 a.m.). The advantage is that patients can be added at free moments
and for freely defined periods of time. The drawback is that searching for free slots can
involve complex calculations and that unfillable slots arise, which means that in total
there is still enough time for an appointment but that this time is fragmented over
different moments.
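The difference between the two search problems can be sketched as follows. All data structures are illustrative; in the rule-based part, times are expressed in minutes since midnight for simplicity:

```python
# Slot based: free slots are predefined, so searching is a scan.
slots = [("09:00", 15, "taken"),
         ("09:15", 15, "free"),
         ("09:30", 15, "free")]

def first_free_slot(slots):
    return next(s for s in slots if s[2] == "free")

# Rule based: only opening rules exist; whether an appointment of a
# freely chosen duration fits must be computed against bookings.
def fits(start, duration, open_from=480, open_until=720, booked=()):
    # Times in minutes since midnight; 'booked' holds (start, dur).
    if start < open_from or start + duration > open_until:
        return False
    return all(start + duration <= b or start >= b + d
               for b, d in booked)

slot = first_free_slot(slots)
ok = fits(500, 20, booked=[(480, 15)])
```

The slot-based search is a cheap scan over prepared spaces; the rule-based check must be repeated for every candidate start time, which is why finding free moments can become computationally expensive.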

By definition, an appointments system requires many new patient identifications to be
created. It is best if an appointments system is not chosen as a central patient application.

Make allowance for the fact that an appointments system requires provisional patient
identifications to be created and that these provisional identifications should not be taken
over into the other systems.

The identification of patients in an appointments system proceeds very quickly (by
telephone, etc.) and is subject to little control, so that the risk of incorrect information is very
high.

If you have a request and appointments system, the appointments system must not contain
any request information, but these systems must refer to each other.

If you only have an appointments system then a logical expansion of functionality is a
request for extra clinical parameters when filling in or searching for an appointment. If you
have a request system, however, you should not take this information over into the
appointments system or make a fresh query but should be able to make an appointment on
the basis of a request (with all its clinical parameters). The appointment and the request are
linked to each other: deleting the request deletes the appointment, while deletion of the
appointment makes the request unscheduled.

To make combination appointments you should opt for a system that displays the
possibilities for each part of the combination from which the user himself can determine the
combination, instead of a system that suggests the combinations itself.

For the optimum utilization of the appointments you would have to enter far too many
parameters to get the computer itself to suggest a good combination. If you do not enter this
information, the suggested combination will always be less optimal than the one that a user
can determine 4.

3.2.2 Results management

Copy results between systems minimally. If you copy results between different systems you
must make sure in advance that copy management is being performed.

For further discussion, see p. 22.

3.2.3 Request and registration system

Before buying a request or registration system or having one developed, you must determine
who is going to register what, when, and where. Determine whether the registration will be
entered immediately and is the sole registration or whether it will be created first on paper
and only entered afterwards.

Who registers the request in the computer largely determines the user interface required and
the support that must be offered. If this is the person who generates this request himself (e.g.
the doctor who registers a request for function measurement, a lab test, etc.) the
requirements for the user interface are very high. You must be able to navigate quickly and
easily between the different options without losing your overall view. If the person who
registers the request is not the same person who generates it (e.g. administrative users who
have to register in the computer a request that the doctor has put down on paper) then
simpler interfaces can be provided. Paper allows for much more complex, visually-
supported selection methods than the computer because of the higher resolution that can be
achieved on paper than on a computer screen.

4 When one has to fix a combined appointment for 2 technical examinations followed by a
consultation on the same day to discuss results, one must take into account the patient's
condition, the distance from one appointment to the other, the possibility that certain
technical examinations cannot be combined (patient must be fasting, contrast substances, ...),
and external elements (patient should get the 12 o'clock bus, ...). An "automatic" support will
never be able to include all these elements but will have to provide a safety margin. The
user, however, can easily include these elements.

It is easy, for example, to structure more than 500 lab tests on a request form consisting of
both sides of a single sheet of A4 (60 lines, 4 columns, 2 sides) without affecting the legibility
and usability. If you tried to present the same number of lab tests on a computer screen in
the same structure it would be unusable. In that case, more interim selections would have to
be made, which ultimately would increase the number of actions required to enter the
request. If an administrative user has to enter the request formulated on paper into the
computer then the user interface can be many times simpler because only test numbers have
to be entered.

When the registration is made in the computer is also an important factor for determining
the user interface. There is a big difference between the support required for users who enter
information in bulk (and post facto) and that required for users who enter the information
while they are engaged in a discussion, exam or whatever. A doctor who enters a request
during a consultation with the patient will require a different interface to a nurse who is
entering all the collected requests at the end of a shift.

Where the registration or the enquiry is made also places demands on the technology.
Registrations that are made as close as possible to the place where the information is
generated have high technological or infrastructural requirements. A registration of the
clinical parameters at the bedside demands a wireless network and a portable computer or
alternatively a computer by each bed. Registrations in the consultation booths make
demands on the space available in the office. If the registrations are being entered further
away from the place where the information is being generated you can make better use of
the available space, materials, etc. The drawback is that information can be distorted
(because it is not being entered immediately), forgotten, etc.

What you register can be split into content and degree of detail:

it is best to base the requests or registrations that you enter on your own codes rather than on
external ones (e.g. your own activity numbers rather than RIZIV (Belgian National Health
Insurance) nomenclature numbers). With this approach you can even determine the degree
of detail that you want to record and you are not dependent on the external need for
aggregation. Usually people want a more detailed registration internally than is required for
external reporting (minimum clinical data, minimum nursing data, invoicing, etc.).

Register continuously only what will have an immediate impact on the functioning of the
system or is necessary for external reporting.

The quality of continuous registrations that serve only for long-term control, without the
users seeing any immediate advantages or effects, is in practice very low. If you want to
obtain figures from registrations for long-term control you are better off using
sampling (continuously for a limited population or only at certain moments for the whole
population).

3.2.4 Medication prescription and administration

3.2.4.1 CONSUMPTION

There is a clear difference between three types of consumption, each of which is necessary
for controlling different aspects of medication management:

• The medical consumption is the actual quantity consumed by the patient.

• The physical consumption is the quantity that has been removed from stock.

• The billable consumption is the quantity that can be charged for on the invoice.

It is important to recognize and make due allowance for this distinction5.

Choose to register one sort of medication consumption and derive the other forms of
consumption from it.
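A sketch of such a derivation, registering only the medical consumption (the administered dose) and deriving the other two forms from it. The derivation rules below are assumptions for illustration; real billing and stock rules differ per product and per institution:

```python
import math

def derive_consumption(dose_ml, package_ml, spilled_packages=0):
    """Register the medical consumption (the administered dose) and
    derive the other forms from it. Rules are illustrative only."""
    medical = dose_ml
    packages_used = math.ceil(dose_ml / package_ml)
    physical = (packages_used + spilled_packages) * package_ml
    billable = packages_used * package_ml   # charge opened packages
    return medical, physical, billable

# A 10 ml dose from 50 ml bottles, with one bottle dropped:
medical, physical, billable = derive_consumption(
    dose_ml=10, package_ml=50, spilled_packages=1)
```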

3.2.5 Patient movements

The central registration of the patient movements can be an important controlling factor for
various applications. The more refined and the more up-to-date the information, the greater
the advantages associated with it. The patient movements can serve as input for
authorization rules (for the patient file, but also for external systems such as medicine
cabinets for opiates, etc.) and for generating work lists (attendance patterns, combinations of
scheduled and already present patients, etc.).

The patient movements can also be used for on-line quality control when entering data:
activities cannot be registered outside the patient’s attendance period, an activity cannot be
performed at a department where the patient is not present, etc.
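The on-line quality control described here can be sketched as follows (record structure hypothetical):

```python
from datetime import datetime

# Hypothetical movement records: one stay per ward, with admission
# and discharge times.
movements = [
    {"ward": "E650",
     "start": datetime(2002, 12, 1, 8, 0),
     "end":   datetime(2002, 12, 3, 17, 0)},
]

def activity_allowed(ward, moment, movements):
    """An activity may only be registered for a ward and a moment at
    which the patient was actually present."""
    return any(m["ward"] == ward and m["start"] <= moment <= m["end"]
               for m in movements)

ok = activity_allowed("E650", datetime(2002, 12, 2, 10, 0), movements)
bad = activity_allowed("E610", datetime(2002, 12, 2, 10, 0), movements)
```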

5 A patient has to be given 10 ml from a 50 ml bottle. During the administration someone
drops the bottle and a second bottle is opened to complete the dose. The medical
consumption is therefore 10 ml, the physical consumption is 2 x 50 ml and the billable
consumption is 50 ml.

3.2.6 Invoicing

Try as far as possible to derive/suggest the data required for the invoicing system on the
basis of other registrations.

Possible sources are structured and/or coded reports and medical registration modules. So,
for example, thanks to a structured report and the designation within it of certain technical
actions (for reporting reasons), tests or the charges associated with them will not be
forgotten.

3.2.7 Problem list and progress notes

A problem list is a chronological list of diagnoses and interventions that shows the current
status of the patient. In this list it must be possible to change problems simply from active to
inactive status and to correct their content. The progress notes are brief notes in the file that
are dated and which can perhaps be related to the problems in the problem list. It need not
be possible to inactivate the progress notes, given that they are just a snapshot of a specific
moment and are thus always correct. The progress notes form a sort of communication log
about a patient. The progress notes are less relevant in the long term.

A central, structured, active and coded problem list is the basis for the proper functioning of
the medical file. Make agreements in advance about their use and the speed of input.

You must try to get as much information coded as possible as quickly as possible in a central
problem list, so that this can be used subsequently for applications such as inputting the
patient’s medical history and current problems into the discharge letter, for clinical
information when requesting technical exams and cross-referrals, and as input for
controlling certain systems such as warning systems (medication allergy, contra-indications,
etc.). In the extreme case no information at all need be furnished for the requesting of exams
on condition that the active problem list has been filled in up to date. The faster the
information is filled in here the more readily it can be used by various applications.

4 Availability of the computer system

In this chapter, we discuss a number of techniques that all have the ultimate goal of making
the data that are stored in the computer system available at any desired moment.

An initial aspect is, of course, the avoidance of unavailability. We must, however, assume
that 100% availability is unachievable or, at least, unaffordable. Measures must also be taken
to restore availability as quickly as possible when problems do arise.

To verify the completeness of the policy in this area you can start from a list of potential
problems. For each of these items you can then examine how this risk is covered at your
institution. Where appropriate, you can decide that the risk that some problems will
effectively arise is so small that no measures will be taken to deal with them. This means of
course that should such a problem arise you will have to call on general disaster plans of the
hospital and, for example, temporarily close a number of departments.

4.1 Causes of unavailability

We suggest below a number of possible sources of problems, without aiming for
completeness:

• Hardware problems with a computer: failure of a disk, power supply, CPU, etc.

• Hardware problems with the network infrastructure: failure of switches, routers, etc.

• System software problems: problems with the operating system, database software, etc.

• Problems with the network due to damage to cabling.

• Application software problems: bugs in the application software that make the
application unusable.

• Failure of applications due to manipulation errors: deleting data in the database, wiping
of applications, etc.

• External factors: failure of electricity, fire, smoke damage, water damage in the computer
rooms.

• Deliberate destruction of infrastructure, vandalism.

• Installation of new hardware.

• Installation of new versions of system software or application software.

4.2 Techniques to limit unavailability

4.2.1 Backup

Creating a backup is a preparation for a recovery operation. In a backup the data is copied to
another medium (magnetic tape, CD, etc.) and stored separately from the operational
system, both logically and physically.6

A distinction is made between:

• A full backup, in which all the data is copied.

• An incremental or differential backup, in which only the data changed since a previous
backup is copied (since the previous full backup for a differential backup, since the most
recent backup of any kind for an incremental backup).

In the worst-case scenario you will have to restore all the data of the computer system on the
basis of the backups. All changes since the last backup will be lost at that moment.

The frequency of the backups must be determined in the light of the maximum acceptable
loss of data.

If you use rewritable media for the making of backups then as soon as the first byte of the
new backup has been written the entire content of the medium is lost. To ensure that there is
always at least one backup of a system at least two sets of media must be available.

Bear in mind that errors in content may only come to light after some time. In such a
situation the last backup containing the situation before the error occurred can help to
remedy the problem.
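A sketch of a rotation over two media sets that also keeps several versions for late-discovered content errors (all structure hypothetical):

```python
class BackupRotation:
    """Alternate between (at least) two media sets, so a complete
    older backup always survives a failed or overwritten run, and
    keep several versions for late-discovered content errors."""

    def __init__(self, media_sets=("A", "B"), versions_kept=4):
        self.media_sets = list(media_sets)
        self.versions_kept = versions_kept
        self.history = []               # (label, media), newest last
        self.turn = 0

    def run_backup(self, label):
        media = self.media_sets[self.turn % len(self.media_sets)]
        self.turn += 1
        self.history.append((label, media))
        self.history = self.history[-self.versions_kept:]
        return media

rot = BackupRotation()
used = [rot.run_backup(day) for day in ("mon", "tue", "wed")]
```

Because rewritable media lose their entire content as soon as a new backup starts, alternating sets guarantees that one complete backup always exists.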

At least two sets of media must be used to make a backup.

6 Backup should not be confused with archiving. Backup and archiving are two different
applications, each with their own requirements. Archiving is aimed at keeping the data
consultable in the very long term. Regularly performing a full backup cannot therefore be
regarded as an archive. For the specific requirements of archiving we refer you to chapter 5.

Determine the effective number of versions of the backups to be performed in the light of the
time that can elapse before a major content-related problem can be discovered.

It must also be possible to use the backups in the event of the total destruction of the
computer system. This is, of course, impossible if the backup media are destroyed at the
same time as the computer system.

The backups must be stored in such a way that they cannot be destroyed simultaneously
with the operational computer system.

This can be achieved by storing the backups at a sufficient distance from the operational
system or by storing them in, for example, a fireproof cabinet.

Bear in mind that the backups contain all the data of the computer system and that the
normal access control mechanisms do not work for the backups. This problem can only be
resolved by safeguarding the physical access to the backups.

Protect the backups against unauthorized access.

As stated above, creating a backup is actually the preparation for a recovery procedure. Of
course, everyone hopes that the recovery procedure will never have to be used, but even so it
is recommended that this procedure be tested out.

A test of this kind will allow you to check whether:

• the software that you are using for the backup is actually 100% compatible with the
recovery software;

• the backup media are readable;

• the data stored in the backup contains sufficient information to, if necessary, start up the
applications on a new machine just supplied by the supplier.

Another important fact that you can learn from such a test is how much time you need to
restore the system completely. This is therefore the timespan within which the hospital must
be able to continue working without the support of the computer system.

Provide emergency procedures to guarantee the functioning of the hospital during the time
that you need to restore the computer system.

These tests may lead to the conclusion that the inconvenience is too great to keep the hospital
operational in a realistic way. You should then consider whether:

• The recovery procedure can be shortened, e.g. by using different tape drives in parallel.

• The net period of unavailability can be shortened by setting priorities in the recovery.
Here you can think in terms of, for example, a splitting of the databases into a current
database and an historical database. Of course, in such a situation you can offer a
meaningful application package as soon as the current database has been restored.

• The chance that a full recovery must be performed can be reduced by building
redundancy into the configuration (see below).

To guarantee good discipline in the making of backups it is important to ensure that this
process is not dependent on the goodwill of the users.

In the office equipment environment this can be encouraged by recommending users to place
their files on a central server for which backup is arranged.

Make sure that the backups are not dependent on the goodwill of the individual users.

4.2.2 Hardware redundancy

Almost all large machines offer the possibility of executing a number of components
redundantly.

The first aspect that deserves attention in this context is magnetic disks. Given that magnetic
disks contain moving parts they are quite sensitive to failure. After resolving the hardware
problem the data must also be restored from the backup.

All ‘server’ machines must be protected against the failure of a disk by RAID or mirroring.

Some server machines offer other opportunities for internal redundancy. Here are a few
examples:

• Redundant mains current supply: of all the electronic parts of a computer the power
supply is perhaps the most vulnerable part because it is here that the whole power of the
computer system is processed. For larger servers, consideration can therefore be given to
providing a redundant power supply.

• The opportunity to restart with part of the server: some servers will try to restart after an
outage caused by a hardware defect, switching off the defective components. The server
then restarts, albeit with reduced capacity.

Consider whether further investments in hardware redundancy are worthwhile in the
context of your global availability policy.

4.2.3 Maintenance contracts

Needless to say, you should conclude maintenance and support contracts wherever
necessary. These must of course be adjusted in the light of the likely impact of a system
failure.

For devices of which dozens of units have been installed (e.g. office printers or PCs)
you can consider purchasing a number of back-up devices yourself and then having
repairs performed on a cost-plus basis.

For other devices it will of course be necessary to conclude a maintenance contract that
provides for interventions during business hours (and for other devices perhaps even round-
the-clock interventions).

Decide for which systems you will have to conclude maintenance and support contracts.

4.2.4 Cluster

A cluster is a traditional way of improving the availability of a computer system.

The typical configuration consists of 2 computer systems and 2 disk systems, with each disk
system being connected to the 2 computer systems. Under normal circumstances, one of the
computer systems will be functioning, with each disk system acting as the mirror of the
other.

Of course, if one of the disk systems fails then the computer system will continue running on
the ‘surviving’ disk system.

If a computer system fails then a switchover will rapidly be made to the other computer
system so that users experience only a brief interruption.

In the event of software upgrades the entire cluster must be stopped. In other words, this
configuration does not resolve the problem of interruptions for software upgrades.

4.2.5 Replication server

By ‘replication server’ we mean in this case a technique in which a link is realized at logical
level between 2 database servers.

This can be achieved by, for example, software that reconstitutes database commands (SQL
commands) from the log of an operational server. These commands are then executed by a
second database server – the ‘warm standby’.

Generating SQL commands does take some time, which means that the changes on the
primary database are performed on the ‘warm standby’ after some delay and that the latter
is therefore not an exact copy.

This technique has the advantage that the ‘warm standby’ is not a physical copy of the
operational server. The warm standby can accordingly have a different software version to
the operational server. Alternatively, other indexes etc. can be added.

This makes it possible to install new versions of software and hardware with a minimum
interruption in services to the hospital. One possible scenario would then be:

• Stop the routing of the SQL commands to the warm standby but continue to keep the
commands up to date.

• Perform the necessary maintenance work on the ‘warm standby’.

• Perform the buffered SQL commands on the ‘warm standby’. After a catch-up time this
will accordingly again be a copy of the primary server.

• Switch the users over to the warm standby as follows:

- Stop all user activities on the operational database.

- Wait until all the tasks have also been executed on the ‘warm standby’. The
databases will now be logically identical.

- Reverse the roles of ‘warm standby’ and primary server.

• Now repeat the upgrade procedure on the other machine.

In this scenario there is only unavailability during the actual switchover – a procedure that
can be performed in just a few minutes.
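The switchover scenario above can be sketched in outline. The following is an illustrative simulation of log-based replication, not a real replication product; all class and variable names are invented:

```python
# Illustrative sketch: a warm standby kept up to date by replaying the
# primary's command log, following the switchover steps listed above.

class Database:
    def __init__(self):
        self.rows = {}
        self.log = []                      # ordered list of (key, value) commands

    def execute(self, key, value):
        self.rows[key] = value
        self.log.append((key, value))

class WarmStandby:
    def __init__(self, primary):
        self.primary = primary
        self.db = Database()
        self.replayed = 0                  # position reached in the primary's log
        self.routing = True

    def replay(self):
        """Apply buffered commands; does nothing while routing is stopped."""
        if not self.routing:
            return                         # commands keep accumulating in the log
        for key, value in self.primary.log[self.replayed:]:
            self.db.execute(key, value)
        self.replayed = len(self.primary.log)

primary = Database()
standby = WarmStandby(primary)

primary.execute("patient:1", "admitted")
standby.replay()                           # standby is now a logical copy

standby.routing = False                    # step 1: stop routing, keep buffering
primary.execute("patient:2", "admitted")   # changes accumulate during maintenance
# ... step 2: perform maintenance work on the standby here ...
standby.routing = True
standby.replay()                           # step 3: catch up on buffered commands

assert standby.db.rows == primary.rows     # step 4: logically identical; swap roles
```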

In emergencies you can of course switch over in a forced manner, so that even after failure of
the operational computer system you can switch over to the ‘warm standby’. In this case
allowance must be made for the fact that the data that has already been processed on the
operational server but which has not yet been passed on to the ‘warm standby’ will be lost.

4.2.6 Equipping of computer rooms

When equipping computer rooms, it is best if facilities for protecting the computer systems
are provided. Examples include:

• Physical shielding as a protection against vandalism.

• Automatic fire extinguisher installation.

• UPS as protection against short power cuts.

• Connection to the hospital’s emergency power supply as a protection against long
interruptions in the electricity supply.

Consider also equipping 2 computer rooms at some distance from one another as a
protection against fire, smoke damage, water damage, etc.

4.2.7 UPS, emergency power for infrastructure outside the computer rooms

To maintain the availability of the computer system during long power outages, the
equipment set up in the workplace and the network infrastructure must of course also be
connected to the emergency power supply. A UPS is perhaps superfluous in this case: after
a power outage these systems can very probably simply be restarted.

If you opt to connect only one part of the infrastructure to the emergency power supply you
must check that this is done consistently: it makes no sense to have a PC running in an office
if the network apparatus that handles communication with the central servers is not
connected to the emergency power supply.

4.3 Redundancy criteria for a hospital

Some production lines in industrial environments depend for their proper functioning on a
computer system: if the computer system fails then the production line grinds to a halt.

Such environments are often cited as a reference for the high availability of computer
systems. In such environments however there are usually periods of varying duration in
which there is no activity: think of weekends, for example, or large-scale maintenance on the
production line. These periods can then of course also be used for performing maintenance
on the computer system.

A hospital does not have periods of inactivity of this kind: a hospital is never closed and the
demands on intensive services are always critical. The computer systems that these
departments support must therefore be continuously available.

We have to retain our sense of realism here: a genuinely 100%-guaranteed availability of a
computer system is perhaps unattainable and certainly unaffordable.

It is nonetheless true that long-term unavailability in particular has a very disruptive effect
on the working of a hospital. Allowance must be made for this when choosing a strategy.

Bear in mind that in a hospital a few quite short interruptions in the availability of the
computer system are less troublesome than one long interruption. This is a different
situation to that of the typical industrial environment. Make allowance for this when
making choices and decisions.

5 Archiving

By ‘archiving’ we mean in this context the storage of data on removable media (magnetic
tape, CD, etc.) with the intention of retaining these data for a considerable time. In the
healthcare sector ‘long-term’ is usually taken to mean decades.

This poses specific problems for the consultation of the data.

5.1 Physical problems

An initial series of problems lies at physical level: will you still be able to read the bits of the
medium? Consider the following factors:

• ‘bitrot’: with most media the stored information gradually degrades. With a magnetic
information carrier the magnetism gradually falls to a level at which a ‘0’ can no longer
be distinguished from a ‘1’. For some magnetic media the supplier recommends
rewriting the data every 2 years. Writable CDs also gradually degrade. The obvious
solution to this problem is to copy the data to other media before it becomes unreadable.

Ask the supplier of the media what the guaranteed shelf-life of the medium is and rewrite the
media before this deadline expires.

• The technology of removable media is still developing quite quickly and suppliers do not
support old equipment for ever. Eventually a problem will therefore arise in finding a
computer with a device that can read older media. Another scenario is that there are no
longer any operating systems that support these readers.

Make sure that you have a supported reader available (and will continue to have it
available) for all the media that you still need.
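The rewrite recommendation above can be supported by a simple inventory check. The following sketch flags media whose guaranteed shelf-life is approaching; the media list and the 180-day safety margin are invented for illustration:

```python
# Sketch: flag archive media that must be copied before their
# guaranteed shelf-life expires (shelf-life figures come from the
# supplier; the examples here are illustrative assumptions).
from datetime import date, timedelta

def rewrite_due(written_on, shelf_life_years, margin_days=180, today=None):
    """True if the medium should be copied now, leaving a safety margin."""
    today = today or date.today()
    deadline = written_on + timedelta(days=365 * shelf_life_years)
    return today >= deadline - timedelta(days=margin_days)

media = [
    ("tape-001", date(2000, 1, 10), 2),   # supplier recommends rewriting every 2 years
    ("cdr-042",  date(2002, 6, 1), 10),
]
today = date(2002, 12, 1)
due = [name for name, written, years in media
       if rewrite_due(written, years, today=today)]
```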

5.2 Logical problems

A second series of problems lies at logical level: if you store data in the format of a specific
application, will there still be programs in the future that can read this format without
problems?

A solution to this problem can be sought in a number of directions:

• One way is to select a ‘timeless’ format for the data storage. Think of text files, for
example, without any formatting of any kind. The Portable Document Format (PDF) and
XML also seem good candidates for continued readability over a very long timespan.

• Alternatively you must make sure that you continue to have a computer system that can
run the older version of the software. This will comprise adapted versions of the
operating system, drivers and application software.

• Another way is to ensure that the old formats are converted to newer formats on a
regular basis.

Make sure that you always have the necessary software to be able to read the file formats of
your archives.

6 Security

6.1 Access control

Via access control you designate who may consult and/or change which data is stored in the
computer system.

In access control a distinction must be made between authentication and authorization:

• Via authentication a user will prove his identity to the computer system.

• Authorization indicates what services the user has access to and/or to which data access is
granted.

6.2 Authentication

6.2.1 Basic techniques

Authentication is also used regularly in daily life. For example, a person who opens a door
with a key proves that he is entitled to open this door through possession of a suitable key.

Most authentication techniques are based on one of these basic principles:

• authentication on the basis of something that you know;

• authentication on the basis of something that you have;

• authentication on the basis of something that you are.

Here are a few examples:

• Authentication on the basis of something that you know is the basic technique for
authentication in the case of computer systems: to prove that you have the right to use a
certain user name you must type in the password corresponding to that user name.

• There are tokens available on the market on which a number can be displayed that can be
changed regularly. The series of numbers is displayed at random, but is actually based
on an algorithm in which usually the time and a code that is unique to each token are
used. Each token accordingly produces a unique series of numbers. In access control
based on tokens of this kind, the user proves that he is in possession of the token
corresponding to the user name by typing in the number that is displayed on the screen
at that moment.

• Authentication on the basis of something that you are is used every day when people are
identified on the basis of their features, the tone of their voice, etc. Identification is
performed by ‘measuring’ biological characteristics. That is why these techniques are
also known as ‘biometric’ techniques. In an IT environment, biometric authentication
techniques are also based on the measurement of biological characteristics: facial
identification, retinal scan, fingerprint identification, voice identification, etc. Specific to
biometric authentication techniques is that they usually do not provide a clear answer to
whether or not the user is being identified. Usually we speak in terms of a certain
‘percentage’ of identification. This is no different when dealing with people on an
everyday level (think of the problems of telling twins apart, for example). When using
biometric techniques in an IT environment, allowance must therefore also be made for
‘false positive’ identifications, where someone is incorrectly identified as a valid user and
‘false negative’ identifications, where the system fails to identify a person correctly.
Sometimes these authentication programs offer the facility of indicating the required
level of identification. If a high level of identification is required then the number of false
positive identifications will fall, but the number of false negative identifications will rise.
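The trade-off described in the last example can be made concrete with a small sketch; the match scores below are invented for illustration:

```python
# Sketch of the biometric trade-off: raising the required match score
# lowers false positives but raises false negatives. Scores are invented.

def classify(scores, threshold):
    """Accept every attempt whose match score reaches the threshold."""
    return [score >= threshold for score in scores]

genuine_scores  = [0.92, 0.85, 0.74, 0.66]   # attempts by the legitimate user
impostor_scores = [0.70, 0.55, 0.40, 0.30]   # attempts by other people

def error_rates(threshold):
    false_negatives = classify(genuine_scores, threshold).count(False)
    false_positives = classify(impostor_scores, threshold).count(True)
    return false_positives, false_negatives

# A stricter threshold trades false positives for false negatives:
assert error_rates(0.6) == (1, 0)   # lenient: one impostor accepted
assert error_rates(0.8) == (0, 2)   # strict: two genuine attempts rejected
```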

It is also possible to use a combination of these principles to grant access to a computer
system. An example is the ATM of Banksys, where access is granted through a combination
of something you have (a banker’s card) and something you know (the corresponding PIN
code).

The combination of different authentication techniques can also provide a solution to the
problem of the false identifications associated with biometric techniques.
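The token example above, where a displayed number is derived from the time and a code unique to the token, can be sketched as follows. This is only an illustration of the principle (an HMAC over a 30-second time step), not a standards-compliant one-time-password implementation:

```python
# Sketch of the token principle: token and server share a secret and a
# clock, so the server can recompute and compare the displayed number.
import hashlib
import hmac
import time

def token_code(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    """Display a code that changes every `step` seconds."""
    counter = int(t // step).to_bytes(8, "big")
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    number = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return str(number).zfill(digits)

secret = b"unique-per-token-secret"       # the code unique to each token
now = time.time()
code = token_code(secret, now)
# The server, holding the same secret and clock, recomputes the code:
assert code == token_code(secret, now)
```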

6.2.2 Discipline in authentication

A very important aspect of authentication is the discipline of the users: a correct
authentication stands or falls by the discipline of the users.

Here are some examples of how poor discipline can undermine the authentication:

• Sharing passwords.

• Writing passwords on a post-it note on the screen or under the keyboard.

• Using tokens for authentication that need an extra PIN code, but writing the PIN code on
the token.

These problems can only be resolved by encouraging discipline among the users. A good
starting-point here is to draw up a document containing a ‘code of conduct’ for the use of the
computer system in which a sanctions policy is included in addition to the degree of
discipline that is being aimed for.

Care must of course be taken to ensure that all the users are bound to this document in a
legally enforceable way. For users who have a working relationship with the institution this
can be done by including the code of conduct in the standing employment conditions. Users
who are not bound by the standing employment conditions can be asked to sign for
agreement a copy of this code of conduct.

Make sure that the discipline required for authentication is included in a document
containing the ‘code of conduct for computer use’ and that all users are bound to this
document in a legally enforceable way.

In a number of situations the discipline can also be encouraged by technical measures.

6.2.2.1 ENCOURAGING DISCIPLINE IN AUTHENTICATION WITH PASSWORDS

When authentication with passwords is used the user is generally permitted to choose his or
her own password. This has the advantage that the user can choose a password that is easy
to remember. The disadvantage is that the user may make a poor choice of password.

By a ‘poor choice’ we mean a password that can easily be guessed or found out by someone
else.

The methods for finding out passwords are usually based on one of the following principles:

• The ‘brute force’ principle, in which all the possible passwords are tried out until the
correct one is found.

• The ‘Dictionary’ principle, in which all the words from a list are tried.

In most computer systems, however, the risk of poorly chosen passwords can be controlled
by restricting the choices available to users. The best way of defending against ‘brute
force’ attacks is to stipulate a minimum length for passwords. A defence against
‘dictionary’ attacks can be mounted by refusing to accept passwords that consist of words
or simple combinations of words.

Activate the options to refuse passwords that are too easy to guess. Insist on passwords
that are at least 6 characters long.
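The two defences named above can be sketched as a password acceptance check; the word list here is a stand-in for a real dictionary file:

```python
# Sketch: refuse passwords that are too short (brute force risk) or
# that are dictionary words or simple combinations of them.

WORDLIST = {"password", "hospital", "welcome", "secret"}  # stand-in dictionary

def acceptable(password: str, min_length: int = 6) -> bool:
    if len(password) < min_length:
        return False                        # too short: brute force risk
    if password.lower() in WORDLIST:
        return False                        # plain dictionary word
    # also refuse a dictionary word followed by digits, e.g. 'secret99'
    if password.lower().rstrip("0123456789") in WORDLIST:
        return False
    return True

assert not acceptable("ab1")                # too short
assert not acceptable("hospital")           # dictionary word
assert not acceptable("secret99")           # simple combination
assert acceptable("k7#vLq2p")
```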

Someone who is looking over a person’s shoulder when they are typing in their password
can also try to follow the keystrokes to find out the password. To prevent passwords from
being found out by this method it is advisable to change the passwords regularly, e.g. every
3 months. Less disciplined users will try to get round this by going through the password
change procedure but then entering their old password again as the ‘new’ password, or by
going back to their old password again shortly after switching to a new one. These users can
be encouraged to develop better discipline by making it impossible to re-use passwords
and/or by giving passwords a minimal period of validity.

Make passwords expire every 3 months. Activate the options to make it difficult to re-use
passwords.

6.2.2.2 ENCOURAGING DISCIPLINE BY CHOOSING AUTHENTICATION TECHNIQUES

You can also investigate the extent to which an authentication technique is intrinsically
vulnerable to a lack of discipline. Here are a few examples:

• With authentication via passwords the password can be ‘shared’ very easily. In most
computer systems the legitimate user will not experience any problems with this. With
some computer systems it is impossible for more than one user to work under a
particular user name at the same time. This can act as an incentive not to share
passwords.

• Authentication via one or other external hardware device has the advantage that this
device cannot be copied, so only one person can use it at a particular time. The
authentication cannot therefore be shared.

To ensure a ‘strong’ authentication, it is best to combine at least two of the above principles.
If you do not, then someone who finds a token for password generation can log on
without further ado. If the token is combined with a PIN code then this already becomes a
great deal more difficult.

6.2.2.3 PROVISION OF NEW OR UPDATED AUTHENTICATION TOOLS

When assigning new user names and the authentication information belonging to them, the
system manager must ensure that this data reaches only the legitimate user.

This is, if anything, even more sensitive when the user asks the manager to intervene in the
authentication system because, for example, he has forgotten the password. Usually this
question will be asked by telephone. An authentication of the requestor must then be
performed before the new password is communicated.

This problem can be tackled by entrusting the task of issuing passwords to reliable people
who know the user personally. In a large organization this would of course require the
appointment of several trusted employees.

As an alternative you can update authentication information via a previously agreed
communications channel.

6.2.2.4 RESPONDING TO HACKING ATTEMPTS

In quite a few systems you also have the option, after several attempts to connect using an
incorrect password, of fully blocking access to that user name until a manager unblocks the
access again. This provides a defence against both ‘brute force’ and ‘dictionary’ attacks.

This does however have the drawback that it becomes possible to sabotage the computer
system: by deliberately entering incorrect passwords for a user name, an attacker can deny
the legitimate user access to the system.

Consider whether or not you will activate the option to temporarily block access after a
number of incorrect password entries.
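The blocking option and its sabotage drawback can be sketched as follows; the threshold and user names are illustrative:

```python
# Sketch: block an account after repeated password failures until a
# manager unblocks it. Defends against brute force and dictionary
# attacks, but can also be abused to lock out a legitimate user.

class AccountLock:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}
        self.blocked = set()

    def report_failure(self, user):
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= self.max_failures:
            self.blocked.add(user)

    def report_success(self, user):
        self.failures[user] = 0            # reset the counter on a good login

    def is_blocked(self, user):
        return user in self.blocked

    def unblock(self, user):               # manager intervention required
        self.blocked.discard(user)
        self.failures[user] = 0

lock = AccountLock()
for _ in range(3):                         # three wrong passwords in a row...
    lock.report_failure("dr_jans")
assert lock.is_blocked("dr_jans")          # ...and the account is blocked
lock.unblock("dr_jans")
assert not lock.is_blocked("dr_jans")
```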

6.2.3 What authentication techniques should be used for an HIS?

An authentication based on a combination of techniques from the three categories might
well lead to a perfectly watertight authentication. It is however very questionable
whether this is ergonomically and financially feasible. In practice, a compromise will
therefore have to be found.

The fundamental question here is the extent to which you can have any confidence in the
group of people that can try to get round your authentication system.

When making a decision about this level of confidence the following factors must be taken
into account:

• How valuable is the data stored in the computer system? Obviously a less strict
authentication can suffice for access to documents concerning, for example, internal
procedures than for access to medical or financial data.

• How large is the group of people that may try to get round the authentication system?
For workstations located in offices that are closed during the night, you only have to take
your own staff into account. If workstations are in ‘public’ areas you must take patients
and visitors into account as well. And if you grant access from outside the institution,
e.g. via the Internet, you must take all the Internet users in the world into account.

• To what extent can you rely on the discipline of the users?

For use within the buildings of the institution an authentication based on passwords can
suffice nowadays, provided that sufficient attention is paid to ensuring proper discipline
during use.

For access from outside the institution it is advisable to go over to an authentication that is
based on a combination of at least 2 authentication techniques, e.g. a token for password
generation combined with a PIN code.

6.2.4 Individual names

In order to guarantee an effective authorization the authentication must be performed on an
individual basis.

Opt, in principle, for an individual authentication.

In environments in which different users very frequently have to use the same workstation
it can be very time-consuming to go through the authentication procedure each time. This
can give rise to a very strong temptation not to change the user name at each change of
user, which boils down to sabotage of the authentication. In such cases consideration can
be given to assigning a common user name. The authorization of a common user name can
also be restricted more tightly than that of an individual user name. This should therefore
be chosen in preference to a situation in which the authentication mechanism is sabotaged
by the users.

In exceptional circumstances a common user name with restricted rights can be assigned.

6.3 Authorization

Authorization stipulates which users have access to which data.

This has to be set at different levels: operating system, database, applications, etc.

At operating system level, you have to define who has access for consulting and/or changing
files. In the same way, settings must be made in the database system to define exactly who
has access to the information that is stored in it.

In some applications, authorization can also be set. In some cases these settings are
translated into settings at system level. In other cases the application itself will also use a
number of extra authorization rules, e.g. not releasing certain information to the user. In
some cases this is used in a very extreme sort of way: some packages always set up the
connection to the database system with the same user name and password and realize full
access control in their own code. Should a user succeed in such a situation in interrogating
the database directly the extra limitations that the application implements will of course
disappear. Such access must, of course, be made impossible. If it is not, the security features
in the application make no sense.

Make sure you have a consistent policy that can be maintained in the longer term. To
achieve this, use the available tools, e.g. division into groups, roles, etc. to keep the
administrative work associated with it down to acceptable levels.
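Division into groups and roles, as suggested above, can be sketched like this; the roles and rights are invented examples:

```python
# Sketch of role-based authorization: rights are attached to roles and
# users are placed in roles, so the administration scales with the
# number of roles rather than the number of users.

ROLE_RIGHTS = {
    "nurse":      {"read:medical"},
    "physician":  {"read:medical", "write:medical"},
    "accounting": {"read:financial", "write:financial"},
}

USER_ROLES = {
    "dr_peeters": {"physician"},
    "n_claes":    {"nurse"},
}

def authorized(user: str, right: str) -> bool:
    """A user holds a right if any of his roles grants it."""
    return any(right in ROLE_RIGHTS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert authorized("dr_peeters", "write:medical")
assert not authorized("n_claes", "write:medical")
assert not authorized("n_claes", "read:financial")
```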

Make sure that extra security features introduced by the application cannot be circumvented
by, for example, interrogating the database directly.

The employees with system manager rights require special attention: for the system
managers the normal access controls do not apply. The number of staff that have these
rights must therefore be kept to a minimum.

Check which people within the organization have system manager authorizations.

6.4 Management of users

It goes without saying that the user names of employees who leave the institution must be
blocked to prevent mischief. The authorization of users must also be properly managed: if a
user no longer requires certain authorizations then they must be deleted also.

The person responsible for defining the authorizations must have the correct information
about the current task distribution within the various departments. If the authorizations are
defined centrally then a smooth information flow must be organized.

An alternative to this is to delegate the management of the authorizations as far as the level
of those people who are responsible for deciding the task distribution within the
departments. The person responsible for task distribution can then himself ensure that the
authorizations are adapted to changes in the distribution of tasks. This does of course
require the necessary degree of discipline from the decentralized managers.

Make sure that when a member of staff is dismissed or leaves for some other reason, and in
situations where misuse of the system is suspected, all access to the computer systems can
be rapidly denied to this person.

Try also to ensure as much consistency as possible: make sure that a user has the same user
name on all systems (and, if possible, the same password).

A system that is making more and more of a name for itself in this area is the ‘Lightweight
Directory Access Protocol’ (LDAP). With this system you can manage the users for all
computer systems in the institution in one environment.

6.5 Automatic blocking of a session in the case of inactivity

A workstation that is left unattended does of course make it possible to continue using the
authorization of the previous user.

Many computer systems and applications make it possible to block the workstation after a
brief period of inactivity. This prevents unauthorized access to the applications, but in most
cases it means that the workstation can only be unblocked again by the original user. A
solution can be found for this if necessary by not using the mechanism of the operating
system but by instead replacing it with a mechanism in the application.

Investigate how you can best handle workstations on which there has not been any activity
for a short time.
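A minimal sketch of automatic blocking after inactivity, assuming a session object that records the time of the last user action (the timeout value is illustrative):

```python
# Sketch: a session blocks itself after a period without user activity.
import time

class Session:
    def __init__(self, user, timeout_seconds=300):
        self.user = user
        self.timeout = timeout_seconds
        self.last_activity = time.monotonic()

    def touch(self):
        """Call on every user action to keep the session alive."""
        self.last_activity = time.monotonic()

    def is_blocked(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_activity > self.timeout

session = Session("n_claes", timeout_seconds=300)
assert not session.is_blocked()
# Five minutes later without any activity the workstation blocks:
assert session.is_blocked(now=session.last_activity + 301)
```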

6.6 Auditing

By ‘auditing’ is meant in this context the recording of certain actions performed on the
computer system.

Most operating systems and database systems make it possible to create audit logs. In these
logs both successful actions and actions rejected by the authorization rules can be registered.
Examples include a successful or failed authentication.

Audit logs of this kind are indispensable for tracking unauthorized use. They can also help
to find out exactly what has happened when unauthorized use occurs. For that reason, logs
of this kind are usually kept for a period of at least several months.

The application programs can also contain audit logs of this kind that can register events at
application level.

Make sure that the system software (operating system and database systems) records
sufficient auditing information to be able to detect unauthorized use.

Make sure that the auditing logs are checked regularly.
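Application-level auditing as described above can be sketched with the standard logging facilities; the field names and events are illustrative:

```python
# Sketch: record both successful and rejected actions with user,
# timestamp and outcome, so unauthorized use can be traced later.
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("audit")

def audit_event(user, action, allowed):
    outcome = "GRANTED" if allowed else "DENIED"
    audit.info("user=%s action=%s outcome=%s", user, action, outcome)
    return outcome

audit_event("dr_peeters", "read patient 1042", True)
audit_event("unknown", "login", False)      # a failed authentication attempt
```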

6.7 Encryption and digital signature

Encryption can be used within a computer system as an extra protection for very sensitive
information: besides bypassing the access control you also have to know the encryption key
to be able to read the data.

The terms ‘encryption and digital signature’ usually relate to a situation in which data has to
be electronically transferred. In this case the encryption will ensure that only the authorized
addressee can read the message, while the digital signature will irrefutably identify the
sender.

Various types of algorithms are used for this purpose: symmetric encryption algorithms,
asymmetric encryption algorithms (also known as public-private key algorithms) and
hashing algorithms. For public-private key algorithms, certificates are also required.

In the case of encryption we have to accept that a 100% protection of encrypted data is
impossible: provided you have sufficient computing capacity it is always possible to find the
correct encryption key. It is therefore important to always choose sufficiently long keys to
make this operation unfeasible in practice. Given that the computing power of an individual
computer is constantly increasing, the key lengths also have to be gradually lengthened.
This is therefore an aspect that requires regular monitoring.

6.7.1 Symmetric encryption algorithms

These are encryption algorithms in which the same encryption key is required for both
encryption and decryption.

These algorithms are usually quite fast in execution. The main problem is, of course, that the
encryption key has to be exchanged securely.

When using symmetric encryption algorithms, make sure you have an adequate key length:
128 bits is a minimum.

6.7.2 Public-private key algorithms

As the name suggests, in public-private key algorithms two keys are used: a private one that
is only known to the user of the key and a public one that is made known to everyone.

Algorithms based on these principles are mathematically very complex and therefore require
rather more computing power. The most obvious advantage is, of course, that no exchange
of keys is required. If the sender encrypts the message with the public key of the addressee
then only the private key of the addressee can be used to decrypt the message.

When using public-private key algorithms, make sure you have an adequate key length: 512
bits is a minimum.
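The public-private key principle can be illustrated with deliberately tiny numbers. This toy sketch is for understanding only; real keys are at least 512 bits long, as the recommendation states, and are generated by proper tooling:

```python
# Toy illustration of the public-private key principle with tiny,
# hand-picked numbers. Never choose real keys this way.

p, q = 61, 53                  # two (very small) primes, kept secret
n = p * q                      # 3233, part of the public key
e = 17                         # public exponent
d = 2753                       # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def encrypt(m):                # anyone may use the public key (n, e)
    return pow(m, e, n)

def decrypt(c):                # only the holder of d can decrypt
    return pow(c, d, n)

message = 65
ciphertext = encrypt(message)
assert ciphertext != message
assert decrypt(ciphertext) == message
```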

6.7.3 Certificates

When using public-private key algorithms, we use the public key of the correspondent. But
how can we be certain that the key that we are using is actually the key of the person or
institution that we intend?

The answer to this question is provided by the ‘certificates’. A certificate contains the public
key and the identity of the owner, both signed by a ‘certification service provider’.

To apply for a certificate, you have to offer the certificate service provider a public key
together with the identification data of the person or institution to whom this key belongs.
The certificate service provider will then check the identity and confirm via the certificate
that this check has been performed.

Certification service providers usually have different types of certificates with varying levels
of identity check.

What level of identity check is performed is described by the certification service provider in
his ‘Certification practice statement’.

The lowest levels of control actually amount to no control at all and are therefore only
suitable for test purposes.

Decide on the level of identity control that is necessary for your application and make sure
that only certificates to which an adequate control is linked are accepted.

6.7.4 Hashing techniques

A hashing algorithm calculates a checksum of a document. The algorithms that are used for
this purpose are designed so that the chance of two different documents resulting in the
same checksum is almost zero.

Well-known algorithms for this purpose are MD5 and SHA1.
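The checksum idea, with the two algorithms named above, using Python's standard library; the document contents are invented:

```python
# Sketch: any change to a document yields a completely different
# checksum, so the checksum can reveal tampering.
import hashlib

document = b"Discharge letter for patient 1042"
tampered = b"Discharge letter for patient 1043"

md5  = hashlib.md5(document).hexdigest()
sha1 = hashlib.sha1(document).hexdigest()

assert len(md5) == 32 and len(sha1) == 40      # 128-bit and 160-bit digests
assert hashlib.sha1(tampered).hexdigest() != sha1
```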

6.8 Viruses, worms and trojans

Viruses, worms and trojans are all pieces of software that are spread with the purpose of
getting a computer system to do things over which the normal user has no control. These
pieces of software also try to spread themselves over as many computers as possible.

Given that computer systems used as office equipment constitute far and away the majority
of such systems in use, they are also the favourite target for viruses.

You should assume that it is impossible to keep an institution entirely free of viruses, etc. To
keep the spread in check, the viruses that have succeeded in penetrating must be removed as
quickly as possible. To do this the computer systems must be regularly scanned.

It can, however, be useful to perform an extra scan on the routes via which viruses can
penetrate the institution, e.g. via e-mail.

Design a virus control scheme in which at least all the disks on which data managed by an
office automation system is stored are regularly scanned. Check whether other measures
are called for.

6.9 Physical access to computer systems

Physical access to a computer system usually offers extra opportunities to circumvent the
authentication system, e.g. by restarting the system, perhaps with a different version of the
operating system.

Restrict physical access to the computer systems to authorized persons.

6.10 Theft of laptops

Laptops form a special kind of security risk: when a laptop is stolen, not only is the computer
lost but the thief also has access to the data stored on the hard drive. Authentication is no
help in this case, as the thief can use all the possibilities of system management to circumvent
the authentication. There is only one adequate solution: encrypt the sensitive data, if
necessary by encrypting the entire drive.

If sensitive data are stored on a laptop, then make sure that these data are encrypted.

6.11 Access for hardware support

More and more suppliers are asking if they can use a modem or the Internet to access from
their offices the hardware set up in the hospital. As far as support for this hardware is
concerned, this is a good thing for both parties: the supplier can offer support more readily
and for the hospital the problems are rectified more quickly.

If a device is also connected to the hospital network, the staff of the company could also try
to penetrate further into other hospital computer systems. This means that you are also
contracting out the security of the hospital network to this company.

Make sure that external companies cannot access devices connected to the hospital network
without seeking the hospital’s permission.

Any regular reports must be sent from the hospital, not retrieved by the company.

6.12 Internet access

Protecting all the computer systems of an institution so that they can be connected to the
Internet without risks is impossible in practice. In addition, the security measures would
place a very heavy burden on the everyday use of the computers.

For that reason, connection to the Internet must be made via a firewall, which makes access
from the Internet to the internal computer systems impossible.

Connections for which the initiative is taken by the hospital are not usually a problem from a
security standpoint: the initiative is taken by a member of the hospital staff.

If you want to offer services via the Internet then consider setting up the computer systems
that offer these services in a ‘demilitarized zone’ so that, should someone succeed in taking
over control of these systems, access to the other computer systems of the institution remains
impossible.

For services for which no authentication is requested this is a necessity.

Make sure that, for the other departments, you set up an easily manageable and monitorable
environment, among other things by only setting up one channel that is properly monitored.
It is also recommended in such cases to ensure that only the applications that are really
necessary can be accessed.

If the hospital network is connected to the Internet this must be done via a firewall.

Computers that provide services via the Internet for which no authentication is requested
must be set up in a ‘demilitarized zone’.

For other services and departments, provide one communications channel that can be
optimally monitored and controlled.

Departments in which sensitive information is exchanged must use encrypted connections.

Make sure also that the infrastructure protects the user against ‘man in the middle’ attacks.
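As a minimal illustration of the default-deny posture described above, with the DMZ isolated from the internal network, the following sketch evaluates connection requests against a rule list. The zone names and rules are illustrative assumptions, not a recommended rule set:

```python
# Illustrative sketch of a default-deny firewall policy with a DMZ.
# Zone names and rules are assumptions for illustration only.
RULES = [
    # (source zone, destination zone, service, action)
    ("internet", "dmz",      "https", "allow"),  # public services live in the DMZ
    ("dmz",      "internal", "any",   "deny"),   # a compromised DMZ host gets no further
    ("internal", "internet", "https", "allow"),  # hospital-initiated traffic only
]

def decide(src: str, dst: str, service: str) -> str:
    """Return the action for a connection request; the default is deny."""
    for rule_src, rule_dst, rule_svc, action in RULES:
        if rule_src == src and rule_dst == dst and rule_svc in ("any", service):
            return action
    return "deny"  # anything not explicitly allowed is blocked

print(decide("internet", "dmz", "https"))       # allow
print(decide("internet", "internal", "https"))  # deny
print(decide("dmz", "internal", "ldap"))        # deny
```

The essential property is the last line of `decide`: traffic that matches no rule is dropped, so forgetting a rule fails closed rather than open.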

There is, of course, no point in developing a proper firewall infrastructure if other, less well-
monitored forms of access to the hospital computer systems remain possible. Consider in this
connection PCs with a modem and PC Anywhere and similar products. Make sure you have
a policy that prohibits setups of this kind, linked to a sanctions policy.

Make sure also that no extra connections can be established round the firewall.

62
6.13 Virtual private networks

A virtual private network (VPN) is a private network connection that is realized via the
Internet. A tunnel is established between a computer system and the network of an
institution, so that it is as if the computer forms part of the internal network of the institution
and can make use of all the services of the internal network.

To maintain the security of the internal network, care must therefore be exercised to ensure
that the computers connected via VPN meet the same security criteria as the internal
computer systems: virus scanning, no extra connections around the firewall, etc.

Virtual private networks may only be used when the management of the remotely located
computer systems meets the same quality requirements as that of the internal systems.

6.14 Access control at application level

The techniques and recommendations for authentication, authorization and audit at general
system level are also valid for the applications. Below we describe only the extra
recommendations that can be considered for the applications.

6.14.1 Authentication

The authentication should preferably be based on that of the underlying management system
or be synchronized with it.

If the authentication for medical applications is the same as, or synchronized with, that of the
other applications (e-mail, personal files, etc.), passing on this authentication information
(what people know, have, etc. – see p. 50) involves more risk, so users will be less inclined to
do so.

One of the techniques commonly used for this is LDAP (the Lightweight Directory Access Protocol).
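The underlying idea can be sketched as follows: every application delegates password checks to one central directory rather than storing its own credentials. In practice an LDAP directory plays this role; the in-memory directory and the PBKDF2 hashing below are illustrative assumptions:

```python
# Sketch: applications delegate authentication to one central directory
# instead of each maintaining their own passwords. The in-memory store
# is purely illustrative; in practice this is an LDAP directory.
import hashlib, hmac, os

class CentralDirectory:
    def __init__(self):
        self._entries = {}  # user -> (salt, password hash)

    def set_password(self, user, password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._entries[user] = (salt, digest)

    def authenticate(self, user, password):
        if user not in self._entries:
            return False
        salt, digest = self._entries[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

directory = CentralDirectory()
directory.set_password("dr_jones", "s3cret")

# Every application calls the same directory; none stores its own copy
# of the password, so there is only one credential to protect.
print(directory.authenticate("dr_jones", "s3cret"))  # True
print(directory.authenticate("dr_jones", "wrong"))   # False
```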

6.14.2 Authorization

Decide in advance which access control model you want to use throughout the entire
hospital. This choice must be made independently of the solutions offered. This choice will
then determine the possible applications and integration (and not vice versa).

63
When deciding on the access control model, there are various access axes that must be taken
into account:

• Departments (specialisms): one patient file per department or one central file.

• Functions (doctors, nursing staff, administrators, etc.): one file per function or one
central file for all functions.

• Patients: do you have access to all the patients in the system or do you only have access
to certain patients on the basis of a recent contact, etc?

The access control model chosen is a combination of choices along the above axes. The more
you opt for a scenario in which different axes come together to form one central file, the
more demands are placed on the authorization possibilities of the application. If, on the other
hand, you opt for one file per department/function with, within this, access to all patients,
then the requirements for authorization will be minimal. Hybrid forms are also possible (one
central file for all specialisms but split per function, etc.).

This model will influence the integration and limitations in the applications that are offered
(see p. 12).

In a central or linked system to which several departments have access, access control
based solely on function is not a good authorization policy.

People usually think that they can make do with access rules based on groups of users:
doctors can see everything, nurses a bit less, and administrators only administrative data.
However, systems that only allow access rules to be defined on the basis of function cannot
differentiate sufficiently in the rules for access to the different patients. Users can then see,
for all patients, the data that their function allows, and that is an unacceptable situation if the
application is used across different departments.

If you opt for a very strict access control model, a user must be able to break through it after
giving a reason, subject to a subsequent audit of the actions.

Note that the information on which access control is based can only be historical. In a great
many cases, however, people want access to the file because of something that has not yet
been entered (and they are authorized to request access from that moment onwards). Typical
examples: the patient comes in for a consultation, but without an appointment; the
anaesthetist seeks access to the patient's file to prepare an operation, but the admission
scheduling has not yet been entered.

64
The stricter the access control, the greater the need to be able to break through it. And the
more breakthroughs there are, the more intensely you must investigate them. In that
case you had best ensure that the reasons given for these breakthroughs are
structured and coded as far as possible.

If the number of breakthroughs is very high, there is a risk that the attention paid to the few
illegitimate breakthroughs will wane because of the large number of legitimate ones. For
that reason you must be able, when many breakthroughs occur, to work out rapidly whether
or not a breakthrough is legitimate. You can do this by making breakthroughs structured
and even coded. For the patient who comes in for a consultation, you let the user record a
code for ‘patient is coming in for a consultation’ instead of entering the reason for the
breakthrough in free text. You can then automatically identify this coded breakthrough as
legitimate on the basis of information that becomes available in the meantime (e.g. an
appointment that has been made, a subsequent patient movement, etc.).
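A minimal sketch of such coded breakthroughs and their automatic post-facto validation; the reason codes, timestamps and validation rule are illustrative assumptions:

```python
# Sketch of a coded 'breakthrough' (break-glass) record that is later
# validated automatically against information entered afterwards.
# Reason codes and the validation rule are illustrative assumptions.
from datetime import datetime, timedelta

REASON_CODES = {"CONSULT": "patient is coming in for a consultation",
                "PREOP":   "anaesthetist preparing an operation"}

breakthroughs = []

def record_breakthrough(user, patient, code):
    assert code in REASON_CODES, "only coded reasons are accepted"
    breakthroughs.append({"user": user, "patient": patient,
                          "code": code, "at": datetime(2002, 12, 2, 9, 0)})

def validate(entry, appointments):
    """A CONSULT breakthrough is legitimate if an appointment for the
    same patient appears within a day of the access."""
    if entry["code"] == "CONSULT":
        return any(p == entry["patient"]
                   and abs(t - entry["at"]) <= timedelta(days=1)
                   for p, t in appointments)
    return None  # other codes still need manual review

record_breakthrough("dr_x", "pat42", "CONSULT")
appointments = [("pat42", datetime(2002, 12, 2, 10, 30))]
print(validate(breakthroughs[0], appointments))  # True
```

Because the reason is a code rather than free text, the bulk of legitimate breakthroughs can be cleared automatically, leaving human attention for the remainder.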

The stricter the authorization rules, the more support must be given to structured data
within the system to be able to base these rules on it.

Potentially structured data on which you can base the authorization rules include:

• Patient movements: through the accurate monitoring of the physical presence of the
patients and the corresponding timings. The physical presence within the hospital can be
heavily subdivided: from minimal presence and absence in the hospital to a refined
monitoring of all possible locations (medical departments, function measurements,
operating rooms, etc.).

• Contacts between patients and care providers: the recording of the supervisory
physician, nurses, social workers, etc. with a specific patient.

• Information in the future: appointments, scheduled operations, etc.

• User movements: who is present when and where.

This data can be manually entered (and will then be relatively static: doctor X has worked
for 2 years on emergency cases, nurse Y for 4 years in paediatrics and so on). There are also
ways of recording this data dynamically and automatically (badge systems with which users
have to make known their presence at a particular location, infrared or radio-controlled
detection systems, etc.).

For medical applications it can be necessary to define a more refined or different hierarchy of
groups and roles than those required for administrative applications. Check whether you
can specify this separately for each application.

65
The groups and roles that are necessary for the various applications can be in conflict.
Someone can have a managerial function within the purchasing department but may only be
assigned administrative access to the medical file.

6.14.3 Audit

6.14.3.1 POLICY

Decide and communicate in advance what the sanctions are when infringements are
detected.

If you do not decide and communicate in advance what the possible sanctions are when the
access rules are infringed, these will be decided upon subjectively at the moment the incident
occurs.

Check the level (per patient, per user and globally) at which the audit can be defined and for
which actions (logging on and off only, every action, including the related content). The
more refined the level and the actions, the greater the investment needed to store the audit
logs, or the shorter the period for which you can keep them.

If you do not have any restrictions on storage of the log then you must save it as long as
possible in as refined a manner as possible, given that it must be possible to report and
document most infringements post facto.

It is best if the system behaves in the same way whether or not an action is logged, so that the
user does not notice any difference. If users notice that a certain patient or action is being
logged and another is not, they will be inclined to treat the non-logged actions less carefully.
If everything functions in the same way but users know that anything may have been logged,
they will be inclined to treat all actions, patients, etc. with the same degree of confidentiality.
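A minimal sketch of this principle: the code path is identical whether or not the log is active for a patient, so the caller cannot tell the difference (the patient flags are illustrative assumptions):

```python
# Sketch: the system behaves identically whether or not an access is
# logged; the caller cannot tell the difference. Patient flags are
# illustrative assumptions.
audit_log = []
logged_patients = {"pat_vip", "pat_staff_member"}  # log activated per patient

def open_record(user, patient):
    if patient in logged_patients:
        audit_log.append((user, patient))  # side effect only, invisible to caller
    return f"record of {patient}"          # identical result either way

r1 = open_record("u1", "pat_vip")      # logged
r2 = open_record("u1", "pat_regular")  # not logged, same observable behaviour
print(len(audit_log))  # 1
```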

6.14.3.2 AUDIT PER PATIENT

Setting up a log per patient can be useful for people for whom the risk of infringement is
greater than normal. This is the case with VIPs, but also with your own staff and their family
members.

Make due allowance for the fact that the biggest security risks are your own staff, both as
perpetrators and as victims of infringements.

66
If you have the opportunity to activate the log per patient you must make sure you do this
for all employees because it is here that the risk of infringement is greatest.

Do not register VIPs as anonymous patients (under a pseudonym). Register them under their
own name and activate the log for this patient.

If VIPs are registered under a pseudonym, this is usually known only to a limited number of
people in the hospital. With these patients there is therefore an extra risk that the
information required will not be available at the moment the patient has to be treated
urgently.

6.14.3.3 AUDIT PER USER

An audit per user is a good idea for users with whom there is no contractual relationship
under which the sanctions can be made as severe as for your own staff. An audit can also be
applied to users for whom you cannot define the authorizations in advance but can only
check post facto whether the accesses sought were legitimate. Instead of giving these users
no authorizations at all, you can decide to give them full authorization and then confirm the
registered actions (or have them confirmed automatically) post facto. Examples are accesses
to trial patients, etc.

6.14.3.4 AUDIT PER ACTION

It is not enough to be able to log who performed which action, when (and perhaps also
where). For documenting an infringement, or for medico-legal reasons, it can even be
necessary to document what information (the content) someone has requested, changed, etc.

6.14.3.5 CHECKING THE LOGGED INFORMATION

Make the group of people who can examine the logs as large as possible. Do not make the
examination of logs the responsibility of a limited group of people, but of everyone.

If the logs have to be checked by a limited group of people then these people can find it
difficult to check the validity of the logged actions. This small group of people is going to
have to select where the problems lie from a large number of logged actions. This can only
be done if automated and standard controls are used.

If you allow a large group of persons to view the logs they are going to confine themselves to
examining those logged actions about which they have more information and which they

67
know are (il)legal. In addition the number of logs to be examined is relatively small, which
makes it a minor additional chore.

A doctor will easily be able to work out whether the logged actions for the patients for which
he is the attending physician are legal. He knows from whom he has requested a
consultation, who is working at his department, to which technical exams he has made
referrals, etc. For a small central group, a lot of this information cannot be made available in
electronic or structured form, which makes it impossible to check these logged actions for all
patients. There will then be too many logged actions remaining regarding which no
judgement can be made, which means that the suspect cases will catch the eye less readily.

68
7 Key issues in project management

It is not our intention in this chapter to recommend a specific project methodology but rather
to examine a number of key issues that are specific to the setting up and execution of IT
projects within a hospital.

When purchasing a product you must strive to adjust your own organization maximally to
the functioning of the product and to adapt the product minimally to your own
organization.

In the purchasing negotiations, the users typically visualize the product working in
accordance with their current procedures, and the salesmen win them over by talking about
possible adaptations of the product to bring them into line with the local working
procedures. Although this can provide the greatest satisfaction in the initial implementation,
this decision has severe consequences for the further life of the product within the hospital.
The local adjustments are the most expensive and most frequently recurring elements in any
future implementation of new versions. Because of this, these adaptations are also the usual
reason why new versions can no longer be implemented – or only after a long delay. You
must ask yourself whether it is better to pay the once-only price of adapting your procedures
rather than choose a product that is closer to your own way of working (but which perhaps
has less functionality, larger initial purchase price, etc.).

The decision-making body and the project team must be advised by a group of user
representatives whose individual members have the necessary hierarchical authority and
will themselves have to live with the decisions that they recommend.

You must avoid a situation where the product is purchased or a solution developed on the
basis of purely managerial or economic considerations without a user group being able to
translate this into terms relevant to the users who will have to work with it every day. The
members of this user group must, however, be sufficiently high up in the hospital hierarchy
to appreciate the need for such deliberations. On the other hand these people must also be
confronted in everyday use with the decisions that they positively recommend. This ensures
a balance between strict user needs on the one hand and economic and managerial needs on
the other.

The implementation of a self-developed or purchased product requires a team of hospital
employees who are sufficiently familiar with its functioning to pass on the information
necessary for local supplementation, maintenance, etc., and who can test the product. These
people must be released from their other duties for the necessary time.

69
The costs of purchasing or self-developing a product are not confined to paying the supplier
or the developers but also involve a hidden cost of time invested by employees who have
tasks at the hospital other than IT tasks.

Try to obtain an estimate of the training required. This means not only the cost of providing
this training but also the time spent by the future users on following this training when they
could be doing something else.

Often the training required is underestimated. Either the future users are given too little
time for this or they themselves pay too little attention to attending the training courses.
Being thoroughly familiar with the basic functioning of the product and its correct use at the
outset can, however, save a great deal of time and prevent a great deal of damage later on.

Before taking the new product into use, work out the emergency procedures that have to be
followed in the event of failure. Do not base these emergency procedures on the old
procedures that are being replaced by the new product; make them visibly different and much
simpler. When working out the emergency procedures, do not assume that the computer
infrastructure will be available.

No single technical solution can guarantee 100% uptime. Even 99.5% uptime means almost 2
days of downtime per year. In the face of this reality you must work out procedures in advance to
bridge this downtime. After the successful implementation of the new solution, the
knowledge among the users of the old way of working will rapidly evaporate. For that
reason you should make the emergency procedures as simple as possible (in such cases, for
example, make distinctions between urgent cases and things that can wait until the system is
available again).
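The downtime figures behind this rule of thumb are simple arithmetic:

```python
# Downtime implied by a given uptime percentage over a 365-day year.
for uptime in (99.0, 99.5, 99.9):
    downtime_days = (100 - uptime) / 100 * 365
    print(f"{uptime}% uptime -> {downtime_days:.1f} days of downtime per year")
```

At 99.5% uptime this gives roughly 1.8 days per year, hence the "almost 2 days" above, and even "three nines" still leaves several hours of outage to bridge with emergency procedures.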

Before you automate you typically have various forms and procedures to support all
possible eventualities. If these forms and procedures are recorded in a computer program
then – quite soon after its implementation – the users will lose familiarity with the multitude
of paper forms and procedures. It is therefore best if the emergency procedures use heavily
simplified forms and procedures.

Decide whether the emergency procedures have to be tested regularly. Testing the emergency
procedures only makes sense if it is done when normal working is also effectively
interrupted.

To minimize the negative effect of these tests at a time when, strictly speaking, they are not
necessary, you can announce in advance that these tests are going to take place.

70
It is only by testing the emergency procedures regularly that you can be sure that, during a
genuine unavailability of the computers, everyone knows exactly what to do to bridge the
interruption. In addition, the emergency procedures are then continuously tested for
usability and you can find out what adaptations need to be made. During these tests you
must also effectively interrupt the systems for which the emergency procedures are
designed.

Work out a rollout plan in which each phase is tackled in its entirety. Avoid prolonged
duplicated (old plus new) working in any phase.

When taking new procedures into use you must ensure that you stop the old procedures as
quickly as possible. In many cases there is a tendency not to trust the new system
completely, and crucial parts will continue to be performed in the old way until the new
system has proved its reliability.

Possible reasons for this are that the entire functionality is not yet available, there are still a
few bugs in the new system, not all the derived reports are possible, etc.

If you persist with this duplicated mode of working for too long then the new system will
never be able to prove that it works properly or the defects will not come to light to a
sufficient extent because you can always fall back on the old way of working. The extra
working pressure caused by duplicated working and the greater familiarity with the old way
of working will also have the effect that you will start using the new procedures less.

You should therefore check, after a previously agreed time, whether all the procedures have
been effectively stopped and you are only using the new ones.

71
8 PACS – ‘Picture Archiving and Communication Systems’

A PACS (‘Picture Archiving and Communication System’) is a subsystem for the
management of ‘medical images’, including their presentation on a workstation. This
subsystem specializes in images that make high technological demands through their large
data volumes and the need for domain-specific presentation.

In this text we are concerned with the management of the images generated by the Radiology
department7. PACS can then be regarded as a computer system that takes over the functions
that until now were realized via film.

With digital images on the other hand fundamentally new options are also possible that were
inconceivable with film, especially the processing and analysis of the images with
manipulation of the content of these images. For the purposes of the discussion in this text,
such processing is not regarded as a part of PACS: PACS is seen purely as the infrastructure
that serves to facilitate such functionalities.

8.1 Individual aspects of a PACS

8.1.1 Archiving of images

8.1.1.1 DIMENSIONING OF THE ARCHIVE

An initial factor that determines the size of the archive is the ‘rough volume’ of pictorial
material that you want to store (the number of pixels * the number of bytes per pixel). This is
easy to estimate. The number of radiological exams or the number of beds in the hospital
already provides some indication of the annual production8. Make due allowance for the fact
that new imaging devices are increasing that volume (see below).

7 Many of the key issues can be transferred to a PACS for other medical images, but
technological details or organisational emphases can differ

8 For a typical mix of different types of examinations and with recent PACS technology,
20 Megabytes per radiological examination can be used as a first rough estimate.

72
These images are usually stored in a compressed form. A compression factor of 2 (to 2.5) is
feasible without any loss of quality (reversible compression). When discussing data
volumes, always make it clear whether you are talking about volumes before or after such
compression.

Significantly larger compression factors can be achieved if you are prepared to accept that
the image once compressed is slightly different to the original image (irreversible
compression). The idea behind it is that with moderate compression factors (e.g. around a
factor of 10) the difference in practice is not noticeable, is smaller than other uncontrollable
factors, and is of no importance for making medical decisions.
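The figures above combine into a back-of-the-envelope sizing, using the rough 20 MB per examination estimate from footnote 8; the annual exam count is an illustrative assumption:

```python
# Back-of-the-envelope archive sizing. The exam count is an illustrative
# assumption; 20 MB/exam is the rough estimate from footnote 8.
exams_per_year = 100_000       # assumed annual production
mb_per_exam = 20               # rough volume before compression
reversible_factor = 2          # lossless compression, factor ~2
irreversible_factor = 10       # lossy compression, factor ~10

raw_tb = exams_per_year * mb_per_exam / 1_000_000
print(f"raw volume:              {raw_tb:.1f} TB/year")
print(f"reversible compression:  {raw_tb / reversible_factor:.1f} TB/year")
print(f"irreversible compression: {raw_tb / irreversible_factor:.1f} TB/year")
```

With these assumptions the archive must absorb about 2 TB of raw image data per year, 1 TB after lossless compression, and around 0.2 TB if a factor-10 irreversible compression is accepted, which is why the compression decision dominates the cost of the archive.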

This form of compression can change the technological options and can, for example, mean
that for the same investment you can achieve more rapid access to older images, which
medically speaking is a significant advantage (see next section)9. There is, however, still no
generally accepted consensus on what the ‘right’ compression factors are. For that reason
many hospitals fear that, if they can only submit ‘compressed images’ in the event of a
complaint alleging medical error, the burden of proof will be reversed to their disadvantage.

Decide in consultation with the hospital management whether and in what way use will be
made of irreversible image compression. In any case always check the effect of this on the
cost and functionality of the archive and distribution system, since the more rapid access
that might possibly result is a medical advantage. If you opt to use irreversible compression
then retain also a version of the images without the use of irreversible compression in a
slower part of the archive.

The developments in radiological imaging are posing new challenges for PACS. An initial
factor is that the newer image-forming devices, and certainly CT and MR scanners, generate
ever more images and/or higher-resolution images10. A second, more recent factor is
the anticipated advent of recording techniques in which a dataset is generated through
which the user can navigate interactively. This blurs the concept of an image: from this
dataset an ‘infinite’ number of images can be generated. A question that can be asked in this

9 There is no technological advantage in performing this compression before the radiologist
has studied the images.

10 It is very likely that in the coming years the growth in the number of images produced
will exceed the growth in capacity of storage and communication systems.

73
connection is whether all the images have to be retained11. It is not yet clear how important
this question will become as a result of the various technological developments.

If you are using multislice tomographic imaging, make a decision in consultation with the
hospital management concerning which parts of this data have to be saved.

8.1.1.2 HIERARCHICAL ORGANIZATION OF THE ARCHIVE

Due to the sheer volume of data that has to be stored, it is usual to divide an archive for
PACS into different levels. A ‘working set’ of imaging exams is kept available on magnetic
disk. Exams that have not been requested for some time are only available on slower media
in a robot, or perhaps only off-line.

So as not to be confronted within this organization with long waiting times when the images
have to come from a robot (minutes instead of seconds), ‘prefetching’ is used: images with a
good chance of being required are automatically made ready on magnetic disk. This
requires information about patient movements (from other information systems) and rules
that determine which exams have to be made ready. Such rules are easy enough to draw up
for the operations performed internally within the Radiology department (‘when the
patient comes in for this type of exam, make the previous two exams of the same organ ready’),
but are much more difficult to establish for the use of images throughout the rest of the
hospital.
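The rule quoted above ("make the previous two exams of the same organ ready") can be sketched as follows; the archive records and field names are illustrative assumptions:

```python
# Sketch of a prefetching rule: when a patient is scheduled for an exam,
# stage the two most recent prior exams of the same organ on fast disk.
# The archive records are illustrative assumptions.
def exams_to_prefetch(archive, patient_id, organ, n=2):
    prior = [e for e in archive
             if e["patient"] == patient_id and e["organ"] == organ]
    prior.sort(key=lambda e: e["date"], reverse=True)  # newest first
    return [e["id"] for e in prior[:n]]

archive = [
    {"id": "ex1", "patient": "p1", "organ": "thorax", "date": "2001-03-01"},
    {"id": "ex2", "patient": "p1", "organ": "thorax", "date": "2002-06-10"},
    {"id": "ex3", "patient": "p1", "organ": "skull",  "date": "2002-01-05"},
    {"id": "ex4", "patient": "p1", "organ": "thorax", "date": "2000-09-12"},
]
print(exams_to_prefetch(archive, "p1", "thorax"))  # ['ex2', 'ex1']
```

Note that even this simple rule presupposes structured input from other systems: patient identity and the scheduled exam type must be known before the patient arrives.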

With the falling cost/volume relationship of magnetic disks, however, we can nowadays
consider the radical solution of providing all the storage on magnetic disk, certainly if you
have opted for the use of irreversible compression. This avoids all the problems and
drawbacks associated with prefetching. Any additional cost must be weighed against the
increased simplicity of management.

In the interests of the global organization of the storage system, make the following two
decisions: (1) whether a hierarchy will be constructed containing slow components (which
demands prefetching), and (2) whether older images will only be retained off-line (which
demands manual actions if you want to view these images).

11 When film was used, it was also customary with thin-slice tomography to select only a
subset of the sections for distribution. The same applies to echography, which is an
interactive examination: the radiologist decides which snapshots are preserved as
illustrations.

74
8.1.1.3 SYMMETRY IN ACCESS TO THE ARCHIVE

Until a short while ago, large data volumes made it a technological challenge to retrieve
imaging exams quickly (in a matter of seconds). A method that has already been used to
reduce technological requirements is to send the images in advance to the workstation(s)
where they will presumably be used, or to a local subarchive. If this ‘prefetching’ is required
before the images can be viewed at the workstations, it leads to significant limitations in the
organization.

With present-day technology for networks and archives, it is more common to use an archive
from which all the workstations can retrieve all the images symmetrically12, which is much
more convenient in use.

As far as possible, avoid an architecture in which you need to know in advance from which
workstations you will be retrieving images.

If the speed of the connections is a problem, e.g. in operations at two different locations with
low bandwidth between them, it can actually be an advantage if the PACS has mechanisms
for explicitly fetching in advance a copy of the images ready at the other location. The extra
cost caused by the complexity of the system (including the maintenance and the reduced
flexibility) must however be weighed against the cost of a larger bandwidth.

When using ‘intelligent solutions’ to deal with technological limitations in which you have
to make predictions, consider whether a ‘brute force’ use of technology will not ultimately
be cheaper.

8.1.1.4 BACKUP AND CONTINUITY OF OPERATION

Database technology is only a part of the story

The images are in general not stored in a database – only the references to these images and
meta-information about these images (the actual ‘image management system’).

12 Access may be even faster when the system finds a copy in a local cache, but such a system
works transparently.

75
Backup & restore

Of course, a frequent backup of the actual image management system is crucial. This
database is not of exceptional size and the usual backup strategies are suitable.

On the other hand these backup strategies cannot be used for the actual image material, due
to its volume. However, the images do have the characteristic that they can no longer be
changed.

Security against loss of an individual image, typically by a media fault, can accordingly be
achieved by writing out a copy once only to an independent medium at another location
(and ensuring the integrity of this copy, according to the recognized techniques).
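Because the images never change, the integrity of such a write-once copy can be assured with a stored checksum; a minimal sketch (SHA-256 is an illustrative choice of technique):

```python
# Sketch: a single write-once copy plus a stored checksum protects an
# immutable image against media faults; the checksum lets you verify the
# copy's integrity at any later time. SHA-256 is an illustrative choice.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

image = b"\x00\x7f" * 1024            # stand-in for immutable pixel data
stored = checksum(image)              # recorded when the copy is written

copy = bytes(image)                   # the copy at the other location
print(checksum(copy) == stored)       # True: the copy is intact
print(checksum(copy[:-1]) == stored)  # False: truncated or damaged copy
```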

Security against destruction of the entire archive or an important part of it forms a special
challenge. The questions are (1) how quickly you can start working again (possibly with
lower performance) by using copies of the images, and (2) how quickly you can rebuild the
original archive. If all the images, including a copy, are available on-line on fast media
(which is unusual) then that will of course be a reasonably fast process. If the copy is
maintained off-line then the process could take a very long time.

As regards recovery, special support in the PACS software is of course recommended. After
all, the images have to remain indexed by the management system, the control of the data
storage can be specific to the robot(s) and, since you can hardly be expected to wait until a
new backup copy has been made and the archive is fully operational again before restarting
operations, the software must make allowance for the fact that the archive is only partially
available.

With the supplier of the PACS, work out a procedure for the continuation of operations after
a large part of the archive has been destroyed. Estimate how long this procedure will take
(time until functioning with acceptable limitations is possible versus time until the original
situation has been restored). If the backup copies of the images are maintained off-line then
at least check whether the software of the PACS permits these copies to be used immediately
after they have been loaded into a new robot (possibly of a different type to the previous
one).

Changing or expanding the storage system

Given the specific organization of the archive, the way in which it can be expanded can
depend on the software of the PACS supplier. Given the volume of data, the conversion of an
existing archive to a new one can take a long time, especially if a near-line component is
involved.

76
Discuss in detail with the supplier of the PACS what facilities are provided for expanding
the archive or replacing its components (when data has to be transferred).

8.1.1.5 CONSOLIDATION OF STORAGE NEEDS

Storage and its management are important cost factors in hospitals.

Check whether you can use a joint storage system for PACS and other applications in which
a large amount of data is stored.

This makes it essential that the PACS is able to use such a ‘third-party’ solution. With earlier
PACS systems this could certainly not be taken for granted, given that they tended to
provide their own software for controlling the different hierarchies in the storage system. To
an increasing extent, however, these systems use standard solutions, and this can certainly
be discussed with the supplier.

8.1.2 Distribution of radiological images throughout the hospital network

The introduction of PACS can be an argument in favour of making the network faster. With
the existing technology, the network within a building is no longer a stumbling block to the
introduction of PACS and most probably not the biggest cost factor either. In most cases no
separate network infrastructure is used for the PACS.

The bandwidth to be provided for images to users throughout the entire institution is
influenced by the choice of ‘clinical viewer’ (section 8.1.5), since this component is already
optimized to achieve a certain economy of bandwidth. PACS suppliers and existing users
have sufficient experience to make suggestions about the required bandwidth.
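
By way of illustration, the kind of back-of-the-envelope estimate involved can be sketched as follows (the exam size, image matrix and link speed are assumptions, not recommendations):

```python
# Rough transfer-time estimate for one CT exam over a shared hospital link.
# All figures below are illustrative assumptions.
images = 200                      # slices in the exam
bytes_per_image = 512 * 512 * 2   # 512 x 512 matrix, 16 bits per pixel
link_mbit = 100                   # usable bandwidth in megabit per second

exam_bytes = images * bytes_per_image               # ~100 MB in total
seconds = exam_bytes * 8 / (link_mbit * 1_000_000)  # ~8.4 s on this link
```

Even a simple calculation of this kind makes clear whether a given link is a realistic bottleneck for interactive viewing.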

8.1.3 Connecting the imaging devices

Digital linking versus ‘secondary capture’

Almost all new imaging devices for Radiology can be linked digitally to the network (see the
next point about DICOM). If an older device cannot be linked digitally there is a possibility
of digitizing the images from the analog output that is used to control (for example) a
printer. The result will then be as if a print had been scanned on film. The functionality for
working further with these images is more restricted.

A special problem with this sort of ‘secondary capture’ is the sending of patient and exam
IDs. A possible technique is to deduce these from the text that has been printed out onto the
‘film’ via automatic character recognition. This is, however, unreliable (see section 8.2.3).

DICOM

The DICOM standard was developed for the exchange of medical images and related data.
This standard is widely supported in the radiological world (and more generally in PACS).

DICOM defines two things:

• Coding (in a high degree of detail) of imaging information, including acquisition
parameters, spatial relationships between the images, information about patients and
exams, etc.

• A (complex) protocol for communication with an entity that offers a specific service for
these images (storage, printing, etc.).
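
To make the first point concrete, the following sketch shows how a single DICOM data element is coded in the common explicit-VR little-endian form (the ID values are invented; real datasets contain many such elements and are normally produced by a DICOM toolkit rather than by hand):

```python
import struct

def encode_element(group, elem, vr, value):
    """Encode one DICOM data element (explicit VR, little endian, short-VR form):
    2-byte group, 2-byte element, 2-char VR, 2-byte length, value padded to even length."""
    data = value.encode("ascii")
    if len(data) % 2:                 # DICOM requires even-length values
        data += b" "
    return struct.pack("<HH2sH", group, elem, vr.encode("ascii"), len(data)) + data

# Patient ID (0010,0020) and Accession Number (0008,0050): the two IDs that
# link an imaging exam back to the HIS/RIS (example values are fictitious).
patient_id = encode_element(0x0010, 0x0020, "LO", "HOSP-12345")
accession = encode_element(0x0008, 0x0050, "SH", "RX2002-0042")
```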

Different DICOM services

A device can be ‘DICOM-compliant’ without necessarily supporting all DICOM services.
The most usual services are briefly described below:

• Image Storage defines how a device has to send a set of images to another node, e.g. to a
PACS.

• Storage Commitment is a refinement of the previous function, offered by an archive: it
provides feedback once the images have been stored securely (and can therefore, for
example, be wiped from the modality).

• Modality Worklist defines how the HIS can send information about the patient and exam
to the imaging device so that this information can be included in the digital dataset (see
section 8.2.3 p. 89 for the reasoning).

• Print prints a series of images over the network on a DICOM printer. A printer of this
kind will almost certainly be provided for printing from the PACS workstations. The
possibility of printing from the imaging device is useful for emergency procedures.

• Performed Procedure Step allows the imaging device to report that a (part of an) exam has
been performed and to supply ‘administrative’ information about the images and the
procedure with which a PACS or HIS can initiate further operations or perform quality
controls.

• Query/Retrieve allows an independent system to retrieve images from a PACS and to
make a local copy of them (see section 8.1.4 p. 79, the section on specialized
workstations).

On all imaging devices that you want to connect to the PACS, try to activate at least the
DICOM functions ‘Image Storage’ and ‘Modality Worklist’. Discuss with the supplier of
the PACS what other DICOM services can be utilized by that PACS, and check on their
availability with the suppliers of the imaging devices.

Level of integration with DICOM

DICOM is a communications standard intended for the exchange of imaging data between
independent systems. DICOM is not intended to achieve a tight integration of applications
with images or to, for example, realize a joint management or an overarching user interface.

8.1.4 Viewing for primary diagnosis

Functionality, technology and conditions of interactive study of images

There is now sufficient experience of diagnosis on workstations, so we need not go any
further into this subject in this text. Nor will we be discussing here aspects such as the
performance of measurements on the images, calibration of the imaging screens, or the
realization of the correct lighting conditions.

Selection of exams, worklists

The system must build up ‘worklists’ from which the exams to be studied can be selected. In
the non-automated organization, the worklist consists of a stack of film folders with the most
urgent exam at the top. With fully digital working, the ‘token value’ of paper disappears
and all the information must be explicitly presented by the computer.

When different radiologists are working together on a packet of exams, the system must
synchronize worklists across different workstations and prevent different radiologists from
starting to report on the same exam.
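
The required behaviour can be sketched as an atomic ‘claim’ operation (a minimal illustration with invented class and method names; a real PACS coordinates this across workstations via its central server):

```python
import threading

class ReportingWorklist:
    """Sketch of exam claiming, so two radiologists never report the same exam."""

    def __init__(self, exam_ids):
        self._lock = threading.Lock()
        self._pending = set(exam_ids)
        self._claimed = {}            # exam_id -> radiologist

    def claim(self, exam_id, radiologist):
        """Atomically claim an exam; returns False if it is already taken."""
        with self._lock:
            if exam_id not in self._pending:
                return False
            self._pending.remove(exam_id)
            self._claimed[exam_id] = radiologist
            return True

    def pending(self):
        with self._lock:
            return sorted(self._pending)

wl = ReportingWorklist(["EX-1", "EX-2"])
first = wl.claim("EX-1", "dr_a")    # dr_a gets the exam
second = wl.claim("EX-1", "dr_b")   # refused: already being reported
```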

Investigate where in the existing organization people rely on the ‘token value’ of paper or
film. Check whether this function is sufficiently taken over by the digital worklists on the
workstation for image viewing and identify where additional information must be brought
into the computer system.

‘Bookmarking’ of images

Here we are concerned with the possibility of marking a selection of images as especially
relevant or illustrative, e.g. to be able to find comparison images quickly when comparing
exam results later on. There is less experience of this function, but given the increase in the
volume of available image material, it will become important.

Filtering of images

The imaging department may have a policy of sending only a summary of the imaging exam
(see next point) to the requestor or enclosing this summary as an additional item with the
other images, etc. It cannot therefore be automatically assumed that the PACS supports the
desired procedure.

If the imaging device generates a large number of slices lying very close to one another, a
subset of images with a larger distance between them may be used throughout the rest of the
process. The PACS must then provide mechanisms for automating this feature. ‘Only
making a subset of the images available’ can be done by only maintaining these images
(p. 74), or by only maintaining the other images in a slower part of the archive (or in a part of
the imaging department), or by simply making this subset easier or faster to retrieve, which
is more a question of ergonomics and efficiency than of imposing restrictions.
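
The ‘larger distance between slices’ rule can be sketched as follows (the threshold and slice positions are invented; actual subset selection in a PACS is supplier-specific):

```python
def thin_series(slice_positions, min_gap_mm):
    """Keep only slices at least min_gap_mm apart (hypothetical subset policy)."""
    kept = []
    for pos in sorted(slice_positions):
        if not kept or pos - kept[-1] >= min_gap_mm:
            kept.append(pos)
    return kept

# A series of 0.5 mm CT slices thinned to a 5 mm reading subset.
subset = thin_series([i * 0.5 for i in range(40)], 5.0)
```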

At policy level, a decision must be made regarding the extent to which only a subset of the
images will be sent from the imaging department to the rest of the institution. Another
point is whether the PACS can implement this decision technically without requiring too
many manual actions.

Use of another supplier’s workstations on the PACS

The user interface on the workstations is strongly linked to the non-standardized
management system of the PACS. It is therefore impossible to transparently replace the
‘viewing stations’ of the supplier of that PACS with workstations from another supplier.

What is possible is to exchange imaging exams with another workstation that, for example,
supports specialized visualization or surgery planning. The further manipulation of the
images on this other workstation is from then on independent of the PACS, and takes place
without the management functions offered by that PACS.

An initial possibility is to pass these images on from the PACS (‘DICOM Image Storage’). A
second possibility is to allow this specialized workstation to query the list of exams present
in the PACS that meet certain criteria and to transfer these exams itself (‘DICOM
query/retrieve’).

If you want to exchange images with specialized workstations, then you had best ensure
that the suitable DICOM functions are provided. Make due allowance for the fact that the
image manipulations and processing on these workstations are independent of the PACS.

Managing the results of specialized processing or analysis

A very common reason for exchanging images with a separate workstation is to perform
specialized processing there. The results of this can be images but can perhaps also be
numbers or specialized presentations. You cannot automatically assume that these results
can be sent back to the PACS for storage and further management, certainly if it is data for
which DICOM does not have a standard.

If you want the results of the processing on independent workstations to be managed in the
PACS, you must examine in detail with the suppliers concerned whether that is indeed
possible and whether the explicit communications required for this make this setup
worthwhile. Ask yourself whether the processing cannot be performed within the PACS.

Access control

Access control from the workstations of the PACS must be explicitly provided for in that
PACS. As with every subsystem, for the PACS also you must check how the general policy
for access can be implemented. An alternative is to have the exams selected in the HIS or the
RIS (section 8.2.2) and so delegate access control to that system.

If the images have been sent to a workstation or system outside the PACS, the access control
from then on lies, of course, in the hands of that other system. A special problem is that
refined access controls are not possible with the ‘DICOM query/retrieve’ function. The PACS
does not know, among other things, which user is performing the query action on the other
system.

Restrict use of the DICOM ‘query/retrieve’ function to systems for which you have access
control.

With DICOM it is possible, however, to initiate an image transfer from a system other than
the one on which the images reside or on which they have to end up. It is accordingly
technically feasible to, for example, initiate that transfer from the HIS, in which case you
must still ensure access control in the HIS.

8.1.5 Viewing throughout the clinic

‘Clinical viewing’ means viewing images outside the imaging department, e.g. by the
requestor of the (radiological) exam. In most systems, different software is offered than that
used within the imaging department.

The emphasis in this software for these more occasional users can then lie more on an
intuitive user interface, on integrability with other components of the clinical workstations,
or on greater economy in technological requirements. As a result, functions are omitted or
simplified. Less attention can also be paid to matters such as the handling of calibrated
imaging screens.

You have to make the policy decision whether all the workstations will be equipped with
the ‘radiological’ software, or whether outside the imaging department only the ‘clinical
viewing’ software will be provided, or whether a combination of the two workstations will
be used outside the imaging department.

Of course, if you combine the two then the question arises whether you get the best or the
worst of both worlds. What can play a part in this is that you must then integrate the two
systems with the HIS and achieve access control (aspects that have a stronger emphasis
outside the imaging department than they do within it). Specialized workstations such as,
for example, those used for surgery planning remain outside the scope of this discussion
whatever form they may take.

Functionality, technology and conditions for interactive study of images

There is less experience with the use of digital viewing by heterogeneous groups of clinical
users. For that reason you should certainly involve different representative users in the
assessment of possible systems right from the outset of the project (see p. 69).

We would however like to point out that dynamic navigation through a series of images
using the new imaging techniques is gaining in importance. There is no analogy for this in
film-based working, so clinical users most probably have less experience with this.

It is usual to regularly calibrate the imaging screens within the Radiology department. That
is difficult to organize throughout the entire institution, certainly when these imaging
screens are not specially designed for this purpose, when they also have to serve for
presentation of data other than images, and when the precise adjustment cannot be firmly
achieved. Most probably the ambient light also is not always the same, if only because the
users need more light for their other tasks, whereas examining the details of images
generally requires a darker environment.

Selection of exams, worklists

In a clinical department, an imaging exam is approached from the perspective of the whole
patient file. For this reason, among others, commercially available systems increasingly
offer the opportunity to select images from the primary user interface of the overarching
information system (section 8.2.2, p. 86).

‘Bookmarking’ of images and annotations on images

In the film-based system, clinical users often use their own summary of the images, e.g. by
making annotations on the film. So, for example, to follow the course of tumour therapy a
reference slice can be designated that you will want to find again quickly when you compare
a new exam later.

Discuss with the supplier of the PACS the possibilities for indexing (or annotating) images
or parts of images and how such selections or indications can be managed. Decide on a
policy for making or changing annotations and check whether the PACS can support this. It
should not be possible for users outside the diagnostic department to change annotations
made by people inside the department.
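
The annotation policy above boils down to a simple permission rule, sketched here with invented role and field names:

```python
# Sketch of the policy: only the diagnostic department may change annotations
# made inside that department; everything here is illustrative, not a real API.
def may_edit(annotation, user):
    """Return True if this user is allowed to change the annotation."""
    if annotation["made_in_dept"]:
        return user["dept"] == "radiology"
    return True

ann = {"id": 7, "made_in_dept": True}
allowed = may_edit(ann, {"name": "dr_rad", "dept": "radiology"})
blocked = may_edit(ann, {"name": "dr_ward", "dept": "internal_medicine"})
```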

Access control

Detailed access control is even more important outside the imaging department than within
it. That can also be a reason for allowing the imaging exams to be invoked in the
overarching information system (section 8.2.2, p. 86).

8.2 Integration of PACS into the whole IT solution

Due to the hi-tech character of a PACS, the temptation to concentrate on the technical
aspects is all the greater. Even so, integration into the whole IT solution is essential: a
stand-alone PACS cannot meet expectations.

8.2.1 ‘Back-office’ integration of PACS into the overarching information system

The term ‘back-office integration’ is used here because we are dealing with a form of
information exchange between computer systems that is separate from and largely invisible
to the interactive user. This is not a standardized term.

Illustration 8: The HIS must send information to the PACS. Some information is required so that the
PACS can manage the imaging exams in a meaningful way (in the global context) or can control
automatic systems. Other information is primarily intended for organizing the user interface on the
imaging workstations.

Sending administrative and logistical information to the PACS

The PACS must obtain information from the HIS/RIS about the patients who are expected
and the exams that have to be performed. All changes in patient data in the HIS/RIS and
‘patient merges’ must be pushed through to the PACS. This must cover, as a minimum, the
IDs of exams and patients (used in the HIS) and the name of the product.

A fundamental reason is of course to be able to refer in the image management system to the
same entities as those in the HIS (see p. 30). Another specific reason is so that the PACS can
check whether the incoming imaging exams can really be allocated to a particular patient
and linked to an exam in the overarching information system (more about this in sections
8.2.3 and 8.2.4). A possible further reason is to allow the PACS to control prefetching (p. 74)
and prerouting (p. 75).13

A frequently used method for this information exchange is communication of messages in
the HL/7 standard. It can be an advantage if the PACS can handle several mechanisms for
this exchange, since HL/7 is not the easiest protocol to implement (see p. 16).
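
To give an idea of the flavour of HL/7 version 2, the following sketch parses a minimal, invented ADT message using the default delimiters; a production interface would of course use a full HL/7 library rather than this simplification:

```python
# A minimal HL/7 v2 message: segments separated by carriage returns,
# fields by '|'. The content below is entirely fictitious.
raw = ("MSH|^~\\&|HIS|HOSP|PACS|RAD|200212011200||ADT^A08|MSG0001|P|2.3\r"
       "PID|1||HOSP-12345||Doe^John")

def parse_hl7(message):
    """Split an HL/7 v2 message into {segment_id: field list} (first occurrence only)."""
    segments = {}
    for seg in message.split("\r"):
        fields = seg.split("|")
        segments.setdefault(fields[0], fields)
    return segments

msg = parse_hl7(raw)
patient_id = msg["PID"][3]      # PID-3: the patient identifier used by the HIS
```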

Sending reports to the PACS

If you are examining diagnostic images, there is a good chance that you will also want to
look at the report. That report will, however, be managed outside the PACS. You can opt
for exchange of these reports between HIS and PACS, e.g. so that you can view previous
reports from the viewing software.

Of course, with this solution you have the usual problems of replication of data in two
systems (see p. 22). An alternative is to supply a link at the level of the user interface
(ultimately this is purely a matter of presentation), as described in section 8.2.2, p. 86.

Make it a policy decision whether you want to exchange reports with the PACS (so that
they can be retrieved from there). If you do supply such a link, make sure that the copy is
always up-to-date.

Forwarding task-supporting information to the PACS

In the discussion of radiological viewing it was emphasized that ergonomic worklists are
important (p. 79). If the imaging workstation is to be able to build up such lists it needs

13 Of course, use of this information for prefetching implies that you have to send this
information well in advance and that it must be available in the HIS well in advance.

information from external systems. The interactive user perhaps wants to obtain a list of
exams requested by a certain doctor or by a specific unit, or of exams for which a report has
still to be made.14 An initial question that arises in this connection is whether the PACS
can actually use the information you want. The second question is how you will forward
this information (as well as any changes in it).

Check what information is required in the user interface (worklists, search engines) of the
workstations that is not already present in the PACS. That means that you must analyze in
detail the process of working with images. When defining the workflow, take into account
that there can be a delay in sending this information. Check whether the PACS has
mechanisms for accepting and using this information (this is not standardized).

Another reasonably useful form of information communication is making the request and
the clinical information queryable from the diagnostic workstation, so that the radiologist
can do most of his work from that workstation without having to switch over to other
information systems.

8.2.2 ‘Front-office’ integration of PACS in an overarching user interface

By the term ‘front-office integration’ we mean integration at the level of the user interface, so
that the user at least gets the impression of interacting with one single overarching
information system. In the context of PACS that is certainly relevant for the clinicians, since
for them images are only one part of the available information (perhaps a technologically
‘heavy’ part, but one of only limited importance).

An initial advantage for integration in the user interface is simple ergonomics because the
patient and/or exam does not have to be searched for again in a different application.

A second advantage of ‘front-office’ integration is apparent from the discussion in the
previous section. Much of the information that you communicate ‘back-office’ ultimately
serves just for user interaction in the other system (presentation of reports, sorting of
worklists per unit, etc.). Often it is technically simpler instead to have the user interface
function performed by the system that immediately has all the data.

A third advantage is that you can impose a consistent policy. At different places in this
chapter, for example, access control was mentioned. If the imaging exams can only be

14 This last information can sometimes also be indicated on the imaging workstation.
However, in many organizations it is more reliable for this information to come directly
from the system where the report is produced.

invoked in the HIS (and if the viewer of the PACS is accordingly controlled by that HIS) all
the access control can be performed centrally in the HIS.

Illustration 9: In this example images are being invoked from the user interface of the clinical
workstation. In this arrangement you automatically have the familiar context and selection
mechanisms, and a part of the information exchange from Illustration 8 becomes unnecessary.

The primary system for selection

The question arises which system should be the ‘primary system’ for selecting information
from worklists. Given the limited experience with front-office integration a pragmatic look
should also be taken at the available possibilities.

For clinical viewing it is clear that the user interface of the general patient file is the most
suitable starting-point. That system has most of the relevant context to be able to supply
powerful search mechanisms or to present worklists (e.g. of patients who are allocated to this
user, who are present at a certain department or for whom a consultation has been
scheduled). That system also has the context for access control.

Regarding use within the radiology department it can be stated that the pure availability of
images controls much of the workflow and that the worklists are optimized to the specific
task of working with images. The radiologist would then be able to select the exam on the
workstation, after which the HIS/RIS would present the clinical and other information and
offer support for the input of the report.

At the start of the PACS project, decide what integration at the level of the user interface
you are striving for (since that leads to a shift in the efforts required for development). You
should therefore decide what the primary system is for the selection of patient and exam.

Available techniques

The HL/7 consortium is working on the CCOW standard for this kind of ‘visual integration’
of IT applications (see section 2.6.2 p. 28). It is too early to speak of a fully mature standard
and the envisaged functionality is, all in all, still rather limited.

While waiting for this standard, many PACS suppliers provide ad hoc facilities that enable
their systems to be controlled externally or, in consultation with other suppliers, allow
those suppliers to control their systems (technologically it is indeed a simple matter to send
an ‘event’ to another application in one technology or another). Over the past few
years remarkable progress has been made in the acceptance of the idea that the different
applications of different suppliers must work together. Not too much however should (yet)
be expected of the implementation. For example, if you can invoke the clinical viewer from
outside the application with the instruction to immediately display an imaging exam but you
have to restart this viewer to display a different exam, then you lose a great deal in terms of
ergonomics.

When you use a context synchronization (e.g. via CCOW) it will continue to be apparent to
you that there are two different applications involved. The level of detail with which you
can control the other application is also limited. The other extreme is that the supplier of one
system supplies a software component that the supplier of the other system integrates into
his own application, which makes a very tight integration possible (see section 2.6.3, p. 29).15
Given the progress being made in component frameworks in general, it can be expected that
this principle will be used to an increasing extent.

If you are striving for integration of user interfaces between PACS and other systems,
discuss the possibilities in detail with the suppliers concerned, since this is completely non-
standardized. Check whether the implemented integration is also compatible with complex
navigation throughout the entire patient file.

Other key issues

The tightness of the visual integration can affect the desired refinement of control. You must
be able to compare images, which demands smooth navigation through the imaging exams.
The navigation mechanisms of the HIS have the advantage that they reflect the specific

15 Both imaging presentation and communication with the underlying archive are specialized
functions. The component in the user interface accordingly comes with its own underlying
system.

organization and utilize the broader existing context. On the other hand, if the viewer is
integrated in such a way that you have to go back for each navigation to a window in the
HIS, it can ultimately be more convenient just to display on the viewer which patient the
data relates to and to arrange for interactive navigation through the various exams of this
patient within the viewer.

The greater the extent to which use is made ‘around the images’ of annotations, summaries,
measurements and so on, the greater is the connection between these images and the
information or context in the overarching information system. The management of these
items can, of course, be made more powerful and also simpler by centralizing it in the HIS.
Experience with this is still insufficient however.

8.2.3 Information exchange with the imaging devices

IDs in the forwarded DICOM dataset

In the DICOM dataset that is sent to the PACS there is a field for the patient ID and a field for
the ID with which the external information system refers to this imaging exam (‘accession
number’).

These attributes are generally entered on the imaging device (which generates the DICOM
message). Entering patient data on the device is also usual in film-based working. In the
case of PACS however, it is essential that the IDs be entered with absolute accuracy. This
cannot be guaranteed if the input is performed by manual typing.

Wherever possible, avoid a situation where patient and exam ID have to be entered on the
imaging device; instead make sure that these are taken over electronically from the
overarching information system.

Techniques

The most commonly used method in ‘radiological’ imaging is the sending of data on
scheduled exams to the imaging device via the DICOM ‘Modality Worklist’ function. The
operator must then select the correct entry from this list.

This DICOM communication can be performed directly from the HIS. The PACS can also
however take upon itself the task of communicating with these devices. After all, the PACS
obtains from the HIS information about scheduled exams (back-office integration, p. 83), so
that the HIS only has to supply some additional data with which the PACS can decide which
device to control. This task of control is not simple: allowance must be made, for example,
for precisely when (‘which day’) the exam is scheduled. There are specialized
‘interface engines’ on the market (see p. 21) that can handle the traffic between HIS, PACS
and imaging devices, and which can also control a range of non-DICOM-compatible devices.

If you want to connect an imaging device to the PACS and do not yet have a clear idea of
the best method of supplying IDs automatically to that device, then check first whether
the ‘DICOM Modality Worklist’ function can be used. Do not, however, automatically
exclude other options, e.g. direct control of that device from the selection in the HIS/RIS,
which can be more convenient when you are performing many short exams (which makes
selection from a long list onerous) and when the operator in any case has to indicate in the
HIS/RIS, for the general organization, that a particular exam is in progress.

Illustration 10: It is essential that the imaging devices take the external IDs for the patient and exam
from the global information system electronically, so that these can be correctly entered in the
generated DICOM datasets. In this example, the PACS is making worklists for most modalities on the
basis of information from the HIS, while another device is controlled directly from the HIS (in practice
from the RIS).

Additional key issues

With the ‘Modality Worklist’ it is the imaging device that retrieves a new worklist, e.g.
every couple of minutes. A certain time-lag must therefore be allowed for between input
into the HIS/RIS and availability of the data on the device.

Make sure that the exam in the HIS is created BEFORE the imaging exam is begun. (The
‘exam’ relates to the object to which the imaging exam is linked.)

When exams can be performed on different devices and the data has been sent to all these
devices, you end up with long worklists that become impractical. The worklist on the
device must also be cleansed again; this cannot always be done automatically (you may, for
example, still want to add another image). The imaging device can, if desired, present only
worklist entries scheduled for ‘today’, but that again assumes that the date on that device is
correct and that any postponement of exams is actually entered into the information system.
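
The ‘today only’ filter amounts to a simple date comparison, sketched here with invented field names:

```python
import datetime

# Sketch of the 'today only' worklist filter discussed above (illustrative data).
def todays_entries(worklist, today):
    """Keep only entries scheduled for the given date, to keep the list short."""
    return [e for e in worklist if e["scheduled"] == today]

today = datetime.date(2002, 12, 2)
worklist = [
    {"accession": "RX-1", "scheduled": datetime.date(2002, 12, 2)},
    {"accession": "RX-2", "scheduled": datetime.date(2002, 12, 3)},  # postponed
]
entries = todays_entries(worklist, today)
```

Note that this is exactly the mechanism that breaks down when the device clock is wrong or a postponement is never entered into the information system.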

Of course, manual input must continue to be possible, e.g. to be able to start operations with
the imaging device if the information in the overarching system is not available (in time).

8.2.4 Correction of errors in the PACS

Because the PACS knows which exams to expect (p. 85) it can check whether the incoming
DICOM datasets contain the correct IDs of patient and/or exam. In cases where manual
input was required, inconsistencies can still occur. It must remain possible to view these
imaging exams ‘for emergency cases’ (within the imaging department), but they cannot be
utilized throughout the entire hospital.
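
The consistency check described above can be sketched as follows (the IDs, field names and ‘quarantine’ status are invented for illustration):

```python
# Incoming DICOM IDs are matched against the exams the HIS announced.
scheduled = {("HOSP-12345", "RX2002-0042"), ("HOSP-99-001", "RX2002-0043")}

def classify_incoming(patient_id, accession_number):
    """Accept the exam for hospital-wide use, or quarantine it for manual correction."""
    if (patient_id, accession_number) in scheduled:
        return "accepted"
    return "quarantined"   # viewable within radiology only, until corrected

status_ok = classify_incoming("HOSP-12345", "RX2002-0042")
status_bad = classify_incoming("HOSP-12345", "RX2002-9999")  # mistyped exam ID
```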

Make sure that errors in identification of the exams can be rapidly corrected manually (even
during night and weekend shifts). Make sure that errors are reported as close as possible to
the place where they arise. Correction post facto by persons who have not actually
performed the exam requires thorough knowledge of the organization and of imaging!

You should arrange for a stricter discipline among the operators than in the film-based
situation. A bad mistake is, for example, to select from the ‘modality worklist’ another
exam scheduled for the same patient. Provide feedback mechanisms to draw the operators’
attention to faults, which means among other things that the radiologists must report
incorrect identifications.

Illustration 11: The information exchanges between HIS, PACS and imaging devices described in
the previous sections allow the PACS to perform a check on incoming imaging exams. In the
absence of these mechanisms manual corrections are, however, required.

9 Telematics

Telematics deals with the support, by means of information technology, of operations in
which quite a long distance is involved. ‘Distance’ here can mean:

• Physical distance, through which (1) limitations of communications technology play a
part and (2) security based on physical barriers is absent.

• The fact that the information exchange is extending outside your own hospital, which
means that you (1) have no influence over the systems and working methods on the other
side and (2) have to work without the general information context that you normally
have within your own working environment.

The most important key issues in telematics derive from that second point. If physical
distance is the only factor involved then it is known as teleworking. We will not be
discussing teleworking in this text, except to mention a few differences with the situation
where you communicate with people outside your own hospital.

In this chapter we will first focus extra attention on the aspects of security that become
especially important in telematics. We will then discuss a few techniques of information
exchange in which this information is not present in a medical file on both sides. Finally,
some possibilities and key issues in tele-information exchange are discussed in the context of
the medical file.

9.1 Security in telematics

Key security issues have already been mentioned in chapter 6 (p. 50). In this section we revisit those aspects that are especially important when using telematics and also discuss some aspects specific to this application. Key issues already mentioned in those other chapters are not explicitly revisited here.

9.1.1 Defining responsibilities

An important element of security is assigning responsibility to the user, with the related possibility of imposing sanctions. If you make sensitive information or services available to outsiders via telematics, then you must ensure that these users are also advised of their duties and show a sense of responsibility. At the same time, however, responsibilities must also be formulated in more general terms.

Conclude legal agreements on security and responsibilities with external parties.

There is also a relationship between the hospital and the patient. Perhaps the patient would
prefer it if his or her data was not passed on to the information systems of other care
providers. Since the use of telematics is a new factor in this field, many situations are not
covered by the general regulations.

Check whether the patient’s ‘informed consent’ is required for the electronic communication
of patient data.

9.1.2 Securing of communications

One form of illegitimate access to data is ‘eavesdropping’ on communications. In a local network that risk is limited because the hacker must be physically located within the infrastructure. In telematic networks the risk is enormous, not only because of the longer and less closely monitored physical path to the partner, but also because the nodes in that external network are, per se, accessible from somewhere else.

The Internet is especially vulnerable here. The message can be sent via (and often even be retained at) nodes that are managed by organizations that have no opportunity to pursue a security policy and that cannot be held responsible for intrusions by third parties. There is no formal confidentiality of mail on the Internet. On the other hand, the Internet is organizationally a fruitful medium for cooperation between health workers of all kinds (see section 9.3.2.2, p. 105). The way to deal with this aspect is encryption, which is best applied to all communications (see p. 58).

Techniques are available for checking whether the message has been (deliberately) changed
during communication and whether the message did actually come from your
communication partner (see p. 59).
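As a minimal illustration of such techniques (a sketch only, not a description of any specific product; the exchange of the key with the partner is assumed to have taken place beforehand over a separate, secure channel), a shared-secret authentication code lets the receiver check both points at once: that the message was not changed, and that it came from the holder of the key.

```python
import hmac
import hashlib

# Hypothetical shared secret, agreed with the communications partner beforehand.
SHARED_KEY = b"exchanged-out-of-band"

def sign(message: bytes) -> bytes:
    """Prepend an authentication tag computed over the message."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return tag + message

def verify(signed: bytes) -> bytes:
    """Return the message if the tag matches; raise if it does not."""
    tag, message = signed[:32], signed[32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message altered, or not from the key holder")
    return message
```

A tag that fails to verify indicates either tampering in transit or a sender without the shared key; full public-key signatures (see p. 59) additionally allow verification by parties that do not hold a shared secret.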

9.1.3 Authentication of users

In telematics, a person need not be physically close to the system to try to retrieve information, which makes intrusion easier16.

16 An additional problem here is that someone may try to break into the system ‘for fun’, whereas physically stealing the data would not be considered ‘fun’.

Ensure that, if a wide group of users over which you have no control has access to a service (such as a Web site), someone who hacks into the computers providing this service still cannot access the computer systems in the hospital that contain sensitive information or from which sensitive operations can be performed (see section 6.12).

If you have to give an external user access to sensitive information then strong authentication
methods are required (see p. 50). This means that you must always conclude prior
agreements with this user and must exchange hardware or keys.

9.1.4 Confinement of business logic to the client software

Even if you are sure of the reliability of the person who is requesting access, there is a risk
that the workstation with which this person obtains access has been infiltrated. After all, you
do not have the same control over this workstation as over those within your own hospital.

As a result, you cannot rely on any restriction that you yourself have implemented in the
software that you have supplied to this person. Assume, for example, that this software
opens a connection to the database and performs a number of ‘secure’ queries. A hacker can
try to put this connection to the database to illicit use. According to the same reasoning you
cannot rely on any form of access control that was implemented in this application. You also
cannot be sure that the business logic in this application is being correctly performed and is
therefore keeping your internal systems consistent.

The software at the clients should not contain any functions that can be circumvented, such
as access control or important business logic.

The software on the server should not assume that queries or retrievals of services by the
telesoftware are ‘well behaved’.
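A minimal sketch of these two recommendations, with all names and rules invented for illustration: the server-side method re-validates every parameter and enforces access control itself, rather than assuming the tele-application already did so.

```python
# Hypothetical middle-tier method exposed to tele-applications.
ALLOWED_REPORT_TYPES = {"radiology", "lab"}

def get_report(user_id: str, report_type: str, report_id: int) -> dict:
    # Re-validate every parameter: the client may have been tampered with.
    if report_type not in ALLOWED_REPORT_TYPES:
        raise PermissionError("unknown report type")
    if not isinstance(report_id, int) or report_id <= 0:
        raise ValueError("invalid report id")
    # Access control is enforced here, inside the hospital,
    # never only in the client software.
    if not user_may_read(user_id, report_type, report_id):
        raise PermissionError("access denied")
    return load_report(report_type, report_id)

# Stubs standing in for the real internal systems:
def user_may_read(user_id: str, report_type: str, report_id: int) -> bool:
    return user_id == "dr_demo" and report_id == 1

def load_report(report_type: str, report_id: int) -> dict:
    return {"type": report_type, "id": report_id, "text": "..."}
```

Even a client that bypasses its own user interface and calls the method directly with manipulated values then obtains nothing the server-side rules do not allow.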

A first, more secure method is to concentrate all such queries, access control and business logic in components within the hospital (in a ‘middle tier’) from which only certain methods are made available to the tele-applications (see Illustration 4, p. 12)17. Even then these internal components must make allowance for ‘parameter tampering’, but the more restricted the set of methods that can be invoked, the easier it is to guard against this.

17 One possible drawback of a thorough implementation of this principle is a slower response for user actions that require strong interaction between the user interface and the business logic. That argument plays less of a role in applications that are explicitly intended for use outside the hospital, where a simpler user interface is most probably being aimed for.

A second more secure method is to have only the graphical user interface running on the
workstation and to have the actual application on the servers of the hospital by using
techniques involving ‘terminal servers’ (popular in Windows environments) or ‘display
servers’ (as encountered in the use of, for example, X-Windows). An advantage of this is that
existing applications can be immediately deployed. On the other hand it can be undesirable
to allow people from outside the hospital to automatically use the same application as
internal staff, simply because they can commit too many erroneous actions due to
unfamiliarity with the system. This solution is therefore more suitable for teleworking.

9.1.5 Detailed access rules and external doctors

Detailed access control is based on information in the computer system about the doctor-
patient relationship. Such information is available to a much lesser extent to external doctors
(this external doctor is most probably not attached to a specific department of the hospital,
has perhaps not had any medical contacts with the patient noted in the file, etc.).

Take into account that the existing access rules for medical data can demand information
about the external doctor that is not available in the internal system, so that you have to
implement other rules.
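A sketch of such a separate rule set (the field names and rules are invented for illustration): internal doctors are checked against recorded doctor-patient contacts, while external doctors fall back on a different, more restrictive rule for which the required information does exist.

```python
# Hypothetical access check with a distinct rule for external doctors,
# for whom no department or contact context exists in the internal system.

def may_access(doctor: dict, patient_id: str, contacts: set) -> bool:
    if doctor.get("internal"):
        # Internal rule: based on recorded doctor-patient contacts.
        return (doctor["id"], patient_id) in contacts
    # External rule (illustrative): only patients for whom this doctor
    # is explicitly registered as the referring physician.
    return patient_id in doctor.get("referred_patients", set())

contacts = {("int_1", "pat_A")}
assert may_access({"id": "int_1", "internal": True}, "pat_A", contacts)
assert not may_access({"id": "ext_9", "internal": False}, "pat_A", contacts)
```

The point is not the specific rule chosen but that the external rule relies only on information the internal system can actually hold about an outside doctor.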

9.2 Technology for tele-interaction between persons

There are a few reasonably standardized methods whose aim is to enable people to cooperate without having to travel. It is not our intention to issue strong recommendations, and certainly not to lay down quality criteria, but we will mention a few key issues.

In information exchange via video conferencing, data conferencing, e-mail, etc., the information is not integrated into an information system at the communications partner. Therefore do not regard this as a way of exchanging information between medical files.

[Figure: two organizations, each with its own record, and another healthcare actor with a local record; no links between the records.]

Illustration 12: In the techniques of telecooperation discussed in this section there is interaction between people but no information exchange between the medical files. In the best-case scenario the people use information from their own file in the conversation or they show it to their communications partner.

9.2.1 Video conferencing

Level of expectations

The quality of visual information in video conferencing is on the low side. The idea is to
create a ‘meeting experience’, not to transfer subtle points of detail. The usual standards are
based on the ‘CIF’ resolution of 352x288 pixels, which is about half the resolution of TV in
both directions (which is itself already much less than on a computer screen). If there is
sufficient bandwidth and processing speed then the movements are less jerky but the
resolution remains limited.

Do not use conventional video conferencing techniques as a way of discussing images or text
in high quality.

To discuss documents in detail at a distance, ‘data conferencing’ is preferred (section 9.2.2, p. 98). Of course, if the quality is not particularly important and the material is only required to illustrate the discussion, it can be sufficient to hold it up in front of the camera or to switch to a separate document camera.

If it is primarily the ‘meeting experience’ that is important then make sure that there is
adequate sound quality, if necessary at the expense of image quality.

It is more annoying if you cannot understand the other side properly than if the image falters
now and again. One aspect of this is the occurrence of echoes, which can demand facilities
for ‘echo cancellation’.

Required network capacity

In video conferencing the delay in communications is relevant and you need a guaranteed
bandwidth.

For this reason the use of ISDN is popular, since a guaranteed bandwidth is combined with almost universal availability. The bandwidth of ISDN, however, is low. With the existing compression techniques, several ISDN lines are therefore often used in parallel. Most telecommunication providers can supply synchronous connections with a higher guaranteed bandwidth, but that has to be discussed on a case-by-case basis.

Video conferencing using the IP protocol, e.g. over the Internet, is attractive because of its flexibility. There are separate standards for it. At the moment, however, you cannot obtain guaranteed bandwidth over the public Internet. If you do want to make use of it, you have to conclude agreements with all the providers of the Internet communications.

The above requirements for bandwidth assume that communications will be set up between
two locations. If more than two locations are involved then additional communications can
be set up in pairs, or one node can be made responsible for the control after which a
composite signal is sent (with high bandwidth and specialized equipment and operation at
that node). The provider of the telecommunications channels can also offer that as an
additional service.

Organization

The ‘directing’ of the sound and image is not automatic. For meetings involving several people (certainly if documents have to be shown on screen now and again) or for transmissions in which the quality of the moving images is important (image composition, zooming in, light reflections, etc.) it can be necessary to have someone knowledgeable operating the camera(s).

If the quality of the video conferencing is important, consider whether investment in proper operation of the equipment is not more sensible than additional investment in equipment.

9.2.2 Data conferencing

If the data is available in digital form on the workstation of one of the participants, this data
can be made visible with the same quality on the workstation on the other side of the
communications.

In an ‘electronic whiteboard’ a window is visible on all the connected workstations on which
each of the participants can place information from the local environment. All the
participants can, among other things, make annotations on the information introduced into
the discussion.

In ‘application sharing’ the user interface of a program that is running on one workstation is also visible on the other connected workstation(s). If necessary, the control of that program (‘mouse and keyboard’) can be handed over to another participant. For this reason it is also sometimes known as a ‘shared desktop’.

In a whiteboard you can only introduce standardized media. It does not, for example,
directly accept a radiological image series in DICOM. In order to realize interactive
navigation through such an image series (running through dozens of images, adjusting the
contrast interactively), application sharing is necessary.

For the more basic applications it is sufficient to have a significantly lower bandwidth than
that required for video conferencing. For interaction on a series of images as mentioned in
the previous section a high bandwidth can indeed be a requirement. In the world of
teleradiology, specialized applications that can circumvent these limitations are already
being used18, but each partner in the communication must then use the same specialized
software.

If you want to perform complex interactions of high quality on a large series of images
(typically for radiology applications, and then primarily for primary diagnosis) you must
weigh up the use of standardized applications that demand a large bandwidth against the
use of specifically radiological solutions.

9.2.3 ‘Delayed time’ interaction by exchanging files

The ‘real time’ aspect of the above techniques means that everyone must be present at the
same time. For many applications the most efficient procedure is still the sending of data
after which the addressee studies this asynchronously.

18 In the standard application, each interactive adaptation of the image presentation demands that a new ‘screen image’ be sent. With these specific applications the images need only be sent once; from then on only commands to change their presentation are sent.

Only use telematic technology in which people have to be present at the same time, albeit at
different places, if ‘real time’ interaction is genuinely required.

Any type of file can be sent via e-mail, but because the message has to be stored on
intervening computers the total length of the message is restricted.

To send large messages (e.g. sets of images), the use of a server on which one party places the data and from which the other retrieves it is generally more viable than e-mail. In addition to making arrangements for security (including between the users themselves) you must also set up an organization to prepare, clean up, etc. the data on this server.

It is becoming increasingly common to provide access to this server via a Web interface,
which not only allows you to provide a specific user interface but also offers possibilities for
realizing extra management of the data.

The communications partner must have the software to read the data. Check whether
important meta-information or organizational information on the various information
items can also be displayed.

That seems obvious. But with complex data types, e.g. series of radiological images, it is not
so straightforward. Perhaps the ‘viewer’ of your communications partner can see these
images but the positions are not stated in millimetres even though you used mm as the basis
for calculation in a related report, and so on (see also section 9.3.2, p. 103).

9.3 Telelinking of and telelinking to medical files

We will only discuss situations in which there is communication between independent health
care workers (hospitals or doctors for the purposes of this text). If, on the other hand, different
hospitals merge and decide to construct a common file, the integration techniques discussed
in chapter 2 are applicable, in which only the technology for the communications differs
(and, as regards security, a VPN is recommended).

A few characteristics of this situation and its inherent limitations are therefore:

• The structure of the medical files is and remains different. In practice there is, after all,
no standard for that file. There is not even a standardized way of working and in many
cases not even one standardized set of concepts.

• You have only a limited influence over the technology at your communications partners.

It is therefore a priori unrealistic to expect a strong integration between the various systems.
The connections will be loose with limited workflow support.

9.3.1 Exchange of information between independent medical files

In this situation information is copied from one file to the other. The information is then
independently managed at the two locations.

[Figure: two organizations, each with its own record, exchanging information; another healthcare actor with a local record also takes part in the exchange.]

Illustration 13: Exchange of information between medical files, in which the different doctors each interact with their own file. There is primarily interaction with the local files, while the information exchange between the files themselves is limited.

9.3.1.1 ORGANIZATIONAL CHALLENGES

An initial problem is that a commonly accepted meaning must be agreed for the data being exchanged. It is, of course, acceptable to consider the entire text of a radiological report as one item, but appointments, admission reports, statements of problems, etc. must contain additional structured information if a computer system is to be able to take action on them. In most cases, therefore, only a ‘lowest common denominator’ of information can be exchanged in a structured manner.

A second problem is that a common context for this data must be defined and coded:

• This context contains, at its very least, the patient. That assumes that the same patient
identification is used in both files or that an automatic translation is possible (see
section 3.1.1, p. 30).

• In most cases more context is required. A radiological result refers to a certain type of exam, to a request accompanied by clinical questions, etc. Here also only a ‘lowest common denominator’ of items will remain, with a lower level of detail in the description than is usual in either file.
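The patient-identification point above can be sketched as a translation table between the two identifier spaces; keeping such a table correct is an organizational task in itself, and the ID formats below are purely illustrative.

```python
# Hypothetical mapping from the sender's patient identifiers to the
# receiver's. An unknown identifier is an explicit failure: the message
# must then be queued for manual identification, never silently matched.
ID_MAP = {
    "HOSP1:12345": "HOSP2:98765",
}

def translate_patient_id(sender_id: str) -> str:
    try:
        return ID_MAP[sender_id]
    except KeyError:
        raise LookupError(f"no mapping for {sender_id}")
```

Failing loudly on a missing mapping is the safer design here: attaching a result to the wrong patient is far more damaging than delaying its import.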

A third problem is the mapping of the structure of the transmitting file onto that of the message, and hence onto that of the receiving information system. In that receiving system, allowance must perhaps be made for the fact that information that comes from outside does not have the same references as internal information. An external radiological report, for example, may not have a requestor in the internal system, or a patient who has not yet been registered in the internal system must perhaps first be ‘created’, making use of the limited information available in the transmitted message.

A statement like ‘we will exchange files’ must be translated into a statement of exactly
which information items, with a specific meaning, will be exchanged.

That means that a model must be made of the information in both systems and in the
message and that these models must be mirror images. That requires decisions to be made
about the level of detail or structure in this exchange and perhaps requires adjustments to be
made to the organization of the receiving system.

9.3.1.2 ARCHITECTURE AND TECHNOLOGY FOR COMMUNICATIONS

For communications between two hospitals, direct transmission from sender to receiver is
suitable19. When transmitting to computer systems in private practices, which are less suited
to acting as a server, a more convenient architecture is one in which messages are sent to a
central node which buffers them until the other system comes to fetch them on its own
initiative.

19 In this section we are assuming that the receiver may not retrieve information on his own initiative, if only because that demands a closer harmonization with the business processes at the sender’s end (see the following sections).

The technical options for data exchange and the key issues described in section 2.4 (p. 16)
remain valid.

Recent developments in communications technology in the sphere of the WWW offer obvious potential for establishing the ‘loose connection’ between different systems in a telematics setting20. Some concepts are ‘XML’ (for the structured coding of the data), ‘SOAP’ (for the sending of messages of this kind), and ‘Web services’ (for invoking services remotely).

Take into account that when using communication resources in the sphere of XML there is
still as great a need to conclude precise agreements about the meaning of the information
items being exchanged.
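As an illustration of the structured-coding point (the element names are invented; in practice it is the jointly agreed message definition, not the technology, that does the real work), a minimal result message could be built and read back with standard XML tooling:

```python
import xml.etree.ElementTree as ET

# Sender builds a hypothetical structured result message.
msg = ET.Element("resultMessage")
ET.SubElement(msg, "patientId").text = "HOSP2:98765"
ET.SubElement(msg, "examType").text = "thorax-RX"
ET.SubElement(msg, "report").text = "No abnormalities."
wire = ET.tostring(msg, encoding="unicode")

# Receiver parses the same agreed structure.
parsed = ET.fromstring(wire)
assert parsed.findtext("patientId") == "HOSP2:98765"
```

XML guarantees only that both sides can parse the same tree; whether ‘examType’ means the same thing in both files is exactly the kind of agreement that must still be concluded explicitly.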

If you want to use this technology for information exchange between files, we suggest (if you
are looking for a suitable approach, both technically and content-wise) that you refer to
‘Recommendation 4’ of the Belgian Committee on ‘Standards for Telematics for the health
care sector’ (‘Data’ working group)21.

9.3.2 Interactive tele-access to the file within a hospital

Pretty much the opposite of the approach to data exchange discussed in the previous section is to give the external doctor remote interactive access to your hospital’s file.

In this case there is no interaction with the local file of this doctor. The links between the
context in the two files are only created ‘mentally’. In contrast to the previous technique, this
doctor is confronted with the differences in organization of the two files.

Against that drawback stands the fact that all the available information from the ‘central file’ is presented, within a robust structure, without any compromises having to be made in terms of translation of this information to a local file. This can also include information that is only relevant in that central file and which it does not make sense to store in another file (e.g. contacts for nursing staff), information that changes rapidly (e.g. the patient’s location), information where local storage is technically questionable (e.g. large series of images), or information that is so specific that it is difficult to provide a ‘viewer’ for it locally.

20 The concepts are carefully thought out, there is strong standardization, and (thanks to the wide distribution of this technology) efficient development tools are available and there is a good chance that it will be allowed for in all kinds of products, such as firewall security.

21 http://www.health.fgov.be/telematics/cnst/fr/an.htm

The lack of integration between the files of the external doctor and the hospital makes
automatic workflow between these systems impossible. On the other hand, any interactive
control of the workflow in the hospital by the external doctor is relatively simple to achieve.

[Figure: an external doctor interacting directly with the records of two organizations; no link between the records themselves.]

Illustration 14: Tele-interaction of external doctors with the file of a hospital. There is no direct link between the various files, but each doctor has a strong interaction with the various separate files.

9.3.2.1 ORGANIZATIONAL CHALLENGES

In this way of working, the internal processes of the hospital are actually opened up to the
external doctor, who at least obtains a view of these processes and perhaps has some
influence over them.

If you want to make information and business processes accessible from outside the
hospital, you must in the first instance ensure that they are available within the hospital.

The external doctor cannot be assumed to be familiar with all the details of the internal
processes. A simplification of the procedures relating to the outside world is therefore often
sensible also (e.g. in connection with the booking of appointments).

A special problem in this context is data and functions that are divided over different
systems. Internal users may have learned how to cope with this tangle, but for an outsider
the resulting complexity probably cancels out the benefits of tele-access.

So that a file can be made available to the outside world in a meaningful way, the data and
procedures must already be strongly integrated into it.

From the perspective of the file, a context is required for the user, e.g. to determine what
accesses this user has. The available context for external doctors is significantly smaller than
for the internal users. Separate rules will probably have to be implemented, both for access
(see section 9.1.5, p.96) and for the support of some parts of the workflow.

9.3.2.2 TECHNOLOGY FOR INTERACTIVE TELE-ACCESS

External users need a different user interface from internal users. The internal application probably contains functions that the external user does not need, demands knowledge of organizational details of the hospital, and is vulnerable to incorrect use in the absence of this knowledge.

Try, in so far as no performance limitations are created in the process, to group components
for business logic in a ‘middle tier’ and to invoke the same components from the internal
front-end and the external front-end applications.

One advantage is the opportunity for stronger security (section 9.1.4, p. 95). A second
advantage is re-use of development and maintenance.

If the aim is to grant access to a large group of external parties, the client software can impose only relatively few requirements on the underlying operating system or the available communication channels.

As regards communications, connection via the Internet is an advantage in this case. It does
not matter in what physical way or via which provider (ISP) the external doctor and the
hospital are connected to the Internet. Avoid a situation where the hospital has to support
different principles of communication (different types of modems for example).

As regards the application, software that is based on the exchange of standardized HTML pages or that is written in Java is an advantage here, due to its reasonable independence of the operating system of the user’s workstation. Scripting languages within a browser tend to be dependent on the browser or even on the version of the browser. Remember that the ‘form-based’ principle of HTML/HTTP lends itself to the building of intuitive user interfaces, but that the degree of interaction with the system in that principle is rather restricted.

In the technical design, bear in mind the need for maintenance on the users’ systems. Ensure a technically clear separation of responsibilities in configuring the connection. Avoid a situation where your application has to be installed on the local system, and if that is unavoidable, ensure that there are automatic update procedures via the Web.

In this setting, centralized maintenance of the user’s systems is very expensive, due to the
travelling required, the fact that the configurations are very heterogeneous, and the fact that
you cannot or do not want to exercise any influence over these configurations. It is precisely
those methods that lower maintenance costs in hospitals that cannot be used in this instance.

A fruitful technique is therefore the use of a Web browser. The external user can provide or
arrange a fully independent connection to the Internet and this connection can be
independently configured and tested.

9.3.3 The sharing of a central file between independent care providers

The working procedures described in the previous sections have drawbacks as soon as more than two parties are involved. Data exchange between more than two different files is of course more difficult and will run up against more limitations (see also section 2.2.1, p. 7). ‘Visual access’ to the various central files becomes more onerous the more the information is spread over different such files and the more the user comes into contact with the differing organizations of all these files.

One possibility is that the different people involved can agree that a separate file will be
developed on ‘neutral ground’. For those operations where collaboration is the aim, that
central file will be used as a means to that end. For other operations, everyone uses his own
information system. This does of course require information exchange between the different
internal systems and that central file.

Illustration 15 : An independent central file on ‘neutral ground’ for cooperation between different
hospitals and external doctors. In this illustration each hospital can place information in the central file.
[Figure: a central record managed by a separate organization, linked to the records of two hospitals and to the local record of another healthcare actor.]

You can conceive of this as a highly centralized results server (see section 2.5, p. 25). Due to the greater neutrality in comparison with the two previous solutions (combined with the even more limited structure of the information coupled to it), it is conceivable that the information will not be consulted purely visually but will also be actively queried by local software.

A variant is to manage not the results themselves but references to the results in separate files. There are initiatives underway in which the patient himself manages a part of this data22.

9.3.3.1 TECHNICAL AND ORGANIZATIONAL CHALLENGES

Take into account that, when using a shared file between different independent hospitals, all
problems relating to (1) data exchange between files and (2) tele-access to a file coalesce.

Ultimately in this setting information will after all also be shared with another file (from each
hospital to a central file).

As is the case with a results server within a hospital communicating with the different
departments, this configuration does not automatically provide a solution to the support of
the workflow between the different hospitals in the network.

Another problem is the management (and technical maintenance) of that additional central
file.

22 In this case, it is of course naive to hope that there is a good structure on the data level. In
the ideal case, it is as if one could always send a paper file by fax.
