
Shamil Nizamov

Unofficial Mirth Connect v3.5


Developer’s Guide
Copyright Page

Copyright © 2013-2017 by Shamil Nizamov


Cover image copyright © 2013 by Shamil Nizamov

All rights reserved. No part of the contents of this book may be reproduced or
transmitted in any form or by any means without the written permission of the author.

Mirth Connect is a trademark of Mirth Corporation. HL7 and Health Level Seven are
registered trademarks of Health Level Seven International. All other marks are property
of their respective owners.

Any rights not expressly granted herein are reserved.

The companies, organizations, products, domain names, email addresses, logos, people,
places, and/or data mentioned herein in examples are fictitious. No association with any
real company, organization, product, domain name, email address, logo, person, place,
or data is intended or should be inferred.

This book expresses the author's views and opinions. The information contained in this
book is provided without any express, statutory, or implied warranties. The author, Mirth
Corporation, Health Level Seven International, resellers and distributors will NOT be held
liable for any damages caused or alleged to be caused either directly or indirectly by this
book.

Contents
PART I MIRTH CONNECT BASICS
Chapter 1 Getting Started ........................................................................................................ 15
Installation ............................................................................................................... 15
Mirth Connect Administrator .................................................................................... 16

Chapter 2 What is a Channel? .................................................................................................. 18


Connectors............................................................................................................... 19
Filters ...................................................................................................................... 19
Transformers............................................................................................................ 20
Scripts...................................................................................................................... 21

Chapter 3 Creating a Channel ................................................................................................... 23


Source Connector ..................................................................................................... 24
TMP, MSG and MESSAGE .......................................................................................... 25
Destination Connectors............................................................................................. 27
Testing the Channel .................................................................................................. 31
Global Map, Global Channel Map, Channel Map ......................................................... 34
Global Scripts ........................................................................................................... 36
Code Templates........................................................................................................ 38

PART II GENERIC ELIGIBILITY SERVICE IMPLEMENTATION

Chapter 4 Generic Eligibility Service Introduction...................................................................... 42


Eligibility Service Introduction ................................................................................... 42
Scenario Overview .................................................................................................... 43
Messages and Interactions Overview ......................................................................... 44
Eligibility Query Channels Overview ........................................................................... 45

Chapter 5 Query Sender Channel.............................................................................................. 48


Summary Tab ........................................................................................................... 48
Source Connector .................................................................................................... 50
Destinations Connector ............................................................................................ 50
Channel Implementation Verification......................................................................... 54

Chapter 6 HL7v2 to HL7v3 Transformer Channel ....................................................................... 57
Summary Tab ........................................................................................................... 57
Source Connector .................................................................................................... 58
Destinations Connector ............................................................................................ 60
Code Templates........................................................................................................ 66
Scripts...................................................................................................................... 67
Channel Implementation Verification......................................................................... 68
Chapter 7 Data Logger Channel ................................................................................................ 69
Summary Tab ........................................................................................................... 69
Source Connector .................................................................................................... 70
Destinations Connector ............................................................................................ 71
Code Templates........................................................................................................ 76
Global Scripts ........................................................................................................... 76
Channel Implementation Verification......................................................................... 78
Chapter 8 HL7v3 Verification Channel....................................................................................... 80
Summary Tab ........................................................................................................... 81
Source Connector .................................................................................................... 81
Destinations Connector ............................................................................................ 84
Code Templates........................................................................................................ 90
Global Scripts ........................................................................................................... 91
Scripts...................................................................................................................... 92
Channel Implementation Verification......................................................................... 93
Chapter 9 Response Sender Channel ........................................................................................ 96
Summary Tab ........................................................................................................... 96
Source Connector .................................................................................................... 97
Destinations Connector ............................................................................................ 99
Scripts.................................................................................................................... 102
Channel Implementation Verification....................................................................... 103
Chapter 10 HL7v3 to HL7v2 Transformer Channel .................................................................... 105
Summary Tab ......................................................................................................... 105
Source Connector .................................................................................................. 106
Destinations Connector .......................................................................................... 106
Channel Implementation Verification....................................................................... 109

PART III ACKNOWLEDGEMENTS IMPLEMENTATION

Chapter 11 Acknowledgements Introduction ............................................................................ 112


Scenario Overview .................................................................................................. 112
Acknowledgement Channels Overview .................................................................... 113
Chapter 12 HL7v3 ACK Channel ................................................................................................ 115
Summary Tab ......................................................................................................... 115
Source Connector .................................................................................................. 116
Destinations Connector .......................................................................................... 116
Scripts.................................................................................................................... 117
Chapter 13 HL7v3 Verification ACK Channel.............................................................................. 119
Destinations Connector .......................................................................................... 119
Code Templates...................................................................................................... 122
Scripts.................................................................................................................... 123
Source Connector .................................................................................................. 124
Chapter 14 HL7v2 to HL7v3 Transformer ACK Channel ............................................................. 125
Destinations Connector .......................................................................................... 125
Code Templates...................................................................................................... 128
Scripts.................................................................................................................... 129
Source Connector .................................................................................................. 130
Channel Implementation Verification ...................................................................... 131
Chapter 15 Query Sender ACK Channel .................................................................................... 132
Destinations Connector .......................................................................................... 132
Source Connector .................................................................................................. 134
Channel Implementation Verification ...................................................................... 136

PART IV DICOM
Chapter 16 DICOM Storage SCU .............................................................................................. 138
Scenario Overview ................................................................................................. 138
Summary Tab ........................................................................................................ 140
Source Connector .................................................................................................. 141
Destinations Connector .......................................................................................... 142

Chapter 17 DICOM Storage SCP ............................................................................................... 143
Summary Tab ........................................................................................................ 144
Source Connector .................................................................................................. 145
Destinations Connector .......................................................................................... 151
Code Templates ..................................................................................................... 158
Scripts ................................................................................................................... 158
Channels Implementation Verification ..................................................................... 158

PART V ADVANCING IN MIRTH CONNECT

Chapter 18 Debugging JavaScript in Mirth Connect .................................................................. 161


Built in Logger function .......................................................................................... 161
Rhino JavaScript Debugger in Standalone Mode ....................................................... 162
Rhino JavaScript Debugger in Embedded Mode ........................................................ 163
Eclipse JSDT Debugger in Embedded Mode .............................................................. 168
Console Input ......................................................................................................... 172
Chapter 19 Utilizing JMS (Java Message Service) ...................................................................... 174
Scenario Overview .................................................................................................. 175
Sending Messages .................................................................................................. 176
Sending Objects...................................................................................................... 183
Channels Implementation Verification ..................................................................... 189
Chapter 20 Polling Web Services ............................................................................................. 191
Scenario Overview .................................................................................................. 191
Summary Tab ......................................................................................................... 192
Source Connector .................................................................................................. 192
Destinations Connector .......................................................................................... 193
Channels Implementation Verification ..................................................................... 199
Chapter 21 Building Extensions ............................................................................................... 201
Creating Templates................................................................................................. 203
Signing Extension.................................................................................................... 208
Deploying Extension ............................................................................................... 210
Extension Implementation Verification .................................................................... 211

Chapter 22 Tuning Mirth Connect ............................................................................................ 213
Performance Tuning ............................................................................................... 214
Security Protection ................................................................................................. 219

Book Resources.............................................................................................................................. 223

PART VI APPENDICES
A: Eligibility Query Request (QUCR_IN200101) Template .......................................... 225
B: Eligibility Query Results (QUCR_IN210101) Template ............................................ 226
C: MS Access Log Database Structure ....................................................................... 227
D: PostgreSQL Eligibility Database Structure ............................................................. 227
E: XSLT to transform from HL7v3 to HL7v2 ............................................................... 228
F: JavaScriptTask.java.............................................................................................. 230
G: Rhino Script Engine script samples ...................................................................... 233
H: Archives Content ................................................................................................ 239


Introduction
As Mirth Corporation (now a subsidiary of Quality Systems, Inc.) says on its website,
“Mirth Connect is the Swiss Army knife of healthcare integration engines, specifically
designed for HL7 message integration. It provides the necessary tools for developing,
testing, deploying, and monitoring interfaces. And because it’s open source, you get all of
the advantages of a large community of users with commercial quality support.”

In addition, "The 2014 HL7 Interface Technology Survey Results" show that Mirth Connect
is one of the fastest growing healthcare messaging platforms due to its open source
paradigm and robust functionality for HL7 messaging and X12 documents. Mirth
Connect also speeds up the development of interfaces for data exchange across different
formats and diverse healthcare system environments.

This book describes version 3.x of Mirth Connect to the point that readers are confident
enough to start building their own healthcare data exchange interfaces and transforming
various versions of HL7 messages.

As you read this book, you will be implementing a fictitious Eligibility Query Service. Each
connection point (channel) is explained in a separate chapter, which in turn provides
step-by-step instructions on how to create and code data transformation rules.

This book is written using version 3.5.0 of the product. Consequently, other releases may
include new features, and features used in this book may change or disappear. You may
also notice some differences between the screenshots provided in the book and those you
see when using Mirth Connect.

Who is this book for?

I wrote this book primarily for application developers and system integrators who have
found the online Mirth Connect documentation lacking and needed a guidebook that
explains things in a more detailed and organized way.

In a book of this size, I cannot cover every feature that Mirth Connect v3.x or previous
versions have; consequently, I assume you already have some familiarity with Mirth
Connect.

Assumption

This book assumes that you are dealing with applications that use message-oriented
middleware products and expects that you have at least a minimal understanding of
Web service technologies including, but not limited to, XML, XML Schemas, XPath, XSL
Transformation and SOAP/WSDL.

Before you start reading this book, you should have a basic knowledge of JavaScript and
Java; of MS Access and PostgreSQL databases from a database administrator perspective;
and be familiar with operating system environment variable settings.

You should also have basic knowledge of HL7, the standard that is being used to
exchange healthcare data, both version 2 and version 3; and DICOM, the standard for
handling information in medical imaging.

Who should not read this book?

As mentioned earlier, the purpose of this book is to provide the reader with a high-level
overview of the capabilities and features associated with Mirth Connect v3.5. This book is
not intended to be a step-by-step comprehensive guide or a substitute of any kind for the
original training and certification programs provided by Mirth Corporation (Quality
Systems, Inc.).

This book is also not a tutorial on a specific messaging or middleware technology
implementation. All examples included in this book are for illustrative purposes only. If
you are interested in learning more about a specific technology or product, please refer
to one of the many on-line resources.

This book does not cover any specific installation, configuration, deployment or
monitoring activities for system administrators.

Errata and Book Support

I have made every effort to ensure the accuracy of this book and its companion content.
If you find an error, please report it by email to mirthconnect@isarp.com.

Warning and Disclaimer

The purpose of this book is to educate and entertain. Every effort has been made to
make this book as complete and as accurate as possible, but no warranty or fitness is
implied.

The information is provided on an “as is” basis. The author shall have neither liability nor
responsibility to any person or entity with respect to any loss or damage caused, or
alleged to be caused, directly or indirectly by the information contained in this book or
from the use of software mentioned in this book. The information, methods and
techniques described by the author are based on his own experience. They may not work
for you and no recommendation is made to follow the same course of action. No
representation is made that following the advice in this book will work in your case.

The author is not an employee or representative of Mirth Corporation (Quality Systems,
Inc.) and never has been, and the author's views and opinions are not necessarily those of
Mirth Corporation. This book is not based on trainings or certifications provided by Mirth
Corporation (Quality Systems, Inc.).

This book contains links to third-party websites that are not under the control of the
author, and the author is not responsible for the content of any linked site. If you access
a third-party website mentioned in this book, then you do so at your own risk. The
author provides these links only as a convenience, and the inclusion of the link does not
imply that the author endorses or accepts any responsibility for the content of those
third-party sites.

Furthermore, this book contains information on the subject only up to the published
date.

Acknowledgements

Like most books, this guide has been a long time in the making. I would like to
acknowledge everyone who has assisted in this project. I could not have done this
without you.

Nathan Blakley and Elliot Freedman volunteered to review early versions of a few
chapters. Your feedback helped steer me in the right direction. I'd like to thank Philip
Helger for his active contribution to the development of the open source Schematron
validator.

My biggest thanks go to Wayne Zafft, who was incredibly gracious with his time and
effort in reviewing the final version of the book.

Roadmap

This book is divided into five parts:

Part 1 provides an introduction to Mirth Connect and a high-level overview of channels.

• Chapter 1, Getting Started
Introduces Mirth Connect at a high level, and demonstrates how to download and
install Mirth Connect Server and Administrator.

• Chapter 2, What is a Channel?
Provides an overview of the channel architecture implemented in Mirth Connect. It
also covers a channel's major components such as connectors, filters, transformers
and scripts.

• Chapter 3, Creating a Channel
Walks the reader through the creation and configuration of a simple channel. It
covers some of the major points of the Mirth Connect channel implementation
model, such as the tmp and msg variables, and the different types of maps and their
visibility. It also covers Global Scripts, channel scripts and Code Templates.

Part 2 focuses on the implementation of an imaginary but complete eligibility service.

• Chapter 4, Generic Eligibility Service Introduction
Introduces the Eligibility Service as defined in the HL7v3 Normative Edition, presents
the implementation plan and walks through the required components.

• Chapter 5, Query Sender Channel
Walks the reader through the implementation of the first channel in a chain that
serves as an interface to send HL7v2 Eligibility Query messages.

• Chapter 6, HL7v2 to HL7v3 Transformer Channel
Explains the implementation of a channel that plays the role of a conduit or broker.
The chapter shows how to establish an MLLP connection to other channels, how to
filter messages based on some criteria, and how to transform messages from one
format to another using different techniques that Mirth Connect provides.

• Chapter 7, Data Logger Channel
Explains the implementation of a channel that uses a file and an MS Access database
as destinations.

• Chapter 8, HL7v3 Verification Channel
Walks the reader through the implementation of the XML Schema and Schematron
validators using external Java classes.

• Chapter 9, Response Sender Channel
Provides insight into the implementation of a database-facing channel that retrieves
data, forms the message and passes it along using a SOAP connector.

• Chapter 10, HL7v3 to HL7v2 Transformer Channel
Concludes the implementation of the Eligibility service and provides a detailed
explanation on configuring the SOAP connector and XSL Transformation.

Part 3 is dedicated to the implementation of acknowledgements.

• Chapter 11, Acknowledgements Introduction
Provides an introduction and presents the implementation plan of message
acknowledgements based on the Eligibility Service implemented in Part 2.

• Chapter 12, HL7v3 ACK Channel
Explains how to create another interim channel that receives routed HL7v3 messages
and stores them in a file.

• Chapter 13, HL7v3 Verification ACK Channel
Explains how to expand the functionality of the already existing channel to send
HL7v3 MCCI acknowledgements.

• Chapter 14, HL7v2 to HL7v3 Transformer ACK Channel
Explains how to expand the functionality of the already existing channel to send
HL7v2 RSP^E45 acknowledgements back and intercept HL7v3 acknowledgements
received from other channels.

• Chapter 15, Query Sender ACK Channel
Explains how to intercept HL7v2 acknowledgements received from one channel and
route them to another channel.

Part 4 covers topics related to DICOM.

• Chapter 16, DICOM Storage SCU
Provides a short introduction and presents the implementation plan of a simplified
DICOM router. Walks the reader through the implementation of the first channel in a
chain that serves as an interface to send DICOM messages.

• Chapter 17, DICOM Storage SCP
Provides an in-depth explanation of such important topics as parsing DICOM
messages, extracting objects from a PDF file, creating and deleting template nodes,
and encoding a PDF file to be submitted in HL7 messages.

Part 5 covers advanced topics.

• Chapter 18, Debugging JavaScript in Mirth Connect
Provides an in-depth explanation of such important topics as debugging filter and
transformer JavaScript using built-in and external tools such as the Rhino JavaScript
Debugger and the Eclipse JSDT Debugger.

• Chapter 19, Utilizing JMS (Java Message Service)
Introduces the JMS Sender and Listener connector configurations to pass messages
and objects through a Message Broker such as Apache ActiveMQ. Provides insight
into passing messages, and gives a detailed explanation of serialization /
deserialization techniques to pass Java objects via the Message Broker.

• Chapter 20, Polling Web Services
Explains how to extend the functionality of the Web Service Sender connector to
periodically poll data from external service providers.

• Chapter 21, Building Extensions
Provides an in-depth explanation of such a confusing topic as building a Mirth
Connect extension, using the example of building a JSON Writer Destination
Connector.

• Chapter 22, Tuning Mirth Connect
Walks the reader through Mirth Connect Server settings to increase the overall
system's performance. The chapter also provides a brief overview of available security
enhancement settings.

PART I – MIRTH CONNECT BASICS

CHAPTER 1 Getting Started

CHAPTER 2 What is a Channel?

CHAPTER 3 Creating a Channel



CHAPTER 1 Getting Started

This chapter outlines the basic Mirth Connect installation procedure. All examples in
this book are based on the Windows version of Mirth Connect v3.5, available for
download at http://www.mirthcorp.com/community/downloads

Make sure your computer meets the minimum system requirements before you start:
• Oracle JRE version 1.8 or higher;
• 1 GB of RAM (recommended);
• A web browser.

Installation

There are two possible ways to install Mirth Connect based on what package you have
downloaded or what package is available on the website. In one case, the package is an
archive of all files and classes that you need to run Mirth Connect on your computer. You
simply unzip and copy the package to an appropriate folder, for example to the
C:\Program Files\Mirth Connect\ folder. In the other case, there is a GUI-based installer
that you start and then follow the steps in the installation wizard. The installation
process itself is quite straightforward.

In both cases, what gets installed is the Mirth Connect Server, Mirth Connect Server
Manager, Mirth Connect Administrator and Mirth Connect Command Line Interface.
During the installation you have to decide which ports will be used by the Mirth Connect
Server. By default these are 8080 for unsecure communication and 8443 for SSL
connections. You can change them later using the Mirth Connect Server Manager.

To verify the installation:

• Launch the Mirth Connect Server either through the Mirth Connect Server Manager
or the Mirth Connect Command Line;
• Open the web browser and type localhost:8080 in the address bar;
• Click the Access Secure Site button on the Web Dashboard Sign In launch page;
• Type admin for the user name and repeat admin as the password, then click Sign in.

If you see the Dashboard statistics page with, most likely, no channels available, you have
successfully completed the installation and are ready to continue. If not, refer to the Mirth
Connect 3.0 User Guide written by "the same Mirth technical experts who developed the
software", available at http://info.mirth.com/Connect_Documentation_Download.html

Configuration

The Mirth Connect Server Manager can be used as a single point to launch the Mirth
Connect Service and to configure ports, allocated memory, and database connections.
However, a fully-fledged configuration description is beyond the scope of this book.

The only recommended step here is to add the path to the \custom-lib folder to the
operating system's CLASSPATH environment variable. This is the folder where you put
your Java classes, libraries and other required files.
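
For example, on Windows this can be done from an elevated command prompt; the
installation path below is only an assumption, so adjust it to your environment:

setx CLASSPATH "%CLASSPATH%;C:\Program Files\Mirth Connect\custom-lib" /M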

Versions 1 and 2 of Mirth Connect used port 1099 for viewing statistics through the JMX
(Java Management Extensions) and RMI (Remote Method Invocation) interfaces. This port
is no longer used in version 3.x. Hence, if any of your applications or your firewall is
utilizing ports 8080 or 8443, you can either change Mirth's ports using the Mirth Connect
Server Manager or manually modify the configuration file located at
\conf\mirth.properties. Don't forget to restart the Mirth Connect Server or Service for
any changes to take effect.

Mirth Connect Administrator

The Mirth Connect Administrator is a Java application that is not explicitly installed on a
local computer by default in a distributed environment. It is downloaded from the Mirth
Connect Server. The reason for this is to ensure the Mirth Connect Administrator
matches the version of the Mirth Connect Server.

To download the Mirth Connect Administrator:

• Start Mirth Connect Server if it is not already running as a service;
• Open the web browser;
• Type localhost:8080 in the address bar;
• Click Launch Mirth Connect Administrator in the Mirth Connect Administrator launch
page;
• Click Ok to open the webstart.jnlp;
• Type admin for the user name and repeat admin as the password in the Mirth
Connect Login pop-up window, then click Login.

If everything is done correctly, each time you log in, you will see the Dashboard as the
initial screen. The Dashboard displays two information panels:

• Channel status and statistics – the number of messages Received, Filtered,
Queued, Sent, and Errored. The Dashboard Tasks area on the navigation bar on the
left side has menu items essential for developing channels, such as Refresh, Send
Messages, and Remove All Messages. The same menu items can be accessed faster by
right-clicking on a channel row.
• Logs – Server Log, Connection Log and Global Maps. The Server Log is used a lot
when debugging channels during development. Double-clicking on a Server Log entry
brings up a pop-up window where you can view and copy the entire log entry content.
The Server Log is stored by the Mirth Connect Server in the database, and therefore
closing and opening the Mirth Connect Administrator brings back all entries not
previously explicitly purged. To clear the Server Log, click Clear Displayed Log under
the Server Log or Connection Log area.

Logging Level

A channel's log level can be configured manually by changing \conf\log4j.properties
entries. Available options are: ERROR, WARN, INFO, DEBUG, and TRACE, with DEBUG
selected by default. Log levels may be configured separately for filters, transformers,
postprocessors and other scripts that are explained later in this book.
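
As an illustration only, entries along the following lines set different levels for different
script types; the logger (category) names shown here are assumptions, so verify them
against the comments in your own log4j.properties file:

# Illustrative entries only; check the logger names in your own file
log4j.logger.transformer=DEBUG
log4j.logger.filter=INFO
log4j.logger.postprocessor=WARN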

FIGURE 1-1 Mirth Connect Administrator window by default

Familiarize yourself with other navigation items and tabs since this is the main tool used
to develop channels throughout this book.



CHAPTER 2 What is a Channel?

The abstract Channel is an essential part of Mirth Connect and can be seen as one-to-many
unidirectional pipes that decouple components from each other to transfer
healthcare data between two or more applications. The channel architecture
implemented in Mirth Connect can divide a large message processing task into a
sequence of smaller independent steps. This affords developers flexibility with respect to
dependencies, maintenance and/or performance. Some of the processing tasks can even
be external to Mirth Connect and developed independently.

FIGURE 2-1 Mirth Connect abstract channel architecture

In general, each channel consists of inbound and outbound Connectors, Filters and
Transformers. The connector that receives inbound messages from the Sending
Application is called the Source. Similarly, the connector that sends outbound messages
is called the Destination. From the Source connector data is passed through the channel,
where filters and transformers perform operations on the data, for example, routing a
message to one or another Destination connector and transforming the data
representation. Deciding on a channel's tasks is when wearing an analyst's hat comes into
play.

Before you create a new channel, you need to elicit the following requirements:
• Type of Application the channel reads data from (Source connector type);
• Type of Application the channel sends data to (Destination connector type);
• Type and format of the inbound message;
• Type and format of the outbound message(s);
• Mapping table(s) between inbound and outbound messages (Transformation).

Connectors

In terms of Enterprise Integration, the connector is a Message Endpoint that specifies a
particular way or, more accurately, a particular protocol Mirth Connect should use to
communicate with an external application or another Mirth Connect channel.

Mirth Connect supports sending and receiving messages over a variety of connectors
listed here in no particular order:
• TCP/MLLP;
• Database (MySQL, PostgreSQL, Oracle, Microsoft SQL Server, ODBC);
• File (local file system and network shares);
• PDF and RTF documents;
• JMS;
• HTTP (note that HTTPS is not supported in the free version);
• SMTP;
• SOAP (over HTTP).

The connector that receives the data is called a Reader, for example the MLLP Reader.
The connector that sends the data is called a Writer; the Database Writer is an example.

Connector types are configured under the Source and Destinations tabs of the channel,
which is explained later in this chapter. As should be obvious, some settings are common
across all connectors while others are unique to a specific connector type.

If you need a connector that is not shipped with the Mirth Connect installation package,
you can develop your own (such as a custom HTTPS connector). Some templates and
developer-level documentation for such development are provided in the chapter
dedicated to Mirth extensions.

Filters

In a real world scenario, when numerous applications and channels are connected, a
channel may receive messages from several sources and these messages may have to be
processed differently, based on the message type or other criteria.

There are two paradigms for solving this problem, a Router and a Filter:

• Router connects to multiple outbound channels. The key benefit of the Router is that
the decision criteria for the destination(s) of a message are maintained in a single
location.
• Filter, which is what Mirth Connect uses, is built into the message processing
mechanism and is responsible for determining whether the message should be
processed or not. The Filter inspects message properties (segments or elements)
without removing the message from the message queue. If the message cannot be
consumed by this particular pipe, it is returned to the queue unchanged for another
pipe to filter or process.

Filters can be as simple as a comparison of specific elements against a hard-coded value,
or as complex as JavaScript scripts and external Java classes. Filters can also be omitted,
allowing all messages to pass through. Some routing capabilities have been introduced
starting with Mirth Connect v3.1 by using a "destinationSet". If a destination is removed
from the destination set, this destination will not receive the message.
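
For instance, a source transformer (or filter) JavaScript step along these lines keeps a
message away from one destination; the destination name and message type used here are
purely hypothetical:

// Route only ORU messages to the "To Lab System" destination (the name is an example)
if (msg['MSH']['MSH.9']['MSH.9.1'].toString() != 'ORU') {
    destinationSet.remove('To Lab System');
}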

If a single channel needs to process more than one type of message, you can create any
number of separate pipes – Destinations – and specify none, one or more filters for each
of them.

Transformers

More often than not, messages are sent between legacy systems, custom applications
and third-party solutions, each of which is built around a proprietary data model. Even
systems that claim to support a single standard may place specific requirements on data
format and content. If we could bring all legacy systems to a single format when a new
business requirement is proposed, we would avoid conversion issues. Unfortunately, for
most legacy systems, data format, content or data sequence changes are difficult and
risky, and simply not feasible.

How do we communicate data using different formats then? In Mirth Connect this is
done by a message Transformer that translates one data format into another. As a result,
a destination application expects to receive messages it understands, which can be
processed and stored in the application's internal data format.

Mirth Connect allows message translation to occur at different levels, and to chain
message transformers to achieve a required result.

Supported transformers are:

• Message Builder maps segments of the inbound message to segments in the
outbound message.
• Iterator works similarly to Message Builder but allows you to iterate over multiple
instances of the same segment and map segments of the inbound message to
segments in the outbound message.
• Mapper maps segments of the inbound message to internal Mirth Connect variables.
These variables may be used later.
• External Script, as the name suggests, employs external JavaScript files to transform or
map the data.
• XSLT Step utilizes the XSL transformation.
• JavaScript, along with External Script, is where flexibility comes into play. Here any
kind of (Rhino) JavaScript and Java code can be used; a minimal sketch follows below.
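
As an illustration only (the chosen segments and map key are arbitrary, and it is assumed
an outbound message template has been provided so that tmp exists, as explained in the
next chapter), a JavaScript transformer step can combine the Mapper and Message Builder
behaviour in a few lines:

// Copy the sending facility from the inbound message into the outbound template (tmp)
tmp['MSH']['MSH.4']['MSH.4.1'] = msg['MSH']['MSH.4']['MSH.4.1'].toString();

// Store the message control ID in the channel map for use in later steps or destinations
channelMap.put('controlId', msg['MSH']['MSH.10']['MSH.10.1'].toString());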

Scripts

Channels also support unique features called Scripts to enhance the message processing
logic. Scripts apply to the channel itself and to all messages passing through it.

These scripts are:

• Deploy script is executed each time the Mirth Connect Server starts or a channel is
redeployed. This is the best place to initialize variables or create class objects (see
the sketch after this list).
• Attachment script deals with a message in a native format and allows extracting a
part of the message to store as an attachment or to irrevocably modify a message.
• Preprocessor script also allows handling each message in a native format before
Mirth Connect starts translating it into the internal format, which is XML.
• Filter & Transformer scripts are the main places where you handle the inbound and
outbound messages.
• Response script, as the name suggests, handles the response sent by a destination.
• Postprocessor script is executed after the message has been successfully sent.
• Undeploy script is launched each time the Mirth Connect Server stops. This is the
place to, for example, release memory that was allocated for the classes used by the
channel.
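
A minimal Deploy script sketch, assuming you only want to cache a value for later scripts;
the map key and value are hypothetical:

// Runs once when the channel is deployed; cache a lookup value for other scripts
globalChannelMap.put('sendingFacility', 'HOSP01');
return;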

Scripts are performed in the following order:


1. Global Deploy script;
2. Deploy;
3. Attachment script;
4. Global Preprocessor script;
5. Preprocessor script;
6. Source connector Filters script;
7. Source connector Transformer script or mapping;
8. Destination 1 connector Filters script;
9. Destination 1 connector Transformer script or mapping;
10. Destination N connector Filters script;
11. Destination N connector Transformer script or mapping;
12. Response 1 Transformer script or mapping;
13. Response N transformer script or mapping;
14. Postprocessor script;
15. Global Postprocessor script;
16. Undeploy;
17. Global Undeploy script.

Deploy and Undeploy scripts are performed only once, when a channel is deployed or
undeployed, respectively. It is important to note that Global Deploy and Deploy scripts
are also executed every time any channel is redeployed. The same applies to the Undeploy
and Global Undeploy scripts; they are executed for every channel. All other scripts are
performed every time a message is sent through a channel or an acknowledgement is
received. Notice that the Global Preprocessor script is executed before the channel's
Preprocessor script is executed. Similarly, after the channel's Postprocessor script
completes, the Global Postprocessor script is run.

If channels operate in series, the Attachment script of the first channel is the first to
execute. The Postprocessor script of the same channel will be executed last, after all
other scripts in all subsequent channels (see Figure 2-2).

FIGURE 2-2 Scripts execution sequence

Next, we will explore each of these steps in detail.



CHAPTER 3 Creating a Channel

Now it's time to roll up our sleeves and create a channel. We will create a simple
channel that accepts an HL7v2 message and dumps this message to a file. The idea
behind this exercise is to familiarize you with the Mirth Connect Administrator
interface.

To begin, launch the Mirth Connect Server if it is not already started and then the Mirth
Connect Administrator. Switch to Channels using the navigation bar (click Channels,
which is under Dashboard on the Mirth Connect collapsible navigation panel). Notice the
Channels Tasks menu items (in the second collapsible navigation panel) have changed to
reflect channel-editing specific menu options. Click New Channel. By default, you will
see the Edit Channel summary page. (see Figure 3-1)

FIGURE 3-1 Summary tab by default for a new channel

Enter the channel name in the Name box. Call it Simple Channel. In the Channel Tags
area, click the Add button and enter a tag (a channel alias) to sort out channels later.
Enter a description in the Channel Description box below.



Click Set Data Types and make sure that the inbound and outbound Source and
Destination are set to the HL7v2.x format.

Click Save Changes in the Channel Tasks panel.

Source Connector

Now switch to the Source tab to configure the Source connector, which specifies how the
channel reads messages from the pipe. In this example, we will use the Channel Reader
connector type that allows a message to be sent from the Mirth Connect Administrator
interface, so no other application is needed to test our channel.

FIGURE 3-2 Channel Reader Source Connector settings

Obviously, there is not much to configure for the Channel Reader connector, although
the same cannot be said for other connector types. Click the Connector Type drop-down
list and go through other connector types. Once this has been done, switch the
connector type back to Channel Reader.

Click Edit Filter in the Channel Tasks panel to modify the Source Connector Filter. Select
Add New Rule in the Filter Tasks navigation bar; or right-click in the Source Filter area
and select this menu item from the pop-up menu. To change the filter type, double-click
on Rule Builder in the Type column, and, for example, select JavaScript. Now double-click
New Rule in the Name column to change the filter name. Right-click to delete this
sample filter rule and then click Back to Channel in the Mirth View navigation bar on the
left side of the Mirth Connect Administrator window.



The Source Connector Transformer behaves in a similar way. Create some transformers
just for practice. Delete all transformers you have created and return to the Source
connector.

Click Validate Connector in the Channel Tasks panel. If the connector is reported as valid,
click Save Changes.

Notice that you can also import and export the Source connector as well as export an
entire channel.

TMP, MSG and MESSAGE

If you have ever created a transformation script for a channel or even read about it, you
probably know already that there are two “magic” variables called tmp and msg you have
to work with to tweak the inbound or outbound message.

What are they, where do they come from, and why is it message in one place and msg in
another?

To answer these questions let us review how the Mirth engine handles incoming
messages and passes them along.

Attachment Script

The first gate that a message meets is an Attachment JavaScript handler. The inbound
message at this stage is accessible as a message object and has not been stored in the
database yet. Therefore, you can use this script to strip off data that should not go into
the database for one reason or another. For example, comments in your message that
use a character set that your database does not support can be deleted here.
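
A rough sketch only, assuming the script returns the (possibly modified) message as
described above; the segment choice and regular expression are just placeholders:

// Convert to a JavaScript string, then strip NTE (comment) segments before storage
var raw = String(message);
raw = raw.replace(/^NTE\|[^\r\n]*\r?\n?/gm, '');
return raw;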

Preprocessor Script

Before the inbound message hits the Preprocessor handler, the message is stored in the
database in its raw format. Thus, even if the message is modified, the raw content is still
available. In both the Global and the Channel's preprocessor scripts, the inbound
message is accessible as the message object or by calling
connectorMessage.getRaw().getContent()
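
A trivial preprocessor sketch that only logs the raw content and passes the message
through unchanged might look like this:

// Log the raw inbound message and hand it on for further processing
logger.info('Raw message received: ' + connectorMessage.getRaw().getContent());
return message;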

Source Filter and Transformer Scripts

In fact, both filter and transformer scripts are executed as a single script like this:

if (doFilter() == true) {
    doTransform();
    return true;
} else {
    return false;
}

Regardless of how many rules you add to the filter and how many steps you add to the
transformer, at the end they all are combined into a single script. Before executing this
script, the Mirth engine adds the following line to create an E4X XML object [1] for the
inbound message: msg = new XML(connectorMessage.getTransformedData());

If you provided a message template in the outbound message template window, the
engine also adds the following line to create the E4X XML object for the outbound
message: tmp = new XML(template);

At this stage, inbound and outbound messages can be accessed directly using:
connectorMessage.getRaw().getContent()
connectorMessage.getTransformedData()

The database contains only the raw message stored at the preprocessor step. If the Mirth
engine fails, then when Mirth is restarted the engine takes the stored raw message and
processes the message again.

If you have not provided any outbound templates or explicitly created the tmp instance,
the msg instance is used as the outbound message for the Source connector.

Destination Filter and Transformer Scripts

Destination Filter and Transformer scripts are similar to the Source Filter and Transformer
scripts. The Mirth engine creates msg and tmp instances to deal with inbound and
outbound messages. As stated above, at this stage inbound and outbound messages can
be accessed directly using:
connectorMessage.getRaw().getContent()
connectorMessage.getTransformedData()

When the message is fully processed, the database contains raw, processed raw,
transformed and encoded messages for the Source connector, as well as transformed,
encoded and sent messages for the Destination connector.

If you have not provided any outbound templates or explicitly created the tmp instance,
the msg instance is used as the outbound message for the Destination connector.

[1] E4X (ECMAScript for XML Specification) is defined by the ECMA-357 standard, available for download at
http://www.ecma-international.org/publications/standards/Ecma-357.htm
Response Script

Similarly, the Mirth engine creates the msg object using the information returned by
connectorMessage.getResponseTransformedData(), and the tmp object, if the
outbound message template is provided.

Postprocessor script

The scripts that are executed last are the channel's Postprocessor script, followed by the
Global Postprocessor script. These two are simply placeholders and they do not provide
any mechanism to deal with messages (and logically they should not).

Now, we are ready to move on.

Destinations Connector

Next, switch to the Destinations tab to specify how and where the channel sends the
outbound message. Each channel must have at least one destination, though more than
one is also possible.

For now, change the destination name to To File, change the connector type to File
Writer, and configure the connector.

Click the Connector Type drop-down list and review other connectors to familiarize
yourself with other connector types and settings. Switch back to File Writer. Specify the
filename and the folder you would like to use when this file is written. Click the Test Write
button to verify your settings.

An important step is to indicate which kind of data to export to the file. Drag and drop
Encoded Data from the Destination Mappings list to the Template box. This tells the
channel to convert and write the inbound message to the file using the format originally
specified on the Summary tab, which is HL7v2. To verify the outbound message format,
return to the Summary tab and click Set Data Types.

Note: You can navigate in the Mirth Connect Administrator using either the
navigation panels on the left side of the application window or the context-dependent
pop-up menu that is displayed when you right-click.

Each destination has its own filter, transformer and response handler. Destinations can
be rearranged to let the busiest destination act first, thereby increasing overall channel
performance. Destinations can be disabled if they are no longer needed. Finally, as with
the Source connector, you can import and export the Destination connector or export
the entire channel.

FIGURE 3-3 File Writer Destination Connector settings after all changes

Filter

We will not specify a filter for this particular channel, and therefore our single To File
destination will handle all messages passed by the Source connector.

Transformer

The mapping or transformation of the inbound message to the outbound message can
be done either in the Source connector transformer or in the Destination transformer.
My preference is to use the Destination transformer to handle messages that belong to
this particular destination and filter out all others.

Click Edit Transformer and create a new transformer step. Change the transformer step
type to JavaScript and rename it to Mapping.

Notice that the right side of the Transformer window contains three additional tabs:
Reference, Message Trees and Message Templates. They are explained in this chapter in
reverse order, starting with the Message Templates tab.



The Message Template tab makes it easier to map the inbound message to the outbound
message. Switch to this tab. Here you see two areas for message templates. For
simplicity, I will use the HL7v2.4 ACK message for this channel. Normally, I would add a
destination filter to allow only this type of message to go through. If you do not have
an HL7v2.4 ACK message or do not know what that is, here is a template. (Source 3-1)

SOURCE 3-1 HL7v2.4 ACK template


MSH|^~\&|ADM||ALL||||ACK^A01^ACK||D|2.4|||NE|
MSA|||

Copy and paste this template to the Inbound Message Template or Outbound Message
Template areas. The Data Type in both cases must be HL7v2.x. Review the list of data
types for the outbound message template and click Properties for each to become
familiar with different options Mirth Connect offers. For example, with HL7v3 messages
Mirth allows you to strip out namespace declarations. Do not forget to revert the Data Type
back to HL7v2.x.

Now switch to the Message Tree tab. Mirth Connect tries to parse the message template
and represent it in a tree-like structure. If it fails, you will see an alert like Template is
not valid HL7v2.x. Since our template is valid, it is parsed successfully.

Expand any node, for example MSA, and drag and drop an [empty] node under MSA.1.1
to the Transformer JavaScript editor area. Here is what you should see -
msg['MSA']['MSA.1']['MSA.1.1'].toString().

The last tab (in this review) is Reference. This tab shows predefined and user defined
JavaScript functions that can be dragged and dropped to the Transformer JavaScript
editor. JavaScript functions are logically grouped, with All showing all functions as a
single list in alphabetical order. To narrow the list, type a few characters in the Filter
text box; for example, start typing “log”, you should see entries for log info and error
statements. Drag and drop the “Log an Info Statement” to the JavaScript editor area. You
will see:

logger.info('message');

Choose Date Functions in the category list, or start typing “date” in the Filter box, and
drag and drop “Get Current Date” to the JavaScript editor area. You should see:
var dateString = DateUtil.getCurrentDate(pattern);

Learning all the usage rules implicit in the Mirth Connect API can be a very time-consuming
process. Fortunately, Mirth 3.5 introduces the Code Completion (also called Code
Recommenders) feature. Code Completion analyzes source code while you are typing
and extracts common usage rules and patterns for you. This feature is configurable
under Mirth Connect panel > Settings > Administrator tab > Code Editor Preferences.

FIGURE 3-4 Code Completion in JavaScript editor

Complete the transformation step as it is shown in Figure 3-5. Add more transformation
steps if you like.

FIGURE 3-5 To File destination transformer step



When you are done, click Validate Step. This only checks syntax correctness of the code.
Then click Back to Channel. Click Save Changes and we are ready to deploy our first
channel.

SOURCE 3-2 Simple Channel Transformer script


tmp['MSH']['MSH.7']['MSH.7.1'] = DateUtil.getCurrentDate("yyyyMMddhhmmss");
logger.debug("Simple Channel: message type = " + tmp['MSH']['MSH.9']['MSH.9.1']);

Deployment

Once the channel is saved, we need to deploy it. To deploy a single channel click Deploy
Channel in the Channel Tasks navigation panel. To deploy several channels
simultaneously, click Channels under the Mirth Connect navigation panel, then click
Redeploy All under the Channel Tasks panel.

In both cases, the Mirth Connect Administrator switches to the Dashboard view with, in
this example, a Simple Channel listed as deployed. Verify that the channel status is
Started.

Testing the Channel

As you may recall, the Source connector type for Simple Channel is the Channel Reader.
It allows placing a message directly to the channel using only the Mirth Connect
Administrator interface.

To do this, select Send Message in the Dashboard Tasks navigation panel, copy and paste
a sample of the HL7v2.4 ACK message or use the Source 3-3 snippet below if you do not
have one; then click Process Message.

FIGURE 3-6 Using the Channel Reader connector type to test the Channel



Go to the folder specified in the To File destination's File Writer connector and check
whether the test.hl7 file is there. Open the file to verify the content; notice that the date
field of the MSH segment has changed to the current date/time and is formatted as we
defined it.

SOURCE 3-3 HL7v2.4 positive ACK message sample


MSH|^~\&|ADM|Sender|ALL|Receiver|2014||ACK^A01^ACK|0123456789|D|2.4|||NE|
MSA|AA|0987654321

The Dashboard screen shows the channel statistics. Figure 3-7 shows the Server Log
information panel, which displays the destination transformer logging output. (Log dates
and times are intentionally deleted.)

FIGURE 3-7 Simple Channel test results

Click Clear Statistics, select everything in the pop-up window (this can be done faster by
clicking Invert Selection) and click OK.

Switch between Current Statistics and Lifetime Statistics (radio buttons in the channel's
status bar) and notice the difference.

Click Remove All Messages, select Include selected channels that are not stopped... and
click Yes. Notice that Lifetime Statistics still shows all messages that passed through the
channel.

Click the red X sign on the bottom of the Server Log info panel to clear all log entries.

Failure

As a software developer you know pretty well that code fails sometimes. What if there is
an error with connectors, filters or transformers? Let us mimic such a situation.



Undeploy the channel, click Channels, and select Simple Channel. Click Edit Channel or
double-click to switch to the editor mode. On the Summary tab click Set Data Types and
change the Source connector inbound message type from HL7v2.x to XML. HL7 version 2
messages are not XML, so this obviously should fail.

FIGURE 3-8 Intentionally malformed inbound message type to test the failure case

Save the changes and redeploy the channel. Clear statistics, messages and logs as
explained earlier if you have not done so yet. Send the same ACK message again. The
result should be different this time; the Errored column should show that one message
has failed.

To find the cause of the error(s), expand the channel in the Name column, select the
connector that contains the error - it is the Source connector in our case - and click
View Messages or double-click the Name. You should see the connector view window.
Select the Source connector that has the ERROR status, then click the Errors tab below.

The stack trace in this window gives some hints as to the cause of the error and where to
look for it. An example of the stack trace is shown in Figure 3-9.

Switch between Messages, Mappings and Errors tabs and familiarize yourself with the
information they provide.

Select Reprocess Messages to send the selected message again.



To fix the problem click Channels in the Mirth Connect navigation panel, go to the
Simple Channel summary page and change the Source connector inbound message type
from XML back to HL7v2.x. Save and redeploy the channel.

FIGURE 3-9 Stack trace for the Source connector error

Send the message and notice that the Dashboard shows that the message is successfully
processed and the file is created as expected.

Global Map, Global Channel Map, Channel Map

You might be puzzled with a question on how to pass variables created in one place, for
example in the Global Script, to another place, most likely the channel‟s script. Mirth
Connect does this using a tricky concept called Global and Channel Maps. There are five
of them, three of which are used more often than the others:

 Global Map variables are available across all channels; a channel may place the
variable in the Global Map and other channels can read it. This capability must be
used carefully because a channel may overwrite a variable that another channel
needs to receive.
 Global Channel Map is for variables that are specific to a particular channel. The
channel's Summary tab has an option to clear this map when the channel is
deployed.
 Channel Map is the same as the Global Channel Map but exists only in the context
of a single message, i.e., all data will be cleared as soon as a new message arrives.



The other two maps are:

 Connector Map is used by the Mirth engine and stores connector specific
information, e.g., the sender of the message and the message type. You may use this
map; however your values may be purged.
 ResponseMap stores a channel's response message(s), e.g., the ACK message.
Typically, you create an acknowledgement message in a destination transformer and
map the response in the channel's postprocessor.

The lifespan or visibility of these five maps is different. Figure 3-10 shows when a map is
available to use and how long a variable placed into a map can be retrieved.

FIGURE 3-10 Maps visibility for different stages of message processing

The Connector Map is not available in the channel's attachment script; however, data
placed in the Global Preprocessor script can be retrieved in the channel's preprocessor
script.

The Global Map allows passing a variable to all script handlers. This map can also be
accessed using the following call:
Packages.com.mirth.connect.server.util.GlobalVariableStore.getInstance().getVariables()

The Global Channel Map is not available during Global Deploy and Global Undeploy, but
other than that, it behaves like the Global Map. This map is accessible directly by calling:
Packages.com.mirth.connect.server.util.GlobalChannelVariableStoreFactory.getInstance().get(channelId).getVariables()

The Channel Map, Connector Map and Response Map are instances of
connectorMessage.getChannelMap(), connectorMessage.getConnectorMap(), and
connectorMessage.getResponseMap() respectively.

All of these maps are instances of the java.util.Map interface, which means all
methods applicable to Map<K,V> are applicable to Mirth Connect maps as well.
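
Following this note, standard Map and Set methods can be called on them directly in any transformer step. Below is a small illustration with hypothetical keys, not part of the book's channels:

// Store a value, then inspect the map with ordinary java.util.Map methods
channelMap.put('facility', msg['MSH']['MSH.4']['MSH.4.1'].toString());
if (channelMap.containsKey('facility')) {
    logger.info('Facility: ' + channelMap.get('facility'));
}

// Iterate over every key currently held in the channel map
var keys = channelMap.keySet().iterator();
while (keys.hasNext()) {
    var key = keys.next();
    logger.debug(key + ' = ' + channelMap.get(key));
}
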
Notice that sometimes shortcuts are used to access maps. These shortcuts are nothing
more than JavaScript wrapper functions. For example $c(key) extracts a value from the
ChannelMap in the following way:

function $c(key, value) {
    if (arguments.length == 1) {
        return channelMap.get(key);
    } else {
        return channelMap.put(key, value);
    }
}

The table below lists shortcuts for available maps.

TABLE 3-1 Map shortcuts


Get() shortcut Put() shortcut Map
$r(key) $r(key,value) ResponseMap
$co(key) $co(key, value) ConnectorMap
$c(key) $c(key, value) ChannelMap
$s(key) SourceMap (read-only)
$gc(key) $gc(key, value) GlobalChannelMap
$g(key) $g(key, value) GlobalMap
$cfg(key) ConfigurationMap (read-only)
$(key) All maps

The $(key) shortcut is special. It goes through all the maps in the order listed in Table 3-1,
plus the ResultMap, checks for the key, and returns the first occurrence. This means that if you
have two values with the same key, one stored, for example, in the ChannelMap and
another in the ConfigurationMap, then the $(key) shortcut returns the value found in
the ChannelMap. (see Appendix G)
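
As a quick illustration of these shortcuts (hypothetical keys, usable in any filter or transformer step):

$gc('lastControlId', msg['MSH']['MSH.10']['MSH.10.1'].toString());   // GlobalChannelMap put
$c('receivedAt', DateUtil.getCurrentDate('yyyyMMddHHmmss'));         // ChannelMap put

// $(key) searches the maps in the order of Table 3-1 and returns the first match it finds
logger.info('Control ID ' + $('lastControlId') + ' received at ' + $('receivedAt'));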

Global Scripts

There are two things we have not touched yet: Global Scripts and Code Templates.

Global Scripts play the same role as channel scripts and help in separating the business
logic. They have the same Deploy, Undeploy, Preprocessor and Postprocessor scripts; the
only difference is that they apply to all channels.

Click the Channels in the Mirth Connect navigation panel, then click Edit Global Scripts in
the Channel Tasks. The Deploy Global Scripts editor window is shown by default.



FIGURE 3-11 Deploy Global Scripts to configure destination folder

Assume we have numerous channels like Simple Channel, each of which stores messages
in the same folder on a local computer, and now we need to redirect these files to a
network drive instead. Changing the Destination connectors one by one may be too tedious and
error prone. It is more sensible to set the folder location once and pass it to each
connector, changing it only when needed. This is what the Deploy Global Scripts allows
us to do.

The right side of the Global Scripts editor area shows the list of the predefined and user-
defined JavaScript functions, similar to the Reference tab for the channel's transformers.

Select the Map Functions in the Category list or start typing “global” in the Filter box,
drag and drop the Put Global Variable Map in the editor. Rename the key and type the
path to the folder instead of the value.

Your final script may look like Source 3-4. What this code does is take the location of
the temp folder, defined as an operating system environment variable, and store this
value in the Global Map. The script also verifies whether such a folder actually exists.

SOURCE 3-4 Deploy Global script


var tempdir = java.lang.System.getenv("TMP");
globalMap.put('testDir', tempdir );

var testDir = new Packages.java.io.File( globalMap.get('testDir') );

if ( !testDir.exists() ) try {
    testDir.mkdir();
} catch (err) {
    logger.error( 'GLOBAL Scripts: Cannot create Temp folder ' + err.message );
}

Validate the Deploy Global Scripts, and save it. Now it is time to change the Simple
Channel Destination connector settings.



Open Simple Channel for editing, switch to the Destinations tab and click To File if you
have created more than one destination. Type ${testDir} in the Directory box. This time,
clicking Test Write displays an error because the testDir variable is not available in editing
mode. You need to redeploy everything for the Deploy Global Scripts to take
effect.

FIGURE 3-12 Destination Directory setting is taken from the Global Map

Save Simple Channel, click Redeploy All. If the folder you defined cannot be created for
any reason, there will be an error in the Server Log info panel in the Dashboard window.
If so, correct the Global Scripts and redeploy everything. Test the Simple Channel as
explained earlier in this chapter.

If you would like to save the Global Scripts to a file, keep in mind that they are not
exported along with the channels when using the Export All Channels option. You need to
export or import the Global Scripts explicitly using the appropriate menu items.

Code Templates

Code Templates is where you create and manage your own code, variables and
functions. Variables and functions created here are available through the Reference list in
the editors for Filters, Transformers and other scripts. To open the code templates editor
click Channels under Mirth Connect navigation panel, click Edit Code Templates under
Channel Tasks navigation panel.



As an example, I would like to define a function that returns the formatted date/time.
The code is pretty simple. Click New Library in the navigation panel and give it a name.
Define the library visibility under the Channels list. (see Figure 3-13)

FIGURE 3-13 User defined library

Click New Code Template, change the template name in the Name column from
Template 1 to Get Current Date/Time or something similar, verify that the Type is set to
Function, and type the code for this function. (see Figure 3-14)

FIGURE 3-14 User defined function
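
If you prefer to type the function rather than import it from the archive, a minimal sketch might look like the following; it assumes Now() simply wraps DateUtil.getCurrentDate, while the exact body used in this book is the one shown in Figure 3-14:

// Returns the current date/time formatted with the supplied pattern,
// for example Now("yyyyMMddHHmmss")
function Now(pattern) {
    return DateUtil.getCurrentDate(pattern);
}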

Context defines the visibility of user defined functions. Thus, in this example the Now()
function is available everywhere, i.e., it can be used in all types of scripts. Change the
context of this function to Message and verify that Now() is available in connectors'
scripts but not in Deploy or Undeploy scripts.

Save the Code Template and return to the Simple Channel, to the To File destination
transformer.



Switch to the Reference tab and pick the User Defined Functions category from the drop
down list. Notice that the newly created function is available there. Drag and drop this
function to the editor area and modify the MSH.7.1 segment to set the date and time
using this function. (see Figure 3-15)

FIGURE 3-15 Transformer script user defined functions

Save the changes and test the Simple Channel as we did before. You should get similar
results.



PART II – GENERIC ELIGIBILITY SERVICE IMPLEMENTATION

CHAPTER 4 Generic Eligibility Service Introduction

CHAPTER 5 Query Sender Channel

CHAPTER 6 HL7v2 to HL7v3 Transformer Channel

CHAPTER 7 Data Logger Channel

CHAPTER 8 HL7v3 Verification Channel

CHAPTER 9 Response Sender Channel

CHAPTER 10 HL7v3 to HL7v2 Transformer Channel


CHAPTER 4 Generic Eligibility Service Introduction

This part is fully devoted to the implementation of an imaginary Eligibility Service. The
reason for choosing Eligibility over, say, Patient Administration (ADT), General Order
Messages (ORM) or Unsolicited Observation Message (ORU), which you may already
know, is to fully concentrate on Mirth Connect features instead of business or clinical
requirements.

Note: Before we start, I would like to stress that interactions and messages shown
here do not represent a real implementation and should not be used “as is”.

Eligibility Service Introduction

The Eligibility Service is used by a Healthcare Provider to query whether a Patient has
coverage with a Payer as of a given date. An expected response contains a simple
Yes/No answer depending on whether the Patient has insurance coverage. The response
may also contain a specific reason code and comments to provide additional information
to clarify the Patient status.

To give you a sense of how this may work in a real-life scenario, I would like to quote a
narrative example from the HL7v3 Normative Edition 2014. You can find it in HL7v3
Standard > Universal Domain > Eligibility Topics > Storyboards:

“During a patient's visit to the optometrist, it was determined that the patient would
benefit from the use of eyeglasses. The optometrist asked the patient if they had eyeglass
coverage with an extended benefit plan. The patient indicated that they were not sure but
that they thought they had some type of extended coverage through their employer with
the HC Payor, Inc. The patient looked through their wallet and in fact found an HC Payor,
Inc. extended benefit coverage card that included the plan ID, group coverage number,
insured's ID number, name and DOB and plan expiry date.

The optometrist asked the patient if they would like the secretary to determine if they were
covered by the HC Payor, Inc. extended benefit plan for the purchase of eyeglasses. The
patient indicated that they would appreciate this because if eyeglasses were not covered
under the plan, they would not be able to purchase them at this time.

The secretary queried the HC Payor, Inc. extended benefit plan giving the patient unique
identifier, name, DOB, as well as the plan particulars from the patient benefit coverage
card and asked if eyeglasses were covered under the plan for this patient. The response
indicated that for this patient and plan, a maximum of $300.00 every 2 years is covered for
the purchase of eyeglasses. This was communicated to the patient, who immediately
identified they would like to purchase a pair of eyeglasses. It should be noted that the
response by the Payor is not a commitment from the Payor to pay for the eyeglasses (the
claim).”

Scenario Overview

The secretary mentioned in the storyboard above uses a Sender application to submit a
query to the Service application located on the HC Payor Inc side and receive a response.
This scenario, further described as an interaction model, illustrates the process followed
to create, submit and process the Eligibility Query.

Interaction Model

The Interaction Model for the Eligibility Service shows interactions between applications
and is depicted using the sequence diagram (Figure 4-1).

FIGURE 4-1 Imaginary Eligibility Query sequence diagram



This interaction model has been intentionally complicated by adding an interim step to
transform incoming HL7v2 query messages to outgoing HL7v3 query messages and vice
versa.

Application Roles

Application roles in this chapter do not follow the HL7 Standard definition of
Application Roles. For example, an application that uses an HL7v3 message to request
verification of whether a patient has insurance coverage has a very specific
name in HL7v3 terms: FICR_AR021001UV01. Instead, the application roles below reflect the
channel names we implement later.

 Sender: A request is initiated by the Sender by creating an HL7v2 query message.
This message is sent to the HL7 Transformer for processing. The Sender also receives
and processes the response message.
 HL7 Transformer: The HL7 Transformer receives the query from the Sender, verifies
the query and transforms it into the HL7v3 query message format. The newly
constructed message is then sent to the Service. The HL7 Transformer also receives
the response from the Service, translates the response to HL7v2 format and sends it
back to the Sender. The HL7 Transformer also logs messages that fail the HL7v2
message structure validation.
 Service: The Service receives the query message, analyses and processes the
message, queries the database, forms the response message and sends it back to the
HL7 Transformer. Like the HL7 Transformer, the Service logs messages that fail the
HL7v3 message structure validation.

To try as many features as possible within the scope of the given scenario, the actual
implementation of the Eligibility Query interaction model will go a bit beyond these
three application roles (channels).

Messages and Interactions Overview

The HL7v2 Standard defines a message as an essential part of a data interaction model
between systems. Each message is identified by the pair of message type and trigger
event that uniquely identify the message function. The message itself is comprised of a
group of segments in a predefined order.

Similarly, the HL7v3 Standard defines an interaction as “a unique association between a
specific message type (information transfer), a particular trigger event that initiates or
"triggers" the transfer, and the Receiver Responsibilities (in terms of response interactions)
associated with the receipt of the Interaction.”

Because of this difference, the HL7v2 message will be presented by message type and
trigger event, and the HL7v3 message by an interaction name.

Messages used for the implementation of the scenario are:

 QBP^E22 – HL7v2 Query Authorization Request Status – The Eligibility Query
Request message is used by a Provider to query a Payer against an Authorization
Request or a specific Product/Service Line Item in an Authorization Request.
 RSP^E22 – HL7v2 Authorization Request Status Query Response – The Eligibility
Query Response message is used by a Payer to respond to a Provider who submitted
a Query Eligibility (QBP^E22) request.
 QUCR_IN200101UV01 - HL7v3 Eligibility Query Request Generic – This message
requests the status of a patient's eligibility for services.
 QUCR_IN210101UV01 - HL7v3 Eligibility Query Response Generic – This message is
used as a response to provide the information about a patient's eligibility for
services.

Even if the purposes of the message pairs are similar, it does not mean that their content is
similar as well. There are fields that have no counterpart in the other pair; data may be
lost if you simply pass information between message pairs.

Eligibility Query Channels Overview

There are many ways to accomplish the same task in Mirth Connect. The diagram in
Figure 4-2 illustrates the game plan for this part of the book. The main idea is NOT to
implement the service in the correct way but to explore as many Mirth Connect options
as possible. The actual activity of a real system may be quite different from this one.

The implementation plan is based on using channels that mimic the scenario of sending
the Eligibility Query request to the Service and receiving the response back:

 Query Sender – a channel that serves as an interface to handle and place an original
QBP^E22 message into the pipe using an MLLP Message Transport protocol.
 v2-v3 Transformer – this channel plays a role of a translator that receives the HL7v2
request message, verifies it, creates the HL7v3 message, and maps data from HL7v2
to HL7v3 messages. If the received message fails validation, the message is sent for
logging performed by Data Logger.



 v3 Verification – this channel receives the HL7v3 request message, verifies it,
decomposes it, and stores data into the database. If the received message fails
validation, the message is sent for logging performed by Data Logger.
 Response Sender – plays the role of the Service. It queries the database periodically,
creates an HL7v3 response message based on data taken from the database, and
sends this message back to the Query Sender using the Web service.
 v3-v2 Transformer – this channel plays the role of the translator that handles the
HL7v3 response message, creates HL7v2 message, maps data from HL7v3 to HL7v2
messages, and stores the resulting RSP^E22 message as a file.
 Data Logger – receives validation errors along with the message that caused them,
and stores error details in a log file and the database.

FIGURE 4-2 Eligibility Query channels’ implementation plan

Throughout this implementation we will explore a variety of connectors: Channel Reader,
TCP/IP Listener and Sender, Database Writer and Reader, Web Service Listener and
Sender, and File Writer.

We will use all the techniques explained earlier in this book, including, but not limited
to: filters, transformers, code templates, and global and channel scripts.

At this point it is assumed that you are very comfortable navigating the Mirth Connect
Administrator.

Recommended Tools and Packages

To make the development easier and less tedious, here is a list of recommended tools
and packages. Some of them, such as PostgreSQL, have to be installed prior to a particular
channel implementation.



 HL7v2 viewer such as HL7Spy by Inner Harbour Software;
 HL7v3 viewer such as Altova XMLSpy by Altova GmbH;
 MS Access and MS Access ODBC driver or similar database;
 PostgreSQL or similar database supported by Mirth Connect;
 JDBC Level 4 driver for PostgreSQL or selected database;
 phLOC Schematron Java library package by Philip Helger.

And last but not least, use the schema and other related files from the archive provided
with this book (see Appendix G).



CHAPTER 5 Query Sender Channel

The Query Sender channel is the first and simplest channel we will implement. Its task is to
serve as an interface to send the HL7v2 Eligibility Query request message. Thus,
the Source connector is the Channel Reader. To connect to the next channel down
the pipe, which will be the v2-v3 Transformer channel, using the MLLP Message
Transport protocol, the Destination connector is configured as a TCP Sender.

To make testing a bit easier, this channel tweaks each message by assigning a unique
UUID and the current date/time stamp to distinguish one incoming message from
another. Also, we can use this channel to create malformed messages to test the
verification scripts we will implement later for two transformation channels.

To create this channel, let us go through each of the channel's tabs - Summary, Source,
and Destinations - and perform the required configuration settings.

Deploy, Preprocessor, Postprocessor and Undeploy scripts are not changed, so they are
not discussed in this chapter.

Summary Tab

Create a new channel or import the Query Sender channel from the archive provided with
this book and switch to the Summary tab.

Type the channel name, channel tag and channel description. You may omit the channel
tag if the only channels on your server are those related to this project, assuming you
have deleted the Simple Channel we created before.

Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to HL7v2.x. Leave the other settings there
unchanged. Verify that the Initial State of the channel is Started (other options are
Paused and Stopped).

There are no attachments to configure for this channel or for any other channel in this
project. Nevertheless, you may change the attachment type using the drop-down list and
click Properties to familiarize yourself with different ways of handling attachments.



FIGURE 5-1 Query Sender channel Summary tab settings

The Channel Tags field is used to logically sort channels and is provided for
convenience.

FIGURE 5-2 Set Data Types settings



Return the Attachment to None, if you changed it.

Source Connector

Switch to the Source tab; verify that the Connector Type is set to Channel Reader. No
other settings are required or available for the Channel Reader connector.

FIGURE 5-3 Query Sender channel Source connector settings

Since all incoming messages are accepted, no filters or transformers are needed for this
channel. Validate the connector and save the changes.

Destinations Connector

Switch to the Destinations tab. Rename the existing destination from Destination 1 to “To
HL7v2-HL7v3 Transformer”. You may choose a different name if you like.

Since this destination is a TCP client, change the Connector Type to TCP Sender.

Change the remote address to the localhost IP address (127.0.0.1), or type another IP
address of the TCP server if needed.

Type the remote port: 6611. If any port in this project is already used by applications on
your computer, feel free to make appropriate changes to all related channels.

Verify that ${message.encodedData} is in the Template box or drag and drop Encoded
Data from the Destination Mappings panel on the right side of the Mirth Connect
Administrator screen into this box. (see Figure 5-4)



Leave the other settings as they are and save the changes.

No filters are required for this destination. However, as we discussed earlier, this channel
tweaks inbound messages to differentiate one message from another. This is done by the
destination's transformer scripts.

FIGURE 5-4 Query Sender channel Destination connector settings

Click Edit Transformer to create transformer scripts for this destination. There are four
scripts: three are JavaScript and one is a Message Builder, to show how it works.



FIGURE 5-5 Query Sender channel Message Builder Transformer

Add a new transformer step, change the type to JavaScript and rename the step. You
may use the transformer names I gave or use your own. Below are scripts for each of the
JavaScript transformer steps.

MSH Segment updater

This script assigns inbound message elements to the outbound message elements. Since
the outbound template is not provided, the script explicitly assigns the inbound message
object to the tmp variable and maps all fields.

SOURCE 5-1 MSH Segment Transformer script


tmp = msg;
tmp['MSH']['MSH.7']['MSH.7.1'] = Now("yyyyMMddhhmmss");
tmp['MSH']['MSH.10']['MSH.10.1'] = UUIDGenerator.getUUID();

Then it assigns the current date/time to the appropriate outbound message field.
Similarly, it creates and assigns a unique UUID to the appropriate message field.

Patient ID generator

This script creates a number sequence to mimic a patient identifier such as a Social
Security Number. Needless to say, this identifier does not follow any rules and is
completely fabricated. The identifier is assigned to an appropriate message field.

SOURCE 5-2 Patient ID Transformer script


tmp['QPD']['QPD.3']['QPD.3.1'] = getPatientID();

function getPatientID() {
    var patientId = '';
    for ( var i = 0; i <= 9; i++ ) {
        patientId += Math.floor( (Math.random() * 10) );
    }
    return patientId;
};

This script is to show that you can use JavaScript functions within the steps to separate
'programming logic'.

Query Tag generator

The Query Tag generator transformer step uses the Message Builder. It maps part of the
query message Control ID number to the Query Tag of the QPD segment. If that sounds
like Latin to you, to put it another way: this transformer step assigns a unique identifier
to a request which, in the real world, may be used to debug message requests. In this
project this is done to differentiate messages from one another. (see Figure 5-5)

Fill out Message Segment and Mapping fields with the following values:
Message Segment: tmp['QPD']['QPD.2']['QPD.2.1']
Mapping: tmp['MSH']['MSH.10']['MSH.10.1'].substring(10)

Notice that there are Step and Generated Script tabs in the editor area. Switch to the
Generated Script tab. You should see auto-generated JavaScript code to support your
mapping. (see Figure 5-6)

FIGURE 5-6 Auto generated JavaScript code for Message Builder step

Generated Script is available for different transformer and filter steps. You can use it to
learn how to properly code JavaScript in Mirth.
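
Conceptually, the mapping above boils down to a single assignment; the code Mirth generates for a Message Builder step may include additional validation helpers, but the effect is roughly:

tmp['QPD']['QPD.2']['QPD.2.1'] = tmp['MSH']['MSH.10']['MSH.10.1'].toString().substring(10);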

TEST ONLY (Intentionally Malformed)

This last script is not actually needed until we implement the Data Logger channel. But in
order to keep all Query Sender channel scripts in one place, this script is also listed here.
This script verifies the message trigger event field and, if it is equal to ERR2, it randomly
tweaks three message fields – message type, trigger event and the version number – so
the message will fail validation in later steps. Note that there are rare cases when the
script does not change the message. Alternatively, you may change the content of
these fields manually each time you send the message to verify the validation algorithm;
I have found this a bit too tedious and time consuming.

SOURCE 5-3 Intentionally Malformed Transformer script


if ( 'ERR2' == msg['MSH']['MSH.9']['MSH.9.2'].toString().toUpperCase() ) {
if ( Math.floor((Math.random()*10)+1) > 5 )
tmp['MSH']['MSH.9']['MSH.9.1'] = 'QQQ';
if ( Math.floor((Math.random()*10)+1) > 5 )
tmp['MSH']['MSH.9']['MSH.9.2'] = 'E99';
if ( Math.floor((Math.random()*10)+1) > 5 )
tmp['MSH']['MSH.12']['MSH.12.1'] = '9.9';
}

Validate each JavaScript step then return to the channel and save all changes. This
concludes the changes you need to make to the Query Sender channel.

Channel Implementation Verification

The other channels down the pipe that should receive messages sent by the Query
Sender channel are not available yet. To verify this channel implementation, either change
the Destination connector type from TCP Sender to File Writer, or better, clone the
original To HL7v2-HL7v3 Transformer destination, change the connector type to File
Writer and disable the To HL7v2-HL7v3 Transformer destination. (see Figure 5-7)

FIGURE 5-7 Cloned destination



Type the directory and the file name settings, or use the Global Map value created for
Simple Channel (see Source 3-4). Drag and drop the Encoded Data message type from
the Destination Mappings panel on the left side. Save all changes and deploy the channel.

In the Mirth Connect Administrator's Dashboard view, click Send Message, and copy and
paste the QBP^E22 message sample in Source 5-4 into the message box. Note that
some fields, such as the message creation date/time, are missing.

SOURCE 5-4 QBP^E22 Query Authorization Request sample message


MSH|^~\&|ADM|Sending Organization|ALL|Receiving Organization|||QBP^E22^QBP_E22||D|2.7|||AL|AL
QPD|E22^Authorization Request^CIHI0003||^^^ISO^PHN|Everywoman^Mary^Patrick^^^^L^|19680120|MSP^|EXT
RCP|I

If you created a separate File Writer destination to store the message in a file, and have
not disabled the original To HL7v2-HL7v3 Transformer destination, select the File Writer
destination in the Destinations list. Uncheck the destination(s) that should not receive the
message.

Click Process Message. Immediately after, a file should appear in the folder you specified
for the File Writer setting. (see Figure 5-8)

FIGURE 5-8 Sending a test message to verify the Query Sender channel implementation

Open this file and make sure that the date/time field of the MSH segment (the first line)
is set to the current date/time; the message identifier, also in the MSH segment, looks
like a UUID; and the last part of this identifier repeats in the QPD segment (the fourth
line of the message).



Repeat the steps above to send another message. This time change the message trigger
event (it is E22 in the QBP^E22^QBP_E22 field) to ERR2 and send the message again.
Open the newly created file and make sure that at least one of the fields is incorrect.

Enable the To HL7v2-HL7v3 Transformer destination. Disable or delete the To File
destination and save the changes.

We are done with the Query Sender channel. Let's move on to the next channel
implementation.



CHAPTER 6 HL7v2 to HL7v3 Transformer Channel

Imagine a situation where multiple application instances build a mixed point-to-point
or interconnected environment. Each of these applications may talk in a different HL7
dialect or even version (a different language). To understand each other, they need
middleware that acts as a broker and translates messages from one version or standard
to another. One of the advantages of such a configuration is that it supports routing and
delivery of multiple types of messages over a single conduit. The v2-v3 Transformer
channel plays the role of such a broker. It receives the HL7v2 message, validates it,
translates it to the HL7v3 message standard and sends it along. If the initial message fails
validation, it is sent for logging, performed by the Data Logger channel.

Summary Tab

As was done with the Query Sender channel, either create a new channel or import the
v2-v3 Transformer channel from the archive provided with this book and switch to the
Summary tab.

FIGURE 6-1 v2-v3 Transformer channel Summary tab settings



Type the channel name, channel tag and channel description. There is no attachment for
this channel. (see Figure 6-1)

Since this channel transforms messages from one standard type to another, the data
types for the connectors will be different. Click Set Data Types and configure inbound
and outbound messages for the Source connector. If you have created a channel from
scratch, the second Destination connector is not available yet, so you have to return to
these data settings later and verify that the outbound message type for the v3
Verification channel is HL7 v3.x, and the outbound message type for the Data Logger
channel is pure XML. (see Figure 6-2)

FIGURE 6-2 v2-v3 Transformer channel Data Types settings

Note that for the HL7v2 connector the Use Strict Parser setting produces a different result
(i.e., the resulting E4X object is built differently). Therefore, mappings for messages parsed
with and without the Strict Parser setting are not compatible.

Leave the Responses unchanged. They are not used in this part of the book.

Source Connector

Switch to the Source tab. This channel waits to receive messages from the Query
Sender channel via the MLLP Message Transport protocol. Therefore, the Source
connector should be configured as a TCP Listener waiting for messages on port 6611.



Change the Response from Auto-Generate to None. Leave the other settings as they are.
(see Figure 6-3)

FIGURE 6-3 v2-v3 Transformer channel Source connector settings

No filters or transformers are needed for the Source connector. Save the changes and
(re)deploy all channels. Note that the destination has not been created yet.

Now you can actually verify the "To HL7v2-HL7v3 Transformer" destination of the Query
Sender channel. Make sure that the newly created v2-v3 Transformer channel is
deployed, open the Query Sender channel for editing, go to the Destinations tab and
click Test Connection. If everything is done correctly, you should see a dialog box with
the IP address and the port the Query Sender channel's destination has tried to connect
to. It should be 127.0.0.1:6611 if you followed the instructions given here. (see Figure 6-4)

FIGURE 6-4 Testing the Query Sender channel connection



Destinations Connector

There are two destinations for this channel. Both verify incoming messages. One handles
only valid messages, transforms them into HL7v3 format and sends them along. The
other deals only with invalid messages and sends them to logging.

In some cases, it makes more sense to do the verification in the Source connector, add
the verification result to the Channel Map and use it later in the destinations' filters.

Rename the destination to To HL7v3 Verification or come up with a name you like more.
This destination is the TCP Sender that connects to port 6622 on the local computer.

Add one more destination and call it To Data Logger. This destination, which also has the
TCP Sender connector type, will send error details to the Data Logger channel using port
6644 on the local computer.

Do not forget to check the Template box for the outbound message mapping. If it is not
there, drag and drop the Encoded Data from the Destination Mappings on the left side to
the Template box for newly created destinations.

FIGURE 6-5 v2-v3 Transformer channel Destination connectors settings



To HL7v3 Verification filter

Now click on the To HL7v3 Verification destination and open the Filter for editing (click Edit
Filter). The first thing to do is to take the inbound QBP^E22 message template (use the
snippet in Source 6-1 or the QBP template from the archive provided with this book),
and paste it into the Inbound Message Template box on the left side, then click the
Message Trees tab and expand the parsed message.

SOURCE 6-1 QBP^E22 message template


MSH|^~\&|||||||||D|2.7|||AL|AL
QPD|E22^Authorization Request^||^^^^|^^^^^^^||MSP^|EXT
RCP|I

Now create a new rule and verify that the rule type is Rule Builder. Create the first rule
to verify that the message type is QBP. To do that, expand the MSH.9 element, and drag
and drop the value under the element MSH.9.1 to the Field box. Click Equals. Click New
and type the value 'QBP'. Do not forget the quotes! (see Figure 6-6)

FIGURE 6-6 Creating filter rules from inbound message elements

Create another rule, change the operator to AND, and drag and drop the trigger event.
Do the same for the message version. Validate the scripts, return back to the channel
and save the changes.

As with the transformer steps, the new version of Mirth allows you to switch to the auto-
generated script tab to see what JavaScript code is behind the filter. Let's pretend that
we forgot to put quotes around the verified value in the first filter step. Change the
Values field from 'QBP' to QBP and switch to the Generated Script tab. (see Figure 6-7)



FIGURE 6-7 Auto generated script for the filter rule

You can see that the auto-generated JavaScript validation code is now:
if (msg['MSH']['MSH.9']['MSH.9.1'].toString() == QBP) { <snip>

where QBP is treated as a variable. Obviously, this will fail since QBP is not defined.
Change the value back to 'QBP' (with quotes) and save the changes.

To HL7v3 Verification transformer

Once the message has passed the validation, it is time to transform it from HL7v2 format
to HL7v3 format, which will require lots of mapping.

Before we start, we need to prepare templates. Click on the To HL7v3 Verification destination
and open the Transformer for editing (click Edit Transformer). Verify that the QBP^E22
message template is in the Inbound Message Template box. Take the HL7v3 message
template for the Eligibility Query (QUCR_IN200101) either from Appendix A or the
archive, and place it in the Outbound Message Template box. Make sure that the Data
Type for the outbound message is set to HL7v3 and Properties has the Strip Namespaces
box checked. Switch to the Message Trees view.

FIGURE 6-8 Creating filter rules from inbound message elements



Most of the transformation rules we are going to create here are of the Message Builder type,
though there are several JavaScript steps, including one optional JavaScript step that
makes the message invalid for testing purposes only. (see Figure 6-8)

Message Builder steps are listed in Table 6-1. Create a step, rename it, and copy and
paste the message segment and mapping from the table.

TABLE 6-1 Mapping fields from the HL7v2 to the HL7v3 Eligibility Query request transformer rules

Creation Time
Message Segment: tmp['creationTime']['@value']
Mapping: msg['MSH']['MSH.7']['MSH.7.1'].toString()

Receiving Organization
Message Segment: tmp['receiver']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension']
Mapping: msg['HDR']['HDR.2']['HDR.2.1'].toString()

Sending Organization
Message Segment: tmp['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension']
Mapping: msg['HDR']['HDR.1']['HDR.1.1'].toString()

Query ID
Message Segment: tmp['controlActProcess']['queryByParameter']['parameterList']['id']['@extension']
Mapping: msg['MSH']['MSH.10']['MSH.10.1'].toString()

Patient ID
Message Segment: tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatient.Id']['value']['@extension']
Mapping: msg['QPD']['QPD.3']['QPD.3.1'].toString()

Patient Birthday
Message Segment: tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.BirthTime']['value']['@value']
Mapping: msg['QPD']['QPD.5']['QPD.5.1'].toString()

Eligibility Plan Type
Message Segment: tmp['controlActProcess']['queryByParameter']['parameterList']['policyOrAccount.Id']['value']['@extension']
Mapping: msg['QPD']['QPD.6']['QPD.6.1'].toString()

Service Date
Message Segment: tmp['controlActProcess']['queryByParameter']['parameterList']['serviceDate']['value']['@validTimeLow']
Mapping: msg['MSH']['MSH.7']['MSH.7.1'].toString()

The Message ID step is a simple JavaScript script that assigns a unique UUID to the
appropriate message element:

tmp['id']['@extension'] = UUIDGenerator.getUUID();

The patient name JavaScript mapping is created for illustration only. It could be done as
a Message Builder rule as well.

SOURCE 6-2 Patient Name mapping Transformer script


var familyName = msg['QPD']['QPD.4']['QPD.4.1'].toString();
var givenName = msg['QPD']['QPD.4']['QPD.4.2'].toString();
var middleName = msg['QPD']['QPD.4']['QPD.4.3'].toString();

tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name'][
'value']['part'][0]['@value'] = familyName;
tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name'][
'value']['part'][1]['@value'] = givenName;
tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name'][
'value']['part'][2]['@value'] = middleName;



The step in Source 6-3 is added specifically for testing, like the step added to the Query
Sender channel. It checks the Sending Facility field and, if it is equal to ERR3, it intentionally
breaks the outbound message by randomly changing the message creation date or the
message identifier OID element.

SOURCE 6-3 TEST ONLY Transformer script


if ( 'ERR3' == msg['MSH']['MSH.4']['MSH.4.1'].toString().toUpperCase() ) {
if ( Math.floor((Math.random()*10)+1) >5 ) {
tmp['creationTime']['@value'] = '20140101';
} else {
tmp['id']['@root'] = '2.16.840.1';
}
}

Return to the channel and save all changes.

To Data Logger filter

The To Data Logger destination filter also checks the validity of the inbound message. If
the message is invalid, it accepts it. Otherwise it ignores it.

SOURCE 6-4 To Data Logger filter rule script


var errorList = new Packages.java.util.ArrayList();

if( msg['MSH']['MSH.9']['MSH.9.1'].toString() != 'QBP' ) {
    errorDesc = {SegmentID:'MSH', Sequence:'9', Position:'1', Description:'200&Message type is not QBP'};
    errorList.add( errorDesc );
}

if( msg['MSH']['MSH.9']['MSH.9.2'].toString() != 'E22' ) {
    errorDesc = {SegmentID:'MSH', Sequence:'9', Position:'2', Description:'201&Trigger Event is not E22'};
    errorList.add( errorDesc );
}

if( msg['MSH']['MSH.12']['MSH.12.1'].toString() != '2.7' ) {
    errorDesc = {SegmentID:'MSH', Sequence:'12', Position:'1', Description:'203&Message version is not 2.7'};
    errorList.add( errorDesc );
}

globalChannelMap.put('errorList', errorList);

return ( errorList.size() > 0 );

The Source 6-4 script shows how to use external Java classes, create JavaScript objects
and pass them through a map to another step or other destinations.

Note: Objects must be passed using the Global Channel Map. The other two maps
are persisted into the database and therefore the Mirth Connect engine
serializes them as strings.



The script verifies the same inbound message fields as the To HL7v3 Verification
destination filter, creates the errorDesc object, assigns its properties, and adds the
errorDesc object to the ArrayList. Then it places the ArrayList instance into the Global
Channel Map and returns the verification result.

FIGURE 6-9 To Data Logger destination filter script

Once this is done, return to the channel and save the changes.

To Data Logger transformer

The To Data Logger destination transformer contains only two JavaScript steps. The first
step shows how to create an XML feed by adding XML elements using the script. The
second step uses Code Templates to call an encryption algorithm. (see Figure 6-10)

FIGURE 6-10 To Data Logger transformer steps



The first thing to do is add a root node to the outbound message template box. The root
node in our case is <error/>. Make sure that the template data type is set to XML, and
Strip Namespaces property is checked.

Now create a new step and rename it to Error Feed. The source code for this step is
shown in Source 6-5.

SOURCE 6-5 Error Feed Transformer script


var errorList = new Packages.java.util.ArrayList( globalChannelMap.get('errorList') );

if( !errorList.isEmpty() ) try {

    var msgData = createSegment('message', tmp);
    createSegment('creationDate', msgData);
    createSegment('id', msgData);
    createSegment('type', msgData);
    createSegment('trigger', msgData);
    createSegment('version', msgData);

    tmp['message']['creationDate'] = msg['MSH']['MSH.7']['MSH.7.1'].toString();
    tmp['message']['id'] = msg['MSH']['MSH.10']['MSH.10.1'].toString();
    tmp['message']['type'] = msg['MSH']['MSH.9']['MSH.9.1'].toString();
    tmp['message']['trigger'] = msg['MSH']['MSH.9']['MSH.9.2'].toString();
    tmp['message']['version'] = msg['MSH']['MSH.12']['MSH.12.1'].toString();

    for ( var i = 0; i < errorList.size(); i++ ) {
        tmp.appendChild( new XML('<cause/>') );
        var errorDesc = errorList.get(i);
        tmp['cause'][i] = errorDesc.Description;
    }

    var sgm = createSegment('attachment', tmp);

} catch(err) {
    logger.error(err);
}

The first line of the source code creates the Java ArrayList and restores the error objects
from the Global Channel Map. Then it builds the XML structure of the error feed, shown
below for reference. Note that there is an empty node for the attachment.
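
After this step runs (and before the Encode step fills in the attachment node), the outbound feed has roughly the following shape; the values in parentheses are placeholders for data taken from the inbound message and the error list:

<error>
  <message>
    <creationDate>(MSH.7.1 value)</creationDate>
    <id>(MSH.10.1 value)</id>
    <type>(MSH.9.1 value)</type>
    <trigger>(MSH.9.2 value)</trigger>
    <version>(MSH.12.1 value)</version>
  </message>
  <cause>(one element per entry in errorList, e.g. 200&amp;Message type is not QBP)</cause>
  <attachment/>
</error>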

Add one more step and rename it Encode. The source code of this step, shown in Source 6-6,
extracts the original inbound message in HL7v2 format from the Channel Map, encodes
it using the Base64 encoding scheme and places it in the attachment node.

SOURCE 6-6 Encode transformer script


tmp['attachment'] = getBase64Encrypted( channelMap.get('msgRaw') );
tmp['attachment']['@type'] = 'HL7V2';

Validate the script, return to the channel's Summary tab and make sure that Data Types
for both destinations are set to XML. Save the changes.

Code Templates

Open Code Templates for editing. Create a new library and rename it Part 2, or give it a
better name.



Create a new code template, rename it getBase64Encrypted, add a description and copy
the following code.

SOURCE 6-7 getBase64Encrypted Code Template script


function getBase64Encrypted(strData) {
try {
if (null == strData) strData = '';
var byteData = new Packages.java.lang.String(strData).getBytes();
return FileUtil.encode(byteData);
} catch(err) {
logger.error(err);
}
}

Set the Context for this code template function to Global. Validate the script and save the
changes.

FIGURE 6-11 getBase64Encrypted Code Templates function

Now getBase64Encrypted is available on the Reference tab, in the User Defined Functions
category, when you are dealing with scripts such as destination transformer scripts. Save
the changes made to the Code Templates.

Scripts

Return to the v2-v3 Transformer channel and switch to the Scripts tab. All scripts but
Preprocessor are left as they are. The Preprocessor script stores the original inbound
message in HL7v2 format for the To Data Logger transformer step.

To retrieve the original inbound message in its native format, use the internal Mirth
Connect message variable (see Source 6-8).



SOURCE 6-8 Preprocessor script
channelMap.put('msgRaw', message);

The Channel Map exists only in the context of this message and is overwritten when the
next message arrives.

FIGURE 6-12 Preprocessor script to store the original inbound message

Notice that functions defined in Code Templates are not available for the Deploy,
Preprocessor, Postprocessor and Undeploy scripts.

Check that the channel's Set Data Types settings for the inbound and outbound connectors
are correct (see Figure 6-2).

Channel Implementation Verification

To verify this channel, change the Destination connector type to File Writer or duplicate
the destination. Use different file names for the To HL7v3 Verification and To Data Logger
Destination connectors. Submit the message (Source 5-4 code snippet) in the same way
we did for the Query Sender channel. Make sure that a new file is in the folder and the
filename corresponds to the File Writer settings of To HL7v3 Verification.

Repeat the steps above to send another message. This time change the message trigger
event (MSH.9.2) to ERR2 and send the message again. There should be a file created with
the name you entered in To Data Logger File Writer settings. Open the newly created file
and make sure that it contains an error feed with the Base64 encoded HL7v2 message.

Repeat the step above to send another message. This time change the Sending Facility
field (MSH.4) to ERR3 and send the message again. Open the newly created HL7v3
message and make sure that at least one of the fields is incorrect. Restore the original
connector settings and save the changes.

We have completed the v2-v3 Transformer channel. Let us move on to the Data
Logger channel implementation.



CHAPTER 7 Data Logger Channel

This channel receives an XML feed with message validation errors from either the v2-
v3 Transformer channel that was reviewed earlier or the HL7v3 Verification
channel that will be explained later. Once the XML feed is received, this channel
parses it and adds a line to a log file. The message that caused the validation failure is
written to the database.

Prerequisites

In earlier versions this channel used an MS Access database to store error messages.
Starting from JDK 1.8, the MS Access code described in this chapter throws a "class not
found" exception because the required JDBC-ODBC Bridge has been removed.

Note: More about - http://docs.oracle.com/javase/7/docs/technotes/guides/jdbc/bridge.html

In that case you can store error feeds in a file, use another database or try to find a valid JDBC-
ODBC driver for MS Access.

However, I left the To Log DB destination explained in this chapter unchanged to show
an approach that is applicable to other databases. Just make sure you have an
appropriate ODBC (JDBC) driver for it, or download and install one, and make the
appropriate changes to the channel settings to connect to the database of your choice.

 Database: Open MS Access and create a new database with a table called Messages.
The table structure is described in the Appendix C. Store the database in a folder that
will be used by the Data Logger.
 System DSN: Open the ODBC Data Source Administrator (Windows Start > Control
Panel > Administrative Tools > Data Sources ). Switch to the System DSN tab and
create a new entry called QBP_LOG_DB, pointing to the newly created database.

Summary Tab

Once all preliminary work is done, create a new channel or import the Data Logger
channel from the archive. Type the channel name, channel tag and channel description.

Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to XML. Leave Strip Namespaces and Response
settings as they are.



Save changes and switch to the Source connector tab.

FIGURE 7-1 Data Logger channel Summary and Data Types settings

Source Connector

This channel is waiting for XML feeds over the MLLP Message Transport protocol, so the
Source connector is TCP Listener. The port for this connector is 6644. Change the
Response from Auto-Generate to None. All the other settings are left untouched.

FIGURE 7-2 Data Logger channel Source connector setting

No Source connector filters or transformers are needed for this channel.



Destinations Connector

Switch to the Destinations tab. There will be two destinations here. One destination adds
a line to the log file and provides minimum information about the message itself and the
error(s). The other stores the message in a native format in the database.

To Log File destination

Rename the destination to To Log File or choose a better name. Change the connector
type to File Writer. You can type the file directory name into the box, but I prefer to
configure it elsewhere, as we did for the Simple Channel in Chapter 3 using Global
Scripts. If you also prefer this approach, type ${logFolder} for now.

Enter a file name such as QBP_${date.get("yyyyMMdd")}.log. This expression gets the
current date and converts it to the proposed format. Thus, there will be a new log file for
each day.

FIGURE 7-3 To Log File destination connector settings

Drag and drop Encoded Data in the Template box. Now you can validate and save the
changes. Notice that the Template box entry will be changed later.

There is no Filter for the To Log File destination. We are ready to start with the
Transformer that extracts data from the feed.
To Log File transformer

Add a transformer step, change the type to JavaScript and rename it to Extract Log Data.

Copy and paste the XML feed structure to the Inbound Message Template area and
switch to Message Trees to parse it.

SOURCE 7-1 XML error feed template


<error>
<message>
<creationDate/>
<id/>
<type/>
<trigger/>
<version/>
</message>
<cause></cause>
<attachment type=""></attachment>
</error>

No template is needed for the outbound message; required data will be passed using
Channel Maps. (see Figure 7-4)

FIGURE 7-4 To Log File Destination transformer step to extract data from the feed

The source code for this transformer step extracts individual pieces from the feed,
ignoring the Base64 attachment and assigns them to Channel Map variables.

SOURCE 7-2 To Log File Transformer script


var creationDate = msg['message']['creationDate'].toString();
var msgID = msg['message']['id'].toString();
var msgType = msg['message']['type'].toString();
var msgTrigger = msg['message']['trigger'].toString();
var msgHL7 = msg['message']['version'].toString();
var xml = new XML(msg);

try {
    var datestring = DateUtil.convertDate('yyyyMMddhhmmss-0800', 'yyyy-MM-dd hh:mm:ss', creationDate);
} catch(err) {
    datestring = Now('yyyy-MM-dd hh:mm:ss');
}

channelMap.put('msgDate', datestring);
channelMap.put('msgID', msgID);
channelMap.put('msgType', msgType);
channelMap.put('msgTrigger', msgTrigger);
channelMap.put('msgVerHL7', msgHL7);
channelMap.put('msgErrors', 'Errors: ' + xml.cause.length() );
channelMap.put('CR', '\n');
channelMap.put("logFolder", $("logDir"));

The MS Access database requires an additional step to transform the HL7v2 date/time
format to the MS Access supported one. This may be different for the database of your
choice; if so, change the code appropriately.

Validate the script and return to the Destinations tab. Notice that there are additional
entries in the Destination Mappings box. Delete ${message.encodedData}, which we
temporarily placed earlier as a stub to allow verifying and saving the channel. Drag and
drop ${msgDate}, ${msgID} and the other mapping variables into the Template box. Place
them in the order you want them to appear in the log file. (see Figure 7-5)

Add tab breaks between these variables to separate one from another. As the last entry,
do not forget to add ${CR} to start a new line in the file.
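
For example, one possible Template box layout, with a tab between each pair of variables and ${CR} at the end, might be:

${msgDate}  ${msgID}  ${msgType}  ${msgTrigger}  ${msgVerHL7}  ${msgErrors}${CR}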

Once this is done, validate the connector and save all changes.

FIGURE 7-5 Log file content variables for the To Log File destination

To Log DB destination

Add a new destination and rename it to To Log DB. This destination will write the
message to the database, therefore, change the connector type to Database Writer.



Change the Driver to JDBC-ODBC Bridge, click Insert URL Template, and change the DSN in
the URL field to the System DSN you specified to access your database. Set Use JavaScript to
Yes. Mirth Connect inserts the recommended connection string source code template.

SOURCE 7-3 Database Writer connection string code


var dbConn;

try {

dbConn = DatabaseConnectionFactory.createDatabaseConnection('sun.jdbc.odbc.JdbcOdbcDriver',
'jdbc:odbc:QBP_LOG_DB','','');

} finally {
if (dbConn) {
dbConn.close();
}
}

Validate the connector and save the changes.

FIGURE 7-6 To Log DB Destination settings for Database Writer connector

There is no filter for the To Log DB destination and we proceed with the transformer to
extract the data from the feed.



To Log DB transformer

Create a transformer step and rename it Write DB. This step extracts data from the
same XML feed. In addition to what the To Log File transformer does, this transformer
step decodes the Base64-encoded HL7 message and stores it in the database.

As we did for the encoding side, a function is added to Code Templates to decode the
message. You may refer to the Code Templates section below to create such a function
and then return here to continue editing the transformer.

SOURCE 7-4 Database Writer Transformer script


var dbConn;
try {
    dbConn = DatabaseConnectionFactory.createDatabaseConnection('sun.jdbc.odbc.JdbcOdbcDriver', 'jdbc:odbc:QBP_LOG_DB', '', '');
    var insertString = "INSERT INTO Messages (CreationDate, UUID, MsgType, [Trigger], [Version], [Errors], [Source]) VALUES ('" +
        $('msgDate') + "','" + $('msgID') + "','" + $('msgType') + "','" +
        $('msgTrigger') + "','" + $('msgVerHL7') + "','" + getErrorList() + "','" +
        getBase64Decrypted( msg['attachment'].toString() ) + "');";
    var result = dbConn.executeUpdate(insertString);
} finally {
    if (dbConn) { dbConn.close(); }
}

function getErrorList() {
    var xml = new XML(msg);
    var errorList = '';
    for ( var i = 0; i < xml.cause.length(); i++ ) {
        errorList += xml.cause[i].toString() + $('CR');
    }
    return errorList;
}

For simplicity, possible errors and their descriptions are not stored in a separate table with
a cross-reference, as would normally be done. Instead, they are joined into a single
delimited string and stored as a single entry. Validate the step, return to the
channel and save all changes.

FIGURE 7-7 To Log DB transformer step



Code Templates

Open Code Templates for editing. Create a new code template, rename it to
getBase64Decrypted, add a description and copy the code in Source 7-5. Set the Context
for this code template function to Global.

SOURCE 7-5 getBase64Decrypted function


function getBase64Decrypted(strData) {
try {
if (null == strData) return '';
var encodedByte = new Array();
encodedByte = FileUtil.decode(strData);
return new Packages.java.lang.String(encodedByte);
} catch(err) {
logger.error(err);
throw err;
}
}

Validate the script and save the changes.

FIGURE 7-8 getBase64Decrypted Code Templates function

Verify that getBase64Decrypted is available in the Reference tab, under the User Defined
Functions category, when you are editing the destination transformer step.

Global Scripts

For the To Log File Destination connector we used the ${logFolder} channel map
variable. Now it is time to specify it. Open the Global Scripts and, in the Deploy script,
type, or copy and paste, the directory initialization code (Source 7-6).



We can also check the existence and the correctness of the Log database by submitting a
simple query to the database. (see Figure 7-9)

SOURCE 7-6 Deploy Global Script to initialize the To Log Channel script
var logDir = new Packages.java.io.File( $('logDir') );

if ( !logDir.exists() ) try {
logDir.mkdir();
} catch (err) {
logger.error('GLOBAL Scripts: Cannot create LOG folder ' + err);
}

Notice that the path to your Log folder location is taken from the variable referenced
as $('logDir').

FIGURE 7-9 Deploy Global Scripts to initialize the To Log Channel

Since in our case the To Log DB destination is disabled, the MS Access initialization code
is disabled as well for simplicity.
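
If you decide to enable it, a minimal sketch of the check (essentially the code that appears commented out later in Source 8-9) would be:

// Sketch only: verify the QBP_LOG_DB System DSN from this chapter is reachable
try {
    var dbConn = DatabaseConnectionFactory.createDatabaseConnection(
        'sun.jdbc.odbc.JdbcOdbcDriver', 'jdbc:odbc:QBP_LOG_DB', '', '');
    dbConn.executeCachedQuery('SELECT 1 FROM Messages;');
    dbConn.close();
} catch (err) {
    logger.error('GLOBAL Scripts: Log database cannot be opened ' + err);
}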

Now is the time to define the Log folder. Save and deploy the channel. Switch to the
Mirth Configuration Map panel (Settings > Configuration Map).

FIGURE 7-10 Configuration Map to store project’s settings



Click the Add button and create a new setting to store the project's error log file location.
Save the changes. Now this path is available to other scripts as a variable called using
the $('logDir') notation. (see Figure 7-10)

Notice that the path requires two backslashes as a separator.
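
For example, the Configuration Map entry might look like this (the folder path is only an illustration, not a required location):

logDir = C:\\Mirth\\Eligibility\\Log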

Channel Implementation Verification

Deploy or redeploy all channels. This time we will not test the Data Logger channel
separately; we test all three channels that we have created so far together. Prepare the
Dashboard panel for the test by removing all messages that are left from the previous
tests. Clear the Server Log info panel as well.

Select the Query Sender channel and click Send Message, copy and paste the QBP^E22
message (use the Source 5-4 snippet), change the message trigger event to ERR2 and
click Send. If everything is done correctly, you should see a new file in the folder you
specified for log files.

What if it does not work as expected?

FIGURE 7-11 v2-v3 Transformer channel results for non-existing channel

To see what went wrong, double-click the destination that caused the error and switch to
the Error tab. You will see a Java stack trace that may give some hint as to the cause of
the error. For this specific case I decided to verify which message had been passed to the
v2-v3 Transformer channel. To do this, open the channel for editing, go to the Preprocessor
script and add a logging function to see the inbound message in its native format.



FIGURE 7-12 v2-v3 Transformer channel Preprocessor step

Notice that the logging function uses something similar to what web developers call
“bread crumbs”, prefixing the output with the channel name and the processing step.
This helps you find a specific logging statement later, especially when there are dozens
of channels. Save the changes, redeploy the channel and try sending the message again.
Part IV has more about debugging channels.
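
The Preprocessor line itself might look roughly like this (a sketch; the breadcrumb text is yours to choose):

// Preprocessor script: log the raw inbound message with a "bread crumb" prefix
logger.info('v2-v3 Transformer - Preprocessor: ' + message);
return message;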

Alternatively, you may use logger.debug(), which works quite similarly except that its
behavior depends on the log level settings in the log4j.properties file. Thus, if the log
level for the transformer scope is set to INFO, logger.debug() will not produce any output, i.e.:

log4j.logger.transformer = INFO
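
Lowering the level for the transformer scope re-enables the output, for example:

log4j.logger.transformer = DEBUG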

Now you can verify what the channel is receiving and why it produces an error. For
example, you might have typed ERR2 incorrectly or typed it in a field other than the
trigger event.

As a friendly reminder, do not forget to delete logging functions when errors are fixed.



CHAPTER 8 HL7v3 Verification Channel

HL7v3 Verification Channel


This is probably the most interesting channel in this project. It receives the
incoming eligibility request message and validates it against XML Schemas and
Schematron rules. If the message is correct, it adds a record to the PostgreSQL
database for other channels to consume. To do this, the channel uses a connection
pool to the database. Global Scripts are also used to configure the channel
during the deployment process.

Prerequisites

There are certain requirements for this channel to work correctly. You need a database,
XML Schema and Schematron files, and appropriate Java libraries.

• Database: This channel uses a PostgreSQL database to store patient and message
information. Install and configure this or a similar database. Create a database called
Eligibility; create two tables called Patients and Messages. The SQL scripts for this are
given in Appendix D. Download the PostgreSQL JDBC Level 4 driver and copy it to
the Mirth Connect custom-lib folder.
• Java libraries: For the Schematron validation I'm using the phLOC library. Download
this or a similar package, and copy the JARs to the Mirth Connect custom-lib folder.
• XML Schema: The Eligibility Query message is built using the HL7v3 Normative
Edition 2012 schemas without any changes. Create the Schema folder under custom-
lib, and copy the appropriate interaction, message type and CMET schemas to the
Schema folder. Do not forget to copy the coreschemas folder as well.
• Schematron: I'm using a custom-built Schematron file. It is very simple, and
verifies only the message ID and the message creation date/time.

Make sure that custom-lib is in your Windows operating system CLASSPATH variable.
Once you deploy all files and update the CLASSPATH, restart the Mirth Connect Server.
You can find a copy of custom-lib in the archive provided with this book.

Note: Mirth Connect may not work correctly with Java libraries built using Maven. In
this case you may see the error “... is not a function, it is object ...”. Several
solutions were discussed in the Mirth forum to overcome this: build the
package using the -d switch when compiling the Java source file; use the Ant
tool. Choose what is most appropriate in your case.



Summary Tab

Create a new channel and call it HL7v3 Verification channel. Add the channel's tag and
description. Open the data type settings window and change the types to HL7v3.x.

FIGURE 8-1 HL7v3 Verification channel Summary and Data Types settings

Make sure that Strip Namespaces settings are unchecked for both inbound and
outbound message types! Otherwise, the schema validation will fail with “Cannot find
the declaration of element QUCR_IN200101UV01” exception.

Source Connector

This channel waits for the HL7v3 Eligibility request message over the MLLP Message
Transport protocol. Therefore, the Source connector should be configured as a TCP
Listener waiting for messages on port 6622.

Contrary to the other channel implementations, this is not the end of the story for this
channel's Source connector. It has two Source connector transformers to do the XML
Schema and Schematron verification and validation. Results are stored in the Channel
Map for the Destination connector to consume.

All other settings for TCP Listener Connector are left as they are.



FIGURE 8-2 HL7v3 Verification channel Source connector settings

Schema Validation transformer

Open the Source connector tab, click Edit Transformer and create a new step. Rename it
to Schema Validation or something similar. Use the following code script (see Source 8-
1) to validate the inbound message. At the beginning, this script creates an instance of
the msg object with required namespaces and then uses this instance to validate against
XML schemas. The original msg object is kept intact.

SOURCE 8-1 QUCR_IN200101UV01 Schema validation script


default xml namespace = "urn:hl7-org:v3";

var msgValidate = new XML( msg.toString() );
msgValidate.addNamespace( new Namespace('xsi', 'http://www.w3.org/2001/XMLSchema-instance') );

var factory = new Packages.javax.xml.validation.SchemaFactory.newInstance('http://www.w3.org/2001/XMLSchema');
var schemaLocation = new Packages.java.io.File( $('custom-lib') + 'schemas\\' + globalMap.get('schemaRequest') );
var schema = factory.newSchema( schemaLocation );
var validator = schema.newValidator();

var reader = new java.io.StringReader( msgValidate );
var source = new Packages.javax.xml.transform.stream.StreamSource( reader );

var validationResult = false;
var errorString = '';

try {
    validator.validate( source );
    validationResult = true;
} catch( err ) {
    errorString = err.message.replace('org.xml.sax.SAXParseException:', '').replace(/("|')/g, '');
} finally {
    channelMap.put('schemaValidationError', errorString);
    channelMap.put('schemaValidationPassed', validationResult);
}

Notice that the location of custom-lib is obtained from the Global Map. This and
the other channel settings are discussed later in the Global Scripts section.
FIGURE 8-3 XML Schema Validation transformer script

The XML schema validation result is assigned to the schemaValidationPassed variable.


The schemaValidationError variable may contain an additional explanation of the reason
the validation failed.

Schematron Validation transformer

Create one more transformer step and call it Schematron Validation or something similar.
This transformer step does a similar job of validating the inbound message against the
Schematron rules and assigning the final result to the schematronValidationPassed
variable. Explanations of Schematron rules errors, if any, appear in the
schematronValidationError variable. (see Source 8-2)

SOURCE 8-2 QUCR_IN200101UV01 Schematron validation script


var validationResult = true;
var errorString = '';

var schematronLocation = new Packages.java.io.File( $('custom-lib') + 'schematron\\' + globalMap.get('schematronRequest') );
var schema = new Packages.com.phloc.commons.io.resource.FileSystemResource(schematronLocation);
var schematronResource = new Packages.com.phloc.schematron.pure.SchematronResourcePure(schema);

try {
    if ( schematronResource.isValidSchematron() ) {
        var msgXML = new Packages.com.phloc.commons.xml.transform.StringStreamSource(msg);
        var result = schematronResource.applySchematronValidationToSVRL(msgXML);
        if (result != null) {
            var failedAsserts = Packages.com.phloc.schematron.svrl.SVRLUtils.getAllFailedAssertions(result);
            validationResult = ( 0 == failedAsserts.size() );
            for( var i = 0; i < failedAsserts.size(); i++ ) {
                errorString += failedAsserts.get(i).text + '; ';
            }
        }
    }
} catch(err) {
    logger.error('Schematron exception: ' + err);
} finally {
    channelMap.put( 'schematronValidationError', errorString.replace(/[ \x00-\x1F]/g, '') );
    channelMap.put( 'schematronValidationPassed', validationResult );
}



Validate scripts, return to the channel and save all changes.

FIGURE 8-4 Schematron Validation transformer script

This script is a bit simplified: it returns only the list of all failed assertions in a given
Schematron output. You may further expand it by also checking the list of all
successful reports in a given Schematron output. Refer to the com.phloc.schematron API
user guide.

Note that Inbound and Outbound Message Templates are not required for these scripts
and therefore left empty.

Destinations Connector

Switch to the Destinations tab. We will create two destinations here based on validation
results. If the inbound eligibility query has passed all validation steps, such message will
be parsed and stored in the database. If the message has failed, then the feed will be
sent to Data Logger channel to store it in the Log database.

To Patients DB destination

Rename the destination to To Patient DB or choose a better name. Change the connector
type to Database Writer. Select the database driver type in accordance with your
database; it is PostgreSQL in my case. Click Insert URL Template and change it to a valid
connection string. Provide the username and the password.



FIGURE 8-5 To Patient DB Destination connector settings

The Use JavaScript checkbox is selected, but there is no JavaScript required here. This is
explained later in the Scripts and Global Scripts sections.

To Patients DB filter

Since this channel handles only valid eligibility query messages, there is a filter that
verifies if the inbound HL7v3 message has passed all validation steps, i.e., XML schema
and Schematron validations.

Click Edit Filter, create a new rule and change the type to JavaScript. Copy and paste the
filter script code given below (see Source 8-3)

SOURCE 8-3 To Patient DB Filter script


if ( $('schemaValidationError').length() != 0 ) {
return false;
} else if ( $('schematronValidationError').length() != 0 ) {
return false;
} else
return true;

There is another way to type the same script using predefined variables. To do this,
click the Reference tab on the left side (if another tab is opened), then drag and drop
the validation variables from the Available Variables box and type the rest of the filter
rule script. (see Figure 8-6)



FIGURE 8-6 To Patient DB Filter script

The code snippet in Source 8-3 is not the most efficient one and is used for clarity only.
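
A terser equivalent, if you prefer (behavior is the same):

return $('schemaValidationError').length() == 0
    && $('schematronValidationError').length() == 0;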

To Patients DB transformer

Once a valid HL7v3 query message is received it is time to parse it and store data in the
database. Use the QUCR_IN200101 template and paste it to the Inbound Message
Template box. Switch to the Message Trees tab and use the parsed message for mapping.
Copy and paste the code given below (see Source 8-4)

SOURCE 8-4 To Patient DB Transformer script


var dbConn = globalChannelMap.get('postgreConn');

if ( null == dbConn ) {
    dbConn = getPostgreSQLConn();
    globalChannelMap.put('postgreConn', dbConn);
}

var mid = msg['id']['@extension'].toString();
var cdate = msg['creationTime']['@value'].toString();
var sender = msg['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'].toString();
var author = msg['controlActProcess']['authorOrPerformer']['assignedPerson']['id']['@controlInformationExtension'].toString();

var fname = msg['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name']['value']['part'][1]['@value'].toString();
var lname = msg['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name']['value']['part'][0]['@value'].toString();
var pid = msg['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatient.Id']['value']['@extension'].toString();
var dob = msg['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.BirthTime']['value']['@value'].toString();

var insertString = "WITH tid AS (INSERT INTO patients VALUES (DEFAULT, " +
    "'" + fname + "','" + lname + "','" + pid + "','" + dob + "', DEFAULT) RETURNING id) INSERT INTO messages VALUES (DEFAULT, " +
    "'" + mid + "','" + cdate + "','" + sender + "','" + author + "', (SELECT * FROM tid));";

try {
    if ( dbConn != null )
        var result = dbConn.executeUpdate(insertString);
} catch(err) {
    logger.error('v3 Verification - To Patients DB (Passed) Transformer: ' + err);
    dbConn.rollback();
}

This code verifies that the database connection exists and, if so, extracts several
fields from the inbound message, builds an SQL insert statement and executes it. Any
errors that occur during this process are logged to the Server Log info panel.

For simplicity, the Patients table we've created stores only the patient's first and last name,
birth date and a patient identifier such as the Social Security Number. There is one more
field called processed to indicate whether this record has been processed by the other
channel or not. The Messages table contains only the message's unique identifier, the date
the message was created, the message sender and the initial query creator (the author).
There is also an identifier that connects the patient record and the message record.

FIGURE 8-7 To Patient DB transformer script

Notice that the database connection is obtained from the Global Channel Map. This is
explained later. Validate the script, return to the channel, and save all changes.

To Data Logger destination

The To Data Logger destination connector is very similar to the one we created
for the HL7v2 to HL7v3 Transformer channel. If the inbound message has failed
validation, the message is accepted by this destination. The destination then creates a
feed, encodes the original HL7v3 message and sends it to the Data Logger channel we
created earlier.

FIGURE 8-8 To Data Logger channel Destination connector settings

This destination connector type is TCP Sender with the remote port 6644. Other settings
are left unchanged.

To Data Logger filter

This destination must handle only failed eligibility query messages. Therefore, there is a
filter that verifies that the message has not passed the validation steps (see Source 8-1 and
Source 8-2). Click Edit Filter, create a new rule and change the type to JavaScript. Use the
channel variables in the Available Variables box and type the rest of the filter rule script.
(see Source 8-5)

SOURCE 8-5 To Data Logger filter script


if ( $('schemaValidationError').length() != 0 ) {
    return true;
} else if ( $('schematronValidationError').length() != 0 ) {
    return true;
} else
    return false;

FIGURE 8-9 To Data Logger filter script

Return to the channel and save the changes.

To Data Logger transformer

The To Data Logger channel destination transformer contains the same JavaScript steps
we have already built for the HL7v2 to HL7v3 Transformer channel transformer. Create a
new step and call it Error Feed or something similar. Change the step type to JavaScript.
Type, or copy and paste the source code for this step. You may use the Inbound Message
Template and Message Trees to build the script. (see Source 8-6)

FIGURE 8-10 Error Feed Transformer step

This script creates an initial error feed in XML format and populates all the required
values.

SOURCE 8-6 Error Feed transformer script


tmp['message']['creationDate'] = msg['creationTime']['@value'].toString();
tmp['message']['id'] = msg['id']['@extension'].toString();
tmp['message']['type'] = 'QUCR_IN200101UV01';
tmp['message']['trigger'] = msg['controlActProcess']['code']['@code'].toString();
tmp['message']['version'] = msg['versionCode']['@code'].toString();

if( $('schemaValidationError').length() != 0 ) {
    tmp['cause'] = $('schemaValidationError');
} else if( $('schematronValidationError').length() != 0 ) {
    tmp['cause'] = $('schematronValidationError');
}

Add one more step; rename it to Encode or something similar, and change the step type
to JavaScript. The source code of this step extracts the original inbound message in
HL7v3 format using the connectorMessage object, encodes it using the Base64 encoding
algorithm and places it into the same error feed created in the previous step. (see Source
8-7)

SOURCE 8-7 Encode Transformer script


tmp['attachment'] = getBase64Encrypted( connectorMessage.getRaw().getContent() );
tmp['attachment']['@type'] = 'HL7v3';

Validate scripts, return to the channel and save the changes.

Code Templates

Earlier we decided to use JavaScript to connect to the database but left the JavaScript box
as it was (see Figure 8-5). The database connection for this channel is established
during the channel deployment stage and is taken from the Code Templates.

SOURCE 8-8 getPostgreSQLConn Code Templates script


function getPostgreSQLConn() {
return DatabaseConnectionFactory.createDatabaseConnection( 'org.postgresql.Driver' ,
'jdbc:postgresql://localhost:5445/ELIGIBILITY' , globalMap.get('sqlUser') , globalMap.get(
'sqlPsw'));
}

To do this, open Code Templates for editing and create a new code template.

FIGURE 8-11 getPostgreSQLConn code template function



Rename it to getPostgreSQLConn, add a description and copy the code from Source 8-8.
Set the Context to Channel, which makes this function available on the channel level only.
Validate the script and save the changes.

Now getPostgreSQLConn is available in the Reference tab, in the User Defined Functions
category when you are editing the destination transformer step, for example.

Global Scripts

The Deploy Global Scripts is expanded to verify that everything is ready for the HL7v3
Verification: XML schema and Schematron files are available; the Global Map contains a
path to the custom-lib folder, and the PostgreSQL database is running and it contains the
Eligibility database with all the required tables.

SOURCE 8-9 Deploy Global Script code (the old code is greyed out)
var logDir = new Packages.java.io.File( $('logDir') );

if ( !logDir.exists() ) try {
    logDir.mkdir();
} catch (err) {
    logger.error('GLOBAL Scripts: Cannot create LOG folder ' + err);
}

/*
globalMap.put( 'logDB', 'QBP_Log.accdb' );
if ( new Packages.java.io.File( $('logDir') + '/' + globalMap.get('logDB')).exists() ) {
    try {
        var dbConn = DatabaseConnectionFactory.createDatabaseConnection('sun.jdbc.odbc.JdbcOdbcDriver','jdbc:odbc:QBP_LOG_DB','','');
        var result = dbConn.executeCachedQuery('SELECT 1 FROM Messages;');
        dbConn.close();
    } catch( err) {
        logger.error('GLOBAL Scripts: Log database cannot be opened. Verify that QBP_LOG_DB System DSN for Log MS Access database is defined ' + err);
    }
} else {
    logger.error('GLOBAL Scripts: Log MS Access database file cannot be found, channels may not work correctly.');
} */

/*********** HL7v3 Verification global variables ***********/

globalMap.put( 'schemaRequest', 'QUCR_IN200101UV01.xsd' );
if ( !new Packages.java.io.File($('custom-lib') + 'schemas\\' + globalMap.get('schemaRequest')).exists() )
    logger.error('GLOBAL Scripts: QUCR_IN200101 Query Schema cannot be found, channels will not work correctly.');

globalMap.put( 'schemaResponse', 'QUCR_IN210101UV01.xsd' );
if ( !new Packages.java.io.File($('custom-lib') + 'schemas\\' + globalMap.get('schemaResponse')).exists() )
    logger.error('GLOBAL Scripts: QUCR_IN210101 Response Schema cannot be found, channels will not work correctly.');

globalMap.put( 'schematronRequest', 'QUCR_IN200101UV01.sch' );
if ( !new Packages.java.io.File($('custom-lib') + 'schematron\\' + globalMap.get('schematronRequest')).exists() )
    logger.error('GLOBAL Scripts: QUCR_IN200101 Query Schematron file cannot be found, channels will not work correctly.');

/*********** Patients DB variables ***********/

globalMap.put( 'sqlUser', 'postgres' );
globalMap.put( 'sqlPsw', 'admin' );

try {
    var dbConn = DatabaseConnectionFactory.createDatabaseConnection('org.postgresql.Driver',
        'jdbc:postgresql://localhost:5445/ELIGIBILITY', globalMap.get('sqlUser'), globalMap.get('sqlPsw'));
    var result = dbConn.executeCachedQuery('SELECT 1 FROM public.patients;');
    dbConn.close();
} catch( err) {
    logger.error('GLOBAL Scripts: jdbc:postgresql ' + err);
}

return;

FIGURE 8-12 Deploy Global Script code

You may need to change the PostgreSQL user name, password and port to align with
your installation settings. You may also decide not to keep the admin password as a
plain text.

Save the changes and redeploy all channels for the Deploy Global Script to take effect.

Scripts

Open the HL7v3 Verification channel for editing again and switch to the Scripts tab. As
was mentioned earlier, the database connection is retrieved from the connection pool.



Deploy

The Deploy script (see Source 8-10) is the place to create the PostgreSQL connection and
place it into the Global Channel Map to pass to the To Patients DB transformer.

SOURCE 8-10 Deploy script


var dbConn = getPostgreSQLConn();
globalChannelMap.put('postgreConn', dbConn);

FIGURE 8-13 Deploy channel script

Validate the script and save the changes.

Undeploy

The Undeploy script (see Source 8-11) does the opposite; it closes the PostgreSQL
connection when this channel is no longer needed.

SOURCE 8-11 Undeploy script


var dbConn = globalChannelMap.get('postgreConn');
if ( dbConn != null) dbConn.close();

Validate the script and save the changes. Notice that the Scripts tab has changed and
now shows the number of scripts available for this channel.

Channel Implementation Verification

Now we can test the first logical chain (request) from sending the message to storing the
message data into the database. Prepare the Dashboard for the test by removing all
messages that are left from previous tests. Clear the Server Log info panel as well.

Make sure that custom-lib folder contains all required Java packages, XML schema and
Schematron files. You may find XML schemas and Schematron files in the archive
provided with this book. Download the same or latest versions of other required
packages from the Internet as needed. (see Figure 8-14)



FIGURE 8-14 custom-lib folder content

You may add project resources without restarting the Mirth Connect Server by
using the Resources settings tab (Settings > Resources). Copy a new file or library to the
custom-lib folder and click Reload Resources. The newly added resource is now available
to channel scripts. (see Figure 8-15)

FIGURE 8-15 Adding resources during run-time

To test the normal flow, select the Query Sender channel and click Send Message, copy
and paste the QBP^E22 message (use Source 5-4), and Send. Verify that records are
added to the Patients and Messages tables.



Now repeat the steps to send the query message, this time change the Sending Facility
(MSH.4) to ERR3, and click Send. A record should be added to the QBP_log database
containing the original HL7v3 message and a new log line in the log file. Send another
message with ERR2 as the trigger event. The result should be similar – a new record in
the database with the original HL7v2 message and updated log file.



CHAPTER 9 Response Sender Channel

Response Sender Channel


This channel starts forming the response chain and is used to return the coverage
eligibility status in response to the query interaction. The Source connector faces
the database and periodically queries the Patients table looking for unprocessed
records. Once a record is found, the channel creates the HL7v3 Eligibility Query Result
(QUCR_IN210101) message and sends it on using a SOAP message.

Because the idea behind this channel is not to actually process the Eligibility Query
Request, it always returns the Eligibility Query Result message with a positive
result. It also provides only a portion of the information that typically should be included
in a response message.

Summary Tab

Create a new channel or import the Response Sender channel from the archive. Type the
channel name, channel tag and channel description.

Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to XML. Uncheck Strip Namespaces for the
destination inbound and outbound messages.

FIGURE 9-1 Response Sender channel Summary and Data Type settings



Source Connector

In the previous chapter we explored the Database Writer connector type that adds data
to the PostgreSQL database. Now we explore a connector that reads that data from the
database. In addition to simply querying the data, the Database Reader connector can
write back to the database to change the status of the queried record. To make this
happen, the table must be designed in such a way as to contain an extra field, called
“processed” or something similar, that can be updated once the query is complete.

Switch to the Source connector tab and change the connector type to Database Reader.
Configure the Driver and URL settings to handle the PostgreSQL database, or the
database of your choice if you are using a different one.

Set the Use JavaScript option to Yes. Set the Retry Interval to a reasonable value.

FIGURE 9-2 Response Channel Source connector settings



Database Query script

The Patients table contains a field called “processed” to indicate whether a record is new
or has already been processed. The JavaScript below queries the Patients table and
takes one record at a time where the processed field's value equals false. To query the
database it uses the database connection pool created earlier. If the connection from the
pool is not valid, the script recreates it and adds it to the Global Channel Map.

SOURCE 9-1 Source Connector database query script


var dbConn = globalChannelMap.get('postgreConn');

if ( null == dbConn ) {
dbConn = getPostgreSQLConn();
globalChannelMap.put('postgreConn', dbConn);
}

var sqlSelect = 'SELECT patients.id AS id, patients.fname AS fname, patients.lname AS lname, ' +
    'patients.pid AS pid, patients.dob AS dob, patients.processed AS processed, ' +
    'messages.mid AS mid, messages.cdate AS cdate, messages.sender AS sender, messages.author AS author ' +
    'FROM messages INNER JOIN patients ON messages.pid = patients.id ' +
    'WHERE patients.processed = false LIMIT 1;';

var result = dbConn.executeCachedQuery( sqlSelect );

return result;

Notice that once this script is placed into the JavaScript box, the box next to the Post-
Process JavaScript is populated with the list of queried fields. (see Figure 9-3)

FIGURE 9-3 Fields mapping patterns

These fields can be queried using the $('column_name') mapping pattern, where
column_name is the name of the actual column, for example $('id') for the id field.
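
For instance, a later script for this channel could read a column value like this (a minimal sketch):

// 'lname' is one of the columns returned by the Source connector query
var patientLastName = $('lname');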

Postprocess script

Set the Run Post-Process Script to After each message. This allows running the
postprocess script every time the Source connector successfully queries the database.



The script takes the ID of the record that was returned and sets the processed field's
value to true, so that the next time the database is polled this record will be ignored.

This script also uses the database connection pool and updates it if the connection is lost
or does not exist by placing a new connection into the Global Channel Map.

SOURCE 9-2 Source Connector Postprocess script

var dbConn = globalChannelMap.get('postgreConn');

if ( null == dbConn ) {
dbConn = getPostgreSQLConn();
globalChannelMap.put('postgreConn', dbConn);
}

var sqlUpdate = 'UPDATE patients SET processed = true WHERE id = ' + $('id') + ';';

var result = dbConn.executeUpdate( sqlUpdate );

Instead of using the database connection pool, you may also use the Connection and
Select / Update scripts auto generators.

Destinations Connector

Now switch to the Destinations tab. There will be only one destination here that connects
to the Web Service (explained later) and sends the SOAP message with the Eligibility
Query Result as a payload message.

To Web Service destination

Rename the destination to To Web Service or choose a better name.

Change the connector type to Web Service Sender. Type the WSDL URL or copy the
following URL:
http://localhost:8081/services/EligQuery?wsdl

By clicking Get Operations, the Mirth Connect Administrator populates the Service and
Port fields with appropriate values. Unfortunately, this can happen only if the Web
Service Listener, which is the channel described in a following chapter, exists. Otherwise,
this does not work and displays the alert “Error caching WSDL”. (see Chapter 20 Polling
Web Services)



FIGURE 9-4 Response Sender channel settings

To solve this, either jump ahead and import the v3-v2 Transformer channel and deploy it,
or copy and paste the following lines into the Service and Port fields:

• Service: {http://ws.connectors.connect.mirth.com/}DefaultAcceptMessageService
• Port: {http://ws.connectors.connect.mirth.com/}DefaultAcceptMessagePort

The SOAP message used by the Mirth Connect Server is predefined and provided in the
code snippet below. (see Source 9-3)

SOURCE 9-3 Web Service SOAP message template


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ws="http://ws.connectors.connect.mirth.com/">
<soapenv:Header/>
<soapenv:Body>
<ws:acceptMessage>
<arg0><![CDATA[${message.encodedData}]]></arg0>
</ws:acceptMessage>
</soapenv:Body>
</soapenv:Envelope>



There is no other way to change this SOAP message except by overriding methods in the
Mirth Connect source code for the Web Service Sender and Listener. The payload
message is sent in the arg0 element. Without the CDATA wrapper it will fail to process.

To Web Service transformer

The transformer to build the HL7v3 QUCR_IN210101 message from the database query
result is a simple one. It adds only mandatory field values and ignores others. The
message templates need to be created before the transformer step is added.

Open the Transformer for editing and switch to the Message Templates tab on the left
side of the editor window. Copy and paste the inbound template that represents the
query result from the code snippet below (see Source 9-4). Then copy and paste the
outbound HL7v3 QUCR_IN210101UV01 message template which you can find in
Appendix B or the archive provided with this book. Change outbound template Data
Type to HL7v3.x and set Strip Namespace property to checked.

SOURCE 9-4 Web Service inbound message template


<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<result>
<id>value</id>
<fname>value</fname>
<lname>value</lname>
<pid>value</pid>
<dob>value</dob>
<processed>value</processed>
<mid>value</mid>
<cdate>value</cdate>
<sender>value</sender>
<author>value</author>
</result>

Add a new message transformer step, change the type to JavaScript and rename it Build
QUCR or something similar. This step combines all mappings required to create the
Eligibility Query Response message by assigning a unique message id, creation
date/time, the status code to determine if the answer is yes or no (it is hardcoded to Ok),
and parameters of the original request message.

SOURCE 9-5 Build QUCR Transformer step script


tmp['id']['@extension'] = UUIDGenerator.getUUID();
tmp['creationTime']['@value'] = Now('yyyyMMddhhmmss-0800');

tmp['acknowledgement']['targetMessage']['id']['@extension'] = msg['mid'].toString();

tmp['controlActProcess']['reasonOf']['detectedIssueEvent']['code']['@code'] = 'OK';
tmp['controlActProcess']['reasonOf']['detectedIssueEvent']['text']['@value'] = 'Request is successful';

var familyName = msg['lname'].toString();
var givenName = msg['fname'].toString();
var dob = msg['dob'].toString();
var pid = msg['pid'].toString();

tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name']['value']['part'][0]['@value'] = familyName;
tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.Name']['value']['part'][1]['@value'] = givenName;
tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatientPerson.BirthTime']['value']['@value'] = dob;
tmp['controlActProcess']['queryByParameter']['parameterList']['coveredPartyAsPatient.Id']['value']['@extension'] = pid;

The application level acknowledgement detail is provided as the original message ID,
along with the hardcoded type code AA that indicates that the original message was
processed successfully.
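
If your outbound template does not already hardcode this value, the same transformer step could set it explicitly; a one-line sketch, mirroring what Source 13-1 does later, would be:

tmp['acknowledgement']['@typeCode'] = 'AA';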

FIGURE 9-5 To Web Service transformer script

Validate the script, return back to the channel and save all changes.

Scripts

Like the HL7v3 Verification channel, this channel uses the database connection pool
created during the channel deployment process. Switch to the Scripts tab and open the
Deploy script.

Deploy

The Deploy script (see Source 9-6) is the place to create the PostgreSQL connection and
place it in the Global Channel Map to pass to the To Web Service transformer.

SOURCE 9-6 Deploy script

var dbConn = getPostgreSQLConn();


globalChannelMap.put('postgreConn', dbConn);



FIGURE 9-6 Deploy script to create the PostgreSQL connection

If you save changes, the Scripts tab will change showing the total number of scripts
defined there.

Undeploy

The Undeploy script (see Source 9-7) runs when the channel is no longer needed,
and therefore the PostgreSQL database connection can be safely closed.

SOURCE 9-7 Undeploy script


var dbConn = globalChannelMap.get('postgreConn');
if ( dbConn != null) dbConn.close();

Other scripts, i.e., Preprocessor and Postprocessor scripts, are left unchanged.

Channel Implementation Verification

To verify this channel, clone the To Web Service destination, rename it to, for example, To
Test File and change the connector type to File Writer. Configure the connector. Drag
${message.encodedData} to the Template box. Disable the To Web Service destination.

Make sure that the PostgreSQL DB is running.

Select the Query Sender channel and click Send Message, copy and paste the QBP^E22
message (see Source 5-4), and click Send. Run the SQL query against the Patients and
Messages tables to verify that new records are stored to the database.

If the Response Sender channel works correctly the file with the HL7v3
QUCR_IN210101UV01 message should appear shortly in the folder you specified for this
test destination.



Run the SQL Select query and make sure that the “processed” field is changed to true
indicating that the record has been processed. You may change this field value back to
false and verify that another test file appears.

Once the test has been successfully completed, enable To Web Service destination, and
disable or delete the To Test File destination.



CHAPTER 10 HL7v3 to HL7v2 Transformer Channel

v3-v2 Transformer Channel


This is the last channel in the line. It translates the HL7v3 Eligibility Query Response
(QUCR_IN210101UV01) received from the Response Sender channel to the response
message (RSP^E22) in HL7v2 format, and stores the message in a file. The HL7v3 to
HL7v2 translation is done using the XSL Transformation. This is possible because Mirth
Connect serializes any given message from a native format to XML and all subsequent
transformations are done using XML.

The Mirth Connect Server uses Xalan-Java as the XSLT processor to transform one XML
document to another which means by default Mirth supports XSLT 1.0.

The way to create the XSLT file that transforms the QUCR_IN210101UV01 message to
RSP^E22 is not discussed in this book. Use one of the available tools to do the mapping,
or create the XSLT file manually. The sample used here is provided in Appendix E.

Summary Tab

Create a new channel or import the v3-v2 Transformer channel from the archive. Type
the channel name, channel tag and channel description.

FIGURE 10-1 v3-v2 Transformer channel Summary and Data Types settings



Open the Set Data Types settings window, change the Source connector inbound and
outbound message types to XML and the Destination 1 outbound message type to HL7v2.x.

Source Connector

This channel plays a role of the Web Service server. Switch to the Source tab and change
the connector type to Web Service Listener. If it is not already in use by another
application, use the default port 8081.

In the Service Name box type EligQuery. The WSDL URL box will be changed as well.
This is the URL used by the Response Sender channel to connect to the Web Service.

FIGURE 10-2 v3-v2 Transformer channel Source connector settings

No other changes are required here. The Source connector does not have any filters or
transformer steps. Save the changes.

Destinations Connector

Switch to the Destinations tab. There will be only one destination here; it transforms
the message to HL7v2 format and stores it in a file.

Rename the destination from Destination 1 to To Response File or choose a better name.
Change the connector type to File Writer. Specify the file folder and the file name. You
may use the ${logFolder} variable or specify another folder. For the file name you may use
RSP_E22_${date.get("MMMdd-hhmmss")}.hl7.

FIGURE 10-3 v3-v2 Transformer channel Destination settings

Drag Encoded Data from the Destination Mappings panel to the Template box. This is a
stub and will be changed later when the transformer step is specified. Verify the
connector and save the changes.

To Response File transformer

Create a new transformer step and choose the XSLT Step type. Rename the transformer
step to QUCR to RSP XSL Transformation or something similar. The source XML string is
the inbound message itself, i.e., msg. The outbound message or result will be called
RSP_XML_XSLT. Copy and paste the XSLT Template from Appendix E. (see Figure 10-4)



FIGURE 10-4 QUCR to RSP XSL Transformation step

Note that the Generated Script tab is available next to the Step tab. Switch to that tab to
see the auto-generated JavaScript code that performs the same function, i.e., transforms
one XML message into another using XSL Transformation. For added flexibility, you may
use this code instead and keep the XSLT script in an external file (see Figure 10-5); a
rough sketch of that approach follows the figure.

FIGURE 10-5 Auto-generated JavaScript script
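
A minimal sketch of the scripted approach, applying an XSLT kept in an external file, might look like this; it is not the auto-generated code itself, and the file path is only an assumption:

// Sketch only: run an external XSLT file against msg and store the result,
// mimicking what the XSLT step does. The path below is hypothetical.
var xsltFile = new Packages.java.io.File('C:/Mirth/xslt/QUCR_to_RSP.xslt');
var tFactory = Packages.javax.xml.transform.TransformerFactory.newInstance();
var transformer = tFactory.newTransformer(
    new Packages.javax.xml.transform.stream.StreamSource(xsltFile));
var resultWriter = new Packages.java.io.StringWriter();
transformer.transform(
    new Packages.javax.xml.transform.stream.StreamSource(
        new Packages.java.io.StringReader(msg.toString())),
    new Packages.javax.xml.transform.stream.StreamResult(resultWriter));
channelMap.put('RSP_XML_XSLT', resultWriter.toString());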

The result of the XSL Transformation is another XML. The expected result should be in
HL7v2 format and therefore one more transformation step is required. Create one more
step and rename it to Convert to HL7v2 or something similar. Change the transformer
step to JavaScript and copy the script from the code snippet below. (see Source 10-1).

SOURCE 10-1 Convert to HL7v2 Transformer script

tmp = SerializerFactory.getSerializer('HL7V2').fromXML( $('RSP_XML_XSLT') );


channelMap.put('RSP_HL7v2', tmp);
channelMap.put("logFolder", $("logDir"));



This code creates the RSP_HL7v2 channel variable with the outbound message in HL7v2
format. No inbound or outbound message templates are required for these steps.

FIGURE 10-6 Convert to HL7v2 transformer step

Return to the Destination connector and delete the ${message.encodedData} entry
temporarily added to the Template box. Notice that the Destination Mappings panel on the
left side now lists the RSP_XML_XSLT and RSP_HL7v2 channel variables. Drag and drop
the latter to the Template box.

FIGURE 10-7 Updated v3-v2 Transformer channel Destination connector settings

Channel Implementation Verification

With the implementation of this channel this phase of the entire project is done and can
be tested.

Redeploy all channels. Prepare the Dashboard by removing all messages that are left
from the previous tests. Clear the Server Log info panel as well. Clear the log files and log
entries in the MS Access database. Clear the Patients and Messages tables in PostgreSQL
database.

To test the normal flow, select the Query Sender channel and click Send Message, copy and
paste the QBP^E22 message (see Source 5-4), and click Send. After a while, the
RSP_E22_xxxxx-xxxxxx.hl7 file should appear in the folder you specified for the result file,
where the x's in the file name are replaced by the current date and time.

Now repeat the steps by sending the query message with ERR2 as the trigger
event (MSH.9.2) and then another one with ERR3 as the Sending Facility (MSH.4).
Records should be added to the QBP_Log database and log lines to the log file. The
RSP_E22_xxxxx-xxxxxx.hl7 file should not appear in these cases.

FIGURE 10-8 Dashboard panel

This concludes this part of the project.



PART III – ACKNOWLEDGEMENTS IMPLEMENTATION

Acknowledgements
Implementation
CHAPTER 11 Acknowledgements Introduction

CHAPTER 12 HL7v3 ACK Channel

CHAPTER 13 HL7v3 Verification ACK Channel

CHAPTER 14 HL7v3 to HL7v2 Transformer ACK Channel

CHAPTER 15 Query Sender ACK Channel


CHAPTER 11 Acknowledgements Introduction

Acknowledgements Introduction
This part addresses an aspect of the communication environment missed in the
Eligibility Service messaging implementation in Part 2. That implementation was
based on a messaging scenario where the HL7 message is sent from the Sender to
the Receiver without any type of acknowledgement.

This part is dedicated to implementation of the imaginary Eligibility Service where an HL7
message payload is sent from the Sender to the Receiver and there is a requirement for
the Receiver to acknowledge the message reception by sending a positive or negative
acknowledgement message to the originator.

Scenario Overview

The Sender application submits the HL7v2 formatted query to the HL7 Transformer. The
HL7 Transformer verifies the received message and immediately responds with an
application level acknowledgement message based on the result of the verification. If the
received query has successfully passed the message verification, HL7v2 ACK is sent to the
Sender, otherwise HL7v2 NACK is sent back.

Then, the HL7 Transformer in turn sends the HL7v3 query to the Service. The Service also
verifies the query message and confirms whether it accepts or rejects the content of the
received HL7 message payload.

The processing of the application level acknowledgement occurs in each of the
Application Roles separately, i.e., there is no forwarding of the Service acknowledgement
to the Sender by the HL7 Transformer. This means that if the Sender has received the
positive acknowledgement message, it assumes that the Eligibility Query Request
message is accepted and the Sender can wait for the Eligibility Query Response.

Interaction Model

The Interaction Model for the Eligibility Service (Figure 11-1) shows the interactions
between applications and is depicted in the sequence diagram.



FIGURE 11-1 Imaginary Eligibility Query sequence diagram with acknowledgements

The Application Roles for this part of the project are the same. The difference is that two
additional messages will be implemented as follows:

• ACK^A01 – HL7v2 General Acknowledgement – This message is used to acknowledge
receipt of a message from the Sender by the Receiver. It is never used to initiate a
message interaction; it merely provides a mechanism for a Sending Application to
confirm that a Receiving Application has received its message.

• MCCI_IN000002 – HL7v3 Accept Acknowledgment – Acknowledgment sent by the
Receiver to the Sender and invoked as the receiver responsibility. This message does
not contain a payload.

These message interactions are used for both positive and negative acknowledgements.
For example, to confirm that the Eligibility Query Request message has passed the initial
verification, the acknowledgement message is sent with the AA (Application Accept)
Acknowledgement Code. If an error is found, the Acknowledgement Code is set to AE
(Application Error). There are other error codes, such as AR (Application Reject) if the
receiving application is not available, but those will not be used in our implementation.
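
For reference, in an HL7v2 acknowledgement the code is carried in MSA-1, and MSA-2 echoes the control ID of the original message, for example (the control ID below is made up):

MSA|AA|Q123456789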

Acknowledgement Channels Overview

Besides changes related to sending acknowledgements, there will be two more channels
added to the project implemented in Part 2. These two channels store the HL7v2 and
HL7v3 acknowledgement messages in files.

The Figure 11-2 diagram shows what will be implemented in addition to the already
created channels. For clarity, some connections are not shown on the diagram; thus the
Data Logger channel still receives validation errors along with the message that caused
the validation errors, even if this connection is not explicitly shown.

One additional channel for this project is:

• HL7v3 ACK Channel – receives HL7v3 MCCI_IN000002 messages from the v2-v3
Transformer channel and stores them into a file.

FIGURE 11-2 Acknowledgements implementation plan

Throughout the implementation of acknowledgements we will explore response settings,
message routing, and the creation of HL7v2 and HL7v3 acknowledgement messages.



CHAPTER 12 HL7v3 ACK Channel

HL7v3 ACK Channel


Rather than overloading the HL7v2 to HL7v3 Transformer-ACK channel with the code
that stores the HL7v3 acknowledgement message, and to show how to route
messages from one channel to another, we will create a so called “technical” channel
that does it instead. This channel simply receives an HL7v3 MCCI_IN000002 Accept
Acknowledgment message sent back by the v3 Verification channel and routed by the
HL7v2 to HL7v3 Transformer-ACK channel, and stores this message in the file.

The way the message is received by the HL7v3 ACK channel is explained later.

Summary Tab

Create a new channel or import the HL7v3 ACK channel from the archive provided with
this book, and switch to the Summary tab.

Type the channel name, channel tag and channel description. You may omit the channel
tag if you wish.

FIGURE 12-1 HL7v3 ACK channel Summary and Data Types settings

Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to be HL7v3.x. Leave the other settings unchanged.



Source Connector

This channel is listening for MCCI_IN000002 Accept Acknowledgment messages. The
Source connector is configured as a Channel Reader.

FIGURE 12-2 HL7v3 Ack channel Source connector

There are no settings to configure for the Channel Reader connector type.

Destinations Connector

Switch to the Destinations tab. Rename the destination from Destination 1 to To ACK File
or choose a better name if you like. Change the connector type to File Writer.

FIGURE 12-3 HL7v3 Ack channel Destination connector



Use ${logFolder}, which is mapped from the $('logDir') configuration setting into the
local channel map. For the File Name setting, use the ${FileName} value specified in the
Destination transformer, combined with the current date/time, for example
${FileName}_${date.get("MMMdd_hhmmss")}.xml.

To MCCI File transformer

Add a transformer step, change the type to JavaScript and rename it to File Name. Copy
and paste the code in Source 12-1.

SOURCE 12-1 File Name transformer script


channelMap.put("logFolder", $("logDir"));

if ( 'AA' == msg['acknowledgement']['@typeCode'].toString() )
channelMap.put('FileName', 'MCCI_ACK');
else
channelMap.put('FileName', 'MCCI_NACK');

No inbound or outbound message templates are required.

FIGURE 12-4 HL7v3 Ack channel transformer step

It also sets the FileName Channel Map variable to an appropriate file name based on the
type of the response received. Thus, if the response is positive the file name should start
with MCCI_ACK, otherwise it starts with MCCI_NACK.

Scripts

A Deploy script is needed for this channel as well. Like the HL7v2 ACK channel, it
stores the channel's unique ID in the Global Map for the same purpose: to be
retrieved later by another channel.

Type, or copy and paste the code in Source 12-2 as the Deploy script.



SOURCE 12-2 Deploy script

globalMap.put('v3ACKChannelId',channelId);


FIGURE 12-5 HL7v3 Ack channel Deploy script

Validate connectors and transformer steps and store the changes. Deploy or redeploy
the channel and test it by sending the MCCI message only through this channel. If
everything is correct, there should be a file in the log folder with the appropriate file
name.



CHAPTER 13 HL7v3 Verification ACK Channel

HL7v3 Verification ACK Channel


Sending an acknowledgement is a bit tricky in Mirth Connect. Several steps are needed
to send and process the acknowledgement. First, create a customized ACK message.
Second, add the newly created ACK to the response map. Third, specify the response for
the Source connector.

In addition, the Mirth built-in ACKGenerator class allows only "to generate HL7 v2.x
acknowledgments based on an inbound message". (Mirth API) It is the user's responsibility
to provide the code that creates the HL7v3 ACK message.

This chapter explains all three steps listed above. It will not repeat all the changes required
to the HL7v3 Verification channel (see Chapter 8); only the sources required to send the ACK
message are shown.
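Condensed into one hedged sketch, the three steps look roughly like this (the names
getMCCI_IN000002, V3ACK and MCCI_IN000002 follow the conventions used in this chapter;
in the actual channel the code is split across the transformer and Postprocessor scripts
shown below):

// Step 1 - build the customized HL7v3 ACK in a destination transformer step
var ack = new XML( getMCCI_IN000002() );
ack['acknowledgement']['@typeCode'] = 'AA';
channelMap.put('V3ACK', ack);

// Step 2 - place the ACK into the response map in the Postprocessor script
responseMap.put('MCCI_IN000002', channelMap.get('V3ACK').toString());

// Step 3 - in the Source connector settings, select MCCI_IN000002 in the Response
// drop-down list so the channel returns it to the calling channel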

Destinations Connector

Open the HL7v3 Verification channel for editing or import the HL7v3 Verification-ACK
channel from the package provided with this book. You may rename it to HL7v3
Verification-ACK to differentiate projects. No other changes are required on Summary or
Source connector tabs.

To Patients DB transformer

Switch to the Destinations tab. Select the To Patient DB destination and open the
Transformer for editing.

Create one more step and change the type to JavaScript. Rename the step to Build ACK.
This step builds the MCCI_IN000002 Acknowledgement message and adds it to the
Channel Map. (see Source 13-1)

SOURCE 13-1 Build ACK Transformer script


var ack = new XML( getMCCI_IN000002() );

var sender = msg['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'].toString();
ack['receiver']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'] = sender;

var receiver = msg['receiver']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'].toString();
ack['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'] = receiver;

ack['creationTime']['@value'] = Now('yyyyMMddhhmmss');
ack['id']['@extension'] = UUIDGenerator.getUUID();

ack['acknowledgement']['targetMessage']['id']['@extension'] = msg['id']['@extension'].toString();

ack['acknowledgement']['@typeCode'] = 'AA';
ack['acknowledgement']['acknowledgementDetail']['code']['@code'] = '100';
ack['acknowledgement']['acknowledgementDetail']['text']['@value'] = 'No errors';

channelMap.put('V3ACK', ack);

Validate the script to avoid syntax errors.

FIGURE 13-1 HL7v3 Verification channel Build ACK Transformer step

Return to the channel and save the changes.

To Data Logger transformer

Whereas Build ACK in the To Patient DB destination transformer step creates the positive
acknowledgement message, a new transformer step for the To Data Logger channel is
required to create a negative acknowledgement message.

Return to the Destinations tab, select the To Data Logger destination and open the
Transformer for editing. Create one more step, change the type to JavaScript and rename
the step to Build NACK or something similar. This step is done in a pretty similar way as
for the ACK message. The only difference is that it adds XML schema or Schematron
validation errors to the message. (see Source 13-2)

SOURCE 13-2 Build NACK Transformer script


var nack = new XML( getMCCI_IN000002() );

var sender = msg['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'].toString();
nack['receiver']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'] = sender;

var receiver = msg['receiver']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'].toString();
nack['sender']['device']['asAgent']['representedOrganization']['id']['@controlInformationExtension'] = receiver;

nack['creationTime']['@value'] = Now('yyyyMMddhhmmss');
nack['id']['@extension'] = UUIDGenerator.getUUID();

nack['acknowledgement']['targetMessage']['id']['@extension'] = msg['id']['@extension'].toString();

nack['acknowledgement']['@typeCode'] = 'AE';

if( $('schemaValidationError').length() != 0 ) {

    nack['acknowledgement']['acknowledgementDetail']['code']['@code'] = 'HL7v3';
    nack['acknowledgement']['acknowledgementDetail']['text']['@value'] = $('schemaValidationError');

} else if( $('schematronValidationError').length() != 0 ) {

    nack['acknowledgement']['acknowledgementDetail']['code']['@code'] = 'HL7v3';
    nack['acknowledgement']['acknowledgementDetail']['text']['@value'] =
        $('schematronValidationError').toString().replace(';','').trim();
}

channelMap.put('V3ACK', nack);

Validate the script to avoid syntax errors.

FIGURE 13-2 HL7v3 Verification channel Build NACK Transformer step

Return to the channel and save the changes.



Code Templates

An empty MCCI_IN000002 message is created by the Code Templates function. Save the
HL7v3 Verification-ACK channel if you have not done so. Return to the Channels Tasks
panel and open the Code Templates for editing. Create a new code template, rename it
to getMCCI_IN000002 or something similar, add a description and copy and paste the
code in Source 13-3.

SOURCE 13-3 getMCCI_IN000002 code template script


function getMCCI_IN000002() {

    var mcci = '<MCCI_IN000002UV01 ITSVersion="XML_1.0">' +
        '<id root="2.16.840.1.113883.1.3" extension=""/>' +
        '<creationTime value=""/>' +
        '<versionCode controlInformationRoot="2.16.840.1.113883.11.19373" code="V3PR1"/>' +
        '<interactionId root="2.16.840.1.113883.1.6" extension="MCCI_IN000002UV01"/>' +
        '<processingCode code="D"/>' +
        '<processingModeCode code="T"/>' +
        '<acceptAckCode code="NE"/>' +
        '<receiver typeCode="RCV">' +
            '<device classCode="DEV" determinerCode="INSTANCE">' +
                '<id controlInformationRoot="2.16.840.1.113883.3.40.5.1" controlInformationExtension="Organization"/>' +
                '<asAgent classCode="AGNT">' +
                    '<representedOrganization classCode="ORG" determinerCode="INSTANCE">' +
                        '<id controlInformationRoot="2.16.840.1.113883.3.51.200" controlInformationExtension=""/>' +
                    '</representedOrganization>' +
                '</asAgent>' +
            '</device>' +
        '</receiver>' +
        '<sender typeCode="SND">' +
            '<device classCode="DEV" determinerCode="INSTANCE">' +
                '<id controlInformationRoot="2.16.840.1.113883.3.40.5.2" controlInformationExtension="Organization"/>' +
                '<asAgent classCode="AGNT">' +
                    '<representedOrganization classCode="ORG" determinerCode="INSTANCE">' +
                        '<id controlInformationRoot="2.16.840.1.113883.3.51.200" controlInformationExtension=""/>' +
                    '</representedOrganization>' +
                '</asAgent>' +
            '</device>' +
        '</sender>' +
        '<acknowledgement typeCode="AE">' +
            '<targetMessage>' +
                '<id root="2.16.840.1.113883.1.3" extension=""/>' +
            '</targetMessage>' +
            '<acknowledgementDetail>' +
                '<code code=""/>' +
                '<text value=""/>' +
            '</acknowledgementDetail>' +
        '</acknowledgement>' +
        '</MCCI_IN000002UV01>';

    return mcci;
}

Validate the script and save the changes.



FIGURE 13-3 getMCCI_IN000002 Code Templates function

Scripts

At present we have done only preliminary work by creating the positive and negative HL7v3
acknowledgement messages. Now we need to actually send these messages back to the
sending channel, i.e., to the v2-v3 Transformer channel. In Mirth Connect this is done by
storing the acknowledgement message in the channel's response map in the
Postprocessor script.

Note: This works similarly if you assign a response message in other scripts, except
the Attachment script. However, the recommended place to do it is the
Postprocessor script. (see Figure 2-2)

Postprocessor

Open the HL7v3 Verification-ACK channel for editing and switch to the Scripts tab. Select
the Postprocessor script. Copy and paste the code in Source 13-4.

SOURCE 13-4 Postprocessor script


var ack = channelMap.get('V3ACK');

if ( ack.toString().length > 0 ) try {
    responseMap.put('MCCI_IN000002', ack.toString() );
} catch(err) {
    logger.error('v3 Verification - Postprocessor - Exception: ' + err);
}
return;



FIGURE 13-4 v3 Verification channel Postprocessor script

The main task of this script is to place the acknowledgement message into the channel's
response map. Validate the script and save all changes.

Source Connector

How does the channel send the acknowledgement message?

To do this, we need to specify what message the Source connector sends in response
to a received inbound message. Open the Source connector tab, click the Response
drop-down list under Source Settings and select MCCI_IN000002, previously added to the
Response Map. If you do not see the MCCI_IN000002 value in the list, (re)deploy the
channel and try again.

FIGURE 13-5 HL7v3 Verification channel Source connector response setting

Save all changes and redeploy the channel.



CHAPTER 14 HL7v2 to HL7v3 Transformer ACK Channel

v2-v3 Transformer ACK Channel


The HL7v2 to HL7v3 Transformer-ACK channel is updated to process HL7v3 responses
received from the HL7v3 Verification-ACK channel and, in turn, send positive or
negative HL7v2 responses to the Query Sender channel.

First, we will start by handling and routing the HL7v3 acknowledgement message
received from the HL7v3 Verification-ACK channel. Then, as before, we continue with the
creation of the HL7v2 acknowledgement message to be sent to the Query Sender
channel.

Destinations Connector

We will start with the Response Transformer. Open the v2-v3 Transformer-ACK channel
for editing. Switch to the Destinations tab. Select the To HL7v3 Verification channel
destination and verify that the Ignore Response box under TCP Sender Settings is unchecked;
otherwise your channel will ignore acknowledgement messages. (see Figure 14-1)

FIGURE 14-1 Destination settings

Open the Response Transformer for editing.

To HL7v3 Verification channel response

In the Response Transformer, create a step and change the type to JavaScript. Rename
the step to Route MCCI or choose a better name. This step simply takes the response
message represented by the msg variable and routes it to another channel represented
by the channel ID.

To avoid errors from a mistyped or hard-coded ID, the channel ID for the HL7v3 ACK
channel is taken from the global map, where it was stored by that channel's Deploy script.

SOURCE 14-1 Route MCCI Response script

router.routeMessageByChannelId(globalMap.get('v3ACKChannelId'), msg);



FIGURE 14-2 HL7v2 to HL7v3 Transformer Response Transformer step

No inbound or outbound message templates are required here. Once routed, the
MCCI_IN000002 Acknowledgement message is handled by the Channel Reader specified
as the Source connector for the HL7v3 ACK channel and then stored in the file.
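As a side note, the VMRouter can also route by channel name rather than by channel ID;
an equivalent one-liner, assuming the destination channel is literally named HL7v3 ACK,
would be:

router.routeMessage('HL7v3 ACK', msg);

Routing by the ID stored in the global map, as done here, keeps the script working even
if the channel is later renamed.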

To HL7v3 Verification transformer

The Response Transformer script allows intercepting the incoming acknowledgement
messages sent by the HL7v3 Verification-ACK channel. But the HL7v2 to HL7v3
Transformer-ACK channel also has to return an HL7v2 ACK/NACK message every time a
QBP^E22 query is received. Therefore, one more transformer step is required for each
destination: one for building the positive acknowledgement and one for building the
negative acknowledgement.

On the Destination tab select To HL7v3 Verification destination and open the Transformer
for editing. Add one more step, change the type to JavaScript and rename it to Build ACK
or something similar.

This script uses the Mirth Connect built-in ACKGenerator to generate a skeleton HL7v2
acknowledgement message and then populates it with the appropriate information. The
code for this is in Source 14-2.

SOURCE 14-2 Build ACK Transformer script


var ack;

try {
    var msgACK = ACKGenerator.generateAckResponse(connectorMessage.getRawData(), 'AA', 'SUCCESS');

    ack = new XML( SerializerFactory.getSerializer('HL7V2').toXML(msgACK).toString() );

    ack['MSH']['MSH.4']['MSH.4.1'] = msg['MSH']['MSH.6']['MSH.6.1'].toString();
    ack['MSH']['MSH.6']['MSH.6.1'] = msg['MSH']['MSH.4']['MSH.4.1'].toString();
    ack['MSH']['MSH.7']['MSH.7.1'] = Now('yyyyMMddhhmmss');
    ack['MSH']['MSH.9']['MSH.9.1'] = 'ACK';
    ack['MSH']['MSH.9']['MSH.9.2'] = 'A01';
    ack['MSH']['MSH.10']['MSH.10.1'] = UUIDGenerator.getUUID();
    ack['MSH']['MSH.12']['MSH.12.1'] = '2.7';

    ack['MSA']['MSA.2']['MSA.2.1'] = msg['MSH']['MSH.10']['MSH.10.1'].toString();

} catch(err) {
    logger.error(err);
} finally {
    channelMap.put('ACK_A01', ack);
}

FIGURE 14-3 To HL7v3 Verification Transformer step

Validate the script, return to the channel and save all changes.

To Data Logger transformer

The script for the To HL7v3 Verification destination creates the positive HL7v2
acknowledgement message.

If the inbound QBP^E22 query message has failed validation and been passed to the
logging channel, a negative HL7v2 acknowledgement message needs to be created instead.
Switch to the To Data Logger destination and open the transformer for editing. Add one
more step, change the type to JavaScript and rename it to Build NACK. (see Source 14-3)

SOURCE 14-3 Build NACK Transformer script


var errorList = new Packages.java.util.ArrayList( globalChannelMap.get('errorList') );
var nack;

if( !errorList.isEmpty() ) try {
    var msgNACK = ACKGenerator.generateAckResponse(connectorMessage.getRawData(), 'AR', 'ERROR');
    nack = new XML( SerializerFactory.getSerializer('HL7V2').toXML(msgNACK).toString() );

    nack['MSH']['MSH.4']['MSH.4.1'] = msg['MSH']['MSH.6']['MSH.6.1'].toString();
    nack['MSH']['MSH.6']['MSH.6.1'] = msg['MSH']['MSH.4']['MSH.4.1'].toString();
    nack['MSH']['MSH.7']['MSH.7.1'] = Now('yyyyMMddhhmmss');
    nack['MSH']['MSH.9']['MSH.9.1'] = 'ACK';
    nack['MSH']['MSH.9']['MSH.9.2'] = 'A01';
    nack['MSH']['MSH.10']['MSH.10.1'] = UUIDGenerator.getUUID();
    nack['MSH']['MSH.12']['MSH.12.1'] = '2.7';

    nack['MSA']['MSA.2']['MSA.2.1'] = msg['MSH']['MSH.10']['MSH.10.1'].toString();

    for ( var i = 0; i < errorList.size(); i++ ) {
        var errorDesc = errorList.get(i);
        var errXML = new XML( getErrorSegment(errorDesc) );
        nack.appendChild( errXML );
    }
} catch(err) {
    logger.error('v2-v3 Transformation - Build NACK - Exception:' + err);
} finally {
    channelMap.put('ACK_A01', nack);
}

FIGURE 14-4 To Data Logger Transformer step

Enter and validate the script, return to the channel and save all changes.

Code Templates

The creation of the error segments for the ACK^A01 negative acknowledgement
message is handled by the getErrorSegment function specified in Code Templates. Save the
HL7v2 to HL7v3 Transformer channel if you have not done so. Open Code Templates for
editing. Create a new code template, rename it to getErrorSegment, add a description
and copy the code in Source 14-4.
SOURCE 14-4 getErrorSegment code template script
function getErrorSegment( errorDesc ) {
    var xmlResult;
    if (null != errorDesc) {
        xmlResult = new XML('<ERR/>');
        xmlResult['ERR.1']['ERR.1.1'] = errorDesc.SegmentID;
        xmlResult['ERR.1']['ERR.1.2'] = errorDesc.Sequence;
        xmlResult['ERR.1']['ERR.1.3'] = errorDesc.Position;
        xmlResult['ERR.1']['ERR.1.4'] = errorDesc.Description;
    } else {
        xmlResult = new XML('<ERR><ERR.1><ERR.1.1></ERR.1.1><ERR.1.2></ERR.1.2><ERR.1.3></ERR.1.3><ERR.1.4></ERR.1.4></ERR.1></ERR>');
    }
    return xmlResult;
}

This step shows how to build another XML structure from scratch and assign values to its elements.
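As an illustration (the field values are hypothetical), an error descriptor such as
{ SegmentID: 'MSH', Sequence: 1, Position: 4, Description: 'Unknown sending facility' }
would produce an ERR segment like the following, which the Build NACK step then appends
to the ACK^A01 message:

<ERR>
    <ERR.1>
        <ERR.1.1>MSH</ERR.1.1>
        <ERR.1.2>1</ERR.1.2>
        <ERR.1.3>4</ERR.1.3>
        <ERR.1.4>Unknown sending facility</ERR.1.4>
    </ERR.1>
</ERR>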

FIGURE 14-5 getErrorSegment Code Templates function

Validate the Code Templates script and save the changes.

Scripts

Once the positive and negative HL7v2 acknowledgement messages are created, the
Postprocessor script has to be changed to place one of these messages into the Response
Map.

Postprocessor

Open the v2-v3 Transformer-ACK channel again, switch to the Scripts tab and select the
Postprocessor script. Copy and paste the script in Source 14-5.

SOURCE 14-5 HL7v2 to HL7v3 Transformer channel Postprocessor script


var ack = channelMap.get('ACK_A01');

if ( ack.toString().length > 0 ) try {
    var HL7v2ACK = SerializerFactory.getSerializer('HL7V2').fromXML(ack);
    responseMap.put('ACK_A01', ResponseFactory.getErrorResponse( HL7v2ACK.toString() ) );
} catch(err) {
    logger.error('v2-v3 - Postprocessor - Exception: ' + err);
}
return;

FIGURE 14-6 HL7v2 to HL7v3 Transformer Postprocessor script

Now the proper response message is in the response map and can be specified in the
Source connector response setting. Validate the script and save the changes.

Source Connector

The last step is to change the response settings for the Source connector to specify what
the Source connector sends in response to a received message. Open the Source
connector, click the Response drop-down list under Response Settings and select ACK_A01,
previously added to the Response Map. If you do not see the ACK_A01 value in the list,
(re)deploy the channel and try again.

FIGURE 14-7 HL7v2 to HL7v3 Transformer Source connector Response setting


Channel Implementation Verification

To test the implementation of the HL7v3 acknowledgement message, save all changes
and redeploy all channels. At a minimum, the following channels should be successfully
deployed: Query Sender, v2-v3 Transformer-ACK, v3 Verification-ACK, Data Logger and
HL7v3 ACK.

Prepare the Dashboard by removing all messages that are leftover from previous tests.
Clear the Server Log info panel as well.

Select the Query Sender channel and click Send Message, copy and paste the QBP^E22
message (use the Source 5-4 snippet) and click Send. If everything has been done
correctly, you should see a new MCCI_ACK file in the folder you specified for the HL7v3
ACK Channel To File destination.

Send another QBP^E22 message, this time change the Sending Facility (MSH.4) to ERR3.
There should be a MCCI_NACK file in the folder.



CHAPTER 15 Query Sender ACK Channel

Query Sender ACK Channel


The changes required for the Query Sender ACK channel to handle the HL7v2 response
message coming from the HL7v2 to HL7v3 Transformation ACK channel will be
different. Instead of routing the acknowledgement message to another (technical)
channel, as we did with the v2-v3 Transformer-ACK channel, this time we will re-route the
message back to the same channel, i.e., back to the Query Sender ACK. Consequently, we
will create another destination to store the acknowledgement message in a folder.

You may follow the code scripts in this chapter or use the Query Sender ACK channel
available for you in the archive provided with this book.

To begin with, open the channel for editing and switch to the Source connector tab. Note
that the Source Queue setting is OFF by default; we will change this later to re-route
acknowledgement messages. (see Figure 15-1)

FIGURE 15-1 Query Sender channel Source connector settings

Destinations Connector

Switch to the Destinations tab. Select the To HL7v2-HL7v3 Transformer destination and
verify that the Ignore Response box is unchecked. Open the Response transformer for
editing.

To HL7v2 - HL7v3 transformer response

Create a transformer step and change the type to JavaScript. Rename the step to Route
ACK or choose a better name. The idea is to handle the inbound HL7v2 response

PART III – ACKNOWLEDGEMENTS IMPLEMENTATION 132


message provided in XML format in the msg variable and route this message to the same
channel defined by the channel ID.

FIGURE 15-2 Query Sender Response Transformer step

This script routes the received message back to this channel using the built-in channelId
variable. Copy the script for this step from Source 15-1.

SOURCE 15-1 Route ACK Response Transformer script


var hl7Response = SerializerFactory.getSerializer('HL7V2').fromXML(msg);
router.routeMessageByChannelId(channelId, hl7Response);

Validate the script, return to the channel and save the changes.

ACK File transformer

Open the Destinations tab. Add a new destination, either by using New Destination
under the Channel Tasks panel or by right-clicking in the destination list and selecting the
same menu item. Rename it to ACK File and change the destination type to File Writer.
(see Figure 15-3)

FIGURE 15-3 Ack File Destination connector



If you decide to use another name for this destination, you also need to make matching
changes to the Source connector transformer script that routes inbound messages
(see Source 15-3).

The Directory setting ${logDir} is specified in the Deploy Global Scripts. The File Name
setting uses the destination mappings value ${FileName} set in the destination
transformer, which is discussed below. For now, type the following file name:
${FileName}_${date.get("MMMdd_hhmmss")}.hl7

Do not forget to drag and drop Encoded Data to the Template box.

Add a transformer step, change the type to JavaScript and rename it to File Name. Copy
and paste the code from Source 15-2.

SOURCE 15-2 File Name transformer script


if ( 'AA' == msg['MSA']['MSA.1']['MSA.1.1'].toString() )
channelMap.put('FileName', 'ACK');
else
channelMap.put('FileName', 'NACK');

No inbound or outbound message templates are required or provided; therefore, Mirth
Connect maps the inbound message to the outbound message without changes.

FIGURE 15-4 Ack File destination transformer step

The transformer step also sets the FileName channel map variable to an appropriate file
name based on the type of response received: if the response is positive the file name
starts with ACK, and if it is negative the file name starts with NACK.

Source Connector

Next, the Source connector needs a transformer step that specifies which message goes
to which destination. Open the Source connector and click Edit Transformer. Create a
transformer step and change the type to JavaScript. Rename the step to Set Destinations
or choose a better name.



Type, or copy and paste the code in Source 15-3.

SOURCE 15-3 Set Destinations script


if (msg['MSH']['MSH.9']['MSH.9.1'].toString().equals("QBP")) {
destinationSet.removeAllExcept("To HL7v2-HL7v3 Transformer");
} else if (msg['MSH']['MSH.9']['MSH.9.1'].toString().equals("ACK")) {
destinationSet.removeAllExcept("ACK File");
}

Validate connectors and transformer steps and store the changes.

FIGURE 15-5 Set Destination transformer script

No inbound or outbound message templates are required.

If you try this channel now, you will most likely end up with an infinite loop or a locked
channel, i.e., re-routing will not work as expected. A small change is required on the
Source connector tab to make this work.

Return to the Source connector and set Source Queue to ON (see Figure 15-6).

FIGURE 15-6 Source connector setting to re-route ACK



Now acknowledgement messages from the To HL7v2-HL7v3 Transformer destination are
allowed to enter the Source connector and are intercepted by the Source connector
transformer script (see Source 15-3). This script sends ACK messages down to the ACK
File destination, which, in turn, saves them to files.

Channel Implementation Verification

To test the implementation of the HL7v2 Acknowledgement message, save all changes
and redeploy all channels. At least the following channels should be successfully
deployed: Query Sender ACK, v2-v3 Transformation ACK, v3 Verification ACK, Data
Logger, HL7v2 ACK and HL7v3 ACK.

Prepare the Dashboard by removing all messages that may be leftover from previous
tests. Clear the Server Log area as well.

Select the Query Sender ACK channel and click Send Message, copy and paste the QBP^E22
message (use the Source 5-4 snippet) and click Send. If everything has been done correctly,
you should see a new ACK file that represents the positive acknowledgement in HL7v2
format in the folder you specified for the ACK File destination. There should also be a
MCCI_ACK file representing the positive HL7v3 ACK message.

Send another QBP^E22 message, this time change the message trigger event to ERR2.
There should be a NACK file in the folder. Notice that the MCCI_NACK file does not
appear in the folder since the HL7v3 version of the QBP^E22 message has not been sent
for verification.



PART IV – DICOM

DICOM
CHAPTER 16 DICOM Storage SCU

CHAPTER 17 DICOM Storage SCP


CHAPTER 16 DICOM Storage SCU

DICOM Storage SCU


The Digital Imaging and Communications in Medicine (DICOM) standard is over 20 years
old and has been widely adopted. "DICOM is implemented in almost every
radiology, cardiology imaging, and radiotherapy device (X-ray, CT, MRI, ultrasound,
etc.), and increasingly in devices in other medical domains such as ophthalmology and
dentistry. With tens of thousands of imaging devices in use, DICOM is one of the most
widely deployed healthcare messaging standards in the world." (medical.nema.org)

This section, which contains two channels, implements an imaginary solution that verifies
DICOM information objects and routes DICOM messages accordingly. Information
objects, as you may know, are part of the Service Object Pairs (SOP) Classes. Our solution
checks inbound DICOM files for the Encapsulated PDF Storage SOP Class where an
encapsulated PDF document can be attached to the DICOM message.

As per Mirth Connect fundamentals, each channel may have one source connector and
several destination connectors, that is, a single DICOM Listener and multiple DICOM
Senders. In other words, you can create a DICOM Router which filters and passes
transactions based on the DICOM header data.

Note: Using Mirth Connect to process thousands of high resolution images is quite
challenging for several reasons:
1. Mirth uses the Java-based dcm4che library, which requires careful performance tuning.
2. Mirth serializes DICOM to XML and back to apply tag modifications.
3. Mirth acts only as a Service Class Provider for C-STORE commands.
4. Security, disaster recovery and PACS migration are not addressed in standard Mirth
implementations.

Scenario Overview

Many medical practices use Adobe PDF documents routinely because “Portable
Document Format (PDF) is a file format used to present documents in a manner
independent of application software, hardware and operating systems. Each PDF file
encapsulates a complete description of a fixed-layout flat document, including the text,
fonts, graphics and other information needed to display it.” (Wiki)



PDF has other advantages over DICOM, such as page layout, image compression, and
security (digital signature, password protection) capabilities. Users can also
directly annotate a PDF file. For all of these reasons, DICOM SOP
1.2.840.10008.5.1.4.1.1.104.1, "Encapsulated PDF", allows a PDF file to be encapsulated as
yet another DICOM object.

Nevertheless, since PDF is not yet widespread in the DICOM/PACS world, it is much easier to
handle image-only DICOM files. The channels in this section are devoted to translating
DICOM with an encapsulated PDF into DICOM with the image extracted from that PDF.

As a side note, source DICOM files with encapsulated PDFs or DICOM files with images
are not provided with this book due to their size. You have to find DICOM files elsewhere
or build your own to test the implementation.

Application Roles

There are two channels in this scenario: one acts as the service requestor and the other
acts as the service provider.

 DICOM SCU: A service requestor, also called a Service Class User (SCU) in DICOM,
starts association establishment and sends DICOM files to the service provider. The
DICOM files being sent can be either DICOM with PDF encapsulation or DICOM with
image attachments.
 DICOM SCP: A service provider, also called a Service Class Provider (SCP) in DICOM,
receives DICOM files from the service requestor, extracts the image(s) from the PDF file
and compiles another DICOM file with the image attachment. In addition, the service
provider creates an HL7v2 ORU message with the attached PDF. If the received DICOM
file has only an image attachment, the file is stored without any changes and an HL7v2
ORU message is not created.

The diagram in Figure 16-1 illustrates the game plan for this part of the book. The
DICOM SCU channel takes DICOM files from a folder and submits them to the DICOM
SCP channel. It does not change the DICOM file content. The DICOM SCP channel parses
incoming DICOM files and stores DICOMs and/or ORU_R01 messages in the destination
folder(s). This is the channel that does all major work in our scenario.



FIGURE 16-1 DICOM SCU/SCP channels’ implementation plan

Throughout this implementation we will explore a variety of connectors, some of which
you already know: File Reader and File Writer, DICOM Sender and DICOM Listener.

It is worth noting, however, that channels in this section represent only a skeleton
implementation and not an actual solution.

Summary Tab

Let's start with the DICOM SCU channel. Create a new channel or import the DICOM SCU
channel from the archive provided with this book, and switch to the Summary tab.

Type the channel name, channel tag and channel description. You may omit the channel
tag if you wish.

FIGURE 16-2 DICOM SCU channel Summary and Data Types settings



Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to be DICOM. Leave the other settings unchanged.

Source Connector

This channel reads files from an input folder. To do this, the Source connector is
configured as a File Reader.

FIGURE 16-3 DICOM SCU channel Source connector

The input directory is specified in the Deploy script of the Global Scripts and taken from the
Configuration Map settings, for example:
globalMap.put('dicom_input', $('dicom_input'));

Change Source connector settings, such as Polling Frequency, After Processing Action
and so on, to something suitable for your case.

There are no filters or transformers to configure for this connector.


Destinations Connector

Switch to the Destinations tab. Rename the destination from Destination 1 to To DICOM
SCP or choose a better name if you like. Change the connector type to DICOM Sender.

FIGURE 16-4 DICOM SCU channel Destination connector

Verify that ${DICOMMESSAGE} is in the Template box, otherwise drag and drop DICOM
Message Raw Data from the Destination Mapping box.

Choose other settings such as Remote Host, Remote Port, Local and Remote application
entities fields, etc., as appropriate in your case. Notice that TCP/UDP port 104 is reserved
for DICOM (see RFC 1700).

There are no filters or transformers to configure for this connector. Save all changes and
deploy the channel.



CHAPTER 17 DICOM Storage SCP

DICOM Storage SCP


As outlined in the previous chapter, the DICOM service provider channel, or DICOM SCP,
serves as a simplified DICOM router. It enables connected medical facilities or
modalities (played by the DICOM SCU channel) to share or distribute medical
imaging studies among other systems (played by a destination folder on your local
computer).

Before we start, one thing to pay attention to is the Java Virtual Machine heap size. As the
Java documentation states, "The JVM heap size determines how often and how long the JVM
spends collecting garbage. If you set a large heap size, full garbage collection is slower, but
it occurs less frequently. If you set your heap size in accordance with your memory needs,
full garbage collection is faster, but occurs more frequently." (docs.oracle.com)

If you primarily deal with relatively small ASCII-based HL7 files, you may be unpleasantly
surprised by how big a DICOM message can be. An average size for Computed
Radiography (CR) is about 30 MB, for Computed Tomography (CT) about 32 MB, and for
Magnetic Resonance (MR) about 21 MB. Files of this size may trigger java.lang.OutOfMemoryError:
Java heap space errors that are difficult to reproduce and, therefore, difficult to correct.

One possible preventative action is to increase the default JVM heap size by configuring
the server-side heap size in <Mirth root>\mcserver.vmoptions and the client-side heap
size via the administrator.maxheapsize setting in the …\conf\mirth.properties file.
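A minimal sketch of both settings, assuming a 2 GB server heap and a 1 GB Administrator
heap (the values are examples, not recommendations):

# <Mirth root>\mcserver.vmoptions - server-side JVM heap
-Xmx2048m

# ...\conf\mirth.properties - client-side Mirth Connect Administrator heap
administrator.maxheapsize = 1024m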

Another potential problem area, not confirmed by Mirth, is the default availability of the
SSLv2Hello protocol for the DICOM Sender/Listener connector types during the channel's
deploy stage. This occurs regardless of the configured TLS connection settings for the DICOM
connector types. This may be considered a security flaw, found and fixed in the JDK
implementation. "Many systems that require strong encryption for security will reject any
connections attempting to use the SSLv2 or SSLv2Hello protocol. Whatever protocols
(SSLv2, SSLv3, TLSv1) are selected, the SSLv2Hello protocol remains enabled by default."
(JDK-6285492)

Prerequisites

It is assumed that you are familiar with the following tools and standards:



 DICOM: You do not need to read all 3000 pages of the DICOM standard to
implement channels in this section, but at least some understanding of DICOM basic
concepts is required.
 Apache PDFBox: open source Apache PDFBox library to create and manipulate PDF
documents.
 HL7v2 ORU_R01: Unsolicited Observation Message - Event R01 message structure.

So let's get started with the DICOM Service Class Provider channel.

Summary Tab

Create a new channel or import the DICOM SCP channel from the archive provided with
this book, and switch to the Summary tab.

Type the channel name, channel tag and channel description. You may omit the channel
tag if you wish.

FIGURE 17-1 DICOM SCP channel Summary and Data Types settings

Click Set Data Types and configure inbound and outbound messages for the Source
connector to be DICOM. There will be two destination connectors, one of which is
configured as HL7v2 data type for outbound messages. Leave the other settings
unchanged.



Source Connector

Move to the Source connector tab. This channel waits for DICOM files to be sent to it, so
set the Connector Type to DICOM Listener. If you change connector settings such as the
listener address and port, or the application entity, do not forget to make matching changes
to the DICOM SCU channel's destination settings.

FIGURE 17-2 DICOM SCP channel Source connector

Validate the connector if you wish and then click Edit Transformer to process the
incoming DICOM message. There will be three Source connector transformer steps:
extract a PDF document from the inbound DICOM composite, extract the image from the
PDF document, and (re)build the DICOM composite with the attached image.

Let's get started with the most exciting part of this project.



Extract PDF

Click the Edit Transformer. Create a transformer step and change the type to JavaScript.
Rename the step to Extract PDF or choose a better name. Copy and paste the code in
Source 17-1.

SOURCE 17-1 Extract PDF transformer script


var encDoc = msg..tag00420011;
tmp = msg;

if (encDoc.toString().length == 0) {
    destinationSet.remove("HL7 Writer");
    return;
}

var hexs = encDoc.toString().split('\\');
var bytes = new Array(hexs.length);

for (var i = 0; i < hexs.length; i++) {
    var sbyte = java.lang.Integer.parseInt(hexs[i], 16);
    if (sbyte > 127) sbyte -= 256;
    bytes[i] = new java.lang.Byte(sbyte);
}

delete tmp['tag00420011'];
delete tmp['tag00420012'];

channelMap.put("source_pdf", FileUtil.encode(bytes));

This script tries to extract an attached PDF document from the Encapsulated Document
tag (0042,0011). If there is one, the script continues with PDF document processing;
otherwise the incoming DICOM file is routed to the DICOM File Writer destination
without any changes, which completes the Source connector processing. The second
destination, which creates the HL7v2 file, will not see this message.

If everything goes as expected, the extracted Encapsulated Document value contains a
backslash-separated list of digits that represent a hex sequence, i.e., something like
25\50\44\46\2D\31\… (the ASCII codes for "%PDF-1", the standard PDF file header). The
script turns this sequence into an array of bytes and stores it in the channel map to
consume it later.

Finally, the Encapsulated Document (0042,0011) and MIME Type of Encapsulated Document
(0042,0012) tags are removed from the outbound template. The inbound template
remains unchanged.

Notice that since no inbound or outbound message templates are provided (see the
template boxes under the Message Templates tab), the outbound template variable, i.e.,
tmp, is explicitly assigned the inbound template variable, i.e., msg, to map all tags.



FIGURE 17-3 Extract PDF transformer step

Validate the script and move on.

Extract Image

Create a new transformer step and change the type to JavaScript. Rename the step to
Extract Image or choose a better name. Copy and paste the code in Source 17-2.

SOURCE 17-2 Extract Image transformer script


var inputStream = new Packages.java.io.ByteArrayInputStream(bytes);
var document = new Packages.org.apache.pdfbox.pdmodel.PDDocument.load(inputStream, true);

try {
    var pages = document.getDocumentCatalog().getAllPages();
    var iter = pages.iterator();

    while (iter.hasNext()) {
        var page = iter.next();
        var resources = page.getResources();
        var pageImages = resources.getXObjects();

        var imageIter = pageImages.keySet().iterator();

        while (imageIter.hasNext()) {
            var key = imageIter.next();

            if (pageImages.get(key) instanceof org.apache.pdfbox.pdmodel.graphics.xobject.PDXObjectImage) {

                var image = pageImages.get(key);

                // skip small images such as logos, signatures or bullets
                if (image.getHeight() > 300) {

                    channelMap.put('rows', image.getHeight());
                    channelMap.put('columns', image.getWidth());

                    var awtImage = image.getRGBImage();

                    var out = new Packages.java.io.ByteArrayOutputStream();
                    var result = javax.imageio.ImageIO.write(awtImage, "JPEG", out);
                    channelMap.put("hasImage", result);
                    out.flush();
                    addAttachment(out.toByteArray(), "DICOM");
                }
            }
        }
    }
} catch (ex) {
    logger.error(ex);
} finally {
    // close the output stream only if an image was actually extracted
    if (out) out.close();
    document.close();
}

To deal with PDF documents, Mirth internally uses an open source library, Apache
PDFBox, which allows the creation of new PDF documents, their manipulation and, most
importantly in our case, the extraction of content from PDF documents.

However, you will not find the Apache PDFBox library files in the \server-lib or
\client-lib folders. You also do not need to upload the related jars to the \custom-lib
folder, as that may cause a version conflict. The required files are in the
<root>\extensions\doc\lib folder.

So, the script creates a PDDocument object to iterate through the document's pages and
extracts only PDXObjectImage instances from each page. The script checks the image
size to exclude unwanted images in the PDF document, such as logos, signatures, clipart,
bullets, etc.

The script can also convert the medical imaging study from one format to another
before attaching it to the outbound template. Image parameters, such as height and
width, are stored in the channel map to be used later.



FIGURE 17-4 Extract Image transformer step

You may be puzzled why AttachmentUtil.addAttachment(attachment, content, type)
has three parameters, as the Mirth API doc explains, whereas the actual call of
addAttachment as shown in the script above requires only two. Here is what is happening
under the hood. When Mirth compiles a channel's scripts, it wraps some of the
AttachmentUtil class function calls in the following manner:

function addAttachment(data,type) {
return AttachmentUtil.createAttachment(connectorMessage,data,type);
}

This also happens with getAttachments(). Check Appendix G with script samples for
details.
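A comparable wrapper is generated for getAttachments(); conceptually it looks something
like the sketch below, though this is an assumption for illustration rather than the verbatim
generated code:

function getAttachments() {
    // the generated wrapper passes the current connector message for you
    return AttachmentUtil.getMessageAttachments(connectorMessage);
}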

Update pixel data

Create a new transformer step and change the type to JavaScript. Rename the step to
Update Pixel Data or choose a better name. Since this step can be lengthy, depending on
the medical study type, only a portion of the DICOM data element assignments is shown in
the source snippet (see Source 17-3). This should give you a sense of how to create new
nodes and populate values. It calls a user-defined function, explained later in this chapter.
(see Source 17-11)

SOURCE 17-3 Update pixel data transformer script


if (channelMap.get('hasImage')) {

    setDICOMtag(tmp['tag00020002'], '1.2.840.10008.5.1.4.1.1.7'.length, "UI", "1.2.840.10008.5.1.4.1.1.7");
    setDICOMtag(tmp['tag00020010'], '1.2.840.10008.1.2.4.50'.length,    "UI", "1.2.840.10008.1.2.4.50");
    setDICOMtag(tmp['tag00080016'], '1.2.840.10008.5.1.4.1.1.7'.length, "UI", "1.2.840.10008.5.1.4.1.1.7");

    createSegmentAfter('tag00280002', tmp..tag00200062);
    createSegmentAfter('tag00280004', tmp..tag00280002);
    createSegmentAfter('tag00280006', tmp..tag00280004);
    createSegmentAfter('tag00280010', tmp..tag00280006);
    createSegmentAfter('tag00280011', tmp..tag00280010);
    createSegmentAfter('tag00280100', tmp..tag00280011);
    createSegmentAfter('tag00280101', tmp..tag00280100);
    createSegmentAfter('tag00280102', tmp..tag00280101);
    createSegmentAfter('tag00280103', tmp..tag00280102);

    setDICOMtag(tmp['tag00280002'], 2,  "US", 3);
    setDICOMtag(tmp['tag00280004'], 12, "CS", "YBR_FULL_422");
    setDICOMtag(tmp['tag00280006'], 2,  "US", 0);
    setDICOMtag(tmp['tag00280010'], 2,  "US", parseInt(channelMap.get('rows')));
    setDICOMtag(tmp['tag00280011'], 2,  "US", parseInt(channelMap.get('columns')));
    setDICOMtag(tmp['tag00280100'], 2,  "US", 8);
    setDICOMtag(tmp['tag00280101'], 2,  "US", 8);
    setDICOMtag(tmp['tag00280102'], 2,  "US", 7);
    setDICOMtag(tmp['tag00280103'], 2,  "US", 0);

If a medical imaging study is present and re-attached to the DICOM message, the DICOM
Information Entities have to be updated to reflect the new state of the message.

Thus, the original Media Storage Service-Object Pair (SOP) Class, tag (0002,0002),
defined by the Information Object Definition (IOD) representing Encapsulated PDF
Storage, is set to 1.2.840.10008.5.1.4.1.1.104.1. Therefore, SOPClassUID and
TransferSyntaxUID need to be replaced to better represent the medical imaging study
extracted from the PDF document.

In addition, the information entities related to the Image Pixel Macro attributes are probably
missing from the original DICOM message, so we need to create them as well. We can
do this using the createSegmentAfter() reference function, which, as its name implies,
"creates a new segment and inserts it after the target segment". Notice that the tags, i.e., the
outbound message segments, are created in the same order, in a slightly different way than
we did before with the createSegment() function. (see Source 6-5)

The rest of the script assigns appropriate values to each newly created data element,
including the element's value, its length, the data type of the element and the
group/element pair as a tag attribute.

FIGURE 17-5 Update pixel data transformer step

This concludes the changes required to the Source connector. Validate the scripts, validate
the Source connector and save all changes.

Destinations Connector

Now switch to the Destinations connector tab. Rename the destination from Destination
1 to DICOM File Writer or choose a better name if you like. Change the connector type to
File Writer.

DICOM File Writer

Use ${dicom_output}, assigned in the Deploy Global Scripts and specified in the
Configuration Map, for the output folder.



For the File Name setting use the Destination mappings value ${DICOMfile} specified in
the Destination transformer. (This is discussed later, see Figure 17-7)

FIGURE 17-6 DICOM File Writer channel Destination connector

Make sure that File Type is set to Binary. Delete the default ${message.encodedData} in
the Template box and drag and drop DICOM Message Raw Data from the Destination
Mappings box on the right side. You should see ${DICOMMESSAGE} in the Template box.
(see Figure 17-6)

Click Edit Transformer. Create a new step, which stays as the Mapper type by default. Type
the file name variable, i.e., DICOMfile, into the Variable field and map it to the DICOM
SOPInstanceUID tag: msg['tag00080018'].toString();

You may choose another DICOM element or another way to name the file if you like.



FIGURE 17-7 DICOMfile channel transformer mapping

Both inbound and outbound data types for this destination are DICOM. Notice that
inbound or outbound message templates for the DICOM File Writer destination are not
required since no changes are made to the passing DICOM message.

HL7 Writer

Return to the channel's Destinations tab, add a new destination and rename it to HL7 Writer
or choose a better name if you like. Change the connector type to File Writer. This
destination converts the incoming DICOM message to an outbound HL7v2 ORU^R01
Unsolicited Observation message and attaches the PDF document, initially extracted
from the DICOM in the Source connector, to the OBX segment.

Use ${hl7_output} assigned in the Deploy Global Scripts and specified in Configuration
Map for the HL7 output folder.

For the File Name setting use the value ${HL7file} specified in the Destination
transformer step (see Source 17-10).

Make sure that for this destination File Type is set to Text. Verify that Encoded Data, i.e.
${message.encodedData}, is in the Template box, or drag and drop it from the
Destination Mappings box on the right side.

Click the Edit Transformer. To begin with, verify that the Data Type for the outbound
message template is set to HL7v2.x. Then copy and paste a simplified ORU_R01 message
template. (see Source 17-4)



FIGURE 17-8 HL7 Writer Destination connector

You may use another HL7v2 or HL7v3 message template if you like. For HL7v3 messages
you need to change the Data Type setting as well.

SOURCE 17-4 ORU_R01 Unsolicited Observation Message template


MSH|^~\&|ADM|Sending Organization|ALL|Receiving Organization|||ORU^R01^ORU_R01||D|2.7|||AL|AL
PID|1|||||||M
ORC|RE||||CM||||
OBR|1||||||
OBX|1|ED||||||A|||FF

Create a new transformer step and change the type to JavaScript. Rename the step to
MSH segment or choose a better name. Copy and paste the code in Source 17-5.

SOURCE 17-5 MSH segment transformer script


tmp['MSH']['MSH.7']['MSH.7.1'] = DateUtil.getCurrentDate("yyyyMMddhhmm");

var uuid = UUIDGenerator.getUUID();
tmp['MSH']['MSH.10']['MSH.10.1'] = uuid.substr(24, 12);

This step simply fills some required fields in the MSH segment. Feel free to extend this
code.



FIGURE 17-9 MSH segment channel transformer mapping

Create transformer steps for each of the PID, ORC and OBR segments and change the step
types to JavaScript. Rename these steps, for example, to PID segment, ORC segment and
OBR segment, or choose better names. Copy and paste the code in the Source 17-6 to 17-8
snippets into the appropriate steps.

Note: Mappings in these steps should not be considered normative.


For specific implementations, you may check the IHE Radiology Technical Framework -
http://www.ihe.net/Technical_Frameworks/#radiology

If you are not familiar with HL7v2 message structure, I will provide a short description of
each segment, taken from the standard. Thus, “The PID segment is used by all
applications as the primary means of communicating patient identification information.”
(HL7v2)

SOURCE 17-6 PID segment transformer script


// DICOM Patient ID (0010,0020) to Patient Identifier List
tmp['PID']['PID.3']['PID.3.1'] = msg['tag00100020'].toString();

// DICOM Patient's Name (0010,0010) to Patient Name
tmp['PID']['PID.5'] = msg['tag00100010'].toString();

// DICOM Patient's Birth Date (0010,0030) to Date/Time of Birth
tmp['PID']['PID.7']['PID.7.1'] = msg['tag00100030'].toString();

// DICOM Patient's Sex (0010,0040) to Administrative Sex
tmp['PID']['PID.8']['PID.8.1'] = msg['tag00100040'].toString();

// DICOM Patient's Address (0010,1040) to Patient Address
if (msg['tag00101040'].toString().length > 0)
    tmp['PID']['PID.11']['PID.11.1'] = msg['tag00101040'].toString();

“The Common Order segment (ORC) is used to transmit fields that are common to all
orders (all types of services that are requested).” (HL7)



SOURCE 17-7 ORC segment transformer script
// DICOM Placer Order Number (0040,2016) to Placer Order Number
if (msg['tag00402016'].toString().length > 0)
    tmp['ORC']['ORC.2']['ORC.2.1'] = msg['tag00402016'].toString();

// DICOM Filler Order Number (0040,2017) or Accession Number (0008,0050)
// to Filler Order Number
if (msg['tag00402017'].toString().length > 0)
    tmp['ORC']['ORC.3']['ORC.3.1'] = msg['tag00402017'].toString();
else if (msg['tag00080050'].toString().length > 0)
    tmp['ORC']['ORC.3']['ORC.3.1'] = msg['tag00080050'].toString();

// DICOM Issue Date of Imaging Service Request (0040,2004) or Study Date (0008,0020)
// to Date/Time of Transaction
if (msg['tag00402004'].toString().length > 0)
    tmp['ORC']['ORC.9']['ORC.9.1'] = msg['tag00402004'].toString();
else if (msg['tag00080020'].toString().length > 0)
    tmp['ORC']['ORC.9']['ORC.9.1'] = msg['tag00080020'].toString();

// DICOM Order Enterer's Location (0040,2009) to Enterer's Location
if (msg['tag00402009'].toString().length > 0)
    tmp['ORC']['ORC.13']['ORC.13.1'] = msg['tag00402009'].toString();

// DICOM Order Callback Phone Number (0040,2010) to Call Back Phone Number
if (msg['tag00402010'].toString().length > 0)
    tmp['ORC']['ORC.14']['ORC.14.1'] = msg['tag00402010'].toString();

“In the reporting of clinical data, the OBR serves as the report header. It identifies the
observation set represented by the following atomic observations.” (HL7)

SOURCE 17-8 OBR segment transformer script


// DICOM Study Description (0008,1030) to Universal Service Identifier (required field)
if (msg['tag00081030'].toString().length > 0)
    tmp['OBR']['OBR.4']['OBR.4.1'] = msg['tag00081030'].toString();
else {
    tmp['OBR']['OBR.4']['OBR.4.1'] = "404684003";
    tmp['OBR']['OBR.4']['OBR.4.2'] = "Clinical finding";
    tmp['OBR']['OBR.4']['OBR.4.3'] = "SNOMED CT";
}

// DICOM Filler Order Number (0040,2017) or Accession Number (0008,0050)
// to Filler Order Number
if (msg['tag00402017'].toString().length > 0)
    tmp['OBR']['OBR.3']['OBR.3.1'] = msg['tag00402017'].toString();
else if (msg['tag00080050'].toString().length > 0)
    tmp['OBR']['OBR.3']['OBR.3.1'] = msg['tag00080050'].toString();

// DICOM Study Date (0008,0020) to Observation Date/Time
if (msg['tag00080020'].toString().length > 0)
    tmp['OBR']['OBR.7']['OBR.7.1'] = msg['tag00080020'].toString();

// DICOM Modality (0008,0060) to Diagnostic Serv Sect ID
// requires (0008,0060) to HL7 Table 0074 mapping
if (msg['tag00080060'].toString().length > 0)
    tmp['OBR']['OBR.24']['OBR.24.1'] = msg['tag00080060'].toString();



“The OBX segment is used to transmit a single observation or observation fragment. It
represents the smallest indivisible unit of a report. The OBX segment can also contain
encapsulated data, e.g., a CDA document or a DICOM image.” (HL7)

SOURCE 17-9 OBX segment transformer script


// Observation Value Type
tmp['OBX']['OBX.2']['OBX.2.1'] = "ED";

// DICOM Encapsulated PDF to Observation Value
var pdf = channelMap.get('source_pdf');

tmp['OBX']['OBX.5']['OBX.5.2'] = "Application";
tmp['OBX']['OBX.5']['OBX.5.3'] = "PDF";
tmp['OBX']['OBX.5']['OBX.5.4'] = "Base64";
tmp['OBX']['OBX.5']['OBX.5.5'] = FileUtil.encode(
    org.apache.commons.codec.binary.StringUtils.getBytesUsAscii(pdf) );

The OBX segment in HL7v2 messages is unique in that it allows insertion of a variety of
Base64-encoded documents such as CDA documents, PDFs or images. As the HL7 standard
suggests, "OBX-2-value type contains the data type for this field according to which observation
value is formatted", which is "ED", or Encapsulated Data, in our case. The "OBX-5-Observation
Value contains the MIME package encoded as an encapsulated data type."
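For illustration only (the Base64 value is truncated and the surrounding fields are
simplified), the resulting OBX segment looks roughly like this:

OBX|1|ED|||^Application^PDF^Base64^JVBERi0xLjQK...|||A|||FF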

FIGURE 17-10 OBX segment channel transformer step

The last step is to create an HL7 file name mapping, using code like the following. (see
Source 17-10).

SOURCE 17-10 HL7v2 file name transformer script


// DICOM Patient ID (0010,0020) and DICOM Study ID (0020,0010) as HL7v2 file name
channelMap.put("HL7file", msg['tag00100020'].toString() + "_" + msg['tag00200010'].toString() +
".hl7");



Validate all scripts, validate the destination and save all changes. We are almost done
with this project.

Code Templates

Earlier we decided to use a Code Templates function to populate the newly added
DICOM data elements (see Source 17-3). Open Code Templates for editing and create a
new code template. Rename the function to setDICOMtag. Copy and paste the code in
Source 17-11. Notice that the return value of the function is undefined, so setDICOMtag
behaves more like a procedure executed using the scope chain. Also note the use of an
optional parameter in the function definition.

SOURCE 17-11 setDICOMtag Templates script


function setDICOMtag(node, len, vr, value, /* optional */ tag) {
tag = tag || node.name().toString().substr(3);
node.setChildren(value.toString());
node.@len = len.toString();
node.@tag = tag;
node.@vr = vr.toString();
}
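For illustration (the values are hypothetical), the optional tag parameter can be omitted,
in which case it is derived from the node name, or passed explicitly:

// tag derived from the node name ("tag00280010" becomes "00280010")
setDICOMtag(tmp['tag00280010'], 2, "US", 512);

// tag passed explicitly
setDICOMtag(tmp['tag00280011'], 2, "US", 512, "00280011");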

Validate the script and save all changes. Now the setDICOMtag function is available in
the Reference tab, in the User Defined Functions category when you are editing Source
connector transformer steps.

Scripts

The last thing to do before we deploy the channels and test our implementation is to verify
that the Deploy Global Scripts script contains the required mappings for the incoming and
outgoing folders taken from the Configuration Map (see Figure 17-11).

SOURCE 17-12 Deploy Global Scripts script


globalMap.put('dicom_input', $('dicom_input'));
globalMap.put('dicom_output', $('dicom_output'));
globalMap.put('hl7_output', $('hl7_output'));

Change folders location as you need. Validate scripts and save all changes.

FIGURE 17-11 Configuration Map settings



Channel Implementation Verification

To test our simplified DICOM router implementation, save all changes and redeploy all
channels. Prepare the Dashboard by removing all messages that may be left from
previous tests. Clear the Server Log area as well.

Prepare several DICOM messages, some with encapsulated PDF documents and others
with image attachments. Copy them to the input folder. After an interval specified by the
polling frequency settings, DICOM SCU should pick up these files one after another and
pass them to the DICOM SCP channel. Once DICOM SCP completes processing the files, you
should see DICOM messages in the output folder, all with attached images, as well as
HL7v2 messages with Base64-encoded PDF documents.

FIGURE 17-12 DICOM SCP channel’s statistics

If you double-click the channel's name in the Dashboard window, you should see the
channel's statistics. For the DICOM connector types there is an additional
Attachments tab which allows you (by double-clicking a row in this box) to view the
attached DICOM image. (see Figure 17-12)



PART V – ADVANCING IN MIRTH CONNECT

Advancing in Mirth Connect


CHAPTER 18 Debugging JavaScript in Mirth Connect

CHAPTER 19 Utilizing JMS (Java Message Service)

CHAPTER 20 Polling Web Services

CHAPTER 21 Building Extensions

CHAPTER 22 Tuning Mirth Connect


CHAPTER 18 Debugging JavaScript in Mirth Connect

Debugging JavaScript in Mirth Connect


It is evident that syntax, logical or run-time errors can happen while writing and
deploying JavaScript code. While it is easy to find some error types, such as a syntax
error, debugging others may be incredibly frustrating. This part of the book explains
some of the ways of debugging JavaScript code in Mirth Connect, moving from simple
to more complex techniques.

This chapter starts with the Mirth Connect built-in logging functions, then touches on the
Rhino JavaScript debugger, and finally explains how to attach the debugger to the Mirth
Connect script engine.

Recommended Tools and Packages

The first two subsections of this chapter are based entirely on the tools provided by
Mirth Connect. The third subsection requires installing and being familiar with additional
tools for Java development. You may choose the recommended tools from the list below, or
use similar tools for the same purpose.

The recommended tools are:


 JDK 1.7 or higher;
 Tortoise SVN;
 Eclipse IDE for Java Developers;
 Eclipse JSDT plug-in for Eclipse IDE.

Note: This chapter is based on JDK 1.8.0_45, Eclipse Mars (4.5.1) and Mirth Connect version
3.5.0 source code.

Built-in Logging Functions

Once a script, such as a transformer script, is written, the simplest way to verify its syntax
is to click the Validate Script button. It is capable of finding the most obvious syntax errors,
such as missing semicolons (";"), but nothing more.



To provide insight into how a script performs, sprinkle logger.info or
logger.error function calls here and there. These two functions serialize the passed
variable and output the information to the Server Log box of the Mirth Connect
Administrator.

The java.lang.System.out.println and java.lang.System.err.println functions work
in a similar way, with the difference that they output to the Mirth Connect Server
console window.

If channels have more than one destination, each of which has several transformer steps,
it is a good idea to include the location (channel, destination and step) in the logging
message so the call can be found and deleted or disabled later. Otherwise, it might be a
bit frustrating to find where a particular logging call sits and still writes debugging
information when everything is working as expected.

Alternatively, you might use a global variable to turn the debug logging on and off
for all or some channels. For example, the Global Scripts may contain
globalMap.put('debug', 1). The transformer script then checks the global variable
and outputs the logging information to the Server Log. Use Source 18-1 as an example.

SOURCE 18-1 Using the debugger global variable


if ( 1 == globalMap.get('debug') ) {
    logger.info('ABC Channel – Dest1 – Step1: ' + msg);
    logger.error('ABC Channel – Dest1 – Step1: ' + msg);
    alerts.sendAlert('Transformer alert: ' + msg);
    java.lang.System.out.println('Transformer println: ' + msg);
}

The difference between logger.info and logger.error is in the level of the logger
hierarchy specified for a particular script. If the logging level specified for the transformer
in the log4j file is WARN, the content of logger.error will be printed, whereas the content
of logger.info will be ignored. The log4j.properties configuration file is in the
<Mirth Connect HOME>/conf folder.

Rhino JavaScript Debugger in Standalone Mode

Mirth Connect uses the Rhino JavaScript engine to run scripts and call Java classes. The
Rhino JavaScript engine is a JAR file located in the <Mirth Connect HOME>/client-lib
folder. The version that is included with Mirth Connect v3.5 is rhino-1.7.6.jar.

The Rhino JavaScript engine itself includes a Rhino JavaScript Debugger which is a GUI
based application to help debug scripts.



To launch the Rhino JavaScript Debugger, open a command prompt window in the client-lib
folder and type the following command (Source 18-2):

SOURCE 18-2 Running the Rhino JavaScript Debugger


java -cp rhino-1.7.6.jar org.mozilla.javascript.tools.debugger.Main

The Mozilla Developer Network site provides additional information on how to use the
Rhino Debugger.

Rhino JavaScript Debugger in Embedded Mode

Needless to say, the Rhino JavaScript Debugger in standalone mode is not very helpful
since it does not provide the required flexibility. A better way is to use it in conjunction
with the Mirth Connect Server, within the context of real messages.

This part of the chapter is inspired by a topic on how to develop Mirth Connect using
Eclipse 2 and continues with some additions from the “Real debugging for Mirth JavaScript
channel code” Mirth Connect forum thread 3.

To begin, download the latest version of Mirth Connect from the repository.

Assuming that Tortoise SVN is installed and properly configured do the following:
 Create a new folder that will be used as a destination folder for the source code;
 Open the Tortoise SVN Checkout dialog window;
 As a URL under the Repository use: https://svn.mirthcorp.com/connect/trunk
 Under the Checkout Directory, type or navigate to the folder created in step one.

Once the download is complete, it is time to start Eclipse. The first two projects are
Donkey and Generator since they do not depend on other projects.

Open Eclipse and do the following to create a Java project:


 Create a new project called Donkey;
 Select File > Import, select General > File System and click Next;
 Browse or type the folder that contains the Donkey source code, select all to import
and click Next;
 Once uploading is complete, select Project > Clean, verify that there are no errors
with this project;
 Otherwise select Project > Properties and fix errors such as missing libraries or
dependencies.

2 Located here - http://www.mirthcorp.com/community/wiki/display/mirth/Developing+Mirth+Connect+in+Eclipse
3 Located here - http://www.mirthcorp.com/community/forums/showthread.php?t=7018



Similarly, create the other projects, verifying and fixing missing libraries or dependencies. The
recommended order to create the projects is as follows: Donkey, Generator, Server,
Command, Manager, Client, and Webadmin.

The table below shows the project dependencies.

TABLE 18-1 Mirth Connect projects dependencies


Project Dependencies Comments
Donkey
Generator
Server Donkey
Command Donkey, Server
Manager Donkey, Server
Client Donkey, Server May require libraries located in the Server folder
Webadmin Donkey, Server Required to build the Server

Once all projects are created, select Project > Clean, select all projects in the Clean dialog
window and process them to make sure that all projects have been created successfully.
There might be some warnings but there should not be any errors. Fix all remaining
errors before continuing.

FIGURE 18-1 Using lower case in project names may lead to lost dependencies

The Client project may require some libraries located in the Server project folder, some of
which may be missing, such as userutil-sources.jar. To build the project, use the same
JAR from a previous Mirth installation and later point to the one compiled from the source
code and located in Server\build\client-lib.



Close Eclipse and navigate to the Eclipse workspace folder (located by default in the
<user>\Application Data\Eclipse directory) and find the Server project folder.

From the command line, run the file build.bat, or type ant -f mirth-build.xml. Wait
for the build to complete successfully. When it completes, open Eclipse, refresh and clean
all projects. There should be no errors. Build all projects.

Create Main.java in the Server/src directory as an entry point for the debug
configuration. To launch the Mirth Connect Server from Eclipse type or copy and paste
the code from Source 18-3.

SOURCE 18-3 Main.java code source for launching the Mirth Connect Server
public class Main {
    public static void main(String[] args) {
        com.mirth.connect.server.Mirth.main(args);
    }
}

Refresh the Server project. Select the Server project properties, select the Run/Debug
Settings and create a new Java application setting called, for example, Mirth from Main.

Select the Server project again and run this application.

FIGURE 18-2 Running Mirth Connect Server from Eclipse

Make sure that the Mirth Connect Server runs successfully. Open an Internet Browser and
launch the Mirth Connect Admin as was explained in Chapter 1, in the Mirth Connect
Administrator subsection.

Note: If you are running Mirth under Oracle JRE 1.8, you may encounter a warning
message that your Java application is blocked. To overcome this issue, add the
application location URL to the exception site list as the warning message suggests.



Alternatively, you may launch Mirth Connect Administrator from the Client project as
explained in Chapter 21 “Building Extensions” (see Source 21-1).

Create a simple channel. Configure the Source connector as a Channel Reader and the
Destination connector as File Writer, for example. Create several JavaScript transformer
steps with some loggings and mappings. Specify the inbound or outbound templates if
you like. Save the channel and verify that it works. Thus, if the Destination connector is
configured as File Writer then there should be a file in the specified folder with valid
transformations.

Return to Eclipse and open the Server project. Navigate to the folder
Server/src/com/mirth/connect/server/util/javascript and open the
JavaScriptTask.java file for editing.

Make the changes shown in Source 18-4. Only the required changes are shown, other
lines of code are skipped for convenience. The complete source code is located in
Appendix F.

The Rhino debugger window is called from the executeScript method for every new
context. If the context is released, the debugger window is recreated to handle a new
context. For each channel the context exists from deployment to the first time a message
is submitted. Another attempt to submit a message is handled by another context and
therefore by another debugger window.

SOURCE 18-4 JavaScriptTask.java changes to launch instances of Rhino Debugger


/* (SN) Debugger declaration starts */
import org.mozilla.javascript.tools.debugger.Main;
/* (SN) Debugger declaration ends */
.....

public abstract class JavaScriptTask<T> implements Callable<T> {


.....

/* (SN) Debugger declaration starts */


private static Main rhinoDebugger = null;
/* (SN) Debugger declaration ends */
.....

public Object executeScript(Script compiledScript, Scriptable scope) throws InterruptedException {


try {
// if the executor is halting this task, we don't want to initialize the context yet
synchronized (this) {
ThreadUtils.checkInterruptedStatus();
context = Context.getCurrentContext();
Thread.currentThread().setContextClassLoader(contextFactory.getApplicationClassLoader());
logger.debug(StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode());
/*
* This should never be called but exists in case executeScript is called from a
* different thread than the one that entered the context.
*/
if (context == null) {
contextCreated = true;
context = JavaScriptScopeUtil.getContext(contextFactory);
}



if (context instanceof MirthContext) {
((MirthContext) context).setRunning(true);
}
}

/* (SN) Debugger entry starts */


if (rhinoDebugger == null) {
final String title = StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode();
rhinoDebugger = Main.mainEmbedded(contextFactory, scope, title);
rhinoDebugger.setExitAction(null);
}
rhinoDebugger.attachTo(contextFactory);
rhinoDebugger.setScope(scope);
rhinoDebugger.pack();
rhinoDebugger.setVisible(true);
/* (SN) Debugger entry ends */

return compiledScript.exec(context, scope);


} finally {
if (contextCreated) {
Context.exit();
contextCreated = false;
}
}
}
}

This context behavior leads to the constant appearance of new instances of the Rhino JS
Debugger, each of which has a separate window for each channel's script, i.e., one
window with the Deploy script, one with the Preprocessor script, one for the Source connector
scripts (both filter and transformer) and so on. Examples of Source and Destination
connectors' scripts are given in Appendix G. It might be a good idea to use Window–Tile
to organize the script windows so they do not overlap. You can safely close those instances
that have already been processed.

FIGURE 18-3 Rhino JavaScript Debugger



If you have not defined any script and try to run the Rhino debugger, you may get a
"failed to load…" error (see Figure 18-4).

FIGURE 18-4 User-defined script is not specified error

If any script, such as a transformer script, is defined, you may use the JavaScript debugger
statement to set a breakpoint and invoke the Rhino Debugger functionality. Other than that,
this statement has no effect (see Source 18-5).

SOURCE 18-5 Using debugger statement in Destination Transformer script


logger.info("Destination Transformer script");
debugger;

Once you understand how to use the Rhino JS Debugger running in embedded mode to
trace Mirth Connect script executions, there is a lot of room for improvement.

Eclipse JSDT Debugger in Embedded Mode

The Eclipse IDE features the JavaScript Development Tools (JSDT) which is a JavaScript
IDE. Like the Rhino JS Debugger, the JSDT can be embedded allowing the developer to
remotely control the execution of scripts from the Eclipse IDE.

Prerequisites

 Mirth Connect: It is assumed that the Mirth Connect project is properly loaded and
running as described in the section Rhino JavaScript Debugger in Embedded Mode.
 Eclipse JSDT Plug-in: The JSDT plug-in may need to be installed using the Help > Install
New Software... dialog window in the Eclipse IDE. Select your Eclipse version's main
download site in the Work with list, or type it in. For example, for the Eclipse IDE Mars
release the download site is http://download.eclipse.org/releases/mars. Find
JavaScript Development Tools in the Programming Languages or Web section.



 JSDT Debug Bundles: Additionally, you might need to download the JSDT Debug
Bundles that contain the classes required to embed the debugger (the samples below are
based on Web Tools Platform 3.8.0) - https://eclipse.org/webtools/jsdt/debug.
 Host: Verify your localhost setting in the hosts file (located in
C:\Windows\System32\drivers\etc on Windows), as well as your firewall settings.

Navigate to the plug-ins directory of your Eclipse installation and locate the following
JAR files:
 org.eclipse.wst.jsdt.debug.rhino.debugger
 org.eclipse.wst.jsdt.debug.transport

Add these JARs to the Server project class library for later reference in the source code.

FIGURE 18-5 JSDT classes added to the Server project library

Navigate to src > com.mirth.connect.server.util.javascript and open
JavaScriptTask.java for editing.

Make the changes shown in Source 18-5 to the source code to launch Eclipse JSDT. The
address port (9009) may be different in your case. Only the required changes are shown
in the code snippet; all other lines of code of the JavaScriptTask file are skipped for
convenience. The complete source code is located in Appendix F.

SOURCE 18-5 JavaScriptTask.java changes to launch Eclipse JSDT


.....

/* (SN) Debugger declaration starts */


import org.eclipse.wst.jsdt.debug.rhino.debugger.RhinoDebugger;
/* (SN) Debugger declaration ends */
.....

public abstract class JavaScriptTask<T> implements Callable<T> {


.....

/* (SN) Debugger declaration starts */



private static RhinoDebugger rhinoDebugger = null;
/* (SN) Debugger declaration ends */

public Object executeScript(Script compiledScript, Scriptable scope) throws InterruptedException {


try {
// if the executor is halting this task, we don't want to initialize the context yet
synchronized (this) {
ThreadUtils.checkInterruptedStatus();
context = Context.getCurrentContext();
Thread.currentThread().setContextClassLoader(contextFactory.getApplicationClassLoader());
logger.debug(StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode());

/*
* This should never be called but exists in case executeScript is called from a
* different thread than the one that entered the context.
*/
if (context == null) {
contextCreated = true;
context = JavaScriptScopeUtil.getContext(contextFactory);
}

if (context instanceof MirthContext) {


((MirthContext) context).setRunning(true);
}
}

/* (SN) Debugger entry starts */


if ( null == context.getDebugger() ) {
rhinoDebugger = new RhinoDebugger("transport=socket,suspend=y,address=9009");
try { rhinoDebugger.start(); } catch(Exception ex){System.out.println(ex.getMessage());};
rhinoDebugger.contextCreated(context);
}
contextFactory.addListener(rhinoDebugger);
/* (SN) Debugger entry ends */

return compiledScript.exec(context, scope);


} finally {
if (contextCreated) {
/* (SN) Debugger stop */
try { rhinoDebugger.stop(); } catch(Exception ex){System.out.println(ex.getMessage());};
/* (SN) Debugger stop */
Context.exit();
contextCreated = false;
}
}
}
}

To launch the JSDT in embedded mode, a debug configuration is required. Use the
Eclipse IDE Run > Debug Configurations... menu item to open the configuration dialog
window and double-click Remote JavaScript to create a new configuration. Call it JSDT for
Mirth, for example, with the following settings:

 Set the Connector type to Mozilla Rhino - Attaching Connector.
 Set the host to localhost or specify your computer's IP address, and set the port (9009 in
my case).
 Configure the Source tab to include the Server project.



FIGURE 18-6 Configuring Eclipse JavaScript Debugger

To launch the Mirth Connect Server in debug mode do the following:

 Use Run > Debug Configurations and launch Mirth from Main.
 Launch the Mirth Connect Administrator, create a simple channel, and add several
JavaScript transformation steps, intentionally make an error in a script. Deploy the
channel and submit a message.
 The first time, the channel execution should be suspended (if "suspend=y" is set in the
source code), waiting for the JSDT to connect.
 Launch JSDT using the newly created configuration JSDT for Mirth. In the Eclipse IDE,
switch to Debug mode and verify that both configurations are running.
 Eclipse JSDT should intercept the script and show the JavaScript window to trace the
error (Figure 18-7).

FIGURE 18-7 JSDT Debugger window with channel’s script



Known Issues

JSDT may fail to connect with connection refused, runtime disconnected or other types
of exceptions. There may be many reasons for that – host file, firewalls, old Eclipse WTP
release, etc. Solving such errors is out of scope of this book. As an initial step you may
start from here - wiki.eclipse.org/JSDT/Debug/Rhino/Embedding_Rhino_Debugger

Console Input

Sometimes it is useful to interrupt code execution while debugging, or to execute
different blocks of code based on user input. Since Mirth runs JavaScript code in
headless mode, you cannot use normal dialog windows for that. However, the console
window is still available to you.

First, create a prompt function in Code Templates. The title parameter is what will be
shown to a user as a prompt. (see Source 18-6)

SOURCE 18-6 Console Input code template


function prompt(title) {
    var console = new java.io.BufferedReader(new java.io.InputStreamReader(java.lang.System.in));
    java.lang.System.out.print(title);
    return console.readLine();
}

Validate the script and save it. Return to the channel.

FIGURE 18-8 Console Prompt function template

Once Code Templates are saved, this function should be available as a user defined
function and can be called within a channel as:

var input = prompt("Enter a value: ");



FIGURE 18-9 Console Prompt function template and usage

Create a new test channel or use any existing channel. Add the prompt() function call to
a filter or transformer step. Verify the script, save all changes and re-deploy the channel.
Now, send a message and switch to the Mirth Connect Server console window.

FIGURE 18-10 Mirth Connect Server console window

The server is waiting for user input. Type any value and press Enter. The channel's code
execution should continue.
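
As a hedged example, a filter or transformer step could branch on the console input like
this (the prompt text and variable names are illustrative):

// Pause execution and decide what to do next based on the console input
var answer = prompt('Dump the current message? (y/n): ');
if ('y' == answer) {
    logger.info('Current message: ' + msg);
} else {
    logger.info('Message dump skipped');
}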



CHAPTER 19 Utilizing JMS (Java Message Service)

Utilizing JMS (Java Message Service)

There is one thing that we have not touched yet: the Java Message Service connector.
The Mirth Connect default installation package includes JMS Sender and JMS Listener
with built-in Apache ActiveMQ and JBoss connection templates.

This chapter explains how to use the open source message broker to send and receive
messages and objects using the JMS connectors.

Prerequisites

Channels in this chapter use Apache ActiveMQ as the message broker. Make sure that
ActiveMQ is installed and properly configured to send and receive messages. You may
even build small Java apps to verify the connection before you start. For debugging
purposes, you may also install and configure Eclipse as described in Chapter 18
Debugging JavaScript in Mirth Connect.

The recommended tools are:


 Eclipse IDE for Java Developers;
 Eclipse JSDT plug-in for Eclipse IDE.

JNDI will not be used to connect to the ActiveMQ Message Broker; therefore, the
ActiveMQConnectionFactory class is required:
 activemq-core-5.4.2.jar (copy to the custom-lib folder).

For passing objects, a custom class that implements the Serializable interface is
required. The source code for our examples is provided in Source 19-1.

SOURCE 19-1 Custom class that implements Serializable interface source code
package com.isarp.Samples;

import java.util.ArrayList;
import java.io.Serializable;

public class FaultMessage implements Serializable {

    private static final long serialVersionUID = 123456789L;
    public String msgVersion = "";
    public String msgType = "";
    public ArrayList<String> errors = null;
    public String msgRaw = "";
}



Compile this class and create a JAR file. Move the JAR to the custom-lib folder of your
Mirth Connect installation. For your convenience, the faultmessage JAR, required for
this chapter, is provided in the archive supplied with this book.

Note: Once you have changed the connector types to JMS Sender and Listener, the
channels may not be deployed unless the Message Broker is running.

Scenario Overview

The Sender application submits the HL7v2 formatted query to the HL7 Transformer.

The HL7 Transformer verifies the received HL7v2 query and, if it contains one or more
errors, puts them in the error feed or populates the object's fields, and sends it to the
Message Broker. Otherwise, if the HL7v2 query has successfully passed the validation, it
is transformed to the HL7v3 query message and sent for HL7v3 verification.

If, during the HL7v3 message verification, one or more errors are found, they are put in
the error feed or populated in the object's fields, and sent to the Message Broker.
Otherwise, the HL7 query parameters are written to the database.

The Message Broker receives the error feed or object and passes it along for processing.

The error feed or object from the Message Broker is received by the Data Logger
channel. Validation results are extracted and appended in the log file and the log
database.

FIGURE 19-1 Messaging solution using the Message Broker implementation plan



Three channels participate in this exercise. The v2-v3 Transformer channel verifies the
HL7v2 query message and passes a feed to the Message Broker. The v3 Verification
channel verifies the HL7v3 query message and passes a similar feed to the Message
Broker. The Data Logging channel listens for feeds, extracts validation results from the
feed and writes them into the log file and the MS Access database.

Note that the error feed and object are not sent simultaneously. Channels will have one
configuration to build, send, receive and process the error feed as an XML message; and
another configuration to do the same with the error feed as an object.

All channels for this chapter are taken from Part II Generic Eligibility Service
Implementation of this book, except for the parts responsible for sending the response back.
The same is true for the Global Script and Code Templates, which are used in this
chapter without change. The archive provided with this book contains all channels and
related files for the implementation of the scenario presented in this chapter.

Let us start configuring these three channels for sending error feeds as messages and
error feeds as objects separately.

Sending Messages

For the error feed in this chapter we will be using the same XML error feed template
explained in the Chapter 7 Data Logger channel. Since this template does not follow the
HL7v2 or HL7v3 format, the data types for the senders' outbound and the receiver's inbound
connectors must be changed to XML.

SOURCE 19-2 XML error feed template


<error>
<message>
<creationDate/>
<id/>
<type/>
<trigger/>
<version/>
</message>
<cause></cause>
<attachment type=""></attachment>
</error>

Open the v2-v3 Transformer channel, open the Data Types dialog window and change
the To Data Logger outbound type to XML. Do the same for the HL7v3 Verification
channel.



FIGURE 19-2 Data Types settings for v2-v3 Transformer (left) and v3 Verification (right) channels

Open the Data Logger channel, open the Data Types dialog window and change the
Source connector inbound type to XML.

FIGURE 19-3 Data Logger channel Data Types settings

Save changes in all channels.

v2-v3 Transformer channel

Return to the v2-v3 Transformer channel, switch to the Destinations tab and select the To
Data Logger destination. Change the connector type to JMS Sender.

We will not be using JNDI, therefore make sure that Use JNDI is set to No. Select
ActiveMQ in the connection template window on the right side and populate the
settings.



FIGURE 19-4 JMS Sender Connector destination setting for messages

Set the Destination Name used in send/receive operations to ELIGIBILITY.LOG. You may
use another name as the Destination Name, but it must be consistent across all three
channels.

The last step is to add the template used to send the message, which is
${message.encodedData}. No other changes are required here, and the To Data Logger
destination's filter and transformer steps are left unchanged.

Alternatively, you may clone the existing To Data Logger destination, disable the old
destination and make all the changes described above in the new one.

Note that the JAR containing the ActiveMQConnectionFactory class was removed
from the Mirth Connect installation package; therefore, the JAR with this class must be
explicitly copied to the custom-lib folder.

HL7v3 Verification channel

Now open the HL7v3 Verification channel and make the same changes. Change the To
Data Logger Destination connector type to JMS Sender. Populate the settings, type the
Destination Name, and add ${message.encodedData} as the outbound message
template.

FIGURE 19-5 JMS Sender Connector destination setting for messages

Alternatively, you may clone the existing To Logging Channel destination, disable the old
destination and make all changes in the new one.

Data Logger channel

The last step is to prepare the Data Logger channel. Open the Data Logger channel and
change the Source connector type to JMS Listener.

FIGURE 19-6 Data Logger JMS Listener source connector


Verify that Use JNDI is set to No, populate the settings with the ActiveMQ connection
template, and type the Destination Name, which is ELIGIBILITY.LOG. No other settings are
required here.

Switch to the Source connector Transformer, add a new step and delete the others.
Change the step type to JavaScript, rename the step to Populate Fault Message or choose
a better name.

FIGURE 19-7 Data Logger Source transformer step

Source 19-3 creates an instance of the FaultMessage object and populates the object's
fields.

SOURCE 19-3 Source connector transformer step script


var faultMessage = new Packages.com.isarp.Samples.FaultMessage();

faultMessage.msgType = msg['message']['type'].toString();
faultMessage.msgVersion = msg['attachment']['@type'].toString();
faultMessage.msgRaw = getBase64Decrypted( msg['attachment'] );
faultMessage.errors = new Packages.java.util.ArrayList();

for (var i = 0; i < msg['cause'].length(); i++) {
    faultMessage.errors.add(msg['cause'][i]);
}

channelMap.put('fault', faultMessage);

Once populated, the object is passed to the destination transformer of the same channel
using the Channel Map. Return back to the Source tab and save the Source connector
changes.

Switch to the Destinations tab, select the To Log File destination and open its destination
transformer.



No inbound message template is required - all necessary values, including the incoming
message that caused the errors, are taken from the object. Get the faultMessage object from
the Channel Map and create an XML message from the serialized version of the HL7v2
message stored in the faultMessage.msgRaw field.

Depending on the version of the inbound message, which is expected to be either HL7v2
or HL7v3, the values for logging are retrieved from different spots.

FIGURE 19-8 Data Logger Destination transformer step (snipped)

As a final step, the script in Source 19-4 adds the errors to the Channel Map as a string to be
consumed by the database writer destination. This is done just for script simplicity.

SOURCE 19-4 To Log File Destination transformer script


var faultMessage = channelMap.get('fault');
var msgXML = new XML(
    SerializerFactory.getSerializer(faultMessage.msgVersion).toXML(faultMessage.msgRaw) );

if ( 'HL7V2' == faultMessage.msgVersion ) {
    creationDate = msgXML['MSH']['MSH.7']['MSH.7.1'].toString();
    msgID = msgXML['MSH']['MSH.10']['MSH.10.1'].toString();
    msgTrigger = msgXML['MSH']['MSH.9']['MSH.9.2'].toString();
} else if ( 'HL7V3' == faultMessage.msgVersion ) {
    creationDate = msg['creationTime']['@value'].toString();
    msgID = msgXML['id']['@extension'].toString();
    msgTrigger = msgXML['controlActProcess']['code']['@code'].toString();
}

var datestring;
try {
    datestring = DateUtil.convertDate('yyyyMMddhhmmss-0800', 'yyyy-MM-dd hh:mm:ss', creationDate);
} catch(err) {
    datestring = Now('yyyy-MM-dd hh:mm:ss');
}



channelMap.put('msgDate', datestring);
channelMap.put('msgID', msgID);
channelMap.put('msgType', faultMessage.msgType);
channelMap.put('msgTrigger', msgTrigger);
channelMap.put('msgVerHL7', faultMessage.msgVersion);
channelMap.put('msgErrors', 'Errors: ' + faultMessage.errors.size() );
channelMap.put('CR', '\n');

channelMap.put('msgRaw', msgXML);
channelMap.put('errors', getErrorList());

function getErrorList() {
    var errorList = '';
    for ( var i = 0; i < faultMessage.errors.size(); i++ ) {
        errorList += faultMessage.errors.get(i) + $('CR');
    }
    return errorList;
}

The next destination, the To Log DB destination, consumes some of the values passed
through the Channel Maps and, therefore, the To Log DB destination must have the Wait
for the previous destination checkbox, next to the connector type drop-down list, checked
for this destination to work properly.

FIGURE 19-9 Data Logger Destination transformer step

Verify the script, return back to the channel and save all changes. Now we are ready to
test the entire chain implementation.

SOURCE 19-5 To Log DB Destination transformer script


var dbConn =
    DatabaseConnectionFactory.createDatabaseConnection('sun.jdbc.odbc.JdbcOdbcDriver',
        'jdbc:odbc:QBP_LOG_DB', '', '');

var insertString = "INSERT INTO Messages (CreationDate, UUID, MsgType, [Trigger], [Version], " +
    "[Errors], [Source]) VALUES ('" +
    $('msgDate') + "','" + $('msgID') + "','" + $('msgType') + "','" + $('msgTrigger') + "','" +
    $('msgVerHL7') + "','" +
    channelMap.get('errors') + "','" + channelMap.get('msgRaw') + "');";

var result = dbConn.executeUpdate(insertString);

dbConn.close();



Testing the implementation

Save all changes, redeploy all channels, and take the HL7v2 query message template
(Source 5-4). Change the message trigger event (MSH.9.2) to ERR2. As you may recall
from previous chapters, this creates a randomly malformed HL7v2 message that should
fail the validation.

Send the message. The error feed should be sent to the Data Logger channel, the log file
should be created or updated, and a new record should appear in the MS Access
database.

Send another message with the Sending Facility (MSH.4) field changed to ERR3. This
should create a malformed HL7v3 query message. Verify that the log file and the MS
Access database have been updated respectively.

If everything works as expected, let us move on to passing objects via the Message
Broker.

Sending Objects

Assuming that all prerequisites, i.e., the JAR files for the FaultMessage object and the
ActiveMQConnectionFactory, are in /custom-lib and the Message Broker is up and running,
let's configure the JMS connectors to pass objects.

To begin with, change the Data Types for the senders' outbound connectors and the
receiver's inbound connector to RAW.

Open the v2-v3 Transformer channel, open the Data Types dialog window and change
the To Data Logger outbound type to RAW. Do the same for the To Data Logger
outbound connector of the HL7v3 Verification channel.

FIGURE 19-10 Data Types settings for v2-v3 Transformer (left) and v3 Verification (right)



Next, open the Data Logger channel, open the Data Types dialog window and change the
Source connector inbound type to RAW.

FIGURE 19-11 Data Logger Data Types settings for receiving an object

When connectors are ready to send objects, we need to put some meat on their bones.

v2-v3 Transformer destination transformer

Return to the v2-v3 Transformer channel, switch to the Destinations tab and select the To
Data Logger destination. Open the transformer and add one more JavaScript step.
Rename the step to Serialize Object or give a better name.

FIGURE 19-12 v2-v3 Transformer Destination Transformer step to serialize the object

The plan is to create the object, populate the object's fields, serialize the object, encode it
and pass it to the Destination connector.



The FaultMessage object serialization is done by calling
org.apache.commons.lang3.SerializationUtils.serialize().

The commons-lang3-3.1.jar that contains this class should already be in the Mirth
Connect installation's server-lib\commons folder. If it's not there, download the JAR file
and place it in your custom-lib folder. In this case, you need to restart the Mirth Connect
Server to pick up the new library.

SOURCE 19-6 v2-v3 Transformer script to serialize the object


var objFaultMessage = new Packages.com.isarp.Samples.FaultMessage();

objFaultMessage.msgVersion = "HL7V2";
objFaultMessage.msgType = msg['MSH']['MSH.9']['MSH.9.1'].toString();
objFaultMessage.msgRaw = connectorMessage.getRaw().getContent();
objFaultMessage.errors = new Packages.java.util.ArrayList();

var errorList = new Packages.java.util.ArrayList( globalChannelMap.get('errorList') );

for ( var i = 0; i < errorList.size(); i++ ) {
    var errorDesc = errorList.get(i);
    objFaultMessage.errors.add( errorDesc.Description );
}

var bytes = [];
bytes = new Packages.org.apache.commons.lang3.SerializationUtils.serialize(objFaultMessage);
var encoded = FileUtil.encode(bytes);

channelMap.put("objFaultMessage", encoded );

The serialized object encoding is done by the built-in FileUtil.encode() function.

FIGURE 19-13 v2-v3 Transformer JMS Sender settings to pass the object



Once encoded, the object is ready to be sent via the Message Broker. The last script line
adds the encoded object to the Channel Map to be consumed by the Destination
connector. Verify the script, return to the channel and save the changes.

Replace ${message.encodedData} with the object from the Channel Map, which is
${objFaultMessage} (see Figure 19-13).

Save all changes and redeploy the v2-v3 Transformer channel.

HL7v3 Verification destination transformer

Now open the HL7v3 Verification channel, switch to the Destinations tab and select the
To Data Logger Destination. Open the Transformer and add one more JavaScript step.
Rename the step to Serialize Object or give a better name.

This script works almost the same as the one we created before for the v2-v3 Transformer
channel, with the only difference being that the errors are taken from the XML Schema and
Schematron validation results.

FIGURE 19-14 HL7v3 Verification Destination Transformer step to serialize the object

The FaultMessage object is created and fields are populated with values. It is serialized
using org.apache.commons.lang3.SerializationUtils.serialize() .

Then the serialized object is encoded using the built-in FileUtil.encode() function. The
encoded object, which is ready to be sent via the Message Broker, is added to the
Channel Map.



SOURCE 19-7 HL7v3 Verification script to serialize the object
var objFaultMessage = new Packages.com.isarp.Samples.FaultMessage();
objFaultMessage.msgVersion = "HL7V3";
objFaultMessage.msgType = msg['interactionId']['@extension'].toString();
objFaultMessage.msgRaw = connectorMessage.getRaw().getContent();
objFaultMessage.errors = new Packages.java.util.ArrayList();

if ( $('schemaValidationError').length() != 0 ) {
    objFaultMessage.errors.add( $('schemaValidationError') );
} else if ( $('schematronValidationError').length() != 0 ) {
    objFaultMessage.errors.add( $('schematronValidationError') );
}

var bytes = [];
bytes = new Packages.org.apache.commons.lang3.SerializationUtils.serialize(objFaultMessage);
var encoded = FileUtil.encode(bytes);
channelMap.put("objFaultMessage", encoded );

Verify the script, return to the channel and save the changes.

Similarly to the previous channel changes, replace ${message.encodedData} with the
object from the Channel Map, ${objFaultMessage}.

Save all changes and redeploy the HL7v3 Verification channel.

FIGURE 19-15 HL7v3 Verification Destination settings to pass the object



Data Logger channel source transformer

Once the sender channels are ready, it is time to configure the receiver channel.

Open the Data Logger channel, switch to the Source tab and open the transformer.
Comment out or delete the other steps if there are any, add a new JavaScript step and
rename it to Deserialize Object or give a better name.

The script in Source 19-8 performs the actions in the reverse order to the sender channels.
The first thing to do is to decode the received raw message. This is done by calling
the built-in FileUtil.decode() function.

Once decoded, the byte array is deserialized into the object using the
org.apache.commons.lang3.SerializationUtils.deserialize() function.

The last step is to add the object to the channel map, so that the object will be
consumed by the destination transformer.

FIGURE 19-16 Data Logger Source Transformer step to deserialize the object

The Data Logger Source connector's settings for the JMS Listener have not been
changed, and are the same as for passing the message (see Figure 19-6).

SOURCE 19-8 Data Logger Source Transformer script

var bytes = [];
bytes = FileUtil.decode( connectorMessage.getRaw().getContent() );

var objFaultFeed = new Packages.com.isarp.Samples.FaultMessage();
objFaultFeed = new Packages.org.apache.commons.lang3.SerializationUtils.deserialize(bytes);

channelMap.put('fault', objFaultFeed);



Data Logger channel destination transformer

Save the changes for the Source transformer and switch to the Destinations tab.

Select the To Log File destination and open the transformer. If you have not followed this
chapter by implementing the JMS message-passing scripts, then create a step, change
the type to JavaScript and rename it to Extract from Object or give it a better name.
Otherwise, you already have the Extract Log Data step, which is the same, and no other
changes are required here.

FIGURE 19-17 Data Logger Destination transformer script (snipped)

The To Log DB destination settings and the script are similar to those used for the
message-passing implementation explained in the previous section of this chapter, so no
changes are required.

Channels Implementation Verification

Make sure that the custom-lib folder contains all required Java packages. If not, download
the same or the latest versions of the required packages from the Internet as needed (see
Figure 19-18).

Save all changes, redeploy all channels, and take the HL7v2 query message template
(Source 5-4). Change the message trigger event (MSH.9.2) to ERR2.



Send the message. The FaultMessage object should be sent to the Data Logger channel,
the log file should be created or updated, and a new record should appear in the MS
Access database.

FIGURE 19-18 custom-lib folder content

Send another message with the Sending Facility (MSH.4) field changed to ERR3. This
should create a malformed HL7v3 query message. Now the HL7v3 Verification channel
should send the FaultMessage object. Verify that the log file and the MS Access
database have been updated respectively.



CHAPTER 20 Polling Web Services

Polling Web Services


Sometimes there are tasks that fall outside of the typical HL7 request-response patterns.
One such task is the need to update data on a client computer on a regular basis or at
intervals, assuming the data provider does not have an effective publish/notification
mechanism available for clients. In such cases the client decides on its own when to
request the data, sends a request to the data provider and waits for a response. Once the
response is received and analyzed, the new values are compared with the values stored
in the local database and the data is updated as needed.

Mirth Connect includes Web Service Listener and Web Service Sender connectors, but
neither of them seems to be directly applicable to the data polling task on its own. There
should be something else that periodically starts the engine.

This chapter is based on an initial request that came from one of this book's readers. The
content of this chapter has been done independently from the thread posted on the
Mirth forum, even if the former somehow repeats the latter.

This chapter explains how to set up and configure Web Service Sender connectors to
periodically poll data from several publicly available Web Services. As a prerequisite, your
development environment should have access to the Internet and firewalls along the way
should not block Mirth Connect requests. You also should be familiar with WSDL
definitions.

Scenario Overview

This chapter results in a simple channel implementation with a requester mechanism.
In addition to that, the channel must implement a facility with which it can periodically
poll the data provider for a response.

The second task will be done by configuring the Source connector as a JavaScript
Reader. However, to manually trigger the channel during testing, we will be using a
Channel Reader.

The first task, i.e., requesting data, will be done by configuring the Destination connector
as a Web Service Sender. The received result will be logged so that it appears in the
Server Log info panel.



Summary Tab

Create a new channel or import the Web Service Polling from the archive provided with
this book and switch to the Summary tab.

Type the channel name, channel tag and channel description.

Click Set Data Types and configure inbound and outbound messages for both Source
connector and Destination connector to XML. Make sure that Strip Namespaces boxes for
Source, Destination 1 and Response are unchecked (see Figure 20-1).

FIGURE 20-1 Set Data Types settings

Leave other settings for this channel unchanged. The Initial State of the channel is
Started.

Source Connector

Switch to the Source tab and change the Connector Type to JavaScript Reader. Notice that
the Polling Type is set to Interval and the Polling Frequency is set to 5000 ms (5 seconds) by
default. Change these values if required.

Later on, you may add code to build more complex scheduling that executes the polling
request at specific times; a minimal sketch is shown below.
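
The sketch assumes that returning a string from a JavaScript Reader creates one message
for the destinations, while returning nothing leaves the channel idle for that polling cycle:

// Illustrative JavaScript Reader script: poll only during working hours (08:00-17:59)
var hour = new java.util.GregorianCalendar().get(java.util.Calendar.HOUR_OF_DAY);
if (hour >= 8 && hour < 18) {
    return '<poll/>';  // dummy payload; the Web Service Sender builds the real request
}
// outside the window: return nothing so no message is generated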

For testing purposes, however, I changed the Source connector to a Channel Reader, which
requires no settings.



Destination Connectors

Switch to the Destinations tab. Rename the destination from Destination 1 to Currency
Converter Service. You may use another name if you like. To send a request, change the
connector type to Web Service Sender.

Currency Converter Service (SOAP 1.1)

Type or copy the following URL into the WSDL URL field:
http://www.webservicex.net/CurrencyConvertor.asmx?WSDL

By clicking the Get Operations button, the Mirth Connect Administrator populates the Service
and Port fields with values provided by the service. Mirth v3.1 introduced a new feature
allowing you to select a port from the list if more than one binding declaration is defined in
the WSDL. Click the Port/Endpoint drop-down list and select the SOAP 1.1 binding (Figure 20-2):
{http://www.webserviceX.NET/}CurrencyConvertorSoap

FIGURE 20-2 Web Service Sender destination settings by default

The next step is to click the Generate Envelope button, which populates the SOAP Action field
with http://www.webserviceX.NET/ConversionRate and the SOAP Envelope field with the
code shown in the code snippet (Source 20-1).

SOURCE 20-1 Web Service SOAP 1.1 message template


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:web="http://www.webserviceX.NET/">
   <soapenv:Header/>
   <soapenv:Body>
      <web:ConversionRate>
         <web:FromCurrency>?</web:FromCurrency>
         <web:ToCurrency>?</web:ToCurrency>
      </web:ConversionRate>
   </soapenv:Body>
</soapenv:Envelope>

Replace “?” with currencies of your choice, for example:


<web:FromCurrency>USD</web:FromCurrency>
<web:ToCurrency>CAD</web:ToCurrency>

The next step is to extract the received conversion rate value. On the Destinations
tab, click Edit Response in the Channel Tasks navigation bar, create a new Response
Transformer step, and change the step type to JavaScript.

The Extract Rate script (see Source 20-2) logs the received response, extracts the required
value ignoring namespaces and logs the result. No inbound or outbound templates are
required here, therefore these boxes are left empty.

SOURCE 20-2 Extract Rate response transformer step script


logger.info( "Currency Server Response: " + msg.toString() );

var rate = msg.*::Body.*::ConversionRateResponse.*::ConversionRateResult.toString();


logger.info("The current exchange rate is: " + rate);

Save the changes, and deploy or redeploy the channel.

FIGURE 20-3 Response Transformer script

Now, try to send a request to the Exchange Rate web service provider; see the Channel
Implementation Verification section below for details. You should get a valid response
from the service.

Currency Converter Service (SOAP 1.2)

The same Currency Converter Service also supports SOAP 1.2 binding. To try it, open the
channel for editing, select the Destinations tab, select the Currency Converter Service destination,
click the Port/Endpoint drop-down list and select the SOAP 1.2 binding (see Figure 20-4):

{http://www.webserviceX.NET/}CurrencyConvertorSoap12



Click the Generate Envelope button again to populate the SOAP Action field and the SOAP
Envelope field with the new envelope shown in the code snippet (see Source 20-3).

SOURCE 20-3 Web Service SOAP 1.2 message template


<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
               xmlns:web="http://www.webserviceX.NET/">
   <soap:Header/>
   <soap:Body>
      <web:ConversionRate>
         <web:FromCurrency>?</web:FromCurrency>
         <web:ToCurrency>?</web:ToCurrency>
      </web:ConversionRate>
   </soap:Body>
</soap:Envelope>

Similarly, replace “?” with currencies of your choice.

FIGURE 20-4 Web Service Sender destination settings for SOAP 1.2 binding

The Extract Rate script remains the same and no other changes are required. Save the
destination changes, and deploy or redeploy the channel.

Now, try to send a request to the Exchange Rate web service provider. You should get a
valid response from the service.

Stock Quote service

The same service provider offers a quote service. Create another destination, call it Quote
Service and change the connector type to Web Service Sender.

Type or copy the following URL into the WSDL URL field:
http://www.webservicex.net/stockquote.asmx?WSDL
Repeat the steps we did for the Currency Converter Service. From the Port/Endpoint
drop-down list, select the SOAP 1.1 or 1.2 binding and then click Generate Envelope. Replace
<web:symbol>?</web:symbol> with the stock symbol of your preferred company.

FIGURE 20-5 Stock Quote Service destination settings for SOAP 1.2 binding

Before you continue with the Response Transformer script, make sure that Set Data Types
settings on the Summary tab have Strip Namespaces boxes for this destination and
corresponding response unchecked (see Figure 20-6).

FIGURE 20-6 Set Data Types settings

Add a Response Transformer step to extract required data. For simplicity, my response
transformer step just logs the closing price (see Source 20-4).

SOURCE 20-4 Extract Stock response transformer step script


logger.info( "Stock Quote Server Response: " + msg.toString() );

var msgResponse = msg.*::Body.*::GetQuoteResponse.*::GetQuoteResult.toString();


msgResponse = msgResponse.replace("&lt;", "<");

PART V – ADVANCING IN MIRTH CONNECT 196


msgResponse = msgResponse.re place("&gt;", ">");

tmp = new XML(msgResponse);


logger.info( "Quote: " + tmp['Stock']['Last'].toString() );

Disable the Currency Converter Service destination if you like. Save all changes and re-test
the channel again.

Extend the Response Transformer step as required to extract other data. You may also
consider defining a company symbol in the Source connector and passing it as a variable
to the Destination connector, as shown in the sketch below.
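
For example, a Source transformer step could put the symbol into the Channel Map (the
variable name stockSymbol is illustrative), and the SOAP Envelope would then reference
it as <web:symbol>${stockSymbol}</web:symbol> instead of a hard-coded value:

// Define the company symbol once so the Web Service Sender envelope can reference it
channelMap.put('stockSymbol', 'MSFT');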

IHE Gazelle CDA Validator service

Let's try some services related to HL7. For example, IHE Gazelle provides publicly
available Object and CDA Validator services, which validate a given CDA document against
IHE-defined Schematron rules. There are two end points, so we will try both of them.

Create yet another destination or a new channel, call it IHE Gazelle ObjectValidator and
change the connector type to Web Service Sender.

Type or copy the following URL into the WSDL URL field:
http://jumbo.irisa.fr:8080/SchematronValidator-SchematronValidator-ejb/GazelleObjectValidatorWS

Repeat the same steps we did before for the Currency Converter Service, i.e., Get Operations
and Generate Envelope. Before actually validating a CDA document, you may need to
retrieve all available Schematron sets by selecting GetAllSchematron in the Operation list and
clicking Generate Envelope. Save all changes and deploy the channel. Since
GetAllSchematron does not pass any message to the service, send a dummy message to
initiate the communication. Save the service response, which you may need later to select
the Schematron rules set required to validate a particular CDA document template.

As the next step, select validateObject in the Operation list and click Generate Envelope. Open
the Destination Transformer for editing, create a new JavaScript step and type the following
code to encode the inbound CDA document (see Source 20-5).

SOURCE 20-5 Base64 encode transformer step script

channelMap.put('cdaEncoded', getBase64Encrypted(msg.toString()) );

Note that it uses the Code Templates getBase64Encrypted() function (see Source 6-7).
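
The getBase64Encrypted() code template itself was defined earlier in the book (Source 6-7).
If you do not have it at hand, a minimal stand-in – named differently here to avoid clashing
with the original – could look like this:

// Hypothetical replacement for the getBase64Encrypted() code template:
// Base64-encode a string using the built-in FileUtil helper.
function encodeToBase64(str) {
    return FileUtil.encode(new java.lang.String(str).getBytes("UTF-8"));
}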

Return to the IHE Gazelle ObjectValidator destination connector and change the
base64ObjectToValidate value to retrieve the channel map variable - ${cdaEncoded}.



Similarly change the xmlMetadata node value to point to a selected Schematron set, e.g.,
IHE - PCC - Physician Note (PN). (See Figure 20-7)

Make sure that Set Data Types settings on the Summary tab have Strip Namespaces
unchecked for this destination and corresponding response.

FIGURE 20-7 IHE Gazelle Object Validation Service destination settings

To handle the web service response, open the Response Transformer for editing. Create
a new JavaScript step and type the following code (see Source 20-6).

SOURCE 20-6 CDA validation results step script


var msgResponse = msg.toString();

msgResponse = msgResponse.replace(/&lt;/g, "<");
msgResponse = msgResponse.replace(/&gt;/g, ">");
msgResponse = msgResponse.replace(/&amp;/g, "&");
msgResponse = msgResponse.replace(/&#13;/g, "");

logger.info(msgResponse);

Test the service by submitting any CDA document.

Now, let's try the IHE Gazelle CDAValidator service. Create yet another destination or
clone the IHE Gazelle ObjectValidator destination, rename it to IHE Gazelle CDAValidator
and, if you've created a new destination, change the connector type to Web Service
Sender.

Type or copy the following URL into the WSDL URL field:
http://gazelle.ihe.net/CDAGenerator-CDAGenerator-ejb/CDAValidatorWS?wsdl



Repeat all steps we did before for the IHE Gazelle ObjectValidator, including creating
transformer and response JavaScript steps. Make sure that Set Data Types settings on the
Summary tab have Strip Namespaces unchecked for this destination and corresponding
response.

FIGURE 20-8 IHE Gazelle CDA Validation Service destination settings

Change the base64Document node value to retrieve the encoded document. Change the
validator node value to indicate the selected Schematron set, e.g., HL7 - CDA Release 2.
(See Figure 20-8)

Channel Implementation Verification

To verify the channel(s) with Web Service Sender destinations, save all changes and
redeploy the channel(s). In the Mirth Connect Administrator Dashboard window, select
the Web Service Polling channel, right-click on the channel name and select Send Message from
the drop-down menu. Since our Source connector contains no transformation steps or
message templates, there is no need to enter any values or messages into the Message
field.

Select only the destination connector that you want to check, for example, the Currency
Converter Service, and click Process Messages (see Figure 20-9). If you select all
destinations, after a while you will receive answers from all web services configured in
this chapter.



FIGURE 20-9 Testing web services implementation

After a while, the Server Log info panel on the Dashboard should be populated with the
logged information, such as the response message sample and the extracted rate value, or it
may show an exception if the service is not available at the moment.



CHAPTER 21 Building Extensions

Building Extensions
It is just a matter of time before you come up with the idea of building your own Mirth
Connect extension, either a connector or a plug-in, to keep up with ever-changing
business processes. One of the immediate needs is to support the HTTPS protocol for
secure communication. Soon after, you will find that building a Mirth Connect
extension is terra incognita, heavily guarded by Mirth Co (Quality Systems, Inc.) - no
publicly available templates whatsoever, and there is not much written about it. There
are also some limitations, discussed later, that do not allow building publicly
distributable extensions.

This chapter guides you through building a JSON Writer Destination connector and
reveals some of the major steps that need to be taken to build similar Mirth Connect
extensions. This JSON Writer connector takes an outbound message in HL7v2, HL7v3 or
XML format only, converts the message to JSON format and writes it into a user-specified
file. If the outbound message is in any format other than these three (for example, RAW),
the JSON Writer throws an exception.

The JSON format has been selected for no particular reason and the connector
implementation is not meant to be a fully-fledged implementation of the XML to JSON
transformer.

The content of this chapter is based on the Eclipse IDE. If you are using IDEA or any other
editing tool, similar options probably exist.

A word of caution: the official Mirth distribution is signed with a CA-issued Mirth
certificate. This means that if you build an extension using your own certificate, or
Server/keystore.jks as explained in this chapter, you need to build the Server project
using that certificate and replace all the client-lib and extensions JARs.

Prerequisites

This chapter is based on the activities taken in Chapter 18 Debugging JavaScript in Mirth
Connect. You have to have the Mirth Connect source code downloaded, the projects created
and built, and the debug configuration to run the Mirth Connect Server created. If you
have skipped Chapter 18, I recommend that you complete the steps described there
before continuing.



Besides the tools listed in Chapter 18, the recommended additional tools are:
 Eclipse WindowBuilder plug-in or similar GUI designing tool;
 Java SDK or jarsigner.exe downloaded separately;
 org.json library for Java (http://www.json.org/java/index.html).

Preliminary steps

To begin with, we need another debug configuration to be able to run the Mirth Connect
Administrator from Eclipse. Open the Client project and create Main.java in the Client/src
directory; it will be used as the entry point for the debug configuration that launches the Mirth
Connect Administrator from Eclipse. In the file, type or copy Source 21-1.

SOURCE 21-1 Main.java code source for launching Mirth Connect Administrator
public class Main {
    public static void main(String[] args) {
        // arguments: server URL, Mirth Connect version, username, password
        com.mirth.connect.client.ui.Mirth.main(new String[] {
            "https://localhost:8443", "3.5.0", "admin", "admin" });
    }
}

Refresh the Client project. Open the Client project properties, select Run/Debug
Settings and create a new Java application configuration called, for example, Mirth Client
from Main.

FIGURE 21-1 Running Mirth Connect Administrator from Eclipse

Build the project. Launch Mirth from Main first, and Mirth Client from Main
second. Make sure that both the Mirth Connect Server and the Administrator run successfully.
Now our environment for building the extension is ready.
Creating Templates

Due to the lack of publicly available extension templates, we will use what is always
available, i.e., one of the connectors shipped with the Mirth Connect installation. I
think that the Document Writer Destination connector does exactly what we want – it
allows entering the folder and the output file name, it has a verification button to check
if the destination folder is available, and it has several options to control the file writing
process.

Open the Client project and create the package com.mirth.connect.connectors.json.


Open the Server project and similarly create the com.mirth.connect.connectors.json
package.

At a bare minimum, the following files are required for these packages:

TABLE 21-1 JSON Writer extension files


File Name Project Description
JSONWriter.java Client Connector GUI, and getter and setter for properties.
JSONWriter.form Client Connector GUI in the XML format. Used by the
NetBeans or similar GUI designer tool.
JSONDispatcher.java Server The class that transforms the XML to JSON and writes
the file.
JSONDispatcherProperties.java Server Properties class with getters and setters.
JSONConnectorServlet.java Server Required for the Test Write verification function.
JSONConnectorServletInterface.java Server Required for the Test Write verification function.
destination.xml Server Extension description file.

Create the files listed in Table 21-1 and copy-paste the source code from the
corresponding Document Writer extension files, changing classes and properties
declarations accordingly.

The .form file is only used by the NetBeans GUI designer. You do not need to deploy
this file with the connector. If you are using a GUI editor other than NetBeans, editing the
Swing GUI code contained in JSONWriter.java will put its XML representation in
JSONWriter.form out of synch.

I assume you are proficient in Java development, so I will not list the source code for all
of these files; you may find it in the archive provided with this book. Below I emphasize
only the major obstacles that you might bump into. So let's get started.

First, update the destination.xml that describes the JSONWriter Destination


connector. Open the destination.xml for editing, copy and paste Source 21-2.



The JSONWriterService class is invoked when the user presses the Test Write button
next to the Directory folder field. No changes are required to the invoke() method of
this class.

SOURCE 21-2 destination.xml code


<connectorMetaData path="json">
  <name>JSON Writer</name>
  <author>Mirth Corporation</author>
  <pluginVersion>3.5.0</pluginVersion>
  <mirthVersion>3.5.0</mirthVersion>
  <url>http://www.mirthcorp.com</url>
  <description>This connector allows Mirth to create JSON documents.</description>
  <clientClassName>com.mirth.connect.connectors.json.JSONWriter</clientClassName>
  <serverClassName>com.mirth.connect.connectors.json.JSONDispatcher</serverClassName>
  <sharedClassName>com.mirth.connect.connectors.json.JSONDispatcherProperties</sharedClassName>
  <library type="CLIENT" path="json-client.jar" />
  <library type="SHARED" path="json-shared.jar" />
  <library type="SERVER" path="json-server.jar" />
  <library type="SERVER" path="lib/org.json.jar" />
  <apiProvider type="SERVLET_INTERFACE" name="com.mirth.connect.connectors.json.JSONConnectorServletInterface"/>
  <apiProvider type="SERVER_CLASS" name="com.mirth.connect.connectors.json.JSONConnectorServlet"/>
  <transformers></transformers>
  <protocol>json</protocol>
  <type>DESTINATION</type>
</connectorMetaData>

Client side

Let us rework the JSON Writer connector's user interface by deleting unnecessary fields
and buttons, and adding the required ones. Open the Client project, navigate to the json
package, right-click on JSONWriter.java and select Open With > WindowBuilder Editor,
then click the Design tab.

Notice that an attempt to open JSONWriter.java, or any other destination extension
provided with the Mirth package, in the Eclipse WindowBuilder designer may lead to a
null pointer exception caused by javax.swing.JComponent called from the
org.syntax.jedit.TextAreaPainter.paint() method. If that is the case for you, open
TextAreaPainter.java and temporarily comment out the entire paint() method.
Then clean all projects (Project > Clean).

Open JSONWriter in the WindowBuilder designer mode and make the following
changes:
 Delete the password text field, and delete one of the radio button groups, the
Encryption group for example.
 Change the remaining radio button group title to Pretty Print, and the radio buttons
to Yes and No.
 Delete any associated event handlers.



The Pretty Print option added here changes the way JSON is written to the file – if Yes is selected,
the output file is formatted to be easily read by the end user; if No, the data is
written as a single line.
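
To see exactly what this option toggles, here is a small standalone sketch that is not part of the connector; it uses the org.json library the connector depends on, and the class name and sample XML are made up for illustration:

import org.json.JSONObject;
import org.json.XML;

public class PrettyPrintDemo {
    public static void main(String[] args) {
        // convert a trivial XML fragment to JSON, then print it both ways
        JSONObject json = XML.toJSONObject("<patient><id>123</id><name>Doe</name></patient>");
        System.out.println(json.toString());  // Pretty Print = No: one single line
        System.out.println(json.toString(4)); // Pretty Print = Yes: indented by 4 spaces
    }
}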

FIGURE 21-2 Editing the JSONWriter.java GUI

Return to the source code, and save the changes. As a friendly reminder, do not forget to
uncomment changes made to the method TextAreaPainter.paint() if you have not
done so yet.

Server side

Before we start, make sure that the org.json library is added to the list of available
libraries for the Server project. In case you have forgotten how, switch to the Server project,
open Project > Properties, click Java Build Path, switch to the Libraries tab
and click the Add button to include the library.

The next step is to update JSONDispatcherProperties.java to include a new
prettyPrint property and delete the unused properties left over from the Document
Writer. I will leave you with this exciting endeavor; return once you are done.
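
If you want a starting point, the shape of the change is roughly as follows. This is a minimal sketch, not the complete Mirth properties class; it only assumes that the property is exposed through isPrettyPrint(), which is what the dispatcher code later in this chapter calls.

public class JSONDispatcherPropertiesSketch {

    // true = indent the JSON output, false = write it as a single line
    private boolean prettyPrint = false;

    public boolean isPrettyPrint() {
        return prettyPrint;
    }

    public void setPrettyPrint(boolean prettyPrint) {
        this.prettyPrint = prettyPrint;
    }
}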

Once JSONDispatcherProperties.java is updated and saved, you might wonder how
the properties set in this file become visible to the Client's JSONWriter.java. For this, go
to the Client project's Project > Properties, open the Java Build Path settings, switch
to the Projects tab, select and expand the Server project, and select Access rules. Double-click on
Access rules to add access to the JSONDispatcherProperties and
JSONConnectorServletInterface classes. Notice that these declarations are above the
Forbidden entry. After all changes, you may need to rebuild the projects. (See Figure 21-3)

If you are using a Java development environment other than Eclipse, the steps explained
above may be different. If so, then choose what is appropriate for this specific
development tool to achieve the same result.

FIGURE 21-3 Making the Server project classes accessible from the Client project

The class that does the heavy lifting of transforming XML to JSON is
JSONDispatcher.java. This class requires the org.json library to do its job. Either
download all required files from the json.org website and build your own library, or use
the one included in the archive provided with this book.

Now, once you have made all required changes to handle prettyPrint and other
properties, add the additional class declarations as shown in Source 21-3.

SOURCE 21-3 Additional class declaration code


import com.mirth.connect.connectors.json.JSONDispatcherProperties;
import org.json.XML;
import org.json.JSONObject;

The method that actually writes the file is writeDocument(). In my
implementation it does the following, and you are free to change it if you choose:



 Verifies that the outbound message data type is in one of following formats: HL7v2,
HL7v3 or XML.
 If the outbound message is HL7v2, the code serializes the message to XML.
 No changes are made for the HL7v3 message since it is already in XML format by
default.
 If the message is in any format other than these three, the source code throws an
Exception and stops further actions.
 Then the code creates JSONObject and applies indentation if Pretty Print is selected.
 And finally the method writes the file in the user-specified folder.

SOURCE 21-4 writeDocument() source code


private void writeDocument(String template, File file,
        JSONDispatcherProperties jsonDispatcherProperties,
        ConnectorMessage connectorMessage) throws Exception {

    FileOutputStream fos = null;

    StringBuilder strOutput = new StringBuilder();
    String dataType = connectorMessage.getEncoded().getDataType();

    if (dataType.equals("XML") || dataType.equals("HL7V3")) {
        strOutput.append(template);

    } else if (dataType.equals("HL7V2")) {
        strOutput.append(SerializerFactory.getSerializer("HL7V2").toXML(template));

    } else {
        logger.error("JSON Writer: The data type " + dataType + " is not supported. Failed to convert to json format.");
        throw new Exception("JSON Writer: The data type " + dataType + " is not supported. Failed to convert to json format.");
    }

    JSONObject strJSON = XML.toJSONObject(strOutput.toString());

    int indentFactor = 0;

    if (jsonDispatcherProperties.isPrettyPrint()) {
        indentFactor = 4;
    }

    try {
        fos = new FileOutputStream(file);
        fos.write(strJSON.toString(indentFactor).getBytes());
        fos.flush();
        fos.close();

    } catch (Throwable e) {
        throw new Exception(e);

    } finally {
        if (fos != null) {
            fos.close();
        }
    }
}

The source code presented in Source 21-4 for writeDocument() is not perfect and is
for illustration only. For instance, it takes the data type from the outbound encoded message,
whereas the user may specify not only the encoded data, ${message.encodedData}, but
also the raw data, ${message.rawData}, or the transformed data, ${message.transformedData},
in the outbound Template field, which might be in a format other than the encoded data
format. As a result, the code may fail to transform the content to JSON. Also, the XML to JSON
transformation is outside of the exception handling block.
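
If you want to address the last point, one option is to keep the conversion inside the exception handling. The following is a hypothetical rework of the writing part only, not code from the book's archive:

import java.io.File;
import java.io.FileOutputStream;

import org.json.JSONObject;
import org.json.XML;

public class SafeJsonWrite {

    // the XML to JSON conversion now happens inside the try block, so a malformed
    // template is reported the same way as a file writing failure
    public static void writeJson(String xml, File file, boolean prettyPrint) throws Exception {
        FileOutputStream fos = null;
        try {
            JSONObject json = XML.toJSONObject(xml);
            fos = new FileOutputStream(file);
            fos.write(json.toString(prettyPrint ? 4 : 0).getBytes());
            fos.flush();
        } catch (Throwable e) {
            throw new Exception("JSON Writer: failed to convert or write the message", e);
        } finally {
            if (fos != null) {
                fos.close();
            }
        }
    }
}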

JSONConnectorServlet.java and JSONConnectorServletInterface.java are
additionally required for this connector to verify the existence of the destination folder
when you click the Test Write button next to the JSON Writer Directory setting. To make
this call available you have to make the JSONConnectorServletInterface also
accessible from the Client project as described above. (See Figure 21-3)
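
For orientation, the server-side check behind Test Write boils down to something like the following. This is a simplified standalone sketch of the directory verification logic only; it is not the actual Mirth servlet classes or their signatures, and the messages are made up:

import java.io.File;

public class TestWriteCheckSketch {

    // returns a human-readable result for the directory the user entered
    public static String testWrite(String dirPath) {
        File dir = new File(dirPath);
        if (!dir.exists()) {
            return "Error: directory does not exist";
        }
        if (!dir.isDirectory() || !dir.canWrite()) {
            return "Error: path is not a writable directory";
        }
        return "Success: the directory is writable";
    }

    public static void main(String[] args) {
        System.out.println(testWrite("C:\\temp\\json-out"));
    }
}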

Other changes done to these files handle the getter and setter methods for properties. I will
not post all the changes here; instead, you can find all the source code files needed in the
archive provided with this book.

Signing Extension

Once all the JSON Writer extension source code files are completed and the projects compile
successfully, it is time to build the JAR files that will later be deployed to the
Mirth Extensions folder.

There will be two JAR packages that divide functionality between the Mirth Connect
Administrator and Server sides. There is also a third package shared by the Mirth
Connect Administrator and the Server.

If you are using the Eclipse editor to build the JAR, click File > Export... , then select the
JAR file type and click Next. Name the JARs and include the files listed in Table 21-2.

TABLE 21-2 Extension packages


JAR File Name     Files                                 Description
json-client.jar   JSONWriter.java                       Mirth Connect Administrator side JAR
json-server.jar   JSONDispatcher.java,                  Mirth Connect Server side JAR
                  JSONConnectorServlet.java
json-shared.jar   JSONDispatcherProperties.java,        Shared between the Mirth Connect Administrator
                  JSONConnectorServletInterface.java    and the Server (Test Write)

To debug and trace the JSON Writer connector, make sure you have checked the Export
Java source files and resources setting.

Now, if you try to use these files, you will receive


com.sun.deploy.net.JARSigningException: Found unsigned entry in resource:
http://localhost:8080/webstart/extensions/libs/json...

This indicates that the JARs must be signed and, furthermore, they must be signed using
the Mirth Corporation certificate. You also need to use the password that Mirth provides.



To sign your JARs, create a temporary folder for the JSONWriter JARs and copy the newly
created JARs there. Navigate to the ...\Mirth Connect\appdata folder and copy the
keystore.jks file to the same JAR folder. You may additionally need to change the
MANIFEST.MF file content for each JAR. (See Source 21-5)

SOURCE 21-5 MANIFEST.MF file content


Manifest-Version: 1.0
Ant-Version: Apache Ant 1.9.1
Application-Name: Mirth Connect
Permissions: all-permissions
Codebase: *

Assuming that jarsigner.exe is in the operating system's PATH environment variable,
create a batch file with the following command lines. Notice that the third-party libraries
must be signed as well.

SOURCE 21-6 JAR Signer command line code


jarsigner -keystore keystore.jks -storetype JCEKS -storepass <psw> -sigfile SERVER -sigalg SHA256withRSA -digestalg SHA1 -tsa https://timestamp.geotrust.com/tsa json-client.jar mirthconnect

jarsigner -keystore keystore.jks -storetype JCEKS -storepass <psw> -sigfile SERVER -sigalg SHA256withRSA -digestalg SHA1 -tsa https://timestamp.geotrust.com/tsa json-server.jar mirthconnect

jarsigner -keystore keystore.jks -storetype JCEKS -storepass <psw> -sigfile SERVER -sigalg SHA256withRSA -digestalg SHA1 -tsa https://timestamp.geotrust.com/tsa json-shared.jar mirthconnect

jarsigner -keystore keystore.jks -storetype JCEKS -storepass <psw> -sigfile SERVER -sigalg SHA256withRSA -digestalg SHA1 -tsa https://timestamp.geotrust.com/tsa lib\org.json.jar mirthconnect

The <psw> in Source 21-6 is a placeholder, not part of the actual command line; it must be replaced with
the real password required to sign the JARs. I have masked it for security reasons.
Furthermore, this password may be different in your installation or may change in future
versions of the Mirth source code.

However, the password required to sign the JARs can be found in the <Mirth home>
\conf\mirth.properties file. The code in Source 21-7 contains some settings related to
JAR certificates.

SOURCE 21-7 mirth.properties keystore code


# keystore
keystore.path = ${dir.appdata}/keystore.jks
keystore.storepass = <psw>
keystore.keypass = <psw>
keystore.type = JCEKS

With the password in hand, update the jarsigner batch command strings and sign the
JARs. Verify that each JAR's META-INF folder contains *.RSA and *.SF files; you can also run
jarsigner -verify on each JAR to confirm the signature.



Now we are ready to build and deploy the JSONWriter extension installation package.

Deploying Extension

The Mirth extension must be packaged in a certain way, and this must be reflected in the
destination.xml file.

Third-party libraries go into the lib folder; the path is specified as lib/<third-party
library>.jar. All of the extension's JARs are in the extension root folder. The root folder
name and the path attribute in destination.xml must match, for example
<connectorMetaData path="json">

FIGURE 21-5 JSON Writer extension package structure

Once the folder structure is created, ZIP the folder, including the extension's root folder.

Installation

The recommended way to install an extension is through the Mirth Connect Administrator,
as follows: click the Extension link in the navigation bar, click the Browse button to
locate the extension ZIP file and then click the Install button. If there are no errors, you
will see a prompt to log out, restart the server and log in again. Once you have logged back
in, click the Extension link again to verify that the extension is actually installed. By the
way, the appearance of your extension in the Extension list does not mean that there are
no errors in the extension implementation.

In the background, the Mirth Server unzips the extension archive to the install_temp
folder and prompts you when the installation has been completed. During the next start,
the Mirth Connect Server tries to copy the install_temp folder content to the path
specified in destination.xml, pretending that it is "installing" the extension. All
extensions are verified afterwards for validity. If this sounds like a long process for a
simple copy-paste routine, that's because it is. You may skip all "installation" steps by
simply copying and pasting your extension to the required folder, or replacing the
installation process with a DOS command batch file that creates a folder and copies the files
there, and then running the Mirth Connect Server to see if your extension is valid.

If you want to run your extension under Eclipse, say, for debugging purposes, open the
Eclipse workspace that contains the Mirth Server project, navigate to the
..\Server\bin\extensions folder and paste your extension there. In addition, you
need to add the extension declaration to the Server\appdata\extension.properties
file:
JSON\ Writer = true

Note: When you rebuild Mirth projects by running Project > Clean in Eclipse, it
returns the Server extension folder to its initial state, i.e., all custom extensions
are deleted. You need to deploy the extension again to run it under Eclipse.

Extension Implementation Verification

To debug and test the implementation, the extension's JARs must be built with the
Export Java source files and resources option turned on. Once all the steps for signing and deploying the JARs
are done, launch the Mirth Connect Server debug configuration first. Wait for the prompt
indicating that the Mirth Connect Server has started successfully, and then launch the Mirth
Connect Administrator debug configuration.



FIGURE 21-6 JSON Writer Destination settings

If there are problems with the extension, add breakpoints to the source code to find
them.

Once the Mirth Connect Administrator is up and running, create a new channel. Switch to
the Destinations tab and from the connector types list select JSON Writer. Type the
output folder and file names. Verify by clicking the Test Write button. Add the encoded
data to the template text box. Click the Validate Connector link.

If you encounter any problems at this step, add breakpoints and trace the
execution. Most likely you have forgotten to set or return some of the properties.

Switch to the channel's Summary tab, open the Set Data Types window and change the
Destination outbound message type to one that is supported, i.e., XML, HL7v3 or HL7v2.
Leave the Source connector as a Channel Reader. Save and deploy the channel. (See Figure
21-6)

Send an HL7v2, HL7v3 or XML message and verify that it is actually transformed to JSON.
If so, congratulations, you have successfully built your first extension.



CHAPTER 22 Tuning Mirth Connect

Tuning Mirth Connect


Performance and security are two tricky Quality-of-Service (QoS) requirements where
you need to be prepared to make trade-offs. You and your clients may understand
performance and security requirements differently. The conventional approach is to
meet the clients' objectives, which typically focus on performance: how long a particular
operation (message processing) takes, and how many resources the system utilizes
under varying load conditions.

Translated to non-functional requirements, the client's concern is about throughput,
i.e., the number of messages that your application can process per unit of time, typically
measured as transactions per second, and resource utilization, i.e., the server and
network resources consumed by the application. Resources typically include CPU,
memory usage, disk I/O and network I/O.

The second aspect, information security and patient privacy, is a fundamental
component of healthcare's landscape. As the HIMSS introduction states, "all healthcare
providers, regardless of size, have an obligation to their patients to protect the personal
information provided or created as a result of medical care. Many legal jurisdictions have
created legislation to codify the provider's obligations and to provide the patient with
recourse in the event that the protected health information is mishandled. In the United
States, this body of legislation is primarily HIPAA and HITECH. These laws, as well as the
privacy laws of other modern countries, are all based on the well-defined best practices of
the field of Information Security." (HIMSS Privacy & Security Toolkit for Small Provider
Organizations). 4

Security regulations such as HIPAA and HITECH are forcing many healthcare
organizations and practices to secure healthcare information. This obviously involves a
great deal of configuring EMR systems to make them fully compliant with privacy and
security requirements. Healthcare systems tend to interact with other systems through
interfaces often built using interface engines like Mirth. Adding interface engines
exposes the EMR systems to additional security risks.

This is why, whether you are new to or quite experienced in healthcare systems
development, it is vital that you know the basics of Mirth Connect's security settings.

4
Introduction to the HIMSS Privacy & Security Toolkit for Small Provider Organizations © February 2011
Healthcare Information and Management Systems Society.
Performance Tuning

Let's start with channel-level performance tuning. Create a new channel or open an
existing one and switch to the Summary tab. Under the Channel Properties settings, you
will find Message Storage and Message Pruning, settings that affect Mirth database
performance.

FIGURE 22-1 Message Storage and Message Pruning settings

Let's go through each of them and see what changes they can bring to our system.

Message Storage

Message Storage is one of the places that affect Mirth's capability to handle a high
volume of messages at the channel level. Here you have options ranging from storing all
message artifacts, such as message content and message metadata, to storing nothing at
all. The less you store in the database, the higher the channel's throughput.

The Message Storage setting goes from Development, which stores everything, to
Disabled, which stores nothing. Depending on the amount of information stored for each
message, Mirth enables or disables other channel settings. Thus, if Message Storage is set
to Raw or lower, the destination queuing is disabled. If you decide to set Message
Storage to Metadata or lower, the source connector queuing is disabled as well.

Along with message pruning, you must decide what message artifacts to retain in the
database to meet regulatory or policy requirements for audit purposes.
Message Pruning

By default, Mirth Connect uses Apache Derby database to store messages and message
metadata. Mirth also uses this database to store other information such as channels and
global script codes, users, etc. The Derby database optimization is a separate topic by
itself and in a high-load production environment you may decide to move to one of the
officially supported industrial-level database systems such as PostgreSQL, MySQL, Oracle
or MS SQL Server.

All in all, minimizing the database size is crucial for performance and that is where
Message Pruning comes into play.

This setting applies only to the selected channel's messages and message metadata. It
allows you to retain messages for a certain amount of time to adhere to regulatory or
policy requirements for data audit criteria, and to free up space for newer messages.

In the current version of Mirth, message pruning does not apply to errored messages.
Such messages stay in the database and need to be removed manually. To do this, you
need to stop and undeploy all channels and run a SQL query to delete such messages.
The other option is to stop, undeploy, export, delete and import back all affected
channels. Be cautious when using the second option as it also deletes all queued
messages for a particular channel.

Message archiving for the channel can be enabled only if a similar setting is enabled for
the Mirth Data Pruner (see below).

Source Queue

Let's move on and switch to the Source connector tab. The Source Settings are available
for all types of source connectors. Let's see what options we have here.

A channel processes each message as it arrives. Turning the Source Queue setting ON
allows the channel to consume messages from the queue asynchronously. In this case,
Mirth keeps all incoming messages that the channel has not processed yet in the
database. Once the channel has processed all queued messages, it takes the next portion
of incoming messages, defined by the Queue Buffer Size, from the database, stores them in
memory and continues processing. Increasing the queue buffer size increases queue
performance.



FIGURE 22-2 Source Queue and Max Processing Threads settings

A larger Queue Buffer Size obviously comes at the cost of higher memory usage; for example, if
your encoded messages average around 50 KB (a made-up figure), a buffer of 1,000 messages keeps
roughly 50 MB of message content in memory for that channel alone. You probably need to increase
the server-side max heap size in the <Mirth root>\mcserver.vmoptions configuration file if Mirth
is running as a server, or in <Mirth root>\mcservice.vmoptions if Mirth is running as a service.

FIGURE 22-3 Global Message Queue setting

You may also change the default message capacity of the queue buffer for all new
channels via the Server Settings. (see Figure 22-3)

Message Processing Threads

Find the Max Processing Threads field within the Source Queue settings.

As the name suggests, this setting affects the number of messages that can be
processed by the channel at one time, i.e., the number of threads available for this
channel to handle incoming messages. By default, only one thread is running. You may



increase this number to start processing incoming messages in parallel, which also
means that there is no way to know the order in which the messages will be processed.

If the order of the messages is important, you should look for other ways to improve
performance, probably on the Mirth system level.

Data Pruner

If you want to enable message pruning for one or more channels, you also need to
enable data pruning on the system level. Save the channel, click the Settings menu item
and select the Data Pruner tab.

FIGURE 22-4 Data Pruner Threads setting

Besides pruning messages, you may also decide to prune system level event records.

Java Heap Size

If you are processing large files, such as DICOM files, you may notice that the processing
time is extremely long regardless of what you do. In such a case, you may think about tuning
the Java Virtual Machine (JVM) that Mirth runs on.

For the uninitiated, the JVM heap size determines how often and how long the Java
Virtual Machine spends collecting garbage. Depending on the heap size, full garbage
collection happens less frequently but takes more time, or happens more frequently and
takes less time. The goal is to find the balance between the amount of memory required
to process a message and the time that JVM spends on garbage collection.
One of the options for optimizing the JVM is to change the heap size. Go to your Mirth
installation folder and find the mcserver.vmoptions or mcservice.vmoptions file. The
former configures the JVM settings if Mirth Connect is running as a server, and the
latter if it is running as a service. Open the required file for editing and change
-Xmx256m, which specifies the maximum size of allocated memory; for example, -Xmx2048m
raises the maximum heap to 2 GB.

If you are running Mirth on Windows or Mac OS X, it comes with the Mirth Connect Server
Manager utility, which has an option to configure server memory as well. (see Figure 22-5)

However, I doubt this utility actually changes the server's maximum available memory as
suggested. Let's try it. Enter a new value in the Server Memory field and click Apply. Now
return to the Mirth root folder and open the mcserver.vmoptions file. It is probably
unchanged. What you have just updated is the memory setting in the
mcservice.vmoptions file used by the Mirth service.

FIGURE 22-5 Mirth Connect Server Manager settings

Note that clicking the wrench button next to Administrator allows you to configure the
memory settings for the Mirth Connect Administrator application which can be done
either directly in the <Mirth root>\conf\mirth.properties file as
administrator.maxheapsize or by clicking the gear icon when launching Mirth Connect
Administrator through the browser.

Operating System and Hardware Optimization

If you have tried all the settings listed above to improve the throughput of your channels and you
do not see any significant change, then the problem probably lies outside of Mirth.



Obviously, your Mirth Connect server can perform only as well as its weakest link where
operating system and/or hardware may be one of the limiting factors. For example, if
your channel reads a file from the filesystem or does complex queries to the database
you may decide to move from HDD to SSD (solid-state drive). Similarly, other frequent
bottlenecks are CPU and memory saturations.

Security Protection

You may be surprised that Mirth Connect supports some extended security settings.
However, by default some of these settings are not readily available and, furthermore,
Mirth does not give you a clue that they exist.

Let's go through message encryption, password strength and the extended encryption
settings.

Message Encryption

As we explored earlier, Message Storage allows performance tuning by choosing an
appropriate message storage level. If you decided to set the message storage level to
Raw or higher, meaning that the message content is stored in the database, then you
may be obliged to encrypt the stored messages.

FIGURE 22-6 Message Storage settings

To do this, check the Encrypt message content box. Now the message content is still
viewable in Mirth Connect Administrator but is not searchable.



Password Strength

Let's look at the Mirth Connect server configuration file. Open the mirth.properties file
located in the <Mirth root>\conf folder. There you will find the password requirements and
keystore sections used to tighten security. Note that the password settings are left at the
minimum allowable values.

Security policies in your organization may require stronger passwords to gain access to
sensitive data. Mirth password strength is fully configurable to meet these requirements
by altering the password-related properties.

FIGURE 22-7 Password Properties settings

If you change any of these properties, you need to restart the Mirth Connect server or
service for your changes to take effect.

Encryption Settings

Moving down in the mirth.properties file you will see that the database credentials the
Mirth Connect server uses to access the database are stored in unencrypted plaintext.
This means that anyone who has access to the Mirth properties file can explore the
database and see what messages are coming and going through the interface engine.

To enable additional security you need to manually add the following line to the Mirth
properties file:

encryption.properties = 1

If your Mirth Connect instance uses a password to access the database (e.g., it is running
on the database other than Derby) and you restart Mirth Connect with this encryption
setting, you should see that the database.password string is now encrypted and starts
with the {enc} encryption prefix.



Other security settings available for you are listed in the table below.

TABLE 22-1 Mirth encryption settings


Setting                Value Example          Description
encryption.properties  1                      Makes the encryption settings available.
encryption.export      1                      Enables encryption of any export you do from Mirth (e.g., channels, global scripts, etc.) using the Mirth Connect Administrator.
encryption.algorithm   AES                    The algorithm Mirth uses for encryption. Default is AES. Other possible values: DES.
encryption.keylength   128                    The key length. Default is 128. Other possible value: 256.
digest.algorithm       SHA256                 Message digest algorithm used by the one-way hashing function to encrypt the password. Default is SHA256. Other possible value: MD5. 5
security.provider      BouncyCastleProvider   The security provider used for all encryption and hashing.

If you manually add all the encryption settings to the Mirth properties file, it should
look like the image below (see Figure 22-8).

FIGURE 22-8 Encryption settings

Restart the Mirth server or service for these settings to take effect. Launch the Mirth Connect
Administrator, create a simple channel, export it and check the file content. You should
now find that the channel's source is encrypted.

A word of caution: Mirth Connect uses a Java KeyStore (JCEKS file) repository located in
appdata/keystore.jks, which also contains a private key used for encryption, while the
keypass for the keystore itself still remains unencrypted. This means that the database
password found in the properties file can be decrypted in the same way Mirth Connect
does it.

This concludes Mirth Connect performance and security tuning. Both topics are complex
subjects since there are many business practices, regulatory parties and governing
bodies involved, each with their own requirements. It is very unlikely that performance
and security settings applicable to one company will fit equally well for another. However,
you now have a starting point for customizing your Mirth Connect instance.

5
About changing hash function for the password - http://www.mirthcorp.com/community/forums/showthread.php?t=15446



You have made it through many Mirth Connect concepts.

I hope you have enjoyed the ride!



Book Resources

Book Resources
Other titles you may be interested in:

Unofficial Developer's Guide to CCD on Mirth Connect

This book introduces readers to version 3.x of Mirth Connect to the point that they are
confident enough to start building their own healthcare data exchange interfaces.

By implementing an imaginary CCD Builder Server, this book covers topics on XSL
Transformation, acknowledgements implementation, XML schema and Schematron
validation. Each connection point (channels and destinations) is explained in a separate
chapter, which in turn provides step-by-step instructions on how to create and code data
transformation rules for ADT and ORU messages.

The book is available to download at http://ccdonmirth.shamilpublishing.com

Unofficial Developer's Guide to FHIR on Mirth Connect

This book describes version 3.x of Mirth Connect to the point that readers are confident
enough to start building their own healthcare data exchange interfaces using a new HL7
standard called FHIR, or Fast Healthcare Interoperability Resources.

This book may be interesting for those implementing HL7 FHIR-based solutions.

The book is available to download at http://fhironmirth.shamilpublishing.com
Unofficial Developer's Guide to HL7v3 Basics

This book introduces readers to HL7 version 3 to the point that they are confident
enough to start building their own healthcare data exchange interfaces. The book
provides clear and easy-to-use, step-by-step guidance for learning the standard, with
numerous examples covering many topics.

This book may be interesting for those implementing Clinical Document Architecture
(CDA) or HL7 Reference Information Model (aka RIM) based solutions.

The book is available to download at http://hl7basics.shamilpublishing.com



APPENDICES

Appendices
A: Eligibility Query Request (QUCR_IN200101) Template
<?xml version="1.0" encoding="UTF-8"?>
<QUCR_IN200101UV01 ITSVersion="XML_1.0" xmlns="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<id root="2.16.840.1.113883.1.3" extension=""/>
<creationTime value=""/>
<versionCode controlInformationRoot="2.16.840.1.113883.11.19373" code="V3PR1"/>
<interactionId root="2.16.840.1.113883.1.6" extension="QUCR_IN200101UV01"/>
<profileId controlInformationRoot="2.16.840.1.113883.9" controlInformationExtension="Elig Query Request, Gen"/>
<processingCode code="D"/>
<processingModeCode code="T"/>
<acceptAckCode code="NE"/>
<receiver typeCode="RCV">
<device classCode="DEV" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.1"
controlInformationExtension="Organization"/>
<asAgent classCode="AGNT">
<representedOrganization classCode="ORG" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.1"
controlInformationExtension=""/>
</representedOrganization>
</asAgent>
</device>
</receiver>
<sender typeCode="SND">
<device classCode="DEV" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.2"
controlInformationExtension="Organization"/>
<asAgent classCode="AGNT">
<representedOrganization classCode="ORG" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.2"
controlInformationExtension=""/>
</representedOrganization>
</asAgent>
</device>
</sender>
<controlActProcess classCode="CACT" moodCode="EVN">
<code code="QUCR_TE200101UV01" codeSystem="2.16.840.1.113883.11.19427"/>
<authorOrPerformer typeCode="AUT">
<assignedPerson classCode="ASSIGNED">
<id controlInformationRoot="2.16.840.1.113883.101.10.1"
controlInformationExtension=""/>
<representedOrganization classCode="ORG" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.2"
controlInformationExtension=""/>
</representedOrganization>
</assignedPerson>
</authorOrPerformer>
<queryByParameter>
<statusCode code="new"/>
<parameterList>
<id extension=""/>
<carrierRole.id>
<value nullFlavor="NI"/>
</carrierRole.id>
<coveredPartyAsPatient.Id>
<value root="2.16.840.1.113883.101.10.2" extension=""/>
</coveredPartyAsPatient.Id>
<coveredPartyAsPatientPerson.BirthTime>
<value value=""/>
</coveredPartyAsPatientPerson.BirthTime>
<coveredPartyAsPatientPerson.Name>
<value>
<part type="FAM" value=""/>
<part type="GIV" value=""/>
<part type="GIV" qualifier="MID" value=""/>
</value>
</coveredPartyAsPatientPerson.Name>
<policyOrAccount.Id>
<value root="2.16.840.1.113883.101.3" extension="MSP"/>
</policyOrAccount.Id>
<serviceDate>
<value validTimeLow=""/>
</serviceDate>
</parameterList>
</queryByParameter>
</controlActProcess>
</QUCR_IN200101UV01>

B: Eligibility Query Results (QUCR_IN210101) Template


<?xml version="1.0" encoding="UTF-8"?>
<QUCR_IN210101UV01 ITSVersion="XML_1.0" xmlns="urn:hl7-org:v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<id root="2.16.840.1.113883.1.3" extension=""/>
<creationTime value=""/>
<versionCode controlInformationRoot="2.16.840.1.113883.11.19373" code="V3PR1"/>
<interactionId root="2.16.840.1.113883.1.6" extension="QUCR_IN210101UV01"/>
<profileId controlInformationRoot="2.16.840.1.113883.9" controlInformationExtension="Elig Query Results, Gen"/>
<processingCode code="D"/>
<processingModeCode code="T"/>
<acceptAckCode code="NE"/>
<receiver typeCode="RCV">
<device classCode="DEV" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.2"
controlInformationExtension="Organization"/>
<asAgent classCode="AGNT">
<representedOrganization classCode="ORG" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.2"
controlInformationExtension="EligibilityServiceOrganization"/>
</representedOrganization>
</asAgent>
</device>
</receiver>
<sender typeCode="SND">
<device classCode="DEV" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.1"
controlInformationExtension="Organization"/>
<asAgent classCode="AGNT">
<representedOrganization classCode="ORG" determinerCode="INSTANCE">
<id controlInformationRoot="2.16.840.1.113883.101.1"
controlInformationExtension="Organization"/>
</representedOrganization>
</asAgent>
</device>
</sender>
<acknowledgement typeCode="AA">
<targetMessage>
<id root="2.16.840.1.113883.1.3" extension=""/>
</targetMessage>
</acknowledgement>
<controlActProcess classCode="CACT" moodCode="EVN">
<code code="QUCR_TE210101UV01" codeSystem="2.16.840.1.113883.1.18"/>
<subject typeCode="SUBJ">
<policyOrAccount classCode="COV" moodCode="EVN" negationInd="false">
<code code="PUBLICPOL"/>
<author typeCode="AUT">
<carrierRole classCode="UNDWRT">
<id root="2.16.840.1.113883.101.1"
extension="UserId@Organization.com "/>
</carrierRole>
</author>
</policyOrAccount>
</subject>
<reasonOf typeCode="RSON">
<detectedIssueEvent classCode="ALRT" moodCode="EVN">
<code code="" controlInformationRoot="2.16.840.1.113883.11.208"/>
<text value=""/>
</detectedIssueEvent>
</reasonOf>
<queryAck>
<queryResponseCode code="OK" controlInformationRoot="2.16.840.1.113883.11.208"/>
</queryAck>
<queryByParameter>
<parameterList>
<id nullFlavor="NI"/>
<carrierRole.id>
<value nullFlavor="NI"/>

</carrierRole.id>
<coveredPartyAsPatient.Id>
<value root="2.16.840.1.113883.101.10.2" extension=""/>
</coveredPartyAsPatient.Id>
<coveredPartyAsPatientPerson.BirthTime>
<value value=""/>
</coveredPartyAsPatientPerson.BirthTime>
<coveredPartyAsPatientPerson.Name>
<value>
<part type="FAM" value=""/>
<part type="GIV" value=""/>
</value>
</coveredPartyAsPatientPerson.Name>
<policyOrAccount.Id>
<value root="2.16.840.1.113883.101.3" extension="MSP"/>
</policyOrAccount.Id>
<serviceDate>
<value nullFlavor="NI"/>
</serviceDate>
</parameterList>
</queryByParameter>
</controlActProcess>
</QUCR_IN210101UV01>

C: MS Access Log Database Structure

Create a table called Messages and add the following fields:


Field Type Comment
id AutoNumber
CreationDate Date/Time Format: yyyy-mm-dd hh:nn:ss
UUID Text
MsgType Text
Trigger Text
Version Text
Errors Memo
Source Memo

D: PostgreSQL Eligibility Database Structure


-- Table: messages
-- DROP TABLE messages;

CREATE TABLE messages
(
  id serial NOT NULL,             -- autoincrementing id
  mid character varying(40),      -- inbound message identifier, /QUCR_IN200101UV01/id/@extension
  cdate character varying(20),    -- inbound message creation date, /QUCR_IN200101UV01/creationTime/@value
  sender character varying(100),  -- inbound message sending organization, /QUCR_IN200101UV01/sender/device/asAgent/representedOrganization/id/@controlInformationExtension
  author character varying(100),  -- assigned person identifier
  pid integer                     -- link to patients.id table
)
WITH (
  OIDS=FALSE
);
ALTER TABLE messages
  OWNER TO postgres;
COMMENT ON TABLE messages
  IS 'Contain QUCR_IN200101 (Elig Query Request) message related information. For the test purpose only.';
COMMENT ON COLUMN messages.id IS 'autoincrementing id';
COMMENT ON COLUMN messages.mid IS 'inbound message identifier, /QUCR_IN200101UV01/id/@extension';
COMMENT ON COLUMN messages.cdate IS 'inbound message creation date, /QUCR_IN200101UV01/creationTime/@value';
COMMENT ON COLUMN messages.sender IS 'inbound message sending organization, /QUCR_IN200101UV01/sender/device/asAgent/representedOrganization/id/@controlInformationExtension';
COMMENT ON COLUMN messages.author IS 'assigned person identifier, /controlActProcess/authorOrPerformer/assignedPerson/id/@controlInformationExtension';
COMMENT ON COLUMN messages.pid IS 'link to patients.id table';

-- Table: patients
-- DROP TABLE patients;

CREATE TABLE patients
(
  id serial NOT NULL,             -- autoincrementing id
  fname character varying(100),   -- patient first name, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/part[1]/@value
  lname character varying(100),   -- patient last name, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/part[1]/@value
  pid character varying(10),      -- patient person identifier, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatient.Id/value/@extension
  dob character varying(15),      -- patient date of birth, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.BirthTime/value/@value
  processed boolean NOT NULL DEFAULT false  -- QUCR_IN210101 (Elig Query Results) response sent flag
)
WITH (
  OIDS=FALSE
);
ALTER TABLE patients
  OWNER TO postgres;
COMMENT ON TABLE patients
  IS 'Patient personal information sent by QUCR_IN200101 (Elig Query Request) message. For the test purpose only.';
COMMENT ON COLUMN patients.id IS 'autoincrementing id';
COMMENT ON COLUMN patients.fname IS 'patient first name, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/part[1]/@value';
COMMENT ON COLUMN patients.lname IS 'patient last name, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/part[1]/@value';
COMMENT ON COLUMN patients.pid IS 'patient person identifier, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatient.Id/value/@extension';
COMMENT ON COLUMN patients.dob IS 'patient date of birth, /controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.BirthTime/value/@value';
COMMENT ON COLUMN patients.processed IS 'QUCR_IN210101 (Elig Query Results) response sent flag';

E: XSLT to transform from HL7v3 to HL7v2


<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" encoding="UTF-8" indent="yes"/>
<xsl:template match="/">
<HL7Message>
<MSH>
<MSH.1>|</MSH.1>
<MSH.2>^~\&amp;</MSH.2>
<MSH.3>
<MSH.3.1>ADM</MSH.3.1>
</MSH.3>
<MSH.4>
<MSH.4.1>Sending Organization</MSH.4.1>

</MSH.4>
<MSH.5>
<MSH.5.1>ALL</MSH.5.1>
</MSH.5>
<MSH.6>
<MSH.6.1>Receiving Organization</MSH.6.1>
</MSH.6>
<MSH.7>
<MSH.7.1>
<xsl:value-of select="substring-before(/QUCR_IN210101UV01/creationTime/@value,'-')"/>
</MSH.7.1>
</MSH.7>
<MSH.9>
<MSH.9.1>RSP</MSH.9.1>
<MSH.9.2>E22</MSH.9.2>
</MSH.9>
<MSH.10>
<xsl:value-of select="/QUCR_IN210101UV01/id/@extension"/>
</MSH.10>
<MSH.11>
<MSH.11.1>D</MSH.11.1>
</MSH.11>
<MSH.12>
<MSH.12.1>2.4</MSH.12.1>
</MSH.12>
<MSH.13/>
<MSH.14/>
<MSH.15>
<MSH.15.1>AL</MSH.15.1>
</MSH.15>
<MSH.16/>
</MSH>
<MSA>
<MSA.1>
<MSA.1.1>AA</MSA.1.1>
</MSA.1>
<MSA.2>
<MSA.2.1>
<xsl:value-of
select="/QUCR_IN210101UV01/acknowledgement/targetMessage/id/@extension "/>
</MSA.2.1>
</MSA.2>
</MSA>
<QAK>
<QAK.1/>
<QAK.2>
<QAK.2.1>OK</QAK.2.1>
</QAK.2>
<QAK.3>
<QAK.3.1>E22</QAK.3.1>
<QAK.3.2/>
<QAK.3.3>CIHI0003</QAK.3.3>
</QAK.3>
</QAK>
<QPD>
<QPD.1>
<QPD.1.1>E22</QPD.1.1>
<QPD.1.2/>
<QPD.1.3>CIHI0003</QPD.1.3>
</QPD.1>
<QPD.2>
<QPD.2.1/>
</QPD.2>
<QPD.3/>
</QPD>
<PID>
<PID.1/>
<PID.2/>
<PID.3>
<PID.3.1>
<xsl:value-of
select="/QUCR_IN210101UV01/controlActProcess/queryByParameter/parameterList/coveredPartyAsPatient.Id/value/@extension"/>
</PID.3.1>
<PID.3.2/>
<PID.3.3>ISO</PID.3.3>
<PID.3.4>PHN</PID.3.4>
</PID.3>
<PID.4/>
<PID.5>
<PID.5.1>
<xsl:for-each
select="/QUCR_IN210101UV01/controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/*">
<xsl:if test="./@type = 'FAM'">
<xsl:value-of select="./@value"/>
</xsl:if>
</xsl:for-each>
</PID.5.1>
<PID.5.2>
<xsl:for-each
select="/QUCR_IN210101UV01/controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.Name/value/*">
<xsl:if test="./@type = 'GIV'">
<xsl:value-of select="./@value"/>

</xsl:if>
</xsl:for-each>

</PID.5.2>
<PID.5.3/>
<PID.5.4/>
<PID.5.5>L</PID.5.5>
</PID.5>
<PID.6/>
<PID.7>
<PID.7.1>
<xsl:value-of
select="/QUCR_IN210101UV01/controlActProcess/queryByParameter/parameterList/coveredPartyAsPatientPerson.BirthTime/value/@value"/>
</PID.7.1>
</PID.7>
<PID.8>
<PID.8.1/>
</PID.8>
</PID>
</HL7Message>
</xsl:template>
</xsl:stylesheet>

F: JavaScriptTask.java
Rhino JS Debugger Embedded
/*
* Copyright (c) Mirth Corporation. All rights reserved.
*
* http://www.mirthcorp.com
*
* The software in this package is published under the terms of the MPL license a copy of which has
* been included with this distribution in the LICENSE.txt file.
*/

package com.mirth.connect.server.util.javascript;

import java.util.concurrent.Callable;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Script;
import org.mozilla.javascript.Scriptable;

import com.mirth.connect.donkey.util.ThreadUtils;

/* (SN) Debugger declaration starts */


import org.mozilla.javascript.tools.debugger.Main;
/* (SN) Debugger declaration ends */

public abstract class JavaScriptTask<T> implements Callable<T> {

private Logger logger = Logger.getLogger(JavaScriptTask.class);


private MirthContextFactory contextFactory;
private Context context;
private boolean contextCreated = false;

/* (SN) Debugger declaration starts */


private static Main rhinoDebugger = null;
/* (SN) Debugger declaration ends */

public JavaScriptTask(MirthContextFactory contextFactory) {


this.contextFactory = contextFactory;
}

public MirthContextFactory getContextFactory() {


return contextFactory;
}

public void setContextFactory(MirthContextFactory contextFactory) {


this.contextFactory = contextFactory;
}

protected Context getContext() {

return context;
}

public Object executeScript(Script compiledScript, Scriptable scope) throws InterruptedException {


try {
// if the executor is halting this task, we don't want to initialize the context yet
synchronized (this) {
ThreadUtils.checkInterruptedStatus();
context = Context.getCurrentContext();
Thread.currentThread().setContextClassLoader(contextFactory.getApplicationClassLoader());
logger.debug(StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode());

/*
* This should never be called but exists in case executeScript is called from a
* different thread than the one that entered the context.
*/
if (context == null) {
contextCreated = true;
context = JavaScriptScopeUtil.getContext(contextFactory);
}

if (context instanceof MirthContext) {


((MirthContext) context).setRunning(true);
}
}

/* (SN) Debugger entry starts */


if (rhinoDebugger == null) {
final String title = StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode();
rhinoDebugger = Main.mainEmbedded(contextFactory, scope, title);
rhinoDebugger.setExitAction(null);
}
rhinoDebugger.attachTo(contextFactory);
rhinoDebugger.setScope(scope);
rhinoDebugger.pack();
rhinoDebugger.setVisible(true);
/* (SN) Debugger entry ends */

return compiledScript.exec(context, scope);


} finally {
if (contextCreated) {
Context.exit();
contextCreated = false;
}
}
}
}

Eclipse JSDT Debugger Embedded


/*
* Copyright (c) Mirth Corporation. All rights reserved.
*
* http://www.mirthcorp.com
*
* The software in this package is published under the terms of the MPL license a copy of which has
* been included with this distribution in the LICENSE.txt file.
*/

package com.mirth.connect.server.util.javascript;

import java.util.concurrent.Callable;

import org.apache.commons.lang3.StringUtils;
import org.apache.log4j.Logger;
import org.mozilla.javascript.Context;
import org.mozilla.javascript.Script;
import org.mozilla.javascript.Scriptable;

import com.mirth.connect.donkey.util.ThreadUtils;

/* (SN) Debugger declaration starts */
import org.eclipse.wst.jsdt.debug.rhino.debugger.RhinoDebugger;
/* (SN) Debugger declaration ends */

public abstract class JavaScriptTask<T> implements Callable<T> {

private Logger logger = Logger.getLogger(JavaScriptTask.class);


private MirthContextFactory contextFactory;
private Context context;
private boolean contextCreated = false;

/* (SN) Debugger declaration starts */


private static RhinoDebugger rhinoDebugger = null;
/* (SN) Debugger declaration ends */

public JavaScriptTask(MirthContextFactory contextFactory) {


this.contextFactory = contextFactory;
}

public MirthContextFactory getContextFactory() {


return contextFactory;
}

public void setContextFactory(MirthContextFactory contextFactory) {


this.contextFactory = contextFactory;
}

protected Context getContext() {


return context;
}

public Object executeScript(Script compiledScript, Scriptable scope) throws InterruptedException {


try {
// if the executor is halting this task, we don't want to initialize the context yet
synchronized (this) {
ThreadUtils.checkInterruptedStatus();
context = Context.getCurrentContext();
Thread.currentThread().setContextClassLoader(contextFactory.getApplicationClassLoader());
logger.debug(StringUtils.defaultString(StringUtils.trimToNull(getClass().getSimpleName()),
getClass().getName()) + " using context factory: " + contextFactory.hashCode());

/*
* This should never be called but exists in case executeScript is called from a
* different thread than the one that entered the context.
*/
if (context == null) {
contextCreated = true;
context = JavaScriptScopeUtil.getContext(contextFactory);
}

if (context instanceof MirthContext) {


((MirthContext) context).setRunning(true);
}
}

/* (SN) Debugger entry starts */


if ( null == context.getDebugger() ) {
rhinoDebugger = new RhinoDebugger("transport=socket,suspend=y,address=9009");
try { rhinoDebugger.start(); } catch(Exception ex){System.out.println(ex.getMessage());};
rhinoDebugger.contextCreated(context);
}
contextFactory.addListener(rhinoDebugger);
/* (SN) Debugger entry ends */

return compiledScript.exec(context, scope);


} finally {
if (contextCreated) {
/* (SN) Debugger stop */
try { rhinoDebugger.stop(); } catch(Exception ex){System.out.println(ex.getMessage());};
/* (SN) Debugger stop */
Context.exit();
contextCreated = false;
}
}
}
}

G: Rhino Script Engine script samples

Source and Destination connector scripts as they are executed by the Mirth Connect Server
using the Rhino Script Engine. The code has been manually tweaked and commented for better
readability.

Channel Source Connector script

function $co(key, value) {


if (arguments.length == 1) {
return connectorMap.get(key);
} else {
return connectorMap.put(key, value);
}
}

function $c(key, value) {


if (arguments.length == 1) {
return channelMap.get(key);
} else {
return channelMap.put(key, value);
}
}

function $s(key, value) {


if (arguments.length == 1) {
return sourceMap.get(key);
} else {
return sourceMap.put(key, value);
}
}

function $gc(key, value) {


if (arguments.length == 1) {
return globalChannelMap.get(key);
} else {
return globalChannelMap.put(key, value);
}
}

function $g(key, value) {


if (arguments.length == 1) {
return globalMap.get(key);
} else {
return globalMap.put(key, value);
}
}

function $cfg(key, value) {


if (arguments.length == 1) {
return configurationMap.get(key);
} else {
return configurationMap.put(key, value);
}
}

function $r(key, value) {


if (arguments.length == 1) {
return responseMap.get(key);
} else {
return responseMap.put(key, value);
}
}

function $(string) {
try {
if (responseMap.containsKey(string)) {
return $r(string);
}
} catch (e) {}
try {

if (connectorMap.containsKey(string)) {
return $co(string);
}
} catch (e) {}
try {
if (channelMap.containsKey(string)) {
return $c(string);
}
} catch (e) {}
try {
if (sourceMap.containsKey(string)) {
return $s(string);
}
} catch (e) {}
try {
if (globalChannelMap.containsKey(string)) {
return $gc(string);
}
} catch (e) {}
try {
if (globalMap.containsKey(string)) {
return $g(string);
}
} catch (e) {}
try {
if (configurationMap.containsKey(string)) {
return $cfg(string);
}
} catch (e) {}
try {
if (resultMap.containsKey(string)) {
return resultMap.get(string);
}
} catch (e) {}
return '';
}
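
// Usage (illustrative keys and values): $c('patientId', '12345') stores a value in the
// channel map, $c('patientId') reads it back, and $('patientId') looks the key up across
// the maps in the order tried above (response, connector, channel, source, global channel,
// global, configuration, result).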

function getAttachments() {
return AttachmentUtil.getMessageAttachments(connectorMessage);
}

function addAttachment(data, type) {


return AttachmentUtil.createAttachment(connectorMessage, data, type);
}
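
// Usage (illustrative): addAttachment(data, 'application/pdf') stores the supplied data as
// a message attachment; getAttachments() returns the attachments of the current message.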

function validate(mapping, defaultValue, replacement) {


var result = mapping;
if ((result == undefined) || (result.toString().length == 0)) {
if (defaultValue == undefined) {
defaultValue = '';
}
result = defaultValue;
}
if ('string' === typeof result || result instanceof java.lang.String || 'xml' === typeof result) {
result = new java.lang.String(result.toString());
if (replacement != undefined) {
for (var i = 0; i < replacement.length; i++) {
var entry = replacement[i];
result = result.replaceAll(entry[0], entry[1]);
}
}
}
return result;
}
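
// Usage (illustrative): validate(msg['PID']['PID.5']['PID.5.1'].toString(), 'UNKNOWN')
// returns the mapped value, or 'UNKNOWN' when it is missing or empty; an optional third
// argument, e.g. [['\\.', ' ']], applies regex replacements to the result.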

function createSegment(name, msgObj, index) {


if (arguments.length == 1) {
return new XML('<' + name + '></' + name + '>');
};
if (arguments.length == 2) {
index = 0;
};
msgObj[name][index] = new XML('<' + name + '></' + name + '>');
return msgObj[name][index];
}

function createSegmentAfter(name, segment) {


var msgObj = segment;
while (msgObj.parent() != undefined) {

msgObj = msgObj.parent();
}
msgObj.insertChildAfter(segment[0], new XML('<' + name + '></' + name + '>'));
return msgObj.child(segment[0].childIndex() + 1);
}
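
// Usage (illustrative): createSegment('NTE', msg, 0) inserts an empty NTE segment as
// msg['NTE'][0]; createSegmentAfter('NTE', msg['PID']) inserts an empty NTE segment
// directly after the PID segment and returns it.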
importClass = function() {
logger.error('The importClass method has been deprecated and will soon be removed. ' +
'Please use importPackage or the fully-qualified class name instead.');
for each(argument in arguments) {
var className = new Packages.java.lang.String(argument);
if (className.startsWith('class ')) {
className = className.substring(6);
}
eval('importPackage(' + Packages.java.lang.Class.forName(className).getPackage().getName() + ')');
}
}

function doScript() {
// Retrieve inbound message and define the 'msg' variable
msg = new XML(connectorMessage.getTransformedData());
if (msg.namespace('') != undefined) {
default xml namespace = msg.namespace('');
} else {
default xml namespace = '';
}

// User defined filter rule


function doFilter() {
phase[0] = 'filter';
return true;
}

function doTransform() {
phase[0] = 'transformer';
logger = Packages.org.apache.log4j.Logger.getLogger(phase[0]);

// User defined Source Connector Transformer


logger.info("Source Transformer script");

// If Inbound message template is defined


if ('xml' === typeof msg) {
if (msg.hasSimpleContent()) {
msg = msg.toXMLString();
}
} else if ('undefined' !== typeof msg && msg !== null) {
var toStringResult = Object.prototype.toString.call(msg);
if (toStringResult == '[object Object]' || toStringResult == '[object Array]') {
msg = JSON.stringify(msg);
}
}

// If Outbound message template is defined


if ('xml' === typeof tmp) {
if (tmp.hasSimpleContent()) {
tmp = tmp.toXMLString();
}
} else if ('undefined' !== typeof tmp && tmp !== null) {
var toStringResult = Object.prototype.toString.call(tmp);
if (toStringResult == '[object Object]' || toStringResult == '[object Array]') {
tmp = JSON.stringify(tmp);
}
}
}
if (doFilter() == true) {
doTransform();
return true;
} else {
return false;
}
}

doScript();

Channel Destination Connector script

function $co(key, value) {


if (arguments.length == 1) {
return connectorMap.get(key);
} else {
return connectorMap.put(key, value);
}
}

function $c(key, value) {


if (arguments.length == 1) {
return channelMap.get(key);
} else {
return channelMap.put(key, value);
}
}

function $s(key, value) {


if (arguments.length == 1) {
return sourceMap.get(key);
} else {
return sourceMap.put(key, value);
}
}

function $gc(key, value) {


if (arguments.length == 1) {
return globalChannelMap.get(key);
} else {
return globalChannelMap.put(key, value);
}
}

function $g(key, value) {


if (arguments.length == 1) {
return globalMap.get(key);
} else {
return globalMap.put(key, value);
}
}

function $cfg(key, value) {


if (arguments.length == 1) {
return configurationMap.get(key);
} else {
return configurationMap.put(key, value);
}
}

function $r(key, value) {


if (arguments.length == 1) {
return responseMap.get(key);
} else {
return responseMap.put(key, value);
}
}

function $(string) {
try {
if (responseMap.containsKey(string)) {
return $r(string);
}
} catch (e) {}
try {
if (connectorMap.containsKey(string)) {
return $co(string);
}
} catch (e) {}
try {
if (channelMap.containsKey(string)) {
return $c(string);
}
} catch (e) {}
try {
if (sourceMap.containsKey(string)) {
return $s(string);

}
} catch (e) {}
try {
if (globalChannelMap.containsKey(string)) {
return $gc(string);
}
} catch (e) {}
try {
if (globalMap.containsKey(string)) {
return $g(string);
}
} catch (e) {}
try {
if (configurationMap.containsKey(string)) {
return $cfg(string);
}
} catch (e) {}
try {
if (resultMap.containsKey(string)) {
return resultMap.get(string);
}
} catch (e) {}
return '';
}

function getAttachments() {
return AttachmentUtil.getMessageAttachments(connectorMessage);
}

function addAttachment(data, type) {


return AttachmentUtil.createAttachment(connectorMessage, data, type);
}

function validate(mapping, defaultValue, replacement) {


var result = mapping;
if ((result == undefined) || (result.toString().length == 0)) {
if (defaultValue == undefined) {
defaultValue = '';
}
result = defaultValue;
}
if ('string' === typeof result || result instanceof java.lang.String || 'xml' === typeof result) {
result = new java.lang.String(result.toString());
if (replacement != undefined) {
for (var i = 0; i < replacement.length; i++) {
var entry = replacement[i];
result = result.replaceAll(entry[0], entry[1]);
}
}
}
return result;
}

function createSegment(name, msgObj, index) {


if (arguments.length == 1) {
return new XML('<' + name + '></' + name + '>');
};
if (arguments.length == 2) {
index = 0;
};
msgObj[name][index] = new XML('<' + name + '></' + name + '>');
return msgObj[name][index];
}

function createSegmentAfter(name, segment) {


var msgObj = segment;
while (msgObj.parent() != undefined) {
msgObj = msgObj.parent();
}
msgObj.insertChildAfter(segment[0], new XML('<' + name + '></' + name + '>'));
return msgObj.child(segment[0].childIndex() + 1);
}
importClass = function() {
logger.error('The importClass method has been deprecated and will soon be removed. ' +
'Please use importPackage or the fully-qualified class name instead.');
for each(argument in arguments) {
var className = new Packages.java.lang.String(argument);
if (className.startsWith('class ')) {
className = className.substring(6);

}
eval('importPackage(' + Packages.java.lang.Class.forName(className).getPackage().getName() + ')');
}
}

function doScript() {

// Retrieve inbound message and define the 'msg' variable


msg = new XML(connectorMessage.getTransformedData());
if (msg.namespace('') != undefined) {
default xml namespace = msg.namespace('');
} else {
default xml namespace = '';
}

// User defined filter rule


function filterRule1() {
logger.info("This is the Destination Filter rule");
}
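
// Note: as generated here, filterRule1() only logs and returns undefined, so doFilter()
// below evaluates to false and the message would be filtered out; a filter rule must
// return true for the message to be accepted.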

function doFilter() {
phase[0] = 'filter';
return ((filterRule1() == true));
}

function doTransform() {
phase[0] = 'transformer';
logger = Packages.org.apache.log4j.Logger.getLogger(phase[0]);

// User defined Destination Connector Transformer


logger.info("This is the Destination Transformer step");

// If Inbound message template is defined


if ('xml' === typeof msg) {
if (msg.hasSimpleContent()) {
msg = msg.toXMLString();
}

} else if ('undefined' !== typeof msg && msg !== null) {


var toStringResult = Object.prototype.toString.call(msg);
if (toStringResult == '[object Object]' || toStringResult == '[object Array]') {
msg = JSON.stringify(msg);
}
}

// If Outbound message template is defined


if ('xml' === typeof tmp) {
if (tmp.hasSimpleContent()) {
tmp = tmp.toXMLString();
}

} else if ('undefined' !== typeof tmp && tmp !== null) {


var toStringResult = Object.prototype.toString.call(tmp);
if (toStringResult == '[object Object]' || toStringResult == '[object Array]') {
tmp = JSON.stringify(tmp);
}
}
}

if (doFilter() == true) {
doTransform();
return true;
} else {
return false;
}
}

doScript();

H: Archives Content

There are seven archives provided with this book. Together they contain the complete sets
of files required for the Part II - Part V implementations and for the appendix examples.

Eligibility.NoACK

Folder                    Files                                      Comment

Channels                  Code Template.xml                          Channels, code templates and
                          ConfigurationMap.properties                global scripts for the Part II
                          Data Logger.xml                            implementation.
                          Global Script.xml
                          HL7v3 Verification.xml
                          Query Sender.xml
                          Response Sender.xml
                          v2-v3 Transformer.xml
                          v3-v2 Transformer.xml
custom-lib                /coreschemas                               Custom-lib folder for the Mirth
                          /schemas                                   Connect Server installation.
                          /schematron
DB                        PostgreSQL-Eligibility DB.sql              PostgreSQL patients database
HL7v2\Samples             QBP_E22_Request.hl7
                          RSP_E22_Error.hl7
                          RSP_E22_Success.hl7
HL7v2\Templates           RSP-45_Template.hl7
HL7v2\XSLT                QUCR-RSP.xslt
HL7v3                     <intentionally skipped>                    Schemas for HL7v3 messages
HL7v3\Samples_Annotated   QUCR_IN200101UV01_Request_Annotated.xml    Annotated Eligibility query
                          QUCR_IN210101UV01_Error_Annotated.xml      response and request messages
                          QUCR_IN210101UV01_Success_Annotated.xml
HL7v3\Templates           QUCR_IN200101_Template.xml                 Eligibility query response and
                          QUCR_IN210101_Template.xml                 request template messages with
                                                                     empty fields

Eligibility.ACK-NACK

Folder                    Files                                      Comment

Channels                  Code Template ACK.xml                      Channels, code templates and
                          ConfigurationMap.properties                global scripts for the Part III
                          Data Logger.xml                            implementation.
                          Global Script.xml
                          HL7v3 ACK.xml
                          HL7v3 Verification-ACK.xml
                          Query Sender-ACK.xml
                          v2-v3 Transformer-ACK.xml
custom-lib                /coreschemas                               Custom-lib folder for the Mirth
                          /schemas                                   Connect Server installation.
                          /schematron
DB                        PostgreSQL-Eligibility DB.sql              PostgreSQL patients database
HL7v2\Samples             ACK-A01_Negative.hl7                       HL7v2 acknowledgement samples
                          ACK-A01_Positive.hl7
HL7v3                     <intentionally skipped>                    Schemas for HL7v3 messages
HL7v3\Samples             MCCI_IN000002UV01.xml                      HL7v3 acknowledgement sample
HL7v3\Samples_Annotated   MCCI_IN000002UV01-Annotated.xml            HL7v3 acknowledgement sample
                                                                     with annotations
HL7v3\Templates           MCCI_IN000002_template.xml                 HL7v3 acknowledgement template
                                                                     with empty fields

DICOM

Folder                    Files                                      Comment

Channels                  Code Template.xml                          Channels, code templates and
                          ConfigurationMap.properties                global scripts for the Part IV
                          DICOM SCP.xml                              implementation.
                          DICOM SCU.xml
                          Global Scripts.xml

Eligibility.JMS

Folder                    Files                                      Comment

Channels                  Data Logger JMS.xml                        Channels, code templates and
                          Data Logger RAW.xml                        global scripts for the Part V
                          HL7v3 Verification JMS.xml                 implementation.
                          HL7v3 Verification RAW.xml
                          Query Sender JMS.xml
                          v2-v3 Transformer JMS.xml
                          v2-v3 Transformer RAW.xml
custom-lib                faultmessage.jar                           Custom-lib folder for the Mirth
                                                                     Connect Server installation.

Debugging

Folder                    Files                                      Comment

JSDT                      JavaScriptTask.java                        Eclipse JSDT debugger in
                                                                     embedded mode.
Rhino Debugger            JavaScriptTask.java                        Rhino JavaScript debugger in
                                                                     embedded mode.

Polling

Folder                    Files                                      Comment

Channels                  IHE Validator Service.xml                  Polling channels
                          Web Service Polling.xml

Extension.JSON

Folder                    Files                                      Comment

Deploy                    json/lib/org-json.jar                      JSON Writer destination connector
                          json/destination.xml                       deployment package
                          json/json-client.jar
                          json/json-server.jar
                          json/json-shared.jar
jarsigner                 signjar.bat                                Batch file with command prompts
                                                                     to sign JARs
Source\Client             JSONWriter.java                            Client side extension template
Source\Server             destination.xml                            Server side extension templates
                          JSONConnectorServlet.java
                          JSONConnectorServletInterface.java
                          JSONDispatcher.java
                          JSONDispatcherProperties.java
Source\org.json           JSONArray.java                             XML to JSON library source code
                          JSONException.java
                          JSONObject.java
                          JSONString.java
                          JSONStringer.java
                          JSONTokener.java
                          JSONWriter.java
                          XML.java
                          XMLTokener.java
