
Migrating to a new DB2 LUW version the hard way

By reading just the title, I'm sure lots of you are thinking: why can't people
just do things in an easy and well-ordered fashion ... the way it is
supposed to be done? Well, sometimes there is just no way around it.
Maybe explaining why will put things in perspective: living in a
country where three languages are spoken makes sorting character data a
challenge. You want names - whether they are Dutch, French or German -
to be sorted the way your customer wants to see them on his reports,
and this without losing any performance.
You should know that once you have created your database, the choice of
how things are sorted is made for good, and flipping any configuration
switch is not going to change that. Sure, you can change your sorting by
adding clauses to each and every query, but be assured that such sorting
happens algorithmically at query time, which is much slower than every
other option. It is just fine for testing purposes, and that's the end of
that discussion.
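To make this concrete: in DB2 LUW the collating sequence can only be chosen at database creation time, and the per-query escape hatch sorts on collation keys computed for every row, so indexes cannot help. A hedged sketch - the database, schema and column names are made up, and COLLATION_KEY_BIT requires DB2 9.5 or later:

```sql
-- The collating sequence is fixed for the life of the database
-- (CREATE DATABASE is a CLP command, shown here for illustration):
CREATE DATABASE MYDB USING CODESET UTF-8 TERRITORY BE COLLATE USING UCA500R1

-- Per-query workaround on an existing database: sort via a collation
-- key computed per row - no index support, so fine for testing only.
SELECT name
FROM   app.customer
ORDER BY COLLATION_KEY_BIT(name, 'UCA500R1')
```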
So the hard way ... which means, in short and by no means the complete list:

- generate the DDL
- arrange new disk space
- prepare the new database with the new codepage to get the sorting
  right
- arrange downtime
- export the data with a script invoking High Performance Unload (HPU)
  or, when downtime has to be kept to a minimum, set up replication of
  the data using Change Data Capture (CDC)
- import the data
- check integrity
- point all software to the new database
- put the old database out of commission and free up the disks
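The export/import pair in the list above, stripped of HPU specifics (HPU control-file syntax is site-specific and not shown), can be sketched with the standard db2 EXPORT and LOAD commands; all database, schema and table names below are illustrative:

```shell
# Stand-in sketch using plain db2 EXPORT/LOAD; our team used HPU,
# but the shape of the step is the same. All names are made up.
db2 CONNECT TO SRCDB
db2 "EXPORT TO /migration/data/customer.ixf OF IXF SELECT * FROM APP.CUSTOMER"
db2 TERMINATE

db2 CONNECT TO NEWDB
db2 "LOAD FROM /migration/data/customer.ixf OF IXF REPLACE INTO APP.CUSTOMER"
db2 TERMINATE
```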

and repeat this cycle for every one of your databases - in our case we had
800 databases spread over multiple environments - and you're done.
We had to come up with a strategy that would avoid failure, and since we
had a team of thirteen, all migrations had to be done in one and exactly
one standardized way. Besides that, we had to bypass all of the flaws we
found in the tools we used:
a) db2look didn't deliver DDL we could blindly count on, as the
   order of creating elements isn't always correct. To give one example,
   think of the order of creation of tables and views, or stored
   procedures. As a second example, database objects and grants that
   weren't needed anymore had to be removed. For the databases where we
   were obligated to use CDC, generated columns had to be defined
   differently and foreign keys shouldn't be enforced on the target side.
   Once all data was replicated, the original definitions would be
   restored.
b) High Performance Unload wouldn't always export data in the form of
   LOBs or XML. Most of the time the starting point of a migration was
   an isolated online database, in other cases an offline backup of
   the database on disk, and at first HPU couldn't export the data in a
   correct manner.
c) there is nothing out there that can generate a script to load any kind
   of data into any kind of table (think of generated-always columns,
   LOBs, XML, ...) ... let alone load it in parallel
d) once all data is loaded, how are you going to get your tables out of
   their 'set integrity pending' state? In which order are you going to
   check integrity for your tables?
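Point d) can be tackled with standard Unix tooling: feed the foreign-key parent/child pairs (in DB2 they could be read from SYSCAT.REFERENCES) to tsort, and out comes an order in which every parent table is checked before its children. A minimal sketch - the table names and the dependency file are illustrative, not our actual script:

```shell
# fk_deps.txt holds one "PARENT CHILD" pair per line; in DB2 these pairs
# could come from SYSCAT.REFERENCES (REFTABNAME, TABNAME). Made-up data:
cat > fk_deps.txt <<'EOF'
CUSTOMER ORDERS
ORDERS ORDERLINE
CUSTOMER ADDRESS
EOF

# tsort prints a topological order: every parent table comes before its
# children, a safe order for SET INTEGRITY ... IMMEDIATE CHECKED.
tsort fk_deps.txt
```

Self-referencing tables and dependency cycles make tsort complain, which is exactly the kind of case that needs manual attention anyway.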
We came up with some Korn shell scripts that helped us face all of the
previously described problems. By declaring a few Unix environment
settings on both the source and the target server, the scripts were given
directions on how to proceed.
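As an illustration of what such settings could look like - the variable names and values here are hypothetical, not the ones our scripts actually used:

```shell
# migration.env - example of a settings file sourced by every migration
# script on both servers. All names and values are made up.
export SRC_DB=SALESDB              # alias of the v9.1 source database
export TGT_DB=SALESDB              # alias of the v9.5 target database
export MIG_WORKDIR=/migration/work # scratch space for DDL, exports, logs
export MIG_PARALLEL=4              # number of parallel load streams
```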
A first script generated DDL we could count on for 95% of our cases. We
still had to intervene on those databases where e.g. federation was
activated, as passwords get blanked out (which is OK!). A second script
executed the export and generated a load script suited to each type of
table or column in a database. A third script, now on the target system,
broke the monolithic load script into pieces so that we were able to load
the data in parallel. A fourth script checked the number of exported rows
per table against the number of loaded rows and searched for any other
kind of error. Finally, a fifth script figured out how to get the tables
checked on their referential integrity.
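The heart of the fourth script's row-count check can be as small as one awk pass over two count files; the file names and contents below are made up for illustration:

```shell
# "TABLE COUNT" per line, as collected from the export and load logs.
cat > export_counts.txt <<'EOF'
CUSTOMER 1042
ORDERS 98311
ORDERLINE 412077
EOF
cat > load_counts.txt <<'EOF'
CUSTOMER 1042
ORDERS 98310
ORDERLINE 412077
EOF

# Report every table whose loaded row count differs from the exported one.
awk 'NR==FNR { want[$1] = $2; next }
     want[$1] != $2 { print $1 ": exported " want[$1] ", loaded " $2 }' \
    export_counts.txt load_counts.txt
# -> ORDERS: exported 98311, loaded 98310
```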
Now we still needed one Word document to rule them all. This document
described the procedure that the other team members had to follow and
gave the team directions on how to migrate every database in the same
way.
The effort it took to get such a smooth migration? One year of shell
programming, a number of PMRs, and testing the sequence of actions to
come up with one procedure for all cases. One month and a few
adjustments later, the majority of the databases had been migrated from
v9.1 with collating sequence UCA400 to v9.5 with UCA500R1.
Eddy Coppens

SuadaSoft
