
PROCEDURE TO CONFIGURE DATA RETENTION - UP FRAMEWORK

Jason / Louis
January 17, 2022
Version: 1.2
TABLE OF CONTENTS

1. SUPPORTING INFORMATION
   1.1 About This Document
   1.2 Introduction
   1.3 Document Revision History
2. PREPARATION WORK
3. CONFIGURE THE DATA RETENTION PERIOD
4. CHECK THE PARTITION SIZE
5. DROP THE OLD PARTITION IN RTDB
6. VERIFICATION STEP

1. SUPPORTING INFORMATION
1.1 About This Document

The purpose of this document is to provide the procedure for modifying the default data retention
period of the IPF logs tables according to the bank's decision. The main reason to perform this
configuration is to reduce tablespace usage in the Real Time Database (RTDB) for banks that have run
out of tablespace. The solution is therefore to decrease the default data retention period of 365
days for the IPF logs tables, which is set during installation.

1.2 Introduction

- To perform this activity, the bank will need to schedule a downtime of around 6 hours
  (depending on the transaction volume and server performance).

- This activity affects the following IPF logs tables:
  - ipf_message
  - ipf_trans_log
  - ipf_alias_mgmt_log

  Based on observation, these are the top three tables that occupy most of the tablespace.

- The bank needs to perform a backup before starting this activity.

1.3 Document Revision History

Version  Author        Date         Changes
1.0      Jason, Louis  13 Jan 2022  Initial release
1.1      Louis         17 Jan 2022  Added screen captures for the settings in Configuration Builder
1.2      Louis         20 Jan 2022  Added introduction; updated detail for each section

2. PREPARATION WORK
a. Before making the changes, back up the items listed below, as this is crucial for the rollback
   step:
   - Real Time Database (RTDB) backup
   - Export of the latest configuration binary file from Configuration Builder
b. The bank needs to identify its preferred retention period for the RTDB IPF logs tables.
c. Downtime is expected. Therefore, the bank needs to schedule a period to perform this activity
   and inform BI about the schedule.
d. The bank needs to send a sign-off message to BI before starting the activity, and send a sign-on
   message after completing it.

3. CONFIGURE THE DATA RETENTION PERIOD


This step adds a configuration entry for each desired table under the Platform Settings
configuration, thereby allowing its retention settings to be changed.

a. Log in to Configuration Builder.

b. Go to Platform Settings → Data Retention → mcas_tx_events, make any change on this page, and
   click the Accept button in the top right corner.
   (e.g., change the default value of Retention Period from 30d to 7d)

c. Navigate to menu Configuration → Export to XML to export an XML file.

d. Open the XML file exported in the previous step using Notepad++ or any other text editor.

e. Search for the <delete .../> stanza whose dataset attribute has the value "mcas_tx_events" to
   locate the XML stanza that governs retention for the "mcas_tx_events" entity.

f. Make a copy of this XML stanza and paste it into the file. Replace the copy's dataset attribute
   value with the target table name, and set the retention attribute to the preferred number of
   days. Then remove the original XML stanza whose dataset attribute has the value
   "mcas_tx_events", because no changes are intended for the "mcas_tx_events" table.
   In the example below, we make copies and edit them in order to configure retention on 3
   additional tables: "ipf_message", "ipf_trans_log" and "ipf_alias_mgmt_log".

Example:

Before:

<delete type="delete-or-partition" dataset="mcas_tx_events"
        retention="7d" partitionSize="Generated" partitionAhead="4"
        blockSize="100" interval="10m" timeOfDay="0:0:0" />

After:

<delete type="delete-or-partition" dataset="ipf_message"
        retention="2d" partitionSize="Generated" partitionAhead="4"
        blockSize="100" interval="2h" timeOfDay="0:0:0" />
<delete type="delete-or-partition" dataset="ipf_trans_log"
        retention="2d" partitionSize="Generated" partitionAhead="4"
        blockSize="100" interval="2h" timeOfDay="0:0:0" />
<delete type="delete-or-partition" dataset="ipf_alias_mgmt_log"
        retention="2d" partitionSize="Generated" partitionAhead="4"
        blockSize="100" interval="2h" timeOfDay="0:0:0" />

g. Save the XML file.

h. In Configuration Builder, navigate to menu Configuration → Import from XML, select the XML file
   saved just now, and import it back to replace the configuration data.

i. Navigate to menu Configuration → Apply Config Changes to apply the configuration change.
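The copy-and-edit in steps e and f is a plain text edit; purely as an illustration, the same
transformation can be sketched in Python's standard xml.etree module (the embedded XML fragment, its
root element name, and all attribute values are assumptions modelled on the example in step f, not
the real exported file):

```python
import copy
import xml.etree.ElementTree as ET

# Illustrative fragment of the exported configuration (root element name assumed).
xml_text = """<retention>
  <delete type="delete-or-partition" dataset="mcas_tx_events"
          retention="7d" partitionSize="Generated" partitionAhead="4"
          blockSize="100" interval="10m" timeOfDay="0:0:0" />
</retention>"""

root = ET.fromstring(xml_text)
template = root.find(".//delete[@dataset='mcas_tx_events']")

# Copy the stanza once per IPF log table and set the preferred retention,
# then drop the original because no change is intended for mcas_tx_events.
for table in ("ipf_message", "ipf_trans_log", "ipf_alias_mgmt_log"):
    stanza = copy.deepcopy(template)
    stanza.set("dataset", table)
    stanza.set("retention", "2d")
    stanza.set("interval", "2h")
    root.append(stanza)
root.remove(template)

print(ET.tostring(root, encoding="unicode"))
```

In practice the manual edit in a text editor is sufficient; the sketch only makes explicit which
attributes change and which stanza is removed.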

4. CHECK THE PARTITION SIZE
a. Log in to the MCAS server.
b. Go to the switch log directory and examine the switch log to check the partition size of the
   newly created data retention configuration for each IPF log table involved.

$ cd <switch_dir>/Switch/log/switch1

Check the partition size for the “ipf_message” table:

$ grep -F "[ipf_message]" log<current_date>.txt

Example of output:

2022/01/11 00:59:59.1:info:switch3:T[c94287c]:mcas.db.storage.ret:ZZZ
[ipf_message] on [DB1, (ACTIVE, 2022-01-06 23:03:02.690)] prepare
partition-size=300m safety=600m retention-period=45h
check-interval=18000000 d-a-t=3600000

Check the partition size for the “ipf_trans_log” table:

$ grep -F "[ipf_trans_log]" log<current_date>.txt

Example of output:

2022/01/11 01:00:02.1:info:switch3:T[c94287c]:mcas.db.storage.ret:ZZZ
[ipf_trans_log] on [DB1, (ACTIVE, 2022-01-06 23:03:02.690)] prepare
partition-size=300m safety=600m retention-period=45h
check-interval=18000000 d-a-t=3600000

Check the partition size for the “ipf_alias_mgmt_log” table:

$ grep -F "[ipf_alias_mgmt_log]" log<current_date>.txt

Example of output:

2022/01/11 01:01:52.1:info:switch3:T[c94287c]:mcas.db.storage.ret:ZZZ
[ipf_alias_mgmt_log] on [DB1, (ACTIVE, 2022-01-06 23:03:02.690)]
prepare partition-size=300m safety=600m retention-period=45h
check-interval=18000000 d-a-t=3600000

Note (meaning of the fields in the output):
1. partition-size: size of each partition
2. retention-period: log retention time (will roughly match the value set by the bank)
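These fields can also be pulled out of a log line programmatically. The sketch below is purely an
illustration: the sample line is copied from the example output above, and the field-parsing regular
expression is an assumption about the key=value layout, not part of the product:

```python
import re

# Sample switch log line, taken from the example output in this section.
line = (
    "2022/01/11 00:59:59.1:info:switch3:T[c94287c]:mcas.db.storage.ret:ZZZ "
    "[ipf_message] on [DB1, (ACTIVE, 2022-01-06 23:03:02.690)] prepare "
    "partition-size=300m safety=600m retention-period=45h "
    "check-interval=18000000 d-a-t=3600000"
)

# Collect every key=value pair; keys may contain hyphens (e.g. retention-period).
fields = dict(re.findall(r"([\w-]+)=(\S+)", line))
print(fields["partition-size"])    # size of each partition -> "300m"
print(fields["retention-period"])  # effective retention     -> "45h"
```

Here 300m (300 minutes) is the partition size that section 5 later uses as the time interval between
future partitions.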

5. DROP THE OLD PARTITION IN RTDB


a. Before starting this activity, make sure all the switch services have been fully stopped on all
   instances.
b. Log in to one of the RTDB servers.
c. For each table whose retention is to be changed: export the data, drop the partitions (remove
   the data), split the last partition, create the future partitions, and import the data back.

Procedure:
1. Export the data.
2. List the partitions in ascending order of partition name. Note the name of the first
   partition.
3. Using this list order, drop each partition, but do not drop the last partition.
4. Split the last partition into two, labelling the new one with the first partition's name but
   with the current timestamp.
5. Drop the last partition.
6. Create 4 more future partitions (time interval for each partition = partition size found in
   section 4).
7. Import the data back.
8. Confirm the new partitions have been created.

Example (SQL) for the “ipf_trans_log” table:

Note:
1. The number of partitions may differ.
2. The boundaries of all new partitions should be calculated from your own timestamps.
3. The tablespace name used in <tablespace name> should be your RTDB tablespace name.
1. Export data from ipf_trans_log

2. List the partitions


select PARTITION_NAME from USER_TAB_PARTITIONS where
table_name='IPF_TRANS_LOG' ORDER BY PARTITION_NAME;

Example of output:

PARTITION_1
PARTITION_2
PARTITION_3
PARTITION_4
PARTITION_5

3. Drop each partition except the last


ALTER TABLE IPF_TRANS_LOG DROP PARTITION PARTITION_1;

ALTER TABLE IPF_TRANS_LOG DROP PARTITION PARTITION_2;
ALTER TABLE IPF_TRANS_LOG DROP PARTITION PARTITION_3;
ALTER TABLE IPF_TRANS_LOG DROP PARTITION PARTITION_4;

4. Split the last partition


ALTER TABLE IPF_TRANS_LOG SPLIT PARTITION PARTITION_5 INTO (PARTITION
PARTITION_1 VALUES LESS THAN (TIMESTAMP '2022-01-21 13:00:00'),
PARTITION PARTITION_5);

5. Drop the last partition


ALTER TABLE IPF_TRANS_LOG DROP PARTITION PARTITION_5;

6. Create 4 more future partitions

ALTER TABLE IPF_TRANS_LOG ADD PARTITION PARTITION_2 VALUES LESS THAN
(TIMESTAMP '2022-01-21 18:00:00') TABLESPACE "RTDB" INITRANS 10;
ALTER TABLE IPF_TRANS_LOG ADD PARTITION PARTITION_3 VALUES LESS THAN
(TIMESTAMP '2022-01-21 23:00:00') TABLESPACE "RTDB" INITRANS 10;
ALTER TABLE IPF_TRANS_LOG ADD PARTITION PARTITION_4 VALUES LESS THAN
(TIMESTAMP '2022-01-22 04:00:00') TABLESPACE "RTDB" INITRANS 10;
ALTER TABLE IPF_TRANS_LOG ADD PARTITION PARTITION_5 VALUES LESS THAN
(TIMESTAMP '2022-01-22 09:00:00') TABLESPACE "RTDB" INITRANS 10;
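The upper bounds in the statements above advance in 5-hour steps, matching the 300-minute partition
size found in section 4. As an illustration of how to calculate the boundaries from your own
timestamp (the helper name, its parameters, and the printed statements are assumptions modelled on
the example, not a supplied tool):

```python
from datetime import datetime, timedelta

def future_partition_sql(table, start, interval, count, tablespace):
    """Generate ADD PARTITION statements whose upper bounds advance by the
    partition interval, starting from the split partition's upper bound."""
    stmts = []
    for i in range(1, count + 1):
        bound = start + i * interval
        stmts.append(
            f"ALTER TABLE {table} ADD PARTITION PARTITION_{i + 1} "
            f"VALUES LESS THAN (TIMESTAMP '{bound:%Y-%m-%d %H:%M:%S}') "
            f"TABLESPACE \"{tablespace}\" INITRANS 10;"
        )
    return stmts

# Reproduces the example: start at the split boundary, step by 5 hours.
for stmt in future_partition_sql(
    "IPF_TRANS_LOG",
    datetime(2022, 1, 21, 13, 0, 0),  # upper bound given to PARTITION_1 in step 4
    timedelta(hours=5),               # partition size from section 4 (300m = 5h)
    4,
    "RTDB",
):
    print(stmt)
```

Adjust the start timestamp, interval, and tablespace name to your own environment before running
the generated SQL.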

7. Import data back to ipf_trans_log

8. Confirm new partitions have been created


select PARTITION_NAME, HIGH_VALUE from USER_TAB_PARTITIONS where
table_name='IPF_TRANS_LOG' ORDER BY PARTITION_NAME;

Example of output:

PARTITION_1  TIMESTAMP '2022-01-21 13:00:00'
PARTITION_2  TIMESTAMP '2022-01-21 18:00:00'
PARTITION_3  TIMESTAMP '2022-01-21 23:00:00'
PARTITION_4  TIMESTAMP '2022-01-22 04:00:00'
PARTITION_5  TIMESTAMP '2022-01-22 09:00:00'

d. Repeat the same procedure for the “ipf_message” and “ipf_alias_mgmt_log” tables.

e. Repeat the same procedure on the other RTDB servers.
f. Restart the switch services on the MCAS server:
   i. For Active-Active availability mode, restart all the switch services one by one (Switch1 to
      Switch4).
   ii. For Active-Passive Sync availability mode, only restart the Switch1 and Switch3 services.
   iii. For Active-Passive Async availability mode:
      - bring up Switch2 and Switch4
      - bring down Switch2 and Switch4 after they have come up successfully
      - bring up the Switch1 and Switch3 services
g. Inform BI that the activity has ended and send a sign-on message.

6. VERIFICATION STEP
After all the switch services have been brought up successfully, keep monitoring the three IPF
logs tables, “ipf_message”, “ipf_trans_log” and “ipf_alias_mgmt_log”:
a. Check that new partitions are created.
b. Check that old partitions are dropped after the data retention period.

© Copyright ACI Worldwide, Inc. 2022
ACI, ACI Worldwide, ACI Payments, Inc., ACI Pay, Speedpay and all ACI product/solution names are trademarks
or registered trademarks of ACI Worldwide, Inc., or one of its subsidiaries, in the United States, other countries or both.
Other parties’ trademarks referenced are the property of their respective owners.
