Document Control
Table of Contents
7.3 JSH Initialization/T24 Login ........................................................................................................... 26
8. T24 Multi Application Server Setup ........................................................................................... 27
8.1 User Id and Groups ....................................................................................................................... 27
8.2 TAFC runtime ................................................................................................................................ 27
8.3 Oracle client .................................................................................................................................. 27
8.4 T24 setup....................................................................................................................................... 27
8.5 .profile setup ................................................................................................................................. 28
8.6 ORACLE DCD configuration ........................................................................................................... 30
8.7 T24 Desktop Connectivity ............................................................................................................. 31
8.8 T24 Service Definition ................................................................................................................... 31
8.9 T24 Updates or Local Patch or DL Define Installation................................................................... 31
9. T24 Application Server and Oracle DB Server DMZ Setup ............................................................ 32
10. Mutex Contention in Oracle 11G R2 on AIX 6.1 ........................................................................ 33
11. T24 Online and COB performance tuning ................................................................................. 33
12. Multiple T24/TAFC Environment Setup (test areas) .................................................................. 33
13. ARC-IB Stress test ................................................................................................................... 34
14. Follow Up ............................................................................................................................... 39
15. Conclusion .............................................................................................................................. 40
1. Objective
The primary objective of the site visit was to conduct the ARC-IB stress test, review any bottlenecks and
recommend appropriate changes to ensure the smooth rollout of ARC-IB, which will replace the TIB
(TEMENOS Internet Banking) solution that is already in place. Apart from the above, the bank also has
other projects in the pipeline, such as migrating T24 from jBASE to Oracle, replacing T24 Desktop with
Browser, and so on. The scope of work therefore also included reviewing, discussing, recommending and
concluding on various other streams that will play a vital role in the overall TECHCOM IT roadmap.
Below are the highlights of the scope of work that were addressed within the stipulated timeframe.
Scope: OS
- Review and recommend IBM AIX OS parameters for Oracle DB Server
- Review and recommend HP-UX 11iv3 parameters for T24 Application Server

Scope: Database Layer
- Review and recommend Oracle 11G R2 parameters
- Discuss and finalize ASM / file system based storage for Oracle data files
- Backup operational methods
- Indexing: common fields and tables in T24 that require external index in Oracle

Scope: APP Layer
- Check TAFC version; mechanism of data communication between App & DB layers and improve
- Migration process (& issues if any)
- Tuning T24 Application: Online & COB
- Multi-Application setup: runbook, procedures when installing updates
- Lock situation of one record when opening in different environments

Scope: Common Items
- Architecture conclusion of App & DB server: firewall required or not

Scope: ARC IB
- Conduct stress test, optimization recommendations and script generation
2. Recommended AIX Parameters
Paging is one of the most common reasons for performance problems. Paging means that the VMM
page-replacement algorithm takes computational pages away to free memory and then writes those
pages into the system paging space. You can attain better performance when page references are found
in the real memory.
The first step in removing scheduling delays due to AIX kernel paging is to make sure a system has the
correct AIX vmo parameters set. The following parameters should be verified and set correctly.
vmo -p -o maxperm%=90
vmo -p -o minperm%=3
vmo -p -o maxclient%=90
vmo -p -o strict_maxperm=0
vmo -p -o strict_maxclient=1
vmo -p -o lru_file_repage=0
vmo -r -o page_steal_method=1
chdev -l sys0 -a 'minpout=4096 maxpout=8193'
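Before applying the changes, the current values can be compared against the recommended ones. A minimal sketch, assuming the output of 'vmo -a' has been captured to a file of 'name = value' lines (the helper name is illustrative):

```shell
# Sketch: flag vmo tunables that deviate from the recommended values.
# Input: a file of "name = value" lines, e.g. captured with: vmo -a > vmo.out
check_vmo() {
  awk '
    BEGIN {
      want["maxperm%"] = 90;  want["minperm%"] = 3
      want["maxclient%"] = 90; want["strict_maxperm"] = 0
      want["strict_maxclient"] = 1; want["lru_file_repage"] = 0
      want["page_steal_method"] = 1
    }
    # lines look like: "minperm% = 3"
    $2 == "=" && ($1 in want) && $3 != want[$1] {
      printf "%s is %s, expected %s\n", $1, $3, want[$1]
    }
  ' "$1"
}
```

Running `vmo -a > vmo.out; check_vmo vmo.out` on the server would then list only the tunables that deviate.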
The asynchronous I/O feature allows a program to initiate I/O and to continue running useful work,
while other I/O operations are carried out in parallel by the operating system. Because Oracle often
requires multiple server and user processes at the same time, it takes advantage of asynchronous I/O
feature to overlap program execution with I/O operations. The asynchronous I/O feature is used with
Oracle on the AIX operating system to improve system performance.
Use 'ioo -a | grep aio' and 'ioo -a -F | grep path' to cross-check the current settings of the below
parameters. If any values deviate from the settings below, amend them.
Note: AIX 6.1 has a new implementation of the asynchronous I/O kernel extension and, in most
cases, does not require any additional tuning.
aio_active = 0
aio_maxreqs = 65536
aio_maxservers = 30
aio_minservers = 3
aio_server_inactivity = 300
posix_aio_active = 0
posix_aio_maxreqs = 65536
posix_aio_maxservers = 30
posix_aio_minservers = 3
posix_aio_server_inactivity = 300
aio_fastpath = 1
aio_fsfastpath = 1
posix_aio_fastpath = 1
posix_aio_fsfastpath = 1
rfc1323 = 1
sb_max = 4M
tcp_mssdflt = 1024
ipqmaxlen = 512
tcp_sendspace = 65536
tcp_recvspace = 65536
udp_sendspace=65536
udp_recvspace=65536
no -L ipsendredirects 1
no -L ipsrcrouteforward 1
no -L ipsrcrouterecv 1
no -L ipsrcroutesend 1
lsattr -EL sys0 iostat false
lsattr -EL sys0 log_pg_dealloc false
lsattr -EL sys0 maxbuf 20
lsattr -EL sys0 maxmbuf 0
lsattr -EL sys0 maxuproc 16384
3. Recommended AIX Settings for Oracle DB
SMT is a feature of POWER5 (or later) systems running AIX 5.3 or IBM i5/OS® Version 5, Release 3
or newer releases of these operating systems, including AIX 6.1. SMT allows two instruction streams to
share access to the execution units on every clock cycle. Instructions are fetched and dispatched based
on the available resources at execution time, allowing for the interleaving of instruction streams to
improve concurrent instruction execution and for better use of hardware resources. The operating
system abstracts each instruction path as a separate processor (also referred to as an SMT thread of
execution). The two instruction paths appear as two logical processors, one for each instruction stream.
For example, a server with four physical processors and SMT enabled has eight (SMT2) or sixteen (SMT4)
logical processors as far as the operating system or Oracle Database is concerned. Oracle processor
licensing is based on the number of physical processors used, which in this case is four.
# smtctl -m on
To further reduce the risk of memory overcommitment causing delays in process scheduling (which may
cause evictions), the following Oracle and AIX software versions should be used.
In Oracle RAC 10gR2 and 11gR1, several critical processes run with a special scheduling policy and
priority. You should not change the priority of these processes, but it is useful to understand which
processes have special scheduling requirements so that you can monitor them to make sure the system
is properly configured to allow them to run with the proper settings.
> oprocd.bin will have a priority (PRI) of '0' and a scheduling policy (SCH) of '2'
> ocssd.bin should have a priority (PRI) of '0' and a scheduling policy (SCH) of '-'
> All 'ora_lms' and 'ora_vktm' processes should have a priority (PRI) of '39' and a scheduling policy (SCH)
of '2'
Priority and scheduling policy can be verified by running the following command at the oracle server,
Table 1. Correct values for Oracle 10.2.0.4 and 11.1.0.6 (or newer) releases
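One possible form of this check, assuming the AIX `ps -o` format keywords `pid`, `pri` and `sched` are available on the target release, is a pipeline such as:

```shell
# Sketch (AIX-specific; the ps format keywords are an assumption): list the
# priority (PRI) and scheduling policy (SCH) of the clusterware/RAC processes.
ps -efo "pid,user,pri,sched,args" | egrep "oprocd|ocssd|ora_lms|ora_vktm" | grep -v egrep
```

The reported PRI/SCH values can then be compared against Table 1.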
If the priority and scheduling policy of the oprocd and ocssd.bin processes are set incorrectly, verify that
the Oracle Clusterware user ID has the proper capabilities set (CAP_NUMA_ATTACH,
CAP_BYPASS_RAC_VMM and CAP_PROPAGATE). The CAP_NUMA_ATTACH capability is required
because it gives a non-root process the authority to increase its priority (CAP_NUMA_ATTACH also
provides other capabilities, but because oprocd and ocssd.bin do not require them, this paper does not
discuss them). The CAP_PROPAGATE capability propagates the parent process's capabilities to the child
processes. The CAP_BYPASS_RAC_VMM capability is not related to scheduling, but is required so that
the oprocd and ocssd.bin processes can pin memory (CAP_BYPASS_RAC_VMM also authorizes other
capabilities that this section does not discuss).
To check existing capabilities for a user whose user ID is oracle, see the following example:
To set the capabilities for that same user, see the following example:
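The check and assignment can be sketched with the standard AIX user-administration commands, using the user name 'oracle' as in the text:

```shell
# Check the capabilities currently assigned to the oracle user:
lsuser -a capabilities oracle

# Set the three required capabilities for the oracle user (run as root):
chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
```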
If the priority and scheduling policy of the LMS and VKTM processes are not correct, verify that the
${ORACLE_HOME}/bin/oradism executable process is owned by root and has the owner S bit set.
The new 64 KB pages are preferred over 16 MB pages, and it is recommended that you use them instead
of 16 MB pages on systems that support both page sizes. This is because 64 KB pages provide most of
the benefit of 16 MB pages, without the special tuning requirements of 16 MB pages. If the memory is
not properly tuned for 16 MB pages, the performance can be worse.
To use 64 KB pages for Oracle text, data and stack, set the loader control variable using the below
command:
$ export LDR_CNTRL=DATAPSIZE=64K@TEXTPSIZE=64K@STACKPSIZE=64K
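Alternatively, the page-size preference can be embedded in the executable itself; a sketch using the AIX `ldedit` command (the binary path is an assumption):

```shell
# Persistently request 64 KB text, data and stack pages for the Oracle
# binary by editing its XCOFF header (AIX-specific; path is illustrative).
ldedit -b textpsize=64K -b datapsize=64K -b stackpsize=64K $ORACLE_HOME/bin/oracle
```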
4. Recommended HP-UX Settings for T24 Application Server
Below is the HP-UX kernel parameter list specific to the T24 application. All these parameters apply to
HP-UX 11i v2 or above.
ksi_alloc_max 32768
max_thread_proc 4096
maxdsiz 1 GB
maxdsiz_64bit 4 GB
maxssiz 256 MB
maxssiz_64bit 2 GB
maxtsiz 512 MB
maxtsiz_64bit 2 GB
maxuprc ((nproc*9)/10)
msgmap (msgmni+2)
msgmni nproc
msgtql nproc
nfile 8192
nflocks 65536
ninode (8*nproc+2048)
nkthread (((nproc*7)/4)+16)
semmap (semmni+2)
semmni (nproc*2)
semmns (semmni*2)
semmnu (nproc-4)
semvmx 32767
shmmni 512
shmseg 120
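On HP-UX 11i v2 and later, these kernel tunables can be inspected and set with `kctune`; a brief sketch using values from the list above (some tunables only take effect after a reboot):

```shell
# Inspect the current values of a few of the listed tunables:
kctune max_thread_proc maxdsiz_64bit nfile nflocks

# Apply recommended values:
kctune max_thread_proc=4096
kctune nflocks=65536
```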
5. Recommended ORACLE Settings for T24
Below are the recommended parameters for Oracle with respect to T24 application.
db_cache_size 4G
db_keep_cache_size 2G
filesystemio_options SETALL
cursor_sharing SIMILAR
open_cursors 5000
session_cached_cursors 5000
trace_enabled FALSE
shared_pool_size 2G
undo_management AUTO
undo_retention 3600
db_flashback_retention_target 0
optimizer_mode all_rows
optimizer_index_cost_adj 1
optimizer_index_caching 50
query_rewrite_enabled TRUE
query_rewrite_integrity TRUSTED
fast_start_mttr_target 1800
Setting MEMORY_TARGET (11g Automatic Memory Management) will take care of allocating memory for
the SGA and PGA on its own. You should reset SGA_TARGET and PGA_AGGREGATE_TARGET to zero for
the new 11g parameters to take effect.
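A sketch of moving to 11g Automatic Memory Management (the MEMORY_TARGET size shown is an example only; SPFILE changes take effect after an instance restart):

```shell
# Illustrative sqlplus session switching to Automatic Memory Management.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET memory_max_target=8G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=8G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target=0 SCOPE=SPFILE;
EOF
```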
Together with the above parameter changes, the below recommendations have also been suggested:
alter system set event = '44951 trace name context forever, level 1024' scope=spfile;
There are two recommended performance settings for LOBs, namely CACHE and RETENTION. It is also
recommended to run the scripts after database conversion.
1) For performance benefits, it is recommended that the LOB segments are set to CACHE, which gives
better read/write performance than the NOCACHE option.
2) With the CACHE option, LOB data reads show up as the wait event 'db file sequential read'; writes are
performed by the DBWR process.
3) The CACHE option is not the default setting for LOBs, so after the initial or subsequent
population of the database, and when any new tables are created, a script needs to be run to
correct the settings. The script can be found in the Appendix 1 section.
LOB RETENTION Parameter
Consistent Read (CR) on LOBs uses a different mechanism than that used for other data blocks in Oracle.
Older versions of the LOB are retained in the LOB segment and CR is used on the LOB index to access
these older versions (for in-line LOBs, which are stored in the table segment, the regular UNDO
mechanism is used).
1) Time-based retention using the RETENTION keyword is preferred; this specifies how long older
versions are to be retained.
2) Example: The UNDO tablespace should be configured to autoextend, and the recommended initial
UNDO_RETENTION setting is 3600 (1 hour). This should be monitored and increased if necessary.
3) The RETENTION parameter setting for LOBs is not retained when data is loaded via an export/import
or DATAPUMP export/import; therefore, after populating a database or after an import, a script
needs to be run to correct the settings. The script can be found in the Appendix 2 section.
REM Code to generate LOB Cache Script
REM
set head off echo off feed off ver off
spool cache_lobs.sql
select 'ALTER TABLE ' || table_name || ' MODIFY LOB (' || column_name || ') (CACHE);'
from user_lobs;
spool off
REM **************************************************
REM Code to generate LOB Retention Script
REM
spool lob_retention.sql
select 'ALTER TABLE ' || table_name || ' MODIFY LOB (' || column_name || ') (RETENTION);'
from user_lobs;
spool off
set head on echo on feed on ver on
REM **************************************************
# SQLNET.ORA
NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
TCP.NODELAY=YES
DEFAULT_SDU_SIZE=32767
RECV_BUF_SIZE=32767
SEND_BUF_SIZE=32767
# TNSNAMES.ORA
DEMO =
(DESCRIPTION =
(SDU=32767)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 11.11.1.111)(PORT = 1521))
(SEND_BUF_SIZE=32767)
(RECV_BUF_SIZE=32767)
(TCP.NODELAY=YES)
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DEMO)
)
)
A discussion was initiated with the bank to understand the advantages and disadvantages of GPFS
versus ASM, and it was finalized to use ASM rather than GPFS.
The bank has chosen ASM for the Oracle RAC (multi-server) implementation; it is recommended to follow
the ASM implementation best-practice document provided by Oracle. Below are a few
recommendations/highlights specific to TECHCOM.
- Have at least 4 diskgroups (redo + undo, datafiles, indexdatafiles and archive + flashback).
- Have a constant LUN size of 20 GB for redo + undo log files. It is highly recommended to use 146 GB
15K RPM disks to create the LUNs for REDO + UNDO.
- Create 6 to 12 redo log file groups (with 1 member in each group) of size 512 MB (or 1 GB if
there are multiple log switches).
- Create an UNDO file of size 20 GB (minimum); if the retention is increased, then you need to also
increase the size of the space allocated.
- Have a constant LUN size of 100 GB for the datafiles, indexfiles and archive + flashback diskgroups.
- The number of LUNs depends directly on the initial DB size and the projected growth rate. For
TECHCOM, to start with, it is advised to have 400 GB for datafiles (+DATA diskgroup), 200 GB for
indexfiles (+INDEX diskgroup) and 200 GB for archive files (+ARCHIVE diskgroup).
- Define/create datafiles (.dbf) to grow to a maximum size of 25 GB.
- Set the AU (Allocation Unit) to 4 MB.
- It is recommended to use RAID 10 disks for all the data storage.
- Set the stripe size at the storage to 128K or 512K depending on the performance of the storage
as well as the recommendation from the corresponding vendor.
- If the storage has multiple controllers, then priority/high bandwidth has to be set for accessing
production disks over any other environment that shares the same storage.
The table below gives a high-level recommendation of how to set up ASM specific to TECHCOM.
Note: Even though the initial Oracle DB size is around 250 GB, it is strongly recommended to pre-allocate
space for around 4 TB (which is already reserved for the production DB), with LUNs created from
each pool (disk array) as highlighted below.

Diskgroup    Files configured   LUN Size   No. of LUNs   Size to start with   Pool size from storage
+REDO        REDO + UNDO        20 GB      4             80 GB                146 GB * 15K RPM Disk (Separate Disk Array)
+DATA        DATA               100 GB     24            400 GB               2.5 TB Disk Pool (Pool A)
+INDEXDATA   INDEX              100 GB     6             200 GB               0.5 TB Disk Pool (Pool B)
+ARCHIVE     ARCHIVE            100 GB     12            200 GB               1 TB Disk Pool (Pool C)
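Diskgroup creation with the recommended 4 MB allocation unit can be sketched as follows; the disk device paths are illustrative, and external redundancy assumes the RAID 10 protection is provided by the storage layer:

```shell
# Illustrative creation of the +DATA diskgroup with a 4 MB AU.
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/rhdisk10', '/dev/rhdisk11'
  ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.2';
EOF
```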
6. Changes in ORACLE DB for T24
Partition all F_JOB_LIST_NN tables using the HASH method to have 16 partitions (each partition should
have its own _PK object). It is also advised to create partitioned tables for the following tables, which are
expected to hold large volumes of records, as well as tables that grow at an exponential rate. This is to
avoid any DB contention or concurrency issues due to a very high hit rate.
a) FXXX.ACCOUNT
b) FXXX.CUSTOMER
c) FXXX.LIMIT
d) FXXX.EB.CONTRACT.BALANCES
e) FXXX.ACCT.ACTIVITY
f) FXXX.RE.STAT.LINE.BAL
g) FXXX.RE.STAT.LINE.MVMT
h) F.DE.O.HEADER
i) F.DE.I.HEADER
j) F.DE.O.HANDOFF
k) F.DE.I.MSG
l) F.DE.O.MSG
m) FXXX.STMT.ENTRY
n) FXXX.STMT.ENTRY.DETAIL
o) FXXX.CATEG.ENTRY
p) FXXX.CATEG.ENTRY.DETAIL
q) FXXX.RE.CONSOL.SPEC.ENTRY
r) FXXX.RE.SPEC.ENTRY.DETAIL
s) FXXX.FUNDS.TRANSFER$HIS
t) FXXX.TELLER$HIS
u) FXXX.PD.PAYMENT.DUE$HIS
v) FXXX.PD.BALANCES.HIST
Below are sample CREATE TABLE/INDEX commands to create a table and index with the HASH partition
method. Samples have been provided for both BLOB and XMLTYPE columns. Use the 'jstat' command to
find the type of the table.
Note: It is better to partition the tables even before the data migration begins, which would also benefit
the data INSERT/UPDATE operations.
CREATE TABLE "TF_XML_TABLE_1" ( "RECID" VARCHAR2(255) NOT NULL, "XMLRECORD" "XMLTYPE" )
TABLESPACE T24TESTTINDUNGDATA XMLTYPE COLUMN XMLRECORD STORE AS LOB_TF_XML_TABLE_1(
TABLESPACE T24TESTTINDUNGDATA ENABLE STORAGE IN ROW CACHE) COMPRESS
PARTITION BY HASH ("RECID" )
(
PARTITION "TF_XML_TABLE_1_P01" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P02" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P03" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P04" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P05" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P06" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P07" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P08" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P09" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P10" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P11" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P12" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P13" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P14" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P15" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_XML_TABLE_1_P16" TABLESPACE "T24TESTTINDUNGDATA" ) ;
CREATE TABLE "TF_JOB_LIST_1" ( "RECID" VARCHAR2(255) NOT NULL, "XMLRECORD" CLOB )
TABLESPACE T24TESTTINDUNGDATA LOB (XMLRECORD) STORE AS LOB_TF_JOB_LIST_1( TABLESPACE
T24TESTTINDUNGDATA ENABLE STORAGE IN ROW CACHE) COMPRESS
PARTITION BY HASH ("RECID" )
(
PARTITION "TF_JOB_LIST_1_P01" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P02" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P03" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P04" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P05" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P06" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P07" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P08" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P09" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P10" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P11" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P12" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P13" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P14" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P15" TABLESPACE "T24TESTTINDUNGDATA",
PARTITION "TF_JOB_LIST_1_P16" TABLESPACE "T24TESTTINDUNGDATA" ) ;
Following are the recommendations that need to be applied to the T24 Oracle DB to avoid the overhead
created on the T24 application due to object contention at Oracle.
Below is the list of objects on which Oracle is to be allowed to do multiple concurrent INSERT/UPDATE
operations on the same object blocks. You can extend this list whenever you find an object with high
write contention at Oracle. You might use OEM (Oracle Enterprise Manager) to understand the Oracle
behavior.
Below is the list of work files or transition tables in T24 whose data content is transient in nature.
You can extend this list with any other objects or tables in T24 (or from local development) that fall
under the same category.
Below is the commonly used list of external indexes that would be created to improve both Online and
COB performance. The list below is just a basic list; it should be further enhanced to improve the COB or
Online performance of the T24 system based on the detailed analysis conducted later as a separate
exercise.
FXXX.AC.CASH.POOL.LINK    NEXT.RUN.DATE          AC.CP.UPDATE.DATES
FXXX.PD.PAYMENT.DUE       CUSTOMER               EOD.UPDATE.PDLD.TCB
FXXX.PD.PAYMENT.DUE       CO.CODE
F.TEC.ITEMS               ITEM.CLASSIFICATION
F.TEC.ITEMS               STATUS
FBNK.AUTO.TRANSFER.TCB    CATEGORY               EOD.TAIKHOAN.TAPTRUNG.TCB
FXXX.DX.TRANSACTION       CUSTOMER               DX.COB.CO.SYSTEM
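As an illustration of creating one such external index: the exact expression for extracting a T24 field from XMLRECORD depends on the TAFC/DCD mapping of the table, so the XPath and the index name below are purely hypothetical:

```shell
sqlplus / as sysdba <<'EOF'
-- Hypothetical: index the CUSTOMER field of FXXX.PD_PAYMENT_DUE, assuming
-- the field is exposed at XML path /row/c2 in XMLRECORD.
CREATE INDEX "FXXX"."PD_PAYMENT_DUE_CUST_IX"
  ON "FXXX"."PD_PAYMENT_DUE" ( EXTRACTVALUE("XMLRECORD", '/row/c2') );
EOF
```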
7. TAFC and DCD Recommendation
It is recommended to install TAFC (TEMENOS Application Framework C) release R10 SP11 to avail of the
performance benefits provided by the DCD.
export JBASE_TIMEZONE=Asia/Ho_Chi_Minh
Set the above variable so that the TAFC timezone setting aligns with the OS settings.
Below are the recommended DCD (Direct Connect Driver) variables that need to be set in .profile to take
effect for all users. Care should also be taken to add the same in 'environment.vars' in the case of
tcserver, as well as in 'jConsrv.vars'.
export JEDI_XMLDRIVER_PREFETCH_ROWS=1000 - Informs the DCD to fetch 1000 records on every
fetch from the cursor, thereby improving the speed of SELECT statements issued on large tables; the
default value is 500.
Both the above variables are set to improve the performance of the DCD (and in turn T24) during both
Online and COB.
export JEDI_XMLDRIVER_TRACE=1 - Will record a detailed log of DCD activity in 'XMLdriver.log'
export JEDI_XMLDRIVER_PIDLOG=1 - Will create an individual log for each process/PID
Initialization of the 'jsh' process or T24 login can be sped up by removing 'empty' directories from
't24lib' and 't24bin'. The following actions can be implemented to improve the overall login process
for T24.
1> Delete the 'empty' directories in 't24lib' and 't24bin'. Make use of the attached excel sheet
'DeleteList_t24lib_t24bin.xls'.
2> Remove unwanted path references in the JBASE_PATH and JBCOBJECTLIST environment variables.
Define only the required paths where TAFC has to look for 'binaries' and 'libraries'.
3> Do not share the 'libraries' and 'binaries', and avoid unwanted NFS mount points.
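The empty-directory cleanup in step 1> can be sketched with a small script (the helper name and paths are illustrative) that lists empty subdirectories so they can be reviewed before deletion:

```shell
# List empty subdirectories under the given directories so they can be
# reviewed before removal.
list_empty_dirs() {
  find "$@" -type d -empty -print
}
# e.g. list_empty_dirs /t24/bnk/t24lib /t24/bnk/t24bin
```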
8. T24 Multi Application Server Setup
Create a group with a common gid among all the T24 application servers, and create user ids with a
common uid.
Install TAFC on all T24 application servers with a common installation path, e.g. '/usr/TAFC/R10'. After
installing TAFC, make sure that licenses are installed and configured. For more details, look for
'jInstallKey' and refer to the TAFC installation guide.
Install the Oracle SQL client on all T24 application servers with a common installation path, e.g.
'/u01/app/oracle/product/<<release>>/client'.
Define the appropriate details of the ORACLE DB server configuration in tnsnames.ora. Make sure that
the recommended parameters are also set correctly in sqlnet.ora.
Mount the entire bnk directory on a cluster file system commonly shared across all T24 application
servers.
Configure the home directory of the unix user created for T24 with the appropriate bnk.run path
details.
Note: From the initial testing conducted on HP CFS (Cluster File System), it has been observed that CFS is
3 to 5 times slower than a local file system. Hence it is recommended to involve HP to investigate and
improve its performance to an acceptable level. Failing to obtain performance at this level would lead to
performance issues in T24 during Online as well as COB (during the reporting stage).
8.5 .profile setup
Since the entire bnk directory is shared across all T24 application servers, .profile might require a few
changes.
Port ranges for T24 processes are allocated by TAFC based on the JBCPORTNO definition
available in .profile. It is recommended to define different port ranges on each T24
application server. This can be quickly achieved with the help of a unix script.
Sample (server names and port numbers are illustrative):
case `hostname` in
  serverA) export JBCPORTNO=50100 ;;
  serverB) export JBCPORTNO=50200 ;;
esac
Although TAFC applications have been successfully deployed over multiple application servers,
certain idiosyncrasies related to operating systems, performance and recovery of file locks over
networked file systems have prompted the requirement to provide an alternative lock strategy
for Multiple Application Server deployment. As such, the jBASE Distributed Lock Service is now
offered as an alternative mechanism, to the networked file system, by which locks can be
propagated between servers.
The Distributed lock service is provided via a daemon process executing on Linux/Unix systems
or as an installable service for the Windows platform. The Distributed lock service can be
initiated using either the lower case form of the executable, ‘jdls’, or the TAFC capitalized
convention for process daemons, jDLS, (jBASE Distributed Lock Service).
.profile settings
In .profile, set the JDLS variable to point to the IP address of the server where jDLS is installed; this
variable is not required on the application server where the jDLS service itself is running. This can be
achieved using a unix script.
Sample script (the server name is illustrative; see the jDLS reference guide for the exact JDLS variable
syntax):
case `hostname` in
  serverA) ;;
  *) export JDLS="SERVER=serverA" ;;
esac
Below is a sample script, which can be used by the administrator, to start jDLS.
export TAFC_HOME=/usr/tafc.sp11/R10
export JBCRELEASEDIR=$TAFC_HOME
export JBCGLOBALDIR=$TAFC_HOME
export JAVA_HOME=/opt/java6
export PATH=$JAVA_HOME/bin:$PATH
export JRELIB=/opt/java6/jre/lib:/opt/java6/jre/lib/platform
export LD_LIBRARY_PATH=$TAFC_HOME/lib:$JBCDEV_LIB:$JRELIB:${LD_LIBRARY_PATH:-/usr/lib}
export PATH=$TAFC_HOME/bin:$PATH
jdls -k
jdls -K
jdls -ibD
Below is a sample script to stop jDLS.
export TAFC_HOME=/usr/tafc.sp11/R10
export JBCRELEASEDIR=$TAFC_HOME
export JBCGLOBALDIR=$TAFC_HOME
export JAVA_HOME=/opt/java6
export PATH=$JAVA_HOME/bin:$PATH
export JRELIB=/opt/java6/jre/lib:/opt/java6/jre/lib/platform
export LD_LIBRARY_PATH=$TAFC_HOME/lib:$JBCDEV_LIB:$JRELIB:${LD_LIBRARY_PATH:-/usr/lib}
export PATH=$TAFC_HOME/bin:$PATH
jdls -k
jdls -K
Note: It is recommended to start and stop jDLS as the 'root' user in order to avoid accidentally killing
any jDLS client processes. By default jDLS runs on port 50002; care should be taken if there is a network
firewall between the application servers and the jDLS server.
For more details of jDLS and its working behavior, refer to the jDLS reference guide.
8.6 ORACLE DCD configuration
For connecting to oracle database from the T24 application, please look at the sample configuration
shown below.
8.7 T24 Desktop Connectivity
Since TECHCOM still intends to use the T24 Desktop client as the T24 client application connecting to
the T24 Multi Application Setup, two proposals were given to the customer.
Using a hardware load balancer should be a very straightforward implementation, as the client
application does not require any information other than the connecting IP (physical IP or virtual IP). If
domain names are being used, then the 'hosts' file on each user desktop needs to be configured to
resolve the name to the IP.
For the second option, TEMENOS has developed a client (jDrac) and server processes (jConman +
jConsrv) to achieve load balancing for the T24 Desktop client. The T24 Desktop client is already built
with jDrac, a client program that can interact with jConman (the server process), which has in-built logic
to do load balancing among the multiple T24 servers (jConsrv) that it is configured with. For more details
on how to configure the T24 Desktop client, jConman and TCServer (jConsrv), please refer to 'jConman
Workload Setup.doc'.
After implementing T24 on multiple servers, it is advised to review the TSA.SERVICE records and define
them appropriately. The TSM record should be updated with appropriate details in 'SERVER.NAME' and
'WORK.PROFILE' accordingly. It is also advised to review all 'AUTO' services, with 'WORK.PROFILE' to
restrict the number of agents that would be required and 'SERVER.NAME' to restrict the service to one
server where required. For more details on service definition, please refer to the 'T24 TSA Service
Documentation'.
9. T24 Application Server and Oracle DB Server DMZ Setup
In the real world, the security policy/practice is to deploy the application server and DB server in
separate networks, restricting user access. A discussion was initiated on the best practice for how to
implement the T24 application server and Oracle DB server.
Deploying the T24 application server and Oracle DB server within the same DMZ gives better
performance/throughput of the system and a better user experience. Separating the two layers would
create overhead because of the firewall, which would considerably reduce the throughput of the system
altogether.
TEMENOS does not insist on or recommend deploying both the application and DB server within the
same DMZ, but leaves the decision to the TECHCOM IT department. On the other hand, it has to be
understood that deploying the application and DB in separate DMZs would create performance
degradation, and the bank has to understand the pros & cons and prepare accordingly.
Initial performance impact testing conducted by introducing a firewall revealed a threefold drop in the
throughput of the T24 application during COB.
10. Mutex Contention in Oracle 11G R2 on AIX 6.1
Performance testing of Oracle 11G R2 on AIX 6.1 revealed a bottleneck due to 'latch cache lock' or
'latch cache pin' waits caused by 'X - mutex contention'. The IT team has been advised to raise a priority
call with Oracle Support (and in turn with IBM if required) to analyze and resolve this issue. The above
contention was not allowing the system to scale when a SELECT was executed on the same table/object
from multiple concurrent sessions.
11. T24 Online and COB performance tuning
As this is a wide topic to cover, and moreover was not part of this exercise, only a few straightforward
recommendations and quick wins were passed to the IT operations team, who were in the process of
testing and comparing the performance of jBASE against the Oracle DB server.
It is recommended that the bank takes up a separate engagement with TEMENOS to analyze and deliver
recommendations to improve the overall T24 system on the target Oracle DB server.
12. Multiple T24/TAFC Environment Setup (test areas)
Implementing T24 on Oracle with the stubless architecture overcomes the need to hold stub files
(pointers to TABLES that exist in the external DB) in bnk.data, thereby avoiding the NFS mount point in
the case of a multi-application setup. However, deploying multiple instances/areas of T24 with a
common TAFC runtime installed within the same server would end up with a common locking
server/mechanism. In other words, even though a RECORD LOCK is requested for REC1 on TABLE1 in
AREA1 only, TAFC would still treat it as the same request across areas, thereby restricting concurrent
access to the same records in multiple environments within the same server.
To avoid this situation, the following methods can be used to overcome the reported problem.
1) Use the JBASE_DATABASE variable to differentiate multiple environments within the same server, so
that the TAFC runtime can treat RECORD LOCK requests from multiple environments as independent
requests.
2) Use the JBASE_UNIX_LOCKDIR variable to point to a different location for each environment, so that
the TAFC runtime can handle the RECORD LOCK requests of each environment separately.
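The two settings above can be sketched in each environment's .profile; the area names and lock-directory paths are examples only:

```shell
# In AREA1's .profile:
export JBASE_DATABASE=AREA1              # distinct name per environment
export JBASE_UNIX_LOCKDIR=/t24/locks/area1   # distinct lock directory per environment

# In AREA2's .profile:
# export JBASE_DATABASE=AREA2
# export JBASE_UNIX_LOCKDIR=/t24/locks/area2
```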
13. ARC-IB Stress test
Another major exercise of the engagement was to run a stress test for the ARC-IB system that is
expected to go live soon. The objectives of the exercise were to (a) simulate 1000 concurrent users,
(b) find any bottlenecks in the scaling of the system and (c) review and pass on any recommendations
that would be adopted for further performance improvements.
The stress test was conducted using ‘jMeter’, an open-source tool. The details of the tool and of the
script capture and replay mechanism are explained in a separate document; for more details please
refer to ‘ARC-IB jMeter TCB Doc.docx’.
The initially implemented architecture revealed a difference between the ARC-IB architecture
proposed by TEMENOS and the architecture implemented by TECHCOM IT. A discussion was held with
the IT heads to finalize the implementation architecture; the highlights are below.
3> The ARC-IB WAS server, Tomcat authentication server and RSA authentication server should reside
in the same DMZ (to avoid the firewall)
4> DNS settings are to be configured correctly at OS level for all the above servers, to avoid any
overhead during request and response authentication
Due to this initial difference in the architecture implementation, the stress test was conducted on a
Windows WAS server (the target is on RHEL) plus the Tomcat authentication server and RSA
authentication server.
The following were the agreed scripts captured to complete and conclude the stress test exercise.
d. Log off
3. Saving customer, open a term deposit
a. Login to ARC
b. Open a Term Deposit with all default values of the version
c. Log off
The initial stress test was conducted with 100 concurrent users, but the test failed due to the
following bottlenecks at various tiers of the application.
1> WebSphere (WAS) did not scale to handle multiple concurrent threads or concurrent connections
3> The T24 login script failed to scale due to SELECTs fired on the F.OS.XML.CACHE table from multiple
sessions concurrently
4> The T24 login script failed to scale due to SELECTs fired on the FXXX.ACCOUNT table to determine
USER access and credentials
The next test was conducted after addressing all the bottlenecks referred to above.
1> WebSphere – the thread pool parameters were changed to allow the maximum number of
concurrent threads to be created in the WAS server to take requests to the T24 application server (via
TCServer). It was recommended to set the maximum to 150 threads and the minimum to 20, with the
default also set to 150. Care should be taken, as increasing the thread pool size will increase the
number of concurrent requests processed in T24. It is advised to keep this at a reasonable number, so
as to avoid overhead on the T24 application server, which would also be processing internal users’
requests.
The recommended settings above mean that WAS would take a maximum of 150 concurrent requests
to T24 via TCServer at any single point in time.
Note: make sure the inactiveTimeout period of the threads is increased to a reasonable value to avoid
overhead on the WAS server; set it to at least 10 minutes.
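For reference, in traditional WebSphere these values map onto a thread pool entry in the server's server.xml; the fragment below is an illustrative sketch only (the pool name is assumed, and inactivityTimeout is in milliseconds, so 10 minutes is 600000):

```xml
<threadPool name="WebContainer"
            minimumSize="20"
            maximumSize="150"
            inactivityTimeout="600000"
            isGrowable="false"/>
```

The same values can equally be set through the WAS administrative console under the server's thread pool settings.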
2> TCServer was upgraded from 1.5.2_5 to 1.5.2_30 for better performance, and MIN_SESSIONS and
MAX_SESSIONS were configured accordingly. In line with the WAS recommendation above,
MIN_SESSIONS was set to 20 and MAX_SESSIONS to 150.
Care should be taken that any increase in the thread pool size at WAS is aligned with the MIN and
MAX session settings for the ADAPTER at TCServer. It is also recommended to set SCALEDOWNTIME to
600 (10 minutes) to avoid losing any inactive session process (tSS) in T24.
Note: make sure the TIMEOUT setting is set appropriately (at least 120 seconds – 2 minutes).
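Taken together, the TCServer ADAPTER settings discussed above would look like the following sketch; the parameter names are those used in this document, while the exact file layout depends on the TCServer version deployed:

```text
MIN_SESSIONS  = 20    # aligned with the WAS thread pool minimum
MAX_SESSIONS  = 150   # aligned with the WAS thread pool maximum
SCALEDOWNTIME = 600   # 10 min; avoids losing inactive tSS processes in T24
TIMEOUT       = 120   # 2 min minimum
```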
3> A core issue was raised to avoid the SELECT statements during the LOGIN phase of the USER, to
avoid contention.
4> A core issue was raised to avoid the SELECT statements during the LOGIN phase of the USER, to
avoid performance degradation. Along with that, it was also suggested to make a local change to avoid
the SELECT statements while achieving the core functionality via local code.
After implementing all the above recommended changes, the stress test was resumed with 300
concurrent users pumped in within 60 seconds.
Below is the captured test result page from the ‘jMeter’ tool. The throughput was calculated at 3.2
transactions per second.
Various combinations of ramp-up speed were used to create different intensities of load on the
system during the login script; altering the ‘Ramp Up Speed’ of the script made this possible.
11 sec to complete 100 user logins (5 parallel sessions = 500), load menu and logout. (Here the Ramp
Up Speed was set to 0, meaning all 100 threads would be launched in the same second and would post
the txn.)
7 sec to complete 100 user logins (5 parallel sessions) and load menu. (Here too the Ramp Up Speed
was set to 0, meaning all 100 threads would be launched in the same second and would post the txn.)
30 sec to complete 100 user logins (5 parallel sessions), load menu and logoff – without overhead on
the system. (Here the Ramp Up Speed was set to 30, meaning all 100 threads would be launched
within 30 seconds at definite intervals and would post the txn.)
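The ramp-up behaviour can be illustrated with a small shell sketch. With the numbers from the test above (100 threads over a 30-second ramp-up), jMeter starts one thread every 30/100 = 0.3 s; the script only prints the resulting schedule for a few threads, it does not run jMeter:

```shell
#!/bin/sh
# Print the launch offset of selected threads for a jMeter-style ramp-up:
# thread i (0-based) starts at i * (rampup / threads) seconds.
THREADS=100
RAMPUP=30
for i in 0 1 50 99; do
    # compute the offset in seconds, to one decimal place
    offset=$(awk -v i="$i" -v r="$RAMPUP" -v t="$THREADS" 'BEGIN { printf "%.1f", i * r / t }')
    echo "thread $i starts at ${offset}s"
done
```

A ramp-up of 0 collapses every offset to 0 s, which is the "all 100 threads launched in the same second" case described above.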
14. Follow Up
2 Review the ARC-IB field dropdown ENQs and replace them with NOFILE ENQs; also avoid full table
scans by SELECT statements – BANK
3 The ARC-IB TABBED screen is to be launched menu-driven (to avoid the 1st page being launched by
default) – TEMENOS
10 The Bank to engage TEMENOS for T24 Online and COB performance tuning on Oracle DB – BANK
15. Conclusion
The Bank has to understand and apply the recommendations wherever applicable. ARC-IB
developments done by the Bank need to adhere to TEMENOS standards, both to avoid deviations and
for better performance. The Bank has to review the follow-up items listed above and take the
necessary actions, and also needs to take the initiative to improve the ARC-IB screens for a better user
experience.
The Bank also has to plan a new engagement for ‘T24 Performance tuning on Oracle DB’.