
Is Your Linux System Ready for Informix?
Sanjit Chakraborty, D08
IBM, April 24, 2012

Linux is an operating system that has become very popular over the last several years. The
growing popularity of Linux has influenced businesses to port their Informix databases to
this platform.

If you are a Linux user, is your Linux system optimally configured for Informix? In this
presentation you will see what you need to consider to run Informix optimally on the Linux
platform.
Agenda
• Pre-installation Considerations
• Installation
• Post-installation Considerations
• Reversion

This presentation provides guidelines for installing and managing Informix on the Linux platform.
Pre-installation Considerations
Supported Linux Platforms
• x86 (32-bit edition for Intel Pentium-, Xeon-, and AMD
Athlon-based systems)
• POWER® (IBM eServer™, iSeries™, and pSeries® systems)
• zSeries® (IBM eServer zSeries® systems)
• Intel EM64T (X86-64)
• IA64 (64-bit edition for Intel Itanium-based systems)
• AMD64 (64-bit edition for AMD Opteron- and Athlon64-
based systems)

IBM offers the flexibility and choice to deploy Informix on a wide variety of hardware
platforms and operating systems. The availability of 64-bit computing platforms presents
new possibilities for increased performance of database servers, as well as database
applications. 32-bit platforms have an inherent address-space limitation of 4 gigabytes
(GB). Removal of this 4 GB limit on the address space of database servers allows for the
creation of larger buffer pools, sort heaps, package caches, and other resources that can
consume large amounts of memory. A 64-bit environment and the ability to address more
than 4 GB of memory can greatly enhance the scalability and performance of databases.
The Linux platforms supported by the new Informix are:

•x86 (32-bit edition for Intel Pentium-, Xeon-, and AMD Athlon-based systems)
•POWER® (IBM eServer™, iSeries™, and pSeries® systems)
•zSeries® (IBM eServer zSeries® systems)
•Intel EM64T (X86-64)
•IA64 (64-bit edition for Intel Itanium-based systems)
•AMD64 (64-bit edition for AMD Opteron- and Athlon64-based systems)
Supported Linux Distributions

• Informix MACHINE NOTE


This product has been certified on:

• Red Hat Enterprise Linux ES release 6.0 (Kernel: 2.6.32, Glibc: 2.12)

• SUSE SLES 11 (Kernel: 2.6.27, Glibc: 2.9)

• Asianux 3 SP3 (Kernel: 2.6.18, Glibc: 2.5.49)

• Ubuntu Server Edition 8.04.1 LTS (Kernel: 2.6.24, Glibc: 2.7.10)

There are many different Linux distributions available in the market today. Is Informix
supported on your Linux platform?

The Informix machine notes are the best place to verify this information. Currently Informix
is certified on the four popular Linux distributions listed above. You may use Informix on
other Linux distributions, but IBM has not tested or certified the product on those
distributions.

You can find the Informix Machine Notes at:


IBM Informix 11.70 Information Center > Release Notes > Release, documentation, and
machine notes for IBM Informix
Choose the Right Hardware
• How much physical memory is needed?
• What type and how many CPUs/Cores are required
to support the expected system load?
• What type of application(s) will be run?
• How much CPU power is needed?
• Will the servers be used for HDR, ER or MACH11?
• What type and speed are needed for the networking
components?
• Do you have enough storage space?

Choosing the right hardware is one of the most important steps you can take to optimize
system performance. In most cases, an underpowered system will require the addition of
new hardware later. By selecting the right equipment for the job you will avoid additional
work later. In order to choose the right hardware, you must determine the following:
- How much physical memory do you need? The amount of memory will determine
how well you are able to cache objects.
- What type and how many CPUs/Cores are required to support the expected
system load?
- What type of application(s) will be run?
- How much CPU power is needed?
- Will the servers be used for HDR, ER or MACH11? What type and speed are needed for
the networking components?
- Do you have enough storage space?

Answers to these questions and others will help you decide how to configure the system.
32-Bit or 64-Bit?
32-Bit limitation:
• Limited to 4GB of virtual memory per process
– Max addressable space - not a Linux issue
• Low memory issues
• Physical memory limitations

All can be avoided by running


64-bit Linux & 64-bit Informix

When designing the system, 64-bit hardware and software should be used. Now that
all new hardware is 64-bit capable, there is no point in running in 32-bit mode. The
disadvantages of 32-bit include:

- Limited to 4 GB of virtual memory per process. This limitation is not usually problematic
in Linux, but should be avoided nevertheless.
- Low memory issues. With 32-bit Linux there is a special low-memory region below 896 MB
reserved for kernel operations, page table entries and direct memory access. This memory
can be taxed at times, thus causing process failures.
- Physical memory limitations. With the best 32-bit hardware, physical memory is limited
to 64 GB. This is built into the architecture and cannot be extended.

These issues can easily be avoided by running 64-bit Linux and 64-bit Informix software.
Kernel Version
• Make sure you are using the latest kernel
– Which kernels are installed?
rpm -qa | grep kernel

– Which kernel is currently running?


uname -r

• Ensure proprietary modules in right place


/lib/modules/<kernel-version>/kernel/drivers/addon

Make sure to install the latest kernel for which all proprietary drivers are available and certified/supported.

You may have more than one kernel version installed on the machine. Use the rpm command to find all kernels installed on the system.
Use the uname command to find out the kernel that is currently running.
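As a quick sketch, the two checks can be combined into one script (rpm-based systems assumed; the fallback query for Debian/Ubuntu is noted in a comment):

```shell
#!/bin/sh
# Report the running kernel next to every installed kernel package so
# the two can be compared before an upgrade (rpm-based systems; on
# Debian/Ubuntu query with "dpkg -l 'linux-image-*'" instead).
running_kernel=$(uname -r)
echo "Running kernel: $running_kernel"
if command -v rpm >/dev/null 2>&1; then
    echo "Installed kernel packages:"
    rpm -qa | grep '^kernel' | sort
fi
```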

Note that proprietary drivers are often installed under /lib/modules/<kernel-version>/kernel/drivers/addon. For example, the EMC
PowerPath drivers can be found in the following directory when running the 2.4.21-32.0.1.ELhugemem kernel:

$ ls -al /lib/modules/2.4.21-32.0.1.ELhugemem/kernel/drivers/addon/emcpower
total 732
drwxr-xr-x 2 root root 4096 Aug 20 13:50 .
drwxr-xr-x 19 root root 4096 Aug 20 13:50 ..
-rw-r--r-- 1 root root 14179 Aug 20 13:50 emcphr.o
-rw-r--r-- 1 root root 2033 Aug 20 13:50 emcpioc.o
-rw-r--r-- 1 root root 91909 Aug 20 13:50 emcpmpaa.o
-rw-r--r-- 1 root root 131283 Aug 20 13:50 emcpmpap.o
-rw-r--r-- 1 root root 113922 Aug 20 13:50 emcpmpc.o
-rw-r--r-- 1 root root 75380 Aug 20 13:50 emcpmp.o
-rw-r--r-- 1 root root 263243 Aug 20 13:50 emcp.o
-rw-r--r-- 1 root root 8294 Aug 20 13:50 emcpsf.o

Therefore, when you upgrade the kernel you must ensure that all proprietary modules can be found in the right directory so that the
kernel can load them.

For example, to install the 2.4.21-32.0.1.ELhugemem kernel, download the kernel-hugemem RPM and execute the following command:
# rpm -ivh kernel-hugemem-2.4.21-32.0.1.EL.i686.rpm

Never upgrade the kernel using the RPM option '-U'. The previous kernel should always be available if the newer kernel does not boot or
work properly.
To make sure the right kernel is booted, check the /etc/grub.conf file if you use GRUB and change the "default" attribute if necessary.
Here is an example:
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Enterprise Linux AS (2.4.21-32.0.1.ELhugemem)
root (hd0,0)
kernel /vmlinuz-2.4.21-32.0.1.ELhugemem ro root=/dev/sda2
initrd /initrd-2.4.21-32.0.1.ELhugemem.img
title Red Hat Enterprise Linux AS (2.4.21-32.0.1.ELsmp)
root (hd0,0)
kernel /vmlinuz-2.4.21-32.0.1.ELsmp ro root=/dev/sda2
initrd /initrd-2.4.21-32.0.1.ELsmp.img
In this example, the "default" attribute is set to "0" which means that the 2.4.21-32.0.1.ELhugemem kernel will be booted. If the
"default" attribute would be set to "1", then 2.4.21-32.0.1.ELsmp would be booted.
Necessary Packages Installed
• Check the Informix MACHINE NOTE
Ubuntu Server Edition 8.04.1 LTS (Kernel: 2.6.24, Glibc: 2.7.10).
The following packages have to be installed:

libgcc 4.2.4
libstdc++6 4.2.4
libncurses5 5.6
libpam 0.99

– What packages are installed?


rpm -qa

Do you have all the necessary packages installed to run the Informix product?

Check the Informix MACHINE NOTE


- Ubuntu Server Edition 8.04.1 LTS (Kernel: 2.6.24, Glibc: 2.7.10). The following packages
have to be installed:

libgcc 4.2.4
libstdc++6 4.2.4
libncurses5 5.6
libpam 0.99
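A sketch of a portable package check, assuming either rpm or dpkg is available; the package names follow the machine-notes list, but exact names and versions vary by distribution:

```shell
#!/bin/sh
# Check whether a package is installed, on either rpm- or dpkg-based
# systems; prints "<pkg>: installed" or "<pkg>: not found".
check_pkg() {
    if command -v rpm >/dev/null 2>&1 && rpm -q "$1" >/dev/null 2>&1; then
        echo "$1: installed"
    elif command -v dpkg >/dev/null 2>&1 && dpkg -s "$1" >/dev/null 2>&1; then
        echo "$1: installed"
    else
        echo "$1: not found"
    fi
}
# Packages named in the machine notes (versions omitted here):
for pkg in libgcc libstdc++6 libncurses5 libpam; do
    check_pkg "$pkg"
done
```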
Kernel Parameters
• Informix MACHINE NOTE
– Suggested values might need tuning

SHMMAX: 4398046511104
SHMMNI: 4096
SHMALL: 4194304
SEMMNI: 4096
SEMMSL: 250
SEMMNS: 32000
SEMOPM: 32

– What are the current settings?


/proc/sys/kernel

It's important that your kernel parameters meet the Informix requirements. Check the
Informix machine notes for the recommended kernel parameters. The machine notes
provide suggested values; you may need to set parameters to higher values based on your
activities.

Kernel Parameters from Informix machine notes:

The values of the kernel parameters that were used for testing this
product are given below. These values might need to be tuned depending
on the application and availability of system resources. They
can either be dynamically changed in the /proc file system or are defined
in the kernel sources and can be changed by rebuilding the kernel.

SHMMAX: 4398046511104
SHMMNI: 4096
SHMALL: 4194304
SEMMNI: 4096
SEMMSL: 250
SEMMNS: 32000
SEMOPM: 32
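The current values can be read straight from /proc; a small sketch (parameter names as above, file paths standard on Linux):

```shell
#!/bin/sh
# Print the current shared-memory and semaphore limits from /proc so
# they can be compared with the machine-notes suggestions above.
for p in shmmax shmmni shmall; do
    printf '%-7s %s\n' "$p" "$(cat /proc/sys/kernel/$p)"
done
# SEMMSL, SEMMNS, SEMOPM and SEMMNI share a single file, in that order:
printf '%-7s %s\n' sem "$(cat /proc/sys/kernel/sem)"
# To raise a value at run time (as root), for example:
#   sysctl -w kernel.shmmax=4398046511104
```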

- The value of the kernel parameter "SEMMSL" should be set to at least


Shell
• Korn Shell required for Informix scripts
– ISM
– Alarmprogram

• /bin/ksh
/bin/pdksh (Ubuntu)

The Korn shell is required for the ISM and Informix server alarmprogram scripts. Install it as
/bin/ksh. On Ubuntu Server Edition, pdksh needs to be installed.
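A minimal check for the Korn shell requirement might look like this:

```shell
#!/bin/sh
# Verify that a Korn shell is present at /bin/ksh, as the ISM and
# alarmprogram scripts expect.
if [ -x /bin/ksh ]; then
    echo "/bin/ksh: found"
else
    echo "/bin/ksh: missing (install ksh, or pdksh on Ubuntu Server Edition)"
fi
```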
Global Security Kit (GSKit)
• GSKIT
– Data Encryption
– SSL communication
• Installed part of Informix Server
• Installation location
– /usr/local/ibm/gsk8_64
• 25 MB of free disk space required
• Additional OS package required
– compat-libstdc++-33-3.2.3-61 or later

IBM Informix Database Server uses the libraries and utilities provided by the IBM Global
Security Kit (GSKit) for data encryption and Secure Sockets Layer (SSL) communication. The
GSKit is bundled with the server and will be installed on your machine as part of the server
installation process.

Here are more details on the GSKit:

a. The GSKit is also bundled with other IBM products and might already be present on
your machine. If GSKit is not installed, the Informix server will install GSKit in the
/usr/local/ibm/gsk8_64 directory on your machine.

b. The GSKit installation directory must have 25 MB of free disk space.

c. One of the following packages must be installed on your system:


RHEL 5 - compat-libstdc++-33-3.2.3-61 or later

d. The RPM Package Manager is required to be installed on the system.


Memory Needed
• Total and Usage Memory
– 256 MB min for Informix Server
– 35 MB min for HPL, onpload, ipload
cat /proc/meminfo

These are the minimum memory requirements for the database server. However, you may need much more memory based on your application
and business requirements.

Checking Memory Usage


To determine the size and usage of memory, you can enter the following command: "grep MemTotal /proc/meminfo".

You can find a detailed description of the entries in /proc/meminfo at http://www.redhat.com/advice/tips/meminfo.html.


Alternatively, you can use the free(1) command to check the memory:

$ free
             total       used       free     shared    buffers     cached
Mem:       4040360    4012200      28160          0     176628    3571348
-/+ buffers/cache:     264224    3776136
Swap:      4200956      12184    4188772

In this example the total amount of available memory is 4040360 KB. 264224 KB are used by processes and 3776136 KB are free for
other applications. Don't get confused by the first line, which shows that only 28160 KB are free! If you look at the usage figures
you can see that most of the memory use is for buffers and cache, since Linux always tries to use RAM to the fullest extent to speed
up disk operations. Using available memory for buffers (file system metadata) and cache (pages with actual contents of files or
block devices) helps the system run faster because disk information is already in memory, which saves I/O. If space is needed by
programs or applications like Informix, Linux will free up the buffers and cache to yield memory for the applications. So if your
system runs for a while, you will usually see a small number under the field "free" on the first line.
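A scripted version of the minimum-memory check, as a sketch (the 256 MB threshold is the server minimum from the slide and is adjustable):

```shell
#!/bin/sh
# Compare total physical memory against the 256 MB server minimum
# quoted on the slide.
min_kb=$((256 * 1024))
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "MemTotal: ${mem_kb} kB"
if [ "$mem_kb" -lt "$min_kb" ]; then
    echo "WARNING: less than 256 MB of physical memory"
fi
```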
Large Memory Addressability
• Ability to support system configurations with
greater than 4GB of RAM on 32-bit OS
– Max BUFFERPOOL 2147483647
– Max LRU 512
– DS_TOTAL_MEMORY only limited by the amount
of virtual memory
– RA_PAGES only limited by number of buffers
– Max memory segment 4398046511104 bytes

Large Memory Addressability (LMA)

IBM Informix LMA provides the ability to support system configurations
with greater than 4GB of RAM. Most UNIX systems are limited to 4GB of
RAM based on the memory addressing limitations of 32-bit architectures.

The values of the following ONCONFIG parameters are increased on
64-bit platforms by LMA support:

- The maximum number of buffers in BUFFERPOOL is 2147483647.

- The maximum of LRU queues for lrus field in BUFFERPOOL is 512.

- The DS_TOTAL_MEMORY, which is the total memory available for
decision support memory, is only limited by the amount of virtual
memory available. The sort memory comes out of the DS_TOTAL_MEMORY
memory and hence there is no explicit limit on the amount of sort
memory.

- The read ahead parameter RA_PAGES is only limited by the number of
buffers and therefore can be any value less than 2147483647. The
"chunk" write algorithm is not dependent on the amount of buffers
or shared memory and can write as many pages in a single I/O as
possible.

- The maximum size of any shared memory segment is 4398046511104 bytes.

- The value for SHMADD is 4294967296 kilobytes.

Refer to the IBM Informix Administrator's Reference manual for
information about the related configuration parameter settings.
Enable Huge Pages
Mechanism that allows the Linux kernel to utilize
the multiple page size capabilities in memory
• Helps optimal memory usages
– Allows large memory with reduced overhead
• Check current settings
cat /proc/meminfo |grep Hugepagesize

• Alter it in /etc/sysctl.conf
sysctl -w vm.nr_hugepages=<number>

Hugepages is a mechanism that allows the Linux kernel to utilize the multiple page size
capabilities of modern hardware architectures. Linux uses pages as the basic unit of
memory, where physical memory is partitioned and accessed using the basic page unit. The
default page size is 4096 Bytes in the x86 architecture. Hugepages allows large amounts of
memory to be utilized with a reduced overhead. Linux uses "Translation Lookaside
Buffers" (TLB) in the CPU architecture. These buffers contain mappings of virtual memory
to actual physical memory addresses. So utilizing a huge amount of physical memory with
the default page size consumes the TLB and adds processing overhead.

The Linux kernel is able to set aside a portion of physical memory to be addressed
using a larger page size. Since the page size is higher, there will be less overhead managing
the pages with the TLB. In the Linux 2.6 series of kernels, hugepages is enabled using the
CONFIG_HUGETLB_PAGE feature when the kernel is built. Systems with large amount of
memory can be configured to utilize the memory more efficiently by setting aside a portion
dedicated for hugepages. The actual size of the page is dependent on the system
architecture.
A typical x86 system will have a Huge Page Size of 2048 kBytes. The huge page size may be
found by looking at the /proc/meminfo :

# cat /proc/meminfo | grep Hugepagesize
Hugepagesize: 2048 kB

The number of hugepages can be allocated using the /proc/sys/vm/nr_hugepages entry, or by
using the sysctl command.
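A sketch of sizing the hugepage pool for a shared-memory target; the 2048 MB target below is an example value, not from the slides:

```shell
#!/bin/sh
# Size the hugepage pool for a desired shared-memory segment size.
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
hp_kb=${hp_kb:-2048}   # fall back to the common x86 default of 2048 kB
shm_mb=2048            # desired shared-memory size in MB (example value)
pages=$(( shm_mb * 1024 / hp_kb ))
echo "Need vm.nr_hugepages=$pages for ${shm_mb} MB at ${hp_kb} kB per page"
# Apply at run time (as root):  sysctl -w vm.nr_hugepages=$pages
# Persist across reboots: add "vm.nr_hugepages=<number>" to /etc/sysctl.conf
```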
Graphic Library
• X Terminal
• OpenMotif runtime libraries v 2.3.1 or higher
– HPL
– onperf

Your computer must support the X terminal and the mwm window manager to run some of the
graphical utilities, for example ipload, onperf and xtree. These utilities require
OpenMotif runtime libraries version 2.3.x. The minimum version is 2.3.1, i.e. openmotif-
2.3.1 or openmotif-libs-2.3.1.
32-bit Packages
• ISM
– glibc and ncurses
• Ubuntu Server Edition x86_64
– libc6-i386
– libncurses
• Debian x86_64
– libc6-i386
– lib32ncurses5

You may use a 64-bit OS and Informix, but Informix Storage Manager (ISM) is still a 32-bit product that comes
with Informix. The following 32-bit packages are required for using ISM:

glibc and ncurses

If the product runs on Ubuntu Server Edition x86_64, the following 32-bit packages are required:

libc6-i386
libncurses

The 32-bit libncurses is not packaged, so you need to copy it from an
i386 (32-bit) installation: copy /lib/libncurses.so.5 to the x86_64
32-bit compat libs location, /lib32/libncurses.so.5.

If the product runs on Debian x86_64, the following 32-bit packages
are required:

libc6-i386
lib32ncurses5
Disk Space

Disk space requirements depend entirely on your application and business requirements. We
leave this to your consideration.
Right File System
• Which one is good for a database server: Ext2, Ext3,
Ext4, NFS, FAT16, FAT32, NTFS, Sysfs or Procfs?
• Extended file systems with journaling (JFS) gave very good performance
• Avoid the Reiser File System
• Enough memory for JFS file system caching
• Use df -t to determine the type

There is much discussion about which file system is the "best" on Linux. For running
Informix on Linux, ext3 is a good choice. Another option is, of course, ASM.

Linux supports numerous file system types


• Ext2: This is like the UNIX file system. It has the concepts of blocks, inodes and directories.
• Ext3: This is the ext2 file system enhanced with journaling capabilities. Journaling allows fast
file system recovery. Supports POSIX ACLs (Access Control Lists).
• Isofs (iso9660): Used by the CDROM file system.
• Sysfs: A RAM-based file system initially based on ramfs. It is used for exporting kernel
objects so that end users can access them easily.
• Procfs: The proc file system acts as an interface to internal data structures in the
kernel. It can be used to obtain information about the system and to change certain
kernel parameters at runtime using the sysctl command.
• NFS: The network file system allows many users or systems to share the same files by using
a client/server methodology. NFS allows sharing of all of the above file systems.
• Linux also supports Microsoft NTFS, vfat, and many other file systems. See the Linux kernel
source tree Documentation/filesystems directory for a list of all supported file systems.

A journaling file system (JFS) has a dedicated area in the file system where all changes are
tracked. When the system crashes, the possibility of file system corruption is lower because
of journaling.
Optimizing ext_ File System
• Change the ratio of created inodes per bytes
• File system with fewer number of inodes will
shorten fsck times
• Disable atime-Updates on ext for better
performance

Creating the ext3 file system

When creating the file system for storing your database files, you might want to change the
ratio of created inodes per bytes. This is especially useful for shortening the periodic full
file system check in ext3.
You can safely create an ext3 file system with only one inode per 1 MB of space with:

mkfs.ext3 -T largefile <device>

Or even one inode per 4 MB with:

mkfs.ext3 -T largefile4 <device>

Remember you will need one inode per created file or directory, so these options are
recommended only on file systems intended for use by data files (this includes redo logs
and even archive logs).
Creating a file system with a smaller number of inodes will shorten your fsck times
dramatically. The standard file system is created with one inode per every 4 KB. fsck'ing
an 8 TB file system filled to 50% with data files (all of 32 GB size) takes approximately
7 hours! Checking the same file system created with "-T largefile" takes only 10 minutes;
"-T largefile4" needs approximately 5 minutes.
Disable atime updates on ext3
When accessing a file or directory, the ext file system updates the file's or directory's
last-accessed timestamp. It does so for every read of every file. This has an unnecessary
performance impact. To avoid this impact, you can turn off these updates by adding
noatime,nodiratime to your /etc/fstab or by remounting the file system.
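A quick way to see the options currently in effect before editing /etc/fstab (the mount point / is an example):

```shell
#!/bin/sh
# Show the mount options in effect for a file system, to see whether
# noatime is already set.
mnt=/
grep " $mnt " /proc/mounts | head -1
# Enable without a reboot (as root):
#   mount -o remount,noatime,nodiratime /data    # /data is an example
# Persist by adding noatime,nodiratime to the options field in /etc/fstab.
```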
File Systems: Ext2 vs. Ext3 vs. Ext4

                          Ext2    Ext3    Ext4
Introduced                1993    2001    2008
Journaling feature        NO      YES     YES
Max individual file size  2TB     2TB     16TB
Overall file system size  32TB    32TB    1EB

1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).

Here is a comparison between different extended file systems.


Swap Space
• Min 70 MB of swap space for HPL, ipload, onpload

You may need more, but for HPL, ipload etc., you need at least 70 MB of swap space.
Informix Installation Consideration
Product Installation
• Create a group and user ‘informix’
– groupadd informix
– useradd -g informix -m informix
• Set the password for user informix
– passwd informix
• Need ROOT access to install the product
• Create a directory to unpack/extract the product
– mkdir /opt/informix-ids-11.70.FC4
• Install the product : Command Line or GUI
• Create the demo instance as part of the installation
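The steps above can be turned into a read-only sanity check; the user, group and directory names follow the slide:

```shell
#!/bin/sh
# Pre-install sanity check: does the informix group, the informix user
# and the extract directory from the slide already exist?
check() { getent "$1" "$2" >/dev/null && echo "$1 $2: present" || echo "$1 $2: missing"; }
check group  informix
check passwd informix
dir=/opt/informix-ids-11.70.FC4
if [ -d "$dir" ]; then echo "$dir: present"; else echo "$dir: missing"; fi
# To create them (as root):
#   groupadd informix
#   useradd -g informix -m informix
#   passwd informix
#   mkdir /opt/informix-ids-11.70.FC4
```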
JRE Shared Libraries
• Update /etc/ld.so.conf, add following at end
<$INFORMIXDIR>/extend/krakatoa/jre/bin
<$INFORMIXDIR>/extend/krakatoa/jre/bin/classic
<$INFORMIXDIR>/extend/krakatoa/jre/lib
• Bundled JRE used from
$INFORMIXDIR/extend/krakatoa/jre
• As user root run ‘ldconfig’

In order to ensure the Java Runtime Environment (JRE) shared libraries are loaded
properly, the following steps are necessary:
• Add the following lines at the end of /etc/ld.so.conf
<$INFORMIXDIR>/extend/krakatoa/jre/bin
<$INFORMIXDIR>/extend/krakatoa/jre/bin/classic
<$INFORMIXDIR>/extend/krakatoa/jre/lib

This assumes the bundled JRE in $INFORMIXDIR/extend/krakatoa/jre is used. Substitute
<$INFORMIXDIR> with the value of your INFORMIXDIR environment variable.

• Run ldconfig as root # ldconfig
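A sketch that generates the three ld.so.conf lines; the INFORMIXDIR default here is just the example directory from the installation slide:

```shell
#!/bin/sh
# Generate the /etc/ld.so.conf entries for the bundled JRE.
# Point INFORMIXDIR at your real installation directory.
INFORMIXDIR=${INFORMIXDIR:-/opt/informix-ids-11.70.FC4}
for d in bin bin/classic lib; do
    echo "$INFORMIXDIR/extend/krakatoa/jre/$d"
done
# As root: append the lines above to /etc/ld.so.conf, then run ldconfig.
```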


Processor Affinity
• Override the system's built-in scheduler to
force a process to run only on specified CPUs
• Better performance on SMP or NUMA
• Use VPCLASS Informix configuration
parameter

Processor affinity refers to binding a process or a set of processes to a specific CPU or a set of CPUs. The advantage of doing this is to
override the system's built-in scheduler to force a process to run only on specified CPUs. This can provide some performance gains in
Symmetric Multiprocessor (SMP) and Non-Uniform Memory Access (NUMA) environments because it is much more likely that the
processor's cache will contain cached data for the process bound to that processor.

The NUMA architecture was designed to surpass the scalability limits of the SMP architecture. With SMP, all memory access is posted to
the same shared memory bus. This works well for a relatively small number of CPUs, but a problem with the shared bus can occur if you
have dozens, and even hundreds, of CPUs competing for access to the shared memory bus. NUMA alleviates these bottlenecks by
limiting the number of CPUs on any one memory bus, and by connecting the various nodes through a high-speed interconnection.

When a process is scheduled onto different processors, there is little chance for cache hits, and you can experience performance
degradation (Because of the migration, the cache must be filled again!).

You must be cautious when using this feature, since incorrect use can have a negative performance impact. Overriding what the kernel
selects as best for the process can be tricky. Obtaining a significant performance improvement using processor affinity involves some
experimentation because every workload is different and the kernel and I/O scheduler operate differently with every workload.

Set the aff option of the VPCLASS configuration parameter to the numbers of the CPUs on which to bind CPU virtual processors in your
$INFORMIXDIR/etc/$ONCONFIG file.

Note: Binding a CPU virtual processor to a processor does not prevent other processes from running on that processor. Application
processes or other processes that you do not bind to a CPU are free to run on any available processor.

Operating system requirements: CPU affinity is a 2.6 kernel feature that has also been back-ported to Red Hat Enterprise Linux 3.

If you find a bitmask hard to use, then you can specify a numerical list of processors instead of a bitmask using the -c flag:
# taskset -c 1 -p 13545
# taskset -c 3,4 -p 13545
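Inside Informix, affinity is set through the VPCLASS aff option rather than taskset; a hypothetical onconfig fragment binding four CPU VPs to processors 0-3 (verify the exact VPCLASS syntax for your version in the Administrator's Reference):

```
VPCLASS cpu,num=4,aff=0-3
```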
Direct-IO and Asynchronous IO
• By default Automatic Storage Management
uses Asynchronous IO
• Not all file systems support Asynchronous IO
• Direct IO available with RH 3 and SUSE 9

On Linux, Automatic Storage Management uses asynchronous I/O by default.

Direct I/O support is available and supported on Red Hat Enterprise Linux 3 and SUSE Linux Enterprise Server 9. To use direct I/O on Red
Hat Enterprise Linux 3, the driver that you use must support vary I/O. On Linux on POWER, you can use direct I/O on Red Hat Linux 4.

Note that not all file systems support Direct I/O or even Asynchronous I/O. EXT3 does support both. Asynchronous I/O is not supported
for chunks stored on NFS file systems.
KAIO
• Linux kernel 2.6 or higher
– Red Hat Enterprise Linux AS release 4.0
– SUSE LINUX Enterprise Server 9
– libaio.so.1 library 0.3.96-3 or higher
• Sufficient number of parallel KAIO requests
echo new_value > /proc/sys/fs/aio-max-nr
• Raw devices for chunks
• Run poll threads on separate VPs
– NETTYPE ipcshm,...,...,NET or
– NETTYPE soctcp,...,...,NET

Kernel Asynchronous I/O (KAIO)

KAIO is enabled by default on this platform. It can be disabled by setting
the environment variable KAIOOFF=1 in the environment of the process that
starts the server.

When using KAIO, it is recommended to run poll threads on separate VPs by
specifying NET as the VP class in the NETTYPE onconfig parameter, e.g.
NETTYPE ipcshm,...,...,NET or
NETTYPE soctcp,...,...,NET

On Linux, there is a system-wide limit on the maximum number of parallel
KAIO requests. The file /proc/sys/fs/aio-max-nr contains this value.
It can be increased by the Linux system administrator, e.g. by

# echo new_value > /proc/sys/fs/aio-max-nr

The current number of allocated requests of all OS processes is visible
in /proc/sys/fs/aio-nr.

By default, the IBM Informix database server allocates half of the maximum


number of requests, and assigns them equally to the number of configured
CPU VPs. The number of requests allocated per CPU VP can be controlled by
the environment variable KAIOON, by setting it to the required value before
starting the server. The minimum value for KAIOON is 100. If Linux is
about to run out of KAIO resources, e.g. when dynamically adding many CPU
VPs, warnings will be printed to the online.log file. In this case, the
Linux system administrator should add KAIO resources as described above.
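Both files can be inspected with a short sketch:

```shell
#!/bin/sh
# Inspect the system-wide KAIO request limit and the number of
# requests currently allocated by all processes.
max=$(cat /proc/sys/fs/aio-max-nr)
cur=$(cat /proc/sys/fs/aio-nr)
echo "aio-max-nr: $max   currently allocated: $cur"
# Raise the limit (as root), e.g.:
#   echo 262144 > /proc/sys/fs/aio-max-nr
```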

Location of Shared Memory
• Default SHMBASE 0x44000000L
• Check address space before altering the default
cat /proc/<pid of oninit process>/maps

The ONCONFIG variable SHMBASE is set to the following:

SHMBASE 0x44000000L

The SHMBASE can also be set to start above the shared library addresses. When doing so,
ensure that dynamically loaded shared libraries do not collide with the shared memory
segments. The address space layout can be checked with the following command:

$ cat /proc/<pid of oninit process>/maps
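A sketch of the collision check; the current shell's pid stands in for an oninit pid:

```shell
#!/bin/sh
# List SYSV shared-memory segments and shared-library mappings of a
# process to spot address collisions with the shared memory segments.
pid=$$
grep -E 'SYSV|\.so' /proc/$pid/maps | head -20
```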
Change Location of Shared Library
• Beginning with kernel version 2.4.19, Linux
provides a way to dynamically change the default
start address for shared libraries on a per-process
basis. This feature is available, if the file
/proc/$$/mapped_base exists.

• To change the start address for shared libraries of
the oninit processes, the new start address needs
to be specified by user root in the shell from
where oninit is started.
Change Location of Shared Library (Cont.)
• Example:
$ echo $$
29712
$ su root
Password:
# # the following sets the start address of shared libraries to
0xB0000000:
# echo -1342177280 > /proc/29712/mapped_base
# exit
$ oninit

Assuming the $ONCONFIG parameter SHMBASE is 0x10000000, this
gives 2.5 GB of contiguous address space available for the database
server.
Change Location of Shared Library (Cont.)

• On Red Hat Enterprise Linux 3 the start address
for shared libraries is 0xb7600000 and memory
address space is utilized downwards.

• The ONCONFIG variable SHMBASE is recommended
to be set to SHMBASE 0x10000000L

• $ cat /proc/25830/maps [see notes]

08048000-089b9000 r-xp 00000000 03:01 65677 /work/9.50/bin/oninit
089b9000-08b20000 rw-p 00970000 03:01 65677 /work/9.50/bin/oninit
08b20000-08bbb000 rw-p 00000000 00:00 0
10000000-12b5d000 rw-s 00000000 00:04 22052864 /SYSV52694801 (deleted)
12b5d000-13afd000 rw-s 00000000 00:04 22085633 /SYSV52694802 (deleted)
13afd000-13b81000 rw-s 00000000 00:04 22118402 /SYSV52694803 (deleted)
13b81000-14381000 rw-s 00000000 00:04 22151171 /SYSV52694804 (deleted)
b7363000-b736e000 r-xp 00000000 03:02 189112 /lib/libnss_files-2.3.2.so
b736e000-b736f000 rw-p 0000a000 03:02 189112 /lib/libnss_files-2.3.2.so
b736f000-b7370000 rw-p 00000000 00:00 0
b7370000-b74a1000 r-xp 00000000 03:02 346507 /lib/tls/libc-2.3.2.so
b74a1000-b74a4000 rw-p 00130000 03:02 346507 /lib/tls/libc-2.3.2.so
b74a4000-b74a7000 rw-p 00000000 00:00 0
b74a7000-b74af000 r-xp 00000000 03:02 188931 /lib/libgcc_s-3.2.3-20030829.so.1
b74af000-b74b0000 rw-p 00007000 03:02 188931 /lib/libgcc_s-3.2.3-20030829.so.1
b74b0000-b74d1000 r-xp 00000000 03:02 346509 /lib/tls/libm-2.3.2.so
b74d1000-b74d2000 rw-p 00020000 03:02 346509 /lib/tls/libm-2.3.2.so
b74d2000-b74d3000 rw-p 00000000 00:00 0
b74d3000-b757c000 r-xp 00000000 03:02 110469 /usr/lib/libstdc++.so.5.0.3
b757c000-b7581000 rw-p 000a8000 03:02 110469 /usr/lib/libstdc++.so.5.0.3
b7581000-b7586000 rw-p 00000000 00:00 0
b7586000-b758d000 r-xp 00000000 03:02 189159 /lib/libpam.so.0.75
b758d000-b758e000 rw-p 00007000 03:02 189159 /lib/libpam.so.0.75
b758e000-b759d000 r-xp 00000000 03:02 110373 /usr/lib/libelf-0.89.so
b759d000-b759e000 rw-p 0000f000 03:02 110373 /usr/lib/libelf-0.89.so
b759e000-b75a3000 r-xp 00000000 03:02 189090 /lib/libcrypt-2.3.2.so
b75a3000-b75a4000 rw-p 00004000 03:02 189090 /lib/libcrypt-2.3.2.so
b75a4000-b75cb000 rw-p 00000000 00:00 0
b75cb000-b75cd000 r-xp 00000000 03:02 189092 /lib/libdl-2.3.2.so
b75cd000-b75ce000 rw-p 00001000 03:02 189092 /lib/libdl-2.3.2.so
b75ce000-b75db000 r-xp 00000000 03:02 346511 /lib/tls/libpthread-0.60.so
b75db000-b75dc000 rw-p 0000c000 03:02 346511 /lib/tls/libpthread-0.60.so
b75dc000-b75df000 rw-p 00000000 00:00 0
b75eb000-b7600000 r-xp 00000000 03:02 189079 /lib/ld-2.3.2.so
b7600000-b7601000 rw-p 00015000 03:02 189079 /lib/ld-2.3.2.so
bffbb000-c0000000 rwxp fffbf000 00:00 0
Post-Installation Considerations
Page Cache
• Page Cache
– Holds data of files & executable programs
– Reduce the number of disk reads
– Control the percentage of total memory used for
page cache
echo "1 15 30" > /proc/sys/vm/pagecache

1 = minimum percentage of memory; 15 = page cache the system will maintain while memory is pruned; 30 = maximum percentage of memory

Tuning Page Cache


You can tune the page cache for user applications.
Page Cache holds data of files and executable programs, i.e. pages with actual contents of files or block devices. Page Cache (disk cache)
is used to reduce the number of disk reads. To control the percentage of total memory used for page cache in RHEL 3, the following
kernel parameter can be changed:
# cat /proc/sys/vm/pagecache
1 15 30

It holds three values that can be set by writing a space-separated list to the file:
• Minimum percentage of memory that should be used for pagecache
• The system will try and maintain this amount of pagecache when system memory is being
pruned in the event of a low amount of system memory remaining
• Maximum percentage of memory that should be used for pagecache
The above three values are usually good for database systems. It is not recommended to set the third value very high like 100 as it used
to be with older RHEL 3 kernels. This can cause significant performance problems for database systems. If you upgrade to a newer
kernel like 2.4.21-37, then these values will automatically change to "1 15 30" unless it's set to different values in /etc/sysctl.conf.
For information on tuning the pagecache kernel parameter, I recommend reading the excellent article Understanding Virtual
Memory. Note this kernel parameter does not exist in RHEL 4.
The pagecache parameters can be changed in the proc file system without reboot:
# echo "1 15 30" > /proc/sys/vm/pagecache Alternatively, you can use sysctl(8) to change it:
# sysctl -w vm.pagecache="1 15 30" To make the change permanent, add the following line to the file /etc/sysctl.conf. This file is used
during the boot process.
# echo "vm.pagecache=1 15 30" >> /etc/sysctl.conf
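Since this tunable only exists on certain kernels (RHEL 3 era, removed in RHEL 4 as noted above), a defensive existence check before scripting any change can be sketched as follows; this is an illustrative snippet, not part of the Informix tooling:

```shell
# Only read or write the tunable if this kernel actually provides it.
if [ -f /proc/sys/vm/pagecache ]; then
    echo "current pagecache settings: $(cat /proc/sys/vm/pagecache)"
else
    echo "/proc/sys/vm/pagecache not present on this kernel (e.g. RHEL 4 and later)"
fi
```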
Tune Swapping Priority
• Linux swaps out unused pages, even when memory is free
– Pages in memory not accessed for some time
– Freed memory is most often used for the file system cache
• A low “swappiness” value benefits database applications
sysctl -w vm.swappiness=5
Linux swaps out unused pages even if there is plenty of memory free. It does so by checking whether a memory page (which is 4 KB in size) was accessed recently. If a page has not been accessed for some time, it gets swapped out to disk. The freed memory is used for other purposes – most often for the file system cache.

A high swappiness value means that the kernel will be more apt to unmap mapped pages; a low swappiness value means the opposite. In other words, the higher the vm.swappiness value, the more the system will swap. vm.swappiness takes a value between 0 and 100 to change the balance between swapping out applications and freeing cache. At 100, the kernel will always prefer to find inactive pages and swap them out; at lower values, whether a swapout occurs depends on how much application memory is in use and how poorly the cache is doing at finding and releasing inactive items.

It is recommended to set this value quite low on a database server, e.g. 5 or even 1. You can check the current setting in /proc/sys/vm/swappiness. Note that huge pages are not swappable and thus always remain in memory.

Tuning the Linux memory subsystem is a tough task that requires constant monitoring to ensure that changes do not negatively affect other components in the server. If you do choose to modify the virtual memory parameters (in /proc/sys/vm), change only one parameter at a time and monitor how the server performs.
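Assuming a Linux host, a quick way to check and persist the setting is sketched below; the value 5 follows the recommendation above, and both the sysctl call and the /etc/sysctl.conf edit require root:

```shell
# Read the current setting; works without root on any modern Linux.
current=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness is currently $current"

# Lower it at runtime (requires root):
# sysctl -w vm.swappiness=5

# Persist across reboots (requires root):
# echo "vm.swappiness=5" >> /etc/sysctl.conf
```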
Stack Overflow
• Default STACKSIZE
– 32K for 32-bit
– 64K for 64-bit
• Typical assertion failure (AF) on a stack overflow:
Memory block header corruption detected
Condition Failed - Bad pool pointer
• Increase STACKSIZE for recursive operations
– Cascading deletes
• Recommendation
– Monitor: onstat -g sts
– 128K stack size
– Use env variable INFORMIXSTACKSIZE
It is recommended that the environment variable INFORMIXSTACKSIZE be set to 128 (the default is 64) if the application involves operations that require Informix to perform recursive database tasks (for example, cascading deletes).
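A minimal sketch of setting the variable before server startup; the value is in KB, oninit reads it from the environment, and a server restart is required for it to take effect:

```shell
# INFORMIXSTACKSIZE is read by oninit at startup; the unit is KB.
INFORMIXSTACKSIZE=128
export INFORMIXSTACKSIZE
echo "INFORMIXSTACKSIZE=$INFORMIXSTACKSIZE"

# After restarting the server, monitor per-thread stack usage with:
# onstat -g sts
```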
Linux OOM Killer
• Linux feature for dealing with memory exhaustion
• Affects Linux x86-32bit
• oninit can get killed by the OOM killer
• Solutions
– Upgrade to Linux 64-bit
– Tune operating system swap space
– Run the hugemem kernel
– Configure a lower zone memory area: set /proc/sys/vm/lower_zone_protection to a value
– Configure huge pages and huge page size for Linux
A common problem that you can encounter with the Informix database server on the Linux x86-32bit platform, mainly RHEL releases below 5, is widely known as the out-of-memory (OOM) killer.

The Linux operating system incorporates an interesting feature for dealing with memory exhaustion: the OOM killer. The OOM killer terminates selected processes in order to free up enough memory to keep the system operational. Once memory gets tight on a Linux system, an Informix database process (oninit) can get killed by the OOM killer, and eventually the Informix server crashes.
You can easily verify the consequences of the OOM killer from the Informix and Linux message logs.

Informix message log:

08:51:59 Fatal error in ADM VP at mt.c:13418
08:51:59 Unexpected virtual processor termination, pid = 29493, exit = 0x9
08:52:04 PANIC: Attempting to bring system down

Linux message log (/var/log/messages):

Jan 17 08:51:59 darwin kernel: Out of memory: Killed process 29493 (oninit).
Following are some of the most common solutions to get around an OOM killer situation:
•Upgrade to a 64-bit version of Linux.
•Tune operating system swap space.
•Run the hugemem kernel.
•Configure a lower zone memory area by setting /proc/sys/vm/lower_zone_protection to a value.
•Configure or tune huge pages and huge page size for Linux.

Check the following link for more information on the Linux OOM killer feature:
http://linux-mm.org/OOM_Killer
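To confirm that the OOM killer terminated oninit, the kernel message shown above can be searched for. The log path /var/log/messages and the exact message text are RHEL-style and vary by distribution, so treat this as a sketch:

```shell
# Search the syslog for OOM-killer kills of oninit (RHEL-style path;
# ignore errors if the file is absent or unreadable).
grep -i "out of memory" /var/log/messages 2>/dev/null | grep oninit || true

# The same filter applied to the sample message from the notes above:
printf 'Jan 17 08:51:59 darwin kernel: Out of memory: Killed process 29493 (oninit).\n' \
    | grep -ic "out of memory"   # prints 1
```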
Disable Unnecessary Daemons
• Disable unnecessary daemons
– Frees memory
– Decreases startup time
– Decreases the number of processes
– Increases security
/sbin/chkconfig --list
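The service names below are examples of daemons often unneeded on a dedicated database server, not a definitive list; chkconfig is the RHEL-style tool and requires root to change runlevel settings, so verify each service before disabling it:

```shell
# Example candidates; review against your own requirements first.
for svc in cups sendmail isdn pcmcia; do
    echo "candidate for disabling: $svc"
done

# RHEL-style commands (require root):
# /sbin/chkconfig --list            # review current runlevel settings
# /sbin/chkconfig cups off          # disable at boot
# /sbin/service cups stop           # stop immediately
```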
Informix Reversion Considerations
Revert 64-Bit 11.7 to 32-Bit 11.7
• Keep Informix 64-bit server online
• Disconnect all users
• As User Informix:
$INFORMIXDIR/etc/conv/update.sh -32 -d
• Shutdown the 64-bit server
• Change INFORMIXDIR, point to 32-bit server
• Bring the 32-bit server online
Revert 64-Bit 11.7 to 32-Bit 11.5/11.1/10
• Keep Informix 64-bit server online
• Disconnect all users
• As User Informix:
– $INFORMIXDIR/etc/conv/update.sh -32
– onmode -b
• Change INFORMIXDIR, point to 32-bit server
• Bring the 32-bit server online
References
• Informix Server System Requirements
https://www-304.ibm.com/support/docview.wss?uid=swg27013343
Questions?!?
Is Your Linux System Ready for Informix?
Sanjit Chakraborty
sanjitc@us.ibm.com