
vmo -o minperm%=1 -o maxperm%=1 -o strict_maxperm=1 -o minclient%=1 -o maxclient%=1

Monitor with vmstat -v to see when/if it takes effect. You might need to do something memory intensive to trigger the page replacement daemon into action and take care of that 1%.
cat "somefile_sized_1%_of_memory" > /dev/null
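A sketch of such a trigger, assuming a 16 GB machine so 1% is roughly 160 MB (the path and size are illustrative, adjust to your system):

```shell
# Create a throwaway file of about 1% of memory and read it through the
# page cache, nudging the page replacement daemon to enforce the limits.
SIZE_MB=160   # assumed: 1% of 16 GB; set to 1% of your real memory
dd if=/dev/zero of=/tmp/cache_filler bs=1048576 count="$SIZE_MB" 2>/dev/null
cat /tmp/cache_filler > /dev/null   # pull the file into the file cache
rm /tmp/cache_filler                # clean up the filler file
```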

Perfectly acceptable workaround! I just set maxclient% to a value equal to minperm%
and strict_maxclient=1, and in less than 2 minutes 16GB of memory was freed. Now I will put
back the original values.

Short version: look at the in-use clnt+pers pages in the svmon -G output (unit is 4k pages) if you
want to know all file cache, or look at "file pages" in vmstat -v for file cache excluding
executables (same unit).

For an extremely short summary, memory in AIX is classified in two ways:

Working memory vs permanent memory
Working memory is process memory (stack, heap, shared memory) and kernel memory. If that sort of
memory needs to be paged out, it goes to swap.

Permanent memory is file cache. If that needs to be paged out, it goes back out to the filesystem
where it came from (for dirty pages, clean pages just get recycled). This is subdivided into non-
client (or persistent) pages for JFS, and client pages for JFS2, NFS, and possibly others.

Computational vs non-computational pages.

Computational pages are again process and kernel data, plus process text data (i.e. pages that
cache the executable/code).
Non-computational are the other ones: file cache that's not executable (or shared library).

svmon -G (btw, svmon -G -O unit=MB is a bit friendlier) gives you the work versus permanent
pages. The work column is, well, work memory. You get the permanent memory by adding up the
pers (JFS) and clnt (JFS2) columns.

In your case, you've got about 730MB of permanent pages, that are backed by your filesystems
(186151*4k pages).
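That 4k-page arithmetic can be double-checked with a one-liner (186151 is the pers+clnt page count quoted above):

```shell
# Convert a count of 4 KB pages into MB: pages * 4 / 1024.
pages=186151
awk -v p="$pages" 'BEGIN { printf "%.0f MB\n", p * 4 / 1024 }'
```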

Now the topas top-right "widget" FileSystemCache (numperm) shows something slightly
different, and you'd get that same data with vmstat -v: that's only non-computational permanent
pages. i.e. same thing as above, but excluding pages for executables.

In your case, that's about 350MB (2.2% of 16G). Either way, that's really not much cache.

The command you are looking for (imho) is:

# svmon -P -O filtertype=working,segment=off,filtercat=exclusive,unit=MB
The key options here are:
'filtertype=working' # aka, no cache;
'segment=off' # actually default when using -O
'filtercat=exclusive' # do not include shared memory or kernel atm
'unit=MB' # who wants to calculate from # of pages ??
And, you will want to look at other options such as -C (command name related, some examples
below) and perhaps -U (user related)
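To total memory per command from that output, a small awk pipeline works; here sample rows (taken from the output shown further down) stand in for piping svmon itself:

```shell
# Sketch: sum the Inuse column ($3) per command name ($2).
# On a real box:  svmon -P -O filtertype=working,... | awk 'NR > 2 ...'
svmon_sample='     Pid Command  Inuse   Pin  Pgsp Virtual
14614630 httpd     21.5  0.06     0    21.5
11272246 httpd     21.4  0.06     0    21.4
 4718766 mysqld    13.6  0.02     0    13.6'
printf '%s\n' "$svmon_sample" |
  awk 'NR > 1 { inuse[$2] += $3 }
       END { for (c in inuse) printf "%s %.1f\n", c, inuse[c] }'
```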

++++ start of comment ++++

inserting what I would have entered as a comment to your question, but lack the reputation - as a
new user here.

Your vmstat output tells me more than just your current situation. As it is single-line output, it
is historical, and I suspect you have been having memory issues, as it shows a history of pi/po
(paging space page-ins / page-outs).

Other columns of interest are the fr/sr columns:

fr: pages freed by lrud (least recently used daemon aka page stealer)
sr: pages scanned/searched by lrud looking for an 'old' page
sr/fr: ratio that expresses how many pages must be "scanned" to free 1
What I consider troubling are the pi/po values given here, which are completely out of line with the data
from the other commands. There is also no uptime here, so it is hard to know what kind of 'test' generated these numbers.

pi: paging space page in (i.e., read application memory from paging space)
po: steal memory and write application (aka working) memory to paging space - only working
memory goes to/from page space

In your presentation you show pi=22 and po=7. This means, on average, the system was reading
information from paging space (after it had been written) 3x more often than it wrote data. This is
an indication of a starved system: data is being read in (pi) and then stolen again (sr/fr) before
it is ever touched (referenced aka used), or read in and removed again before the application
'waiting' for it ever has a chance to access it.
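Those pi/po numbers work out as follows:

```shell
# The vmstat line in question shows pi=22 and po=7: roughly a 3x read/write ratio.
awk 'BEGIN { printf "pi/po = %.1f\n", 22 / 7 }'
```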

In short, the data presented is not 'in sync' with the 'pain' moments - although it might explain why
only 2.2% of your memory is now used for caching (it may even be 'computational', aka the loaded executables).

As far as vmstat goes, I also suggest the flags -I (capital i, which adds 'fi' and 'fo', file-in and file-out
activity) and -w (wide) so the numbers are better positioned under the textual headers.

++++ end of 'comment'

So let's see an excerpt using -P (process view)

# svmon -P -O filtertype=working,segment=off,filtercat=exclusive,unit=MB | head -15
Unit: MB
     Pid Command      Inuse   Pin  Pgsp Virtual
14614630 httpd         21.5  0.06     0    21.5
11272246 httpd         21.4  0.06     0    21.4
12779758 httpd         21.2  0.06     0    21.2
17760476 httpd         20.9  0.06     0    20.9
11796712 httpd         20.8  0.06     0    20.8
17039454 httpd         20.6  0.06     0    20.6
11862240 httpd         20.6  0.06     0    20.6
14680090 httpd         20.5  0.06     0    20.5
10747970 httpd         20.5  0.06     0    20.5
11141286 httpd         20.5  0.06     0    20.5
 4718766 mysqld        13.6  0.02     0    13.6
When you are not root you only see the commands in your environment.

$ svmon -P -O filtertype=working,segment=off,filtercat=exclusive,unit=MB
Unit: MB

     Pid Command      Inuse   Pin  Pgsp Virtual
 5505172 svmon         10.7  0.19  0.44    11.4
 6553826 ksh           0.57  0.02     0    0.57
 9175288 ksh           0.55  0.02     0    0.55
12910710 sshd          0.55  0.02     0    0.55
15204356 sshd          0.52  0.02     0    0.52
12779760 head          0.18  0.02     0    0.18
You may want to look at a specific command - so switching back to root to look at httpd

svmon -C httpd -O filtertype=working,segment=off,filtercat=exclusive,unit=MB
Unit: MB
Command      Inuse    Pin  Pgsp Virtual
httpd       227.44   0.69     0  227.44
Details: excerpt
# svmon -C httpd -O filtertype=working,segment=category,filtercat=exclusive,unit=MB
Unit: MB
Command      Inuse    Pin  Pgsp Virtual
httpd       230.62   0.81     0  230.62

EXCLUSIVE segments    Inuse    Pin  Pgsp Virtual
                     230.62   0.81     0  230.62

  Vsid Esid Type Description          PSize  Inuse   Pin  Pgsp Virtual
81a203    3 work working storage          m   24.6     0     0    24.6
8b82d7    3 work working storage          m   18.8     0     0    18.8
8b9d37    3 work working storage          m   18.2     0     0    18.2
8915f2    f work shared library data      m   2.00     0     0    2.00
89abb3    f work shared library data      m   2.00     0     0    2.00
824ea4    f work shared library data      m   2.00     0     0    2.00
This does not show off 'segment=category' well, so here is a simpler command, tail, showing
a summary and detail of each memory 'segment' type, but still 'working' memory only (aka
no caching):

# svmon -C tail -O filtertype=working,segment=category,unit=MB

Unit: MB
Command      Inuse    Pin  Pgsp Virtual
tail          82.5   52.6  5.12    90.6

SYSTEM segments       Inuse    Pin  Pgsp Virtual
                       34.1   33.1  2.38    35.8

  Vsid Esid Type Description          PSize  Inuse   Pin  Pgsp Virtual
 10002    0 work kernel segment           m   34.1  33.1  2.38    35.8

EXCLUSIVE segments    Inuse    Pin  Pgsp Virtual
                       0.18   0.02     0    0.18

  Vsid Esid Type Description          PSize  Inuse   Pin  Pgsp Virtual
88b4f1    f work working storage         sm   0.09     0     0    0.09
82d005    2 work process private         sm   0.07  0.02     0    0.07
8e0c9c    3 work working storage         sm   0.02     0     0    0.02

SHARED segments       Inuse    Pin  Pgsp Virtual
                       48.2   19.5  2.75    54.6

  Vsid Esid Type Description          PSize  Inuse   Pin  Pgsp Virtual
  9000    d work shared library text      m   48.2  19.5  2.75    54.6
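As a sanity check, the per-category Inuse values should add up to the command total shown at the top (a quick sketch):

```shell
# SYSTEM (34.1) + EXCLUSIVE (0.18) + SHARED (48.2) = the 82.5 MB shown for tail.
awk 'BEGIN { printf "%.1f MB\n", 34.1 + 0.18 + 48.2 }'
```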

It is normal for AIX to use up most of its memory, and it doesn't release memory as quickly as
other OSes. All of this is taken care of by AIX's Virtual Memory Manager (VMM)
and the lrud kernel process. The VMM's behavior can be tuned using the vmo command.

In AIX there are two types of pages in memory: computational (i.e. executable files and their
working area) and non-computational (i.e. filesystem cache).

When AIX needs more memory, the lrud process is run to steal pages. Which type of pages lrud
removes from memory is determined by these VMM parameters: minperm(%), maxperm(%), and
lru_file_repage. The vmo command can be used to change these parameters.

Below are the types of pages removed from memory by lrud:

If numperm(%) (the non-computational file cache) is higher than maxperm(%), lrud removes
non-computational pages.

If numperm(%) is lower than minperm(%), lrud removes either computational or non-computational
pages, whichever are least recently used.

If numperm(%) is between minperm(%) and maxperm(%) and lru_file_repage is '1', non-computational
pages are removed if their repage rate is lower than that of computational pages. If
lru_file_repage is '0', only non-computational pages are removed.
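The rules above can be sketched as a small decision function; the function and its argument names are mine, not any real AIX interface:

```shell
# Hypothetical sketch of the lrud stealing decision described above.
# Arguments: numperm minperm maxperm (percentages) and the lru_file_repage flag.
lrud_target() {
  numperm=$1; minperm=$2; maxperm=$3; lru_file_repage=$4
  if [ "$numperm" -gt "$maxperm" ]; then
    echo "non-computational pages only"
  elif [ "$numperm" -lt "$minperm" ]; then
    echo "least recently used, computational or not"
  elif [ "$lru_file_repage" -eq 0 ]; then
    echo "non-computational pages only"
  else
    echo "whichever has the lower repage rate"
  fi
}

lrud_target 50 3 90 0   # numperm between the limits, lru_file_repage=0
```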

To determine if AIX is having memory issues, I would look at the ratio of pages scanned to
pages freed (I cannot remember where this is in the nmon output; in vmstat these are the sr and fr
columns). If this ratio is high, lrud is scanning a lot of pages to find pages it can remove from memory.
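With made-up numbers purely for illustration, the ratio is just sr divided by fr:

```shell
# Hypothetical example: 4000 pages scanned (sr) to free 500 (fr)
# means 8 pages scanned per page freed, a sign of memory pressure.
awk 'BEGIN { printf "sr/fr = %.1f\n", 4000 / 500 }'
```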

Disclaimer: my answer is based on AIX versions 5.3 - 6.0, which I worked on at my previous
company 3 - 4 years ago. But I doubt there has been significant change in the behavior of lrud
and the VMM parameters in newer versions of AIX.