CPU Bottlenecks
For servers whose primary role is that of an application or database server, the CPU is a
critical resource and can often be a source of performance bottlenecks. It is important to
note that high CPU utilization does not always mean that a CPU is busy doing work; it might
be waiting on another subsystem. When performing proper analysis, it is very important
that you look at the system as a whole and at all subsystems because there could be a
cascade effect within the subsystems.
Note: There is a common misconception that the CPU is the most important part of
the server. This is not always the case, and servers are often overconfigured with CPU
and underconfigured with disks, memory, and network subsystems. Only specific
applications that are truly CPU intensive can take advantage of today’s high-end
processors.
A quick first check is the uptime command, which shows the current time, how long the system has been up, the number of logged-in users, and the 1-, 5-, and 15-minute load averages:

# uptime
Tip: Be careful not to add to CPU problems by running too many tools at one time.
You might find that running many monitoring tools simultaneously is itself
contributing to the high CPU load.
Using top, you can see the CPU utilization and what processes are the biggest contributors
to the problem. If you have set up sar, you are collecting a lot of information, some of which
is CPU utilization, over a period of time. Analyzing this information can be difficult, so use
isag, which can use sar output to plot a graph. Otherwise, you may wish to parse the
information through a script and use a spreadsheet to plot it to see any trends in CPU
utilization. You can also use sar from the command line by issuing sar -u or sar -U
processornumber. To gain a broader perspective of the system and current utilization of
more than just the CPU subsystem, a good tool is vmstat
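As a quick sketch of the tools mentioned above (sar comes with the sysstat package and may need to be installed; the sample counts and intervals are arbitrary):

```shell
# Overall CPU utilization: 5 samples at 1-second intervals
sar -u 1 5

# System-wide view of CPU, memory, swap, and I/O activity: 5 samples, 1 second apart
vmstat 1 5

# One-shot, non-interactive snapshot of the busiest processes
top -b -n 1 | head -20
```

Watching a few samples over time, rather than a single snapshot, makes it much easier to tell a sustained bottleneck from a momentary spike.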
SMP
SMP-based systems can present their own set of interesting problems that can be difficult
to detect. In an SMP environment there is the concept of CPU affinity, which means
binding a process to a specific CPU.
The main reason this is useful is CPU cache optimization, which is achieved by
keeping the same process on one CPU rather than letting it move between processors. When a
process moves to another CPU, it loses the benefit of the warm cache it has built up,
and the new CPU's cache must be repopulated from memory. Therefore, a process that
frequently moves between processors takes longer to finish. This scenario is very hard
to detect because, when monitoring it, the CPU load appears well balanced and not
necessarily peaking on any one CPU.
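The text describes CPU affinity in general terms; on Linux, one common way to pin a process to a CPU is the taskset command from util-linux (sleep stands in here for a real workload):

```shell
# Launch a command pinned to CPU 0
taskset -c 0 sleep 30 &
pid=$!

# Show the current CPU affinity of the running process
taskset -c -p "$pid"

# Change its affinity to allow CPUs 0 and 1
taskset -c -p 0,1 "$pid"

kill "$pid"
```

Pinning a long-running, cache-sensitive process this way avoids the repeated cache repopulation described above.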
Linux supports nice levels from 19 (lowest priority) to -20 (highest priority). The default value
is 0. To change the nice level of a program to a negative number (which makes it higher
priority), it is necessary to log on or su to root.
You can start a program at a chosen nice level with the nice command, and change the
nice level of a program that is already running with the renice command. For example,
you might start the program xyz at nice level -5, or change a running process with a
PID of 2500 to nice level 10.
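A runnable sketch of both operations (sleep stands in for a real program; since only root may assign negative nice values, the example uses positive ones):

```shell
# Start a command at nice level 10
# (a negative level such as -5 would require root: nice -n -5 xyz)
nice -n 10 sleep 30 &
pid=$!

# Raise the nice level of the already running process to 15
# (unprivileged users may only increase the nice value)
renice -n 15 -p "$pid"

# Confirm the new nice value
ps -o ni= -p "$pid"

kill "$pid"
```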
CPU affinity also enables the system administrator to bind interrupts to a single
physical processor or a group of processors (of course, this does not apply on a
single-CPU system). To change the affinity of a given IRQ, write a new CPU mask to the
file /proc/irq/<irq number>/smp_affinity. For example, to set the affinity of IRQ 19
to the third CPU in a system (without SMT), you would write the bitmask 0x4
(binary 100) to /proc/irq/19/smp_affinity.
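A minimal sketch, assuming IRQ 19 exists on the machine and the commands are run as root; the mask is a hexadecimal bitmap in which bit N selects CPU N, so 4 (binary 100) selects the third CPU:

```shell
# CPU masks are bitmaps: CPU0 = 0x1, CPU1 = 0x2, CPU2 = 0x4, ...
# Bind IRQ 19 to the third CPU by writing mask 4 (must be run as root)
echo 4 > /proc/irq/19/smp_affinity

# Verify the new mask
cat /proc/irq/19/smp_affinity
```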
How to test the tips above at home: generating high CPU utilization
The "stress" tool
stress is a workload generator designed to subject your system to a configurable
amount of CPU, memory, I/O, and disk stress.
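A typical invocation looks like this (the worker count and timeout are arbitrary; stress usually has to be installed separately from your distribution's package manager):

```shell
# Spin 4 workers on sqrt() for 60 seconds to drive up CPU utilization
stress --cpu 4 --timeout 60

# While it runs, watch the load climb from another terminal with
# top, vmstat 1, or sar -u 1 5
```

This gives you a controlled, repeatable load against which to practice the monitoring and affinity techniques described above.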