
Run levels:

0 > shut down HP-UX
S > single user mode, booted to the local console only, with the root FS mounted read-only (RO)
s > same as S, except the current terminal acts as the system console
1 > single user mode with the local FS mounted read-write (RW)
2 > multi-user state with CDE launched
3 > same as 2 but with NFS
4 > GUI (VUE started instead of CDE)
5,6 > reserved
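A quick way to check and change the run level (a minimal sketch; output varies by system):
#who -r -- shows the current run level
#grep initdefault /etc/inittab -- default run level used at boot
#init 2 -- switch to run level 2 (stops services started at level 3, e.g. NFS)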

Boot process:
1) PDC (Processor Dependent Code) gets executed
a) Checks the CPU
b) Checks stable storage for the boot path
c) Loads the ISL utilities from the LIF area of the boot disk
d) Here you can halt the boot using the ESC key and run commands such as PO, SEA.
2) ISL (Initial System Loader) gets loaded
a) Reads the AUTO file for the default kernel
b) Loads and runs HPUX (the secondary system loader) from the LIF area
c) Here you can halt the boot process and boot the system into single user mode. You can pass different
options to the SSL for the kernel vmunix, like hpux -is, hpux -lq, hpux -lm
3) HPUX (Secondary System Loader) runs
a) Uses the options and path names passed from ISL to load the kernel
b) By default loads /stand/vmunix
4) After the kernel vmunix gets loaded -
a) The swapper daemon starts with PID 0
b) The kernel runs /sbin/pre_init_rc
c) The kernel calls /sbin/init
d) /sbin/init reads /etc/inittab and calls -
i) /sbin/ioinit - to scan hardware and build the kernel I/O tree
ii) /sbin/bcheckrc - to check the filesystems listed in /etc/fstab
iii) /sbin/rc - to start additional services like lp, cron, CDE
iv) /usr/sbin/getty - to start and show the login prompt to the user.

UID range:
UIDs run from 1 to 60,000; 0 is reserved for root, and 1 to 100 are kept for system accounts.
You can also create users with UIDs above 60,000, but such a user will not be able to access any of the system resources.
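For example, to create a normal user with an explicit UID below 60,000 (appuser and the paths here are just made-up examples):
#useradd -u 150 -g users -m -d /home/appuser -s /sbin/sh appuser
#grep appuser /etc/passwd -- confirm the UID and home directory that were assigned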

Password file commands:


#vipw – to edit the passwd file with a lock, so other users cannot modify it while it is open
#pwck – check the passwd file for consistency
#pwconv – generate the shadow file
/etc/skel is the template directory. The default user profile is copied from this directory into the user's home
directory when a new user is created.

Hardware addressing:
c t d – controller, target and device numbers
Major no – identifies the kernel device class driver
Minor no – encodes the physical location, access options etc.
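To see how a device file maps to its driver, hardware path and major/minor numbers (c0t6d0 is just an example ctd):
#ll /dev/dsk/c0t6d0 -- shows the major and minor numbers of the block device file
#lssf /dev/dsk/c0t6d0 -- decodes the minor number into driver, hardware path and options
#ioscan -funC disk -- lists hardware paths together with their device files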

Max FS size:
                Range           Default
LVs per VG      1 to 255        255
PVs per VG      1 to 255        16
PEs per PV      1 to 65,535     1016
As the max PE size is 64 MB and the max is 65,535 PEs per PV, one can create a maximum of roughly
64 MB x 65,535 = 4 TB of file system.

Groups:
Groups are either primary or secondary. At a time a single user can be a member of one primary and 16
secondary groups, i.e. a total of 17 groups only.
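To check and change a user's group membership (username, grp1 and grp2 are placeholders):
#id username -- shows the UID, primary GID and secondary groups
#groups username -- lists all groups the user belongs to
#usermod -G grp1,grp2 username -- sets the secondary group list (the primary group stays as in /etc/passwd)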

80h error:
test3:root-/>scsimgr -v -f replace_wwid -C lunpath -I 21
Binding of LUN path 4/0/5/1/0/4/0.0x5006048452aa4347.0x4034000000000000 with new LUN
validated successfully

Patch naming conventions:


Patch name format is PHxx_yyyy
Where,
xx = area of patch
CO – General HPUX commands
KL – Kernel patches
NE – Network specific patch
SS – all other subsystem patches
yyyy = unique number
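To list the patches already installed and install one from a depot (PHKL_12345 and the depot path are placeholders):
#swlist -l product | grep -E 'PH(CO|KL|NE|SS)_' -- installed patches by area
#swinstall -s /tmp/patch_depot PHKL_12345 -- install a specific patch from a depot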

Adding LUN to VM:


Take the LUN IDs from the storage team (they will look like 06C1, 0709, 0471)
syminq | grep -i lunid
This will give you disk numbers like below.
0709 -- /dev/rdisk/disk100 -- 34GB
0471 -- /dev/rdisk/disk102 -- 68GB
Log in to the VM host
pvcreate -f /dev/rdisk/disk102
pvcreate -f /dev/rdisk/disk100
hpvmmodify -P dveaidb -a disk:avio_stor::disk:/dev/rdisk/disk102
hpvmmodify -P dveaidb -a disk:avio_stor::disk:/dev/rdisk/disk100
Log in to the guest and scan for the new disks to get the disk numbers under the guest
disk 6 0/0/0/0.2.0 sdisk CLAIMED DEVICE HP Virtual Disk
/dev/dsk/c0t2d0 /dev/rdsk/c0t2d0
disk 8 0/0/0/0.3.0 sdisk CLAIMED DEVICE HP Virtual Disk
/dev/dsk/c0t3d0 /dev/rdsk/c0t3d0
/dev/rdisk/disk7 /dev/rdsk/c0t2d0
/dev/rdisk/disk9 /dev/rdsk/c0t3d0
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate -s 64 -p 60 -e 12500 vg01 /dev/disk/disk7 /dev/disk/disk9
lvcreate -L 26624 /dev/vg01
lvcreate -L 38912 /dev/vg01
newfs -F vxfs -o largefiles /dev/vg01/rlvol1
newfs -F vxfs -o largefiles /dev/vg01/rlvol2
mkdir /eaidb -- mount point for the 26 GB lvol
mkdir /eaid1 -- mount point for the 38 GB lvol
fsadm -F vxfs -b 26624M /eaidb -- online resize, only needed if the lvol is later extended (fsadm works on the mounted FS)
fsadm -F vxfs -b 38912M /eaid1
pvchange -t 90 /dev/disk/disk7 -- set a 90 second IO timeout on each PV
pvchange -t 90 /dev/disk/disk9
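The notes above stop at creating the filesystems; assuming lvol1 is meant for /eaidb and lvol2 for /eaid1 (as the mkdir comments suggest), the missing mount step would be:
mount -F vxfs -o largefiles /dev/vg01/lvol1 /eaidb
mount -F vxfs -o largefiles /dev/vg01/lvol2 /eaid1
and add matching lines to /etc/fstab so they come back at boot, e.g.
/dev/vg01/lvol1 /eaidb vxfs delaylog,largefiles 0 2
/dev/vg01/lvol2 /eaid1 vxfs delaylog,largefiles 0 2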

Adding SWAP:
First check how much space is left in vg00 which can be used as secondary swap:
check free PE * PE size (see the vgdisplay example after this block)
swapinfo -tam
lvcreate -L sizeMB -C y -r n /dev/vg00
vi /etc/fstab
/dev/vg00/lvolxx ... swap pri=1 0 1
swapon -p 1 /dev/vg00/lvolxx
add -f if the above command fails.
swapinfo -tam
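To work out how much space is free in vg00 for the new swap lvol (the free PE * PE size check mentioned above):
vgdisplay vg00 | grep -E 'PE Size|Free PE' -- free space in MB = Free PE x PE Size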

LAN commands:
# lanscan -q
It should show :
2
3
900
901 01
902
903
904
Please note the PPA sequence under the team - 900 / 901.
Open a command prompt and ping the IP continuously.
# lanadmin -r 0 -- reset the interface (PPA 0)
Check whether you receive more than 3-4 RTOs (request timeouts) during the reset.
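To check the link before and after the reset (0 and 900 are example PPAs, not fixed values):
# lanscan -- maps hardware paths to PPA numbers and shows link state
# lanadmin -x 900 -- shows speed and duplex of the APA aggregate (PPA 900)
# netstat -in -- per-interface packet and error counters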

Bad logins checks:


/usr/sbin/acct/fwtmp -X < /var/adm/btmps > /tmp/badlogins.log

Booting nPAR:
How to boot an nPAR in vPars mode: halt at EFI, go to the EFI prompt and run
hpux /stand/vpmon
Boot path of vpmon:
vparstatus -m
0.0.4.1.0.1.0
testsvr:root-/>vparstatus -m
Console path: No path as console is virtual
Monitor boot disk path: 0.0.4.1.0.1.0
Monitor boot filename: /stand/vpmon
Database filename: /stand/vpdb
Memory ranges used: 0x0/349224960 monitor
0x14d0c000/237568 firmware
0x14d46000/581632 monitor
0x14dd4000/688128 firmware
0x14e7c000/1228800 monitor
0x14fa8000/50692096 firmware
0x18000000/134213632 monitor
0x3ffec000/81920 firmware
0x707fc000000/67108864 firmware
0x787fc000000/67108864 firmware
0x807fc000000/67108864 firmware
0x887fc000000/67108864 firmware

Breaking HW mirror in blade:


Shell> drvcfg -s
Press Enter to select the SAS1068 adapter
RAID Properties
Manage Array
Delete Array
Y
If you now select
RAID Properties
it gives the option to create IR volumes, which means the hardware mirror is broken.

Building one depot file:


First, use swcopy to copy all your depots to one directory depot:
swcopy -x enforce_dependencies=false -s /some/where/thing1.depot \* @ /some/place/directorydepot
swcopy -x enforce_dependencies=false -s /some/where/thing2.depot \* @ /some/place/directorydepot
NOTE: you must escape or quote the asterisk (as above), because you don't want the shell to replace it
with a list of file names.
The resulting directory depot is accessible over the network by swinstall as-is. (You can install from
another HP-UX system like this: "swinstall -s hostname:/some/place/directorydepot")
But if you want to create a single .depot file, you must use swpackage as a second step. (The .depot files
are actually images of swinstall tapes, hence "media_type=tape".)
swpackage -x media_type=tape -s /some/place/directorydepot \* @ /some/location/everything.depot
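To confirm what landed in the directory depot and in the single-file depot before using them (paths as above):
swlist -d @ /some/place/directorydepot -- list products in the directory depot
swlist -d @ /some/location/everything.depot -- list products in the tape-format depot file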

Change MP IP:
connect to the console
cm -- to go to the command menu
cl -- to view
ce -- to edit
xd -- reset the MP

Clone mount point:


symdg list --- check the DG list
symdg show testdb_flash_dg --- check the DG target devices (get the disk numbers from here)
vgimport -v testdbvg07 /dev/disk/disk284 /dev/disk/disk285 -- import them with the relevant VG name
vgexport -p -s -m /tmp/vg07.map testdbvg07 -- export the VG again (preview mode, to generate a fresh map file)
cp /tmp/vg07.map /usr/emc/scripts/testdb_clone/vgtest/vg07.map -- copy the fresh map file over the one used by the
original script (take a backup first)
./unmount_testdb_clone.ksh -- unmount
./mount -- mount again

Cluster commands:
#cmviewcl -v
#cmhaltpkg -v misbicl
#cmhaltnode -v -n misdbn1 -n misdbn2
#cmcheckconf -v -C /etc/cmcluster/misbicl.cfg
#cmapplyconf -v -C /etc/cmcluster/misbicl.cfg
#cmcheckconf -C /etc/cmcluster/misbicl.cfg -P /etc/cmcluster/pkg/misbi/misdbpkg.conf
#cmapplyconf -C /etc/cmcluster/misbicl.cfg -P /etc/cmcluster/pkg/misbi/misdbpkg.conf
#cmruncl -v
#cmrunpkg -n misdbn1 -v misdb
#cmviewcl -v
#cmhaltcl -- halt whole cluster
To enable AUTO_RUN for a package
cmmodpkg -e pkg_name
cmmodpkg -d pkg_name -- to disable
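A typical manual package move between the two nodes, using the names above (a sketch, not the exact site procedure):
#cmhaltpkg misdb -- halt the package where it currently runs
#cmrunpkg -n misdbn2 misdb -- start it on the other node
#cmmodpkg -e misdb -- re-enable AUTO_RUN / package switching
#cmviewcl -v -- confirm the package and nodes are up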

TOC of vpar:
When the CLI appears to be hung and the server won't accept ssh connections:
sh: The fork function failed. Too many processes already exist.
Reset the vpar with the TOC command below. It will light the attention LED on the SD and send alerts to HP as well.
vparreset -p <vpar_name> -t

Current boot disk:


root-/>echo "boot_string/S" | adb /stand/vmunix /dev/kmem
boot_string:
boot_string: (12/0/9/1/0.0.0;)/stand/vmunix
Or check the first disk shown by setboot -v.

Add other paths of disks in vg:


vgdsf -c vg01
It will add persistent device files to the PV names in the VG.

Export display for GUI:


root-/>export DISPLAY=172.17.1.1:0.0
root-/>xclock
Extend filesystem with new LUN:
===================================================
On 11i v1 and v2, pvcreate -f formats the drive.... caution!! don't format any other drive.
On 11i v3, the -f option won't allow you to format a drive if it is in use by another VG.
===================================================
Ask the storage team for the LUN ID.
syminq | grep -i lunid ---- to get the disk ctd number (you will see alternate paths too)
host:root-/>syminq | grep -i 0FAD
/dev/rdsk/c4t13d7 M(8) EMC SYMMETRIX 5773 1300FAD000 35788800
/dev/rdsk/c6t13d7 M(8) EMC SYMMETRIX 5773 1300FAD000 35788800
OR
cd /dev/dsk
ls -lrt ---- check the latest created files to determine the ctd number
OR
ioscan -funC disk>old_info
ioscan -fnC disk>new_info
diff old_info new_info ---- check the newly added disks
=======================================================
pvcreate -f /dev/rdsk/c4t13d7 -- create PVs on the discovered disks
pvcreate -f /dev/rdsk/c4t14d0
Note: INCLUDE ALL ALTERNATE PATHS IN THE COMMAND BELOW
vgextend /dev/vg02 /dev/dsk/c4t13d7 /dev/dsk/c6t13d7 /dev/dsk/c4t14d0 /dev/dsk/c6t14d0 --
extend the target VG onto the new PVs
lvextend -L 73456 /dev/vg02/lvol1 ---- extend the lvol
fsadm -F vxfs -b 73456M /iamorad1 ---- extend the filesystem online
bdf --- check the new size of the mount point
fsadm -o largefiles /iamorad1 ---- enable largefiles support
fsadm /iamorad1 --- confirm whether largefiles is enabled or not
========================================================

Firmware upgrade blade:


Download the firmware and upload it to the server. It should be a tar file; untar it, and it will contain the files below.
hpoa330.bin
PF_CTAHISYS0425EFI.tar
bl860c_1_92_install_manual.txt
fweupdate_1p92.efi
update_SFW.nsh
Get the path of primary disk
setboot -v
Copy the firmware to the EFI partition which is _p1
/usr/sbin/efi_mkdir -d /dev/rdsk/c2t1d0s1 /efi/hp/firmware
/usr/sbin/efi_cp -d /dev/rdisk/disk1_p1 update_SFW.nsh /efi/hp/firmware/update_SFW.nsh
/usr/sbin/efi_cp -d /dev/rdisk/disk1_p1 fweupdate_1p92.efi /efi/hp/firmware/fweupdate_1p92.efi
Check whether it has been copied properly
/usr/sbin/efi_ls -d /dev/rdisk/disk1_p1 /efi/hp/firmware
Then reboot the system and halt at EFI shell.
At EFI shell
fs0:
cd EFI/HP/FIRMWARE
dir
fs0:\EFI\hp\firmware> dir
Directory of: fs0:\EFI\hp\firmware
05/20/11 02:03a <DIR> 4,096 .
05/20/11 02:03a <DIR> 4,096 ..
05/22/11 08:58p 10,168,320 fweupdate_1p92.efi
05/22/11 08:58p 26 update_SFW.nsh
2 File(s) 10,168,346 bytes
2 Dir(s)
Run .nsh file at prompt
fs0:\EFI\hp\firmware> update_SFW.nsh
update_SFW.nsh> fweupdate_1p92.efi -mnuF
*************************************************************************
**** ****
**** FWEUPDATE ****
**** EFI Firmware Update Utility for IPF Systems ****
**** (c) Copyright Hewlett-Packard Company, 2001-2006 ****
**** All rights reserved. ****
**** ****
**** v1.00 ****
**** ****
*************************************************************************
Executing Command line options: -mnuF......................................... update will start
Lastly it will reset the MP.
Open a new session to the MP.
Go to the command menu
cm
sysrev
check the new firmware version (system FW 4.25)

Currently used files or mount points:


root-/>fuser -cuk /u01
/u01: 6830mto(oradb)
root-/>fuser -cuk /u01
/u01:
root-/>/usr/local/bin/lsof |grep /u01
sqlplus 6830 oradb txt REG 64,0x1000b 71920 14886 /u01
(/dev/vg02/lvol11)
sqlplus 6830 oradb mem REG 64,0x1000b 408315 28142 /u01 --
zoneinfo/timezlrg.dat
sqlplus 6830 oradb mem REG 64,0x1000b 5337208

glance license error:


# glance
ERROR : license error :The entity is not licensed.
# /opt/OV/bin/oalicense -resolve
License resolution is successfully completed

chmod 644 /var/opt/OV/datafiles/sec/lic/reslic.dat


chmod 655 /var/opt/OV/datafiles/sec/lic

glance not working:


/opt/perf/bin/ovpa stop all
/opt/perf/bin/midaemon -T
Check whether midaemon has stopped (use ps -ef | grep mid). If it is still running, kill the process.
/opt/perf/bin/ttd -kill
/opt/OV/bin/ovc -kill
rm -rf /var/opt/perf/datafiles/RUN

Start OVPA
/opt/perf/bin/midaemon -bufsets 16 -skipbuf 8 -smdvss 512M
/opt/perf/bin/ovpa start all
/opt/OV/bin/ovc -start
/opt/perf/bin/perfstat -p

In order to permanently use the startup parameters for midaemon, it is necessary to edit the
"/etc/rc.config.d/ovpa" file; add the following two lines, right before the line "MWA_START=1":

MIPARMS="-p -bufsets 32 -skipbuf 8 -smdvss 512M"

export MIPARMS

Also, scopeux is complaining about corrupted logfiles; in this scenario it is better to start
with a new set of logfiles. After modifying the "/etc/rc.config.d/ovpa" file, please do the
following:

Stop OVPA
/opt/perf/bin/ovpa stop all
/opt/perf/bin/midaemon -T
/opt/perf/bin/ttd -kill
rm /var/opt/perf/datafiles/RUN

Move the logfiles to a temporary location


mv /var/opt/perf/datafiles/log* /var/opt/perf/datafiles/tmp/

Start OVPA
/opt/perf/bin/ovpa start all

And then you can start glance. We can check if midaemon is running with the startup parameter by
using the command "ps -ef | grep midaemon", you should see an output similar to the following:

# ps -ef | grep midaemon


root 19922 1 0 Oct 3 ? 5:08 /opt/perf/bin/midaemon -p
-bufsets 32 -skipbuf 8 -smdvss 512M
root 14248 14165 1 16:16:35 pts/0 0:00 grep midaemon

Golden image build:


/opt/ignite/data/scripts/make_sys_image -s local -d /tmp/shri

Golden image restore:


On the golden image server check
cat /etc/bootptab
add the server IP entry
also add the archive path under
/var/opt/ignite/archives/B.11.31/B.11.31.golden_image.cfg
go to the destination server
power up and get into EFI
dbprofile -dn testprofile -sip 172.17.102.138 -cip 172.17.102.165 -gip 172.17.102.254 -m
255.255.255.0 -b "/opt/ignite/boot/nbp.efi"
where
sip = ignite server ip
cip = client i.e. destination server ip
gip = gateway ip
-m = subnet mask
then boot the server with this profile
lanboot select -dn testprofile

hpvm tools:
swinstall -s /opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd
# ll
total 18640
-rw-r--r-- 1 bin bin 9543680 Jan 11 2011 hpvm_guest_depot.11iv3.sd
swinstall this depot
IT REQUIRES REBOOT !!

hpvm status:
root-/>hpvmstatus -s
[HPVM Server System Resources]

vPar/VM types supported by this VSP = Shared


Processor speed = 1598 Mhz
Total physical memory = 392997 Mbytes
Total number of operable system cores = 16
CPU cores allocated for VSP = 0
CPU cores allocated for vPars and VMs = 16
CPU cores currently in use or reserved for later use = 9
Available VSP memory = 31751 Mbytes
Available swap space = 108279 Mbytes
Total memory allocated for vPars and VMs = 350208 Mbytes
Memory in use by vPars and VMs = 186624 Mbytes
Available memory for vPars and VMs = 163584 Mbytes
Available memory for 16 (max avail.) CPU VM = 153152 Mbytes
Available memory for 7 (max avail.) CPU vPar = 162816 Mbytes
Maximum vcpus for an HP-UX virtual machine = 16
Maximum vcpus for an OpenVMS virtual machine = 8
Maximum available vcpus for a VM = 16
Available CPU cores for a virtual partition = 7
Available entitlement for a 1 way virtual machine = 1598 Mhz
Available entitlement for a 2 way virtual machine = 1598 Mhz
Available entitlement for a 3 way virtual machine = 1598 Mhz
Available entitlement for a 4 way virtual machine = 1598 Mhz
Available entitlement for a 5 way virtual machine = 1598 Mhz..............................

hyperthreading:
#setboot -m on
and then reboot. This enables HT at the hardware level.
#kctune lcpu_attr=1
This enables HT in the OS.
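To confirm the setting after the reboot (a quick sanity check, output differs per box):
#kctune lcpu_attr -- should report 1 when HT is enabled in the OS
#ioscan -kfC processor -- the logical CPU count doubles when HT is active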

Procedure to configure ignite client:


On the Ignite client, install the Ignite bundle and the Ignite installation utilities for the various OS releases
as desired. Also make sure that the Ignite bundle installed on the client is the same version as on the Ignite
server.

# swlist -l bundle | grep -i ignite


IGNITE C.7.6.100 HP-UX Installation Utilities (Ignite-UX)
Ignite-UX-11-11 C.7.6.100 HP-UX Installation Utilities for Installing 11.11 Systems

Put an entry for the Ignite server in the host's /etc/hosts


x.x.x.x blade8

Put an entry for the host in the Ignite server's /etc/hosts


cat /etc/rc.config.d/nfsconf
NFS_CLIENT=1
NFS_SERVER=0

/sbin/init.d/nfs.client stop
/sbin/init.d/nfs.client start
Export FS from ignite server
/ignite_image/ and /var/opt/ignite/recovery/archives
mount test2:/var/opt/ignite/recovery/archives/qaapp /Test -- to test nfs

host=`hostname`
/opt/ignite/bin/make_net_recovery -s ignitesvr -a ignitesvr:/primary_copy/testsrv -x inc_entire=vg00 >>
/tmp/ignite_status
subject=`tail -n 10 /tmp/ignite_status |grep -i make_net_recovery`
/usr/bin/mailx -s "$host $subject" abc@xyz.com < /tmp/ignite_status
File recovery from ignite:
To recover a single file from your make_net_recovery archive do the following on the system where the
file is to be recovered (I'll restore stand/bootconf in this example)
mount igniteserver:/var/opt/ignite/recovery/archives/$(uname -n) /mnt
cd /
gzip -dc /mnt/2007-05-23,05:00 | tar xvf - stand/bootconf

Itanium SD hardware addressing system:


the first number (4 or 6) is the blade/cell number
from the io output check which cab / bay / chassis each cell is connected to (e.g. cell 4 -> cab 0 bay 1 chassis 1, and cab 0 bay 0 chassis 3); the
last number is the LBA printed on the physical hardware .. go track it
[Available I/O devices (path)]: 4.0.8
4.0.9
4.0.10
4.0.14
6.0.8
6.0.9
6.0.10
6.0.14
[SD] MP:CM> io

--------------------------+
Cabinet | 0 | 1 |
--------+--------+--------+
Slot |01234567|01234567|
--------+--------+--------+
Cell |XXXXXXXX|XXXX....|
IO Cab |0.0.0.0.|1.1.....|
IO Bay |1.0.1.0.|1.0.....|
IO Chas |3.1.1.3.|3.1.....|

[SD] MP:CM> cp

-------------------------------+
Cabinet | 0 | 1 |
-------------+--------+--------+
Slot |01234567|01234567|
-------------+--------+--------+
Partition 0 |XXXX....|........|
Partition 1 |........|XXXX....|
Partition 2 |....XXXX|........|

Identify physical disk into VM:


#hpvmdevinfo -P <vmname>
Virtual Machine Name  Device Type  Bus,Device,Target  Backing Store Type  Host Device Name    Virtual Machine Device Name
====================  ===========  =================  ==================  ==================  ===========================
<vmname>              disk         [0,1,0]            disk                /dev/rdisk/disk336  /dev/rdisk/disk4
<vmname>              disk         [0,1,1]            disk                /dev/rdisk/disk332  /dev/rdisk/disk5
<vmname>              disk         [0,1,3]            disk                /dev/rdisk/disk675  /dev/rdisk/disk9

To match a guest disk with the host disk backing it, read the PVID directly from the disk header with xd:
xd -An -j8200 -N16 -tx /dev/disk/disk74

Identify disk PVID on guest first


vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk76
70608a28 4ec7a7ff 70608a28 4ec7a942
vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk72
70608a28 4ec7a7ef 70608a28 4ec7a942
vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk74
70608a28 4ec7a7f6 70608a28 4ec7a942

Get the PVID from the host as well and match it against the PVIDs of the guest disks. If it matches, it is the same
disk.

host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk532


70608a28 4ec7a7ff 70608a28 4ec7a942
host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk538
70608a28 4ec7a7f6 70608a28 4ec7a942
host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk526
70608a28 4ec7a7ef 70608a28 4ec7a942

Memory upgrade in vpar:


Upgrade Memory from 30GB to 38GB
Shut down the server
#shutdown -hy 0
Check vparstatus with the command below
#vparstatus
Run the command below on another vpar which lies in the same nPAR
#vparmodify -p vparname -a mem::2048 --- to be on the safe side, add 2 GB first
#vparmodify -p vparname -m mem::38912
Check vparstatus; it should show 38GB memory
#vparstatus
Once it shows 38GB, boot the vpar
#vparboot -p vparname -o "-lq"
Check vpar version
#vparinfo -P

Mirroring in v3:
echo "3
EFI 400MB
HPUX 100%
HPSP 500MB">/tmp/partitionfile
echo yes|idisk -wf /tmp/partitionfile /dev/rdisk/disk9 -- partition the mirror disk
insf -e -C disk
mkboot -e -l /dev/rdisk/disk9 -- place the boot utilities in the EFI partition
efi_ls -d /dev/rdisk/disk9_p1
lifls -l /dev/rdisk/disk9_p2
mkboot -a "boot vmunix" /dev/rdisk/disk9 -- write the AUTO file
efi_cp -d /dev/rdisk/disk9_p1 -u /EFI/HPUX/AUTO /tmp/x; cat /tmp/x -- verify the AUTO file contents
pvcreate -B -f /dev/rdisk/disk9_p2
vgextend vg00 /dev/disk/disk9_p2
for i in 1 2 3 4 5 6 7 8 9 10
do
lvextend -m 1 /dev/vg00/lvol$i /dev/disk/disk9_p2
done
lvlnboot -r /dev/vg00/lvol3 -- root
lvlnboot -b /dev/vg00/lvol1 -- boot
lvlnboot -s /dev/vg00/lvol2 -- swap
lvlnboot -d /dev/vg00/lvol2 -- dump
lvlnboot -v
setboot -a 0/2/1/0.0.0.0.0 -- set the new disk as the alternate boot path
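Once the lvextend loop finishes, the mirror copies resync in the background; a rough way to watch progress (the same check used in the disk replacement section later):
lvdisplay -v /dev/vg00/lvol* | grep -i stale | wc -l -- should drop to 0 when the sync is complete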

Modifying vpar name:


vparmodify -p oldname -P newname

Mount ISO:
Option 1 - using the PFS daemons:
nohup pfs_mountd &
nohup pfsd &
pfs_mount -o xlat=UNIX pathToIso mountPoint
Option 2 - copy the ISO image into an LV and mount that:
mkdir /isoimg
lvcreate -n ISOLV -L 3096 /dev/vg00
dd if=isoimage of=/dev/vg00/rISOLV bs=8192
mount /dev/vg00/ISOLV /isoimg

Device disks are not visible on server:


IF LUNS ARE NOT VISIBLE ON THE SERVER THEN CHECK SYSLOG FOR THE ERROR BELOW

May 3 16:46:05 destiantion vmunix: Evpd inquiry page 83h/80h failed or the current page 83h/80h
data do not match the previous known page 83h/80h data on LUN id 0x0 probed beneath the target
path (class = tgtpath, instance = 23) The lun path is (class = lunpath, instance 69).Run 'scsimgr
replace_wwid' command to validate the change

then run the command


scsimgr -v -f replace_wwid -C lunpath -I 69 (69 is instance number)

lun will be visible


Moving mount point across servers:
For moving the /dumps mount point across servers
On source
vgexport -s -v -m /var/VGMAP/vg02.map /dev/vg02
umount /dumps
vgchange -a n vg02
vgexport /dev/vg02
Ask storage to remove the luns from old server and allocate them to new server
On destination server
If IVM: hpvmmodify -P devsap -a disk:avio_stor::disk:/dev/rdisk/disk
mkdir /dumps
mkdir /dev/vg02
mknod /dev/vg02/group c 64 0x020000
vgimport -v -s -m /tmp/vg02.map /dev/vg02
vgchange -a y vg02
vi /etc/fstab
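After importing and activating the VG, finish with the fstab entry and mount (lvol1 is just an example lvol name):
/dev/vg02/lvol1 /dumps vxfs delaylog 0 2 -- add this line to /etc/fstab
mount /dumps
bdf /dumps -- confirm the filesystem is back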

MSA and SAN switch IP config:


1> MSA.
Connect cable to console. Default password
username: manage
password: !manage
For setting the ip to MSA
set network-parameters ip <address> netmask <netmask> gateway <gateway> controller a|b

For viewing the ip configuration


show network-parameters

2> SAN Switch


Connect cable to console. default password
username: admin
password: password@123
SAN switch (will ask the further info automatically)
switch:admin> ipaddrset
Ethernet IP Address [192.168.74.102]:
Ethernet Subnetmask [255.255.255.0]::
Gateway IP Address [192.168.74.1]:
DHCP [Off]:off

For viewing the ip configuration


ipaddrshow

measureware services restart:


mwa stop all
midaemon -T
ttd -k

cd /var/opt/perf/datafiles
nowis=`date +%d%b%y-%H:%M`
mkdir /var/opt/perf/datafiles.old.`echo $nowis`
cp log* /var/opt/perf/datafiles.old.`echo $nowis`

mwa start all
mwa status all

Sharing in nfs:
vi /etc/dfs/dfstab
share -F nfs -o anon=2 /dvd

vi /etc/rc.config.d/nfsconf
NFS_CLIENT=0
NFS_SERVER=1
AUTOFS=0

/sbin/init.d/nfs.client stop
/sbin/init.d/nfs.server stop
/sbin/init.d/nfs.core stop
/sbin/init.d/nfs.core start
/sbin/init.d/nfs.server start

#unshareall -- to unshare all

Changing priority with nice:


root-/>renice -n -20 17593
17593: old priority 0, new priority -20

Booting npar:
go to cm (command menu)
pe -- power cycle
p -- select partition
4 -- select the partition number
select on
check the vfp for booting status..
then go to the console of that npar

npar info:
vparstatus -N 1:1 -A
to list all available resources on npar

PARISC install:
Installing npar
insert the DVD or ignite tape, go to the console
co:
Menu: Enter command or menu > SEA
Searching for potential boot device(s) on the core cell
This may take several minutes.
To discontinue search, press any key (termination may not be immediate).
IODC
Path# Device Path (dec) Device Type Rev
----- ----------------- ----------- ----
0/0/6/1/0/4/0.0 Fibre Channel Protocol 14
P0 0/0/9/1/0.5 Sequential access media 4
0/0/10/1/0.0 Fibre Channel Protocol 10
P1 0/0/11/1/0.0 Random access media 4
0/0/12/1/0.0 Fibre Channel Protocol 10
0/0/14/1/0/4/0.0 Fibre Channel Protocol 14

To boot from tape

Main Menu: Enter command or menu > bo 0/0/9/1/0.5


BCH Directed Boot Path: 0/0/9/1/0.5
Do you wish to stop at the ISL prompt prior to booting? (y/n) >> n
Initializing boot Device.
Boot IO Dependent Code (IODC) Revision 4
Boot Path Initialized.
HARD Booted.
ISL Revision A.00.44 Mar 12, 2003
ISL booting hpux (;0):INSTALL
Boot
: tape(0/0/9/1/0.5.0.0.0.0.0;0):WINSTALL
12283904 + 2244720 + 2718888 start 0x20c568

NTP config:
echo "x.x.x.x ntpserver" >> /etc/hosts
vi /etc/rc.config.d/netdaemons
export NTPDATE_SERVER="ntpserver"
export XNTPD=1
export XNTPD_ARGS=

echo "server ntpserver" >> /etc/ntp.conf ; cat /etc/ntp.conf


date;/sbin/init.d/xntpd stop;/sbin/init.d/xntpd start
if even then you are not getting output from ntpq -p,
check /etc/rc.config.d/
if multiple copies of the netdaemons file are there, delete them and restart the xntpd service

Online PV expansion i.e. storage LUN size increased:


For online expansion of a physical disk, i.e. the size increases but the LUN stays the same.
/dev/rdisk/disk16 8+267 -> 275GB (online expansion)
Vg to be modified vg01
diskinfo is now 275GB // previously 8GB
#vgmodify -t vg01 // to check the table available for disk
#vgmodify -r -a -p 36 -e 20988 vg01 // -r is the review option; the -p and -e values come from the output
above.
#vgmodify -a -p 36 -e 20988 vg01 // this will make changes ...

Port assignment in blade enclosure:


To assign port in vlan of blade enclosure
login to enclosure OA via putty
connect interconnect 0 (to connect to switch)
menu
select switch configuration
select VLAN Menu
select VLAN Port Assignment
select edit
select the port and mark it as "untagged" in the target vlan
select save
make same changes in switch residing in bay 1 also.

Printer commands:
Check the status of printer:
#lpstat myprinter
myprinter-3982 appldb priority 0 Apr 14 12:16
OFGBDa09990.t 578 bytes

printer queue for myprinter


Printer Status: On line
Warning: myprinter is down
Check if printer is on network and reachable
#/usr/sbin/ping myprinter
PING myprinter: 64 byte packets
64 bytes from x.x.x.x: icmp_seq=0. time=91. ms
64 bytes from x.x.x.x.: icmp_seq=1. time=110. ms
#enable myprinter
printer "myprinter" now enabled
#lpstat myprinter
printer queue for myprinter
Printer Status: On line
printerhost: myprinter: ready and waiting
no entries

To cancel the queue


#cancel myprinter -e

To sent test print


#cat test | lp -d myprinter

Reduce blade CPU:


Boot server and run below commands at EFI shell
info cpu
cpuconfig 0 off
where 0 is the CPU socket number which you want to deconfigure.

Replace faulty disk online:


-Check old disk info
ioscan -m lun /dev/disk/disk175
-deactivate old disk
pvchange -a n /dev/disk/disk175_p2
-replace new disk with old one
ioscan -fnCdisk
insf -e -C disk
-make sure old disk is offline and no_hw and new disk is online
-replace wwid with below
scsimgr replace_wwid -D /dev/rdisk/<old disk>
-redirect the device files with
io_redirect_dsf -d /dev/disk/<old disk> -n /dev/disk/<new disk>
-repeat the usual boot disk setup steps up to mkboot to create the new DSF
io_redirect_dsf -d /dev/disk/disk233 -n /dev/disk/disk175
-make sure the old disk is now shown as online and claimed in ioscan
-restore config
vgcfgrestore -n /dev/vg00 /dev/rdisk/disk175_p2
-Activate disk
pvchange -a y /dev/disk/<old disk>_p2
pvchange -a y /dev/disk/disk175_p2

-make sure sync is started


lvdisplay -v /dev/vg00/lvol* |grep -i stale|wc -l

Restore system using network ignite backup:


Goto igniteserver
#cd /var/opt/ignite/clients
copy the source network ignite config into the MAC directory of the destination server
#mkdir 0x6EBC14DA8DD8 ----------- the MAC can be obtained using lanaddress at EFI, or ---> hpvmstatus
<servername> ---> MAC ID
#cp -pr 0x001E0B5CB6A8 (source mac) 0x6EBC14DA8DD8 (dest mac)
chown -R bin:bin 0x6EBC14DA8DD8
chmod -R 755 0x6EBC14DA8DD8
go to the destination server, power up and get into EFI
dbprofile -dn testprofile -sip x.x.x.x -cip x.x.x.x -gip x.x.x.x -m 255.255.255.0 -b
"/opt/ignite/boot/nbp.efi"
where
sip = ignite server ip
cip = client i.e. destination server ip
gip = gateway ip
-m = subnet mask
then boot the server with this profile
lanboot select -dn testprofile
continue the setup
Replace faulty disk online (needs re-mirroring):
To remove the bad PE and then add the new one, first deactivate the PV
#pvchange -a n /dev/disk/disk3_p2
reduce all lvols' mirror copies forcefully
#lvreduce -k -m 0 /dev/vg00/lvol1 /dev/disk/disk3_p2
IF THERE ARE STALE LE PRESENT USE
lvreduce -k -m 0 /dev/vg00/lvol1 2 --- here 2 is the PV key shown in the lvdisplay -v output, i.e. PV2 below

--- Logical extents ---


LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000 /dev/disk/disk2_p2 00000 current /dev/disk/disk3_p2 00000 current
00001 /dev/disk/disk2_p2 00001 current /dev/disk/disk3_p2 00001 current

now reduce vg forcefully


#vgreduce -f vg00
now remove the HD, insert the new HD in the slot, run ioscan, and re-mirror again.

Root FS getting full:


Remove any core files
/var/adm/crash
find / -xdev -name core -exec rm {} \;
Another thing to look for in the '/' file system is a "real" file in the /dev directory.
Do this:
# cd /dev
# find . -type f | xargs ls -l
If this returns anything then examine it closely. There should be nothing other than device files in the
/dev directory structure.
Another thing you could check is whether someone made an error using a tar command.
# ll /dev/rmt

Sharing dvd:
on host
#swreg -l depot /mydvd
on guest
#swinstall -s <host IP>:/mydvd

Single user mode:


After you type 'bo pri', the system will prompt you 'interact with ipl?'
Answer with 'y'.
The prompt will be:
ISL>
Type 'hpux -is'
that will put you into single user mode.
passwd root

SMTP config:
the config file is /etc/mail/sendmail.cf
echo test | sendmail -v abc@xyz.com

DMxyz.com
Dj<hostname>.com
Dsmailserver_name.xyz.com
#C{E}root << hash (comment out) this entry

/sbin/init.d/sendmail stop
/sbin/init.d/sendmail start

If it won't start, set the server flag to 1 in


vi /etc/rc.config.d/mailservs

Taking tape root ignite:


mt -t /dev/rmt/2mn status
insf
nohup make_tape_recovery -I -v -a /dev/rmt/2mn -x inc_entire=vg00 -t "Ignite `hostname` 25-Apr-2011"
&
cd /var/opt/ignite/recovery/latest
tail -f recovery.log

updateux utility:
root-/>swlist -s /cdrom | grep -i oe
FIFOENH B.11.31.02 Fifo Performance Enhancement
HPUX11i-BOE B.11.31.1203 HP-UX Base Operating Environment
HPUX11i-VSE-OE B.11.31.1203 HP-UX Virtual Server Operating Environment

mount x.x.x.x:/cdrom /cdrom


update-ux -s /cdrom

UNIX95:
Setting UNIX95=1 enables XPG4 behaviour for ps, which allows the -o format options (here: largest memory users matching "scx"):
UNIX95=1 ps -ef -o vsz= -o pid= -o comm= | sort -rnk1 | awk '{ print $1/1024" MB "$2" "$3; }' | grep -i scx

VM hang issues:
Remove dynamic memory if the VM hangs due to a memory crunch:
#hpvmmodify -P name -x ram_dyn_type=none
Check the parameters below (the dynamic memory section will mostly disappear from hpvmstatus)
[Dynamic Memory Information]
Type : driver
Minimum memory : 512 MB
Target memory : 2106 MB
Memory entitlement : Not specified
Maximum memory : 2048 MB <--------------
Current memory : 2106 MB
Comfortable minimum : 8186 MB
Total memory : 8186 MB
Free memory : 0 MB
Available memory : 9 MB

Booting vpar using tape:


go to a running vpar
create an alias for the tape device connected to SCSI and then boot the vpar
root-/>vparmodify -p apps3 -a io:5/0/3/1/0.5:TAPE
root-/>vparboot -p apps3 -B TAPE
vparmodify -p freevpar2 -d cpu::6
vparmodify -p freevpar2 -d mem::20480

VxFS upgrade:
root-/>vxupgrade -n 6 /portalapp
vxfs vxupgrade: V-3-22591: /dev/vg01/rlvol1: current version is 4; can only upgrade to version 5.
root-/>vxupgrade -n 5 /portalapp -- the disk layout must be upgraded one version at a time
root-/>vxupgrade -n 6 /portalapp
root-/>fstyp -v /dev/vg01/rlvol1
vxfs
version: 6
f_bsize: 8192
f_frsize: 8192
f_blocks: 17891328
f_bfree: 3018517
f_bavail: 2994935
