Unix Concepts
What is Unix?
What are OS, Kernel, Shell, File system, Process
and Daemons?
What are multitasking, multiuser and
distributed computing?
What is UNIX?
UNIX is a network operating system initially developed at Bell Labs, with multiuser,
multitasking and distributed computing capabilities. The effort was pioneered by Dennis
Ritchie, and the OS is written in the C language.
The features that made UNIX a hit from the start are: multitasking capability, multiuser
capability, portability, UNIX programs and a library of application software.
Multitasking: lets a computer do several things at once, such as printing out one file while
the user edits another file.
Multiuser: permits multiple users to use the computer at the same time.
System portability: permits movement from one brand of computer to another with a
minimum of code changes.
UNIX tools: UNIX comes with hundreds of programs that can be divided into two classes,
integral utilities and tools.
Layered model: User → Tools and Apps → Shell → Kernel → Hardware
Operating System
• A set of programs that manages all computer
operations and provides an interface between the user
and the hardware.
• It is an environment in which applications can run.
• It accepts user input, routes it to the proper device
for processing, and returns program output to the
user. The operating system interacts with peripherals
through device drivers written specially for each
component of the system.
Kernel
• Acts as an intermediary between applications running on a
computer and the hardware inside the computer. It controls
physical and virtual memory, schedules processes, and starts and
stops daemons. All commands interact with the kernel. The kernel
is a major part of the operating system.
• Kernels are classified on the basis of their architecture as
"monolithic" and "microkernel".
• Monolithic: uses a single big kernel. Any change requires a
relink of the kernel followed by a reboot of the system. E.g.: SCO
UNIX and HP-UX
• Microkernel: consists of a core kernel with a set of loadable
kernel modules. The kernel modules are loaded on demand and
the kernel is relinked dynamically without a reboot. It is possible
to load/unload kernel modules dynamically without affecting
system performance. This imparts a plug-and-play feature to
the OS. E.g.: Solaris and Linux
Shell: the UNIX command interpreter
The shell interprets and translates commands entered by the user into actions
performed by the system. There are six shells by default in Solaris 8: the
Bourne shell, Korn shell, C shell, Z shell, TC shell and Bash.
The default shell in Solaris is the Bourne shell.
The shell is a command programming language that provides an interface to the
UNIX operating system. Its features include control-flow primitives, parameter
passing, variables and string substitution. Constructs such as while, if then
else, case and for are available. Two-way communication is possible between
the shell and commands. String-valued parameters, typically file names or flags,
may be passed to a command. A return code is set by commands that may be
used to determine control-flow, and the standard output from a command may
be used as shell input.
The shell can modify the environment in which commands run. Input and output
can be redirected to files, and processes that communicate through "pipes" can
be invoked. Commands are found by searching directories in the file system in
a sequence that can be defined by the user. Commands can be read either from
the terminal or from a file, which allows command procedures to be stored for
later use.
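The control-flow constructs described above can be sketched in a short Bourne shell script. This is an illustrative example only; the classify function and its patterns are invented for demonstration.

```shell
#!/bin/sh
# Illustrative sketch of Bourne shell control flow: a case statement
# inside a function, a for loop, and a test on a command's return code.
# The classify function is hypothetical, not a standard tool.

classify() {
    case "$1" in
        [0-9]*) echo number ;;      # argument starts with a digit
        *.txt)  echo textfile ;;    # argument ends in .txt
        *)      echo other ;;       # anything else
    esac
}

# String-valued parameters are passed to the command one at a time.
for arg in 42 notes.txt /etc
do
    classify "$arg"
done

# Every command sets a return code that can drive control flow.
if grep root /etc/passwd > /dev/null
then
    echo "root entry found"
fi
```

Command procedures like this one can be stored in a file and invoked later, which is exactly the "commands read from a file" usage mentioned above.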
File system
A file system is defined as a "hierarchy of files and directories".
Data on a Solaris system is stored in a hierarchical fashion on the file
system. Organizing data in this way makes it easy to locate and group
related operating system control files and user information.
Mounting a file system: the process of making (or attaching) a file
system part of the UNIX device tree. After mounting, you can access a
file system relative to a "mount point".
File systems are classified as disk-based, distributed or pseudo
file systems.
Disk-based: exists on a hard disk. E.g.: ufs, vxfs
Distributed: network based; available over the network. E.g.: NFS
Introduction to Sun hardware and Storage
Basic Sun Hardware
Sun systems are RISC processor based.
They have an OpenBoot PROM and NVRAM.
System information is stored in the NVRAM
(such as the HOSTID, Ethernet address etc).
• Sun Blade 1000: a dual-CPU workstation using UltraSPARC III (600-
900 MHz) processors with 4/8 MB of external cache. It has two UPA64
slots for frame buffers, supports Creator3D, Elite3D m3 or
Elite3D m6 graphics cards, and has four 64-bit PCI slots (3 x 33 MHz
and one 66 MHz). The model uses Sun 232-pin SDRAM DIMMs.
Entry level servers
SunFire 15K
SunFire 6800
New systems:
SunBlade 2000
SunFire V880
SunRay 100 & SunBlade 100
SunRay 100
Features:
• Popularly called the "STARFIRE".
• Can have up to 4 dynamic domains in the same server.
• 16 system boards with 4 CPUs each and up to 4 GB of memory on each.
• All the boards communicate via a Gigabit-XB backplane.
• Supports up to 20 Tbytes of disk capacity.
Storage Devices
• UniPack
• MultiPack
• SPARCstorage Array (SSA100 & SSA200)
• Sun StorEdge product family
• Sun Enterprise Network Array (SENA)
• T3 array
UniPack & MultiPack
StorEdge L1000 Tape Library
Latest systems from Sun
• Log in as root
• Have all users close all their files and log out
• Back up all user files and store the backups
• Shut down the system to an idle state
• Procure the 3 installation CDs
• (Solaris Installation CD,
• Solaris 8 Software 1 of 2 and
• Solaris 8 Software 2 of 2)
Installation Process
• Insert the Installation CD-ROM into the drive
• Boot the release media (Solaris Installation CD)
ok> boot cdrom
• Keep the swap partition at the starting cylinder (the default
needs 512 MB of space for Solaris 8)
• The mini-root will be copied and the system will reboot.
On reboot it will ask for CD 1 of 2.
• During installation, answer the following:
– host name
– network connectivity
– IP address
Installation process (contd.)
• The networking information section
• The Time zone verification section
• Selecting software
– customizing the software
• Selecting disks
• Filesystem and disk layout
• Reboot info
• The whole of the system information can be re-entered
after installation using the command:
# sys-unconfig
Solaris Boot Process
Each SPARC based system has a PROM (programmable read-only memory)
chip with a program called the monitor. The monitor controls the operation of
the system before the kernel is available. When a system is turned on, the
monitor runs a quick self-test procedure that checks things such as the
hardware and memory on the system. If no errors are found, the system begins
the automatic boot process.
Boot PROM :
1. The PROM displays system identification information and then runs self-test
diagnostics to verify the system's hardware and memory.
2. Then the PROM loads the primary boot program, bootblk, whose purpose is
to load the secondary boot program located in the ufs file system from the
default boot device.
Boot Process (contd.)
Boot programs :
Kernel initialization :
6. The kernel creates a user process and starts the /sbin/init process,
which starts other processes by reading the /etc/inittab file.
Boot Process (contd.)
Init Phase:
kernel:
For 32-bit Solaris systems, the relevant files are:
/platform/`arch -k`/kernel/unix and /kernel/genunix
For 64-bit Solaris systems, the files are:
/platform/`arch -k`/kernel/sparcV9/unix and /kernel/genunix
Run levels
A system's run level (also known as an init state) defines what services and resources are
available to users. A system can be in only one run level at a time. The Solaris
environment has eight run levels (0, S, 1, 2, 3, 4, 5 and 6). The default run level is
specified in the /etc/inittab file as run level 3.
Run level 0: shut down the system (power-down state). You will get the ok> prompt.
Run level S or s: single-user state with all file systems mounted and accessible.
Run level 1: administrative state with all files accessible and user logins allowed.
Run level 2: multiuser state with all daemons running except the NFS server daemons.
Run level 3: multiuser state with NFS resource sharing available.
Run level 4: alternate multiuser state, currently unused.
Run level 5: shut down and turn off the power if possible.
Run level 6: reboot the system.
You can change the current run level with one of the following commands:
# init <init level>    or    # shutdown -y -g0 -i <init level>
/etc/inittab file
When you boot the system or change run levels with the init
or shutdown command, the init daemon starts processes by
reading information from the /etc/inittab file. This file defines
three important items for the init process:
The system's default run level
What processes to start, monitor, and restart if
they terminate
What actions to be taken when the system enters
a new run level
Each entry in the /etc/inittab file has the following fields:
id:rstate:action:process
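For illustration, two entries in the id:rstate:action:process format are shown below. The first is the standard way the default run level is recorded; the second is representative of the rc-script entries in a Solaris 8 inittab, though exact entries vary by release.

```
is:3:initdefault:
s2:23:wait:/sbin/rc2
```

The first line sets the default run level to 3; the second runs the /sbin/rc2 script (and waits for it to finish) when the system enters run level 2 or 3.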
– # pkgrm SUNWspro
– # pkgrm LGTOman
The pkgchk command
• The pkgchk command verifies that the
attributes and contents of package names
are correct by comparing them to their
values as specified in the system log file.
– # pkgchk SUNWaudio
Log file
• The /var/sadm/install/contents file
– The pkgadd command updates the above
file, which lists all the packages that
are installed on the system.
– This makes it possible to identify the package
that contains a particular file or related files.
– The pkgrm command uses this file to identify
where files are located, and updates the file.
How to display software information
• Admintool
• Browse
• Software
• Adding and removing software using
admintool.
Spooling packages
• A package can be copied from the
installation CD-ROM without installing it, so
that it can be stored on a system.
– # pkgadd -d
/cdrom/cdrom0/s0/Solaris_2.6/Product -s spool
SUNWaudio
• This will spool the package into the /var/spool/pkg
directory.
• You can specify a different directory as an
argument to the -s option:
– # mkdir /export/pkgs
– # pkgadd -d
/cdrom/cdrom0/s0/Solaris_2.6/Product -s
/export/pkgs SUNWaudio
Command summary
• pkginfo - lists packages installed on the
system or on media
• pkgadd - installs packages
• pkgrm - removes packages
• pkgchk - verifies the attributes and contents
of the path names belonging to packages
Files and directories
• /var/sadm - system log and admin files
• /opt/packagename - preferred location for
the installation of packages
• /opt/pkgname/bin - preferred location for
executable files
• /var/sadm/install/contents - package map of
the entire system
Module 6
Maintaining patches
• Objectives:
– Obtain current patch information and patches
– Verify the patches currently installed on your system
– Install patches
– Back out patches
Patch
• In its simplest form, you can think of a patch
as a collection of files and directories that
replace or update existing files and
directories that are preventing proper
execution of the software.
• Patches correct application bugs or add
product enhancements.
• Each patch has a README file that details the
bug it fixes.
• The README file contains other important
information about the patch.
Patch numbering
• Patches are assigned numbers and are packaged in a directory
named with the patch number.
• If the number is 10xxxx and the revision is yy, then the directory
name will be 10xxxx-yy (e.g.: 108625-14)
• /var/sadm/patch
– contains information about the installed patches
The superuser account
• Account: root, with a UID of 0 and a GID of 1
• Read and write access to all files stored on
the local disks
• Can send a kill signal to all processes under
the control of the system's CPU
• No limitations
Usage of the root account
• Shutting down the system
• Backing up and restoring file systems
• Mounting and unmounting file systems
• Adding user accounts
• Enabling password aging
• Members of the sysadmin group can modify
databases using admintool.
Switching users
• Becoming superuser
– su
• Becoming a different user
– su - bob
– su bob (without the -, the current environment is retained)
– When you use the - option, the environment of
the new user is adopted; i.e. it causes
/etc/profile and $HOME/.profile to be executed.
File ownership
• The owner of a file identifies the user to
whom the file belongs.
• When you create a file, you are the owner
of the file.
• ls -l shows ownership; the first field (10
characters) shows the file type and permissions.
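A quick illustration that the creator of a file becomes its owner. The scratch file name is invented for the demonstration; the third whitespace-separated field of ls -l output is the owning user.

```shell
# Create a scratch file and inspect its owner in the ls -l output.
touch /tmp/owner_demo
ls -l /tmp/owner_demo | awk '{print $3}'   # prints the owning user name
rm -f /tmp/owner_demo                      # clean up
```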
The chown command
• The chown command is used to change the
ownership of files and directories.
• Only the superuser can use the chown
command.
– chown user_name filename
– chown UID filename
The chgrp command
• Use the chgrp command to change the
group ownership of files or directories.
– chgrp groupname filename
– chgrp GID filename
The groups command
• Use the groups command to display group
memberships.
• Used with an argument, the groups
command displays the groups to which the
named user belongs.
– groups
– groups username
Monitoring system access
• The who command
– console - displays boot and error messages
– pts - pseudo device
– term - ASCII terminal device
• who -r : gives the current system run level
• User login database in the /var/adm/utmpx file
• User login history in the /var/adm/wtmpx file
• The last command
– Use the last command to display login and
logout information.
– Displays the most recent entries first.
– What information does this command give?
• The finger command
– Displays information about local and
remote users who are currently logged in.
/etc/passwd file
• Maintaining the /etc/passwd file is an integral part of system security. Without
an entry in this file, users are unable to log in to a system.
• Each passwd record has seven fields separated by colons:
login:x:UID:GID:comment:homedir:shell
loginID - this field represents the login name
x - this field is the placeholder for the user's encrypted password entry in the
/etc/shadow file
UID - user ID
GID - group ID
comment - a comment, typically the user's full name
homedir - user's home directory (default is /)
shell - user's login shell (default is the Bourne shell)
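Because the fields are colon-separated, standard tools can pull them apart. A small sketch, assuming a normal /etc/passwd:

```shell
# Print the login name (field 1), UID (field 3) and login shell (field 7)
# for the first three records of /etc/passwd.
awk -F: '{ printf "%s uid=%s shell=%s\n", $1, $3, $7 }' /etc/passwd | head -3
```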
/etc/shadow file
• Only the superuser can access this file.
• When a password is encrypted, it appears as
a series of numerals and uppercase and
lowercase letters unrelated to the actual
word.
• Record format
– loginID
– password
• a 13-character encrypted password
• *LK* indicates that the account is locked
• NP indicates no password
– lastchg
• the number of days between Jan 1, 1970
and the last password modification
– min
• minimum number of days between password
changes
– max
• maximum number of days for which the password
is valid
– warn
• number of days the user is warned before the
password expires
– inactive
• number of inactive days before the account is
locked
– expire
• this field contains the date when the user
account expires
The /etc/group file
• The /etc/group database defines all system
groups and specifies any additional groups
to which a user belongs.
• Record format
– groupname
– password
• this field is for a group password and is currently
unused
– GID
– userlist
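An illustrative record in the groupname:password:GID:userlist format (the group name and member names are made up for the example):

```
staff::10:bob,alice
```

Here the group password field is empty (unused), the GID is 10, and bob and alice have staff as an additional group.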
The /etc/default directory
• Several files containing variables that
specify system defaults are located in the
/etc/default directory.
• The files which relate to system security are
– login
– passwd
– su
/etc/default/passwd
• Variables contained:
– MAXWEEKS
• specifies the maximum number of weeks a
password is valid.
– MINWEEKS
• specifies the minimum number of weeks between
password changes
– PASSLENGTH
• minimum password length.
/etc/default/login
• Variables contained:
– PASSREQ
• if set to YES, null passwords are not permitted.
– CONSOLE
• if defined, root login is permitted only on the console.
/etc/default/su
• Variables contained:
– SULOG
• this value specifies the name of the file in which all
su attempts will be logged.
– CONSOLE
• all successful attempts to become superuser are
logged to the console in addition to the log file.
Monitoring the su command
• Look at the /var/adm/sulog file to verify who is
using the su command to become superuser.
• Format of each entry:
– SU - the command name
– 10/20 - the date and time
– + or - - success or failure
– console - the device
– root-sys - the originating user and the target user
Role Based Access Control
(RBAC)
RBAC stands for Role Based Access Control.
RBAC can be thought of as a way to delegate system tasks to a
combination of designated users and groups. The traditional UNIX
model is one of a single computer system that shares its resources
among multiple users. However, the management of the system is
left to a single 'superuser' because the rights of this special account
give access to the entire system. This could lead to problems of
misuse or simply misunderstanding.
The RBAC system allows a subset of tasks that fall under 'root'
access to be granted to the user community, in the hopes that savvy
users can correct their own problems, and daily administrative tasks
can be off-loaded by the (usually) very busy administrator.
How does RBAC work?
RBAC elements
• The RBAC model introduces three elements to the Solaris Operating
Environment:
• Role - a special identity that can be assumed by assigned users only.
• Authorization - a permission that can be assigned to a role or user to
perform a class of actions otherwise prohibited by security policy.
• Rights profile - a package that can be assigned to a role or user. It may
consist of:
a) Authorizations,
b) Commands with security attributes (the Solaris security attributes are
the setuid functions for setting real or effective user IDs (UIDs) and group
IDs (GIDs) on commands),
c) Supplementary (nested) rights profiles
RBAC files
• /etc/user_attr is the extended user attributes database. The file
contains users and roles with authorizations and execution
profiles.
• /etc/security/auth_attr is the authorization attributes database.
All system authorizations and their attributes are listed here.
• /etc/security/prof_attr is the execution profile attributes
database. Profiles on the system are defined here. Each profile
has an associated authorization and help file.
• /etc/security/exec_attr is the profile execution attributes
database. This file is where each profile is linked to its
delegated, privileged operation.
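As a sketch, a role and a user assigned to it might appear in /etc/user_attr as below. The names jdoe and useradm are hypothetical; the field layout follows the user:qualifier:res1:res2:attr format of the extended user attributes database, and "User Management" is a standard Solaris rights profile.

```
useradm::::type=role;profiles=User Management
jdoe::::type=normal;roles=useradm
```

The first entry defines the role and attaches a rights profile to it; the second allows user jdoe to assume that role with su.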
Module 9
Administration of initialization files
• Objectives:
– Set up a variable in the .profile file
– Maintain the /etc/profile file
– Customize the templates in the /etc/skel
directory
– Customize initialization files
Initialization files for users
• Initialization files contain a series of
commands that are executed when a shell is
started.
• Two types of initialization files: system and
user.
• System files are in the /etc directory.
System initialization files
• The system initialization file for the Bourne
and Korn shells is /etc/profile.
• The system initialization file for the C shell
is /etc/.login.
• Templates for these files are in the /etc/skel
directory.
/etc/profile
• The /etc/profile file
– exports environment variables such as
LOGNAME for login name.
– Exports PATH
– sets the variable TERM for the default terminal
type
– displays contents of /etc/motd file
– sets default permissions
/etc/skel directory
• It contains templates for user initialization
files.
• Use these templates as a starting point
for providing prototype initialization files
for users.
Comparison of shell environments
An inode contains all the information about a file except its name, which is kept in a
directory. An inode is 128 bytes. The inode information is kept in the cylinder
information block, and contains:
• The type of the file (regular, directory, block special, character special,
symbolic link, FIFO (also known as a named pipe), socket)
• The mode of the file (the set of read-write-execute permissions)
• The number of hard links to the file
• The user ID of the owner of the file
• The group ID to which the file belongs
• The number of bytes in the file
• An array of 15 disk-block addresses
• The date and time the file was last accessed
• The date and time the file was last modified
• The date and time the file was created
The array of 15 disk addresses (0 to 14) point to the data blocks that store the
contents of the file. The first 12 are direct addresses; that is, they point directly to the
first 12 logical storage blocks of the contents of the file. If the file is larger than 12
logical blocks, the 13th address points to an indirect block, which contains direct
block addresses instead of file contents. The 14th address points to a double indirect
block, which contains addresses of indirect blocks. The 15th address is for triple
indirect addresses, if they are ever needed
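A back-of-the-envelope calculation of the reach of this address chain, assuming 8-Kbyte logical blocks and 4-byte disk-block addresses (so one indirect block holds 2048 addresses). The specific numbers are illustrative, not from the source.

```shell
# Reach of the inode address array with 8-Kbyte blocks and
# 2048 block addresses per indirect block (8192 / 4).
bs=8192          # logical block size in bytes
per=2048         # addresses held by one indirect block

direct=$((12 * bs))               # 12 direct addresses
single=$((per * bs))              # via the single indirect block
double=$((per * per * bs))        # via the double indirect block

echo "direct blocks cover:        $direct bytes"
echo "single indirect adds up to: $single bytes"
echo "double indirect adds up to: $double bytes"
```

Under these assumptions the 12 direct addresses cover 96 Kbytes, the single indirect block another 16 Mbytes, and the double indirect block another 32 Gbytes, which is why the triple indirect block is rarely needed.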
File system address chain
• Inode addresses 0 to 11 point directly to data blocks.
• Address 12 points to a single indirect block, which points to data blocks.
• Address 13 points to a double indirect block, which points to single
indirect blocks.
• Address 14 points to a triple indirect block, which points to double
indirect blocks.
Data blocks and Free blocks
Data blocks :
The rest of the space allocated to the file system is occupied by data blocks, also called
storage blocks. The size of these data blocks is determined at the time a file system is
created. Data blocks are allocated, by default, in two sizes: an 8-Kbyte logical block
size, and a 1-Kbyte fragmentation size.
For a regular file, the data blocks contain the contents of the file. For
a directory, the data blocks contain entries that give the inode number
and the file name of the files in the directory
• Free Blocks :
Blocks not currently being used as inodes, as indirect address blocks,
or as storage blocks are marked as free in the cylinder group map.
This map also keeps track of fragments to prevent fragmentation from
degrading disk performance.
Introducing disk slices
• Slices
– Disk storage devices are divided into sections
called slices.
– A disk drive provided by Sun can contain up to
eight slices, labeled 0 through 7.
– Slices 0 and 1, by default, contain root and swap
respectively.
– By definition, slice 2 represents the entire disk.
• Slices are configured during installation.
• The advantages of partitioning are:
– it organizes the data functionally
– it enables the superuser to develop backup
strategies.
File systems
• The file structure tree consists of a root file
system and a collection of mountable file
systems.
– The root filesystem
• system operating files and directories.
– The /usr filesystem
• admin utilities and library routines
– The /export/home filesystem
• users' home directories
– /opt file system
• contains optional unbundled and third party
software.
Logical device names
• Contained in the /dev directory
• A name consists of:
– controller number
– target number
– disk number
– slice number
• Example: /dev/dsk/c0t3d0s5 refers to controller 0,
target 3, disk 0, slice 5.
Files for mounting
• The /etc/vfstab file
– maintains all the information required to
mount a file system at boot time.
– Each entry has seven fields: device to mount,
device to fsck, mount point, FS type, fsck pass,
mount at boot, and mount options.
• The /etc/mnttab file
– contains a record of all the currently mounted
file systems.
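An illustrative /etc/vfstab line showing the seven fields in order; the device names and mount point are hypothetical.

```
#device to mount    device to fsck       mount point   FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0t0d0s7   /dev/rdsk/c0t0d0s7   /export/home  ufs      2          yes            -
```

Note that the block device is given as the device to mount and the raw device as the device to fsck; a dash marks a field with no value.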
Mounting filesystems
• The mount command
– the mount command, when issued without any arguments, displays
the currently mounted file systems.
• mount
• A local file system is attached to the root file structure with
the mount command.
• The directory on which the filesystem is mounted is called a
mount point.
– mount filesystem mountpoint
• Mounting a large-file enabled file system
– file systems containing files larger than 2 Gbytes can be mounted
without any special options.
• Mounting a small-file enabled file system
– the nolargefiles option with the mount command
will force all files subsequently written to the
filesystem to be smaller than 2 Gbytes.
• mount -o nolargefiles filesystem mountpoint
• this option fails if:
– the filesystem contains a large file at the time of
mounting
The mountall command
• The mountall command mounts multiple
file systems specified in a file system table.
• The local /etc/vfstab file is referenced.
• It will mount only those filesystems with
"yes" in the mount-at-boot field of that file.
Unmounting file systems
• The umount command
– The umount command unmounts a currently
mounted filesystem, specified in one or
more arguments as a mount point.
• # umount mountpoint
• The umountall command
– the umountall command unmounts all mounted
file systems except root, /proc, /var and /usr.
Displaying the capacity of file systems
• The df command
– the df command is used to display information for each
mounted file system.
• df -k directory
• -k displays usage in Kbytes and subtracts the space reserved
by the OS from the amount of available space.
• -h : a new option in Solaris 9 to list filesystem sizes in MB, GB
or TB
• The du command
– the du command is used to display the number of
disk blocks used by directories and files.
– Options with the du command:
• -k displays usage in Kbytes
• -s displays only a summary, in 512-byte blocks
• -a displays the number of blocks used by all files and
directories within the specified directory hierarchy.
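A small sketch contrasting the two commands (the scratch directory is invented for the demonstration): du reports the space consumed by a directory tree, while df reports the capacity of the file system that holds it.

```shell
# Create a scratch directory holding roughly 64 Kbytes of data.
mkdir -p /tmp/du_demo
dd if=/dev/zero of=/tmp/du_demo/file bs=1024 count=64 2>/dev/null

du -sk /tmp/du_demo      # -s: summary only, -k: report in Kbytes
df -k /tmp               # capacity and free space of the file system

rm -rf /tmp/du_demo      # clean up
```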
The quot command
• The quot command displays how much disk
space is used by each user.
– quot -af
• -a reports on all mounted file systems
• -f includes the number of files
Module 12
Introduction to disk management
• Objectives:
– Utilities to create, check and mount file systems
– List the potential advantages of a virtual disk
management application
– List the differences between Solstice DiskSuite and
Veritas Volume Manager
– Advantages of concatenated and striped virtual
file systems
Preparing a slice for use as a file system
• Before a slice or an entire disk can be used
to store data, it must first have a basic
filesystem structure created on it.
• The newfs utility is used for this purpose.
• The newfs utility will destroy any existing
data on the slice.
– # newfs /dev/rdsk/c0t3d0s5
– # mkfs -F <FS type> <raw-slice name>
• To create an additional swap file use:
# mkfile 100m /home/swap/myswapfile
Checking a new file system
• The fsck command detects and interactively repairs
inconsistent file system conditions.
• Using fsck without any arguments will perform file
system checks on all file systems listed in the local
/etc/vfstab file.
– # fsck /dev/rdsk/c0t3d0s4
– # cd /test
– # ls
• File system limitations
– a file system can consist of only a single slice
– a file system can be no larger than one Tbyte in
size.
• Block device and raw device paths:
– mount /dev/dsk/c0t0d0s7 /mnt (mount uses the block device)
– newfs /dev/rdsk/c0t0d0s7 (newfs uses the raw device)
– fsck /dev/rdsk/c0t0d0s7 (fsck uses the raw device)
Virtual volume management
• To overcome the limitation of one slice per
filesystem, there are virtual volume
management applications that can create virtual
volume structures in which a file system can
consist of an almost unlimited number of disks or
slices.
• There are two virtual volume managers available
through Sun:
– Solstice DiskSuite
– Veritas Volume Manager (VxVM)
Access paths
• The key feature of all virtual volume management
applications is that they transparently control a file
system that can consist of many disk drives.
• The physical access paths are similar to regular devices
in that they have both raw and block device paths.
• The following are typical virtual volume device path
names:
– /dev/md/rdsk/d42
– /dev/md/dsk/d42
– /dev/vx/rdsk/apps/logvol
– /dev/vx/dsk/apps/logvol
• Virtual volume building blocks
– Solstice DiskSuite: uses standard partitioned
disk slices that have been created using the
format utility.
– Veritas VxVM: manages disk space in a
partitionless environment. The Veritas
application specially formats the disks and
internally keeps track of which portions of a
disk belong to a particular volume.
• All Veritas volumes are composed of pieces
called subdisks.
• Virtual volume types:
– concatenated volumes
– striped volumes
Concatenated volumes
• A concatenated volume combines portions
of one or more physical disks into a single
virtual structure.
• The portions are joined contiguously, end to end.
• It creates a volume that is larger than one
physical disk.
• A volume can be grown "on the fly".
Striped volumes
• Striping is a term for breaking up a data
stream and placing it across multiple disks
in equal-sized segments.
• If each physical disk is attached to a different
system interface, data segments can be written
in parallel, improving performance.
RAID summary
• RAID stands for Redundant Array of Inexpensive Disks.
• The different RAID levels are:
• RAID 0: striping without parity; not redundant
• RAID 1: mirroring or duplexing
• RAID 2: uses Hamming error correction codes
• RAID 3: byte-level data striping with a fixed parity disk
• RAID 4: block-level data striping with a fixed parity disk
• RAID 5: block-level data striping with distributed parity
• RAID 1+0: stripe across mirrored volumes (a stripe of mirrors)
• RAID 0+1: mirror two striped volumes (a mirror of stripes)
• RAID 5+0: stripe across RAID 5 volumes
Module 13
Networks
• Objectives:
– Describe IP addressing classes A, B and C
– Describe the functions of the files
hosts, nodename and hostname.xxy
– Identify users logged in to the local network
– Log in to one machine from another machine
– Execute a command on another system
• Copy files from one system to another
• Describe the files hosts.equiv and .rhosts
• ping and spray
• The netstat -i command
Network terminology
• Broadcast bus
• CSMA/CD
• Ethernet interface
– all Sun workstations have an Ethernet interface
built into the CPU board.
– The most common is the le0 interface (10 Mbps)
– other interfaces include hme0 (100 Mbps)
Ethernet address
• The Ethernet address is a 48-bit number.
• It is represented by hexadecimal digits and
is subdivided into six two-digit fields
separated by colons.
• The Ethernet address is also called the
MAC (media access control) address.
• It is globally unique.
Internet
• An internetwork is a linked group of LANs connected to a wide
area network.
• For a network of computers to communicate, each must have a
unique address that is known to the other computers on the
network.
• Internet addresses are 32 bits, divided into four 8-bit fields.
• Each 8-bit field is represented by a decimal number between 0
and 255.
– [0-255].[0-255].[0-255].[0-255]
• Each Internet address is divided into a network number and a
host number.
• Network number
– the network number identifies your network to
the outside world.
• Host number
– you assign the host number that uniquely
identifies your workstation on your network.
– Do not use 0 or 255 for your host number.
Internet network classes
• Class A
– the first bit is 0
– the first 8 bits are the network number; it can be up to 127
– up to 127 class A networks
– very large networks, with up to 16 million hosts
• Class B
– the first two bits are 10
– the next 14 bits are the network number
– the network number (first octet) can be between 128 and 191
– large networks, with up to 65,000 hosts
• Class C
– small and mid-sized networks, with up to 254 hosts
– the first 3 bits are 110 and the next 21 bits are the
network number.
– This allows up to 2,097,152 class C networks.
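The host counts above follow from the number of host bits left in each class (24, 16 and 8), with two addresses per network reserved (the all-zeros network address and the all-ones broadcast address). A quick check:

```shell
# Usable hosts per network for class A, B and C address classes.
for bits in 24 16 8
do
    echo "host bits: $bits -> $(( (1 << bits) - 2 )) usable hosts"
done
```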
Networking files
• The /etc/inet/hosts file
– each Ethernet address has a corresponding host
name.
– This file associates IP addresses with host names.
– The /etc/hosts file is a symbolic link to this file.
• The /etc/inet/netmasks file
– contains the subnet mask of the system.
• The /etc/defaultrouter file contains the default
gateway (it must be created manually).
• The /etc/nodename file
– this file contains the host name.
• The /etc/hostname.le0 file
– this file identifies the Ethernet interface (such as le0) to be
configured at boot up, and contains the host name or the
host's IP address.
• The /etc/hostname6.hme1 file links the interface hme1
to the system name and binds it to IPv6.
• The /etc/passwd file
– this file is consulted by the system when remote access is
requested.
– An entry for the user in the local system's
passwd file enables that user to log in remotely.
• The /etc/hosts.equiv file
– this file identifies remote systems as trusted
hosts.
– The advantage is that the need for sending ASCII
passwords over the network can be avoided.
• The user's .rhosts file
– the rlogin process searches for this file.
– By default this file does not exist.
• Both the hosts.equiv and .rhosts files have the
same format:
– hostname
– hostname username
• If only the host name is used, then all users
from the named host are trusted.
• If both the hostname and the username are
used, then only the named users are trusted.
• If + is used, then all systems and all users
are trusted.
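Putting the formats together, a hosts.equiv or .rhosts file might read as follows (the host and user names are made up):

```
saturn
pluto bob
+
```

The first line trusts every user coming from host saturn; the second trusts only user bob coming from host pluto; the third trusts every user on every system, which is dangerous and should normally be avoided.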
The rlogin command
• This command enables a login session on a
remote system.
• The success of this command depends on
the hosts.equiv and .rhosts file entries.
• Format:
– rlogin hostname [-l username]
• Use the -l option to log in as a different user.
The rsh command
• This is used to execute a program on a
remote system
– format
– rsh hostname command
The rcp command
• This enables you to copy files or directories to and
from another machine.
– Format
– rcp sourcefile hostname:destinationfile
– rcp hostname:sourcefile destinationfile
– rcp -r /perm saturn:/tmp
• the rsh and the rcp commands require appropriate
entries in the hosts.equiv and .rhosts file.
The telnet command
• This is an industry standard program that
uses a server process to connect to the
operating system.
• The telnet server simulates a terminal to
enable a user to log into a remote host
system and work in that environment.
The ftp command
• It is used to send files to and get files from a
remote system.
• Files can be transferred in ASCII, binary and
DOS formats.
The rusers command
• It is used to identify the users logged into a
remote system on the network
– format
– rusers hostname
• -a gives a report for all the systems
• -l gives a long listing.
The ifconfig command
• It is used to assign an address to a network
interface and to configure network interface
parameters.
• # ifconfig -a
• To change the IP address of an interface:
• # ifconfig hme1 down
• # ifconfig hme1 192.9.55.26 netmask
255.255.255.0
• # ifconfig hme1 up
The ping command
• It sends an echo request to the named hosts.
• It does not tell you the state of the system but only
that its network interface is configured.
• PING (Packet Internet Groper) uses ICMP
(Internet Control Message Protocol) echoes to
check whether a destination is reachable or not.
• Used to check physical connectivity to a
networked system.
The spray command
• Unlike the ping command, this command uses a
higher-level protocol.
• This command is typically used to test the response
of the system over a period of time.
• spray sends a one-way stream of packets to host
using RPC and reports how many were received,
as well as the transfer rate.
• spray is not useful as a networking benchmark, as
it uses unreliable connectionless transports, such
as UDP .
The netstat command
• This command displays the status of
various network related data structures.
• The output consists of
– Name: the network interface
– MTU: the maximum transmission unit
– Net/Dest: the name of the destination
– Address: the host name
– Ipkts/Ierrs: the number of input packets and
errors since the interface was configured
– Opkts/Oerrs: the number of output packets and
errors since the interface was configured
– Collis: the number of collisions on this
interface
– Queue: the number of packets awaiting
transmission at the interface.
Adding routes
• To add/delete routes use “ route “ command
• To display the current routes use
# netstat -r
• To add a route use
# route add 192.0.2.32/27 somegateway
• will create an IPv4 route to the destination 192.0.2.32
with a netmask of 255.255.255.224
# route add -inet6 3ffe::/16 somegateway
• will create an IPv6 route to the destination 3ffe:: with
a netmask of 16 one-bits followed by 112 zero-bits.
/etc/inet/networks
• Network name database file
• The networks file is a local source of
information regarding the networks which
comprise the Internet.
• The network file has a single line for each
network, with the following information:
<official-network-name> <network-number>
<aliases>
/etc/inet/netmasks
The netmasks file contains network masks used to implement IP subnetting. It supports
both standard subnetting and variable length subnetting . When using standard subnetting
there should be a single line for each network that is subnetted in this file with the network
number, any number of SPACE or TAB characters, and the network mask to use on that
network. Network numbers and masks may be specified in the conventional IP `.' (dot)
notation (like IP host addresses, but with zeroes for the host part). For example,
128.32.0.0 255.255.255.0
• When using variable length subnetting, the format is identical. However, there should be a
line for each subnet with the first field being the subnet and the second field being the
netmask that applies to that subnet
128.32.27.16 255.255.255.240
128.32.27.32 255.255.255.240
128.32.27.48 255.255.255.240
128.32.27.64 255.255.255.240
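The subnetting behind the table above can be sketched in the shell: the network number is the bitwise AND of the address and the mask. The address 128.32.27.53 is an illustrative host on one of the subnets listed:

```shell
# Sketch: deriving the network number from an IP address and a netmask,
# the same bitwise AND the kernel performs when it consults netmasks.
ip=128.32.27.53
mask=255.255.255.240

oldIFS=$IFS; IFS=.
set -- $ip;   i1=$1 i2=$2 i3=$3 i4=$4
set -- $mask; m1=$1 m2=$2 m3=$3 m4=$4
IFS=$oldIFS

# AND each octet of the address with the mask to get the subnet number
echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"   # → 128.32.27.48
```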
Ndd command
• get or set driver configuration parameters pertaining to TCP/IP family
• To see which parameters are supported by the TCP driver use the following
command:
• # ndd /dev/tcp \?
• To disable IPv4 packet forwarding
• # ndd -set /dev/ip ip_forwarding 0
• To enable IPv4 packet forwarding
• #ndd -set /dev/ip ip_forwarding 1
• To check link status use
• # ndd -get /dev/hme link_speed
• ( will return a value 0 for 10Mbps speed and 1 for 100Mbps )
Snoop command
• snoop captures packets from the network and displays
their contents. snoop uses both the network packet
filter and streams buffer modules to provide efficient
capture of packets from the network. Captured packets
can be displayed as they are received, or saved to a file
for later inspection.
• To capture output to a file use the -o option
• # snoop -o outputfile host1 host2
• this will capture packets between host1 and host2 and
save them to a file called “outputfile” for future
analysis.
Module 14
Backup and recovery
• Objectives:
– dump a filesystem to tape using ufsdump utility
– restore files or file systems from tape using the
ufsrestore utility
– recover the /(root) and /usr filesystems
– discuss tar, cpio and dd
– the mt utility
Why backups?
• Most crucial system admin function
– accidental file removal
– originals get lost or damaged
– hardware failure
– external failure of the system
– internal failure of the system
• a system admin should act as though any of
these events could happen today.
Types of backup
• Full dumps
– dumps that backup the entire file system
• incremental dumps
– dumps that backup only those files that have
changed since the last lower-level dump.
Incremental backups
• The ufsdump command has 10 backup levels (0-9)
• levels 1 through 9 are incremental backups.
• They back up those files that have changed
since the last dump at a lower level.
• They depend on the information stored in
the /etc/dumpdates file to decide which files
to backup.
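The "changed since the last lower-level dump" logic can be sketched with find; here a timestamp file stands in for the /etc/dumpdates record of the last level-0 dump, and the file names are illustrative:

```shell
# Sketch: how a level-1 dump decides what to save -- everything
# modified since the last lower-level (here level-0) dump.
tmp=$(mktemp -d)
echo old > "$tmp/old.txt"
touch "$tmp/level0.stamp"          # "the level-0 dump happened here"
sleep 1
echo new > "$tmp/new.txt"          # modified after the level-0 dump

# a level-1 dump would pick up only files newer than the level-0 record
find "$tmp" -type f -newer "$tmp/level0.stamp"   # lists only new.txt

rm -rf "$tmp"
```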
The ufsdump command
• It is used to backup a file system.
• Format
– ufsdump options files_to_dump
• options
– 0-9 the dump level option
– u update the dumpdates file
– c set the blocking factor to 126; this causes the
dump to write 63 KB records instead of 32 KB
– A create an online archive of the file names
dumped
– f specify the device name where the dump will
be taken
– v verifies data on tape against data on file
system.
How to back up a file system
• Check for system activity
• notify all users about the system's availability
• bring the system to run level S
• verify the file system using fsck
• perform a 0 level dump
– #ufsdump 0cuf /dev/rmt/0 /export/home
Performing remote backups
• To perform a remote backup you must
– have root access privileges on the system with
the tape device.
– Specify the server:tape_device in the ufsdump
or ufsrestore command line
• #ufsdump 0uf mars:/dev/rmt/0 /export/home
Restoring filesystems
• Reasons
– adding a new disk drive
– reinstalling or upgrading the OS
– reorganizing the filesystems or disks
– re-creating a damaged file system.
The restoresymtable file
• This file is created when restoring the entire
contents of a dump tape.
• The restoresymtable file is used for
checkpointing, which is information passed
between incremental restores.
• This file is not required once the restoration
is complete.
The ufsrestore command
• This command extracts files from a backup
created by the ufsdump command
• format
– ufsrestore options filename
• options
– i perform an interactive restore
– r restore the entire backup
– T list the table of contents of the backup.
– V displays the pathnames of the files that are
restored.
How to restore files
• Load the tape in the tape drive
• become the superuser
• change your working directory to a temporary
location, such as /var/tmp
• display the contents of the tape and identify the
correct path names of the files to be restored
• extract the files
– #ufsrestore xvf /dev/rmt/0 file
• Check the restored files and move them to their
correct location.
How to perform an interactive
restore
• Change your working directory to a temporary
location such as /var/tmp
• start the ufsrestore with the interactive option
– #ufsrestore ivf /dev/rmt/0
• display the tape contents
• add files to the extraction list
– Extract the files
– exit the interactive restore
– check the restored files and move them to their correct
location.
How to move a filesystem
• Unmount the file system
• check the file system with fsck
• dump the filesystem to tape
• use the format utility to partition a new disk
• create a new filesystem on the new disk
• check the file system with fsck
• mount the new file system on a directory
• Restore the file system
• remove the restoresymtable
• check the restored file system with fsck
How to restore the root file
system
• Load and boot the release media to run level S.
• Create the new file system if necessary
• check the file system with fsck
• mount the file system on the /a directory and change to that directory.
• restore the root file system
• remove the restoresymtable
• unmount the new file system
• check the restored file system with fsck
• reboot the system.
The mt command
• This enables direct tape manipulation.
• Format
– mt [-f tapedevice] command
• command
– status displays status information
– rewind rewinds the tape
– retension re-tensions the tape
– erase erases the tape
– bsf backspaces over a file mark
– fsf forward spaces over a file mark
The tar command
• This enables you to back up single or multiple files in a
directory hierarchy.
• Format
– tar options filename
• options
– c create a new tar file
– t list the table of contents of the tar file
– x extract the specified files from the tarfile
– f use the next argument as the name of the device
– v print the file names as they are restored
– p restore the files with their permissions.
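The create/list/extract cycle can be sketched with a scratch directory standing in for a tape device; the file names are illustrative:

```shell
# Sketch of the tar create / list / extract cycle.
tmp=$(mktemp -d); cd "$tmp"
mkdir docs
echo "report" > docs/report.txt

tar cf docs.tar docs        # c: create, f: write to a file instead of tape
tar tf docs.tar             # t: list the table of contents

mkdir restore; cd restore
tar xf ../docs.tar          # x: extract (add v to print names, p to keep modes)
cat docs/report.txt         # → report

cd /; rm -rf "$tmp"
```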
The cpio command
• The cpio command creates an archive of
single or multiple files by taking a list of
names from standard input and writing the
archive to standard output,which is usually
redirected to a device file.
• Command format
– [command|] cpio options [> filename]
Cpio options
• Options
– o create an archive file
– i extract the archive
– B set the block input/output record size to 5120
bytes; the default size is 512 bytes
Cpio examples
Create an archive of the current directory contents:
# find . -print|cpio -ocvB >/dev/rmt/0
# ls |cpio -oc > /export/home/backup.cpio
Create cpio backup of directory /data/test in /backup/test
# find /data/test -depth -print | cpio -oc > /backup/test
Extract the readme file from the cpio archive
# cpio -ivcB readme < /dev/rmt/0
Extract the files from the cpio archive /backup/test use
# cpio -icvd < /backup/test
( this will restore to directory where you are invoking the cpio command)
List the file names contained in a cpio archive
called “db.cpio” use # cat db.cpio | cpio -ivt
The dd command
• It converts and copies files with various
data formats.
• Format
– dd [options]
• options
– if input file
– of output file
– bs=n block size.
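A minimal sketch of the three options, copying between ordinary files rather than devices (the file names are illustrative):

```shell
# Sketch: dd copying with an explicit block size; if= and of= name the
# input and output files, bs=n sets the transfer block size.
tmp=$(mktemp -d)
printf 'hello dd' > "$tmp/in"

dd if="$tmp/in" of="$tmp/out" bs=4 2>/dev/null

cat "$tmp/out"        # → hello dd
rm -rf "$tmp"
```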
MODULE 15
Network File System ( NFS)
• OBJECTIVES:
– Describe the functions of an NFS
• Server
• Client
– Determine what directories or file systems a
server is sharing.
– Mount a remote resource on a client from the
command line.
Client - Server model
• Essentially a software model
• A “Server” component ‘gives out’ services
• A “client” component ‘takes’ the service
from the server
• A server is by default a client to itself.
• Both components can reside on the same
physical system or on different systems.
• The client runs the statd and lockd daemons;
the server runs the nfsd and mountd daemons.
Client - server communication
• TCP/IP-based programs interact with each other using the
TCP/IP suite as the underlying protocol suite, via TCP/UDP
ports.
• For the client server interaction the knowledge of source
IP,source port, destination IP and destination port are essential.
• A port is defined as the end point of communication.
• The services in a host are identified using unique port numbers.
Eg: telnet uses port 23, smtp uses 25 and pop uses 110.
• Ports up to 1024 are well-known ports and they are reserved.
• Ports above 1024 are open to users.
/etc/services file
• The /etc/services file is a local source of information regarding
each service available through the Internet
• Maps well known services to port numbers.
• The /etc/services file contains information regarding the known
services available in the DARPA Internet. For each service, a
single line should be present with the following information:
service_name port_number protocol_name aliases
• Fields can be separated by any number of SPACE and/or TAB
characters. A `#' (number sign) indicates the beginning of a
comment;
• Any newly added service must have a unique entry in this file,
otherwise it may fail to work.
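The field layout described above can be sketched by parsing a small sample in the /etc/services format (the sample entries are standard well-known services, but the live file is not consulted here):

```shell
# Sketch: looking up the port for a service in /etc/services-format data.
cat > /tmp/services.sample <<'EOF'
# service_name  port/protocol  aliases
telnet          23/tcp
smtp            25/tcp         mail
pop3            110/tcp        pop-3
EOF

# print the port number for smtp; '#' lines never match the first field
awk '$1 == "smtp" { split($2, p, "/"); print p[1] }' /tmp/services.sample   # → 25

rm -f /tmp/services.sample
```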
Remote Procedure Call(RPC)
• A network service must use an agreed-upon unique port number.
To eliminate the problem of too many hosts and too many
services to configure and maintain distinctive information for,
Sun created an RPC service that does not require predefined port
numbers to be established at boot time.
• A process, rpcbind, interprets incoming requests and sends them
to the appropriate server processes. Using RPC, clients are given
the actual port number at connection time by rpcbind (listening
at well-known port 111). RPC services register themselves with
rpcbind when they start, and are assigned an available port
number at that time. RPC services are named rpc.<daemon>.
/etc/rpc file
• To see which services are currently running, use the rpcinfo -p command.
• The configured ports for RPC are listed in /etc/rpc. The /etc/rpc file is a local
source containing user readable names that can be used in place of RPC
program numbers.
• The rpc file has one line for each RPC program name. The line has the
following format:
RPC_program_name RPC_program_number aliases
• sample /etc/rpc file :
rusersd 100002 rusers
nfs 100003 nfsprog
mountd 100005 mount showmount
walld 100008 rwall shutdown
rpcinfo
• To see which services are currently running, use the “ rpcinfo -p”
command.
• An RPC program is written in such a way that when it initializes itself at
start time, it contacts rpcbind and registers itself with rpcbind.
• On registration, rpcbind allocates the next available port number to
the service. All subsequent requests to the service are intercepted by
rpcbind and provided with the assigned port number.
• TIP : ERROR:: “RPC program not Registered “
• This is a very misleading error message.
• If you see this error , please ensure that the corresponding daemon is
running and the service is available in the system
The Solaris NFS environment
• The Solaris NFS environment relates to the
ability of one networked system to access
the files and directories of another.
• The NFS service enables a computer to
access another computer’s file systems
• A Solaris system can be a server, client or
both at any given time.
NFS server
• A Solaris NFS server provides file system
access to NFS clients.
• The /etc/dfs/dfstab file.
• Configuration of this file is the
responsibility of the system admin.
NFS client
• The Solaris NFS client accesses files from Solaris
NFS server by mounting the distributed file
systems of a server in a fashion similar to the
mounting of local file system.
• There is no copying of the filesystems.
• A series of RPC calls enables the client to access file
systems transparently on the disk of the server.
• How does the mount look?
NFS File systems
• What can be shared?
– Whole or partial directory
– Even a single file can be shared
• What cannot be shared?
– A file hierarchy that overlaps one already
shared.
– Modems and printers.
Benefits of NFS services
• Everyone on the network can access the
same data.
• Reduced storage costs.
• Data consistency and reliability.
• Transparent mounting of remote files.
• Reduced system admin tasks.
• ACL support
How to share resources?
• The /etc/dfs/dfstab file.
• How to start the server and client processes.
• The shareall command.
• Verify the shares using dfshares command
NFS client access
• Mounting a remote resource
– # mount sun:/usr/share/man /usr/share/man
• Unmounting a remote resource
– #umount /usr/share/man
NFS client access
• The mountall and the umountall commands
– #mountall -F nfs
– #mountall -r
– #umountall -F nfs
– #umountall -r
• To mount or unmount multiple NFS or remote
file systems listed in /etc/vfstab, the above
commands can be used.
Module 16
THE LP PRINT SERVICE
• OBJECTIVES:
– List the OS’s supported by the Solaris print
service.
– Describe the functions of LP print service
– Describe what a print server and print client
are.
– Define the terms Local and remote printers.
• Diagram local and remote print models.
• Verify printer type exists in the terminfo
database.
• Use the admintool to add a local and a
remote printer.
Print Service Architecture
• Client-Server model:
– A print server is a system configured to accept
print requests from print clients for printers that
are directly connected to them or network
attached.
– A print client is a system that uses a print server
for printing and is configured to provide access
to a remote printer.
Printing system
• A computer that includes a printer contains
– LP print service software
– SunSoft print client software
– print filters
– hardware-printers,network connection.
Solaris 2.6 LP print software
• The LP print software includes the following
– Print protocol adapter
• replaces the SAF network listener (listen) and lpnet on the
inbound side of the LP spooler with a more modular design.
• Allows for multiple spooling systems to co-exist on the same
hosts.
– Has network printer support.
– Is extensible by 3rd party application developers to
support other printing protocols.
Features of LP print software
• Provides a variety of printer service
functions.
• Includes PostScript filters in the SUNWpsf
package.
• Supports wide range of printers.
LP print directories
• /usr/bin user commands
• /etc/lp server configuration files.
• /usr/share/lib terminfo database directory
• /usr/sbin print service admin command
• /usr/lib/lp daemons,filters&binaries
• /var/lp/logs LP daemons logs
• /var/spool/lp spooling directory
Printing functions
• Queueing
• Tracking
• Fault Notification
• initialization
• Filtering
Queueing
• When print requests are spooled, the jobs are
lined up with other jobs waiting to be
printed. This process of lining up the jobs is
called queueing.
Tracking
• The print service tracks the status of every
job to enable users to remove jobs and
system admins to manage jobs
• Advantage is that if there is a system crash
then the remaining jobs will resume once
the system reboots
Fault Notification
• When a problem occurs
– error messages are displayed on console or
– mailed to the system admin.
Initialization
• The print service initializes a printer before
sending it a print job to ensure it is in a
known state.
Filtering
• Certain print jobs, such as raster images, are
converted into descriptions the printer can
recognize
• this conversion uses filters.
Content Types
• Every print request consists of at least one
file containing information with a particular
format, which is called a content type:
– e.g. PostScript
• Every printer must be defined with a printer
type and at least one content type.
Matching print requests to printer
• If you have a PS printer, specify that the
content type is PS.
• This way, users can print PS and other
supported content types without specifying a
content type.
• The only time a user needs to specify a
content type when printing a file is if the
file needs special filtering .
Print Filters
• Print filters are programs used by the print
service to convert the content of requests to
the type of content accepted by the printer.
Filter Information
• Stored in several places:
– The default PS filters are stored in the
/usr/lib/lp/postscript directory.
– The filter descriptor files are in /etc/lp/fd.
– The lookup table of filters is
/etc/lp/filter.table.
Checking for defined printer
types
• To verify that your printer type exists, list the
contents of the /usr/share/lib/terminfo
subdirectories.
• The terminfo entry has a directory name
with same initial letter or digit as the
abbreviation of the printer.
Interface programs
• Interface programs are usually shell scripts
used by the print service to set certain
default printer settings.
• /etc/lp/interfaces/printer_name
The printing Environment
• Local and Remote printers.
– Local
– Remote
• Heterogeneous environment
– Solaris 2.x and SunOS 4.1.x print clients can
be served by a Solaris 2.x server
– Solaris 2.x and SunOS 4.1.x print clients can
be served by a SunOS 4.1.x server
Solaris 2.6 print Client Process
• The steps for printing a document
– A user submits a print request by entering a
print command. The print job is placed into a
local spooling area.
– The print client command checks a hierarchy of
print configuration resources to determine
where to send the print request.
• The print client command sends the request
directly to the print server using the BSD
protocol.
• The print server processes the request and
sends it to the appropriate printer where it is
printed.
Submitting a print request
• The Solaris 2.6 print client software
provides both SVID and BSD commands to
submit print jobs
– lp <filename>
– /usr/ucb/lpr <filename>
– lp -d <printer> <filename>
– /usr/ucb/lpr -P <printer> <filename>
POSIX style
• Using the POSIX style
– $lp -d <server>:<printer> <file>
Finding the printer
• The command line
• The user’s PRINTER or LPDEST variable
• $HOME/.printers
• /etc/printers.conf
• _default in a network name services
database.
• If the printer name is in POSIX style, then
the print client command forwards the print
request to the server.
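The lookup order above can be sketched as a shell function. The precedence shown (command line, then PRINTER, then LPDEST, then $HOME/.printers) is one plausible reading of the list; the printer names laser2 and hp5 are illustrative:

```shell
# Sketch: resolve the destination printer in the order the client does.
find_printer() {
    # $1 is the (possibly empty) -d argument from the command line
    if [ -n "$1" ]; then echo "$1"; return; fi
    if [ -n "$PRINTER" ]; then echo "$PRINTER"; return; fi
    if [ -n "$LPDEST" ]; then echo "$LPDEST"; return; fi
    if [ -f "$HOME/.printers" ]; then
        awk '$1 == "_default" { print $2 }' "$HOME/.printers"
    fi
}

PRINTER= LPDEST=laser2
find_printer ""          # → laser2 (falls through to LPDEST)
find_printer hp5         # → hp5   (the command line wins)
```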
2.6 local printing model
• When a print job is submitted, the print
scheduler /usr/lib/lpsched is contacted.
• The job data is placed in the spooling area.
• The scheduler processes the data
– matches the data with a filtering chain to
convert it into a format acceptable to the printer
– the data is filtered
– schedules printing
Solaris 1.x
• Client side -remote printing model
– lpr is used to submit the jobs
– lpr places the jobs in the local spooling area and
contacts lpd daemon
– lpd daemon transfers it to the print server
• Server side-remote printing model
– lpd daemon receives requests
– sends it to the printer.
Solaris 2.0 -2.5.1
• Client-side remote printing
– lp or lpr is used to submit.
– Both commands contact lpsched
– lpsched places jobs in local spool.
– lpsched contacts lpNet, which transfers the job
to the server.
• Server-side
– The SAF listens for network requests
– requests are passed to lpNet
– lpNet contacts lpsched
– lpsched processes the requests and sends to
printer.
Solaris 2.6
• Client side- remote printing
– lp or lpr commands can be used for submitting
the jobs.
– Both commands place the print job into a
temporary spooling area.
– Both commands contact the print server
themselves in order to transfer jobs.
• Server side -remote printing
– the inetd process listens for requests.
– When it gets one, it starts in.lpd, the print
protocol adapter.
– in.lpd places jobs in the spooler and contacts
lpsched.
– lpsched processes the request and sends it to
the printer.
Configuring print services
• Setting up printer
• setting up the print server
– spooling directory space of 20-25 MB
– at least 32 MB of RAM
• Setting up the print client
• Network access
Configuring local printer
• To add a new printer use….
# lpadmin -p <printer-name> -v <device-name>
# enable <printer-name>
# accept <printer-name>
• To display the status of the printer
# lpstat -t
• To make a printer default ( LPDEST env variable)
# lpadmin -d <default-printer-name>
• To remove a printer
# lpadmin -x <printer-name>
• To turn off banner pages during printing
# lpadmin -p printer-name -o nobanner=never
Configuring printers
• To print on both sides of the paper use
# lp -d <printername> -o duplex
• Check to see if the print scheduler is running.
# lpstat -r
• print scheduler can be stopped with
# /usr/lib/lp/lpshut
• Print services can be started with
# /usr/lib/lp/lpsched
• The lpfilter command manages the list of available filters. System information
about filters is stored in the /etc/lp/filter.table file. The filter descriptor files
supplied (PostScript only) are located in the /etc/lp/fd directory. Filters are
needed for printing to PostScript printers (e.g. HP LaserJet). The syntax is
# lpfilter -f <filter-name> -F <filter-def>
Configuring printers using GUI
• Admintool->
– browse->
• printers->
– add.
Deleting printers
• Admintool
• # lpadmin -x hp
Network Printing with JetAdmin
Test operator
• The exit status of the last command can be checked with echo $?
• Zero is true
• non-zero is false
– test "$name" = "fred"
– [ "$name" = "fred" ]
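A minimal sketch of the truth convention and the two equivalent test forms:

```shell
# Sketch: exit status 0 is true, any non-zero status is false.
name=fred

test "$name" = "fred"
echo $?                   # → 0 (true)

[ "$name" = "barney" ]    # bracket form; the spaces inside [ ] are required
echo $?                   # → 1 (false)
```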
Conditional expression
• Case statement,use instead of many if statements
– case “$hour” in
– 0? | 1[01])
– echo “good morning”;;
– 1[2-7])
– echo “good afternoon”;;
– *)
– echo “good evening”;;
– esac
Flow control
• Repeat statements
– while loop
– until loop
– for loop
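The three repeat statements can be sketched side by side; the loop bodies just echo so the control flow is visible:

```shell
# Sketch: the three Bourne shell repeat statements.
i=1
while [ $i -le 3 ]; do        # while: loop while the test is true
    echo "while $i"
    i=$((i + 1))
done

i=1
until [ $i -gt 3 ]; do        # until: loop until the test becomes true
    echo "until $i"
    i=$((i + 1))
done

for f in red green blue; do   # for: iterate over a word list
    echo "for $f"
done
```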
Shell functions
• Modular scripts
• function name
• define function before use
• accept parameters and return values
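The points above can be sketched with one small function, defined before use, that takes a parameter and returns a value through its exit status (the function name is illustrative):

```shell
# Sketch: a shell function taking a parameter and returning a status.
is_root_dir() {
    # $1 is the path to test; exit status 0 (true) if it is "/"
    [ "$1" = "/" ]
}

if is_root_dir "/"; then
    echo "root"                         # → root
fi
is_root_dir "/tmp" || echo "not root"   # → not root
```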
Sample administrative shell
scripts
• The /etc/init.d directory contains Bourne shell
scripts
– /etc/init.d/syslog
– /etc/init.d/volmgt
Writing simple programs
• Plan
• break script into functions
• write and test small sections
• anticipate error conditions
• use existing scripts
• use verbose comments
• debug
• Debugging scripts
– use the shell's debug option: sh -x
– lines prefixed with + show shell activity that is
not normally seen.
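A minimal tracing session can be sketched with a throwaway script (the path and contents are illustrative):

```shell
# Sketch: tracing a script with sh -x; each executed line is echoed
# to stderr with a leading "+".
cat > /tmp/trace.sh <<'EOF'
msg=hello
echo "$msg world"
EOF

sh -x /tmp/trace.sh
# the trace on stderr shows lines such as:
#   + msg=hello
#   + echo hello world

rm -f /tmp/trace.sh
```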
Module 20
Solstice Disk Suite
• Sun’s Solution for configuring Software
RAID on Sun Systems.
• GUI based “Metatool” available.
• Easy command line options are also used for
many servers not supporting GUI.
• Comes bundled with Solaris releases for all
users. Does not need any license.
Metadisk Driver
• Set of loadable, pseudo device drivers
• Metadevices
– Basic functional units of the metadisk driver
– Logical devices, which can be made up of one or
more component partitions
• Simple / Concatenation / Stripe / Mirror / Raid-5
– By default, 128 unique metadevices in the range 0-127
• Names located in /dev/md/dsk and /dev/md/rdsk
State Database Replicas
• State Database Replicas
– Keeps track of configuration and status for all
metadevices
– Keeps track of error conditions that have
occurred
– Requirement of multiple copies of state
database (min - 3)
– Each replica occupies 517KB or 1034 disk
blocks
State Database Replicas
(Contd….)
• Basic State Database Operation
– /etc/system or /etc/opt/SUNWmd/mddb.cf
(older sds)
/etc/lvm/mddb.cf ( sds version 4.2.1)
– Locator Blocks
– Commit Counter
– Checksum
• Location of replicas
System Files of SDS
• Old path is /etc/opt/SUNWmd ( SDS 4.0)
• New path is /etc/lvm( SDS 4.2.1)
• md.tab :- workspace file
• md.cf :- disaster recovery file (file form of the database)
– md.cf does not get updated when hot sparing occurs
– should NOT be edited manually.
• mddb.cf :- has the driver name, the minor unit of the block
device unique to each replica, and the block number of the
master block
State Database Replicas (Contd…)
• Setting up the MetaDB (State Database)
A DiskSuite installation would not be able to operate
without a "state database", known as a “metadb” .
Ideally, the metadb should be simultaneously located on
more than one SCSI controller and on 3 or more disks.
This is for redundancy and failover protection. Each
copy of the metadb is called a “ state database replica”.
• To view current metadb status use
# metadb
• To inquire the status of state database replica use
# metadb -i
State Database Replica creation
• To create one metadb on two disks, each having three replicas (for a
total of six replicas):
# metadb -a -f -c 3 /dev/dsk/c0t3d0s6 /dev/dsk/c1t0d0s6
• The options on the line above are:
-a attach a new database replica
-f force the creation (needed when creating the initial replicas)
-c n number of state replicas per partition
• Note: the metadb command updates a file called
mddb.cf which must never be edited by hand.
• Next, we need to add an entry to the /etc/lvm/md.tab file
for each metadb we have created (in this case, one).
Concatenation
Edit md.tab file and insert the following entry
d1 2 1 /dev/dsk/c0t0d0s2 1 /dev/dsk/c1t1d0s2
(This means a concat made of 2 devices, each having 1 component)
• To create the meta device use
# metainit d1
To create all meta devices listed in md.tab use
# metainit -a
• The command line syntax to create a concat is
# metainit d1 2 1 /dev/dsk/c0t0d0s2 1 /dev/dsk/c1t1d0s2
Striping
• Edit md.tab file to enter the following line
d1 1 2 /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 -i 16k
(1 stripe containing 2 components)
• Interlace value defaults to 16k if not specified
• Note: The metainit syntax follows the form
MDNAME X Y SLICES
where MDNAME = metadevice name, X = number of stripes,
Y = components per stripe
if X > Y you get a concat (e.g. 2 1, as above)
if X < Y you get a stripe (e.g. 1 2)
Concatenated Stripes , an example
• d1 2 2 /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2
-i 16k 3 /dev/dsk/c0t0d0s3
/dev/dsk/c1t1d0s3 /dev/dsk/c2t1d0s2 -i 32k
• metainit -n d1 verifies if the info. in md.tab
is accurate
• metaclear d1 will delete the metadevice
( Data is LOST )
Mirroring
• Edit md.tab and insert the entries
d10 -m d01 d02 (a two-way mirror)
d01 1 1 /dev/dsk/c0t0d0s2
d02 1 1 /dev/dsk/c1t1d0s2
• Execute following commands in the specified order
# metainit d01
# metainit d02
# metainit d10
• d10 -m d01 is a one-way mirror
• d10 is called the Metamirror after entire mirror is setup.
• To add a sub mirror to existing mirror
# metattach d10 d03
• To remove a submirror (break the mirror)
# metadetach d10 d02
Mirroring……contd….
# metainit d0 -m d1
(makes a one-way mirror. d0 is the device to mount (called the
metamirror), but d1 is the only one associated with an actual device
(called the submirror). Now d0 is a "one-way mirror": there is only one
place where the data is actually stored, namely d1.)
# metattach d0 d2
(attaches d2 to the d0 mirror. Now there are 2 places where the data
are stored, d1 and d2. But you mount the metadevice d0)
# metadetach d0 d1
(detaches d1 from the d0 mirror ,breaking the mirror)
• To suspend / resume use of sub mirror use
# metaoffline d0 d2 ( suspends the use of d2 on d0 mirror)
# metaonline d0 d2 ( resumes the use of d2 device on d0 mirror)
Root Mirroring
1) Install second hard disk and create slices similar to root disk.
2) Create state data base replicas in both disks
# metadb -a -f -c 2 c0t0d0s7 c1t0d0s7
3) Edit md.tab and enter the following entries
d10 1 1 /dev/dsk/c0t0d0s0
d20 1 1 /dev/dsk/c1t0d0s0
d0 -m d10
d11 1 1 /dev/dsk/c0t0d0s1
d21 1 1 /dev/dsk/c1t0d0s1
d1 -m d11
do the same for all other slices in root disk
Root mirroring…contd….
4) Create all the meta devices using # metainit -a -f
(the -f will force to metadevice creation even on mounted slices)
5) Run metaroot command on device designated as root metamirror.
# metaroot d0
6) Copy the original /etc/vfstab and preserve it as /etc/vfstab.org. Now edit the
/etc/vfstab file and modify the entries for the swap area: change /dev/dsk/c0t0d0s1
to the corresponding swap metadevice /dev/md/dsk/d1. The line for / will already have
been updated by the metaroot command. Do the same for the remaining slices.
7) Reboot the system. This is a must. On reboot, do a df -k and swap -l to verify
that the root and swap slices are under SDS control.
8) Attach the submirror to the metamirror: # metattach d0 d20
9) This will initiate the mirror syncing process. Verify with # metastat d0
10) Run metattach commands for the remaining metamirrors. Run the next command
only after the completion of the previous resync operation.
Creating RAID-5
• Edit md.tab and insert ( -r is the keyword)
d1 -r /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 \ /dev/dsk/c2t1d0s2 -i
16k
# metainit d1
This will create a RAID5 device d1 with stripe size 16K
• metainit on existing raid-5 devices DESTROYS data
• To avoid the destruction of data on an existing RAID-5 device, the
device entry should have the -k option.
• Example :-
d1 -r /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 \ /dev/dsk/c2t1d0s2 -k -i
16k
UFS logging
Done using trans metadevices. A trans device has a master
device and a logging device. Logging avoids long fsck times
at boot: before data is written to the device, it is first
written to the logging device, and then the transaction is
committed to the actual disk. The command
# metainit d0 -t d1 d2
sets up a trans device d0 with d1 as the master and d2 as the
logging device. Recommended: 1 MB of log per 1 GB of data on the master
# metainit d0 -t c0t1d0s2 c3t2d0s5 (same as above )
For attaching and detaching a log device to/from d0, use
# metattach d0 d1
# metattach d0 c3t1d0s5
# metadetach d0
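Once the trans device exists, it is mounted in place of the raw slice. An assumed /etc/vfstab line for a file system on trans device d0 (mount point /export is an example) might look like:

```
#device to mount  device to fsck    mount point  FS type  fsck pass  mount at boot  options
/dev/md/dsk/d0    /dev/md/rdsk/d0   /export      ufs      2          yes            -
```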
metastat
The metastat command displays the current status for each
metadevice (including stripes, concatenations, concatenations of
stripes, mirrors, RAID5, and trans devices) or hot spare pool.
-p
Displays the list of active metadevices and hot spare pools in a
format like md.tab.
-s setname
Using the -s option will cause the command to perform its
administrative function within the specified diskset.
-t
Prints the current status and timestamp for the specified
metadevices and hot spare pools. The timestamp provides the
date and time of the last state change.
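The options above can be combined as below. This is a dry-run sketch (the `run` wrapper prints instead of executing; d0 and the diskset name "colour" are assumed examples). Because `metastat -p` emits md.tab-format lines, redirecting its output is a common way to snapshot the current configuration.

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run metastat                    # full status of every metadevice and hot spare pool
run metastat -p                 # md.tab-style one-liners; redirect to back up the config
run metastat -t d0              # status plus last-state-change timestamp for d0
run metastat -s colour          # perform the query within the diskset named "colour"
```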
Metareplace
The “metareplace” command is used to enable or replace
components (slices) within a submirror or a RAID5 metadevice.
When you replace a component, the metareplace command
automatically starts resyncing the new component with the rest of
the metadevice. When the resync completes, the replaced
component becomes readable and writeable. Note that the new
component must be large enough to replace the old component.
A component may be in one of several states. The Last Erred and the
Maintenance states require action. Always replace components in
the Maintenance state first, followed by a resync and validation of
data. After components requiring maintenance are fixed, validated,
and resynced, components in the Last Erred state should be
replaced. To avoid data loss, it is always best to back up all data
before replacing Last Erred devices.
Metareplace examples
•This example shows how to recover when a single
component in a RAID-5 metadevice has errored.
# metareplace d10 c3t0d0s2 c5t0d0s2
In this example, a RAID5 metadevice d10 has an errored
component, c3t0d0s2, replaced by a new component,
c5t0d0s2.
•This example shows the use of the -e option after a
physical disk in a submirror has been replaced.
# metareplace -e d11 c1t4d0s2
Note: The replacement disk must be partitioned to match
the disk it is replacing before running the metareplace
command.
Metasync
The metasync command starts a resync operation
on the specified metadevice. All components that
need to be resynced are resynced. If the system
crashes during a RAID5 initialization, or during a
RAID5 resync, either an initialization or resync
restarts when the system reboots.
Applications are free to access a metadevice at the
same time that it is being resynced by metasync.
Also, metasync performs the copy operations from
inside the kernel, which makes the utility more
efficient.
Use the -r option in boot scripts to resync all
possible submirrors.
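A minimal boot-script fragment using the -r option might look like the dry-run sketch below (the `run` wrapper prints instead of executing; d10 is an assumed example device).

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run metasync -r                 # resync every submirror that needs it, e.g. after a crash
run metasync d10                # or resync a single metadevice on demand
```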
Metaonline and metaoffline
metaoffline : This command prevents DiskSuite from reading and writing to
the submirror that has been taken offline. While the submirror is offline, all
writes to the mirror will be kept track of (by region) and will be written when
the submirror is brought back online. The metaoffline command can also be
used to perform online backups: one submirror is taken offline and backed
up while the mirror remains accessible. (data redundancy is lost while one
submirror is offline.) The metaoffline command differs from the metadetach
command because it does not sever the logical association between the
submirror and the mirror. To completely remove a submirror from a mirror,
use the metadetach command.
When the metaonline command is used, reading from and writing to the
submirror resumes. A resync is automatically invoked to resync the regions
written while the submirror was offline. Writes are directed to the submirror
during resync. Reads, however, will come from a different submirror. Once
the resync operation completes, reads and writes are performed on that
submirror. The metaonline command is only effective on a submirror of a
mirror that has been taken offline. Note: A submirror that has been taken
offline with the metaoffline command can only be mounted as read-only.
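The online-backup procedure described above can be sketched as a dry-run transcript (the `run` wrapper prints instead of executing; d0/d2, the /export mount point, the lockfs step, and the tape device are assumed examples, not part of the original text).

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run lockfs -w /export           # assumed extra step: flush and write-lock for a consistent image
run metaoffline d0 d2           # take submirror d2 offline; writes to d0 are tracked by region
run lockfs -u /export           # release the write lock; the mirror stays accessible
run ufsdump 0f /dev/rmt/0 /dev/md/rdsk/d2   # back up from the offlined submirror
run metaonline d0 d2            # bring d2 back; DiskSuite resyncs the tracked regions
```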
Metattach and metadetach
metattach is used to add submirrors to a mirror, add logging
devices to trans devices, or grow metadevices. Growing
metadevices can be done without interrupting service. To grow the
size of a mirror or trans, the slices must be added to the
submirrors or to the master devices. DiskSuite supports one-, two-,
and three-way mirrors.
To concatenate a single new slice to an existing metadevice, d8.
(Afterwards, use the growfs command to expand the file system.)
# metattach d8 /dev/dsk/c0t1d0s2
This example expands a RAID5 metadevice, d45, by attaching
another slice.
# metattach d45 /dev/dsk/c3t0d0s2
metadetach is used to detach submirrors from mirrors, or detach
logging devices from trans metadevices.
metainit and metaclear
The metainit command configures metadevices and hot spares
according to the information specified on the command line or it
uses configuration entries you specify in the /etc/lvm/md.tab file. All
metadevices must be set up by the metainit command before they
can be used. (The -f option tells metainit to continue even if you
have mounted slices in the metadevice.)
metaclear deletes all configured metadevice(s) and hot spare
pool(s), or the specified metadevice and/or hot_spare_pool. Once a
metadevice or hot spare pool is deleted, it must be recreated using
metainit before it can be used again.
Any metadevice currently in use (open) cannot be deleted.
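A dry-run sketch of tearing down and recreating a metadevice (the `run` wrapper prints instead of executing; d1, /export and the slice names are assumed examples):

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run umount /export              # a metadevice in use (open) cannot be cleared
run metaclear d1                # delete the metadevice
run metainit d1 2 1 c0t1d0s2 1 c2t1d0s2   # recreate it, here as a two-slice concatenation
run metaclear -a                # alternatively: delete every configured metadevice
```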
Diskset
A shared diskset, or simply diskset, is a set of shared disk drives
containing metadevices and hot spares that can be shared
exclusively, but not at the same time, by two hosts. A diskset provides
for data redundancy and availability. If one host fails, the other host
can take over the failed host's diskset. (This type of configuration is
known as a failover configuration.)
Disksets use this naming convention: /dev/md/SETNAME
Metadevices within the shared diskset use these naming
conventions:
/dev/md/SETNAME/{dsk | rdsk}/dnumber
where setname is the name of the diskset, and number is the
metadevice number (usually between 0-127).
Hot spare pools use setname/hspxxx, where xxx is in the range 000-999.
Metadevices within the local diskset follow the standard DiskSuite
metadevice naming conventions.
metaset
The metaset command administers sets of disks shared for exclusive
(but not concurrent) access between two hosts; disksets thereby
enable a high-availability configuration. Shared metadevices/hot spare
pools can be created only from drives which are in the diskset created
by metaset. To create a set, one or more hosts must be added to the
set. To create metadevices within the set, one or more devices must
be added to the set.
# metaset -s colour -a -h red blue
The name of the diskset is colour. The names of the first and second
hosts added to the set are red and blue, respectively. (The hostname is
found in /etc/nodename.) Adding the first host creates the diskset. A
diskset can be created with just one host, with the second added later.
The last host cannot be deleted until all of the drives within the set
have been deleted. This example adds drives to a diskset.
# metaset -s colour -a c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
The drives c2t0d0, c2t1d0, c2t2d0, c2t3d0, c2t4d0, and c2t5d0 are
added. Note that there is no slice identifier ("sx") at the end.
Expanding a metadevice
The expansion process involves adding a concatenation. You will
lose all redundancy with the expansion.
# metattach d1 c3t1d0s2
extends a metadevice by concatenating a slice to the end. It
does not add a filesystem.
# growfs /dev/md/rdsk/d1
If the metadevice is not mounted, the above command
extends the filesystem to include the added section. You
cannot shrink this filesystem later.
# growfs -M /export/home /dev/md/rdsk/d1
If the metadevice is mounted, the above command will
extend the filesystem to include the concatenated section.
Again, you cannot shrink the filesystem later.
Module 21
Practical scenarios
• Some useful commands
• How to add a new disk to solaris
• How to add a new network card
• How to add swap space
• How to create alternate boot disk
• Introduction to DNS
Useful commands…...
# prtconf : Gives system configuration information like the total amount of
memory and the configuration of system peripherals formatted as a device tree.
# sysdef : lists all hardware devices, as well as pseudo devices, system devices,
loadable modules, and the values of selected kernel tunable parameters.
# dmesg : dmesg looks in a system buffer for recently printed diagnostic messages
and prints them on the standard output.
# eeprom : displays or changes the values of parameters in the
EEPROM (similar to setenv)
# vmstat : vmstat reports virtual memory statistics regarding process, virtual
memory, disk, trap, and CPU activity
# iostat : The iostat utility iteratively reports terminal, disk, and
tape I/O activity, as well as CPU utilization.
System Diagnostics
1) OBDiag runs diagnostics at the OBP level and displays test results using LEDs on
the front panel or on the keyboard. It also displays diagnostic and error
messages on the system console.
2) Along with the main logic board, it checks interfaces such as PCI, SCSI,
Ethernet, Serial, Parallel, Keyboard, Mouse, NVRAM, Audio and Video.
3) Before running OBDiag, set the OBP 'diag-switch?' variable to "true",
the 'auto-boot?' variable to "false", and reset the system
ok > setenv diag-switch? true
ok > setenv auto-boot? false
ok > reset-all
To run OBDiag
ok > obdiag
Power On Self Test(POST)
1) POST resides in the firmware of each board in a system and is used to initialize,
configure, and test system boards. POST output is sent to serial port A (for Ultra Enterprise
systems the output is sent to serial port A on the system and clock board).
2) The status LEDs give the POST completion status. If a system board fails the POST test,
the amber light stays lit.
3) To run POST
ok > setenv diag-switch? true
ok > setenv diag-level max
ok > setenv diag-device disk ( if you want to boot from disk, as the system default is "net" )
ok > setenv auto-boot? false
ok > reset-all
4) Power cycle the system (turn off, then switch on). On powering on, the output is displayed
on the device on serial port A or on the console. You may also view the results using
ok > show-post-results
Solaris OS Diagnostics commands
1) /usr/platform/sun4u/sbin/prtdiag -v
: displays system config and diagnostic info and lists any failed
field replaceable units (FRUs)
2) /usr/bin/showrev -p or patchadd -p
: display revision info on the current hardware and software
3) /usr/sbin/prtconf : displays system configuration info
4) /usr/sbin/psrinfo -v
: displays CPU info including clock speed
5) cpustat, mpstat, vmstat, iostat commands
Sun Explorer Data Collector 3.5.2
Introduction to
Crash Dump Analysis
Crash Dump Analysis
A crash dump file contains the system memory image of a
failed/running system. (The default location is /var/crash/<hostname>.)
You can enable savecore for future analysis via the "dumpadm" command
or by editing the /etc/rc2.d/S20sysetup file.
The savecore utility saves a crash dump of the kernel. It saves
the crash dump data in the file vmcore.n and the kernel's namelist
in unix.n.
You can force a crash dump by giving the "sync" command at the OK prompt.
The core dump can be analysed using the "adb" and "crash" commands
and the ACT and ISCDA tools.
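On releases that ship dumpadm, the crash-dump parameters can be inspected and set as in the dry-run sketch below (the `run` wrapper prints instead of executing; the dump-device slice is an assumed example).

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run dumpadm                                # show current dump device, savecore directory, etc.
run dumpadm -d /dev/dsk/c0t0d0s1           # use the swap slice as the dump device (assumed slice)
run dumpadm -s /var/crash/`hostname`       # where savecore writes unix.n / vmcore.n
run dumpadm -y                             # enable savecore on reboot
```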
The ACT Kernel Dump Analysis Tool
•ACT is a tool developed by engineers at Sun over the course of several years
to aid in the process of analysing kernel dumps. The ACT tool analyzes a
system kernel dump and generates a human-readable text summary.
Frequently, this text summary can be sent to Sun rather than uploading a
potentially huge core file.
•ACT prints detailed and accurate information about: where the kernel panicked,
a complete list of threads on the system, the contents of the /etc/system file
which was read when the failed system booted, a list of kernel modules that
were loaded at the time of the panic, the output of the kernel message buffer, etc.
ACT is delivered in a standard Sun package format. Simply unzip and untar the
package and install it as any other package using pkgadd. The ACT package
is installed in the directory /opt/CTEact. The actual executable can be
found in /opt/CTEact/bin/act.
•When possible, ACT should always be run from the server that produced the
core to be analysed. This tool was later obsoleted by the ISCDA tool.
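The installation steps described above amount to the dry-run sketch below (the `run` wrapper prints instead of executing; the archive file name is an assumption, since the source does not give it).

```shell
#!/bin/sh
run() { echo "$@"; }            # print instead of execute

run uncompress CTEact.tar.Z     # assumed archive name; adjust to the file actually shipped
run tar xvf CTEact.tar
run pkgadd -d . CTEact          # installs the package into /opt/CTEact
run /opt/CTEact/bin/act         # run the analyzer on the server that produced the core
```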
Initial System Crash Dump Analysis (ISCDA)