
WELCOME

EDS SUN LEVEL -1 TRAINING


Module 1

Unix Concepts
What is Unix?
What are the OS, kernel, shell, file system, processes and daemons?
What are multitasking, multi-user and distributed computing?
What is UNIX?
UNIX is a networking operating system initially developed at Bell Labs, with multi-user, multitasking and distributed computing capabilities. The effort was pioneered by Dennis Ritchie and Ken Thompson, and the OS is written in the C language.
The features that made UNIX a hit from the start are: multitasking capability, multiuser capability, portability, UNIX programs and a library of application software.
Multitasking: lets a computer do several things at once, such as printing out one file while the user edits another file.
Multiuser: permits multiple users to use the computer at the same time.
System portability: permits movement from one brand of computer to another with a minimum of code changes.
UNIX tools: UNIX comes with hundreds of programs that can be divided into two classes, integral utilities and tools.

It is based on an open standard. Vendors have customized this to suit their requirements. Important flavours of unix are Solaris, Linux, SCO Unix, HP-UX and AIX.
Solaris is based on SVR4 (System V Release 4) unix. Solaris is from Sun Microsystems.
How is UNIX organized?

The UNIX system is functionally organized at three levels:
The kernel, which schedules tasks and manages storage;
The shell, which interprets users' commands, calls programs from memory, and executes them; and
The tools and applications that offer additional functionality to the operating system.

User
Tools and Apps
Shell
Kernel
Hardware
Operating System
• Set of programs that manages all computer operations and provides an interface between the user and the hardware.
• It is an environment where the applications can
run.
• It accepts user input, routes it to the proper device
for processing, and returns program output back to
the user. The operating system interacts with
peripherals through device drivers written
specially for each component of the system
Kernel
• Acts as an intermediary between applications running on a computer and the hardware inside the computer. It controls physical and virtual memory, schedules processes, and starts and stops daemons. All commands interact with the kernel. The kernel is a major part of the operating system.
• Kernels are classified on the basis of their architecture as "monolithic" and "microkernel".
• Monolithic: uses a single big kernel. Any change requires a relinking of the kernel followed by a reboot of the system. E.g. SCO Unix and HP-UX.
• Microkernel: consists of a core kernel with a set of loadable kernel modules. The kernel modules are loaded on demand and the kernel is relinked dynamically without a reboot. It is possible to load/unload kernel modules dynamically without affecting system performance. This gives the OS a plug-and-play quality. E.g. Solaris and Linux.
Shell : The unix command interpreter
The shell interprets and translates commands entered by the user into actions
performed by the system. There are six shells by default in Solaris 8: the
Bourne shell, Korn shell, C shell, Z shell, TC shell and bash.
The default shell in Solaris is the Bourne shell.
The shell is a command programming language that provides an interface to the
UNIX operating system. Its features include control-flow primitives, parameter
passing, variables and string substitution. Constructs such as while, if then
else, case and for are available. Two-way communication is possible between
the shell and commands. String-valued parameters, typically file names or flags,
may be passed to a command. A return code is set by commands that may be
used to determine control-flow, and the standard output from a command may
be used as shell input.
The shell can modify the environment in which commands run. Input and output
can be redirected to files, and processes that communicate through `pipes' can
be invoked. Commands are found by searching directories in the file system in
a sequence that can be defined by the user. Commands can be read either from
the terminal or from a file, which allows command procedures to be stored for
later use.
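The features listed above (control flow, command exit codes, and pipelines that feed one command's output into another) can be sketched in a short Bourne-style script; the directory names are examples only:

```shell
#!/bin/sh
# Control flow with for/if, using the exit status of test ([).
found=""
for d in /etc /no/such/dir
do
    if [ -d "$d" ]
    then
        found="$found $d:present"
    else
        found="$found $d:missing"
    fi
done
echo "results:$found"

# A pipeline: ls writes to standard output, wc reads it as input.
count=`ls /etc | wc -l`
echo "/etc contains $count entries"

# $? holds the return code of the last command, usable in control flow.
true
echo "exit status of true: $?"
```

A script like this can be stored in a file and re-run later, which is exactly the "command procedures" idea mentioned above.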
File system
A file system is defined as a "hierarchy of files and directories".
Data on a Solaris system is stored in a hierarchical fashion on the file
system. Organizing data in this way makes it easy to locate and group
related operating system control files and user information.
Mounting a file system: the process of making (or attaching) a file
system part of the unix directory tree. After mounting you can access the
file system relative to its "mount point".
File systems are classified as disk based, distributed or pseudo
file systems.
Disk based: exists on a hard disk. E.g. ufs, vxfs
Distributed: network based; available over the network. E.g. NFS
Pseudo: RAM based; exists in physical RAM. E.g. tmpfs, procfs, fdfs
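A minimal mounting session on Solaris might look like the following; the device and mount-point names are hypothetical:

```
# mkdir /export/data
# mount /dev/dsk/c0t3d0s6 /export/data    (attach a ufs file system)
# df -k /export/data                      (files are now reachable relative to the mount point)
# umount /export/data                     (detach it again)
```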


Process
• A process is defined as "a part of a program under execution". A process is an
entity that executes a given piece of code and has its own execution stack, its own
set of memory pages, its own file descriptor table, and a unique process ID. A
job under unix is broken down into smaller pieces called processes, which are then
executed. A unix system executes multiple processes at the same time, making it
multitasking.
• Each process is tracked with a unique integer called the "process ID (PID)". The
process-to-PID mappings are kept in a 'process table' which is maintained by the
kernel. A process can create, terminate or communicate with other processes.
• When a process creates a new process, it is called the "parent process" and the
newly created process is called the "child process". All child processes belonging
to a parent have the same PPID (parent PID). The fork() system call is used to
create a new process.
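The PID/PPID relationship can be observed from the shell, which is itself just another process; a small sketch:

```shell
#!/bin/sh
# The shell itself is a process: $$ expands to its PID.
pid=$$
echo "shell PID: $pid"

# ps can report the parent process ID (PPID) of any process;
# the parent is whatever process fork()ed this shell.
ppid=`ps -o ppid= -p $pid | tr -d ' '`
echo "parent PID (PPID): $ppid"
```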
Process Communication
Processes communicate with each other using named pipes, sockets and IPC.
A pipe is a mechanism which allows two processes to communicate with each other. A
named pipe (also called a named FIFO, or just FIFO) is a pipe whose access point is a file
kept on the file system. By opening this file for reading, a process gets access to the
reading end of the pipe. By opening the file for writing, the process gets access to the
writing end of the pipe. If a process opens the file for reading, it is blocked until another
process opens the file for writing, and vice versa. A named pipe is created using the
'mknod' command.
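The blocking behaviour described above can be tried with a pair of shell commands. On Solaris the FIFO could be created with `mknod name p`; the portable `mkfifo` equivalent is used here:

```shell
#!/bin/sh
# Create a named pipe (FIFO) in the file system.
fifo=/tmp/fifo-demo.$$
mkfifo "$fifo"

# A writer opening the FIFO blocks until a reader opens it,
# so start the writer in the background.
echo "hello through the pipe" > "$fifo" &

# Opening the FIFO for reading unblocks the writer.
read line < "$fifo"
echo "reader got: $line"

wait
rm -f "$fifo"
```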
Socket-based mechanisms use TCP, UDP, IP, or any other protocol from the TCP/IP
protocol family as the underlying protocol for communication between processes.
Inter-process communication (IPC) methods are derived from Unix System V
Release 4. These mechanisms include message queues (used for sending and receiving
messages), shared memory (used to allow several processes to share data in memory) and
semaphores (used to coordinate access by several processes to other resources). Each of
these resource types is handled by the system. These resources also have some security
support from the system, which allows one to specify which processes may access a given
message queue, shared memory segment or semaphore set.
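On a live system the System V IPC objects described above can be listed with the `ipcs` utility (present on both Solaris and Linux; the exact report format varies between the two):

```shell
#!/bin/sh
# ipcs reports the System V IPC resources currently allocated:
# -q message queues, -m shared memory segments, -s semaphores.
ipcs -q
status=$?
echo "ipcs -q exit status: $status"
```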
Daemons
• A daemon is a program that resides in system memory. When
called upon it performs a specific system function. Administrators
can specify which daemons they wish their system to run using
scripts, or start them manually from the command line. Finding
out what daemons a machine has loaded in memory can sometimes
identify its purpose.
• Daemons can be viewed as system processes which run
transparently to the user (similar to a 'service' in Windows NT).
E.g. vold (volume management daemon), inetd (networking
daemon), lpd (printing daemon), cron (clock daemon).
• Daemons can be started or stopped using scripts in the /etc/init.d
directory under Solaris.
Terminology
• Host - a networked computer system
• Host name - a unique name for a system
• IP address - a number used by the networking software to identify the host
• Client - a host or a process that uses a service from one or more servers on a network
• Server - a host or a process that provides resources to one or more clients on the network
• Network - a group of computers connected to each other
• Multi-tasking - enables more than one process or application to be used at a time
• Multi-user - enables more than one user to access the same system resources
• Distributed processing - enables the use of resources across the network
SWAP
• Swap space is generally known as "virtual memory". It exists on hard disk
and functions as an extension to physical memory (RAM). When a process is
made ready for execution it is allotted memory area from swap. When the code
actually executes, it is moved to RAM. The swap space is divided into memory
chunks called "pages". The movement of a page from swap to RAM is called
'page-in' while the movement from RAM to swap is called 'page-out'. Page-in and
page-out are together called "paging". "Swapping" is a process in which all the
memory pages belonging to a particular process are moved out of RAM to swap
space.
• Required swap size: traditionally the rule was "double the RAM"; the modern
rule is "however much you need".
• Using raw partitions and striping swap across disks yields optimal performance.
• The system core is dumped here in the event of a system crash. Swap is
mounted under the mount point /tmp. We cannot run any consistency check on
swap; swap space is handled entirely by the OS.
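On Solaris the swap layout described above can be inspected with the `swap` and `vmstat` commands (an illustrative session, not output from a real machine):

```
# swap -l      (list the physical swap devices/files)
# swap -s      (summary of allocated, reserved and free swap)
# vmstat 5     (the pi/po columns show page-in/page-out activity)
```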
Module 2

Introduction
to
Sun hardware
and
Storage
Basic Sun Hardware
Sun is RISC processor based
Has an OpenBoot PROM and NVRAM
System information is stored in the NVRAM
(such as the host ID, Ethernet address, etc.)
The Boot PROM allows NVRAM parameters to be
configured from the ok> prompt and allows
diagnostic commands to be run
Sun has an on-board network card,
hence the motto "THE NETWORK IS THE COMPUTER"
The important Sun architectures are
sun4c, sun4m, sun4d and sun4u
Scalable Processor ARChitecture
SPARC: stands for Scalable Processor ARChitecture
Exists in three versions:
SPARC-V7, SPARC-V8 (32-bit) and SPARC-V9 (64-bit)
SPARC-V7: the original version, found in early Sun SPARC processors.
SPARC-V8: a 32-bit implementation, used in the SuperSPARC, MicroSPARC
and HyperSPARC processors; this 32-bit processor architecture is used in
sun4c and sun4m based machines.
SPARC-V9: a 64-bit implementation. UltraSPARC is a 64-bit processor,
used in Sun systems whose names start with "Ultra". Currently UltraSPARC,
UltraSPARC-II and UltraSPARC-III processors are available.
SUN 4c Architecture
• Based on the Sun SPARC microprocessor
• 32-bit architecture
• Desktop models: 4/20 (SLC), 4/40 (IPC), 4/60 (SS1), 4/75 (SS2)
• Server models: 4/280, 4/490
Sun 4m Architecture
• Multiprocessor support
• Supports MicroSPARC, SuperSPARC and HyperSPARC
processors
• All models have a built-in Centronics parallel
interface
• 'M' bus for the CPU interface, 'S' bus for add-on devices
• Various sun4m models:
• Desktop models such as SS4, SS5, Classic etc.
• Higher-end desktops such as SS10, SS20 etc.
Sun Sparc 20
• Has 2 'M' bus connectors for pluggable
SuperSPARC or HyperSPARC modules
• Processor speeds of 100/125/150MHz
• Processor modules running at different bus speeds
(40MHz and 50MHz) cannot be mixed
• 4 SBus slots with an onboard video controller
• Single serial port connector
• 2 internal Fast SCSI-2 disk bays
Sun 4d Architecture
• 'XD' bus for the processor-to-processor interface
• Server models only
• SPARCserver 1000 and 1000E models
• SPARCcenter 2000 and 2000E models
• Supports HyperSPARC and SuperSPARC
processors as well
Sun 4u Architecture
• Latest 64-bit UltraSPARC processors
• UPA (Ultra Port Architecture) bus
• Support for SBus and PCI bus
• Desktop models Ultra 140/Ultra 170 with 8-bit graphics
• Desktop models Ultra 140E/Ultra 170E with 24-bit
Creator graphics controllers
• Ultra 30/60 with PCI bus support
• Ultra 5 & 10 low-cost workstations with Enhanced IDE
and PCI architecture
Ultra Sparc Features
• 64-bit SPARC processor
• UltraSPARC can execute more instructions
per cycle than others: 4 instructions/cycle
• On-chip CPU caches of 16KB data and 16KB
instruction, plus one level of external cache up to
a maximum of 4MB
• Special multimedia instructions, like Intel's
MMX, called the Visual Instruction Set (VIS)
Ultra Port Architecture

• The UPA interconnect is a packet-switched 64-bit bus
running at 71MHz in the U140/U170 models and
83.3MHz in others, with a bandwidth of 1.3GB/s,
compared to the 400MB/s of the M bus running
at 40/50MHz.
• Multiple processors, memory and graphics can be
placed on the same bus for enhanced performance.
• Data protection through ECC.
Sun Models
1) Desktops:
Ultra 5 / 10 / 30 / 60 / 80
Sun Blade 100, 1000, 2000
2) Workgroup servers:
Ultra Enterprise 2 / 250 / 450 / 220R / 420R
SunFire 280R, 480R
3) Midrange servers:
UE 3K / 3.5K / 4K / 4.5K / 5K / 5.5K / 6K / 6.5K
Sun Fire 3800, 4800, 4810, 6800, V880
4) High-end servers:
Ultra Enterprise 10K, Sun Fire 12K, 15K
Enterprise Midrange Servers

• The server line consists of the UE 3K / 4K / 5K / 6K,
3.5K / 4.5K / 5.5K / 6.5K and SunFire
3800, 4800, 6800
• Offers hot-pluggable components, automatic
system recovery, remote monitoring, and DR/AP
for online repair and reconfiguration, to minimise
downtime
Dynamic Reconfiguration
& Alternate Pathing
• DR - a set of enhancements for the OS
• Can dynamically attach/detach system boards in
a live system without halting it
• Available in the E10K; for I/O boards only in the
E3K-6K and 3.5K-6.5K
• AP - allows I/O operations in a live system to be
redirected, without a reboot, to a predetermined
alternate path if the system board serving the
primary path must be removed from the configuration
Automatic System Recovery

Power-on self-test (POST) or OpenBoot diagnostics
detects a failed component and deconfigures it so
that the system can boot without it.
• In a running system, a component failure, such as a CPU,
immediately resets the system with the failed CPU module
deconfigured.
• This prevents the system from crashing again due to the
failed component, or failing to start up.
• A service call is generated to replace the faulty component.
Sun Desktops

Sun Ultra 5: 270-400MHz UltraSPARC-IIi single processor, max
512MB RAM in 4 DIMM slots
Sun Ultra 10: 300-480MHz UltraSPARC-IIi single processor,
max 1GB RAM in 4 DIMM slots, has a 33MHz 32-bit PCI bus,
PGX24 on-board graphics
Sun Ultra 80: 1 to 4 450MHz UltraSPARC-II processors with 8MB e-cache,
uses a 670W SMPS and a 112.5MHz UPA,
has 16 DIMM slots (8 onboard and 8 on a memory riser card)
taking 64/256MB modules, has 4 PCI slots (two 33MHz 32/64-bit,
one 33MHz 32-bit and one 66/33MHz 64/32-bit slot),
uses 40MB/s Ultra SCSI
Sun Blade range

• Sun Blade 100: desktop-style enclosure, uses a 200W SMPS,
can accommodate one ATA66 15GB hard disk, holds a 500MHz
UltraSPARC-IIe CPU, has three 33MHz 32-bit PCI slots; the display
is an on-board ATI Rage XL with 8MB external SGRAM. The model
uses PC133 JEDEC DIMMs.

• Sun Blade 1000: a dual-CPU workstation using UltraSPARC-III
(600-900MHz) processors with 4/8MB external cache, has two UPA64
slots for frame buffers, supports Creator 3D, Elite 3D m3 or
Elite 3D m6 graphics cards, and has four 64-bit PCI slots (3 at 33MHz
and one at 66MHz). The model uses Sun 232-pin SDRAM DIMMs.
Entry level servers

• Enterprise 250, 450, 220R & 420R

• E250: a high-performance, shared-memory, symmetric
multiprocessing system. Can have up to 2 CPUs of 250-400MHz
UltraSPARC-II with on-board e-cache. Can hold up to 2GB RAM in 16
DIMM slots. Has one 33MHz and one 66MHz PCI bus. Can have up to 6
hot-pluggable drives with 40MB/s Ultra SCSI. Can have up to 2
360W power supplies (offering redundancy). Has a built-in automatic
system recovery feature and uses a buffered 144-bit UPA interconnect.

• E450: can have up to 4 processors; the rest is similar to the E250

• E220R: rack-mountable system, can have up to 2 CPUs
• E420R: rack-mountable, can hold up to 4 CPUs
Mid Range servers
• Enterprise 3000/3500/4000/4500/5000/5500/6000/6500 and
• SunFire 3800, 4800 & 6800

• The E6K/5K are available in a data-centre system cabinet containing
either a 16-slot or an 8-slot card cage.

• The E4K has a standalone enclosure containing an 8-slot card cage.

• The same CPU/memory/disk boards, processor/memory modules, power
supplies, fans and internal disks can be used in both enclosures.
Sun Fire Range

• SunFire 3800, 4800, 6800 models
• They are rack-mountable, have up to 24 CPUs and 192GB of memory
and up to 32 I/O slots (PCI and CompactPCI modules), and provide
extensive redundancy
• Have a multiple "domain" feature
• Redundant power & cooling facilities
• 9.6GB/s bus bandwidth
• It is possible to "partition" a SunFire system and create
"domains" within a partition
• Partitioning is a mechanism through which the resources in a single
system behave as logically separate systems
SunFire features……...
• Domain:
• A domain is a logically independent section within a partition.
• A partition can accommodate one or more domains.
• Each domain runs its own OS and can be configured without interrupting other
domains.
• Domains can be used for testing new applications, OS updates or department-wise
domain setups.
• The SunFire 3800/4800/4810 supports up to 2 domains, while the 6800 supports 4.
• The 6800 can be divided into two partitions by logically isolating one set of
repeater boards for each partition. A 6800 partition can hold up to 2 domains.
• The alternate path within the system is available via a "repeater board".
• The repeater board acts as a switch and connects multiple CPU/memory/I/O
boards together. It has 3 components, called the address repeater (AR), SunFire
data controller (SDC) and data crossbar.
Netra systems
• Rack mountable systems
• Has no console & keyboard
• Configured through serial console port or
through ethernet port
• Typically used in ISP setups
Sun Fire systems: SunFire 15K and SunFire 6800 (photos)
New systems: Sun Blade 2000 and Sun Fire V880 (photos)
SunRay 100 & Sun Blade 100 (photos)
Ultra Enterprise 10000

Features:
• Popularly called the "Starfire"
• Can have up to 4 dynamic domains in the same server
• 16 system boards with 4 CPUs and up to 4GB of memory each
• All the boards communicate via a Gigaplane-XB backplane
• Supports up to 20TB of disk capacity
Storage Devices

The storage devices used along with sun systems are

• Uni pack
• Multipack
• Sparc Storage Array( SSA100 & SSA200)
• Sun StorEdge product family
• Sun Enterprise Network Array-SENA
• T3 array
Unipack & Multipack

• Unipack: houses one device; it can be a hard
disk, a tape device or a CD-ROM drive
• Multipack: houses more than one hard
disk/tape/CD-ROM device
• Identified with the ok> probe-scsi-all command
• Uses SCSI technology
Sparc Storage Array

• The Sparc Storage Array (SSA) connects to a host via fibre-optic
cables.
• The SSA has a Fibre Channel Optical Module (FC/OM) which
connects to a Fibre Channel SBus (FC/S) card mounted on the
host system.
• It is possible to connect a maximum of 2 FC/OMs per FC/S card.
• There are two models, the SSA100 and the SSA200.
• SSA100: has 3 drive trays, each of which can hold 10 drives.
• SSA200: has up to 6 differential SCSI disk trays, a removable
power supply, an LCD display and a disk-array controller module.
• Uses the LUN concept to identify disks.
(Diagram: HOST with an FC/S card connected by a fibre-optic cable
to the FC/OM on the SSA.)
T3 Array
• Hardware based array
• Has fibre-to-fibre architecture
• Uses an OS-independent, Ethernet-based configuration tool
called Sun StorEdge Component Manager
• Supports RAID levels 0, 1 and 5
• Available in tabletop, rack-ready or factory rack-mounted
enclosures
• Available as
• T3 Workgroup array (T3WG) and
• T3 Enterprise array(T3ES)
Comparison of T3WG and T3ES
Feature                  T3WG                    T3ES
RAID controller          Single                  Dual failover; hot-swap redundant
                                                 controllers with mirrored cache
Number of disks          Nine (9)                Eighteen (18)
Cache                    256MB, battery-backed   256MB mirrored, battery-backed
Hot-swap power supplies  Two (2)                 Four (4)
RAID levels              0, 1 & 5                1 & 5
T3 WG
• Each T3WG is a self-contained independent controller unit.
Each controller unit contains one drive tray enclosure. The
enclosure contains 2 hot-swappable redundant power
supplies, 2 hot-swappable redundant unit interconnect cards,
9 hot-swappable RAID-ready dual-ported bidirectional fibre
channel drives, and a RAID controller with 256MB of cache.
• Thus each tray has a RAID controller (hardware RAID).
• The T3WG supports 18.2/36.4/73GB drives.
• Up to a maximum of 72 drives per cabinet, and up to a maximum
of 32 cabinets can be connected to a system. Uses a special
GBIC system interface.
Sun Storedge A5000 series

• Based on second-generation fibre channel
technology with redundant features
• Has a capacity of 45GB up to 12TB
• Uses dual-ported 9GB FC-AL drives
• Supports RAID 0, 1, 0+1 & 5
• Uses a 100MB/s full-duplex fibre technology
• The A5200 array can have up to 22 drives per
subsystem
Sun storage solutions
• Desktop range: Unipack and Multipack
• Workgroup range: A1000, D1000, T3-WG
• Midrange: Sun StorEdge 3900, 6900, T3, A5200
• Data centre: StorEdge 9910, 9960
• Tape libraries: DLT 8000, L1000 etc.
Sun storage solutions (contd.): StorEdge 9960 datacentre array; StorEdge L1000 tape library (photos)
Latest systems from Sun

Sun Blade series and their code names:
100 - Grover
1000 - Excalibur
Sun Fire series and their code names:
280R - Littleneck
V880 - Daktari
3800 - Serengeti 8
4800 - Serengeti 12
6800 - Serengeti 24
12000 (12K) - Starkitty
15000 (15K) - Starcat (Serengeti 72)
Input Devices
• The Type 4 keyboard is used in sun4c and some sun4m
architecture machines. It has a DIP switch inside the
housing to set the language type. A Type 4
optical mouse can be connected to the
keyboard; it requires a reflector pad.
• Type 5 keyboards are used in all later models and can
be connected to Type 4 slots. They support the Type 5
mechanical mouse and the Type 4 or Type 5 optical mouse.
Monitors
• Many 17", 19", 20", 21" and, most recently, 24"
monitors are used. Some older models may
have separate RGB outputs and need a
converter to connect to the display cards.
• Newer models have a single connector to
connect the monitor to the machine.
Module 3

The Boot PROM


The Boot PROM
• The Boot PROM contains the program (monitor) for the power-on self-
test (POST) and the system initialization sequence. An FCode interpreter
allows the use of drivers across different hardware platforms
• PROM features
a) POST
initiated by a system reset condition or by a boot command;
verifies the basic CPU board logic; tests vary with different system models
b) Device drivers
small basic drivers that make initial contact with various peripherals during a boot operation
c) User interface / boot commands
commands to boot the system and to modify configuration information
d) Diagnostic commands
commands to display or diagnose hardware components
The Openboot concept
• OpenBoot firmware is executed immediately after you turn on your system. The
primary tasks of OpenBoot firmware are to:
a)Test and initialize the system hardware
b) Determine the hardware configuration
c)Boot the operating system from either a mass storage device or from a network
device
d)Provide interactive debugging facilities for testing hardware and software
• The OpenBoot architecture provides a significant increase in functionality and
portability when compared to proprietary systems of the past. Although this
architecture was first implemented by Sun Microsystems as OpenBoot on
SPARC(TM) systems, its design is processor-independent
• Versions: 1.x (the original SPARC boot PROM), 2.x (the first OBP), 3.x (OBP
with downloadable firmware) and 4.x (the latest version)
The openboot commands
• banner: shows the CPU, memory, host ID and Ethernet address
• boot: with -a, -r, -s, -v, -x options
• help
• reset
• probe-scsi, probe-ide, probe-fcal, probe-scsi-all, probe-sbus
• devalias and nvalias
• printenv and setenv
• show-sbus, show-devs, show-disks, show-displays, show-nets, show-post-results, show-tapes
• .enet-addr, .idprom, .version, .speed
• watch-clock, watch-net
• test floppy, test-all, test /memory
• set-default <parameter>, set-defaults
• obdiag
The Device tree
• OpenBoot deals directly with hardware devices in the system. Each device has a
unique name representing the type of device and where that device is located in the
system addressing structure.
• A full device path name is a series of node names separated by slashes (/). The root of
the tree is the machine node, which is not named explicitly but is indicated by a
leading slash (/). Each node name has the form:
• device-name@unit-address:device-arguments
• The following example shows a full device path name
• sbus@1f,0/esp@0,40000/sd@3,0:a
• 1f,0 represents an address on the main system bus, because the SBus is directly
attached to the main system bus in this example.
• 0,40000 is an SBus slot number (in other words, 0) and an offset (in other words,
40000), because the esp device is at offset 40000 on the card in SBus slot 0.
• 3,0 is a SCSI target and logical unit number, because the disk device is attached to a
SCSI bus at target 3, logical unit 0.
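Under Solaris the logical device names in /dev are symbolic links into this physical device tree, so the mapping is easy to inspect; the controller/target numbers below are illustrative:

```
# ls -l /dev/dsk/c0t3d0s0
... /dev/dsk/c0t3d0s0 -> ../../devices/sbus@1f,0/esp@0,40000/sd@3,0:a
# prtconf -pv | more      (walk the full device tree from a running system)
```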
Module 4
Solaris Installation
• Objectives:
– Features of Solaris 2.x
– define software configuration,clusters and packages
– Identify the hardware requirements for the solaris 2.x
on a standalone workstation.
– Prepare an existing system for standalone installation
– Installing the OS and reconfiguring the system
– The Solaris boot process
Capabilities of Solaris 2.x
• SunOS 5.x = Solaris 2.x
• Solaris is based on SVR4 unix
• Solaris 2.x supports the NFS distributed file system, Domain Name Service and Network
Information Service
• Current versions: Solaris 7, Solaris 8 and Solaris 9
• Features of Solaris 9 (the latest version):
• LDAP is tightly integrated into the OS (has a built-in iPlanet directory server)
• No longer supports NIS+ or the Intel (x86) Solaris version
• Solaris 9 Resource Manager: allows improved management of system resources via the use of
resource pools
• Solaris Volume Manager allows soft partitions, thus eliminating the 8-slices-per-disk barrier
• Uses the Internet Key Exchange (IKE) protocol, PPP 4.0, Solaris Secure Shell and
sendmail version 8.12
Sun system configuration
• The sun network computer environment
includes system configurations such as
– client systems
• diskless
• auto client
• java station
– standalones
– servers
Software groupings
• Packages
– is a group of files and directories .
– Eg SUNWman
• Clusters
– packages are grouped into logical collections
called clusters

Software configuration clusters
• Core = basic OS, kernel & drivers
(SUNWCreq needs 718MB of space)
• End user = core + OpenWindows GUI
(SUNWCuser needs 1.2GB)
• Developer = end user + man pages
(SUNWCprog needs 1.5GB)
• Entire distribution
(SUNWCall needs 1.9GB)
• Entire distribution + OEM
(SUNWCXall needs 2.3GB)
Hardware Requirements
• It must be based on a SPARC or Intel system
• Around 2.3GB of free disk space (for the
Entire plus OEM installation cluster)
• Minimum 64MB RAM
• It must include a CD-ROM drive or
network connectivity
Installation preparation

• Log in as root
• Have all users close all their files and log out
• Back up all user files and store the backups
• Shut down the system to an idle state
• Procure the 3 installation CDs
(Solaris Installation CD,
Solaris 8 Software 1 of 2 &
Solaris 8 Software 2 of 2)
Installation Process
• Insert the Installation CD-ROM into the drive
• Boot the release media (Solaris Installation CD)
ok> boot cdrom
• Keep the swap partition at the starting cylinder (the default
needs 512MB of space for Solaris 8)
• The mini-root will be copied and the system will reboot.
On reboot it will ask for CD 1 of 2.
• During installation answer the following
– host name
– network connectivity
– IP address
Installation process …….
• The networking information section
• The Time zone verification section
• Selecting software
– customizing the software
• Selecting disks
• Filesystem and disk layout
• Reboot info
• The whole of the system info can be re-entered
after installation using the command
# sys-unconfig
Solaris Boot Process
Each SPARC based system has a PROM (programmable read-only memory)
chip with a program called the monitor. The monitor controls the operation of
the system before the kernel is available. When a system is turned on, the
monitor runs a quick self-test procedure that checks things such as the
hardware and memory on the system. If no errors are found, the system begins
the automatic boot process.

Boot phases: there are 4 phases in Solaris. They are Boot PROM, boot programs,
kernel initialization and init.

Boot PROM :
1. The PROM displays system identification information and then runs self-test
diagnostics to verify the system's hardware and memory.
2. Then the PROM loads the primary boot program, bootblk, whose purpose is
to load the secondary boot program located in the ufs file system from the
default boot device.
Boot Process……….contd...
Boot programs:

3. The bootblk program finds and executes the secondary boot
program, ufsboot, and loads it into memory.

4. After the ufsboot program is loaded, the ufsboot program loads
the kernel.

Kernel initialization:

5. The kernel initializes itself and begins loading modules, using
ufsboot to read the files. When the kernel has loaded enough
modules to mount the root file system, it unmaps the ufsboot
program and continues, using its own resources.

6. The kernel creates a user process and starts the /sbin/init process,
which starts other processes by reading the /etc/inittab file.
Boot Process……….contd...

Init phase:

7. The /sbin/init process starts the run control (rc) scripts,
which execute a series of other scripts. These scripts
(/sbin/rc*) check and mount file systems, start various
processes, and perform system maintenance tasks.

This completes the boot process.

The platform-independent kernel is /kernel/genunix.
The platform-specific component is /platform/`uname -m`/kernel/unix.
The solaris boot programs...
bootblk: the primary boot program in Solaris. It can be installed by
running the installboot command. A copy of the bootblk is available at
/usr/platform/`arch -k`/lib/fs/ufs/bootblk.

ufsboot: the secondary boot program, /platform/`arch -k`/ufsboot, is run next.
This program loads the kernel core image files.

kernel:
For 32-bit Solaris systems, the relevant files are
/platform/`arch -k`/kernel/unix and /kernel/genunix
For 64-bit Solaris systems, the files are
/platform/`arch -k`/kernel/sparcv9/unix and /kernel/genunix
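Putting the pieces above together, a damaged bootblk can be rewritten to the boot disk with the installboot command; the raw device name below is hypothetical:

```
# installboot /usr/platform/`arch -k`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
```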
Run levels
A system's run level (also known as an init state) defines what services and resources are
available to users. A system can be in only one run level at a time. The Solaris
environment has eight run levels ( 0,S,1,2,3,4,5 and 6) . The default run level is
specified in the /etc/inittab file as run level 3.
Run level 0: shut down the system (power-down state); you will get the ok> prompt
Run level S or s: single-user state with all file systems mounted and accessible
Run level 1: administrative single-user state with access to all file systems; user logins disallowed
Run level 2: multi-user state with all daemons running except the NFS server
Run level 3: multi-user state with NFS resource sharing available
Run level 4: alternate multi-user state, currently unused
Run level 5: shut down and turn off the power if possible
Run level 6: reboot the system
You can change the current run level with one of the following commands:
# init <init level>   or   # shutdown -y -g0 -i <init level>
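An illustrative Solaris session showing these commands (the who -r output line is an example, not real machine output):

```
# who -r                       (report the current run level)
   .       run-level 3  Jan 10 09:15     3      0  S
# init 6                       (reboot: go to run level 6)
# shutdown -y -g0 -i 0         (go to run level 0, the ok> prompt)
```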
/etc/inittab file
When you boot the system or change run levels with the init
or shutdown command, the init daemon starts processes by
reading information from the /etc/inittab file. This file defines
three important items for the init process:
The system's default run level
What processes to start, monitor, and restart if
they terminate
What actions to be taken when the system enters
a new run level
Each entry in the /etc/inittab file has the following fields:
id:rstate:action:process
The /etc/inittab file controls the init process.
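Two representative /etc/inittab entries in the id:rstate:action:process format, simplified from a real Solaris 8 file (the redirection details are trimmed):

```
is:3:initdefault:                   (sets the system's default run level to 3)
s2:23:wait:/sbin/rc2 >/dev/msglog   (run /sbin/rc2 when entering run level 2 or 3)
```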


Run Control (rc) scripts
Run control (rc) scripts control run level changes. Each run level has
an associated rc script located in the /sbin directory. For each rc script
in the /sbin directory, there is a corresponding directory named
/etc/rcN.d that contains scripts to perform various actions for that run
level. For example, /etc/rc2.d contains files used to start and stop
processes for run level 2. The /etc/rcN.d scripts are always run in
ASCII sort order. The scripts have names of the form:
[KS][0-9][0-9]* ( eg:- S98sendmail, K07snmpdx)
Files beginning with K are run to terminate (kill) a system process.
Files beginning with S are run to start a system process. Run control
scripts are also located in the /etc/init.d directory. These files are
linked to the corresponding run control scripts in the /etc/rcN.d
directories.
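The ASCII sort order of the S and K scripts can be demonstrated with ordinary shell commands; the script names below are the examples from this slide, created in a scratch directory rather than a real /etc/rc2.d:

```shell
# Create a scratch directory that mimics an rc directory.
rcdir=$(mktemp -d)
touch "$rcdir/S98sendmail" "$rcdir/K07snmpdx" "$rcdir/S20sysetup"

# ls sorts in ASCII order: K scripts (kill) come before S scripts (start),
# and lower two-digit numbers run before higher ones.
order=$(ls "$rcdir")
echo "$order"
```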
Starting and stopping services
Done by invoking corresponding script in /etc/init.d directory.
For example to start the NFS server in your system use
# /etc/init.d/nfs.server start
To stop the NFS server service use
# /etc/init.d/nfs.server stop
Note : Always stop a service before starting it.
To verify whether the service has been stopped or started, use:
# pgrep -f nfs
/etc/system file
The solaris kernel is now dynamically configured. It consists of a small static
core and many dynamically loadable kernel modules. Drivers, file systems,
STREAMS modules, and other modules are loaded automatically as needed,
either at boot time or at runtime. When these modules are no longer in use,
they may be unloaded. Modules are kept in memory until that memory is
needed.
The modinfo command provides information about the modules
currently loaded on a system. Similarly, the modload and
modunload commands allow manual loading and unloading of modules.
The loading of modules is controlled by the /etc/system file. The
kernel tunable parameters are manually specified in the /etc/system
file for performance tuning etc. The file contains commands of the form:
set parameter = value
eg: set maxusers=256
Module 5
software package administration
• Objectives:
– Display software package information
– Add software package from a CD-ROM drive
– Remove a software package
– Add and remove software packages using the
admintool software program
– Add a software package from a spooled
directory
Package
• All bundled and unbundled software is
distributed as packages on a Solaris 2.x
system.
• Packages contain:
– Files describing the package
– files describing the relationship to the target
system
– The actual files to be installed
The pkginfo command
• The pkginfo command is used to display
software package information.
• pkginfo -d device -l pkgname
– # pkginfo | more
– Displaying a listing from the CD-ROM:
– # pkginfo -d /cdrom/cdrom0/s0/Solaris_2.6/Product | more
The pkgadd command
• Use the pkgadd command to add a software
package.
– pkgadd [-d device] pkg_name
– # pkgadd -d /cdrom/cdrom0 SUNWspro
– # pkgadd -d .
The pkgrm command
• Use the pkgrm command to remove a
software package.
– pkgrm package_name

– # pkgrm SUNWspro
– # pkgrm LGTOman
The pkgchk command
• The pkgchk command verifies that the
attributes and contents of package names
are correct by comparing them to their
values as specified in the system log file.
– #pkgchk SUNWaudio
Log file
• The /var/sadm/install/contents file
– The pkgadd command updates the above
file, which lists all the packages that
are installed on the system.
– This makes it possible to identify the package
that contains a particular file or related files.
– The pkgrm command uses this file to identify
where files are located and updates the file.
How to display software
information
• Admintool
• browse
• software
• Adding and removing software using
admintool.
Spooling packages
• A package can be copied from the
installation CDROM without installing it so
that it can be stored on a system.
– pkgadd -d /cdrom/cdrom0/s0/Solaris_2.6/Product -s spool
SUNWaudio
• This will spool the package into the /var/spool/pkg
directory.
• You can specify a different directory as an
argument to the -s option.
– mkdir /export/pkgs
– pkgadd -d
/cdrom/cdrom0/s0/Solaris_2.6/Product -s
/export/pkgs SUNWaudio
Command summary
• pkginfo -- lists packages installed on the
system or on media.
• pkgadd -- installs packages.
• pkgrm -- removes packages.
• pkgchk -- verifies the attributes and contents
of the path names belonging to packages.
Files and directories
• /var/sadm system log and admin files
• /opt/packagename preferred location for
the installation of the packages
• /opt/pkgname/bin preferred location for the
executable files
• /var/sadm/install/contents package map of
the entire system.
Module 6
Maintaining patches
• Objectives:
– Obtain current patch information and patches
– verify current patches installed on your system
– Install patches
– back out patches
Patch
• In its simplest form, you can think of a patch
as a collection of files and directories that
replaces or updates existing files and
directories that are preventing proper
execution of the software.
• Patches correct application bugs or add
product enhancements.
• Each patch has a readme file that details the
bug it fixes.
• The readme file contains other important info
about the patch.
Patch numbering
• Patches are assigned numbers and are packaged in a directory
named with the patch number.
• If the number is 10xxxx and the revision is yy, then the directory
name will be 10xxxx-yy ( eg: 108625-14)
• /var/sadm/patch
– contains information about the installed patches
The patch distribution can be obtained from:
• http://sunsolve.sun.com
• ftp sunsite.unc.edu
• SunSolve CDs
Using ftp
• bin
• hash
• connect
• bye
• user
• get and mget ( multiple get)
• put and mput ( multiple put)
Pre-solaris 2.6 patch contents
• The patch contents and installation tools
have changed in 2.6 .
• Pre 2.6
– the patch directory contains
• install.info and readme
• actual patches
• installpatch and backoutpatch files
Solaris patch installation
• Solaris 2.6 ( and above) patch contents
– installpatch and backoutpatch are no longer
present.
– patchadd and patchrm are used instead.
• Solaris recommended patches installation :
– Download the latest patch cluster from the
sunsolve.sun.com site.
– Unzip and untar the patch cluster in /tmp.
– Install the patches using the “install_cluster” utility.
Checking the patch status
• On pre 2.6
– showrev -p
• On 2.6 and above
– patchadd -p
To determine the current patch level use:
– showrev -p
– # uname -a (the kernel version string includes the kernel patch level)
Preparing patches for installation
• Depending on where you get the patches
from, they arrive as:
– compressed tar files (ftp and web)
• 105030-11.tar.Z
– gzip compressed files (CD)
• 105030-11.tar.gz
• Use the following commands to
uncompress and extract the patches:
zcat ./105030-11.tar.Z | tar xvf -
gzcat ./105030-11.tar.gz | tar xvf -
Patch installation
• cd into the directory where the patch is
extracted.
Pre 2.6 installation :
• cd into the patch directory
• ./installpatch .
2.6 and after :
• patchadd <patch number>
Patch cluster installation :
• cd to the patch cluster directory
• # ./install_cluster
Module 7
Adding users
• Objective:
– Use admintool to create a new group and a new user
account
– Use the appropriate default environment files from
/etc/skel to set up a user environment
– change the password
– setup password aging on an existing user account using
admintool
– Lock a user account using admintool
– delete a user account using admintool
Admintool
• Users
• groups
• hosts
• printers
• serial ports
• software
• Admintool is run under CDE or OpenWindows
Adding groups and users
• # admintool &
• Groups
• Add
– name
– number
– member list
• Command line: useradd, groupadd, userdel,
groupdel, usermod commands
• Users
– browse
– users
– add
• name
• ID
• group primary and secondary
• shell and directory
• password -- aging
Modifying a user account
• Admintool &
• select the login name of the user
• choose modify from the edit menu
• choose account is locked
• verify in the /etc/shadow
Deleting a user account
• Admintool
• select name of the user
• choose delete from the edit menu
Module 8
System security
• Objective:
– use the id command
– describe the user account
– describe the purpose of the sysadmin group
– change user ownership of files and directories
– change group ownership of files and directories
– who and last commands
– Describe the format of /etc/passwd and /etc/shadow and
/etc/group files
– restrict access to the root account
– describe how to monitor logs
Password security
• As system admin you should ensure
– all user account are protected by a password
– encourage users to perform recommended
password maintenance,such as changing them
often.
Identifying users and groups
• UID’s: ( User ID)
– UID’s provide authentication for the login
procedures.
– Identify the ownership of files and directories.
• GID’s :(Group ID)
– Identify group membership of users, files and
directories.
– A user may belong to 1 primary and 15
secondary groups.
– GID’s between 0 and 99 are reserved for
special accounts.
The id command
• Use the id command to identify your user
ID, user name, group ID and group name.
– id
– id -a (also lists all the groups to which you belong)
The super user account
• Account : root ,UID of 0 , GID of 1
• read and write access to all files stored in
the local disks
• can send kill signal to all processes under
the control of the system’s CPU.
• No limitations
Usage of root account
• Shutting down the system
• backing up and restoring file systems
• mounting and unmounting file systems
• adding user accounts
• enabling password aging
• Members of the sysadmin group can modify
databases using the admintool.
Switching users
• Becoming super user
– su
• becoming a different user
– su - bob
– su bob (without the -, your current environment is retained)
– When you use the - option, the environment of
the new user is adopted, i.e. it causes the
/etc/profile and $HOME/.profile to be executed.
File ownership
• The owner of a file identifies the user to
whom the file belongs .
• When you create a file ,you are the owner
of the file.
• ls -l (the first field shows the 10 permission characters)
Chown command
• The chown command is used to change the
ownership of the files and directories.
• Only super user can use the chown
command.
– chown user_name filename
– chown UID filename
The chgrp command
• Use the chgrp command to change the
group ownership of files or directories.
– chgrp groupname filename
– chgrp GID filename
The groups command
• Use the groups command to display group
memberships.
• Used with an argument, the groups
command displays the groups to which the
user name belongs
– groups
– groups username
Monitoring system access
• The who command
– console - displays boot and error messages
– pts - pseudo device
– term - ASCII terminal device
• who -r : Gives out current system run-level
• user login database in /var/adm/utmpx file
• user login history in /var/adm/wtmpx file
• The last command
– use the last command to display the login and
logout information.
– Displays the most recent entries first.
– What information does this command give?
• The finger command
– Displays information about local and
remote users who are currently logged in
/etc/passwd file
• Maintaining the /etc/passwd file is an integral part of system security. Without
an entry in this file, users are unable to log in to a system.
• Each passwd record has seven fields separated by a colon
login:x:UID:GID:comment:homedir:shell
loginID - this field represents the login name
x - this field is the placeholder for the user’s encrypted passwd entry in the
/etc/shadow file
UID - User ID
GID - Group ID
comment - some comment
homedir - user’s home directory ( default is / )
shell - user’s login shell ( default is the Bourne shell)
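A sample entry, with the user name, UID and paths invented for illustration:

```
jdoe:x:1001:10:John Doe:/export/home/jdoe:/bin/ksh
```

Here jdoe logs in with UID 1001, primary GID 10, home directory /export/home/jdoe and the Korn shell.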
/etc/shadow file
• Only super user can access this file.
• When a password is encrypted, it appears as
a series of numerals and uppercase and
lowercase letters unrelated to the actual
word.
• Record format
– loginID
– password
• a 13 character encrypted password
• *LK* indicates that the account is locked
• NP indicates no password
– lastchg
• indicates the number of days between January 1, 1970
and the last password modification
– Min
• minimum number of days between password
changes
– max
• maximum number of days for which password is
valid
– warn
• number of days that the user is warned before
password expires
• Inactive
– number of inactive days before the account is
locked
• expire
– this field contains the date when the user
account expires
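Since the lastchg field counts days from the epoch, today's value can be computed with a one-line calculation (a portable sketch, not a Solaris-specific command):

```shell
# Seconds since January 1, 1970, divided by seconds per day,
# gives the value the lastchg field would hold for a password
# changed today.
days=$(( $(date +%s) / 86400 ))
echo "lastchg for a password changed today: $days"
```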
The /etc/group file
• The /etc/group database defines all system
groups and specifies any additional groups
to which a user belongs.
• Record format
– groupname
– password
• this field is for a group password and is currently
unused
– GID
– userlist
The /etc/default directory
• Several files containing variables that
specify system defaults are located in the
/etc/default directory.
• The files which relate to system security are
– login
– passwd
– su
/etc/default/passwd
• Variables contained:
– MAXWEEKS
• specifies the maximum number of weeks a
password is valid.
– MINWEEKS
• specifies the minimum number of weeks between
password changes
– PASSLENGTH
• minimum password length.
/etc/default/login
• Variables contained:
– PASSREQ
• if yes null passwords are not permitted.
– CONSOLE
• if defined ,root login is permitted only on console.
/etc/default/su
• Variables contained:
– SULOG
• this value specifies the name of the file in which all
the su attempts will be logged.
– CONSOLE
• all successful attempts to become super-user are
logged into the console in addition to log file.
Monitoring the su command
• Look at the /var/adm/sulog file to verify who is
using the su command to become superuser.
• Format
– su
– 10/20 - date and time
– + or - : success or failure
– console - the device
– root-sys - the originating and target users
Role Based Access Control
(RBAC)
RBAC stands for Role Based Access Control.
RBAC can be thought of as a way to delegate system tasks to a
combination of designated users and groups. The traditional UNIX
model is one of a single computer system that shares its resources
among multiple users. However, the management of the system is
left to a single 'superuser' because the rights of this special account
give access to the entire system. This could lead to problems of
misuse or simply misunderstanding.

The RBAC system allows a subset of tasks that fall under 'root'
access to be granted to the user community, in the hopes that savvy
users can correct their own problems, and daily administrative tasks
can be off-loaded by the (usually) very busy administrator.
How RBAC works ???
RBAC elements
• The RBAC model introduces three elements to the Solaris Operating
Environment:
• Role - A special identity that can be assumed by assigned users only.
• Authorization - A permission that can be assigned to a role or user to
perform a class of actions otherwise prohibited by security policy.
• Rights Profile - A package that can be assigned to a role or user. It may
consist of
a) Authorizations ,
b) Commands with security attributes( The Solaris security attributes are
the setuid functions for setting real or effective user IDs (UIDs) and group
IDs (GIDs) on commands)
c) Supplementary (nested) rights profiles
RBAC files
• /etc/user_attr is the extended user attributes database. The file
contains users and roles with authorizations and execution
profiles.
• /etc/security/auth_attr is the authorization attributes database.
All system authorizations and their attributes are listed here.
• /etc/security/prof_attr is the execution profile attributes
database. Profiles on the system are defined here. Each profile
has an associated authorization and help file.
• /etc/security/exec_attr is the profile execution attributes
database. This file is where each profile is linked to its
delegated, privileged operation.
Module 9
Administration of initialization
fields
• Objectives:
– set up a variable in the .profile file
– maintain the /etc/profile file
– customize the templates in the /etc/skel
directory
– customize initialization files.
Initialization files for users
• Initialization files contain a series of
commands that are executed when a shell is
started.
• Two types of initialization files: system and
user.
• System files are in /etc directory
System initialization files
• The system initialization file for the Bourne
and the Korn shells is /etc/profile.
• The system initialization file for the C shell
is /etc/.login.
• Templates for these files are in the /etc/skel
directory.
/etc/profile
• The /etc/profile file
– exports environment variables such as
LOGNAME for login name.
– Exports PATH
– sets the variable TERM for the default terminal
type
– displays contents of /etc/motd file
– sets default permissions
/etc/skel directory
• It contains the templates of initialization
files .
• Use the initialization files as a starting point
for providing prototype initialization files
for the users.
Comparison of shell environments

Description      Bourne shell     Korn shell       C shell
System-wide      /etc/profile     /etc/profile     /etc/.login
User specific    $HOME/.profile   $HOME/.profile   $HOME/.login
Shell specific   NIL              .kshrc           .cshrc
Module 10
Advanced file permissions
• Objectives:
– display and change the default permissions
– set access control lists on files
– how setuid and setgid relate to system security.
– Identify and set sticky bit
– how sticky bit protects files and directories.
The umask filter
• The umask filter determines the default
permissions for files and directories.
• The permissions are assigned during the
creation of new files and directories.
• Displaying your umask
– $umask
• How umask filter works on
– files
– directories.
• Changing umask value
– umask 027
• How to permanently change the umask
value
– vi .profile
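The effect of umask 027 can be verified in any POSIX shell; files start from 666 and directories from 777, and the mask bits are removed (the file names below are invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
umask 027        # removes group write and all of other's bits
touch newfile    # 666 with 027 masked -> 640 (rw-r-----)
mkdir newdir     # 777 with 027 masked -> 750 (rwxr-x---)
ls -l newfile
ls -ld newdir
```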
Access control lists
• ACL’s provide greater control over file
permissions.
• The traditional UNIX file protection
provides read write and execution
permission for owner,group and others.
• ACL enables to define permissions for the
owner,owner’s group,other,specific users
and groups.
The setfacl command
• setfacl options aclentry filename
– options
• -m creates/modifies ACL entries
• -s replaces the entire ACL with a new ACL
• -d deletes ACL entries
• -r recalculates the ACL mask
• Acl entries
– user::perms the owner’s permissions
– group::perms permissions for owners group
– other:perms permissions for users other than
the owner or members of the owner’s group
– mask:perms the mask entry indicates the
maximum permissions allowed for users and
for groups.
ACL commands
• Creating an ACL
– setfacl -m user:username:6 filename
• checking if an ACL exists
– ls -l filename
• deleting an ACL
– setfacl -d user:username:6 filename
• The getfacl command
– to verify that an ACL was set on the file,use the
getfacl command.
– getfacl options filenames
• options
– -a (displays the file name, owner, group and ACL)
– -d (displays the default ACL, for directories)
• The following command is used to set the user
permissions to read/write, group permissions to read
only and other permissions to none. In addition, the
user ss20 is given read/write permissions on the file,
and the mask is set to read/write, which means no
user or group can have execute permission.
• setfacl -s user::6,group::4,other:0,mask:6,user:ss20:6
filename
• check for the ACL using ls -l
• get ACL info from getfacl
The setuid and setgid permissions
• Setuid and setgid on files and directories
• setgid on directories
• these special permissions enable you to control the
modification of files and shared directories.
• If a program has setuid permission,anyone who has
permission to run the program is treated as if he or
she were the program’s owner.
• If a program has setgid permission,anyone who has
permission to run the program is treated as if he or
she belonged to the programs group.
• Executable programs with setuid and setgid
permissions get their UID’s and GID’s from the
owner and group of the program file,instead of
inheriting their UID’s and GID’s from the process
that started them.
• Directories that have setgid permission propagate
their GID to files created below them. That is, new
files and directories will belong to the same group
as the parent directory.
• Identifying setuid and setgid permissions
– the setuid and the setgid bits are displayed as
the letter “s” in the execute field for owner and
group.
– ls -l
Setting UID and GID
• The setuid and the setgid permissions are set with the
chmod command. Numeric notation requires four octal
digits when specifying setuid or setgid, and uses the
leftmost digit to refer to these special permissions:
• 4=setuid, 2=setgid, 1=sticky bit
• #chmod 4755 filename
• #chmod 2755 filename
• If a capital S appears, it is an error condition indicating
that the setuid or the setgid bit is on and the execute bit
is off.
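The s and S forms can be observed on any scratch file; this is an illustrative demo, and the files are not real programs:

```shell
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
chmod 4755 "$tmp/a"   # setuid + execute -> lowercase 's' in the owner field
chmod 2755 "$tmp/b"   # setgid + execute -> lowercase 's' in the group field
chmod 4644 "$tmp/c"   # setuid without execute -> capital 'S' (error condition)
ls -l "$tmp"
```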
The sticky bit
• If a directory is publicly writable and has
the sticky bit set,files within that directory
can be removed or renamed only if one or
more of the following is true
– the user owns the file
– the user owns the directory
– the file is writable by the user
– the user is the super user
• Identifying the sticky bit
– the sticky bit is displayed as the letter “t” in the
execute field for others.
– A capital T is an undefined bit state indicating that the
sticky bit is on and execute is off.
– ls -l
• Setting the sticky bit
– chmod 1777 project
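A quick sketch showing the t flag; the directory name is invented:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/project"
chmod 1777 "$tmp/project"   # the leading 1 sets the sticky bit
ls -ld "$tmp/project"       # others' execute field shows 't'
```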
/etc/magic file
1) The “file” command is used to determine the type
of a file.
For example:
# file mac5.gz
mac5.gz: gzip compressed data - deflate method,
original file name
2) The file command identifies the type of a file using a certain “magic number”
specified in the file header.
3) The /etc/magic file specifies what magic numbers are to be tested for, what
message to print if a particular magic number is found, and additional information
to extract from the file.
4) The /etc/magic file specifies the mapping between file type and
the magic number.
Module 11
Introduction to file systems
• Objectives:
– define the geometry of the disk
– display device configuration
– describe how slices are defined on the disk
– define the term file system
– display mounted filesystems
– display disk space usage by file systems
Physical features of the disk
• A disk drive is composed of the following parts
– disks are composed of several platters
– platters rotate around a spindle
– the read/write heads are moved as a unit by the head actuator
arm.

• The smallest units on the platter are sectors of 512 bytes
each.
• Sectors are sections of a track.
• The sum of the tracks provided by all the heads at a given
position is called a cylinder.
Solaris File system Layout

VTOC (Volume Table Of Contents): contains the
partition table, partition tags and partition flags.
BOOT block: has the boot image. Present only if
the slice is bootable.
SUPER block: contains overall file system info.
INODE block: stores file permissions and pointers
to the data blocks which hold the file/directory.
DATA block: where the data is kept. The data area
is logically divided into areas of a size determined
by the block size of the file system.
UFS File system
A UFS file system has these four types of blocks:
Boot block - Used to store information used when booting the
system
Superblock - Used to store much of the information about the
file system
Inode - Used to store all information about a file except its name
Storage or data block - Used to store data for each file
The Boot Block
The boot block stores the procedures used in booting the system.
If a file system is not to be used for booting, the boot block is left
blank. The boot block appears only in the first cylinder group
(cylinder group 0) and is the first 8 Kbytes in a slice.
Super Block
Some important things it contains are:
Size and status of the file system, label (file system name and volume name), size
of the file system logical block, date and time of the last update, cylinder group
size, number of data blocks in a cylinder group, summary data block, file system
state (clean, stable, or active), path name of the last mount point, etc.
The superblock is located at the beginning of the disk slice, and is replicated in
each cylinder group. Because the superblock contains critical data, multiple
superblocks are made when the file system is created. Each of the superblock
replicas is offset by a different amount from the beginning of its cylinder group. For
multiple-platter disk drives, the offsets are calculated so that a superblock appears
on each platter of the drive. That way, if the first platter is lost, an alternate
superblock can always be retrieved. Except for the leading blocks in the first
cylinder group, the leading blocks created by the offsets are used for data storage.
A summary information block is kept with the superblock. It is not replicated, but
is grouped with the first superblock, usually in cylinder group 0. The summary
block records changes that take place as the file system is used, and lists the number
of inodes, directories, fragments, and storage blocks within the file system.
Inode Block

An inode contains all the information about a file except its name, which is kept in a
directory. An inode is 128 bytes. The inode information is kept in the cylinder
information block, and contains: The type of the file(Regular,Directory,Block
special,Character special,Symbolic link,FIFO also known as named pipe,Socket),The
mode of the file (the set of read-write-execute permissions),The number of hard links
to the file,The User ID of the owner of the file,The Group ID to which the file
belongs,The number of bytes in the file,An array of 15 disk-block addresses,The date
and time the file was last accessed,The date and time the file was last modified,The
date and time the file was created
The array of 15 disk addresses (0 to 14) point to the data blocks that store the
contents of the file. The first 12 are direct addresses; that is, they point directly to the
first 12 logical storage blocks of the contents of the file. If the file is larger than 12
logical blocks, the 13th address points to an indirect block, which contains direct
block addresses instead of file contents. The 14th address points to a double indirect
block, which contains addresses of indirect blocks. The 15th address is for triple
indirect addresses, if they are ever needed
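The reach of this address chain can be worked out with shell arithmetic, assuming the 8-Kbyte logical block size described later and 4-byte block addresses (the address size is an assumption for illustration; it varies by UFS variant):

```shell
bs=8192                          # logical block size in bytes
aptrs=$((bs / 4))                # 4-byte addresses per indirect block = 2048
direct=$((12 * bs))              # blocks 0-11, addressed directly
single=$((aptrs * bs))           # block 12: one level of indirection
double=$((aptrs * aptrs * bs))   # block 13: two levels of indirection
echo "direct:          $direct bytes"
echo "single indirect: $single bytes"
echo "double indirect: $double bytes"
```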
File system address chain

Inode addresses 0 to 11 point directly to data blocks.
Address 12 points to a single indirect block, which in turn points to data blocks.
Address 13 points to a double indirect block, which points to single indirect blocks.
Address 14 points to a triple indirect block, which points to double indirect blocks.
Data blocks and Free blocks
Data blocks :
The rest of the space allocated to the file system is occupied by data blocks, also called
storage blocks. The size of these data blocks is determined at the time a file system is
created. Data blocks are allocated, by default, in two sizes: an 8-Kbyte logical block
size, and a 1-Kbyte fragmentation size.
For a regular file, the data blocks contain the contents of the file. For
a directory, the data blocks contain entries that give the inode number
and the file name of the files in the directory
• Free Blocks :
Blocks not currently being used as inodes, as indirect address blocks,
or as storage blocks are marked as free in the cylinder group map.
This map also keeps track of fragments to prevent fragmentation from
degrading disk performance.
Introducing disk slices
• Slices
– disk storage devices are divided into sections
called slices.
– A disk drive provided by Sun can contain up to
eight slices, labeled 0 through 7.
– Slices 0 and 1, by default, contain root and swap
respectively.
– By definition, slice 2 represents the entire disk.
• Slices are configured during installation.
• The advantages to partitioning are
– functionally organize the data
– enables the super user to develop backup
strategies.
File systems
• The file structure tree consists of a root file
system and a collection of mountable file
systems.
– The root filesystem
• system operating files and directories.
– The /usr filesystem
• admin utilities and library routines
– /export/home filesystem
• users home directories.
– /opt file system
• contains optional unbundled and third party
software.
Logical device names
• Contained in the /dev directory
• consists of
– controller number
– target number
– disk number
– slice number
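The naming convention can be picked apart with ordinary string handling; the device name below is an example and need not exist on the system:

```shell
dev=c0t3d0s5
# Extract each numeric component from the cXtYdZsN pattern.
controller=$(echo "$dev" | sed 's/^c\([0-9]*\)t.*/\1/')
target=$(echo "$dev" | sed 's/.*t\([0-9]*\)d.*/\1/')
disk=$(echo "$dev" | sed 's/.*d\([0-9]*\)s.*/\1/')
slice=$(echo "$dev" | sed 's/.*s\([0-9]*\)$/\1/')
echo "controller=$controller target=$target disk=$disk slice=$slice"
```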
Files for mounting
• The /etc/vfstab file
– this maintains all the information required to
mount a file system at the boot time.
– Discuss the format/fields of this file.
• The /etc/mnttab file
– this contains a record of all the mounted file
systems.
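A sample /etc/vfstab line (device names invented for illustration); the seven fields are: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options:

```
#device to mount   device to fsck      mount point   FS type  fsck pass  mount at boot  options
/dev/dsk/c0t3d0s5  /dev/rdsk/c0t3d0s5  /export/home  ufs      2          yes            -
```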
Mounting filesystems
• The mount command
– the mount command when issued without any arguments displays
the currently mounted file systems.
• Mount
• A local file system is attached to the root file structure with
the mount command.
• The directory on which the filesystem is mounted to the root
file system is called a mount point.
– Mount filesystem mountpoint
• Mounting a large file enabled file system
– file systems containing files larger than 2 Gbytes can be mounted
without any special options
• Mounting a small-file enabled file system
– the nolargefiles option with the mount command
will force all files subsequently written to the
filesystem to be smaller than 2 Gbytes
• mount -o nolargefiles filesystem mountpoint
• this option fails if
– the filesystem contains a large file at the time of
mount
The mountall command
• The mountall command mounts multiple
file systems specified in a file system table.
• The local /etc/vfstab file is referenced.
• It will mount only those filesystems with
“yes” in the boot field of that file.
Unmounting file systems
• The umount command
– The umount command unmounts a currently
mounted filesystem that is specified in one or
more arguments as a mount point .
• #umount mountpoint
• the umountall command
– the umountall command causes all mounted file
systems except root, /proc, /var and /usr to be
unmounted.
Displaying the capacity of file
systems
• The df command
– the df command is used to display information for each
mounted file system.
• df -k directory
• -k displays usage in Kbytes and subtracts the space reserved
by the OS from the amount of available space.
• -h : a new option in Solaris 9 to list file system sizes in MB, GB
or TB
• The du command
– the du command is used to display the number of the
disk blocks used by directories and files.
– Options with the du command
• -k displays in Kbytes
• -s displays only the summary in 512 byte blocks.
• -a display the number of blocks used by all files and
directories within the specified directory hierarchy.
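A small demonstration of du on a scratch directory (paths and sizes invented):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/data"
# Create an 8-Kbyte file so there is something to measure.
dd if=/dev/zero of="$tmp/data/file1" bs=1024 count=8 2>/dev/null
du -sk "$tmp/data"    # summary for the directory, in Kbytes
du -ak "$tmp/data"    # every file and directory, in Kbytes
```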
The quot command
• The quot command displays how much disk
space is used by users.
– quot -af
• -a : report on all mounted file systems
• -f : include the number of files
Module 12
Introduction to disk management
• Objectives:
– utilities to create,check and mount file systems
– list the potential advantages of any virtual disk
management application.
– List difference between Solstice DiskSuite and
Veritas Volume Manager
– advantages of concatenated and striped virtual
file system.
Preparing a slice for use as a file
system
• Before a slice or an entire disk can be used
to store data, it must first have a basic
filesystem structure created on it.
• The newfs utility is used for this purpose
• the newfs utility will destroy any existing
data on a slice.
– # newfs /dev/rdsk/c0t3d0s5
– # mkfs -F <FS type> <raw-slice name>
• To create additional swap space use:
# mkfile 100m /home/swap/myswapfile
(activate it with # swap -a /home/swap/myswapfile)
Checking new file system
• The fsck command detects and interactively repairs
inconsistent file system conditions.
• Using fsck without any arguments will perform file
system checks on all file systems listed in the local
/etc/vfstab file.
– # fsck /dev/rdsk/c0t3d0s4
– To find the alternate superblocks, use the
“-N” option of the newfs command.
– View VTOC contents using the
# prtvtoc command
Mounting a new filesystem
• Mounting a file system
– mount /test
– mount /dev/dsk/c0t3d0s4 /test
– importance of the -o option in the mount command
– cd /test
– ls
• File system limitations
– a file system can consist of only a single slice
– a file system can be no larger than one Tbyte in
size.
• Block device and raw device paths.
– mount /dev/dsk/c0t0d0s7 /mnt
– newfs /dev/rdsk/c0t0d0s7
– fsck /dev/rdsk/c0t0d0s7
Virtual volume management
• In order to overcome the limitation of one slice
per file system, there are virtual volume
management applications that can create virtual
volume structures in which a file system can
consist of an almost unlimited number of disks or
slices.
• There are two virtual volume managers available
through Sun:
– Solstice DiskSuite
– Veritas Volume Manager (VxVM)
Access paths
• The key feature of all virtual volume management
applications is that they transparently control a file
system that can consist of many disk drives.
• The physical access paths are similar to regular devices
in that they have both raw and block device path.
• The following are typical virtual volume device path
names:
– /dev/md/rdsk/d42
– /dev/md/dsk/d42
– /dev/vx/rdsk/apps/logvol
– /dev/vx/dsk/apps/logvol
• Virtual volume building blocks
– Solstice DiskSuite: it uses standard partitioned
disk slices that have been created using the
format utility.
– Veritas VxVM: it manages disk space in a
partitionless environment. The Veritas
application specially formats the disks and
internally keeps track of which portions of a
disk belong to a particular volume.
• All Veritas volumes are composed of pieces
called subdisks.
• Virtual volume types:
– concatenated volumes
– striped volumes
Concatenated volumes
• A concatenated volume combines portions
of one or more physical disks into a single
virtual structure.
• The portions are contiguous
• it creates a volume that is larger than one
physical disk.
• A volume can be grown "on the fly"
Striped volumes
• Striping is a term for breaking up a data
stream and placing it across multiple disks
in equal-sized segments.
• Each physical disk is attached to a different
system interface
• data segments can be written in parallel
• performance improvement
RAID summary
§ RAID stands for Redundant Array of Inexpensive Disks.
§ The different raid levels are
• RAID 0 : Striping without parity, Not redundant
• RAID 1: Mirroring or Duplexing
• RAID 2: Uses hamming error correction codes
• RAID 3: Byte level data striping with fixed parity disk
• RAID 4: Block level data striping with fixed parity disk
• RAID 5: Block level data Striping with distributed parity
• RAID 1+0 : stripe across mirrored pairs (mirror, then stripe)
• RAID 0+1 : mirror two striped volumes (stripe, then mirror)
• RAID 5+0 : stripe across multiple RAID 5 arrays
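The capacity cost of each level shows up directly in simple arithmetic. A rough sketch, assuming a made-up array of six 100-Gbyte disks (the disk count and size are illustrative, not from the course material):

```shell
# Usable capacity for an array of N disks of SIZE Gbytes each.
# N and SIZE are invented example values.
N=6
SIZE=100

echo "RAID 0 (stripe, no redundancy): $(( N * SIZE )) Gbytes"
echo "RAID 1 (mirrored pairs):        $(( N * SIZE / 2 )) Gbytes"
echo "RAID 5 (one disk of parity):    $(( (N - 1) * SIZE )) Gbytes"
echo "RAID 1+0 (stripe of mirrors):   $(( N * SIZE / 2 )) Gbytes"
```

Mirroring always halves usable capacity, while RAID 5 gives up only one disk's worth regardless of array size.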
Module 13
Networks
• Objectives:
– describe IP addressing classes A, B and C
– functions of the files
hosts, nodename, hostname.<interface>
– identify users logged in to local network
– log into one machine from another machine
– execute a command on another system
• Copy files from one system to another
• describe the files hosts.equiv and .rhosts
• ping and spray
• netstat -i command
Network terminology
• Broadcast bus
• CSMA-CD
• ethernet interface
– all sun workstations have an ethernet interface
built into the CPU board.
– The most common is le0 interface
– other interfaces are hme0 (100Mbps)
Ethernet address
• The ethernet address is a 48 bit number.
• It is represented by hexadecimal digits and
is subdivided into six two-digit fields
separated by colons.
• The ethernet address is also called the
MAC (media access control) address
• it is globally unique
Internet
• An internetwork is a linked group of LANs connected to a wide
area network.
• For a network of computers to communicate, each must have a
unique address that is known to the other computers on the
network.
• Internet addresses are 32 bits, which are divided into four 8-bit
fields.
• Each 8-bit field is represented by a decimal number between 0
and 255.
– [0-255|0-255|0-255|0-255]
• each internet address is divided into network number and the
host number.
• Network number
– the network number identifies your network to
the outside world.
• Host number
– you assign the host number that uniquely
identifies your workstation on your network.
– Do not use 0 or 255 for your host number.
Internet network classes
• Class A
– first bit is 0
– very large networks
– up to 16 million hosts
– the first 8 bits are the network number; it can
be up to 127 for class A networks
• Class B
– large networks, up to 65000 hosts
– first two bits are 10
– next 14 bits are the network number
– the network number can be between 128 and
191
• Class C
– small and mid-sized networks, up to 254 hosts
– first 3 bits are 110 and the next 21 bits are the
network number.
– This allows up to 2,097,152 class C networks.
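The classful rules above can be sketched as a small shell helper that looks only at the first octet; `ip_class` is a name invented for this illustration, not a system command:

```shell
# Classify an IPv4 address under classful addressing rules:
# first octet 0-127 -> A, 128-191 -> B, 192-223 -> C, above -> D/E.
ip_class() {
    first=${1%%.*}              # keep only the first dotted-decimal field
    if   [ "$first" -le 127 ]; then echo A
    elif [ "$first" -le 191 ]; then echo B
    elif [ "$first" -le 223 ]; then echo C
    else echo "D/E"
    fi
}

ip_class 10.2.3.4      # class A
ip_class 150.50.1.1    # class B
ip_class 200.1.1.1     # class C
```

Note that first-octet values 0 and 127 are reserved in practice, even though they fall in the class A bit pattern.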
Networking files
• The /etc/inet/hosts file
– each IP address has a corresponding host
name.
– This file associates the IP address with the host
names.
– The /etc/hosts file is a symbolic link to this
file.
• The /etc/inet/netmasks file
– contains the subnet mask of the system.
. The /etc/defaultrouter file contains the default
gateway (to be created manually)
• The /etc/nodename file
– this file contains the host name.
• The /etc/hostname.le0 file
– this file identifies the ethernet interface such as le0 to be
configured at boot time and contains the host name or the
host's IP address.
• The /etc/hostname6.hme1 file links the interface hme1
to system name and binds it to IPv6
• The /etc/passwd file
– this file is looked at by the system when the remote access is
requested.
– An entry for the user in the local system's
passwd file enables that user to log in remotely.
• The /etc/hosts.equiv file
– this file identifies the remote systems as trusted
hosts.
– Advantage is that the need for sending ASCII
passwords on the network can be avoided.
• The users .rhosts file
– the rlogin process searches for this file.
– By default this file does not exist.
• Both hosts.equiv and .rhosts files have the
same format
– hostname
– hostname username
• If only the host name is used then users
from the named hosts are trusted
• if both the hostname and the username are
used then only the named users are trusted.
• If + is used then all the systems and all the
users are trusted.
The rlogin command
• This command enables a login session on a
remote system.
• The success of this command depends on
the hosts.equiv and the .rhosts file entries.
• Format
– rlogin hostname [-l username]
• use the -l option to specify a different user.
The rsh command
• This is used to execute a program on a
remote system
– format
– rsh hostname command
The rcp command
• This enables you to copy files or directories to and
from another machine.
– Format
– rcp sourcefile hostname:destinationfile
– rcp hostname:sourcefile destinationfile
– rcp -r /perm saturn:/tmp
• the rsh and the rcp commands require appropriate
entries in the hosts.equiv and .rhosts file.
The telnet command
• This is an industry standard program that
uses a server process to connect to the
operating system.
• The telnet server simulates a terminal to
enable a user to log into a remote host
system and work in that environment.
The ftp command
• It is used to send files to and get files from a
remote system.
• Files can be transferred in ASCII, binary and
dos formats.
The rusers command
• It is used to identify the users logged into a
remote system on the network
– format
– rusers hostname
• -a gives a report for all the systems
• -l gives a long listing.
The ifconfig command
• It is used to assign an address to a network
interface and to configure network interface
parameters.
• # ifconfig -a
• To change ip
• # ifconfig hme1 down
• # ifconfig hme1 192.9.55.26 netmask
255.255.255.0
• # ifconfig hme1 up
The ping command
• It sends an echo request to the named hosts.
• It does not tell you the state of the system but only
that its network interface is configured.
• PING (Packet Internet Groper) uses ICMP
(Internet Control Message Protocol) echoes to
check whether a destination is reachable or not.
• Used to check physical connectivity to a
networked system.
The spray command
• Unlike the ping command, this command uses a
higher-level protocol.
• This command is typically used to test the response
of the system over a period of time.
• spray sends a one-way stream of packets to host
using RPC and reports how many were received,
as well as the transfer rate.
• spray is not useful as a networking benchmark, as
it uses unreliable connectionless transports, such
as UDP.
The netstat command
• This command displays the status of
various network related data structures.
• The output consists of
– name the network interface
– MTU maximum transmission unit.
– Net/Dest the name of the destination
– address the host name.
– Ipkts/Ierrs the number of input packets and
errors since the interface was configured
– Opkts/Oerrs the number of output packets and
errors since the interface was configured
– collis the number of collisions on this
interface
– queue the number of packets awaiting
transmission at the interface.
Adding routes
• To add/delete routes use “ route “ command
• To display the current routes use
# netstat -r
• To add a route use
# route add 192.0.2.32/27 somegateway
• will create an IPv4 route to the destination 192.0.2.32
with a netmask of 255.255.255.224
# route add -inet6 3ffe::/16 somegateway
• will create an IPv6 route to the destination 3ffe:: with
a netmask of 16 one-bits followed by 112 zero-bits.
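The prefix notation used above (/27, /16) maps to a dotted netmask mechanically: each prefix bit is a one-bit in the mask. A minimal sketch; the `prefix_to_mask` helper is invented for this example:

```shell
# Convert a CIDR prefix length to a dotted-decimal netmask.
prefix_to_mask() {
    bits=$1
    mask=""
    for i in 1 2 3 4; do
        if [ "$bits" -ge 8 ]; then
            octet=255; bits=$(( bits - 8 ))     # a fully-set octet
        else
            octet=$(( 256 - (1 << (8 - bits)) )); bits=0   # partial octet
        fi
        mask="$mask$octet"
        [ "$i" -lt 4 ] && mask="$mask."
    done
    echo "$mask"
}

prefix_to_mask 27   # 255.255.255.224, as in the route example above
prefix_to_mask 16   # 255.255.0.0
```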
/etc/inet/networks
• Network name database file
• The networks file is a local source of
information regarding the networks which
comprise the Internet.
• The network file has a single line for each
network, with the following information:
<official-network-name> <network-number>
<aliases>
/etc/inet/netmasks
The netmasks file contains network masks used to implement IP subnetting. It supports
both standard subnetting and variable length subnetting . When using standard subnetting
there should be a single line for each network that is subnetted in this file with the network
number, any number of SPACE or TAB characters, and the network mask to use on that
network. Network numbers and masks may be specified in the conventional IP `.' (dot)
notation (like IP host addresses, but with zeroes for the host part). For example,
128.32.0.0 255.255.255.0
• When using variable length subnetting, the format is identical. However, there should be a
line for each subnet with the first field being the subnet and the second field being the
netmask that applies to that subnet
128.32.27.16 255.255.255.240
128.32.27.32 255.255.255.240
128.32.27.48 255.255.255.240
128.32.27.64 255.255.255.240
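ANDing an address with its mask yields the subnet number, which is how the 255.255.255.240 mask above partitions 128.32.27.x into blocks of 16. A sketch; `network_of` is a helper name made up here:

```shell
# Compute the (sub)network number of an IP address under a given mask
# by ANDing the octets pairwise.
network_of() {
    ip=$1; maskaddr=$2
    oldIFS=$IFS; IFS=.
    set -- $ip;       i1=$1 i2=$2 i3=$3 i4=$4
    set -- $maskaddr; m1=$1 m2=$2 m3=$3 m4=$4
    IFS=$oldIFS
    echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
}

network_of 128.32.27.53 255.255.255.240   # falls in the 128.32.27.48 subnet
```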
Ndd command
• get or set driver configuration parameters pertaining to TCP/IP family
• To see which parameters are supported by the TCP driver use the following
command:
• # ndd /dev/tcp \?
• To disable IPv4 packet forwarding
• # ndd -set /dev/ip ip_forwarding 0
• To enable IPv4 packet forwarding
• #ndd -set /dev/ip ip_forwarding 1
• To check link status use
• # ndd -get /dev/hme link_speed
• ( will return a value 0 for 10Mbps speed and 1 for 100Mbps )
Snoop command
• snoop captures packets from the network and displays
their contents. snoop uses both the network packet
filter and streams buffer modules to provide efficient
capture of packets from the network. Captured packets
can be displayed as they are received, or saved to a file
for later inspection.
• To capture output to a file use -o option
• # snoop -o outputfile host1 host2
• this will capture packets between host1 and host2 and
save them to a file called "outputfile" for future
analysis.
Module 14
Backup and recovery
• Objectives:
– dump a file system to tape using the ufsdump utility
– restore files or file systems from tape using the
ufsrestore utility
– recover the /(root) and /usr file systems
– discuss tar, cpio and dd
– use the mt utility
Why backups?
• Most crucial system admin function
– accidental file removal
– originals get lost or damaged
– hardware failure
– external failure of the system
– internal failure of the system
• a system admin should act as though any of
these events could happen today.
Types of backup
• Full dumps
– dumps that backup the entire file system
• incremental dumps
– dumps that backup only those files that have
changed since the last lower-level dump.
Incremental backups
• The ufsdump command has 10 backup levels (0-9)
• levels 1 through 9 are incremental backups.
• they backup those files that have changed
since the last dump at lower level.
• They depend on the information stored in
the /etc/dumpdates file to decide which files
to backup.
The ufsdump command
• It is used to backup a file system.
• Format
– ufsdump options files_to_dump
• options
– 0-9 the dump level option
– u update the dumpdates file
– c set the blocking factor to 126; this causes the
dump to write 63 Kbyte records instead of 32 Kbyte
– a create an online archive of the file names
dumped
– f specify the device name where the dump will
be taken
– v verifies data on tape against data on file
system.
How to back up a file system
• Check for system activity
• notify all the users that the system will be unavailable
• bring the system to run level S
• verify the file system using fsck
• perform a 0 level dump
– #ufsdump 0cuf /dev/rmt/0 /export/home
Performing remote backups
• To perform a remote backup you must
– have a root access privileges on the system with
tape device.
– Specify the server:tape_device in the ufsdump
or ufsrestore command line
• #ufsdump 0uf mars:/dev/rmt/0 /export/home
Restoring filesystems
• Reasons
– adding a new disk drive
– reinstalling or upgrading the OS
– reorganizing the filesystems or disks
– re-creating a damaged file system.
The restoresymtable file
• This file is created when restoring the entire
contents of a dump tape.
• The restoresymtable file is used for
checkpointing, which is information passed
between incremental restores.
• This file is not required once the restoration
is complete.
The ufsrestore command
• This command extracts files from a backup
created by the ufsdump command
• format
– ufsrestore options filename
• options
– i perform an interactive restore
– r restore the entire backup
– t list the table of contents of the backup.
– v display the pathnames of the files that are
restored.
How to restore files
• Load the tape in the tape drive
• become the superuser
• change your working directory to a temporary
location, such as /var/tmp
• display the contents of the tape and identify the
correct path names of the files to be restored
• extract the files
– #ufsrestore xvf /dev/rmt/0 file
• Check the restored files and move them to their
correct location.
How to perform an interactive
restore
• Change your working directory to a temporary
location such as /var/tmp
• start the ufsrestore with the interactive option
– #ufsrestore ivf /dev/rmt/0
• display the tape contents
• add files to the extraction list
– Extract the files
– exit the interactive restore
– check the restored files and move them to their correct
location.
How to move a filesystem
• Unmount the file system
• check the file system with fsck
• dump the filesystem to tape
• use the format utility to partition a new disk
• create a new filesystem on the new disk
• check the file system with fsck
• mount the new file system on a directory
• Restore the file system
• remove the restoresymtable
• check the restored file system with fsck
How to restore the root file
system
• Load and boot the release media to run level S.
• Create the new file system if necessary
• check the file system with fsck
• mount the file system to /a directory and change to that.
• restore the root file system
• remove the restoresymtable file
• unmount the new file system
• check the restored file system with fsck
• reboot the system.
The mt command
• This enables direct tape manipulation.
• Format
– mt command
• command
– status displays the status information
– rewind rewinds the tapes
– retention
– erase
– bsf
– fsf
The tar command
• This enables you to back up single or multiple files in a
directory hierarchy.
• Format
– tar options filename
• options
– c create a new tar file
– t list the table of contents of the tar file
– x extract the specified files from the tar file
– f use the next argument as the name of the device
– v print the file names as they are restored
– p restore the files with their original permissions.
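The options above can be exercised end to end. A portable sketch that archives to an ordinary file rather than the /dev/rmt/0 tape device used elsewhere in this module (the /tmp paths are made up for the demo):

```shell
# Create, list, and extract a tar archive.
mkdir -p /tmp/tardemo/src /tmp/tardemo/dst
echo "hello" > /tmp/tardemo/src/a.txt

cd /tmp/tardemo/src
tar cvf /tmp/tardemo/backup.tar a.txt    # c: create, v: verbose, f: archive name
tar tvf /tmp/tardemo/backup.tar          # t: table of contents

cd /tmp/tardemo/dst
tar xvf /tmp/tardemo/backup.tar          # x: extract into the current directory
cat a.txt
```

Pointing f at a tape device instead of a file is the only change needed for the tape workflow the slides describe.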
The cpio command
• The cpio command creates an archive of
single or multiple files by taking a list of
names from standard input and writing the
archive to standard output,which is usually
redirected to a device file.
• Command format
– [command|] cpio options [> filename]
Cpio options
• Options
– o create an archive file
– i extract the archive
– B set the block input/output record to 5120; the
default size is 512 bytes
Cpio examples
Create an archive of the current directory contents:
# find . -print|cpio -ocvB >/dev/rmt/0
# ls |cpio -oc > /export/home/backup.cpio
Create cpio backup of directory /data/test in /backup/test
# find /data/test -depth -print | cpio -oc > /backup/test
Extract the readme file from the cpio archive
# cpio -ivcB readme < /dev/rmt/0
Extract the files from the cpio archive /backup/test use
# cpio -icvd < /backup/test
( this will restore to directory where you are invoking the cpio command)
List the file names contained in a cpio archive
called “db.cpio” use # cat db.cpio | cpio -ivt
The dd command
• It converts and copies files with various
data formats.
• Format
– dd [options]
• options
– if input file
– of output file
– bs=n block size.
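A minimal dd round trip, writing to a plain file instead of a tape device (the file names are illustrative):

```shell
# Copy a file with an explicit block size; dd reads and writes
# in bs-sized blocks regardless of the data's content.
printf 'ABCDEFGH' > /tmp/dd_in
dd if=/tmp/dd_in of=/tmp/dd_out bs=4   # if: input file, of: output file
cat /tmp/dd_out
```

The same form with a raw device as if= or of= is how dd is used to image slices or write tapes.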
MODULE 15
Network File System ( NFS)
• OBJECTIVES:
– Describe the functions of an NFS
• Server
• Client
– Determine what directories or file systems a
server is sharing.
– Mount a remote resource on a client from the
command line.
Client - Server model
• Essentially a software model
• A “Server” component ‘gives out’ services
• A “client” component ‘takes’ the service
from the server
• A server is by default a client to itself.
• Both the components can reside in the same
physical system or on different systems
– Client-side daemons: statd and lockd
– Server-side daemons: nfsd and mountd
Client - server communication
• TCP/IP based programs interact with each other using the
TCP/IP suite as the underlying protocol suite via TCP/UDP
ports.
• For the client server interaction, knowledge of the source
IP, source port, destination IP and destination port is essential.
• A port is defined as the end point of communication.
• The services in a host are identified using unique port numbers.
Eg: telnet uses port 23, smtp uses 25 and pop uses 110.
• Ports below 1024 are well-known ports and they are reserved
• Ports above 1024 are open to users.
/etc/services file
• The /etc/services file is a local source of information regarding
each service available through the Internet
• Maps well known services to port numbers.
• The /etc/services file contains information regarding the known
services available in the DARPA Internet. For each service, a
single line should be present with the following information:
service_name port_number protocol_name aliases
• Fields can be separated by any number of SPACE and/or TAB
characters. A `#' (number sign) indicates the beginning of a
comment;
• Any newly added service must have a unique entry in this file,
otherwise it may fail to work.
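The service_name/port/protocol layout described above is easy to parse with awk. A sketch using a sample file in the same format (the sample path is made up; the entries are standard well-known services):

```shell
# Build a small file in /etc/services format.
cat > /tmp/services.sample <<'EOF'
# service_name  port/protocol  aliases
telnet          23/tcp
smtp            25/tcp         mail
pop3            110/tcp        pop
EOF

# Print "name port" pairs, skipping comment and blank lines;
# split() separates the port from the protocol at the "/".
awk '!/^#/ && NF { split($2, p, "/"); print $1, p[1] }' /tmp/services.sample
```

Running the same awk line against the real /etc/services gives a quick name-to-port listing.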
Remote Procedure Call(RPC)
• A network service must use an agreed-upon unique port number.
To eliminate the problem of too many hosts and too many
services to configure and maintain distinctive information for,
Sun created an RPC service that does not require predefined port
numbers to be established at boot time.
• A process, rpcbind, interprets incoming requests and sends them
to the appropriate server processes. Using RPC, clients are given
the actual port number at connection time by rpcbind (listening
at well-known port 111). RPC services register themselves with
rpcbind when they start, and are assigned an available port
number at that time. RPC services are named rpc.<daemon>.
/etc/rpc file
• To see which services are currently running, use the rpcinfo -p command.
• The configured ports for RPC are listed in /etc/rpc. The /etc/rpc file is a local
source containing user readable names that can be used in place of RPC
program numbers.
• The rpc file has one line for each RPC program name. The line has the
following format:
RPC_program_name RPC_program_number aliases
• sample /etc/rpc file :
rusersd 100002 rusers
nfs 100003 nfsprog
mountd 100005 mount showmount
walld 100008 rwall shutdown
rpcinfo
• To see which services are currently running, use the “ rpcinfo -p”
command.
• An RPC program is written in such a way that when it initializes itself at
start time, it will contact rpcbind and register itself with rpcbind.
• On registration rpcbind will allocate the next available port number to
the service. All subsequent requests to the service are intercepted by
rpcbind and provided with the assigned port number.
• TIP : ERROR:: “RPC program not Registered “
• This is a very misleading error message.
• If you see this error , please ensure that the corresponding daemon is
running and the service is available in the system
The Solaris NFS environment
• The Solaris NFS environment relates to the
ability of one networked system to access
the files and directories of another.
• The NFS service enables a computer to
access another computer’s file systems
• A Solaris system can be a server, client or
both at any given time.
NFS server
• A Solaris NFS server provides file system
access to NFS clients.
• The /etc/dfs/dfstab file.
• Configuration of this file is the
responsibility of the system admin.
NFS client
• The Solaris NFS client accesses files from Solaris
NFS server by mounting the distributed file
systems of a server in a fashion similar to the
mounting of local file system.
• There is no copying of the filesystems.
• A series of RPCs enables the client to access file
systems transparently on the disk of the server.
• How does the mount look?
NFS File systems
• What can be shared?
– Whole or partial directory
– Even a single file can be shared
• What cannot be shared?
– A file hierarchy that overlaps one already
shared.
– Modems and printers.
Benefits of NFS services
• Everyone on the network can access the
same data.
• Reduced storage costs.
• Data consistency and reliability.
• Transparent mounting of remote files.
• Reduced system admin tasks.
• ACL support
How to share resources?
• The /etc/dfs/dfstab file.
• How to start the server and client processes.
• The shareall command.
• Verify the shares using dfshares command
NFS client access
• Mounting a remote resource
– # mount sun:/usr/share/man /usr/share/man
• Unmounting a remote resource
– #umount /usr/share/man
NFS client access
• The mountall and the umountall commands
– #mountall -F nfs
– #mountall -r
– #umountall -F nfs
– #umountall -r
• To mount or unmount multiple NFS or remote
file systems listed in the /etc/vfstab file,
the above commands can be used
Module 16
THE LP PRINT SERVICE
• OBJECTIVES:
– List the OS’s supported by the Solaris print
service.
– Describe the functions of LP print service
– Describe what a print server and print client
are.
– Define the terms Local and remote printers.
• Diagram local and remote print models.
• Verify printer type exists in the terminfo
database.
• Use the admintool to add a local and a
remote printer.
Print Service Architecture
• Client-Server model:
– A print server is a system configured to accept
print requests from print clients for printers that
are directly connected to them or network
attached.
– A print client is a system that uses a print server
for printing and is configured to provide access
to a remote printer.
Printing system
• A computer that includes a printer contains
– LP print service software
– SunSoft print client software
– print filters
– hardware: printers, network connection.
Solaris 2.6 LP print software
• The LP print software includes the following
– Print protocol adapter
• replaces the SAF network listener (listen) and lpNet on the
inbound side of the LP spooler with a more modular design.
• Allows for multiple spooling systems to co-exist on the same
hosts.
– Has network printer support.
– Is extensible by 3rd party application developers to
support other printing protocols.
Features of LP print software
• Provides a variety of printer service
functions.
• Includes PostScript filters in the SUNWpsf
package.
• Supports wide range of printers.
LP print directories
• /usr/bin user commands
• /etc/lp server configuration files.
• /usr/share/lib terminfo database directory
• /usr/sbin print service admin command
• /usr/lib/lp daemons,filters&binaries
• /var/lp/logs LP daemons logs
• /var/spool/lp spooling directory
Printing functions
• Queuing
• Tracking
• Fault Notification
• initialization
• Filtering
Queuing
• When print requests are spooled, the jobs are
lined up with other jobs waiting to be
printed. This process of lining up the jobs is
called queuing.
Tracking
• The print service tracks the status of every
job to enable users to remove jobs and
system admins to manage jobs
• Advantage is that if there is a system crash
then the remaining jobs will resume once
the system reboots
Fault Notification
• When problem occurs
– error messages are displayed on console or
– mailed to the system admin.
Initialization
• The print service initializes a printer before
sending it a print job to ensure it is in a
known state.
Filtering
• Certain print jobs, such as raster images, are
converted into descriptions the printer can
recognize
• uses filters.
Content Types
• Every print request consists of at least one
file containing information with a particular
format, which is called a content type:
– eg: PostScript
• Every printer must be defined with a printer
type and at least one content type.
Matching print requests to printer
• If you have a PS printer, specify that the
content type is PS.
• This way, users can print PS and other
supported content types without specifying a
content type.
• The only time a user needs to specify a
content type when printing a file is if the
file needs special filtering.
Print Filters
• Print filters are programs used by the print
service to convert the content of requests to
the type of content accepted by the printer.
Filter Information
• Stored in several places
– The default PS filters are stored in
/usr/lib/lp/postscript directory.
– /etc/lp/fd
– Look up table of filters
/etc/lp/filter.table
Checking for defined printer
types
• To verify that your printer type exists, list the
contents of the /usr/share/lib/terminfo
subdirectories.
• The terminfo entry has a directory name
with the same initial letter or digit as the
abbreviation of the printer.
Interface programs
• Interface programs are usually shell scripts
used by the print service to set certain
default printer settings.
• /etc/lp/interfaces/printer_name
The printing Environment
• Local and Remote printers.
– Local
– Remote
• Heterogeneous environment
– Solaris 2.x and Sun OS 4.1.x print clients can
be served by a Solaris 2.x server
– Solaris 2.x and Sun OS 4.1.x print clients can
be served by a Sun OS 4.1.x server
Solaris 2.6 print Client Process
• The steps for printing a document
– A user submits a print request by entering a
print command. The print job is placed into a
local spooling area.
– The print client command checks a hierarchy of
print configuration resources to determine
where to send the print request
• The print client command sends the request
directly to the print server using the BSD
protocol.
• The print server processes the request and
sends it to the appropriate printer where it is
printed.
Submitting a print request
• The Solaris 2.6 print client software
provides both SVID and BSD commands to
submit print jobs
– lp <filename>
– /usr/ucb/lpr <filename>
– lp -d <printer> <filename>
– /usr/ucb/lpr -P <printer> <filename>
POSIX style
• Using the POSIX style
– $lp -d <server>:<printer> <file>
Finding the printer
• The command line
• The user’s PRINTER or LPDEST variable
• $HOME/.printers
• /etc/printers.conf
• _default in a network name services
database.
• If the printer name is in POSIX style, then
the print client command forwards the print
request to the server.
2.6 local printing model
• When a print job is submitted, the print
scheduler /usr/lib/lpsched is contacted.
• The job data is placed in the spooling area.
• The scheduler processes the data
– matches with a filtering chain to convert the
data into format acceptable to printer
– the data is filtered
– schedules printing
Solaris 1.x
• Client side -remote printing model
– lpr is used to submit the jobs
– lpr places the jobs in the local spooling area and
contacts lpd daemon
– lpd daemon transfers it to the print server
• Server side-remote printing model
– lpd daemon receives requests
– sends it to the printer.
Solaris 2.0 -2.5.1
• Client-side remote printing
– lp or lpr is used to submit.
– Both commands contact lpsched
– lpsched places jobs in local spool.
– lpsched contacts lpNet which transfers the job
to the server.
• Server-side
– The SAF listens for network requests
– requests are passed to lpNet
– lpNet contacts lpsched
– lpsched processes the requests and sends to
printer.
Solaris 2.6
• Client side- remote printing
– lp or lpr commands can be used for submitting
the jobs.
– Both commands place the print job into a
temporary spooling area.
– Both commands contact the print server
themselves in order to transfer jobs.
• Server side -remote printing
– the inetd process listens for requests.
– When it gets one, it starts in.lpd, the print
protocol adapter.
– in.lpd places jobs in the spooler and contacts
lpsched.
– lpsched processes the request and sends it to
the printer.
Configuring print services
• Setting up printer
• setting up the printer server
– spooling directory space of 20-25 Mbytes
– at least 32 Mbytes of RAM
• Setting up the print client
• Network access
Configuring local printer
• To add a new printer use….
# lpadmin -p <printer-name> -v <device-name>
# enable <printer-name>
# accept <printer-name>
• To display the status of the printer
# lpstat -t
• To make a printer default ( LPDEST env variable)
# lpadmin -d <default-printer-name>
• To remove a printer
# lpadmin -x <printer-name>
• To turn off banner pages during printing
# lpadmin -p printer-name -o nobanner
Configuring printers
• To print on both sides of the paper use
# lp -d <printername> -o duplex
• Check to see if the print scheduler is running.
# lpstat -r
• print scheduler can be stopped with
# /usr/lib/lp/lpshut
• Print services can be started with
# /usr/lib/lp/lpsched
• The lpfilter command manages the list of available filters. System information about
filters is stored in the /etc/lp/filter.table file. The filter descriptor files supplied
(PostScript only) are located in the /etc/lp/fd directory. Filters are needed for
printing to non-PostScript printers (eg: HP LaserJet). The syntax is
# lpfilter -f <filter-name> -F <filter-def>
Configuring printers using GUI
• Admintool->
– browse->
• printers->
– add.
Deleting printers
• Admintool
• # lpadmin -x hp
Network Printing with JetAdmin
• For printing from Solaris to HP network printers
• Download the Jetadmin software for solaris from
• ftp://ftp.hp.com/pub/networking/software
• Package Name is SOLe118.PKG
• install the package with
• # pkgadd -d SOLe118.PKG
• The software installs in /opt/hpnpl folder
• Run the Jetadmin utility /opt/hpnpl/admin/hppi
• and configure the printer.
• You need to add the printer name to IP mapping in the /etc/hosts file
Network printing using TCP/IP
Procedure:
1) Let the printer name be “luna”, add the printer using
# lpadmin -p luna -i /usr/lib/lp/model/netstandard \
-v /dev/null -o dest=192.9.200.123 -I postscript
2) Then edit /etc/lp/interfaces/<printer name> file and add filter files ( if needed)
3) Register this filter with the printing system.
# /usr/sbin/lpfilter -f PStoPCL -F /etc/lp/fd/PStoPCL.fd
4) Enable the print queue with
# enable luna
# accept luna
5) To stop printing the banner page (header) by default, run
# lpadmin -p luna -o nobanner
Also edit the /etc/lp/interfaces/luna file, change the nobanner="no" line to nobanner="yes".
Module 17
Print Commands
• OBJECTIVES:
– use lp command to print files
– use the lpstat command to monitor the print
jobs
– use the cancel command to cancel the jobs in
the queue
– use lpadmin to set up a printer class
– use LPDEST and lpadmin to designate default
printer
• Use the lpmove command to move a
queued print request from one printer to
another.
• Assign priorities to print requests and move
a job to top of the queue.
• Stop and start the LP print service
Basic LP commands
• lp sends files to the printer
• lpstat displays print service status
• cancel cancels print requests
• lpadmin performs various admin tasks
• accept enables queuing of print requests
• reject prevents queuing of further requests
• lpmove moves print requests
• enable enables printer to print requests
• disable disables printer from printing
requests
The lp command
• The lp command
– lp [options] filenames
• options
• -d printername
• -n num
• -o nobanner
• Printing a file
– #lp file
– #lp -n 2 file
– #lp -d staffp file
– #lp -d staffp -o nobanner file
The lpstat command
• The lpstat command
– lpstat [options]
– options
• -a reports whether destinations are accepting
• -d displays the name of default printer
• -o displays status of all output requests on printer
• -p displays idle or busy status and availability
• -s what printers are configured
The cancel command
• Use the cancel command to cancel a
specific print request waiting in the queue
or the print request currently printing.
– cancel [request ID] [printer]
– cancel -u user [printer]
• Canceling print jobs
– $lpstat -o
– cancel [job ID]
• canceling a print job currently printing
– #cancel <printer>
Designating a default destination
• A system admin can use the lpadmin
command to designate a printer as the
system-wide default destination for all print
requests.
• Individual users can set their own default
printer by setting the LPDEST variable.
Using printer classes
• Class
– A class is a named group of printers created
with the lpadmin command
• class criteria
– Printer type
– Location
– workgroup
Priority within a class
• You can create a class of printers to ensure
that printers are accessed in a particular
order,because the print service always
checks for printer availability using the
order in which the printers were added.
• High speed printer and then a low speed
printer.
Creating a class of printers
• A class is created the first time a printer is
added to it.
• Once the class has been created the enable
command is used to enable the class to
queue jobs
• #lpadmin -p sparky -c bdlg2
• #lpadmin -p streaker -c bdlg2
• #accept bdlg2
How to manage print jobs
• The print service also enables a job to be
placed on hold to give way to a more urgent
one.The suspended job can be resumed at
any time.
• lp -i print-request -H keyword
• keywords-- hold,resume,immediate
• Place a print job on hold
– lp -i spock-18 -H hold
– lpstat -o spock
• Resume a previously held print job
– lp -i spock-18 -H resume
• Place a print job at the top of the queue
– lp -i spock-12 -H immediate
How to manage priorities
• The solaris 2.x environment enables users
to submit print requests at various priorities
• priorities range from
– 0---highest
– 39--lowest
• The default priority for all users is 20.
• Place an important job at high priority.
– lp -d sparky -q 0 fastfile
• Place an unimportant job at low priority
– lp -d sparky -q 30 bigfile
How to move print jobs
• The solaris print service allows requests to
be moved between different queues.It does
not however move requests if their content
type does not match.
• Become a super user
• Use the reject command to prevent further
print requests from being sent to the print
queue.
– reject -r "spock is down" spock
• List the print queue to see how many print
requests are to be moved.
– lpstat -o
• Verify the destination printer is accepting
print requests
– #lpstat -a sparky
• Move a specific job or all jobs
– lpmove spock sparky
– lpmove spock-11 sparky
• Use the accept command once the
unavailable printer is available.
How to temporarily Disable a
printer
• Why?
– Paper jam,print cartridge.
• Use the disable command to make the
printer temporarily unavailable to users.
– disable sparky
• After clearing the paper jam or changing the
print cartridge,enable the printer again
– enable sparky
How to troubleshoot a printer
• Check the status of the queues
– lpstat -o
• stop and start the daemons
– /etc/init.d/lp stop
– /etc/init.d/lp start
How to manually remove a
printer
• Remove the print queue
– rm -r /var/spool/lp/requests/hostname/*
• remove the printer configuration.
– lpadmin -x printername
• stop and restart print daemons
• set up the printer through admintool
Module 18
Process Control
• Objectives
– use the ps command to list processes running
on the system
– use the kill command to terminate processes
running
– use the at command to execute a command at a
future time
– state the function of cron daemon
• Describe the format of the crontab file
• name the format of the crontab file
• Name the two files used to control crontab
access
• Edit users crontab file
The ps command
• Use the ps command to list the processes
that are running on the system.
– ps [options]
– -e show all the processes
– -f generate full listing
To list the PID of the vold daemon use
# ps -ef | grep vold
The same can be seen using
# pgrep vold
The kill command
• Use the kill command to send signal to a
specified process.
• Signals
– There are currently 44 signals defined in the 2.x
OS. If you use the kill command without
specifying a signal, signal 15 (SIGTERM) is
sent to the process.
. The default kill signal sent is 15 (TERM or SIGTERM)
eg: # kill -9 <PID of the process>
. The pkill command can be used to kill processes by their
name
# pkill -9 in.named
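The signal flow above can be sketched with an ordinary background job; sleep stands in for any long-running process, and nothing here is Solaris-specific:

```shell
# Start a long-running process in the background (stands in for a daemon).
sleep 300 &
pid=$!

# kill -0 sends no signal at all; it only checks that the process exists.
kill -0 "$pid" && echo "process $pid is running"

# Send the default signal 15 (SIGTERM), then reap the job.
kill "$pid"
wait "$pid" 2>/dev/null

# The existence check now fails because the process is gone.
kill -0 "$pid" 2>/dev/null || echo "process $pid terminated"
```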
Running commands at specified
times
• The at command
• The crontab files
The at command
• The at command executes a command or
script at a specified time.The command is
executed only once.
• # at [-m] [-r job] time [date]
– -m sends mail to user
– -r removes a specified scheduled job
How to execute the at command
• Specify time and command
– at 5:45
– command
– ^d
• look at the queue
– atq
• use the at -r command to remove a job from
the queue
Displaying the crontab file
• crontab -l
• The cron daemon is started when the
system boots and runs continuously
• the cron daemon reads the crontab files
in /var/spool/cron/crontabs
• The commands are executed at regularly
specified times
Controlling crontab access
• The two files that control access to the cron utility are
– /etc/cron.d/cron.allow
– /etc/cron.d/cron.deny
• If the cron.allow file exists only the users listed in the
file can use the crontab command
• If that file does not exist,crontab checks the
cron.deny file to determine if the user should be
prohibited from running crontab.
• If neither file exists,only the super user can run
crontab
How to add jobs
• cat > filename
• crontab filename
• list the cron jobs using crontab -l
The crontab file format
• The crontab file consists of entries with six
fields in each entry.
• 10 3 * * 0 ps -ef
– the first field is the minute field
– the hour field
– the day of month field
– the month field
– the day of week field
– The command field contains the command to
be executed.
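As a sketch, the six-field layout can be checked mechanically before installing a file. The file name /tmp/mycron and the entries themselves are made up for illustration:

```shell
# Write a sample crontab file; each entry is:
#   minute hour day-of-month month day-of-week command
cat > /tmp/mycron <<'EOF'
10 3 * * 0 /usr/bin/ps -ef > /tmp/pslog 2>&1
0 2 * * 1-5 /usr/bin/du -sk /export/home > /tmp/dulog 2>&1
EOF

# Sanity check: every non-empty line needs at least six fields.
awk 'NF && NF < 6 { bad++ } END { print (bad ? "invalid" : "ok") }' /tmp/mycron

# Installing it would then be:  crontab /tmp/mycron  (not run here)
```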
• To edit the crontab file
– EDITOR=vi
– export EDITOR
– crontab -e
Module 19
Shell scripting
• Objectives:
– list the traditional uses of shell script types
– set and expand shell variables
– evaluate the use of positional parameters as script
arguments
– use various quoting techniques
– use redirections and pipes
– state the purpose of and correctly interpret the exit
status.
• Evaluate “if” conditional statements
• interpret “for” looping statements
• analyze case statements
• recognise shell functions
• evaluate samples of standard administration
scripts.
Bourne shell as a programming
language
• Shell program or shell script
• syntax is different for
– bourne shell
– korn shell
– c shell
• korn shell can run a bourne shell script
• system scripts are bourne shell scripts.
• # comment
• define which shell will run the script:
– #! /bin/sh
– #!/bin/ksh
– #!/bin/csh
• naming scripts
Shell script variables
• Setting and expanding variables
– $cat script1
– #!/bin/sh
– oldfilename=accounting
– echo $oldfilename
• { } -delimit the variable name
• cat script2
• #!/bin/sh
• syll1=op
• syll2=er
• syll3=a
• three_syll=$syll1$syll2$syll3
• echo the first three syllables are: $three_syll
• echo the whole word is: ${three_syll}tion
Shell script variables
• Quoting : \ prevents shell interpretation
– cat script7
– #!/bin/sh
– name=fred
– echo “hello \$name .where are you going?”
• Quoting: ‘ ‘:prevents shell interpretation of
metacharacters
• #!/bin/sh
• echo a b
• echo ‘a b’
• num=25
• echo ‘the value of num is $num’
• Quoting: “ “:literal text interpretation and
``
– #!/bin/sh
– num=25
– echo “the value of num is $num”
– name=fred
– echo “Hi $name,Hi”
– echo “hey $name ,the time is`date`”
• Command substitution ``
– #!/bin/sh
– whoseon=`who -m`
– echo the person who is logged on is:
– echo $whoseon
• Positional parameters
• positional variables
• the set command
– set `who -m`
• Positional parameters are used to pass
arguments
– $0
– $1
• eg
– #!/bin/sh
– echo the script name is $0
– echo the first argument passed is $1
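A runnable version of the sketch above; the script path /tmp/posdemo.sh is arbitrary:

```shell
# Create a script that reports its own name and arguments.
cat > /tmp/posdemo.sh <<'EOF'
#!/bin/sh
echo "the script name is $0"
echo "the first argument passed is $1"
echo "the number of arguments is $#"
EOF

# Run it with two arguments; $0 expands to the script path.
sh /tmp/posdemo.sh alpha beta
```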
Redirection and pipes
• Three channels of communication:
– standard input -stdin-file descriptor 0
– standard output-stdout-file descriptor 1
– standard error-stderr-file descriptor 2
• redirection < and >
• Append stdout:>>
• redirect stderr:2>
• append stderr:2>>
• redirect stdout and stderr to /dev/null
– > /dev/null 2>&1
• pipe stdout to command:command1 |
command2
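The redirections above can be demonstrated with a few plain-sh lines; the log file paths under /tmp are arbitrary:

```shell
# Split stdout and stderr into separate files (descriptor 1 vs descriptor 2).
( echo "normal output"; echo "error output" >&2 ) > /tmp/out.log 2> /tmp/err.log
cat /tmp/out.log   # holds only the stdout line
cat /tmp/err.log   # holds only the stderr line

# Silence both streams, the usual idiom for cron jobs:
( echo "noise"; echo "errors" >&2 ) > /dev/null 2>&1

# Pipe stdout of one command into another:
echo "one two three" | wc -w
```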
Conditional expressions
• Exit status is an integer and is saved in the
$?
– Zero indicates success
– non-zero indicates one or more errors
• $pwd
• echo $?
Test operator
• Zero is true
• nonzero is false
– test “$name” = “fred”
– [ “$name” = “fred” ]
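A minimal runnable sketch of the test operator and the exit status, in plain POSIX sh:

```shell
name=fred

# test and [ are the same command; the spaces inside the brackets are required.
if [ "$name" = "fred" ]; then
    echo "match"
fi

# $? holds the exit status of the last command: 0 is true/success.
[ "$name" = "barney" ]
echo "exit status was: $?"
```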
Conditional expression
• Case statement,use instead of many if statements
– case “$hour” in
– 0? | 1[01])
– echo “good morning”;;
– 1[2-7])
– echo “good afternoon”;;
– *)
– echo “good evening”;;
– esac
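The same patterns assembled into a complete runnable script (the greeting strings follow the slide; the hour comes from date +%H):

```shell
# Pick a greeting from the current hour (00-23, as printed by date +%H).
hour=$(date +%H)
case "$hour" in
    0? | 1[01]) greeting="good morning"   ;;  # 00-11
    1[2-7])     greeting="good afternoon" ;;  # 12-17
    *)          greeting="good evening"   ;;  # 18-23
esac
echo "$greeting"
```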
Flow control
• Repeat statements
– while loop
– until loop
– for loop
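A short sketch of all three loop forms in plain sh (the word list and counters are arbitrary):

```shell
# for: iterate over a fixed word list.
for fs in / /usr /var; do
    echo "checking $fs"
done

# while: loop as long as the test succeeds.
n=3
while [ "$n" -gt 0 ]; do
    echo "n is $n"
    n=$((n - 1))
done

# until: the mirror image of while -- loop until the test succeeds.
n=0
until [ "$n" -ge 2 ]; do
    n=$((n + 1))
done
echo "final n is $n"
```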
Shell functions
• Modular scripts
• function name
• define function before use
• accept parameters and return values
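A sketch of a shell function taking parameters and "returning" a value (the function name sum is made up; values wider than the 0-255 exit-status range come back via stdout):

```shell
# Define the function before calling it; arguments arrive as $1, $2, ...
sum() {
    echo $(( $1 + $2 ))
}

# Capture the function's stdout as its return value.
total=$(sum 4 5)
echo "total is $total"

# The function's exit status is that of its last command.
sum 1 2 > /dev/null && echo "sum succeeded"
```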
Sample administrative shell
scripts
• /etc/init.d directory contains bourne shell
scripts
– /etc/init.d/syslog
– /etc/init.d/volmgt
Writing simple programs
• Plan
• break script into functions
• write and test small sections
• anticipate error conditions
• use existing scripts
• use verbose comments
• debug
• Debugging scripts
– use the shell's debug option: sh -x
– + indicates shell activity that is not normally
seen.
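A quick sketch of the -x trace output on a throwaway script (the path /tmp/trace.sh is arbitrary):

```shell
# A tiny script to trace.
cat > /tmp/trace.sh <<'EOF'
#!/bin/sh
name=fred
echo "hello $name"
EOF

# -x prints each command, prefixed with +, after expansion and before execution.
sh -x /tmp/trace.sh
```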
Module 20
Solstice Disk Suite
• Sun’s Solution for configuring Software
RAID on Sun Systems.
• GUI based “Metatool” available.
• Easy command line options are also used for
many servers not supporting GUI.
• Comes bundled along with Solaris releases
for all users. Does not need any license.
Metadisk Driver
• Set of loadable,pseudo device drivers
• Metadevices
– Basic functional units of the metadisk driver
– Logical devices ,can be made up of one or more
component partitions
• Simple / Concatenation / Stripe / Mirror / Raid-5
– By default, 128 unique metadevices in the range 0-127
• Names located in /dev/md/dsk and /dev/md/rdsk
State Database Replicas
• State Database Replicas
– Keeps track of configuration and status for all
metadevices
– Keeps track of error conditions that have
occurred
– Requirement of multiple copies of state
database (min - 3)
– Each replica occupies 517KB or 1034 disk
blocks
State Database Replicas
(Contd….)
• Basic State Database Operation
– /etc/system or /etc/opt/SUNWmd/mddb.cf
(older sds)
/etc/lvm/mddb.cf ( sds version 4.2.1)
– Locator Blocks
– Commit Counter
– Checksum
• Location of replicas
System Files of SDS
• Old path is /etc/opt/SUNWmd ( SDS 4.0)
• New path is /etc/lvm( SDS 4.2.1)
• md.tab :- Workspace file
• md.cf :- Disaster recovery file (file form of database)
• md.cf does not get updated when hot sparing occurs
• should NOT be edited manually.
• mddb.cf :- Has Driver name, minor unit of block
device unique to each replica and Block number of
master block
State Database Replicas (Contd…)
• Setting up the MetaDB (State Database)
A DiskSuite installation would not be able to operate
without a "state database", known as a “metadb” .
Ideally, the metadb should be simultaneously located on
more than one SCSI controller and on 3 or more disks.
This is for redundancy and failover protection. Each
copy of the metadb is called a “ state database replica”.
• To view current metadb status use
# metadb
• To inquire the status of state database replica use
# metadb -i
State Database Replica creation
• To create one metadb on two disks, each having three replicas (for a
total of six replicas):
# metadb -a -f -c 3 /dev/dsk/c0t3d0s6 /dev/dsk/c1t0d0s6
• The options on the line above are:
-a attach a new database
-f form a new database ( force)
-c (#) number of state replicas per partition
• Note: the metadb command creates a file called
mddb.cf which must never be edited by us.
• Next, we need to add an entry into the file
/etc/lvm/md.tab for each metadb we have created (in this case,
one).
Concatenation
Edit md.tab file and insert the following entry
d1 2 1 /dev/dsk/c0t0d0s2 1 /dev/dsk/c1t1d0s2
(This means a concat made of 2 stripes, each having 1 component)
• To create the meta device use
# metainit d1
To create all meta devices listed in md.tab use
# metainit -a
• The command line syntax to create a concat is
# metainit d1 2 1 /dev/dsk/c0t0d0s2 1 /dev/dsk/c1t1d0s2
Striping
• Edit md.tab file to enter the following line
d1 1 2 /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 -i 16k
(1 stripe containing 2 components)
• Interlace value defaults to 16k if not specified
• Note : The metainit syntax follows the form
MDNAME X Y SLICES
where MDNAME = meta device name
if X > Y then you get a concat
if X < Y then you get a stripe
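A hedged md.tab sketch of that rule; the metadevice names and slices are examples only, not taken from a real system:

```
# X=2 stripes of Y=1 component each: X > Y, so d1 is a concat.
d1 2 1 /dev/dsk/c0t0d0s2 1 /dev/dsk/c1t1d0s2

# X=1 stripe of Y=2 components: X < Y, so d2 is a stripe.
d2 1 2 /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 -i 32k
```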
Concatenated Stripes , an example
• d1 2 2 /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2
-i 16k 3 /dev/dsk/c0t0d0s3
/dev/dsk/c1t1d0s3 /dev/dsk/c2t1d0s2 -i 32k
• metainit -n d1 verifies if the info. in md.tab
is accurate
• metaclear d1 will delete the metadevice
( Data is LOST )
Mirroring
• Edit md.tab and insert the entries
d10 -m d01 d02 is a two-way mirror
d01 1 1 /dev/dsk/c0t0d0s2
d02 1 1 /dev/dsk/c1t1d0s2
• Execute following commands in the specified order
# metainit d01
# metainit d02
# metainit d10
• d10 -m d01 is a one-way mirror
• d10 is called the Metamirror after entire mirror is setup.
• To add a sub mirror to existing mirror
# metattach d10 d03
• To remove a sub mirror ( break the mirror)
# metadetach d10 d02
Mirroring……contd….
# metainit d0 -m d1
(makes a one-way mirror. d0 is the device to mount (called
metamirror) , but d1 is the only one associated with an actual device
(called submirror) .Now d0 is a "one-way mirror" . There's only one
place where the data is actually stored, namely d1.)
# metattach d0 d2
(attaches d2 to the d0 mirror. Now there are 2 places where the data
are stored, d1 and d2. But you mount the metadevice d0)
# metadetach d0 d1
(detaches d1 from the d0 mirror ,breaking the mirror)
• To suspend / resume use of sub mirror use
# metaoffline d0 d2 ( suspends the use of d2 on d0 mirror)
# metaonline d0 d2 ( resumes the use of d2 device on d0 mirror)
Root Mirroring
1) Install second hard disk and create slices similar to root disk.
2) Create state data base replicas in both disks
# metadb -a -f -c 2 c0t0d0s7 c1t0d0s7
3) Edit md.tab and enter following entries
d10 1 1 /dev/dsk/c0t0d0s0
d20 1 1 /dev/dsk/c1t0d0s0
d0 -m d10
d11 1 1 /dev/dsk/c0t0d0s1
d21 1 1 /dev/dsk/c1t0d0s1
d1 -m d11
do the same for all other slices in root disk
Root mirroring…contd….
4) Create all the meta devices using # metainit -a -f
(the -f will force to metadevice creation even on mounted slices)
5) Run metaroot command on device designated as root metamirror.
# metaroot d0
6) Copy original /etc/vfstab and preserve it as /etc/vfstab.org. Now edit the
/etc/vfstab file and modify the entry for the swap area. Change /dev/dsk/c0t0d0s1 to
/dev/md/dsk/d1. The line for / will be already updated by the metaroot
command. Do the same for remaining slices.
7) Reboot the system. This is a must. On reboot do a df -k and swap -l to verify
that the root and swap slices are under SDS control.
8) attach sub mirror to meta mirror . # metattach d0 d20
9) This will initiate the mirror syncing process. Verify with # metastat d0
10)Run metattach commands for remaining metamirrors . Run next command
only after the completion of previous resync operation.
Creating RAID-5
• Edit md.tab and insert ( -r is the keyword)
d1 -r /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 /dev/dsk/c2t1d0s2 -i 16k
# metainit d1
This will create a RAID5 device d1 with stripe size 16K
• metainit on existing raid-5 devices DESTROYS data
• To avoid the destruction of data on a RAID-5 device, the device entry
should have “k” option.
• Example :-
d1 -r /dev/dsk/c0t0d0s2 /dev/dsk/c1t1d0s2 /dev/dsk/c2t1d0s2 -k -i 16k
UFS logging
Done using trans-meta devices. Trans devices have a master
device and a logging device. Logging avoids long fsck times
at boot. Before writing to the device, the data will be first
written to the logging device and then the transaction is
committed to the actual disk. The command
# metainit d0 -t d1 d2
sets up a trans device d0 with d1 as the master and d2 as the
logging device. recommended 1MB logging/1GB data on master
# metainit d0 -t c0t1d0s2 c3t2d0s5 (same as above, using slices directly)
For attaching and detaching a log device on/from d0 use
# metattach d0 d1
# metattach d0 c3t1d0s5
# metadetach d0
metastat
The metastat command displays the current status for each
metadevice (including stripes, concatenations, concatenations of
stripes, mirrors, RAID5, and trans devices) or hot spare pool.
-p
Displays the list of active metadevices and hot spare pools in a
format like md.tab.
-s setname
Using the -s option will cause the command to perform its
administrative function within the specified diskset.
-t
Prints the current status and timestamp for the specified
metadevices and hot spare pools. The timestamp provides the
date and time of the last state change.
Metareplace
The “metareplace” command is used to enable or replace
components (slices) within a submirror or a RAID5 metadevice.
When you replace a component, the metareplace command
automatically starts resyncing the new component with the rest of
the metadevice. When the resync completes, the replaced
component becomes readable and writeable. Note that the new
component must be large enough to replace the old component.
A component may be in one of several states. The Last Erred and the
Maintenance states require action. Always replace components in
the Maintenance state first, followed by a resync and validation of
data. After components requiring maintenance are fixed, validated,
and resynced, components in the Last Erred state should be
replaced. To avoid data loss, it is always best to back up all data
before replacing Last Erred devices.
Metareplace examples
•This example shows how to recover when a single
component in a RAID5 metadevice is errored.
# metareplace d10 c3t0d0s2 c5t0d0s2
In this example, a RAID5 metadevice d10 has an errored
component, c3t0d0s2, replaced by a new component,
c5t0d0s2.
•This example shows the use of the -e option after a
physical disk in a submirror has been replaced.
# metareplace -e d11 c1t4d0s2
Note: The replacement disk must be partitioned to match
the disk it is replacing before running the metareplace
command.
Metasync
The metasync command starts a resync operation
on the specified metadevice. All components that
need to be resynced are resynced. If the system
crashes during a RAID5 initialization, or during a
RAID5 resync, either an initialization or resync
restarts when the system reboots.
Applications are free to access a metadevice at the
same time that it is being resynced by metasync.
Also, metasync performs the copy operations from
inside the kernel, which makes the utility more
efficient.
Use the -r option in boot scripts to resync all
possible submirrors.
Metaonline and metaoffline
metaoffline : This command prevents DiskSuite from reading and writing to
the submirror that has been taken offline. While the submirror is offline, all
writes to the mirror will be kept track of (by region) and will be written when
the submirror is brought back online. The metaoffline command can also be
used to perform online backups: one submirror is taken offline and backed
up while the mirror remains accessible. (data redundancy is lost while one
submirror is offline.) The metaoffline command differs from the metadetach
command because it does not sever the logical association between the
submirror and the mirror. To completely remove a submirror from a mirror,
use the metadetach command.
When the metaonline command is used, reading from and writing to the
submirror resumes. A resync is automatically invoked to resync the regions
written while the submirror was offline. Writes are directed to the submirror
during resync. Reads, however, will come from a different submirror. Once
the resync operation completes, reads and writes are performed on that
submirror. The metaonline command is only effective on a submirror of a
mirror that has been taken offline. Note: A submirror that has been taken
offline with the metaoffline command can only be mounted as read-only.
Metattach and metadetach
metattach is used to add submirrors to a mirror, add logging
devices to trans devices, or grow metadevices. Growing
metadevices can be done without interrupting service. To grow the
size of a mirror or trans, the slices must be added to the
submirrors or to the master devices. DiskSuite supports one- to
three-way mirrors
To concatenate a single new slice to an existing metadevice, d8.
(Afterwards, use the growfs command to expand the file system.)
# metattach d8 /dev/dsk/c0t1d0s2
This example expands a RAID5 metadevice, d45, by attaching
another slice.
# metattach d45 /dev/dsk/c3t0d0s2
metadetach is used to detach submirrors from mirrors, or detach
logging devices from trans metadevices.
metainit and metaclear
The metainit command configures metadevices and hot spares
according to the information specified on the command line or it
uses configuration entries you specify in the /etc/lvm/md.tab file. All
metadevices must be set up by the metainit command before they
can be used. (The -f option tells metainit to continue even if you
have mounted slices in the metadevice.)
metaclear deletes all configured metadevice(s) and hot spare
pool(s), or the specified metadevice and/or hot_spare_pool. Once a
metadevice or hot spare pool is deleted, it must be recreated using
metainit before it can be used again.
Any metadevice currently in use (open) cannot be deleted.
Diskset
A shared diskset, or simply diskset, is a set of shared disk drives
containing metadevices and hot spares that can be shared
exclusively, but not at the same time, by two hosts. A diskset provides
for data redundancy and availability. If one host fails, the other host
can take over the failed host's diskset. (This type of configuration is
known as a failover configuration)
Disksets use this naming convention: /dev/md/SETNAME
Metadevices within the shared diskset use these naming
conventions:
/dev/md/SETNAME/{dsk | rdsk}/dnumber
where setname is the name of the diskset, and number is the
metadevice number (usually between 0-127).
Hot spare pools use setname/hspxxx, where xxx is in the range
000-999. Metadevices within the local diskset have the standard DiskSuite
metadevice naming conventions
metaset
The metaset command administers sets of disks shared for exclusive
(but not concurrent) access between such hosts. Disksets
enable a high-availability configuration. Shared metadevices/hot spare
pools can be created only from drives which are in the diskset created
by metaset. To create a set, one or more hosts must be added to the
set. To create metadevices within the set, one or more devices must
be added to the set. # metaset -s colour -a -h red blue
The name of the diskset is colour. The names of the first and second
hosts added to the set are red and blue, respectively. (The hostname is
found in /etc/nodename.) Adding the first host creates the diskset. A
diskset can be created with just one host, with the second added later.
The last host cannot be deleted until all of the drives within the set
have been deleted. This example adds drives to a diskset.
# metaset -s colour -a c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
The drives added are c2t0d0, c2t1d0, c2t2d0, c2t3d0, c2t4d0, and
c2t5d0. Note that there is no slice identifier ("sx") at the end.
Expanding a metadevice
The expansion process involves addition of a concat. You will
lose all redundancy with the expansion.
# metattach d1 c3t1d0s2
extends a metadevice by concatenating a slice to the end. It
does not add a filesystem.
# growfs /dev/md/rdsk/d1
If the metadevice is not mounted, the above command
extends the filesystem to include the added section. You
cannot shrink this filesystem later.
# growfs -M /export/home /dev/md/rdsk/d1
If the metadevice is mounted, the above command will
extend the filesystem to include the concatenated section.
Again, you cannot shrink the filesystem later.
Module 21
Practical scenarios
Practical scenarios
• Some useful commands
• How to add a new disk to solaris
• How to add a new network card
• How to add swap space
• How to create alternate boot disk
• Introduction to DNS
Useful commands…...
# prtconf : Gives system configuration information like the total amount of
memory and the configuration of system peripherals formatted as a device tree.
# sysdef : lists all hardware devices, as well as pseudo devices, system devices,
loadable modules, and the values of selected kernel tunable parameters.
# dmesg : dmesg looks in a system buffer for recently printed diagnostic messages
and prints them on the standard output.
# eeprom : displays or changes the values of parameters in the
EEPROM( similar to setenv)
# vmstat : vmstat reports virtual memory statistics regarding process, virtual
memory, disk, trap, and CPU activity
# iostat : The iostat utility iteratively reports terminal, disk, and
tape I/O activity, as well as CPU utilization.
# cpustat : Allows CPU performance counters to be
used to monitor the overall behavior of the CPUs in the system.
# mpstat : Reports per-processor statistics in tabular form.
Useful commands
# prstat : iteratively examines all active processes
on the system and reports statistics based on the selected
output mode and sort order. prstat provides options to examine
only processes matching specified PIDs, UIDs, CPU IDs,
and processor set IDs.
# devfsadm : loads every driver in the system and attaches to all possible
device instances, then creates device special files in /devices and
logical links in /dev.
Eg: # devfsadm -i st ; to add a tape device to the OS without reboot
# traceroute :traces the route that an IP packet follows to another internet
host
# nslookup : query and troubleshoot DNS name servers.
# prtvtoc : reports information about disk geometry and partitioning
Create alternate boot disk
1) Prepare a second hard disk using the format utility. The root (/)
partition should be at least equal to or greater than the existing
partition. Create a file system using the “newfs” command
2) Using “installboot bootblk” command transfer the boot image to
this slice( /usr/platform/`uname -i`/lib/fs/ufs/bootblk)
3) use ufsdump and ufsrestore together to transfer all files in the root
partition to the alternate partition. Edit /etc/vfstab to correct the scsi id
and slice info of the new partitions.
4) bring down the system to OK prompt and create an alias name
for the alternate disk( say “bootdisk2”) using nvalias
5) You can boot from alternate partition using
ok> boot bootdisk2
Create Alternate boot device
• Install the new disk drive and do boot -r. Let the current boot disk be c0t0d0 and the new one be c1t0d0. The new
disk must be identified in the format utility. Create a new root slice (say c1t0d0s0). Create a UFS file system
on the alternate root slice # newfs /dev/rdsk/c1t0d0s0
• Create a mount point for the alternate root slice # mkdir /newroot
• Mount the new slice # mount /dev/dsk/c1t0d0s0 /newroot
• Perform ufsdump and restore to move root file system
# ufsdump 0f - /dev/rdsk/c0t0d0s0 | ( cd /newroot;ufsrestore xf -)
• Update /etc/vfstab on the alternate slice to reflect the target id of root. Unmount /newroot and run the installboot
command to make it bootable.
# umount /newroot
# cd /usr/platform/`uname -i`/lib/fs/ufs
# installboot bootblk /dev/rdsk/c1t0d0s0
• Shutdown the system # /usr/sbin/shutdown -y -i0 -g0
• From the ok prompt:
ok> setenv boot-device disk2 (reassign the boot device to the new slice)
ok> reset (recycle the prom monitor)
ok> boot (boot from the target drive/new boot device)
Adding a new disk to the system
1) For a hot-swappable disk, after adding the disk on the fly, issue the
following command to update the system and take the disk into the
kernel (old command is “drvconfig”)
# devfsadm -i sd ( note : use #devfsadm -i st for a tape device)
2) After that use format utility to partition it.
3) Create file system using newfs or mkfs commands
4) Mount the file system
5) Edit /etc/vfstab to mount the new file system at boot time
Adding a new network card to the system
1) power down and insert new network card
2) To verify that the system has identified it using
ok > show-devs
3) To examine the network activity use
ok > watch-net-all
4) Perform reconfiguration and boot using
ok > boot -r
5) To view network configuration
# ifconfig -a
Adding a new network card to the system
6) To configure the IP address use
# ifconfig qfe1 192.168.10.27 netmask 255.255.255.0 up
7) Update /etc/hosts file and /etc/hostname.qfe1
8) Check the routing table using # netstat -r
9) use ndd command to verify the link speed
(0 stands for 10 Mbps and 1 for 100 Mbps )
# ndd /dev/qfe link_speed
10) to verify link mode ( 0 = half-duplex, 1=full duplex)
# ndd /dev/qfe link_mode
Booting from an external device
1) At the OK prompt execute probe-scsi-all
2)Find out the physical path name to external boot device
3)Copy the device name and insert to a “nvalias” command
to set a new alias name for the device ( say bootdisk2)
4)Boot from the new device using
ok > boot bootdisk2
5)You can make this the default boot device using “setenv”
command
Note : For permanent aliasing use “ nvalias” while for
temporary usage use “devalias” command
Adding swap space to system
1) Create a swap file of size 200 MB using
# mkfile 200m /export/home/swapfile
2)Add the swapfile to existing swap space using
#swap -a /export/home/swapfile
3)List the available swap space using
# swap -l
4)To make changes permanent , put an entry in
/etc/vfstab file
5)To remove a swap file #swap -d /export/home/swapfile
# rm -r /export/home/swapfile
6) Remove vfstab entries if any (otherwise the system will fail to boot)
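The permanent entry in step 4 would look roughly like this, following the swap-file path from the example above (field order: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):

```
/export/home/swapfile   -   -   swap   -   no   -
```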
DNS
1)Domain Name Service resolves host names to IP address
2)Uses “ in.named” daemon at server side and “resolver” at client side
3)client side file : /etc/resolv.conf
4)To configure a client to use DNS
Copy nsswitch.dns to nsswitch.conf
5)create /etc/resolv.conf and insert the lines
domain training.wipro.com
nameserver 200.200.200.10
nameserver 100.100.100.23
6) To troubleshoot DNS issues use the “nslookup” utility
DNS servers
• DNS server has /etc/named.conf , named.root ,
named.zone,named.reverse and named.loop files
•The named.* files contain DNS resource records such as
SOA, A, PTR, MX, CNAME etc
•The PID of in.named daemon is kept in /etc/named.pid
file
•to restart the DNS server daemon use
# pkill -HUP in.named
•Solaris 8 implements BIND 8.1.2 for DNS
Module 22
System Diagnostics
System Diagnostics
1) Open Boot PROM (OBP) diagnostic commands
2) Obdiag
3) Power On Self Test ( POST)
4) System board and power supply LED status
5) Solaris OS diagnostic commands
OBP diagnostics
1) banner : CPU, memory, OBP version, HostID and MAC addr.
2) devalias <alias name > <physicalpath>
3) nvalias <alias name> <physical path > and nvstore
4) nvunalias <alias name> <physical path>
5) printenv and setenv <parameter> <value>
6) probe-sbus , probe-fcal-all,probe-scsi,probe-scsi-all,probe-ide
7) set-defaults and set-default <parameter>
8) These shows physical device path names to all devices,disk
controllers ,frame buffers and network interfaces :
show-devs , show-disks, show-displays ,show-net
OBP diagnostics( continued....)
9) To display the findings of POST in a readable format:
ok > show-post-results
10) show-tapes : displays physical device paths for tape drives
11) .speed : displays CPU and bus speeds
12) test <device> : tests the given device
13) test-all : tests all devices
14) .version : displays OBP and POST version info
15) watch-net : monitors the network connection on the primary
network interface
16) watch-net-all : monitors all network connections
Note: On Ultra systems set auto-boot? to "false" before running
diagnostic commands
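A short diagnostic session using some of the commands above might look like this (output omitted; run from the ok prompt):

```
ok > setenv auto-boot? false   \ stay at the ok prompt after reset
ok > reset-all
ok > probe-scsi-all            \ list SCSI devices on all controllers
ok > test net                  \ self-test the primary network interface
ok > .version                  \ OBP and POST version info
ok > watch-net                 \ monitor the primary network link
```

Remember to set auto-boot? back to "true" when you are done, or the system will stop at the ok prompt on every reset.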
OBDIAG

1) Runs diagnostics at the OBP level and displays test results using LEDs on
the front panel or on the keyboard. It also displays diagnostic and error
messages on the system console.
2) Along with the main logic board it checks interfaces such as PCI, SCSI,
Ethernet, Serial, Parallel, Keyboard, Mouse, NVRAM, Audio and Video
3) Before running OBDiag, set the OBP 'diag-switch?' variable to "true",
the 'auto-boot?' variable to "false" and reset the system
ok > setenv diag-switch? true
ok > setenv auto-boot? false
ok > reset-all
To run OBDiag

ok > obdiag
Power On Self Test(POST)
1) POST resides in the firmware of each board in a system and is used to initialize,
configure and test system boards. POST output is sent to serial port A (on Ultra Enterprise
servers the output is sent to serial port A on the system and clock board)
2) The status LEDs give POST completion status. If a system board fails the POST test,
the amber light stays lit.
3) To run POST
ok > setenv diag-switch? true
ok > setenv diag-level max
ok > setenv diag-device disk ( if you want to boot from disk, as the system default is "net" )
ok > setenv auto-boot? false
ok > reset-all
4) Power cycle the system (turn it off, then switch it on). On powering on, the output is
displayed on the device on serial port A or on the console. You may also view the results using

ok > show-post-results
Solaris OS Diagnostics commands
1) /usr/platform/sun4u/sbin/prtdiag -v
: displays system config and diagnostic info and lists any failed
field replaceable units (FRUs)
2) /usr/bin/showrev -p or patchadd -p
: displays revision info on currently installed hardware and software patches
3) /usr/sbin/prtconf : displays system configuration info
4) /usr/sbin/psrinfo -v
: displays CPU info including clock speed
5) cpustat, mpstat, vmstat and iostat commands
Sun Explorer Data Collector 3.5.2

1)Runs on Solaris 2.x systems and collects system data


2)download the utility from sunsolve.sun.com
3) Unpack and install the utility using
# zcat SUNWexplo.tar.Z | tar xf -
# pkgadd -d . SUNWexplo
4) Run the utility using
# /opt/SUNWexplo/bin/explorer -e
5) The output is kept in /opt/SUNWexplo/output
folder
Module 23

Introduction to
Crash Dump Analysis
Crash Dump Analysis
A crash dump file contains the system memory image of a
failed (or running) system (default location: /var/crash/<hostname>).
You can enable savecore for future analysis via the "dumpadm" command
or by editing the /etc/rc2.d/S20sysetup file.
The savecore utility saves a crash dump of the kernel. It saves
the crash dump data in the file vmcore.n and the kernel's namelist
in unix.n.
You can force a crash dump by giving the "sync" command at the ok prompt.
The crash dump can be analysed using the "adb" and "crash" commands
and the ACT and ISCDA tools.
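A sketch of enabling savecore with "dumpadm" (the dump device and directory here are illustrative; check your system's actual configuration first):

```
# dumpadm                             # show the current crash dump configuration
# dumpadm -d /dev/dsk/c0t0d0s1 \
          -s /var/crash/`uname -n` \
          -y                          # set dump device and savecore directory,
                                      # and enable savecore on reboot
```

The -y flag tells the system to run savecore automatically at boot after a panic, writing the unix.n and vmcore.n files into the savecore directory.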
The ACT Kernel Dump Analysis Tool

•ACT is a tool developed by engineers at Sun over the course of several years
to aid in the process of analysing kernel dumps. The ACT tool analyzes a
system kernel dump and generates a human-readable text summary.
Frequently, this text summary can be sent to Sun rather than uploading a
potentially huge core file.
•ACT prints detailed and accurate information about: where the kernel panicked,
a complete list of threads on the system, the contents of the /etc/system file
that was read when the failed system booted, a list of kernel modules that
were loaded at the time of the panic, the output of the kernel message buffer, etc.
•ACT is delivered in a standard Sun package format. Simply unzip and untar the
package and install it as any other package using pkgadd. The ACT package
is installed in the directory /opt/CTEact. The actual executable can be
found in /opt/CTEact/bin/act.
•When possible, ACT should be run from the server that produced the
core to be analysed. This tool was later obsoleted by the ISCDA tool.
Initial System Crash Dump Analysis (ISCDA)

The iscda script can be run after a system crash to automatically
provide useful information, and it performs data gathering that can
be used to determine the cause of the crash.
Obtain script from http://sunsolve.sun.com/diag/iscda/iscda.sh
Run the Script
If your system panics or hangs, you can run the script once the
system has rebooted and the core file is stored on disk. Redirect the
output to a file. This output may be fairly long, especially if you have
a large system that was manually aborted.
Sample usage
# cd /var/crash/mymachine
# iscda unix.0 vmcore.0 > /tmp/iscda.output
References

1)All Sun-related manuals are available at
http://docs.sun.com
2)All Sun patches, problem tips, hardware details,
Sun part numbers etc. are available at
http://sunsolve.sun.com (needs a login)
3)Google, sunfreeware.com, sunmanagers.com
4)Solaris resources at Kempston and Princeton Univ.
5)PatchPro (http://patchpro.sun.com): can generate
a custom patch list for your requirements
The Final word !!!

Thank you for your time…..
