D73488GC11 SG
D74866
Edition 1.1
D73488GC11
November 2011
Student Guide
Transition to Oracle Solaris 11
Author Copyright © 2011, Oracle and/or its affiliates. All rights reserved.
Contents
1 Introduction
Overview 1-2
THESE eKIT MATERIALS ARE FOR YOUR USE IN THIS CLASSROOM ONLY. COPYING eKIT MATERIALS FROM THIS COMPUTER IS STRICTLY PROHIBITED
Agenda 5-7
Oracle Solaris 10 Zones 5-8
Migrating Solaris 10 Zones (V2V) 5-10
Migrating Solaris 10 Global Zones (P2V) 5-12
Agenda 5-14
Configuring Non-Global Zones by Using the Automated Installer (AI) 5-15
Specifying a Non-Global Zone in the AI Manifest 5-16
Non-Global Zone Configuration Files 5-17
Agenda 7-17
ZFS Deduplication 7-18
ZFS Deduplication Properties 7-20
ZFS Deduplication: Example 7-21
Agenda 7-22
Common Multiprotocol SCSI Target (COMSTAR) 7-23
COMSTAR Benefits and Limitations 7-24
Configuring COMSTAR 7-26
Introduction
Overview
• Course goals
• Agenda
• Practices
Welcome to the Transition to Oracle Solaris 11 course. This is an advanced course that builds
on Oracle Solaris 10 system administration courses. It is focused on the skills and knowledge
required for transitioning from the Oracle Solaris 10 operating environment to the Oracle
Solaris 11 operating environment.
This course highlights the new features delivered with Oracle Solaris 11, including the
Automated Installer (AI), the Image Packaging System (IPS), and network virtualization.
Throughout the course, you learn how to transition to the Oracle Solaris 11 operating
environment by performing a series of guided hands-on practices that walk you through the
critical tasks associated with operating system migration activities. These practices include
case studies that illustrate best practices when transitioning from Oracle Solaris 10 to Oracle
Solaris 11.
This course does not address system administration tasks currently supported in Oracle
Solaris 10 (or other) operating systems. Rather, it focuses on the new and enhanced features
found in the Oracle Solaris 11 operating system. It is assumed that you already have the skills
and knowledge necessary for administering Oracle Solaris 10.
Course Goals
Goals
Transitioning to a new operating system can be a very daunting task. It involves working with
a wide range of complex technologies and procedures, many of which are new to the
personnel participating in the project.
Agenda
Day 1
• Lesson 1: Introduction
• Lesson 2: Introducing the Oracle Solaris 11 New Features and
Enhancements
• Lesson 3: Managing Software Packages in Oracle Solaris 11
Practices
Starting with the lesson titled “Managing Software Packages in Oracle Solaris 11,” each
lesson in this course has an associated practice. Within each practice, you are provided with
a virtual environment that contains all the resources needed to install the Oracle Solaris 11
operating system and configure the new features and enhancements.
Introductions
• Name
• Company affiliation
• Title, function, and job responsibility
• Logistics
– Restrooms
– Break rooms and designated smoking areas
– Local cafeterias and restaurants
Introducing the Oracle Solaris 11 New Features and Enhancements
Objectives
After completing this lesson, you should be able to:
• Describe the Oracle Solaris 11 operating system
• List new features and enhancements of Oracle Solaris 11
• Describe the new operating system installation features
• Describe the new software updating features
This lesson introduces you to the new features and enhancements found in the Oracle Solaris
11 operating system. The lesson begins with a description of Oracle Solaris 11 and continues
with a high-level description of each new feature and enhancement.
Next, the lesson provides a comparison of the features found in Oracle Solaris 10 with those
of Oracle Solaris 11. This is followed by a description of a strategy for transitioning from
Oracle Solaris 10 to Oracle Solaris 11.
Agenda
Oracle Solaris is the industry-leading operating system for the enterprise. Oracle Solaris 11
raises the bar for the innovation introduced in Oracle Solaris 10 with a unique feature set that
few other operating systems can offer. Oracle Solaris 11 has been tested and optimized for
Oracle hardware and software and is an integral part of Oracle’s combined hardware and
software portfolio.
Oracle Solaris 11 provides customers with the latest access to Oracle Solaris 11 technology,
allowing developers, architects, and administrators to test and deploy applications within large
data centers, which greatly simplifies their day-to-day operations. Oracle Solaris 11 is
characterized by the reliability, availability, and serviceability that you expect from a leading
enterprise operating system.
Oracle Solaris 11 provides new optimizations and features designed to deliver proven
scalability and reliability as an integrated component of Oracle’s Exadata and Exalogic
systems.
Oracle Solaris 11 expands support for Oracle Solaris 10 storage technologies. The ZFS file
system includes a number of enhancements, including ZFS as the root file system,
deduplication, and ZFS snapshot differences. Additional enhancements include Common
Multiprotocol SCSI Target (COMSTAR) technology and Common Internet File System (CIFS)
support for seamless file sharing with Windows environments.
Oracle Solaris 11 includes GNOME 2.30, an intuitive, easy-to-use desktop environment, and
the Firefox 3.6.10 web browser, among a variety of other software included in the network
package repository. GNU (GNU's Not Unix) commands and a default bash shell environment are
also available.
Oracle Solaris 11 continues to optimize security controls. This release supplies a number of
security enhancements, including secure-by-default configuration, root treated as a role, and
robust data encryption.
Oracle Solaris 11 provides a completely redesigned software packaging model: the Image
Packaging System (IPS). IPS is a comprehensive delivery framework that spans the complete
software life cycle, addressing software installation, updates, operating system upgrades, and
the removal of software packages.
In contrast to the SVR4 packaging model used in earlier Oracle Solaris releases, IPS
eliminates the need for patching. Relying on the use of network repositories of software
packages, IPS dramatically changes how an administrator updates system and application
software. IPS packages can be installed into nonglobal zones in addition to the global zone.
• Unattended installation
– Oracle Solaris 11 Automated Installer (AI)
— Network installation
— Installation manifest
• Interactive installation
– Oracle Solaris 11 LiveCD installation
— Suited for desktops and notebooks
— GUI interface
— Text-based interface
Oracle Solaris 11 greatly enhances your ability to monitor zone resource consumption with
the introduction of zonestat. With zonestat, you can observe memory and CPU
utilization, utilization of resource control limits, total utilization, and per-zone utilization
breakdowns over specified time periods.
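As a sketch, monitoring zone resource consumption with zonestat might look like the following (the zone names and sampling parameters are illustrative):

```shell
# Report utilization for all zones every 10 seconds until interrupted:
zonestat 10

# Report on two specific zones, with a total summary, sampling every
# 5 seconds for one hour ("zoneA" and "zoneB" are hypothetical zone names):
zonestat -z zoneA,zoneB -R total 5 1h
```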
With Oracle Solaris 11, you can delegate specific zone administration tasks to different
administrators by using Role-Based Access Control (RBAC). With delegated administration,
specified users are granted the permissions to log in to, manage, or clone a given zone.
• Network virtualization
• Network Auto-Magic (NWAM)
• Improved IP multipathing (IPMP)
Network sockets implementation has been improved and no longer uses the STREAMS
module. This not only means performance improvements but also a new, simplified developer
interface for adding new socket types. The architecture also keeps an eye on network traffic
volume, allowing it to shift from interrupt driven to polling mode, which is much more efficient
when dealing with high network traffic volumes.
Oracle Solaris 11 includes an integrated L3/L4 load balancer. This addition includes stateless
Direct Server Return (DSR) and Network Address Translation (NAT) operation modes, a
variety of load-balancing algorithms, and a command-line and configuration API to configure
various features as well as view statistics and other configuration details.
Ethernet bridging is supported in Oracle Solaris 11 with the addition of the Spanning Tree
Protocol (STP) and Transparent Interconnection of Lots of Links (TRILL) protocols.
Storage Enhancements
• ZFS enhancements
– Default file system
– Deduplication
– ZFS snapshot differences (zfs diff)
ZFS is the default root file system in Oracle Solaris 11. UFS is still available for nonroot file
systems. Oracle Solaris 11 has added ZFS deduplication, which detects and removes
redundant data from ZFS file systems. If a ZFS file system has the dedup property enabled,
duplicate data blocks are removed synchronously. As a result, the file system stores only
unique data. Support for listing the differences between ZFS snapshots (zfs diff) has
been added with Oracle Solaris 11. Also, now you can use the shadow migration feature to
migrate data from an old file system to a new one while simultaneously allowing access and
modification of the new file system during the migration process.
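A minimal sketch of these three ZFS features, assuming a hypothetical dataset rpool/export/data and an old file system at /export/old-data:

```shell
# Enable deduplication; subsequent writes store only unique blocks:
zfs set dedup=on rpool/export/data

# List the differences between two snapshots:
zfs snapshot rpool/export/data@monday
zfs snapshot rpool/export/data@tuesday
zfs diff rpool/export/data@monday rpool/export/data@tuesday

# Shadow migration: create a new file system that transparently pulls
# data from the old one while remaining accessible and writable:
zfs create -o shadow=file:///export/old-data rpool/export/new-data
```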
COMSTAR (Common Multiprotocol SCSI Target) technology, introduced in Oracle Solaris 11,
allows network sharing, similar to NFS and CIFS file sharing, but for raw block-device access
via iSCSI or SAN. This technology enables any Oracle Solaris 11 host to become a SCSI target,
allowing it to be accessed over a storage network by a variety of initiator hosts. COMSTAR
supplies a software framework that makes it possible for all SCSI device types to connect to a
transport protocol and provide network device access. In this way, virtual machines can share
image files or access to a database.
Oracle Solaris 11 provides in-kernel CIFS support for seamless file sharing with Windows
environments. The CIFS service also includes new features, such as host-based access
control (allowing a CIFS server to restrict access to specific clients according to IP
addresses), access control lists (ACLs) on shares, and client-side caching of offline files with
synchronization on reconnect.
For desktop users, Oracle Solaris 11 offers a state-of-the-art GNOME desktop. The desktop
includes the innovative Time Slider tool. Integrated with the File Browser, Time Slider
supports file and directory recovery, which is made possible through native snapshot and
clone capabilities in ZFS. A user can click in Time Slider to snapshot a home directory and
later revert to it if necessary.
There are other changes in Oracle Solaris 11 that affect the user experience. The default user
path places /usr/gnu/bin before /usr/bin, giving users a familiar GNU-like environment
by default. The bash shell is now the default interactive shell, and ksh93 replaces ksh as the
default system shell.
The Common UNIX Printing System (CUPS) has been selected as the default print service on
Oracle Solaris 11, replacing the LP print service. CUPS support includes a web and graphical
interface to manage your printing environment. A system that is running CUPS becomes a
host that can accept print requests from client systems, process those requests, and then
send them to the appropriate printer.
• Secure by default
• Root treated as a role
• Robust data encryption
Oracle Solaris 11 enhances Oracle Solaris Trusted Extensions by introducing labeled IPsec
and labeled ZFS datasets. Additionally, Trusted Extensions now enables per-label and per-
user credentials, allowing administrators to require a unique password for each label. This
password is in addition to the session login password, thus allowing administrators to set a
per-zone encryption key for each label of every user’s home directory.
Lesson Agenda
This table shows the major changes made to some of the key features of Oracle Solaris 10 in
Oracle Solaris 11.
Lesson Agenda
Transitioning Strategy
Summary
Managing Software Packages in Oracle Solaris 11
Objectives
This lesson introduces you to the new Oracle Solaris 11 software packaging feature: Image
Packaging System (IPS). The lesson begins with a description of IPS and later compares IPS
to package management in the Oracle Solaris 10 operating system.
Next, the lesson shows you how to configure and work with the IPS features. This is followed
by a description of the method of publishing your own packages in IPS and creating IPS
images.
Agenda
What Is IPS?
The Image Packaging System (IPS) is a framework that provides for software lifecycle
management, such as installation, upgrade, and removal of packages. IPS also allows users
to create their own software packages, create and manage package repositories, and copy
and mirror existing package repositories. An image is a bootable instance of the Oracle
Solaris 11 operating system.
With IPS, you can perform the following tasks:
• Create and manage images.
• Search the IPS packages on your system and in IPS repositories.
• Copy, mirror, create, and administer package repositories.
• Create and publish IPS packages to a package repository.
• Republish the content of an existing package in a package repository.
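For example, the repository-related tasks above might be sketched as follows (the repository path and origin URL are illustrative):

```shell
# Create an empty local package repository:
pkgrepo create /export/repoSolaris11

# Mirror all packages from an existing repository into it:
pkgrecv -s http://pkg.oracle.com/solaris/release -d /export/repoSolaris11 '*'

# Examine the repository's publisher and package information:
pkgrepo info -s /export/repoSolaris11
```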
To use IPS for software package management, you must be running the Oracle Solaris 11
2010_11 (or later) operating system. IPS is not compatible with Oracle Solaris 10 (or earlier)
operating systems. IPS is compatible with both SPARC (sun4v) and x86 (64-bit) systems.
A key component of IPS is the package repository. A package repository is a location where
software packages are stored and from which packages are retrieved by client systems.
An important feature of IPS is that it enables users to mirror the package repository to another
server. IPS can retrieve content from mirrored servers. A mirror provides a complete copy of a
repository’s catalog of packages. Using a nearby mirror can speed up system updates,
distribution construction, zone creation, and other packaging-intensive operations.
Providing the appropriate network infrastructure that allows client systems to access the IPS
server is crucial to making the IPS package scheme work. Clients rely heavily on network
services, such as DNS, for finding their way to the package repository.
IPS Components
Original Mirror
Repository Repository
IPS is made up of key components. Each component has a role to play. These components
include:
• Package: A package in IPS is a collection of actions defined by a set of key-value pairs
that represent metadata such as classification, descriptions, or other attributes such as
path and alias. The key-value pair could also represent a data payload. These actions
can represent items such as files found in a file system or installable objects, such as
drivers, services, groups, and users. Each IPS package is represented by a Fault
Management Resource Identifier (FMRI). FMRIs are used with the pkg(1) command to
indicate which packages to perform operations on.
Agenda
When you create a local repository, you must perform these steps:
1. Obtain software packages: When creating a local package repository, you must first
download the Oracle Solaris 11 repository image from:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html
The repository image provides you with a complete archive of software packages to
allow you to set up a local network IPS repository that client systems can connect to.
The repository image is provided in two parts that must be concatenated. You use the
following command-line instructions to successfully create a full ISO image that can be
burned to a dual-layer DVD or directly mounted using the lofiadm command. You
download parts A and B of the repository ISO by clicking these links:
- Download Part A SPARC, x86 (2 GB)
- Download Part B SPARC, x86 (2 GB)
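Assuming the two downloaded parts are named as shown below (actual file names vary by release), the concatenation and mount steps might be sketched as:

```shell
# Join the two downloaded parts into a single full repository ISO:
cat sol-11-repo-full.iso-a sol-11-repo-full.iso-b > sol-11-repo-full.iso

# Associate the ISO with a loopback device; lofiadm prints the device
# name (assumed here to be /dev/lofi/1), then mount it:
lofiadm -a /export/isos/sol-11-repo-full.iso
mount -F hsfs /dev/lofi/1 /mnt
```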
For client systems to access a local repository, you must set the preferred publisher to the
local IPS publisher as shown in the example in the slide.
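A sketch of pointing a client at the local repository (the server URL is illustrative):

```shell
# Make the local repository the preferred origin for the solaris publisher:
pkg set-publisher -P -g http://ipsserver.example.com/ solaris
```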
Agenda
The pkg command is used to interact with the Image Packaging System. With a valid
configuration, pkg can be invoked to create locations where packages are installed (called
“images”) and to manage packages in those images.
The table in this slide shows which pkg commands are used to perform common package
management tasks. It compares these commands to equivalent commands used in Oracle
Solaris 10.
This slide shows examples of searching for a package (apptrace) and displaying package
information.
The -r option retrieves the information from the repositories of the image's configured
publishers.
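The search and information operations described above can be sketched as:

```shell
# Search the configured repositories (-r) for the apptrace package:
pkg search -r apptrace

# Display package information from the repositories rather than
# from the locally installed image:
pkg info -r apptrace
```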
PHASE                          ACTIONS
Install Phase                    19/19

PHASE                            ITEMS
Package State Update Phase         1/1
Image State Update Phase           2/2
This slide shows examples of performing a package (apptrace) installation dry-run (-nv)
and a real package installation.
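Those two operations can be sketched as:

```shell
# Dry run (-n) with verbose output (-v); nothing is actually installed:
pkg install -nv apptrace

# Perform the actual installation:
pkg install apptrace
```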
This slide shows examples of listing an installed package (apptrace), verifying package
status, and displaying the contents of a package.
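A sketch of the corresponding commands:

```shell
# List the installed package:
pkg list apptrace

# Verify that the installed files match the package metadata:
pkg verify apptrace

# Display the files delivered by the package:
pkg contents apptrace
```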
Package Manager
The Package Manager provides most package and publisher operations and some boot
environment (BE) operations. If you are new to the Oracle Solaris 11 and IPS technologies,
use the Package Manager to quickly download and install packages.
IPS allows you to access the package repository by using a web browser. With a web
browser, you can search for and install packages, and view the contents of a package
manifest.
Update Manager
– pm-launch command:
— $ /usr/lib/pm-launch packagemanager --update --all
– pkg CLI command:
— # pkg update
Another important feature of IPS is the Update Manager. Update Manager updates all
installed packages to the newest version allowed by the constraints imposed on the system
by installed packages and publisher configuration.
The Update Manager feature can be invoked in one of the following three ways:
• In the Package Manager GUI, click the Updates button or select the Package > Updates
menu option.
• Use pm-launch with the packagemanager subcommand:
$ /usr/lib/pm-launch packagemanager --update --all
• Use the pkg CLI command:
# pkg update
If the system created a new boot environment (BE) for the update, you can edit the default BE
name. Click the Restart Now button to restart your system immediately or the Restart Later
button to restart your system at a later time. You must restart to boot into the new BE. The
new BE will become your default boot environment. Your current BE will be available as an
alternate boot choice.
Agenda
You can create several different types of IPS packages. The package is then published to the
repository by using the pkgsend command. You must perform the steps shown in the slide to
publish a package in IPS.
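A minimal sketch of publishing, assuming a hypothetical prototype directory /tmp/proto, a package manifest mypkg.p5m, and a local repository at /export/repoSolaris11:

```shell
# Publish the package described by mypkg.p5m, taking file content
# from the prototype directory, into the local repository:
pkgsend publish -s /export/repoSolaris11 -d /tmp/proto mypkg.p5m
```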
In this practice, you work with the IPS package publishing feature. During this practice, you
create a simple software package and deploy it by using IPS.
Agenda
The beadm utility is the primary BE management tool. The beadm utility aggregates all
datasets in a boot environment and performs actions on the entire boot environment at once.
You no longer need to perform ZFS commands to modify each dataset individually. It
manages the dataset structures within boot environments. For example, when the beadm
utility clones a boot environment that has shared datasets, the utility automatically recognizes
and manages those shared datasets for the new boot environment.
The beadm utility enables you to perform administrative tasks on your boot environments.
These tasks can be performed without upgrading your system. It automatically manages and
updates the GRUB menu for x86 systems, or the boot menu for SPARC systems. For
example, when you use the beadm utility to create a new boot environment, that environment
is automatically added to the GRUB menu or boot menu.
This slide shows examples of listing boot environments and associated snapshots.
N means that the boot environment is currently active, and R means that it will be the active
boot environment after the next reboot.
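A sketch of the listing commands (the Active column of the output is where N and R appear):

```shell
# List all boot environments:
beadm list

# Include snapshots in the listing:
beadm list -s
```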
This slide shows examples of creating a new boot environment and a clone.
• The first command creates a new boot environment.
• The second command creates a snapshot of the new boot environment.
• The third command creates a boot environment clone from a snapshot.
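Those three commands might be sketched as follows (the BE and snapshot names are illustrative):

```shell
# 1. Create a new boot environment:
beadm create solaris-test

# 2. Create a snapshot of the new boot environment:
beadm create solaris-test@backup

# 3. Create a boot environment clone from the snapshot:
beadm create -e solaris-test@backup solaris-clone
```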
This slide shows examples of activating, renaming, and destroying boot environments.
This slide shows examples of mounting and unmounting inactive boot environments.
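A sketch of these life-cycle operations (the BE names and mount point are illustrative):

```shell
# Activate a BE so it is booted on the next restart:
beadm activate solaris-clone

# Rename an inactive BE:
beadm rename solaris-test solaris-old

# Mount an inactive BE to inspect or modify its contents, then unmount:
beadm mount solaris-clone /mnt
beadm unmount solaris-clone

# Destroy an unneeded BE:
beadm destroy solaris-old
```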
The Package Manager is a graphical user interface that enables you to install, update, and
manage packages on your installed system. If you use the Package Manager to update all the
packages on your system, a clone of the active boot environment is created. This clone
enables you to, if necessary, boot into the boot environment state that existed before the
update process was started.
You can use the Package Manager to manage your boot environments as follows:
• You can delete old and unused boot environments to make the disk space available.
• You can change the default boot environment on your system.
• You can activate a boot environment.
Quiz
Answer: c
Quiz
Answer: b
Quiz
Answer: a
Quiz
Answer: b
Quiz
You have three publishers listed in this order:
mypublisher.com, which is the highest ranked publisher,
solaris, and whoisit. For search order purposes, you want
to move the whoisit publisher before the solaris publisher.
Answer: d
Quiz
Answer: d
Quiz
Answer: a
Quiz
Answer: c
Quiz
Answer: a
Summary
Installing the Oracle Solaris 11 Operating System
Objectives
This lesson introduces you to the new Oracle Solaris 11 operating system installation
methods. You explore both interactive and automated installations. Next, you compare and
convert Oracle Solaris 10 JumpStart installation to Oracle Solaris 11 installation. The lesson
also shows you how to configure and work with automated installation features. Finally, you
are introduced to the distribution constructor.
Agenda
Hardware Requirements
Disk space: Recommended size is 7 GB. A minimum of 3 GB is required.
This slide shows the hardware requirements needed for installing Oracle Solaris 11.
Agenda
When starting the Oracle Solaris 11 Text installer, you are provided with a menu of keyboard
layouts as shown in this slide. The default is US English.
The installation menu provides you with options such as installing additional device drivers
and changing the terminal type. The default is “Install Oracle Solaris” (option 1).
During the Oracle Solaris 11 Text installation, you must choose the disk on which to install the
OS.
You are required to assign a name to the install system. This is the network hostname. Also,
you must decide how the installation system network is to be configured:
• Automatically: This option uses the Network Auto-Magic (NWAM) feature. NWAM is a
daemon that takes care of the connection to the network. As the name suggests, the
network connection should work auto-magically, which means that most of the time, you
do not need to care about your connection.
• None: This option disables NWAM. When selecting this option, you must configure the
network manually.
In Oracle Solaris 11, root is configured by default as a role rather than a user. During system
installation, the Text installer helps you to set up the root password and initial user account.
You use the initial user account to log in to the system. After initial user login, a user with the
appropriate privileges can subsequently assume the role of root using su or perform
administrative tasks after authentication using sudo or pfexec.
The Oracle Solaris 11 LiveCD for x86 provides a GUI-based interactive installation that steps
through the process of configuring the system for the OS installation. The LiveCD then installs
a software payload that includes a full desktop operating environment. The LiveCD also
provides additional utilities, such as the Device Driver Utility and partition editor, to help
ensure successful installations.
The Device Driver Utility helps you to detect whether Oracle Solaris 11 can be installed on
your x86 system. When started, it runs a quick device compatibility check on your system. If a
device driver problem is detected, it provides the tools for installing the appropriate device
driver packages from a file, web, or IPS repository.
The GParted Partition Editor allows you to customize the installation disk layout before you
begin the OS installation. Note that GParted is usually used only if you are attempting to set
up a disk to boot multiple operating systems.
An Oracle Solaris 11 LiveCD installer helps you choose the target installation disk or partition.
The Oracle Solaris 11 LiveCD installer provides a point-and-click time zone configuration
interface. Simply click the city nearest to your installation location.
As we saw with the Text installer, in Oracle Solaris 11 root is configured by default as a role
rather than a user. As with the Text installer, during system installation, the LiveCD installer
helps you set up the root password and initial user account. You use the initial user account to
log in to the system. After initial user login, a user with the appropriate privileges can
subsequently assume the role of root using su or perform administrative tasks after
authentication using sudo or pfexec. Note that the root password will be the same as the
user account password entered here.
In addition to the initial user configuration, the Users dialog box allows you to set the
hostname for your system. The network configuration method is automatically set to NWAM.
In these practices, you perform interactive installations of the Oracle Solaris 11 operating
system.
In Oracle Solaris 11, system and network configuration that was previously stored in the /etc
directory is now stored in an SMF repository. Moving configuration data to SMF service
properties enables the delivery of a uniform, extensible architecture for system configuration
that provides you with more complete capability to manage the system configuration.
The following network configuration features have changed in Oracle Solaris 11:
• File system sharing: Sharing a file system is managed through SMF and administered
by using the zfs command. The /etc/dfs/dfstab file is meaningful only for legacy
file systems.
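As a hedged sketch of ZFS-managed sharing (the dataset path and share name are illustrative):

```shell
# Define an NFS share as a property of the dataset:
zfs set share=name=docs,path=/export/docs,prot=nfs rpool/export/docs

# Enable NFS sharing on the dataset:
zfs set sharenfs=on rpool/export/docs
```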
• Network configuration: Network configuration persistence through the editing of these
files is no longer necessary. You use commands such as svccfg, svcprop, ipadm,
and dladm to manage this type of network configuration. Files such as /etc/hostname,
/etc/dhcp, and /etc/hostname.ip*.tun* are no longer relevant.
• The system host name: A system's host name is now set by configuring the
config/nodename service property of the svc:/system/identity:node SMF
service. The /etc/nodename file is no longer relevant.
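Setting the host name through SMF might be sketched as follows (the host name myhost is illustrative):

```shell
# Set the config/nodename property on the identity service:
svccfg -s svc:/system/identity:node setprop config/nodename = astring: myhost

# Refresh and restart the service so the change takes effect:
svcadm refresh svc:/system/identity:node
svcadm restart svc:/system/identity:node

# Verify the new value:
svcprop -p config/nodename svc:/system/identity:node
```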
The sysconfig utility is used in Oracle Solaris 11 to unconfigure and reconfigure an existing
Oracle Solaris 11 system. This tool replaces the sysunconfig and sysidtool utilities.
The sysconfig utility launches the System Configuration (SC) tool. You use the SC tool to
interactively unconfigure and configure the OS image.
There are three operations that you can perform using the sysconfig utility:
• Unconfiguration of the system: This operation brings the OS image to a pristine
(unconfigured) state.
• Configuration of the system: This operation allows you to reconfigure the OS image. It
helps you change the host name, IP address, name service, time zone, initial user
account, and root password.
• System configuration (SC) profile creation: This operation helps you create an SC
profile. The SC profile is an XML-based file that contains the host name, IP address,
name service, time zone, initial user account, and root password configuration
properties. The SC profile can be used with the sysconfig configure command or
with Automatic Installation (AI) to configure an OS image.
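The three operations can be sketched as follows (the profile path is illustrative):

```shell
# Bring the OS image back to an unconfigured, pristine state:
sysconfig unconfigure

# Reconfigure the system interactively with the SC tool:
sysconfig configure

# Create an SC profile without altering the running system:
sysconfig create-profile -o /tmp/sc_profile.xml
```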
Agenda
Manifests
The automated installer is used to automate the installation of the Oracle Solaris 11 OS on
one or more SPARC and x86 systems over a network. The installations can differ in
architecture, packages installed, disk capacity, network configuration, and other parameters.
Automated installation can be run in a “serverless” mode where the client boots from the ISO
and uses a manifest that is either located on the media or obtained from a network location
that you have access to. Client access to an IPS original repository and DHCP service are
required.
An automated installation over the network to a client system, as shown in the slide, performs
the following core steps:
1. A client system boots and gets IP information from the DHCP server.
2. The client contacts an install service on the AI server and accesses the boot image and
the AI manifest containing the installation specifications.
3. The client is installed with the operating system, pulling packages from the IPS original
repository specified in the AI manifest.
Assume that you have set up an installation server with one or more install services. You've
customized the installation specifications for the installation services to suit your needs. Now,
you are ready to install the Oracle Solaris 11 OS to client systems on the network. You need
only to boot the client, and the process runs to completion without further input from you.
This flowchart illustrates how a client system is installed. The client browses for available
installation services, seeking a service where the installation criteria in the service's manifest
file match the characteristics of the client system. When a match is found, the installation is
performed on the client system, using a boot image and manifest specifications provided by
the installation service.
AI Environmental Requirements
• The network
• Client access to AI service and IPS repository
• AI service storage location
To use AI to install client systems over the network, you must set up DHCP and also an AI
install service on an install server. AI uses DHCP to provide the IP address, subnet mask,
router, DNS server, and the location of the install server to the client machine to be installed.
The DHCP server and AI install server can be the same machine or two different machines.
The client machines you want to install must be able to access an Oracle Solaris Image
Packaging System (IPS) software package repository. The IPS package repository can be on
the install server, on another server on the local network, or on the Internet. An AI install
service is associated with a SPARC or x86 network boot image (net image), one or more
installation instruction files (AI manifests), and zero or more system configuration instruction
files (SC profiles). The net image is not a complete installation. Client machines must access
an IPS package repository to complete their installations. The AI manifest specifies one or
more IPS package repositories where the client retrieves the packages needed to complete
the installation. The AI manifest also includes the names of additional packages to install and
information such as target device and partition information. You can also specify instructions
for configuring the client.
AI does not support storing the AI service in a dedicated ZFS file system. When creating the
AI service, store the service in a standard directory.
If two client machines have different architectures or need to be installed with different
versions of the Oracle Solaris 11 OS, you create two AI install services and associate each
install service with a different net image. If two client machines need to be installed with the
same version of the Oracle Solaris 11 OS but need to be installed differently in other ways,
you create two AI manifests for the AI install service. The different AI manifests can specify
different packages to install or a different slice as the install target. If client systems need to
have different configurations applied, create multiple SC profiles for the install service. The
different system configuration (SC) profiles can specify different network or locale setup or
unique host name and IP address.
AI stores the default manifest files in ../auto_install/manifest. Custom manifests and
profiles should never be stored inside the AI service directory structure.
The minimum you have to do to use AI is create one install service. In this minimal scenario,
all clients have the same architecture and are installed with the same version of the Oracle
Solaris OS. The installations use the default AI manifest, which specifies the most recent
version of the OS available from the default IPS package repository on the Internet.
1. Make sure the install server has a static IP address and default route.
2. Install the installation tools package, install/installadm.
3. Run the installadm create-service command.
4. Make sure the clients can access a DHCP server.
5. Make sure the necessary information is available in the DHCP configuration to boot the
service.
6. Make sure the clients can access an IPS software package repository. To use the
default IPS package repository, the clients must be able to access the Internet.
7. Network boot the client.
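The minimal scenario above can be sketched with the following commands. This is a hedged illustration only: the service name, ISO path, and image directory are hypothetical, and exact installadm options can vary between Oracle Solaris 11 releases.

```shell
# Step 2: Install the installation tools package.
pkg install install/installadm

# Step 3: Create an install service from a downloaded AI net image.
# (Service name, ISO path, and target directory are illustrative.)
installadm create-service -n default-x86 \
    -s /export/isos/sol-11-ai-x86.iso \
    -d /export/auto_install/default-x86

# Confirm the service is registered and that SMF enabled it automatically.
installadm list
svcs svc:/system/install/server:default
```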
When you network boot the client, the following steps are performed:
1. The client gets the install server address from the DHCP server.
2. Because the install server has only one install service, the client uses that service if the
architecture matches.
3. Because the install service has only one AI manifest, the client uses that default AI
manifest, installing software packages from the IPS package repository over the
network.
4. When the client boots after installation, an interactive tool prompts for system
configuration information because no system configuration profile is provided.
To specify installation parameters such as a local IPS publisher, the target disk for installation,
partition or mirror configuration, or additional software packages to install, provide a
customized AI manifest. Perform the following steps before you boot the client, in addition to
the minimum required steps:
1. Create a new AI manifest, or write a script that dynamically creates a custom AI
manifest at client installation time.
2. Run the installadm create-manifest command to add the new manifest or
script to the install service. Specify criteria for the client to select this manifest or script,
or use the -d option to make this manifest or script the default manifest specification for
this service.
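As a hedged sketch of step 2, a custom manifest might be registered like this (the service name, manifest file, and MAC address are hypothetical):

```shell
# Add a custom AI manifest to an existing service and target it at one
# client by MAC address.
installadm create-manifest -n default-x86 \
    -f /var/tmp/manifests/web_server.xml \
    -c mac="08:00:27:AA:BB:CC"

# Alternatively, register it as the default manifest for the service:
installadm create-manifest -n default-x86 \
    -f /var/tmp/manifests/web_server.xml -d
```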
When you network boot the client, the following steps are performed:
1. The client gets the install server address from the DHCP server.
2. Since the install server has only one install service, the client uses that service if the
architecture matches.
matches
3. The client is directed to the correct provisioning manifest by criteria specified to
create-manifest. If no criteria match, the client uses the default manifest for this
service.
4. The client is provisioned according to the selected manifest.
5. When the client boots after installation, an interactive tool prompts for system
configuration information because no system configuration profile is provided.
To specify system configuration parameters such as time zone, user accounts, and networking,
provide a Service Management Facility (SMF) system configuration profile (SC profile).
Perform the following steps before you boot the client, in addition to the minimum required
steps:
1. Create an SC profile using the sysconfig create-profile utility.
2. Run the installadm create-profile command to validate the profile, add the
profile to the install service, and specify criteria to select which clients should use this SC
profile.
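The two SC profile steps might look like the following sketch (the output path, service name, and criteria are hypothetical, not part of the original slide):

```shell
# Step 1: Run the interactive configuration tool and write the resulting
# SC profile to a file.
sysconfig create-profile -o /var/tmp/profiles/sc_profile.xml

# Step 2: Validate the profile, add it to the install service, and bind
# it to one client by MAC address.
installadm create-profile -n default-x86 \
    -f /var/tmp/profiles/sc_profile.xml \
    -c mac="08:00:27:AA:BB:CC"
```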
When you network boot the client, the following steps are performed:
1. The client gets the install server address from the DHCP server.
2. Since the install server has only one install service, the client uses that service if the
architecture matches.
3. Since the install service has only one AI manifest, the client uses that default AI manifest,
installing software packages from the IPS package repository over the network.
4. The client is directed to the correct system configuration profile by criteria specified to
create-profile.
5. The client is configured according to the selected configuration profile. If no configuration
profile is selected because the criteria do not match, the interactive configuration tool
starts.
To install different versions of the Oracle Solaris 11 OS, create additional AI install services.
Perform
the following steps before you boot the client, in addition to the minimum required steps:
1. Run the installadm create-service command and specify a different net image.
2. Run the installadm create-client command to direct the client to this new
install service.
3. Create custom manifests and SC profiles (if required) and associate them with the
appropriate AI service.
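Steps 1 and 2 above might be sketched as follows (the service name, ISO path, image directory, and MAC address are hypothetical):

```shell
# Create a second install service associated with a different net image,
# for example a newer OS version.
installadm create-service -n s11u1-x86 \
    -s /export/isos/sol-11u1-ai-x86.iso \
    -d /export/auto_install/s11u1-x86

# Direct one client to the new service by its MAC address.
installadm create-client -n s11u1-x86 -e 08:00:27:AA:BB:CC
```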
When you network boot the client, the following steps are performed:
1. The client gets the install server address from the DHCP server.
2. The client is directed to this new install service by create-client.
3. The client is provisioned according to the default provisioning manifest for this service.
4. When the client boots after installation, an interactive tool prompts for system
configuration information because no system configuration profile is provided.
This slide provides an overview of the tasks you must perform when configuring your AI
server.
Setting up the AI server involves the four key tasks shown in the slide.
Note that create-service automatically enables the AI service in SMF.
Also note that create-client is needed only if more than one service for a particular
architecture (SPARC or x86) is provided on the AI server. When there is only one, clients use
that service by default and do not need to be specifically configured with create-client.
AI Manifests
• Default manifest
• Custom manifest
• Criteria manifest
AI manifests are XML files used to specify multiple sets of installation and system
configuration instructions for each install service.
AI has three types of manifests:
• Default manifest: A default manifest is an installation manifest that has no criteria
associated with it. The default manifest is used by clients when no other installation
manifest’s criteria match the client.
• Custom manifest: To perform different installations on different clients by using the
same install image, you need to provide customized AI manifests for that install service.
Clients that do not match the criteria specific to any custom manifest are installed using
the instructions in the default manifest.
• Criteria manifest: The criteria manifest allows you to associate client-specific
installation instructions with AI services. When the client matches the criteria that have
been specified for a criteria manifest, the client uses the associated manifest.
The default.xml manifest file provides a generic configuration applicable to most clients.
You can change the AI defaults by copying the default.xml file to a new file and editing the
new file as desired. You can then apply the new manifest by using the installadm
create-manifest -f command, as in this example:
installadm create-manifest -f new_manifest -n AI_service_name
The <target> element is used to configure the disk drive used for the OS installation.
<software type="IPS">
<source>
<publisher name="solaris">
This slide shows the IPS and packages sections of the default manifest file. The
<software> element defines the location of the IPS origin and which software packages to
install and uninstall. The pkg:/entire package is recommended so that the system will be
updated coherently when patching or upgrading in the future. The solaris-large-server
package is suitable for a server installation.
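Putting the fragments together, the <software> section of a manifest has roughly the shape sketched below. This is an illustration assembled from the excerpts in this lesson; the publisher origin shown is the public Oracle release repository, and the stock default.xml may differ in detail.

```xml
<software type="IPS">
  <source>
    <publisher name="solaris">
      <origin name="http://pkg.oracle.com/solaris/release"/>
    </publisher>
  </source>
  <software_data action="install">
    <name>pkg:/entire</name>
    <name>pkg:/group/system/solaris-large-server</name>
  </software_data>
</software>
```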
The criteria manifest allows you to associate client-specific installation instructions with AI
services. When the client matches the criteria that have been specified for a criteria manifest,
the client uses that manifest.
An AI manifest is selected for a client according to the following algorithm:
• If custom manifests are defined for this install service but the client does not match
criteria for any custom manifest, the client uses the default manifest.
• If the client matches criteria that have been specified for a custom manifest, the client
uses the associated manifest.
If client characteristics match multiple manifests, the client characteristics are evaluated in the
following order:
• mac
• ipv4
• platform
• arch
• cpu
• mem
For example, if one criteria specification matches the client's MAC address and another
matches the client's architecture, the manifest associated with the MAC address is selected,
because mac is evaluated before arch.
This slide shows examples of arch, mac, and ipv4 criteria files.
System configuration profiles (SC profiles) specify client system configuration as a set of
configuration parameters in the form of a Service Management Facility (SMF) profile. The SC
profile sets SMF properties for appropriate SMF services.
SC profiles are applied during the first boot of the system after AI installation. SMF services
responsible for particular configuration areas process SMF properties and configure the
system accordingly.
Each client can use any number of SC profiles. For example, a client might be assigned one
profile that provides only the hostname and IP address for that client. The same client and
many other clients might be assigned other profiles that set more broadly applicable property
values. If no SC profile is provided for a particular client, the interactive configuration tool is
started on that client.
The SC profiles can be created using the sysconfig create-profile utility or using a
text editor.
SC Profile: Example
The SC profile is used to configure client systems. This slide shows entries for configuring the
initial standard user and root role.
SC Profile: Example
This slide shows the entries for setting up the time zone and node host name.
SC Profile: Example
This slide shows entries for setting up the system keymap, terminal type, and network type.
SC Profile: Example
<service version="1" type="service" name="network/install">
<instance enabled="true" name="default">
<property_group type="application" name="install_ipv4_interface">
<propval type="astring" name="address_type" value="static"/>
<propval type="net_address_v4" name="static_address"
value="192.168.0.140/24"/>
<propval type="astring" name="name" value="net0/v4"/>
This slide shows entries for configuring an IP address and the name-service switch.
SC Profile: Example
This slide shows how to enable and disable the AI SMF service.
This slide begins a step-by-step walkthrough for configuring an AI service. This walkthrough
includes:
• Creating the AI service
• Adding a client to the AI service
• Creating a custom manifest
• Creating a criteria manifest
• Adding manifests to the AI service
• Creating an SC profile
• Adding the profile to the AI service
• Validating the SC profile
In this slide, you create a new AI service named custom_ai in the
/export/AI/custom_ai directory. The AI image used in this service is sol-11-dev-
171-ai-x86.iso (Oracle Solaris 11 Build 171). Next, you add client
08:00:27:85:C7:D8 to the custom_ai AI service.
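The two actions described on this slide might be carried out with commands like the following sketch (the ISO path is reconstructed from the description, and exact option letters can vary by release):

```shell
# Create the custom_ai service from the Oracle Solaris 11 Build 171
# AI image, storing it in /export/AI/custom_ai.
installadm create-service -n custom_ai \
    -s /export/isos/sol-11-dev-171-ai-x86.iso \
    -d /export/AI/custom_ai

# Add the client to the service by its MAC address.
installadm create-client -n custom_ai -e 08:00:27:85:C7:D8
```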
root@s11-serv1:~# vi /var/tmp/manifests/custom_manifest.xml
<!DOCTYPE auto_install SYSTEM
"file:///usr/share/install/ai.dtd">
<auto_install>
<ai_instance name="custom_ai" auto_reboot="true">
Now that the custom_ai service exists, you create a custom manifest file named
custom_manifest.xml. Here, you set the image name to custom_ai. This results in a
manifest name (identifier) that is used to manage the manifest. Next, the target element
configures the client default boot disk using Oracle Solaris 11 standard conventions. Then,
you set the IPS publisher to a local origin (http://s11-serv1.mydomain.com).
<software_data action="install">
<name>pkg:/entire</name>
<name>pkg:/group/system/solaris-large-server</name>
</software_data>
This slide continues the custom_manifest edit. Here, you identify which software packages
are to be loaded on the client system from the IPS server.
After the custom manifest build is completed, you create a criteria manifest for the client
system. In this case, you use the client’s MAC address as the criteria.
Now that the custom manifest and criteria manifest are built, you associate them with the
custom_ai AI service using the installadm add-manifest command.
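As a hedged sketch of this association step (shown here with the create-manifest form of the subcommand, which takes the same arguments; older releases name it add-manifest):

```shell
# Associate the custom manifest with the custom_ai service, selecting it
# by the client's MAC address, as in this walkthrough.
installadm create-manifest -n custom_ai \
    -f /var/tmp/manifests/custom_manifest.xml \
    -c mac="08:00:27:85:C7:D8"
```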
Next, you use the sysconfig create-profile utility to create a system configuration
profile named client_profile for the AI client. The sysconfig create-profile
utility starts the interactive system configuration tool, which guides you through the SC profile
design.
After the SC profile is completed, you use the installadm create-profile command to
associate the new SC profile with the custom_ai AI service and the client criteria manifest.
Finally, you validate the SC profile. If the SC profile passes validation checks, the AI service is
completed and available.
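These last two steps might look like the following sketch (the profile file path is hypothetical; option letters can differ between releases):

```shell
# Validate the SC profile, add it to the custom_ai service, and bind it
# to the client by the same MAC address criteria.
installadm create-profile -n custom_ai \
    -f /var/tmp/profiles/client_profile.xml \
    -c mac="08:00:27:85:C7:D8"

# Re-validate an already registered profile by name.
installadm validate -n custom_ai -p client_profile
```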
Agenda
Comparing JumpStart to AI
Task: Set up an install server.
JumpStart: Use the setup_install_server script.
AI: Use the installadm create-service command.
This table in the slide compares the methods used to accomplish JumpStart tasks and AI
tasks.
This table compares Oracle Solaris 10 JumpStart rules file keywords to Oracle Solaris 11 AI
criteria file directives.
This table continues the comparison of Oracle Solaris 10 JumpStart rules file keywords to
Oracle Solaris 11 AI criteria file directives.
This table continues the comparison of Oracle Solaris 10 JumpStart rules file keywords to
Oracle Solaris 11 AI criteria file directives.
JumpStart Rules File Keyword    AI Manifest Directive
boot_device                     <target>
This table shows how to convert Oracle Solaris 10 JumpStart rules file keywords to Oracle
Solaris 11 AI manifest directives.
This table shows how to convert Oracle Solaris 10 JumpStart rules file keywords to Oracle
Solaris 11 AI manifest directives.
This table shows how to convert Oracle Solaris 10 JumpStart rules file keywords to Oracle
Solaris 11 AI manifest directives.
Agenda
Distribution Constructor
You use the distribution constructor to build custom Oracle Solaris images. These images can
be used to install the Oracle Solaris software on individual systems, multiple systems, or
Virtual Machines (VMs) that run the Oracle Solaris 11 operating system. The distribution
constructor takes an XML manifest file as input and builds an ISO image or Virtual Machine
image that is based on the parameters specified in the manifest file.
Using the distribution constructor, you can build customized versions of the following types of
Oracle Solaris 11 images:
• x86 or SPARC Oracle Solaris Text installer image
• Oracle Solaris x86 LiveCD image
• x86 or SPARC ISO image for Automated Installations
• x86 Oracle Solaris Virtual Machine
The distribution constructor is distributed in the distribution-constructor package. The
distribution-constructor package contains the distro_const command-line utility for
building custom Oracle Solaris images and Virtual Machine images. It also contains default
manifest files that are used to describe the various image types.
This table lists the default manifest files shipped with the distribution-constructor package.
After you install the distribution-constructor package, you can locate these manifest files in the
/usr/share/distro_const/image_type directory.
The distribution-constructor package also contains additional “finalizer” scripts that can be
used to make installation customizations based on the type of image that you are building.
The manifest files point to the finalizer scripts, and the finalizer scripts transform the generic
image into a media-specific distribution. You can create your own finalizer scripts. If you do
create new scripts, edit the manifest files to point to these new scripts.
Note: See the Oracle Solaris 11 Distribution Constructor Guide for more information about
creating custom finalizer scripts.
Building an OS Image
Building an OS image can be done in one step by using the distro_const command
without options. You use the options provided in the distro_const command to stop and
restart the build process at various stages in the image-generation process, in order to check
and debug your selection of files, packages, and scripts for the image that is being built. This
process of stopping and restarting during the build process is called checkpointing.
Checkpointing supports the process of developing and debugging images. You can start
building an image, pause at any stage you want and examine the contents of the image, and
then resume building the image. Checkpointing is optional. The checkpointing feature is
enabled by default in the manifest file. A ZFS dataset, or a mount point that correlates to a
ZFS dataset, must be specified as the build area.
Checkpointing allows you to stop and resume at a specific checkpoint (step).
Example:
• distro_const build -p step manifest
• distro_const build -r step manifest
Alternatively, you can disable checkpointing in the manifest file by setting the
checkpoint_enable parameter to false.
Checkpointing should not be disabled, because it makes debugging problems very difficult.
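A hedged checkpointing session might look like this. The manifest path matches the default manifest location described earlier in this lesson, but the checkpoint name used here is illustrative; run the -l form first to see the real names for your manifest.

```shell
# List the checkpoints available for this manifest.
distro_const build -l /usr/share/distro_const/dc_livecd.xml

# Build the image, pausing after a chosen checkpoint (name illustrative).
distro_const build -p ba-init /usr/share/distro_const/dc_livecd.xml

# Examine the build area, then resume from that checkpoint.
distro_const build -r ba-init /usr/share/distro_const/dc_livecd.xml
```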
Quiz
Answer: b
Quiz
Answer: a
Quiz
Answer: b
Quiz
Answer: c
Quiz
Answer: d
Quiz
Answer: c
Summary
In this lesson, you should have learned how to:
• Describe Oracle Solaris 11 installation options
• Plan for an Oracle Solaris 11 installation
• Describe an Oracle Solaris 11 LiveCD installation
• Describe an Oracle Solaris 11 Text installation
In this lesson, you were presented with the Oracle Solaris 11 installation options. You were
shown how to install the operating system using the interactive options (text installer and
LiveCD) as well as automated installation. You then spent some time looking at how to
configure an AI server and client. You also had the opportunity to compare a JumpStart OS
installation to an AI OS installation and see how to perform the conversion. Finally, you were
introduced to the distribution constructor and shown how to build an OS image.
Administering Oracle Solaris Zones
Objectives
This lesson introduces you to the new Oracle Solaris 11 zones features and enhancements.
You learn how to configure a Solaris 10 zone in Oracle Solaris 11 and migrate Solaris 10
zones from Oracle Solaris 10. Finally, you monitor zone resource consumption and delegate
zone administration.
Agenda
Oracle Solaris Zones is a built-in OS virtualization technology with a long and distinguished
pedigree. One of the most widely adopted and mature virtualization technologies, Oracle
Solaris Zones was first introduced as a core part of Oracle Solaris 10. As of Oracle Solaris 11,
Oracle Solaris Zones is even more central to both applications and end users.
Enhancements and new features include:
• Integration into the new packaging system (IPS)
• Support for Oracle Solaris 10 Zones
• Integration with the new Oracle Solaris 11 network stack architecture
• Improved observability
• Increased control over administration
• Tight integration with ZFS
Agenda
The Oracle Solaris 10 zone is a complete run-time environment for Oracle Solaris 10
applications on SPARC and x86 machines running the Oracle Solaris 10 9/10 operating
system or later. You must install the s10 patch before you create the archive that will be used
to install the zone. Oracle Solaris 10 zones are supported on all SPARC and x86
architecture machines that the Oracle Solaris 11 release has defined as supported platforms.
The Oracle Solaris 10 zone supports the execution of 32-bit and 64-bit Oracle Solaris 10
applications. Oracle Solaris 10 zones include the tools required to install an Oracle Solaris 10
system image into a zone.
You cannot install a Solaris 10 zone directly from Oracle Solaris 10 media. A physical-to-
virtual (P2V) capability is used to directly migrate an existing system to a zone on a target
system. The Oracle Solaris 10 zone also supports the tools used to migrate a Solaris 10 zone
to an Oracle Solaris 10 zone. The virtual-to-virtual (V2V) process for migrating a Solaris 10
zone into an Oracle Solaris 10 zone supports the same archive formats as P2V. The Oracle
Solaris 10 zone supports the whole root zone model. All of the required Oracle Solaris 10
software and any additional packages are installed into the private file systems of the zone.
The zone must reside on its own ZFS dataset; only ZFS is supported. The ZFS dataset will be
created automatically when the zone is installed or attached. If a ZFS dataset cannot be
created, the zone will not install or attach. Note that the parent directory of the zone path must
also be a ZFS dataset or the file system creation will fail. Any script or program that executes
in an Oracle Solaris 10 zone should also work in a Solaris 10 zone.
A /dev/sound device cannot be configured into the Solaris 10 zone.
There are four key tasks to migrating an Oracle Solaris 10 zone to Oracle Solaris 11:
1. Assess the Solaris 10 zone to be migrated. An existing Oracle Solaris 10 9/10 system
(or later released Solaris 10 update) can be directly migrated into a Solaris 10 zone on
an Oracle Solaris 11 system. Depending on the services performed by the original
system, you might need to manually customize the zone after it has been installed. For
example, the privileges assigned to the zone might need to be modified or the network
interface is different. It is critical that you examine the source system and collect the
following information:
- Host name
- Host ID
- Domain name
- Root password
- Running applications
- Networking
- Storage
- Zone configuration
2. Create an archive of the Solaris 10 zone to be migrated. You have a variety of methods
available for creating the archive. The installer can accept the following archive formats:
- flar image
- cpio archives
- gzip compressed cpio archives
- bzip2 compressed cpio archives
- pax archives created with the -x xustar (XUSTAR) format
- ufsdump level zero (full) backups
After you have created an archive, you must provide a method (such as NFS) of making
the archive accessible to the target Oracle Solaris 11 system.
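The archive-and-install flow might look like the following sketch. The host, zone, and path names are hypothetical; flarcreate runs on the Oracle Solaris 10 source system, and the remaining commands run on the Oracle Solaris 11 target.

```shell
# On the Oracle Solaris 10 source system: create a flash archive on
# shared storage (here, an NFS path).
flarcreate -S -n s10-system /net/nfs-server/export/s10-system.flar

# On the Oracle Solaris 11 target: configure a solaris10-branded zone
# from the SYSsolaris10 template.
zonecfg -z s10-zone
# zonecfg:s10-zone> create -t SYSsolaris10
# zonecfg:s10-zone> set zonepath=/zones/s10-zone
# zonecfg:s10-zone> exit

# Install the zone from the archive; -u unconfigures the system identity
# so it can be reconfigured on first boot.
zoneadm -z s10-zone install -u -a /net/nfs-server/export/s10-system.flar
```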
There are four key tasks to migrating an Oracle Solaris 10 global zone to Oracle Solaris 11:
1. Assess the global zone to be migrated. An existing Oracle Solaris 10 9/10 system (or
later released Solaris 10 update) can be directly migrated into a Solaris 10 zone on an
Oracle Solaris 11 system. Depending on the services performed by the original system,
you might need to manually customize the zone after it has been installed. For example,
the privileges assigned to the zone might need to be modified or the network interface is
different. It is critical that you examine the source system and collect the following
information:
- Host name
- Host ID
- Domain name
- Root password
- Running applications
- Networking
- Storage
2. Create an archive of the global zone to be migrated. You have a variety of methods
available for creating the archive. The installer can accept the following archive formats:
- flar image
- cpio archives
- gzip compressed cpio archives
- bzip2 compressed cpio archives
- pax archives created with the -x xustar (XUSTAR) format
- ufsdump level zero (full) backups
After you have created an archive, you must provide a method (such as NFS) of making
the archive accessible to the target Oracle Solaris 11 system.
Agenda
Oracle Solaris 11 supports non-global zone installation by using the Automated Installer (AI).
Non-global zones are installed and configured on the first reboot after the global zone is
installed. When a system is installed by using AI, non-global zones can be installed on that
system by using the configuration element in the AI manifest.
When the system first boots after the global zone installation, the zone’s self-assembly SMF
service (svc:/system/zones-install:default) configures and installs each non-
global zone defined in the global zone AI manifest.
…
</software>
<configuration type="zone" name="zone5" source="http://s11-
ss.mydomain.com/zone_configs/zone5.cfg"/>
</ai_instance>
This example shows an excerpt from an AI manifest file. The configuration element is
highlighted. You use the configuration element in the AI manifest for the client system to
specify non-global zones. Use the name attribute of the configuration element to specify
the name of the zone. Use the source attribute to specify the location of the configuration file
for the zone. The zone configuration file must be in zonecfg export format. AI copies this
configuration file onto the installed client system to be used to configure the zone. The source
location can be any http:// or file:// location that the client can access during installation.
The following files are used to configure and install non-global zones:
• Zone configuration file: The zone configuration file is the zone's configuration in file form
from the output of the zonecfg export command. The location of the zone configuration
file is specified by the source attribute of the configuration element in the AI manifest. AI
copies this zone configuration file onto the installed client system to be used to configure the
zone.
• AI manifest (optional): This AI manifest for zone installation specifies packages to be
installed in the zone, along with publisher information and certificate and key files as
necessary. To provide a custom AI manifest for a zone, you add the manifest to the install
service that is installing the global zone. In the create-manifest command, specify the
zonename criteria keyword with the names of all zones that should use this AI manifest. If
you do not provide a custom AI manifest for a non-global zone, the default AI manifest for
zones is used.
• SC profile (optional): You can provide zero or more configuration files for a non-global zone.
These SC profiles are similar to the SC profiles for configuring the global zone. You might
want to provide SC profile files to specify zone configuration such as users and the root
password for the zone administrator. To provide SC profile files for a zone, add the
configuration profiles to the install service that is installing the global zone. In the create-
profile command, specify the zonename criteria keyword with the names of all zones
that should use this SC profile.
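As a hedged sketch, a zone manifest and SC profile might be bound to the global zone's install service with the zonename criteria keyword like this (the service name and file paths are hypothetical):

```shell
# Associate a custom AI manifest with the zone named zone5.
installadm create-manifest -n custom_ai \
    -f /var/tmp/manifests/zone5_manifest.xml \
    -c zonename="zone5"

# Associate an SC profile with the same zone.
installadm create-profile -n custom_ai \
    -f /var/tmp/profiles/zone5_profile.xml \
    -c zonename="zone5"
```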
Profile Criteria
------- --------
client4_profile mac = 08:00:27:85:C7:D9
zone5_profile zonename = zone5
This slide shows an example of adding a non-global zone manifest and a profile to an existing
AI service named custom_ai.
Agenda
With Oracle Solaris 11, you can delegate common zone administration tasks for specific
zones to different administrators by using Role-Based Access Control (RBAC). With
delegated administration, for each zone, a user or set of users may be identified with the
permissions to log in, manage, or clone that zone. These specific authorizations associated
with the auth property are interpreted by the appropriate commands running in the global
zone to allow access at the correct authorization level to the correct user.
The admin zone property defines the username and the authorizations for that user for a
given zone (as shown in the example in the slide).
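Setting the admin property might look like the following zonecfg session sketch (the zone and user names are hypothetical):

```shell
# Delegate login and management of zone5 to user jsmith.
zonecfg -z zone5
# zonecfg:zone5> add admin
# zonecfg:zone5:admin> set user=jsmith
# zonecfg:zone5:admin> set auths=login,manage
# zonecfg:zone5:admin> end
# zonecfg:zone5> exit
```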
The zonestat utility reports on the CPU, memory, and resource control utilization of the
currently running zones. Each zone’s utilization is reported as a percentage of both system
resources and the zone’s configured limits.
The zonestat utility prints a series of reports at specified intervals. It can print one or more
summary reports. When run from within a zone, only processor sets visible to that zone are
reported. The zone output will include all of the memory resources and the limits resource.
The zonestat service in the global zone must be online to use the zonestat service in the
zone. The zonestat service in each zone reads system configuration and utilization data
from the zonestat service in the global zone. The zonestatd system daemon is started
during system boot. The daemon monitors the utilization of system resources by zones as
well as zone and system configuration information, such as psrset processor sets, pool
processor sets, and resource control settings. There are no configurable components.
In the slide you see a zonestat utility report on zone memory consumption. This example
shows a summary of utilization every five seconds.
In the slide you see a zonestat utility report on zone CPU (processor sets) consumption.
This example shows a report on the default processor set (pset) once a second for one
minute.
You can use the zonestat utility to report total and high zone resource utilization. In this
example, the zonestat utility silently monitors at 10-second intervals for one minute, and
then produces a report on the total and high utilizations.
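The three reports described on these slides might be produced with invocations like the following sketch (interval and resource arguments shown are illustrative; see the zonestat man page for the exact forms on your release):

```shell
# Summary of memory utilization every 5 seconds.
zonestat -r memory 5

# Processor-set utilization once a second for one minute.
zonestat -r psets 1 1m

# Monitor silently at 10-second intervals for one minute, then print a
# report of total and high utilization.
zonestat -q -R total,high 10s 1m
```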
Quiz
Answer: a
Quiz
Answer: b
Summary
In this lesson, you were presented with the new Oracle Solaris 11 zones features. You were
also shown the tasks involved in migrating Oracle Solaris 10 zones to Oracle Solaris 11. You
learned that non-global zones can be installed by using the AI service. Finally, you learned
how to monitor zone resource consumption and delegate zone administration.
Practice Environment
Recall from the lessons titled “Managing Software Packages in Oracle Solaris 11” and
“Installing the Oracle Solaris 11 Operating System” that your practice environment is based
on the Oracle VM VirtualBox virtualization software.
The following four virtual machines (VMs) play an important role in this lesson’s practice:
• Sol11X-SuperServer: This VM provides network services such as DNS and NFS used
by the VMs in the practice.
• Sol11X-Server1: This is the IPS server used to install the SUNWs10brand package.
• Sol10-Server1: This is the source system for the zone migration practice.
• Sol11X-Desktop: This is the target system for the zone migration practice.
Oracle Solaris 11 Network Enhancements
Objectives
This lesson introduces you to the new Oracle Solaris 11 network features and enhancements.
You will learn how to set up and manage NWAM, configure IPMP, configure a virtual network,
configure a network bridge, and configure network link aggregation.
Agenda
The networking stack has been redesigned to unify, simplify, and enhance the observability
and interoperability of network interfaces and features. A new GLDv3 network driver
framework has been added to provide support for Virtual LANs (VLANs), bridging, and link
aggregation. The GLDv3 framework also provides the ability to support MAC layers other than
Ethernet.
Here are the key network enhancements:
• Network management and observability: Oracle Solaris 11 adds a variety of robust
new network utilities. For network management, the ipadm utility command provides a
set of subcommands that can be used to manage interfaces (interface creation and
deletion, modifying interface properties, and displaying interface configuration), manage
addresses (address creation and deletion, modifying address properties, and displaying
address configuration), and manage TCP/IP protocol properties (modifying and
displaying them). The ipadm command replaces the traditional ifconfig command.
The dladm command has been enhanced to manage new network devices such as
virtual NICs and bridges. For network observability, the new wireshark and dlstat
utilities have been added. Wireshark is a powerful network protocol analyzer that
allows you to capture and interactively browse the traffic running on a computer network.
By using dlstat, you can generate reports containing runtime statistics about the
network data links.
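As a minimal sketch of the new workflow (the interface name net0 and the address shown are illustrative), ipadm replaces the traditional ifconfig commands and dlstat reports link statistics:
# ipadm create-ip net0
# ipadm create-addr -T static -a 192.168.0.10/24 net0/v4
# ipadm show-addr
# dlstat show-link net0
Unlike ifconfig, an address created this way persists across reboots.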
Agenda
At all times, one NCP and one Location profile must be active on the system. During a system
boot, the profile daemon (nwamd) performs the first set of steps presented in the slide.
When an event triggers a change in the network configuration, the NWAM daemon (nwamd)
functions in various roles and performs the operations presented in the second set of steps
presented in the slide.
The following are some of the event triggers:
• Connecting or disconnecting an Ethernet cable
• Connecting or disconnecting a WLAN card
• Booting a system when a wired interface, a wireless interface, or both are available
• Resuming from suspend when a wired interface, a wireless interface, or both are
available (if supported)
• Acquiring or losing a DHCP lease
Consider the following when using NWAM with other Oracle Solaris technologies:
• IP Multipathing (IPMP): Before configuring your network by using IPMP, you must
disable the network/physical:nwam SMF service.
• Oracle VM Server for SPARC and VirtualBox: NWAM is supported in both Oracle
Solaris hosts and guests. NWAM manages only the interfaces that belong to the
specified virtual machines and does not interfere with other virtual machines.
• Solaris zones: NWAM works in global zones or in an exclusive stack non-global zone.
NWAM does not work in a shared stack non-global zone.
• Virtual networks: NWAM currently does not manage VNICs and etherstubs.
• Bridging: NWAM implementation does not actively support network configurations that
use the bridging technology. You do not need to disable the
network/physical:nwam service before using this technology on your system.
netcfg Subcommands
• create: Create an in-memory profile of a specific type.
• end: End the current profile specification, and pop up to the next higher scope.
• exit: Exit the netcfg session. The current profile is verified and committed before ending.
• destroy: Remove the specified profile from memory and persistent storage.
The netcfg command is used to create and modify NWAM profiles. Using the netcfg
command, you can perform the following tasks:
• Create or destroy a user-defined profile.
• Open an existing profile for viewing and/or editing.
• List all of the profiles that exist on a system and their property values.
• List all of the property values and resources for a specified profile.
• Display each property that is associated with a profile.
• Set or modify one or all of the properties of a specified profile.
• Export the current configuration for a user-defined profile to standard output or a file.
• Delete any changes that were made to a profile and revert to the previous configuration
for that profile.
• Verify that a profile has a valid configuration.
This slide shows the netcfg subcommands.
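For example, a netcfg session that creates a user-defined location might look like the following (a sketch; the profile name classroom is illustrative):
# netcfg
netcfg> create loc classroom
netcfg:loc:classroom> set activation-mode=manual
netcfg:loc:classroom> end
netcfg> exit
Exiting the session verifies and commits the new profile.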
netadm Subcommands
• enable: Enable the specified profile. If the profile name is not unique, the profile type must be specified to identify the profile to be enabled.
• list: List all available profiles and their current state. If a specific profile is specified by name, list only the current state of that profile.
• show-events: Listen for the stream of events from the NWAM daemon and display them.
• select-wifi: Select a wireless network to connect to from scan results on link linkname. Prompts for selection, Wi-Fi key, and so forth, if necessary.
• help: Display a usage message with short descriptions for each subcommand.
The netadm command is used to administer NWAM profiles and interact with the NWAM
daemon.
The subcommands supported by the netadm command are shown in this slide.
Configuring NWAM
• Enable NWAM.
# svcadm disable network/physical:default
# svcadm enable network/physical:nwam
• View current NWAM NCPs, NCUs, and locations.
• Enable the NWAM profile: Once you have created the NWAM profiles, you use
netadm to enable locations and Network Configuration Profiles (NCPs).
Example:
- To enable the classroom location, use:
# netadm enable -p loc classroom
- To enable the oracle_profile ncp, use:
# netadm enable -p ncp oracle_profile
Agenda
• set-prop, reset-prop, show-prop: set-prop sets a protocol property to a specific value; reset-prop resets a protocol property to its default value; show-prop displays the current value of a protocol property.
• set-addrprop, reset-addrprop, show-addrprop: set-addrprop modifies the value of a property on an address object; reset-addrprop resets an address property to its default value; show-addrprop displays the current value of an address property.
Advances in Oracle Solaris have surpassed the capabilities of traditional tools to efficiently
administer various aspects of network configuration. The ifconfig command, for example,
has been the customary tool to configure network interfaces. However, this command does
not implement persistent configuration settings. Over time, ifconfig has undergone
enhancements for added capabilities in network administration. However, as a consequence,
the command has become complex and confusing to use. Another issue with interface
configuration and administration is the absence of simple tools to administer TCP/IP Internet
protocol properties or tunables. The ndd command has been the prescribed customization
tool for this purpose. However, like the ifconfig command, ndd does not implement
persistent configuration settings. Previously, persistent settings could be simulated for a
network scenario by editing the boot scripts. With the introduction of the Service Management
Facility (SMF), using such workarounds can become risky because of the complexities of
managing SMF dependencies, particularly in the light of upgrades to the Oracle Solaris
installation.
The ipadm command has been introduced to eventually replace the ifconfig command for
interface configuration. The command also replaces the ndd command to configure protocol
properties. As a tool for configuring interfaces, the ipadm command offers the following
advantages:
• It manages IP interfaces and IP addresses more efficiently by being the tool uniquely
designed for IP interface administration, unlike the ifconfig command that is used for
purposes other than interface configuration.
• It provides an option to implement persistent interface and address configuration
settings.
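For instance, where an ndd setting was lost at reboot, ipadm can change a TCP property persistently (a minimal sketch; the property value shown is illustrative):
# ipadm show-prop -p max_buf tcp
# ipadm set-prop -p max_buf=2097152 tcp
The new value survives reboots; adding the -t option to set-prop makes the change temporary instead.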
As a tool to set protocol properties, the ipadm command provides the following benefits:
dladm Enhancements
• show-phys: Show the physical device and attributes of all physical links.
• create-secobj, delete-secobj, show-secobj: Create, delete, and show a secure object in the specified class to be used as a WEP or WPA key in connecting to an encrypted network.
• create-vnic, delete-vnic, show-vnic: Create, delete, and show a VNIC over the specified link.
• create-etherstub, delete-etherstub, show-etherstub: Create, delete, and show a virtual switch between the VNICs.
• show-ib: Display InfiniBand (IB) link information.
The dladm command is used to configure data links. This slide shows the new capabilities of the dladm utility.
Agenda
[Diagram: two VNICs configured over Physical Link 1 on an Oracle Solaris 11 system]
Network virtualization is the process of combining hardware network resources and software
network resources into a single administrative unit. The goal of network virtualization is to
provide systems and users with efficient, controlled, and secure sharing of the networking
resources. The end product of network virtualization is the virtual network.
Virtual networks are classified into two broad types: external and internal. External virtual
networks consist of several local networks that are administered by software as a single
entity. The building blocks of classic external virtual networks are switch hardware and VLAN
software technology.
Today’s IT organizations face the costly management of server sprawl (shown on the left in
the slide diagram). This includes the hardware, maintenance, and personnel resources
needed to manage, operate, and administer those servers on a daily basis. Oracle’s network
virtualization solution allows enterprises to enable workload isolation and granular resource
control for all of the system’s computing and I/O resources. Using virtual infrastructure (shown
on the right in the slide diagram) to consolidate physical systems in the data center,
enterprises can experience the following:
• Lower total cost of ownership of servers
• Higher server utilization
• Increased operational efficiency
Components:
• Solaris zone: The combination of system resource controls and the boundary separation provided by zones.
• Virtual NIC (VNIC): A virtual network device with the same data link functionality as a physical interface.
This table shows the key components that make up a virtual network.
• Solaris zone: A Solaris zone is the combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance. The Solaris zone is the basic server building block of a virtual network.
• Virtual NIC (VNIC): A VNIC is a virtual network device with the same data link functionality as a physical interface. You configure VNICs on top of a physical interface or
etherstub. You configure VNICs as you configure any physical port, using the same
commands with the same syntax.
• Virtual switch: The virtual switch provides the same connectivity between VNICs on a
virtual network that switch hardware provides for the systems connected to a switch’s
ports. Each VNIC is implicitly connected to a virtual switch that corresponds to the
physical interface. You create VNICs on top of a physical NIC or an etherstub.
[Diagram: Global zone with Zone 1 (vnic1) and Zone 2 (vnic2) configured over physical interface net0, connected to the network]
This slide shows a simple virtual network with two Solaris zones. Whenever you create two or
more VNICs on the same physical port, a virtual switch will be created at the MAC layer. The
effect of creating the virtual switch is that traffic between Zone 1 and Zone 2 is switched
at the MAC layer; it does not need to leave through the physical NIC (net0) to be switched by
an external piece of hardware. As long as the VNICs share the same physical NIC and are
on the same VLAN, this MAC-layer virtual switch is used.
This slide shows you how to create two VNICs on the physical interface.
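A minimal sketch of those commands (the VNIC and interface names are illustrative):
# dladm create-vnic -l net0 vnic1
# dladm create-vnic -l net0 vnic2
# dladm show-vnic
Each VNIC can then be assigned to an exclusive-IP zone or configured with ipadm like any physical port.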
[Diagram: Global zone with Zone 3 (vnic1) and Zone 4 (vnic2) over etherstub stub0, with vnic0 and physical interface net0 connecting to the 192.168.0 network]
This slide shows a simple isolated private virtual network with two Solaris zones. This virtual
network consists of the following:
• GLDv3 network interface net0: This interface connects the global zone to the public network.
• Etherstub stub0: You use etherstubs to isolate the virtual network from the rest of the
virtual networks in the system as well as the external network to which the system is
connected. You cannot use an etherstub just by itself. Instead, you use VNICs with an
etherstub to create the private or isolated virtual networks. You can create as many
etherstubs as you require. You can also create as many VNICs over each etherstub as
required.
• Three VNICs: vnic0 is created over etherstub stub0. This interface can be configured
in the global zone to provide a route between the private virtual network (192.168.1.0)
and the public network. Technologies, such as IP forwarding, IP filtering, and Network
Address Translation (NAT), can be used to customize the relationship between the
private and public networks. VNICs vnic1 and vnic2 are also created over etherstub
stub0 and are used to attach the non-global zones to stub0.
• Two exclusive IP zones: The two exclusive IP zones each have a VNIC assigned.
vnic1 is assigned to Zone 3, and vnic2 is assigned to Zone 4.
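The private network described above can be sketched with the following commands (names match the diagram):
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0
# dladm create-vnic -l stub0 vnic1
# dladm create-vnic -l stub0 vnic2
vnic0 is then configured in the global zone with ipadm, while vnic1 and vnic2 are assigned to Zone 3 and Zone 4.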
This slide shows useful commands for accessing your virtual network configuration. The first
command (dladm show-link) shows you how to list all the links configured in your system.
This includes VNICs and etherstubs. The next command (dladm show-vnic) shows you
how to list the VNIC links. The last command (dladm show-etherstub) shows you how to
list the etherstubs.
Bandwidth Management
Bandwidth management enables you to assign a portion of the available bandwidth of an NIC
to a consumer, such as an application or customer. You can control bandwidth on a per-
application, per-port, per-protocol, and per-address basis. Bandwidth management ensures
efficient use of the large amount of bandwidth available from the new GLDv3 network
interfaces. Resource control features enable you to implement a series of controls on an
interface's available bandwidth.
The allocated portion of bandwidth is known as a share. By setting up shares, you can
allocate enough bandwidth for applications that cannot function properly without a certain
amount of bandwidth. For example, streaming media and Voice-over IP consume a great deal
of bandwidth. You can use the resource control features to guarantee that these two
applications have enough bandwidth to successfully run. You can also set a limit on the
share. The limit is the maximum allocation of bandwidth that the share can consume. Using
limits, you can contain noncritical services from taking away bandwidth from critical services.
You can prioritize among the various shares allotted to consumers. You can give highest
priority to critical traffic, such as heartbeat packets for a cluster, and lower priority for less
critical applications.
You can control bandwidth usage through the management of flows (by using the flowadm
command) and link utilization (by using the dladm command).
Managing Bandwidth
[Diagram: Zone 3 (vnic1) and Zone 4 (vnic2) on etherstub stub0 (192.168.1 network); vnic2 is limited to 100 Mb/s with priority set to low; a firewall in the global zone connects through net0 to the 192.168.0 network]
This slide shows you how to restrict flows and lower priority on a VNIC. Flows consist of
network packets that are organized according to an attribute. Flows enable you to further
allocate network resources.
In this example, a flow named http1 is created by using the flowadm command. This user-
designed flow (http1) restricts vnic2 bandwidth to 100 Mbits/s and sets the link priority to
low.
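Such a flow might be created as follows (a sketch; the port-based flow attribute is an assumption, since the slide does not show the flow's attributes):
# flowadm add-flow -l vnic2 -a transport=tcp,local_port=80 http1
# flowadm set-flowprop -p maxbw=100M,priority=low http1
# flowadm show-flow
The maxbw property caps the flow at 100 Mbits/s, and priority lowers its relative scheduling priority.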
Agenda
IP Multipathing (IPMP)
IPMP Configurations
An IPMP configuration typically consists of two or more physical interfaces on the same
system that are attached to the same LAN. These interfaces can belong to an IPMP group in
either of the following configurations:
Active-active configuration: In this configuration, all underlying interfaces are active. An
active interface is an IP interface that is currently available for use by the IPMP group. By
default, an underlying interface becomes active when you configure the interface to become
part of an IPMP group.
Active-standby configuration: In this configuration, at least one interface is administratively
configured as a reserve. The reserve interface is called the standby interface. Although idle,
the standby IP interface is monitored by the multipathing daemon to track the interface's
availability, depending on how the interface is configured. If link-failure notification is
supported by the interface, link-based failure detection is used. If the interface is configured
with a test address, probe-based failure detection is also used. If an active interface fails, the
standby interface is automatically deployed as needed. You can configure as many standby
interfaces as you want for an IPMP group.
This slide shows an IPMP active-active configuration. In this configuration, all underlying
interfaces are active. No underlying interfaces are reserved for replacement in the event of an
active interface failure.
IPMP failure detection can be link-based, probe-based, or both to determine the availability of
a specific underlying IP interface in the group. If IPMP determines that an underlying interface
has failed, that interface is flagged as failed and is no longer usable. The data IP address that
was associated with the failed interface is then redistributed to another functioning interface in
the group. If available, a standby interface is also deployed to maintain the original number of
active interfaces.
This slide shows a two-interface IPMP group ipmp0 with an active-active configuration.
• Two data addresses are assigned to the group: 192.168.10.112 and 192.168.10.113.
• Two underlying interfaces are configured as active interfaces and are assigned flexible
link names: link0_ipmp0 and link1_ipmp0.
Probe-based failure detection is used, and thus the active interfaces are configured with test
addresses, as follows:
• link0_ipmp0: 192.168.0.142
• link1_ipmp0: 192.168.0.143
The Active and Failed areas in the diagram indicate only the status of underlying interfaces.
Here, IPMP determines that an underlying interface link0_ipmp0 has failed. The failed
interface is flagged as Failed and is no longer usable. The data IP address that was
associated with the failed interface is then redistributed to the remaining functioning interface
in the group. The IPMP group has been reduced to one active interface and is thus a single point of failure.
IPMP continues to probe the failed underlying interface (link0_ipmp0) to determine if it has
been repaired. When IPMP determines that an underlying interface has been repaired, it flags
the interface as Active. The data IP address that was associated with the failed interface is
then redistributed to the repaired interface.
IPMP maintains network availability by attempting to preserve the original number of active
and standby interfaces when the group was created.
IPMP failure detection can be link-based, probe-based, or both to determine the availability of
a specific underlying IP interface in the group. If IPMP determines that an underlying interface
has failed, that interface is flagged as Failed and is no longer usable. The data IP address
that was associated with the failed interface is then redistributed to another functioning
interface in the group. If available, a standby interface is also deployed to maintain the original
number of active interfaces.
This slide shows a three-interface IPMP group ipmp0 with an active-standby configuration.
• Two data addresses are assigned to the group: 192.168.10.112 and 192.168.10.113.
• Two underlying interfaces are configured as active interfaces and are assigned flexible
link names: link0_ipmp0 and link1_ipmp0.
• The group has one standby interface, also with a flexible link name: link2_ipmp0.
Probe-based failure detection is used, and thus the active and standby interfaces are
configured with test addresses, as follows:
• link0_ipmp0: 192.168.0.142
• link1_ipmp0: 192.168.0.143
Here, IPMP determines that an underlying interface link0_ipmp0 has failed. The failed
interface is flagged as Failed and is no longer usable. The data IP address that was
associated with the failed interface is then redistributed to another functioning interface in the
group. The available standby interface link2_ipmp0 is moved to an active state to maintain
the original number of active interfaces.
IPMP continues to probe the failed underlying interface (link0_ipmp0) to determine if it has
been repaired. When IPMP determines that an underlying interface has been repaired, it flags
the interface as Active and the standby interface (link2_ipmp0) is moved back to a standby
state. The data IP address that was associated with the failed interface is then redistributed to
the repaired interface.
In the case where the administrator offlines an underlying interface (link1_ipmp0 in the
example in the slide), IPMP flags the interface as Offline and it is no longer usable. The data
IP address that was associated with the failed interface is then redistributed to another
functioning interface in the group. The available standby interface link2_ipmp0 is moved to
an active state to maintain the original number of active interfaces.
This slide shows you the steps to configure an active-active IPMP configuration with flexible
data link names as shown in the diagram in the earlier slide titled “How IPMP Works: Active-
Active.” Here, you rename the data links net0 and net1 to link0_ipmp0 and
link1_ipmp0, respectively. Before these data links can be used by IPMP, you must create
an IP interface for each one.
Now you are ready to create the IPMP group. This involves two steps. You first create the
IPMP group (ipmp0 in this example), and then you add the underlying interfaces
(link0_ipmp0 and link1_ipmp0) to the group. Note that this example shows vanity
naming of the network interfaces. You use vanity naming to label network components. This
helps you clarify complex network topologies.
Next, assign the data IP addresses to the IPMP interface (ipmp0) in the form of IP address
objects (ipmp0/v4add1 and ipmp0/v4add2).
Finally, assign the test IP addresses to each underlying interface in the form of IP address
objects (link0_ipmp0/test and link1_ipmp0/test).
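The steps described above can be sketched as follows (a sketch; the addresses match the example, but the netmask lengths are assumptions):
# dladm rename-link net0 link0_ipmp0
# dladm rename-link net1 link1_ipmp0
# ipadm create-ip link0_ipmp0
# ipadm create-ip link1_ipmp0
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i link0_ipmp0 -i link1_ipmp0 ipmp0
# ipadm create-addr -T static -a 192.168.10.112/24 ipmp0/v4add1
# ipadm create-addr -T static -a 192.168.10.113/24 ipmp0/v4add2
# ipadm create-addr -T static -a 192.168.0.142/24 link0_ipmp0/test
# ipadm create-addr -T static -a 192.168.0.143/24 link1_ipmp0/test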
This slide shows you the steps to configure an active-standby IPMP configuration with flexible
data link names as shown in the diagram in the earlier slide titled “How IPMP Works: Active-Standby.” The steps are similar to those shown on the previous slide.
Here, you rename the data links net0, net1, and net2 to link0_ipmp0, link1_ipmp0,
and link2_ipmp0, respectively. You then create an IP interface for each one.
Now you create the IPMP group. This involves two steps. You first create the IPMP group
(ipmp0 in this example), and then you add the underlying interfaces (link0_ipmp0,
link1_ipmp0, and link2_ipmp0) to the group.
Once the IPMP group is created, you set the standby property in one of the underlying
interfaces (link2_ipmp0 in this example) to on.
Next, assign the data IP addresses to the IPMP interface (ipmp0) in the form of IP address
objects (ipmp0/v4add1 and ipmp0/v4add2).
Finally, assign the test IP addresses to each underlying interface in the form of IP address
objects (link0_ipmp0/test, link1_ipmp0/test, and link2_ipmp0/test).
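The standby-specific step can be sketched as follows (a minimal sketch; the rest of the procedure matches the active-active example):
# ipadm set-ifprop -p standby=on -m ip link2_ipmp0
# ipmpstat -i
ipmpstat -i should then report link2_ipmp0 as a standby interface.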
Monitoring IPMP
root@s11-serv1:~# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
ipmp0 ipmp0 degraded 10.00s link2_ipmp0 link1_ipmp0 [link0_ipmp0]
root@s11-serv1:~# ipmpstat -i
You use the ipmpstat command to monitor IPMP group activity and health.
This slide shows three examples of ipmpstat usage. The examples that you see here are
taken from an IPMP active-standby configuration created by the procedure shown in the
previous slide. Here, one of the underlying interfaces has failed.
The first example (ipmpstat -g) displays information about the IPMP group. The IPMP
group is named ipmp0. It has three underlying interfaces: link0_ipmp0, link1_ipmp0,
and link2_ipmp0. Note that the state of the IPMP group is degraded and the underlying
interface link0_ipmp0 has brackets around it (boxed) indicating that it has failed.
The second example (ipmpstat -i) displays information about the IP interfaces. Here,
link2_ipmp0 is in the Active state and link0_ipmp0 is in the Failed state.
Note the FLAG field. The interface flags are defined as:
• i = Unusable due to being INACTIVE
• s = Masked STANDBY
• m = Nominated to send/receive IPv4 multicast for its IPMP group
• b = Nominated to send/receive IPv4 broadcast for its IPMP group
• M = Nominated to send/receive IPv6 multicast for its IPMP group
• d = Unusable due to being down
• H = Unusable due to being brought OFFLINE by in.mpathd (IPMP daemon) because
of a duplicate hardware address
Monitoring IPMP
This example (ipmpstat -pn) displays information about the IPMP probe. For IPMP
probing to work correctly, the IPMP group must be connected to the local area network and at
least one other host (the probe target) must also be connected to the same network.
Here, interfaces link2_ipmp0 (standby) and link1_ipmp0 are actively probing target
192.168.0.100. Interface link0_ipmp0 probing is failing.
Agenda
Network Bridging
Network bridges are used to connect separate network segments. When connected by a
bridge, the attached network segments communicate as if they were a single network
segment. Bridging is implemented at the data link layer (L2) of the networking stack to
connect subnetworks together.
Using a bridge configuration simplifies the administration of the various nodes in the network
by connecting them to a single network. By connecting these segments through a bridge, all
the nodes share a single broadcast network. Thus, each node can reach the others by using
network protocols such as IP rather than by using routers to forward traffic across network
segments. If you do not use a bridge, you must configure IP routing to permit the forwarding of
IP traffic between nodes.
To forward packets to their destinations, bridges must listen in promiscuous mode on every
link that is attached to the bridge. Listening in promiscuous mode causes bridges to become
vulnerable to the occurrences of forwarding loops, in which packets circle forever at full line
rate. To prevent this, bridging uses the Spanning Tree Protocol (STP) to prevent network
loops that would render the subnetworks unusable. In addition to STP, Oracle Solaris 11
supports Transparent Interconnect of Lots of Links (TRILL) protocol.
Unlike STP and RSTP, TRILL does not shut down physical links to prevent loops. Instead,
TRILL computes the shortest-path information for each TRILL node in the network and uses
that information to forward packets to individual destinations. As a result, TRILL enables the
system to leave all links in use at all times.
This slide shows you how to create, display, and remove a network bridge.
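A sketch of those operations (the bridge name and the attached links are illustrative):
# dladm create-bridge -l net0 -l net1 bridge0
# dladm show-bridge
# dladm remove-bridge -l net0 bridge0
# dladm remove-bridge -l net1 bridge0
# dladm delete-bridge bridge0
A bridge cannot be deleted while links are still attached, so remove-bridge precedes delete-bridge.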
Agenda
Wireshark is a network protocol analyzer. You can use it to capture and interactively browse
the traffic running on a computer network. Because of its rich and powerful feature set, system
administrators, security experts, developers, and educators around the world use it regularly.
It is freely available as open source and is released under the GNU General Public License
version 2.
With Wireshark you can:
• Capture live packet data from a network interface
• Display packets with very detailed protocol information
• Open and save captured packet data
• Import and export packet data from and to many other capture programs
• Filter packets by using many criteria
• Search for packets by using many criteria
• Colorize packet display based on filters
• View various statistics
This slide shows the Wireshark packet analyzer interface.
The dlstat command reports runtime statistics about data links. The output is sorted in the
descending order of link utilization. The slide lists what you can do using dlstat.
dlstat: Examples
root@s11-serv1:~# dlstat
LINK IPKTS RBYTES OPKTS OBYTES
vnic0 222 9.42K 1.50K 118.00K
vnic1 1.10K 82.73K 168 7.15K
vnic2 1.10K 82.73K 168 7.15K
speedway0 8.95K 713.56K 17.69K 20.80M
dlstat: Examples
The show-link subcommand reports network traffic statistics for each network link. In the
output, the ID field indicates whether hardware rings are exclusively assigned (indicated by
hw) or shared (indicated by sw) among clients. rx rings are shared if other clients, such as
VNICs, are configured over the link as well. In the example shown in the slide, sharing is
indicated by the vnic0 sw value in the ID column.
The show-aggr subcommand reports incoming and outgoing network traffic statistics for
aggregated links. The PORT field indicates the devices that make up the link aggregation.
flowstat Examples
root@s11-serv1:~# flowstat -i 1
FLOW IPKTS RBYTES IDROPS OPKTS OBYTES ODROPS
http1 430.45K 910.46M 0 398.22K 44.09M 0
root@s11-serv1:~# flowstat -r
FLOW IPKTS RBYTES IDROPS
flowstat Examples
The first example shows information every second about incoming and outgoing traffic on all
configured flows on the system.
The second example shows receive-side statistics for all flows.
The third example shows transmit-side statistics for all flows.
Summary
In this lesson, you were presented with the new Oracle Solaris 11 network features. You were
also shown the tasks involved in managing NWAM and configuring virtual networks. Finally,
you learned how to configure a network bridge.
Oracle Solaris 11 Storage Enhancements
Objectives
Agenda
A number of important storage features and enhancements have been introduced with the
release of the Oracle Solaris 11 operating system. These features and enhancements
include:
• ZFS default root file system: ZFS is the default root file system for the Oracle Solaris
11 operating system. With a ZFS root pool, you do not have to worry about calculating
slice sizes for /, /var, /export, and so on only to find out you did not create them with
enough space (or with too much). With ZFS, they consume only as much space as they
need. ZFS reduces complexity by eliminating the need for multiple volume management
tools. Another benefit to having a ZFS root pool is that you can mirror your root file
system with very little effort.
• Migrating UFS and ZFS file systems: You can use the ZFS Shadow Migration feature
to migrate data from old UFS and ZFS file systems to new file systems while
simultaneously allowing access and modification of the new file systems during the
migration process.
• Splitting mirrored ZFS storage pools: A mirrored ZFS storage pool can be quickly
cloned as a backup pool.
• ZFS snapshot differences: A very useful feature has been implemented for ZFS in
Oracle Solaris 11, which allows you to list all file changes between two snapshots of a
ZFS file system.
Agenda
• You can use the shadowstat command to monitor a file system migration, which
provides the following data:
- The BYTES XFRD column identifies how many bytes have been transferred to the shadow file system.
- The BYTES LEFT column fluctuates continuously until the migration is almost
complete. ZFS does not identify how much data needs to be migrated at the
beginning of the migration because this process might be too time-consuming.
- Consider using the BYTES XFRD and the ELAPSED TIME information to estimate
the length of the migration process.
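Starting and monitoring a shadow migration can be sketched as follows (the path and dataset names are illustrative):
# zfs create -o shadow=file:///export/home/old rpool/export/home/new
# shadowstat
Setting the shadow property to the URI of the old file system starts the migration; the new file system remains usable while data is copied in the background.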
Agenda
A mirrored ZFS storage pool can be quickly cloned as a backup pool by using the zpool
split command. Currently, this feature cannot be used to split a mirrored root pool.
You use the zpool split command to detach disks from a mirrored ZFS storage pool to
create a new pool with one of the detached disks. The new pool will have identical contents to
the original mirrored ZFS storage pool. By default, a zpool split operation on a mirrored
pool detaches the last disk for the newly created pool. After the split operation, the new pool
must be imported to be accessible.
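The whole sequence can be sketched in two commands. The pool names are illustrative:

```shell
# Split one disk off a mirrored pool (tank) into a new pool (tank-backup).
zpool split tank tank-backup

# The new pool is not imported automatically; import it to make it accessible.
zpool import tank-backup
zpool list
```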
Agenda
In Oracle Solaris 11, you can determine ZFS snapshot differences by using the zfs diff
command. The zfs diff command gives a high-level description of the differences between
a snapshot and a descendent dataset. The descendent can be either a snapshot of the
dataset or the current dataset.
For each file that has undergone a change between the original snapshot and the
descendent, the type of change is described along with the name of the file. In the case of a
rename, both the old and new names are shown. The type of change follows any timestamp
displayed and is described with a single character (as listed in the slide).
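A short sketch of the workflow, with illustrative dataset and snapshot names:

```shell
# Take a snapshot, let the dataset change, then list the differences.
zfs snapshot tank/home@monday
# ... files are created, modified, removed, or renamed ...
zfs diff tank/home@monday tank/home

# Each change is flagged with a single character:
#   M  file or directory modified
#   +  file or directory created
#   -  file or directory removed
#   R  file or directory renamed (old and new names shown)
```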
Agenda
ZFS Deduplication
Deduplication is the process of identifying redundancies within a data set and eliminating
them. Eliminating redundant data can significantly shrink storage requirements and improve
bandwidth efficiency. Because primary storage has become cheaper over time, enterprises
typically store many versions of the same information so that new work can reuse old work.
Some operations, such as backup, store extremely redundant information. Deduplication
lowers storage costs because fewer disks are needed, and shortens backup/recovery times
because there can be far less data to transfer.
In Oracle Solaris 11, ZFS deduplication automatically avoids writing the same data twice on
your drive by detecting duplicate data blocks and keeping track of the multiple places where
the same block is needed. With ZFS deduplication, data can be deduplicated at the level of
files, blocks, or bytes. ZFS deduplication is synchronous. It instantly removes redundant data
during writes, without the need for background deduplication processes.
Here are some applications that typically benefit from ZFS deduplication:
• Backup to disk storage: On systems with many users, backing up user files to disk
storage has a potential for multiple copies of the same data, such as applications,
system files, documents, images, and videos.
• Mail servers: Mail servers are classic examples of data duplication. When a user sends
a mail attachment to a mailing list on the network, the mail server maintains a copy of
the same attachment for each recipient. Only one copy of the attachment is really
necessary.
• File servers: When users collaborate on projects, the chances are good that they will
end up storing many documents multiple times.
To support the deduplication feature, Oracle Solaris 11 adds new properties to ZFS.
ZFS has one new ZFS file system property to support deduplication, dedup. You use the
deduplication (dedup) property to remove redundant data from your ZFS file systems. If a file
system has the dedup property enabled, duplicate data blocks are removed synchronously.
The result is that only unique data is stored and common components are shared between
files. When dedup is enabled, the dedup checksum algorithm overrides the checksum
property. Setting the value to verify is equivalent to specifying sha256 for the checksum
property. If the property is set to verify and two blocks have the same signature, ZFS does a
byte-for-byte comparison with the existing block to ensure that the contents are identical.
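The two settings described above can be applied as follows; the dataset name is illustrative:

```shell
# Enable deduplication (uses sha256 checksums for block matching):
zfs set dedup=on tank/data

# Or additionally request a byte-for-byte comparison of blocks whose
# checksums match, guarding against checksum collisions:
zfs set dedup=verify tank/data

# Confirm the setting:
zfs get dedup tank/data
```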
ZFS has two new ZFS pool properties to support deduplication: dedupratio and
dedupditto. The dedupratio property is a read-only value used as a multiplier that
indicates the deduplication ratio achieved for a ZFS pool. The dedupditto property sets a
deduplication copy threshold. If the reference count for a deduped block goes above this
threshold, another ditto copy of the block is stored automatically.
By telling ZFS to store an additional copy after a specific number of references, you build in
some redundancy just in case the original block gets checksum errors.
In this example, you check the ZFS properties to determine whether deduplication has been
enabled. The properties show that deduplication is currently disabled. Next, you enable
deduplication. You copy the same file to the three different directories in the file system that
has deduplication enabled. Finally, you recheck the ZFS properties and find that the deduped
file system has a deduplication factor of 3.
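The example can be sketched as the following command sequence. The pool, dataset, and file names are illustrative, not the ones from the course environment:

```shell
zfs get dedup rpool/data          # initially reports: dedup  off
zfs set dedup=on rpool/data       # enable deduplication

# Copy the same file into three directories on the deduped file system:
cp /var/tmp/bigfile /rpool/data/dir1/
cp /var/tmp/bigfile /rpool/data/dir2/
cp /var/tmp/bigfile /rpool/data/dir3/

zpool get dedupratio rpool        # reports a ratio of about 3.00x
```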
Agenda
• Benefits:
– The iSCSI protocol runs across existing Ethernet networks.
– Existing Fibre Channel devices can be connected to clients
without the cost of Fibre Channel HBAs.
Benefits of using Solaris iSCSI targets and initiators include the following:
• The iSCSI protocol runs across existing Ethernet networks.
- You can use any supported network interface card (NIC), Ethernet hub, or
Ethernet switch.
- One IP port can handle multiple iSCSI target devices.
- You can use existing infrastructure and management tools for IP networks.
• Existing Fibre Channel devices can be connected to clients without the cost of Fibre
Channel HBAs.
• Systems with dedicated arrays can now export replicated storage with ZFS or UFS file
systems.
• There is no upper limit on the maximum number of configured iSCSI target devices.
• The protocol can be used to connect to Fibre Channel or iSCSI Storage Area Network
(SAN) environments with the appropriate hardware.
Current limitations or restrictions on using the Solaris iSCSI initiator software include the
following:
• Support for iSCSI devices that use the Service Location Protocol (SLP) is not currently
available.
• iSCSI targets cannot be configured as dump devices.
• Transferring large amounts of data over your existing network can adversely affect
performance.
Configuring COMSTAR
• Configure an iSCSI initiator: This task is performed on the initiator client host. This
task involves:
- Enabling initiator service
- Configuring the target device discovery method
- Reconfiguring the /dev namespace to recognize the iSCSI disk
• Access the iSCSI disk: This task is performed on the initiator client host. This task
involves:
- Using the format utility to identify the iSCSI LUN information
- Creating a ZFS file system on the iSCSI LUN
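The initiator-side steps above can be sketched as the following commands. The discovery address is an illustrative example, and the final device name must be taken from what the format utility actually reports:

```shell
# On the initiator client host: enable the service and discover targets.
svcadm enable network/iscsi/initiator
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.0.2.10:3260   # target's IP:port (assumed)

# Rebuild the /dev namespace so the new iSCSI LUN appears:
devfsadm -i iscsi

# Identify the LUN, then create a ZFS pool (and file system) on it:
format
zpool create iscsipool <iscsi-disk>   # substitute the device format reports
```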
Quiz
Answer: b
Quiz
Answer: c
Quiz
Answer: a
Quiz
Answer: d
Quiz
Answer: c
Quiz
Answer: b
Quiz
Answer: c
Summary
Practice 7 Overview:
Oracle Solaris 11 Storage Enhancements
This practice covers the following topics:
• Migrating UFS and ZFS file systems
• Splitting a mirrored ZFS storage pool
Oracle Solaris 11 Security Enhancements
Objectives
Agenda
• Secure by Default
• Root account as a role
• RBAC kernel enhancements
A number of important security features and enhancements have been introduced with the
release of the Oracle Solaris 11 operating system, including the following:
• Secure by Default: Oracle Solaris 11 provides a fully Secure by Default environment.
Oracle Solaris Secure by Default reduces the attack surface of the Oracle Solaris OS by
disabling as many network services as possible while still leaving a useful system. In
this way, the number of exposed network services is dramatically reduced. With
automatic Secure by Default, network services are disabled by default or set to listen for
local system communications only.
• Root account as a role: Oracle Solaris 11 implements a role for root. The root as a role
option was first delivered in Solaris 8 (1998). What is different in Oracle Solaris 11 is that
this option is enabled by default during installation. The advantage of root as a role is
that it ensures that administrative actions done by the root account are attributable to a
real (unique) person. Because you must have at least one user who is authorized to
assume the root role, a standard user account (which can assume that role) is
automatically created during the installation process. If you do not want this feature, you
can revert to Solaris 10 behavior by running the following command:
# rolemod -K type=normal root
• Labeled IPsec: When labeled processes in a multilevel secure operating system, such
as Oracle Solaris Trusted Extensions, communicate across system boundaries, their
network traffic needs to be labeled and protected. Traditionally, this requirement is met
by using a physically separate network infrastructure to ensure that data belonging to
different labeled domains stays in separate physical infrastructures. Labeled IPsec/IKE,
which is new in Oracle Solaris 11, enables customers to reuse the same physical
network infrastructure for labeled communications by transferring labeled data within
separate labeled IPsec security associations, removing the need for a redundant and
expensive physical network infrastructure.
• Trusted Extension enhancements: To enable greater flexibility and security, Trusted
Agenda
User-level providers:
Provider: /usr/lib/security/$ISA/pkcs11_kernel.so
Provider: /usr/lib/security/$ISA/pkcs11_softtoken.so
Provider: /usr/lib/security/$ISA/pkcs11_tpm.so
The cryptoadm list command displays a list of the providers currently installed in the
system. Providers are cryptographic services that consumers use. Because providers plug in
to the framework, they are also called “plugins.” The cryptoadm list command separates
the providers into three categories: user-level providers, kernel software providers, and kernel
hardware providers.
The cryptoadm list metaslots command displays the system-wide configuration for a
metaslot. A metaslot is a single slot that presents a union of the capabilities of other slots that
are loaded in the framework. The metaslot eases the work of dealing with all of the
capabilities of the providers that are available through the framework. When an application
that uses the metaslot requests an operation, the metaslot figures out which actual slot should
perform the operation. Metaslot capabilities are configurable, but configuration is not required.
The metaslot is on by default.
The cryptoadm list -m command displays a list of mechanisms that can be used with the
installed providers or metaslot.
A mechanism is the application of a mode of an algorithm for a particular purpose.
Cryptographic algorithms are established, recursive computational procedures that encrypt or
hash input. Encryption algorithms can be symmetric or asymmetric. Symmetric algorithms use
the same key for encryption and decryption. Asymmetric algorithms, which are used in public-
key cryptography, require two keys. Hashing functions are also algorithms. If a provider is
specified, display the name of the specified provider and the mechanism list that can be used
with that provider. If the metaslot keyword is specified, display the list of mechanisms that can
be used with the metaslot.
The cryptoadm list -p command displays the mechanism policy (that is, which
mechanisms are available and which are not) for the installed providers.
The cryptoadm disable and cryptoadm enable commands allow you to disable or
enable provider mechanisms.
root@s11-serv1:~# digest -l
sha1
md5
sha256
sha384
sha512
Agenda
This slide shows an example of encrypting a ZFS file system within a pool.
In this example, first we generate a keystore file named /myfskey. Then we create a ZFS file
system named mysecretdata with the /myfskey keystore file. The keysource property of
the mysecretdata file system shows that the encryption key source comes from the
/myfskey keystore file.
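The example described above can be sketched as the following commands. The key length shown is an assumption; the file and dataset names follow the slide:

```shell
# Generate a raw key file, then create an encrypted file system that
# reads its key from that file.
pktool genkey keystore=file outkey=/myfskey keytype=aes keylen=128

zfs create -o encryption=on -o keysource=raw,file:///myfskey \
    rpool/mysecretdata

# Confirm where the encryption key comes from:
zfs get keysource rpool/mysecretdata
```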
Agenda
Read-Only Zones
A zone with a read-only zone root is called a read-only zone. An Oracle Solaris read-only
zone preserves the zone's configuration by implementing read-only root file systems for
non-global zones. This zone extends the zone's secure run-time boundary by adding additional
restrictions to the run-time environment. Unless performed as specific maintenance
operations, modifications to system binaries or system configurations are blocked.
The mandatory write access control (MWAC) kernel policy is used to enforce file system write
privilege through a zonecfg file-mac-profile property. Because the global zone is not
subject to the MWAC policy, the global zone can write to a non-global zone's file system for
installation, image updates, and maintenance. The MWAC policy is downloaded when the
zone enters the ready state. The policy is enabled at zone boot. To perform post-install
assembly and configuration, a temporary writable root-file system boot sequence is used.
Modifications to the zone's MWAC configuration only take effect with a zone reboot.
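A minimal sketch of setting the property described above; the zone name is illustrative:

```shell
# Give the zone a read-only root by setting its MWAC profile:
zonecfg -z myzone set file-mac-profile=fixed-configuration

# The change takes effect only at the next zone reboot:
zoneadm -z myzone reboot
```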
Agenda
BART:
• Is a tool that performs a file-level check of the software
contents of a system
• Enables you to determine what file-level changes have occurred on a system
BART is a tool that performs a file-level check of the software contents of a system. BART
allows you to quickly, easily, and reliably gather information about the components of the
software stack that is installed on deployed systems. Using BART can greatly reduce the
costs of administering a network of systems by simplifying time-consuming administrative
tasks.
BART enables you to determine what file-level changes have occurred on a system, relative
to a known baseline. You use BART to create a baseline or control manifest from a fully
installed and configured system. You can then compare this baseline with a snapshot of the
system at a later time, generating a report that lists file-level changes that have occurred on
the system since it was installed.
BART: Example
root@s11-serv1:/var/tmp# vi bartrules
IGNORE all
/export/home/oracle
CHECK all
root@s11-serv1:/var/tmp# bart create -r bartrules > \
bart-`hostname`-`date '+%d%m%Y-%H:%M:%S'`
BART: Example
root@s11-serv1:/var/tmp# vi /export/home/oracle/newfile
This is a test.
root@s11-serv1:/var/tmp# bart create -r bartrules > \
bart-`hostname`-`date '+%d%m%Y-%H:%M:%S'`
root@s11-serv1:/var/tmp# ls bart*
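The example stops at creating the second manifest; the comparison step that follows can be sketched as below. The manifest file names are hypothetical placeholders, so substitute the names that ls bart* actually shows:

```shell
# Compare the baseline manifest with the later one; the report lists
# file-level changes (here, the added newfile under /export/home/oracle).
bart compare bart-baseline bart-latest
```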
Quiz
Answer: b
Quiz
Answer: d
Quiz
Answer: b
Quiz
Answer: a
Summary
Practice 8 Overview:
Oracle Solaris 11 Security Enhancements
This practice covers the following topics:
• Managing encryption keys
• Configuring a ZFS-encrypted pool