create LDOM
============================================================================
how to add cdrom.iso to control domain & LDOM
root@sparc-40:~# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 8G 2.9% 59m
LDOM-2 active -t---- 5002 16 6G 6.3% 1h 5m
LDOM-3 active -n---- 5003 16 6G 0.1% 2h 31m
LDOM-1 inactive ------ 16 6G
root@sparc-40:~# ldm bind LDOM-1
root@sparc-40:~# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 8G 2.7% 1h 1m
LDOM-1 bound ------ 5000 16 6G
LDOM-2 active -t---- 5002 16 6G 6.3% 1h 8m
LDOM-3 active -n---- 5003 16 6G 0.2% 2h 34m
root@sparc-40:~# ldm start LDOM-1
LDom LDOM-1 started
root@sparc-40:~# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 8G 2.7% 1h 2m
LDOM-1 active -t---- 5000 16 6G 2.0% 1s
LDOM-2 active -t---- 5002 16 6G 6.3% 1h 8m
LDOM-3 active -n---- 5003 16 6G 0.2% 2h 34m
root@sparc-40:~#
==========================================================================
root@sparc-40:~# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 8G 59% 6m
LDOM-2 active -n---- 5002 16 6G 0.1% 2h 51m
LDOM-3 active -n---- 5003 16 6G 0.1% 1h 55m
LDOM-5 active -n---- 5000 4 2G 0.3% 33m
LDOM-1 inactive ------ 16 10G
root@sparc-40:~#
root@sparc-40:~# ldm stop LDOM-5
============================================================================
how to enable iscsi on solaris
==========================================================================
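A minimal sketch of enabling the iSCSI initiator and discovering targets (the discovery address 192.168.1.50 is a placeholder; substitute your storage target's address):
# svcadm enable network/iscsi/initiator
# iscsiadm add discovery-address 192.168.1.50:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
After devfsadm, the new LUNs should show up in format(1M).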
http://uadmin.nl/init/solaris-11-cheat-sheet-network-administration/
# ipadm show-if
# ipadm show-addr
NWAM!!!
Create a network configuration profile:
# netcfg create ncp datacenter
# netcfg
netcfg> select ncp datacenter
netcfg:ncp:datacenter> create ncu phys net0
Created ncu 'net0'. Walking properties ...
ip-version (ipv4,ipv6) [ipv4|ipv6]> ipv4
ipv4-addsrc (dhcp) [dhcp|static]> static
ipv4-addr> 192.168.1.27
ipv4-default-route> 192.168.1.1
netcfg:ncp:datacenter:ncu:net0> end
Committed changes
netcfg:ncp:datacenter> exit
----
=========================================================================
http://unixadminschool.com/blog/2015/08/oracle-solaris-11-administration-command-cheat-sheet/
Automated Installer (AI) is the new network based multi-client provisioning system
on Oracle Solaris 11. AI provides hands-free installation of both SPARC and x86
systems by using an installation service that installs systems from software
package repositories on the network.
Create an install service from a downloaded ISO file, specifying x86 based DHCP
client starting at address 192.168.1.210 with a total count of 10 addresses:
# installadm create-service -n s11x86 -i 192.168.1.210 -c 10 -s /path/to/solaris-11-1111-ai-x86.iso
# installadm list
# installadm list -m
Export the default installation manifest associated with the s11x86 service:
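The export could look like this (the manifest name orig_default is an assumption; check installadm list -m for the actual name):
# installadm export -n s11x86 -m orig_default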
List any system configuration profiles associated with the install services:
# installadm list -p
Validate a system configuration profile against the default x86 install service:
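One possible invocation (the service name default-i386 and the profile file ./profile.xml are placeholders):
# installadm validate -n default-i386 -P ./profile.xml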
Associate a system configuration profile with the default x86 install service and
give it the name sc-profile:
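A plausible command for this (the service name default-i386 and profile file ./profile.xml are placeholders):
# installadm create-profile -n default-i386 -f ./profile.xml -p sc-profile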
Apply a criterion that all clients must have 4096 MB of memory or more to the
manifest s11manifest of the s11x86 service:
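This could be done along the following lines (option letters vary between installadm versions; verify with installadm help set-criteria):
# installadm set-criteria -m s11manifest -n s11x86 -c mem="4096-unbounded"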
System Configuration
Common system configuration tasks have changed in Oracle Solaris 11 with the
Service Management Facility (SMF) configuration repository being used to store
configuration data. With the addition of configuration layers, administrators now
have better control and assurance that their configuration changes will be
preserved across system updates.
Configuring nodename:
# sysconfig configure -s
The traditional root account has been changed to a 'root' role on all Oracle
Solaris 11 installations as part of the Role Based Access Control (RBAC) feature
set. This change gives improved auditability across the operating system, and the
ability for administrators to delegate various system tasks to others in a safe
way.
Add a new user and delegate the System Administrator profile to them:
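For example (the username jack and home directory path are placeholders):
# useradd -m -d /export/home/jack -P "System Administrator" jack
# passwd jack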
Boot Environments
Boot Environments are individual bootable instances of the operating system that
take advantage of the Oracle Solaris ZFS filesystem snapshot and clone capability.
During a system update, new boot environments are created so that system software
updates can be applied in a safe environment. Should anything go awry,
administrators can boot back into an older boot environment. Boot environments have
low overhead and can be quickly created giving administrators an ideal best
practice for any system
maintenance work.
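Typical beadm usage for the workflow described above (the BE name solaris-backup is a placeholder):
# beadm create solaris-backup
# beadm list
# beadm activate solaris-backup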
ok boot -L
ok boot -Z rpool/ROOT/solaris-05032012
Update all possible packages to the newest version, including any zones:
# pkg update
# pkg list
Search all packages in the configured repositories for a file called math.h:
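The search itself can be run as follows (-r queries the configured remote repositories):
# pkg search -r math.h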
# pkg publisher
Oracle Solaris ZFS is the default root file system on Oracle Solaris 11. ZFS has
integrated volume management, preserves the highest levels of data integrity and
includes a wide variety of data services such as data deduplication, RAID and data
encryption.
Create a ZFS pool with 1 disk and 1 disk as a separate ZIL (ZFS Intent Log):
Create a ZFS pool with 1 disk and 1 disk as L2ARC (Level 2 storage cache):
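Sketches of both pool layouts (the pool name testpool and disk names c3t2d0/c3t3d0 are placeholders):
# zpool create testpool c3t2d0 log c3t3d0
# zpool create testpool c3t2d0 cache c3t3d0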
Disk Devices
# cfgadm -s "select=type(disk)"
# fdisk -B c3t2d0s0
# prtvtoc /dev/rdsk/c3t0d0s0 | fmthard -s - /dev/rdsk/c3t2d0s0
On x86 systems:
On SPARC systems:
Oracle Solaris Zones provide isolated and secure virtual environments running on a
single operating system instance, ideal for application deployment. When
administrators create a zone, an application execution environment is produced in
which processes are isolated from the rest of the system.
# zonecfg -z testzone
testzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:testzone> create
zonecfg:testzone> set zonepath=/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
# zoneadm list -v
# zoneadm list -c
# zoneadm list -i
Install a zone:
Boot a zone:
Login to a zone:
# zlogin -C testzone
Halt a zone:
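The install/boot/halt steps above map onto zoneadm subcommands:
# zoneadm -z testzone install
# zoneadm -z testzone boot
# zoneadm -z testzone halt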
Monitor a zone for CPU, memory and network utilization every 10 seconds:
# zonestat -z testzone 10
# svcs
# svcs -l system/zones
# svcs -p network/netcfg
Show why services that are enabled are not running, or are preventing other
services from running:
# svcs -xv
Display all properties and values in the SMF configuration repository for the
service network/ssh:
# svcprop network/ssh
# svccfg
svc:> select ssh:default
svc:/network/ssh:default> listprop general/enabled
svc:/network/ssh:default> exit
Set the port number of the application/pkg/server service to 10000:
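One way to do this, refreshing the service afterwards so the change takes effect:
# svccfg -s application/pkg/server setprop pkg/port=10000
# svcadm refresh application/pkg/server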
Configure email notifications for all services that drop from online to maintenance
state:
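This can be done with svccfg setnotify (the mail address is a placeholder):
# svccfg setnotify -g from-online,to-maintenance mailto:sysadmins@mycompany.com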
List all configuration changes that have been made in the SMF configuration
repository to the name-service/switch service:
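Administrative customizations can be listed per service; the -L flag shows the layer each change was made in (check svccfg(1M) for details):
# svccfg -s name-service/switch listcust -L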
Solaris 11 Networking
# dladm show-phys
# netcfg
netcfg> create loc datacenter
Created loc 'datacenter'. Walking properties ...
activation-mode (manual) [manual|conditional-any|conditional-all]> conditional-any
conditions> ip-address is 192.168.1.27
nameservices (dns) [dns|files|nis|ldap]> dns
nameservices-config-file ("/etc/nsswitch.dns")>
dns-nameservice-configsrc (dhcp) [manual|dhcp]> manual
dns-nameservice-domain> datacenter.myhost.org
dns-nameservice-servers> 192.168.1.1
dns-nameservice-search>
dns-nameservice-sortlist>
dns-nameservice-options>
nfsv4-domain>
ipfilter-config-file>
ipfilter-v6-config-file>
ipnat-config-file>
ippool-config-file>
ike-config-file>
ipsecpolicy-config-file>
netcfg:loc:datacenter>
netcfg:loc:datacenter> exit
Committed changes
Activate a network configuration profile:
# netadm enable -p ncp datacenter
Create a virtual network interface over existing physical interface net0 with
address 192.168.0.80:
# dladm create-vnic -l net0 vnic0
# ipadm create-ip vnic0
# ipadm create-addr -T static -a 192.168.0.80 vnic0/v4
Create two virtual network interfaces over a virtual switch (without a physical
network interface):
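A sketch using an etherstub as the virtual switch (the names stub0, vnic1 and vnic2 are placeholders):
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic1
# dladm create-vnic -l stub0 vnic2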
Restrict network traffic to TCP for a local port 443 for network interface net0:
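One possible flow definition for this (the flow name httpsflow is a placeholder):
# flowadm add-flow -l net0 -a transport=tcp,local_port=443 httpsflow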
Configure VLANS:
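For example, a VLAN with ID 2 on net0 (the link name vlan2 and the address are placeholders):
# dladm create-vlan -l net0 -v 2 vlan2
# ipadm create-ip vlan2
# ipadm create-addr -T static -a 192.168.2.80/24 vlan2/v4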
===================================================================================
1. Download the Oracle Solaris 11 repository image from the following site:
http://www.oracle.com/technetwork/server-storage/solaris11/downloads/index.html.
# cd /export/download
# unzip sol-11-xxx-xxx-repo-full-iso-a.zip
# unzip sol-11-xxx-xxx-repo-full-iso-b.zip
# cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso
3. Copy the IPS repository from the ISO image to a local ZFS file system
# lofiadm -a /export/download/sol-11-1111-repo-full.iso
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /export/ips
5. Remove the current publisher URI and add a new URI to test the IPS repository:
# pkg publisher
# pkg set-publisher -G '*' -g http://s11-serv1.mydomain.com/ solaris
# pkg publisher
# pkg search entire
===================================================================================
# vi /a/etc/shadow
remove the password field so that the line looks like:
root::15356::::::
# cd /
# umount /a
# zfs set mountpoint=/ rpool/ROOT/solaris
# zpool export rpool
==================================================================================
Login to node1.
Generate a new SSH key pair. RSA is used here; you can use DSA instead of RSA if
you prefer.
Arena-Node1#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
e4:34:90:01:7e:0a:38:45:fa:bb:4d:ef:0c:57:ce:2a root@node1
Go to the directory where the keys are stored. By default they are stored in
root's home directory.
Arena-Node1#cd /.ssh
Arena-Node1#ls -lrt
total 5
-rw------- 1 root root 887 Jul 29 23:03 id_rsa
-rw-r--r-- 1 root root 220 Jul 29 23:03 id_rsa.pub
Arena-Node1#cat /etc/hosts
"/etc/hosts" [Read only] 6 lines, 88 characters
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.5 node1 loghost
192.168.2.6 node2
Log in to node2 and perform the same steps as on node1.
Arena-Node2#ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (//.ssh/id_rsa):
Created directory '//.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in //.ssh/id_rsa.
Your public key has been saved in //.ssh/id_rsa.pub.
The key fingerprint is:
ad:14:b0:83:75:23:fa:c2:96:b6:1c:1d:85:96:b1:77 root@node2
Arena-Node2#cat /etc/hosts
"/etc/hosts" [Read only] 6 lines, 88 characters
#
# Internet host table
#
::1 localhost
127.0.0.1 localhost
192.168.2.6 node2 loghost
192.168.2.5 node1
Now copy node1's RSA public key to node2 as authorized_keys so that node1 can log
in to node2 without a password. Likewise, copy node2's public key to node1 as
authorized_keys so that node2 can log in to node1 without a password.
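The copy itself can be done over ssh; since root's home directory is / on these nodes, the keys live in /.ssh:
Arena-Node1#cat /.ssh/id_rsa.pub | ssh node2 'cat >> /.ssh/authorized_keys'
Arena-Node2#cat /.ssh/id_rsa.pub | ssh node1 'cat >> /.ssh/authorized_keys'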
Arena-Node1#ssh node2
Last login: Mon Jul 30 00:18:46 2012 from node1
Oracle Corporation SunOS 5.10 Generic Patch January 2005
Arena-Node2#
Arena-Node2#ssh node1
Last login: Mon Jul 30 00:05:53 2012 from 192.168.2.2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
Arena-Node1#
===================================================================================