
© 2012 Hewlett-Packard Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are
set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed
as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
© 2012 Certiport, Inc.
Certiport and the Certiport logo are registered trademarks of Certiport Inc. Certiport shall not be liable for technical or
editorial errors or omissions contained herein.
This is an HP and Certiport copyrighted work that may not be reproduced without the written permission of HP and Certiport.
You may not use these materials to deliver training to any person outside of your organization without the written permission
of HP and Certiport.
No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including
photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher,
except for the inclusion of brief quotations in a review.
This book is designed to provide information about the topics covered on the Designing & Deploying Connected Device
Solutions HP4-A01 certification exam. Every effort has been made to make this book as complete and as accurate as
possible, but no warranty of fitness is implied.
The information is provided on an "as is" basis. The authors, Certiport, Inc., and Hewlett-Packard Company, L.P., shall have
neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information
contained in this book or from the use of the discs or programs that may accompany it.
The opinions expressed in this book belong to the author and are not necessarily those of Hewlett-Packard Company, L.P.,
or Certiport, Inc.
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized.
Hewlett Packard Inc. cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as
affecting the validity of any trademark or service mark.
All other trademarks and registered trademarks are the property of their respective holders.
Designing & Deploying Server & Storage Solutions for Small and Medium Business
Student textbook
Rev. 1.0
Course ID: 00429060

This eBook is licensed to Catalin Dinca,

Designing & Deploying Server & Storage

Solutions for Small and Medium Business
First Edition
Course ID: 00422833


Technical Reviewers


Managing Editor


Designing & Deploying Server & Storage Solutions for Small and Medium Business prepares
candidates to pass the exam and achieve the HP ATA Servers & Storage certification. The
course is designed to teach fundamental computing concepts, common procedures, and
analysis skills that will enable candidates to design and troubleshoot connected device
implementations for customers.
Because pictures help solidify concepts and procedures, most topics include one or more
pictures, including screenshots, product pictures, tables, and illustrations.
In addition to text and figures, each chapter includes the following components to help facilitate learning:
Objectives
Key terms
Tips and tricks
Business scenarios
Summary
Review questions

Objectives

The objectives define key competencies students will learn during the chapter. Some of the
objectives are taken directly from the test objectives. Others are learning objectives that identify
foundational knowledge and other skills that are important when performing IT tasks for a small
or medium business.

Key terms
A big part of succeeding in an IT environment is understanding the vocabulary. Students who
are new to IT often get lost in the flurry of acronyms and unfamiliar terms. To aid understanding,
each key term is shown in bold with an accompanying definition, as shown below:

Tips and tricks

Sometimes a key point deserves special emphasis. It might be a troubleshooting hint, a
common misconception, an issue that is frequently encountered, or an exception to a rule.
Look for the pointing finger for an important fact.

Business scenarios
Because the value of technology is determined by how significant it is to the performance of the
business, students need to be able to analyze business requirements and determine how they
can be met by implementing a specific technology. To help students learn the critical thinking
skills they need to analyze requirements, make recommendations, and troubleshoot problems,
business scenarios are used throughout the course to guide the discussions.
In some cases, these scenarios can be used to support group activities. In other cases, they
simply call out a real-world use for a technology or product. Business scenarios are shown in a
shaded box.
A box like this provides a real-world example or asks you to think about how to apply the
concepts that you just learned to a business scenario

Summary

The summary section lists some of the key points students should have learned. They can be
used to wrap up a lecture or for students to review what they have learned.

Review questions
Review questions are provided at the end of each chapter. They can be assigned as homework
or used in class to test students' understanding of the materials covered.

Each chapter includes assignments. Most chapters have three types of homework:
Multiple choice or matching
Short essay
Research assignment
The multiple choice or matching questions are designed to test the students' knowledge of key
terminology and facts. The short essay questions allow students to compare and contrast
technologies, describe solutions, or explain procedures. The research assignments, which are
shown in the shaded scenario box, ask students to apply their knowledge to a business
scenario. In many cases, these assignments ask students to perform research on the HP


Chapter 1: Why Businesses Need Servers

Servers can help a business become more efficient by allowing its employees to share
information and resources, communicate over email, connect to the web, and access shared
files. Servers help to link employee computers together in an office environment. Certain types
of servers enable employees to access information and resources when they are outside the
office. Understanding the common uses for servers and the differences between servers and
workstations can help you identify what types of servers you should install to best serve your
customers' specific needs.
In this chapter, we will examine the client/server topology. Next, we will summarize the various
types of servers used in a typical small or medium-sized business.

In this chapter, you will learn how to:
Describe various ways employees can share information.
Describe various types of server applications and their functionality.
Describe how a server's role impacts its hardware requirements.
Explain the trade-offs between installing a server on-site and utilizing a cloud service.

Information Sharing
Employees of a company often need to be able to access the same applications and sources of
information, often at the same time. Depending on the business, this information might include
accounting records, customer records, sales brochures, product information, or information
about existing inventory. Today information can be shared between users in a number of ways.
We will look at some of the more common methods for sharing information.

Peer-to-peer network
If a company has only a few computers, employees can share files with each other using a
peer-to-peer network (Figure 1-1). In a peer-to-peer network, each user decides which files to
share with other employees and whom to share them with.

Figure 1-1: Peer-to-peer Network

peer-to-peer network
A network in which users share resources with other users.

The advantage of a peer-to-peer network is that it allows users to share files and printers
without the need for special equipment, such as a dedicated server. However, while this might

be suitable in a very small company where employees are technically savvy and can be trusted
to make decisions about what data should be shared, peer-to-peer networks do have the
following disadvantages:
Company data is distributed across multiple computers.
No data backup or centralized control over file security is maintained.
Resources cannot be shared if a user's computer is not on the network.
Remote users do not have the ability to share data.

Client/Server network
In a client/server network (Figure 1-2), shared resources are located on one or more servers. An
administrator determines which employees can access the resources. The administrator also
configures a backup schedule and performs any necessary tasks to ensure that the shared
resources remain available.

Figure 1-2: Client/Server Network

client/server network
A network in which resources are centrally shared and managed from an administrator-controlled server.

A client/server network has many advantages:

Company data is centralized on a dedicated system.
An administrator can control security, backup, and availability.
A client/server network can be configured to support remote users.
Allowing remote users to access a client/server network entails implementing a VPN or
other remote access solution.
virtual private network (VPN)
A network connection that uses a secure tunnel across a non-secure network.

Sharing in the cloud

More and more businesses are relying on cloud services to enable their employees to share
documents and collaborate on them. When resources are shared in the cloud, they can be
accessed from any device that has an Internet connection (Figure 1-3). This capability makes

the cloud especially valuable for sharing resources with mobile users.
cloud service
A service that is offered to subscribers for free, for a fee, or on a pay-per-usage basis.

Figure 1-3: Sharing in the Cloud

Another advantage of the cloud, particularly for SMBs, is that there is no need for capital
investment. Instead, the business can pay by the month for the cloud storage and services they use.
Despite the advantages of cloud storage, the cloud should not be used for all resource sharing
requirements. The cloud may not be suitable for:
Storing data that is subject to regulatory restrictions, such as healthcare data.
Hosting client/server applications that were not specifically designed for the cloud.
Storing mission-critical data or services.
Although cloud storage is generally secure, you need to realize that data is not
permanently destroyed when you erase a file. If a company's hard drive disposal policy
requires destruction of the hard disk, cloud storage is not a suitable solution for that data.
Scenario: Stay and Sleep
Stay and Sleep started as a single bed-and-breakfast. Over the years, they have expanded to
offer bed-and-breakfast accommodations in 14 different locations. All properties are
managed and reservations are made from the same office. The company has grown to 20
permanent employees, including a CEO, three reservation agents, one general accountant,
three property managers, a receptionist, an acquisitions director, and a public relations
director. They also hire seasonal employees for telemarketing.
Each permanent employee has a desktop computer. The reservation agents and property
managers are each assigned specific properties. The temporary employees take reservations
on paper. The paper reservations are entered into an Excel spreadsheet by the reservation
agent responsible for that property.
The CEO has asked you to help the company streamline its operations. He is concerned that
when a reservation agent is out of the office or on vacation, property bookings are missed. He
also worries that maintenance requests will be delayed on days when the property manager in charge of that property is absent from work.

Compare the three types of information sharing and list the advantages and disadvantages of
each for this customer. Which solution would you recommend and why?

Types of Servers
So far we have discussed servers primarily in the context of sharing resources. However,
servers can play many roles on a network. We will now look at:
File and print servers
Infrastructure servers
Database servers
Messaging servers
Web servers
Terminal servers
Collaboration servers
One important fact to remember is that a single server might be configured to run multiple types
of services.

File and print servers

The most common reason for a small business to decide to purchase a server is for sharing
files and printers among its employees.

File servers
A file server consists of a storage device, a file sharing protocol, and a set of access control
rules that determine who can read and/or modify a set of files.
Common file sharing protocols are described in Table 1-1.
Table 1-1: File Sharing Protocols



Server Message Block (SMB): A secure protocol that shares files with Windows client computers. SMB is also known as Common Internet File System (CIFS).
Network File System (NFS): A secure protocol that shares files with Linux client computers.
File Transfer Protocol (FTP): An industry-standard file transfer protocol. Files are transmitted in clear text.
Secure FTP (SFTP): An industry-standard secure file transfer protocol. Files are transmitted over an encrypted channel.

The primary considerations when you choose file server hardware are the capacity, speed, and
reliability of the storage device. A network-attached storage (NAS) device is a viable alternative
to a general purpose server when the only requirement is to implement file sharing. NAS will be
discussed in more detail later in this course.
network-attached storage (NAS)
A storage array that attaches directly to the network and implements a file sharing protocol.

Print servers
Another common reason a business will need a server is to enable a number of employees to
share printers. A print server stores software and drivers for locally attached or network printers.
A print server can manage jobs for multiple printers (Figure 1-4). When a print job is sent to the
server, the server queues the print job and sends it to the appropriate printer.

Figure 1-4: Print Server

Although small companies can often get by using a desktop computer as a print server, sharing
printers from a server operating system will provide the following advantages:
Centralized control over printer access and print jobs
The capability to pool multiple printers
The capacity to schedule the availability of a printer
printer pool
Two or more identical physical printers that are shared as a single logical printer.

A print server stores in-process print jobs in a spool file. The storage device where the spool file
is located needs sufficient capacity to store the print jobs in the queue.
spool file
A temporary file used to store a print job until it can be sent to the printer.
Spool is sometimes considered an acronym for simultaneous peripheral output on line or simultaneous peripheral operations on line. Spooling allows a file to be queued until it can be handled by another process, such as printing.
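The queue-and-forward behavior described above can be sketched as a toy spooler. This is a minimal sketch, not any real print server's API; the printer and job names are invented:

```python
from collections import deque

class PrintServer:
    """Toy spooler: jobs wait in a per-printer queue until the printer is free."""
    def __init__(self, printers):
        self.queues = {p: deque() for p in printers}

    def submit(self, printer, job):
        # Spooling: the job is stored until the printer can handle it.
        self.queues[printer].append(job)

    def next_job(self, printer):
        # The server sends queued jobs to the printer in arrival order.
        q = self.queues[printer]
        return q.popleft() if q else None

server = PrintServer(["laser1"])
server.submit("laser1", "report.pdf")
server.submit("laser1", "invoice.pdf")
print(server.next_job("laser1"))   # report.pdf -- first in, first out
```

A real spooler also tracks job status and printer availability, but the first-in, first-out queue is the core idea.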

Infrastructure servers

An infrastructure server is a server that works behind the scenes providing services that allow
different parts of a network to operate together harmoniously. These types of servers issue IP
addresses, resolve host names, authenticate users and computers, and perform other
necessary activities.
authenticate
To verify that a person or device is who or what it claims to be.

Dynamic Host Configuration Protocol (DHCP) servers

On a network that uses TCP/IP, each network adapter is assigned at least one IP configuration,
consisting of:
An IP address
A subnet mask
A default gateway
Other configuration information, depending on the network environment
This configuration can be set manually. However, in most networks it is set automatically using
a DHCP server.
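Python's standard `ipaddress` module can illustrate how these pieces fit together; the address values below are invented example settings, not a recommendation:

```python
import ipaddress

# Example values only -- a typical configuration a DHCP server might assign.
iface = ipaddress.ip_interface("192.168.1.50/255.255.255.0")
default_gateway = ipaddress.ip_address("192.168.1.1")

print(iface.ip)        # the IP address: 192.168.1.50
print(iface.netmask)   # the subnet mask: 255.255.255.0
# The subnet mask determines which network the address belongs to.
print(iface.network)   # 192.168.1.0/24
# The default gateway must be on the same subnet to be reachable.
print(default_gateway in iface.network)   # True
```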
A DHCP conversation (Figure 1-5) begins when a device broadcasts a request for an address.

Figure 1-5: DHCP Request

Any DHCP server that has an address available for lease can offer an IP address to the device.
On a small network, the DHCP service is often performed by a wireless access point or
broadband router instead of by a server.
wireless access point
A network device that allows network connections using a wireless radio.
broadband router
A network device that provides connectivity to the Internet over a high-speed connection, such as a cable modem or digital subscriber line (DSL).
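The offer step of the DHCP conversation can be sketched as a toy lease allocator. This is a simplified model under invented addresses and MAC values, not the actual DHCP protocol exchange:

```python
class DhcpServer:
    """Toy model of the offer step of a DHCP conversation."""
    def __init__(self, pool):
        self.free = list(pool)     # addresses available for lease
        self.leases = {}           # MAC address -> leased IP address

    def offer(self, mac):
        # A client that already holds a lease is offered the same address.
        if mac in self.leases:
            return self.leases[mac]
        if not self.free:
            return None            # pool exhausted: no offer can be made
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

dhcp = DhcpServer(["192.168.1.100", "192.168.1.101"])
print(dhcp.offer("aa:bb:cc:dd:ee:01"))   # 192.168.1.100
print(dhcp.offer("aa:bb:cc:dd:ee:02"))   # 192.168.1.101
```

A real DHCP server also tracks lease expiration times and handles the full discover/offer/request/acknowledge exchange.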

Name resolution services

Because it is difficult for most people to remember IP addresses, names are usually assigned to
devices on a network.
name resolution
The process by which a user-friendly name is resolved to the IP address associated with that name.

Two types of names are commonly used:

A fully-qualified domain name

A single-label name
fully-qualified domain name
A name that includes the host name and the domain suffix.

If you have ever typed a uniform resource locator (URL) in the address field of a browser, you
have used a fully-qualified domain name. A fully-qualified domain name has the format hostname.domain.suffix. Some examples include www.hp.com and mail.example.com.
single-label name
A name that identifies the host, but not the domain.

The most common example of a single-label name is a NetBIOS name. NetBIOS names were
used by Windows operating systems prior to Windows 2000, as well as many other older
Windows applications, to identify computers on a network. Because these operating systems
and applications are still in use today, you might find that you need to implement single-label
name resolution to support an application.
NetBIOS name
A name used by legacy Windows operating systems and applications to identify a computer on the network. NetBIOS names
could be no longer than 15 characters.

DNS servers
The most common use of a Domain Name System (DNS) server (Figure 1-6) is to resolve a
fully-qualified domain name to an IP address.

Figure 1-6: DNS Server

However, a DNS server can also be used to:

resolve single-label names by appending domain suffixes.

locate servers that offer a particular function.
resolve an IP address to a fully-qualified domain name.
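The first item, appending domain suffixes to a single-label name, can be sketched with a few lines of string handling. The suffix list and host names below are invented:

```python
def candidate_fqdns(name, search_suffixes):
    """Expand a single-label name into the fully-qualified names to try."""
    if "." in name:
        return [name]          # already fully qualified; use as-is
    return [f"{name}.{suffix}" for suffix in search_suffixes]

print(candidate_fqdns("fileserver", ["corp.example.com", "example.com"]))
# ['fileserver.corp.example.com', 'fileserver.example.com']
print(candidate_fqdns("www.hp.com", ["corp.example.com"]))
# ['www.hp.com']
```

A resolver configured with a suffix search list tries each candidate in order until one resolves.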
Records on a DNS server are organized by zone. Each zone is associated with a specific
domain or subdomain. There are two types of zones:
Forward-lookup zone: stores records used to locate a computer by its fully-qualified domain name or the service it provides.
Reverse-lookup zone: stores records used to resolve an IP address to a fully-qualified domain name.
Some records commonly found on a DNS server are described in Table 1-2.
Table 1-2: Common DNS Records




A (IPv4 host): Identifies any host with an IPv4 address.
AAAA (IPv6 host): Identifies any host with an IPv6 address.
CNAME (Alias): Identifies an alternate name for a host.
MX (Mail exchanger): Identifies a server that can relay messages over SMTP.
NS (Name server): Identifies a name server.
SOA (Start of authority): Identifies the name server that is authoritative for the zone.
SRV (Service locator): Identifies a server that provides a specific service.
Simple Mail Transfer Protocol (SMTP)

A protocol used to send email and to forward email between messaging servers.
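The zone-and-record model can be sketched as a toy lookup table. The names and addresses below are invented, and real DNS resolution involves recursion across many servers, but the record-type lookup and alias-following logic is the same in spirit:

```python
# A toy forward-lookup zone: (name, record type) -> record data.
zone = {
    ("www.example.com", "A"):     "203.0.113.10",
    ("mail.example.com", "A"):    "203.0.113.25",
    ("example.com", "MX"):        "mail.example.com",
    ("web.example.com", "CNAME"): "www.example.com",
}

def resolve(name, rtype="A"):
    # Follow an alias (CNAME) chain until the requested record is found.
    record = zone.get((name, rtype))
    if record is None:
        alias = zone.get((name, "CNAME"))
        if alias is not None:
            return resolve(alias, rtype)
    return record

print(resolve("www.example.com"))     # 203.0.113.10
print(resolve("web.example.com"))     # 203.0.113.10, via the CNAME
print(resolve("example.com", "MX"))   # mail.example.com
```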

WINS server
A Windows Internet Naming Service (WINS) server (Figure 1-7) resolves NetBIOS names to IP
addresses. If a network has only a single subnet, a WINS server is not necessary to resolve
NetBIOS names because they can be resolved by broadcasts.

Figure 1-7: WINS Name Resolution

Directory service
A server that offers directory services stores information about the computers, printers, and
users on a network. This information can be used to authenticate users and to provide lookup
services for applications on the network (Figure 1-8).

Figure 1-8: Directory Service

The standard specification that applications use to communicate with a directory service is
Lightweight Directory Access Protocol (LDAP). Two popular LDAP-compliant directory services are:
Active Directory: a directory service supported on Windows servers
Novell eDirectory: a cross-platform directory service
Both of these directory services will be discussed in more detail in later chapters.

Authentication server
An authentication server (Figure 1-9) is one that verifies that a computer or user is who it claims
to be and grants or denies access to the network based on this verification process.

Figure 1-9: Authentication

Authentication occurs through an exchange of messages over an authentication protocol.

Some common authentication protocols are described in Table 1-3.
Table 1-3: Authentication protocols


NTLM: An authentication protocol used to authenticate users and computers on a legacy (pre-Windows Server 2000) network. NTLM is also used for authentication in a Windows workgroup or in an Active Directory domain that includes legacy (pre-Windows XP) client computers.
Kerberos: An authentication protocol used to authenticate users and computers in an Active Directory domain or Kerberos realm.
RADIUS: Remote Authentication Dial-In User Service (RADIUS) is used for more than simply dial-in access control; it is a protocol used to authenticate users and computers on behalf of multiple network access devices, including wireless, VPN, dial-up, and switch port connections.

Active Directory domain

A collection of objects stored in the same directory database. Active Directory is a feature of Microsoft Windows Server.
Kerberos realm
An administrative domain of users, computers, and services that authenticate against the same Kerberos server.

In an Active Directory network, a domain controller is the server that performs authentication.
domain controller
A specialized server that stores the Active Directory database and authenticates users and computers.

Some authentication servers support single sign-on, which allows users to access multiple
servers using one set of credentials.
single sign-on
Process of allowing access to multiple servers based on a single set of authentication credentials.
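The general idea behind many authentication protocols is a challenge-response exchange: the server issues a random challenge and the client proves it knows a shared secret without sending the secret itself. The sketch below is a generic illustration, not NTLM, Kerberos, or RADIUS specifically, and the secret value is invented:

```python
import hashlib
import hmac
import secrets

# A secret both sides derive from the user's password (illustrative value).
shared_secret = b"derived-from-user-password"

def make_challenge():
    # The server sends the client a random, single-use challenge.
    return secrets.token_bytes(16)

def client_response(challenge, secret):
    # The client proves it knows the secret without transmitting it.
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge, response, secret):
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = client_response(challenge, shared_secret)
print(server_verify(challenge, response, shared_secret))   # True
# A client that does not know the secret fails verification.
print(server_verify(challenge, client_response(challenge, b"wrong"), shared_secret))   # False
```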

Proxy server

A proxy server acts as a gateway to the Internet and can be used to optimize Internet access in
situations where one or more specific web pages are frequently accessed. A proxy server
performs Network Address Translation (NAT), which allows computers on an internal network
to access resources on the Internet.
Network Address Translation (NAT)
Technology that allows multiple hosts configured with different private IP addresses to share a public IP address.
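The address-sharing idea behind NAT can be sketched as a translation table that maps each internal flow to a distinct public source port. All addresses and port numbers below are invented, and real NAT devices also rewrite packets and track connection state:

```python
import itertools

public_ip = "198.51.100.7"            # the one public address everyone shares
next_public_port = itertools.count(40000)
nat_table = {}                        # (private IP, private port) -> public port

def translate_outbound(private_ip, private_port):
    # Each internal flow is mapped to a distinct public source port,
    # so replies can be routed back to the right internal host.
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(next_public_port)
    return public_ip, nat_table[key]

print(translate_outbound("192.168.1.10", 50000))   # ('198.51.100.7', 40000)
print(translate_outbound("192.168.1.11", 50000))   # ('198.51.100.7', 40001)
```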

Figure 1-10: Proxy Server

A proxy server can also be configured to cache web pages so that subsequent access attempts can be served from the proxy server instead of requiring a round-trip to the Internet (Figure 1-10).
cache
To store data locally or in memory the first time that data is accessed to optimize subsequent retrieval.

A proxy server can also be configured to allow users on the public network to access resources
on the internal network. A proxy server configured in this manner is known as a reverse proxy
(Figure 1-11).

Figure 1-11: Reverse Proxy

reverse proxy
Process of forwarding requests for a specific service to an internal server that has a private IP address.

Database servers
A database server runs specialized software known as a relational database management
system (RDBMS). An RDBMS is responsible for storing and retrieving data on behalf of one or
more applications (Figure 1-12).

Figure 1-12: Database Server

A database stores data that conforms to a specific structure, such as the one shown in Figure 1-13.

Figure 1-13: Sample Database Structure

A database consists of multiple tables and the relationships between them. A table in a
database corresponds to an entity, and each column in the table corresponds to an attribute of
that entity. A row in a table is also referred to as a record.
entity
Any person, place, object, event, or idea for which you want to store and process data.
attribute
A characteristic or property of an entity.

Typical uses for databases include:

Inventory control
Employee records and payroll
Customer and sales records

Online Transaction Processing (OLTP) databases

An OLTP database is one that performs a large number of write operations. This might include:
Adding new data
Modifying data
Deleting data
The structure of a database is typically designed to eliminate redundant data so that less disk
space is required and to prevent data modification from introducing inconsistencies in the data.
Examples of OLTP databases include:
Sales database for an e-commerce site
Inventory database for a busy retail store
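The write-heavy pattern can be illustrated with Python's built-in `sqlite3` module. The table and values are invented, and SQLite is a file-based RDBMS rather than a server, but the SQL statements are the same kinds an OLTP application would issue:

```python
import sqlite3

# An in-memory database standing in for a small OLTP reservations database.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE reservations (
                  id INTEGER PRIMARY KEY,
                  guest TEXT NOT NULL,
                  property TEXT NOT NULL)""")

# OLTP workloads are dominated by writes: adding, modifying, deleting.
db.execute("INSERT INTO reservations (guest, property) VALUES (?, ?)",
           ("A. Guest", "Lakeside"))
db.execute("UPDATE reservations SET property = ? WHERE guest = ?",
           ("Hillside", "A. Guest"))
db.commit()

row = db.execute("SELECT property FROM reservations WHERE guest = ?",
                 ("A. Guest",)).fetchone()
print(row[0])   # Hillside
```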

Decision Support System (DSS) databases

A DSS database is a database that is used primarily to perform read operations. Data is read
from a relational database by executing an SQL statement. RDBMSs allow you to create
indexes to optimize data retrieval. This optimization is achieved by changing the order in which
the data is stored on the storage device. A database administrator or developer typically
defines the indexes on a database.
index
A database object that optimizes the retrieval of relational data.
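The effect of an index can be demonstrated with `sqlite3` and its query planner. The table, index name, and data are invented; the exact plan text varies between SQLite versions, but the before/after difference is visible:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, phone TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)",
               [(f"Customer {i}", f"555-{i:04d}") for i in range(1000)])

query = "SELECT phone FROM customers WHERE name = 'Customer 500'"

# Without an index, the lookup must scan the whole table.
plan_before = db.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

db.execute("CREATE INDEX idx_customers_name ON customers(name)")

# With the index, the RDBMS can jump straight to the matching rows.
plan_after = db.execute("EXPLAIN QUERY PLAN " + query).fetchone()[-1]

print(plan_before)   # a full table scan, e.g. "SCAN customers"
print(plan_after)    # a search using idx_customers_name
```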

Sometimes a DSS database will include redundant information to make frequent data retrieval
operations run more efficiently.
Examples of DSS databases include:
Product databases used to show product information on a web page
Customer databases used to look up customer phone numbers
Database servers are typically accessed by client/server applications and web-based applications. Some databases must support access by both client/server applications and web-based applications.
client/server application
An application that runs on a client computer but accesses a database on a centralized server.

Database servers must support a number of concurrent connections, some of which are quite
resource intensive. The hardware requirements of a database server depend on the type and
number of operations it performs, the number of concurrent connections it must handle, and the
characteristics of the data.

Scenario: Stay and Sleep

Think about the process Stay and Sleep currently uses for making reservations mentioned in
the previous scenario. This process could be facilitated by providing a database server that
stores and retrieves reservation information.
The telemarketers and reservation agents could use a client/server or web-based application
that accessed the database server.
Make a list of the questions you would ask to determine the hardware configuration of the
database server.

Messaging servers
A messaging server handles the storage and transmission of communication. The most
common type of messaging server is an email server.

Figure 1-14: Email Server

An email server, such as the one shown in Figure 1-14, handles several different protocols.
SMTP is the protocol used to send email. Various protocols exist for retrieving email, including
IMAP and POP3. Figure 1-15 compares the features of IMAP and POP3.

Figure 1-15: IMAP vs. POP3

Internet Message Access Protocol (IMAP)
A protocol used by a client computer to access email messages stored on a server.
Post Office Protocol (POP3)
A protocol used to download messages from an email server to a client computer.

An email server needs to forward messages to other email servers, store messages, maintain a
directory of email addresses and distribution lists, and allow users to connect to retrieve their
email. Some email servers are integrated with a directory service that stores information about
users and distribution lists.
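The message itself can be sketched with Python's standard `email` module. The addresses below are hypothetical; in practice SMTP would carry this message between servers, and the recipient's client would retrieve it using IMAP or POP3:

```python
from email.message import EmailMessage

# Hypothetical addresses for illustration only.
msg = EmailMessage()
msg["From"] = "reservations@staysleep.example"
msg["To"] = "guest@example.com"
msg["Subject"] = "Reservation confirmation"
msg.set_content("Your reservation for the Lakeside property is confirmed.")

print(msg["Subject"])           # Reservation confirmation
print(msg.get_content_type())   # text/plain
```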

Web servers
When you hear the term web server, you probably think of a server on the Internet. While this is
often the case, a web server is more generally any server that serves HTTP or HTTPS requests
(Figure 1-16). In fact, many companies use internal web servers to allow employees to share
information and to perform critical business functions.
Hypertext Transfer Protocol (HTTP)
A protocol used by web browsers to request and receive a web page.
Hypertext Transfer Protocol Secure (HTTPS)
A version of HTTP that authenticates the web server and encrypts the response.

Figure 1-16: Web Server

The precise nature of the work performed by a web server depends on the type and number of
web applications it hosts, as well as the number of requests it needs to handle. Some websites
consist of HTML pages that only need to be sent to the browser. Other websites require the
server to generate the HTML dynamically by executing code.
Hypertext Markup Language (HTML)
A language used to define how a browser displays a web page.

A web server that hosts dynamic websites will require more processor power and memory than
one that hosts only static HTML.
A dynamic website is one written in a language like ASP, ASP.NET, CGI, PHP, or Perl. Many dynamic websites use a database server to manage transactions and retrieve data.
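Dynamic page generation can be sketched with Python's standard `http.server` module: a toy handler that executes code to produce the HTML for each request. The `/rooms` path and page content are invented, and a production site would use a full web server rather than this demo class:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # A dynamic site: the HTML is generated by code for each request.
        body = f"<html><body>You asked for {self.path}</body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass   # keep the demo quiet

# Port 0 asks the OS for any free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/rooms"
page = urllib.request.urlopen(url).read().decode()
print(page)   # <html><body>You asked for /rooms</body></html>
server.shutdown()
```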
Scenario: Stay and Sleep

While you are discussing the reservation database server with the CEO, he mentions that he
is interested in having a website that would accept online reservations without requiring the
assistance of a reservation agent.
Do you think such a system would use dynamic or static HTML?
How might this requirement change your recommendation about the reservation database?

Terminal servers
A terminal server is a server that allows a user to execute and interact with a virtual desktop and applications that actually run on the server. The server performs all the processing
and sends the user interface across the network to the client computer. Multiple users can
launch isolated sessions with the application, as shown in Figure 1-17. This type of
virtualization is known as presentation virtualization.

Figure 1-17: Terminal Server

presentation virtualization
A virtualization strategy in which all processing occurs on the server and only the user interface is sent to the client.

Terminal servers are often used to:

allow users to run an application that is not supported on the client operating system.
ensure that business applications are centrally managed.
allow users who telecommute or travel to access a company desktop.
provide multiple users with access to applications that are infrequently used or
difficult to configure.
Remember that each concurrent session will use resources on the server. Therefore, you need
to consider how many sessions will be run simultaneously at peak times, as well as their
resource usage characteristics.

Collaboration servers
A collaboration server allows users to work collectively on a document. The collaboration might include:
concurrent updates
document check-out
document versioning

document check-out
A version control process that ensures that only one user is modifying a document at a time. When check-out is enabled, a document must be checked out, modified, and then checked back in.
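The check-out/check-in cycle can be sketched as a toy lock table. The document and user names are invented, and a real collaboration server would also keep version history:

```python
class DocumentLibrary:
    """Toy check-out model: one editor at a time per document."""
    def __init__(self):
        self.checked_out = {}    # document -> user holding the check-out

    def check_out(self, doc, user):
        if doc in self.checked_out:
            return False         # someone else is already editing it
        self.checked_out[doc] = user
        return True

    def check_in(self, doc, user):
        # Only the user who checked the document out may check it back in.
        if self.checked_out.get(doc) == user:
            del self.checked_out[doc]
            return True
        return False

library = DocumentLibrary()
print(library.check_out("budget.xlsx", "alice"))   # True
print(library.check_out("budget.xlsx", "bob"))     # False -- checked out
library.check_in("budget.xlsx", "alice")
print(library.check_out("budget.xlsx", "bob"))     # True
```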

Some collaboration servers also support workflow, which allows a document to be routed to
different users, according to defined business rules. For example, you might define a workflow
that requires a series of signatures before a purchase order is approved.
workflow
A set of rules that define how a document should be processed.

Collaboration servers require reliable, high-capacity storage. Other resource requirements will depend on the server's workload.
Scenario: BCD Train
BCD Train is a company that teaches its customers on-site how to use several medical
devices. Instructors travel to customer sites and need to be able to access a standard
desktop, training presentations, and demo software. Training presentations are frequently updated.
You need to ensure that each course is always taught using the most up-to-date presentations and demo software.
What type of server could you use to support the instructors?

In this chapter, you learned:
Client/server networking provides centralized control over shared resources.
Cloud services are sometimes more cost-effective than implementing a server on-site.
The primary role of a file server is to store and provide access to files.
A print server stores printer drivers, queues print jobs, and sends them to a physical printer.
A DHCP server assigns IP address settings.
DNS and WINS servers provide name resolution.
A proxy server acts as a gateway between the internal network and the Internet.
Directory servers store information about users, computers, groups, and other
devices on the network.
Authentication servers validate a user's or computer's identity.
Database servers store and retrieve structured data.

Web servers service requests sent by browsers.

Messaging servers manage the storage, forwarding, and delivery of email.
Terminal servers can execute multiple virtualized user sessions.
Collaboration servers allow multiple users to share and edit a document.

Review Questions
1. Which information sharing strategy provides the best support for accessing data remotely
and from mobile devices?
2. What type of server can resolve both fully-qualified names and single-label names to IP addresses?
3. Which protocol is used to centrally manage authentication for wired, wireless, VPN, and
dial-up network access?
4. Which type of database should be optimized for reading data?
5. Which file sharing protocol is not secure?
6. Which DNS record identifies a server that can relay messages over SMTP?
7. What type of server generally provides the ability to check out and check in documents?

Fill in the blanks
1. Both an Active Directory domain controller and a RADIUS server perform ________________.
2. ___________________ file sharing provides more control over file access than
________________ file sharing.
3. A(n) ________________ server resolves an FQDN to an IP address.
4. A small business might implement a(n) __________________ server if they have
structured information that needs to be queried by multiple users.
5. Clients can interface with a(n) ______________ server using either POP3 or IMAP.
6. A domain controller must be identified with a ____________ record on a DNS server.
7. A Web server with dynamic content typically requires more _________ and _________
than a web server with only static content.
8. A(n) _________ server can improve web access performance by caching web pages.
9. A WINS server resolves ________________ to __________________.
10. A(n) _______________ executes multiple isolated desktop sessions on a single physical server.

Scenario questions
Scenario 1

A marketing company has six artists, two writers, one salesperson, and a receptionist. They
have Windows 7 client computers and frequently share files using the Public folder. An HP
LaserJet printer is connected to the receptionist's computer. She has shared the printer to the
network. They use an off-site printing service to create high quality prints.
The company stores customer data in an Excel spreadsheet on the receptionist's computer.
The spreadsheet includes billing information for the customers, and the owner is concerned
about security. It is also becoming increasingly difficult for the artists to find customer
information required to do their jobs, such as the customer's logo, preferred color scheme,
and other branding material.
The company is planning to add three more artists, four writers, and three salespeople within
the next year. They also plan to install a high quality printer. They need to ensure that only the
artists can print to the high quality printer. They also want to limit file access to the artist and
writer involved in a specific project.

Which types of servers would you recommend and why?
Scenario 2
A museum wants to allow visitors to browse and purchase the works of its currently featured
artist and those it has featured over the last six months. They want to install several kiosk
computers that can access photographs, descriptions, and prices for each piece of art that is
for sale, as well as biographies of the artists. The information should be displayed in a web browser.
If visitors want to purchase a work of art from one of the kiosks, they should be required to
enter payment and delivery information before completing the purchase. After a work of art is
purchased, it should no longer be available for sale.

Which types of servers would you recommend and why?

Essay question
Scenario: FI-Print
FI-Print specializes in custom designs for invitations. They currently have 20 designers. The
designers print to an HP Color LaserJet CP2025DN Printer that is attached to the network.
The designers complain that they often have to wait a long time for a print job to complete.
The company has decided to purchase two additional printers. Write a short essay that
explains how a print server could be used to address the business requirements.


Chapter 2: What Is a Data Center?

In the last chapter, you learned about some of the important roles servers play in supporting a
business. As you can imagine, building an environment that supports such mission-critical
operations is essential to ensuring that the business keeps operating smoothly.
In this chapter, you will begin to explore how to choose servers that meet a company's
requirements, and you will learn how to create an environment that will support them. We will
start with a look at the different server form factors available and briefly compare the latest two
generations of HP servers. Next, we will discuss the components, apart from servers, that you
will typically find in a data center. We will also examine data center requirements, including
space, power, cooling, and security. We will finish the chapter with a look at how to configure
remote management access to a server.
data center
A facility or room that contains servers and other equipment that store shared data or host server-based services.

In this chapter, you will learn how to:
Compare tower, rack, and blade products.
Describe the common components of data centers.
Describe the fundamentals of power protection.
Describe best practices for cable management.
Describe cooling management technologies and concepts.
Describe physical space layout requirements.
Describe best practices for securing the physical access to the data center.
Identify and describe remote management offerings.
Configure remote management access.

Server Form Factors

A server is a computer specifically designed to perform a large number of operations in the
background. As with desktop computers, servers come in several different form factors. Each
form factor has its own advantages. These form factors are:

form factor
A categorization that describes the dimensions and shape of a computer, device, or component.

Tower servers
A tower server, shown in Figure 2-1, is similar to a very powerful desktop computer. It is a
freestanding unit that includes all the traditional components, such as the power supply, built
into the enclosure. You typically install one or more internal hard disks in a tower server.

Figure 2-1: ML110 G7 Tower Server

The primary benefit of a tower server is that, if you need only one or two servers, buying tower
servers is less expensive than the other options. Tower servers are optimized for internal
(scale-up) expansion. You can add internal storage devices and expansion cards to support
additional requirements.
One disadvantage of tower servers is that they require more floor space than either rack or
blade servers.

scale up
A scalability strategy in which a server's processors, memory, and internal storage are increased to allow it to handle a
higher load.
scalability
The ability to grow to support a higher load.

To give you an overview of the options available, we will look at a few examples of HP tower
servers. For this discussion, we will focus on floor space requirements and expandability. We
will discuss the server internals in more detail in later chapters.
HP tower servers have "ML" in their name. However, not all ML servers are towers.
Some ML servers are available in rack models as well. An ML server is one that is optimized for
internal expansion by adding PCI expansion cards and disk drives inside the chassis.
chassis
The frame on which the system board and other components are mounted. The most common pronunciation for chassis is
"chas-ee" (rhymes with classy).

HP ProLiant ML 100 servers

The ProLiant ML 100 series servers are appropriate for a small business that needs a low-cost
server. Table 2-1 compares the dimensions of the ML110 G7 server and the ML150 G6 server.
Table 2-1: ML100 Series Server Dimensions

                ML110 G7                  ML150 G6
Height          14.4 in (36.7 cm)         16.81 in (42.8 cm)
Width           6.9 in (11.5 cm)          7.87 in (20 cm)
Depth           18.6 in (47.25 cm)        24.13 in (61.3 cm)
Weight range    26.84 lb (12.20 kg) to    47.08 lb (21.40 kg) to
                35.27 lb (16 kg)          66.14 lb (30 kg)

The weight difference depends on the exact system configuration, such as the number of hard
disks, CPUs, and power supplies installed in the particular server.
Notice that the server model name includes either G or Gen and a number. This
indicates the server generation. This text focuses primarily on the G7 generation. The latest
generation is Gen8, which includes a number of upgrades that we will discuss at various points
throughout this course.
Table 2-2 shows the maximum configuration of the ML110 G7 and the ML150 G6 servers.
Table 2-2: ML100 Series Server Expandability

                  ML110 G7                   ML150 G6
Processors        One quad-core Intel Xeon   Two quad-core Intel Xeon
Memory            16 GB                      48 GB
Storage bays
Expansion slots   Two x4 PCIe                Three x8 PCIe
                  One x16 PCIe               One x16 PCIe
                  One x1 PCIe                One PCI 32-bit 3.3 V

large form factor (LFF) hard disk drive

A hard disk drive that fits in a 3.5-inch drive bay.
small form factor (SFF) hard disk drive
A hard disk drive that fits in a 2.5-inch drive bay.
Peripheral Component Interconnect (PCI)
A 32-bit shared bus used to connect expansion cards.
PCI-Express (PCIe)
A serial bus that provides point-to-point communication between devices.

HP ProLiant ML 300 servers

The ProLiant ML 300 series servers are more powerful than the ML 100 series servers. An
ML350e Gen8 tower is shown in Figure 2-2 with its front bezel removed. It has eight SFF hot-plug
hard drives installed in bays at the bottom of the server.
bezel
The rectangular frame that surrounds the front of the computer.

You can identify a hot-plug component by its burgundy-colored tab. A hot-plug
component can be added or removed without having to shut down the computer.

Figure 2-2: ML350e Gen8 Tower

The dimensions for two of the servers are shown in Table 2-3.
Table 2-3: ML300 Series Server Dimensions

                ML350e Gen8               ML370 G6
Height          18.19 in (46.2 cm)        18.52 in (47.04 cm)
Width           8.58 in (21.8 cm)         9.75 in (24.77 cm)
Depth           29.13 in (74 cm)          29.12 in (73.96 cm)
Weight range    57.21 lb (25.95 kg) to    66.14 lb (30 kg) to
                96.67 lb (43.85 kg)       96.34 lb (43.7 kg)

Another advantage of the ML300 series servers is that they are more scalable. The maximum
configuration of these two servers is shown in Table 2-4.
Table 2-4: ML300 Series Server Expandability

                  ML350e Gen8       ML370 G6
Processors        Two eight-core    Two six-core
                  Intel Xeon        Intel Xeon
                  processors        processors
Memory            192 GB            192 GB
Storage bays      18 LFF            14 LFF
                  24 SFF            24 SFF
Expansion slots   Two PCIe3 x16     Two PCIe2 x16
                  One PCIe3 x8      Two PCIe2 x8
                  One PCIe3 x4      Five PCIe2 x4
                  One PCIe2 x4      One PCIe x4
                  One PCIe2 x1


PCI Extended (PCI-X)

A 64-bit shared expansion bus.

HP ProLiant N40L MicroServer

The ProLiant N40L MicroServer is an ultra micro tower. It is an entry-level server that is
appropriate for a small office, particularly if there is limited floor space and fewer than 10 client
computers. It is shown in Figure 2-3 next to a notepad and pencil for size comparison.

Figure 2-3: N40L MicroServer

Its maximum configuration is:

Dual-core AMD Turion II Model Neo N40L processor
4 LFF hot-plug hard disk drives
The ProLiant MicroServer ships with the Microsoft Small Business Server (SBS) Essentials
2011 pre-installed.
SBS Essentials
A Windows server operating system that can support up to 25 users.

Rack servers
Rack servers are designed to be installed in a 19-in (48.26 cm) wide mounting rack (Figure
2-4). This configuration allows for an efficient use of floor space when multiple servers are
required. Racks are measured in rack units (U). A U is 1.75 in (4.445 cm) high. A full-size rack
is 42U, which is approximately 6.125 ft (186 cm) high. A half-size rack is 22U.
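The rack-unit arithmetic above can be checked with a few lines of Python (a sketch; it uses 1U = 1.75 in exactly as stated in the text):

```python
# Convert rack units (U) to physical height. 1U = 1.75 in (4.445 cm).
U_INCHES = 1.75

def rack_height_feet(units):
    return units * U_INCHES / 12  # 12 inches per foot

print(rack_height_feet(42))  # 6.125 -- a full-size rack is ~6.125 ft
print(rack_height_feet(22))  # a half-size rack
```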

Figure 2-4: 42U Rack

A rack unit is sometimes abbreviated as RU instead of U.

Since rack servers come in various sizes, the number you can support in a rack varies
depending on the servers. For example, you could potentially have up to 10 4U servers or up to
8 5U servers in a single rack. You can also mix servers of various sizes in a rack. Another
advantage of rack servers is that they allow you to install rack-mounted external storage
devices that can be shared by multiple servers in a rack.
Rack servers have expansion slots, which are typically used for adding network interface cards
or Fibre Channel host bus adapter (HBA) cards.
Fibre Channel
A network used to connect networked storage.
Host bus adapter (HBA) card
An expansion card used to connect a server to a Fibre Channel storage network.

Rack servers have the additional advantage of allowing you to implement a scale-out solution
using equipment from multiple vendors in the same rack.
scale out
A scalability strategy in which additional servers are deployed to handle an increased load.

We will now look at the specifications for a few rack servers.

HP ProLiant DL100 series

The DL100 series servers are appropriate for a small to medium business that currently has
moderate computing requirements but wants the ability to add more servers without requiring
additional floor space.
A product model with "DL" in the name is a rack-optimized server.
A DL160 Gen8 server that has 8 SFF hot-plug hard drives installed is shown in Figure 2-5.

Figure 2-5: DL160 Gen8 Server

The physical dimensions of two DL100 series servers are shown in Table 2-5.
Table 2-5: DL100 Series Server Physical Dimensions

                DL120 G7                  DL160 Gen8
Width           17.64 in (44.8 cm)        17.1 in (43.46 cm)
Depth           27.56 in (70.0 cm)        29.5 in (69.9 cm)
Weight range    22.84 lb (10.36 kg) to    31 lb (14.1 kg) to
                26.23 lb (11.90 kg)       33 lb (15.2 kg)

These dimensions allow 42 servers to fit in approximately 4 square feet of floor space. On the
other hand, supporting 42 ML100 series servers requires nearly 40 square feet, even when the
servers are packed together as tightly as possible. Plus, with 42 ML100 servers, you would
need to leave space between them for airflow and somehow manage the cables connecting
them to power and the network.
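The floor-space comparison can be verified from the ML110 G7 footprint in Table 2-1 (a sketch; it assumes towers packed edge to edge with no clearance, which the text notes is unrealistic):

```python
# Compare the floor space of 42 tower servers vs. one 42U rack.
ML110_WIDTH_IN = 6.9    # from Table 2-1
ML110_DEPTH_IN = 18.6   # from Table 2-1
SQ_IN_PER_SQ_FT = 144

tower_sq_ft = 42 * ML110_WIDTH_IN * ML110_DEPTH_IN / SQ_IN_PER_SQ_FT
print(round(tower_sq_ft, 1))  # 37.4 -- "nearly 40 square feet"

# A single 42U rack occupies roughly 4 square feet of floor space.
```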
In a rack-mount configuration, the cables are routed securely within the rack, reducing the
clutter and the possibility that a cable will be accidentally unplugged. However, it is important to
note that a fully populated 42U rack is extremely heavy and can produce a lot of heat. If you are
planning to use a fully populated 42U rack, you need to ensure that the data center flooring can
handle the weight and that there is proper airflow within the rack.
The maximum hardware configurations for the two servers are shown in Table 2-6.
Table 2-6: DL100 Series Maximum Configuration

                  DL120 G7                     DL160 Gen8
Processors        One quad-core Intel Xeon,    Two eight-core
                  Pentium, or Celeron          Intel Xeon processors
                  processor
Memory            32 GB                        768 GB
Storage bays
Expansion slots   One x16 PCIe 2               One PCIe 3 x16
                  One x8 PCIe 2                One PCIe x8

As you can see, a primary difference between the tower and rack servers is the number of
expansion slots available.

HP ProLiant DL300 Series

The DL300 series servers are rack-optimized servers with more processing power and memory
than the DL100 series servers. Figure 2-6 below shows an iSeries rack filled with 42 DL360p
Gen8 servers.

Figure 2-6: iSeries Rack Filled with DL360p Gen8 Servers

The physical dimensions of two DL300 series servers are shown in Table 2-7.

Table 2-7: DL300 Series Server Physical Dimensions

                DL320 G6                  DL360p Gen8
Width           17.64 in (44.8 cm)        17.1 in (43.46 cm)
Depth           26.85 in (68.19 cm)       29.5 in (69.9 cm)*
Weight range    25.52 lb (11.6 kg) to     32 lb (14.51 kg) to
                29.95 lb (13.6 kg)        45.6 lb (20.7 kg)

* The depth and maximum weight given are for the configuration with 4 LFF hard disk drives.

The maximum configurations for these models are shown in Table 2-8.
Table 2-8: DL300 Series Server Maximum Configurations

                  DL320 G6                 DL360p Gen8
Processors        One six-core             Two eight-core
                  Intel Xeon processor     Intel Xeon processors
Memory            192 GB                   768 GB
Storage bays
Expansion slots   One x16 PCIe or PCI-X    Two PCIe 3 x16
                  One x4 PCIe

Blade servers
Server blades are small form factor servers designed for modularity and a high-density footprint.
You install server blades in a blade enclosure, which also has room for storage blades and
other shared components (power, cooling and ventilation, networking, and other interconnects),
all controlled by an integrated management system. A wheeled blade enclosure filled with
server blades is shown in Figure 2-7. These same enclosures can also be racked in a 19-inch
rack.
Figure 2-7: Blade Enclosure with BL460c Blades

high-density footprint
A server property that indicates the ability to fit a large number of servers into a small space.
interconnect
A connection between a server and a network or other shared resource.

Blade infrastructures generally require less rack space than rack-optimized servers. Blade
enclosures also use less power per server because they have shared power and cooling.
These power-saving techniques allow blade enclosures to have less heat output and lower
cooling costs.
Some blade infrastructure enclosures can increase server density by up to 60 percent
compared to rack-mounted servers. One of the main advantages of using blade infrastructure is
that it easily allows you to add a new server by inserting it in the enclosure.
One disadvantage of a blade configuration is that the components in a blade enclosure are
tightly integrated and must be provided by the same manufacturer.

BLc3000 enclosure
The BLc3000 enclosure is available in rack or tower form factors. The rack form factor is 6U
and can hold up to 4 full-height blades or 8 half-height blades.

HP BL460c server blades

The BL460c server blades are half-height blades. The physical dimensions of the two types of
BL460 servers are shown in Table 2-9. As you can see, a half-height blade is considerably
smaller than a 1U server. However, keep in mind that a blade can only be used in an
enclosure, and since the rack-mount enclosure is 6U, that is the amount of space required
whether you are using just one half-height blade or eight half-height blades.
A BL in the model name indicates that a server is a blade.
Table 2-9: BL460c Server Physical Dimensions

                BL460c G7                 BL460c Gen8
Height          7.154 in (18.17 cm)       7.11 in (18.07 cm)
Width           2.19 in (5.56 cm)         2.18 in (5.54 cm)
Depth           20.058 in (50.95 cm)      20.37 in (51.76 cm)
Weight range    11.5 lb (5.22 kg) to      10.5 lb (4.75 kg) to
                16.5 lb (7.5 kg)          14.00 lb (6.33 kg)

The maximum configuration for these models is shown in Table 2-10.

Table 2-10: BL460c Server Maximum Configuration

                  BL460c G7               BL460c Gen8
Processors        Two six-core            Two eight-core
                  Intel Xeon processors   Intel Xeon processors
Memory            384 GB                  512 GB
Internal storage
Expansion slots   Two x8 PCIe             Two PCIe 3 x16


Scenario: Stay and Sleep

Stay and Sleep wants to implement a database server immediately. Their three-year growth
plans call for the addition of four other servers. They have a 5ft (1.5m) x 10ft (3m) room that
they want to use as a data center.

Discuss the server form factors available. What are the advantages and disadvantages of each?

Data Center Requirements

Creating a data center requires more than just clearing a space in the office and then plugging
in one or more servers. A data center requires:
Sufficient clean power
Proper ventilation and cooling
Physical security measures
Qualified personnel
Management access
In this section, we will look at each of these requirements.

Power and cooling

Servers require clean, consistent power to operate. When designing a data center, you need to
ensure that more than enough power is available to meet the requirements of your servers.
Businesses are facing increased pressure to limit power consumption, not only to reduce costs
but also to reduce their carbon footprint. Because servers and storage devices are the main
consumers of power in a data center, understanding the technologies available for managing
power consumption is especially important.
carbon footprint
The estimated amount of greenhouse gas emissions caused by a person or an organization.

When designing a data center, you must also recognize that the more power a device requires,
the more heat it generates. When heat becomes excessive, it can damage components, which
can cause service interruptions and downtime. Therefore, a server room must be kept cool and
well ventilated.

Power parameters
To understand a server's power requirements, you first need to be familiar with some basic
power parameters.
Input line voltage

Input line voltage is the power provided by the power outlet. It is measured in VAC.
voltage
The electric potential difference between two points in a path for electrical current. The greater the potential difference, in
other words voltage, the higher the total power will be. The unit of measurement for electrical potential energy is the volt.
volts alternating current (VAC)
VAC refers to a measurement of electrical potential difference that fluctuates between positive and negative potentials at a
regular frequency.

In North America, Central America, and Japan, the most common input line voltage is 100 to

120 VAC. This type of input line voltage is called low-line voltage. In most other parts of the
world, the input line voltage is 200 to 240 VAC. This is known as high-line voltage. High-line
voltage is also sometimes used in North American data centers because it is more efficient and
generates less heat than low-line voltage.
When designing a data center, you must also consider whether the facility distributes AC power
as single-phase or three-phase power. Single-phase power is the type most commonly seen in
homes and some businesses. It uses two hot wires and a neutral ground wire. Three-phase
power is used in many commercial buildings. It uses three hot wires. The current running
through each hot wire is 120 degrees out of phase with the other two wires.
Three-phase high-line power is more efficient for systems that require three kilowatts or more.

Device VA rating
Each device is given a rating that indicates the amount of power, measured in volt-amperes
(VA), that it requires from a facility's AC feed. The VA rating is the number you use when
choosing a PDU or UPS. This parameter is sometimes referred to as apparent power.
volt
A unit used to measure potential energy.
potential energy
The energy associated with a particle, based on its position within a field.
ampere (amp)
A unit used to measure current.
volt-amperes (VA)
The value of potential energy (volts) multiplied by the value for current (amps).
power distribution unit (PDU)
A device installed in a rack or blade enclosure that distributes AC power to the servers and other equipment in the rack.
uninterruptible power supply (UPS)
A device that ensures that power is free from spikes and lows. A UPS supplies power to devices for a short time during a
power failure.

In a perfect electrical path, one volt times one amp equals one watt. However, there is
often resistance in the path, represented by the units (in this case, servers) consuming the
power. Therefore, the derived unit of measure, watts, represents any loss there may be through
the path.

Device input power

The device input power is the amount of power a device converts to work and dissipates as
heat. This type of power is known as real power and is measured in watts. Because all the heat
a device generates needs to be extracted, this measurement helps you determine the cooling
requirements for the data center.

British Thermal Units (BTUs)

The British Thermal Unit (BTU) is the standard for measuring the capacity of cooling systems.

The amount of power (watts) consumed by equipment determines the number of BTUs/hr
required for component cooling, based on this formula:
BTUs/hr = watts x 3.41

For example:
399 watts x 3.41 = 1360 BTUs/hr

Air conditioning equipment is typically rated in terms of tons of cooling, an old
measurement based on the cooling ability of a ton of ice (1 ton of cooling = 12000 BTUs/hr).
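The formula and the tons-of-cooling conversion can be captured in a short Python sketch:

```python
# Cooling requirements from the text:
#   BTUs/hr = watts x 3.41
#   1 ton of cooling = 12000 BTUs/hr

def btus_per_hour(watts):
    return watts * 3.41

def tons_of_cooling(watts):
    return btus_per_hour(watts) / 12000

print(round(btus_per_hour(399), 2))    # 1360.59, the book's example
print(round(tons_of_cooling(399), 2))  # 0.11
```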

Input current
Input current is the amount of current, measured in amperes, that a system draws during normal
operation.
Inrush current
Inrush current is the amount of current a system draws when the power cord is plugged in or a
circuit breaker is switched on. Inrush current is higher than input current and is cumulative
across all devices within a power circuit. You need to account for inrush current when building
a rack.
circuit breaker
An electrical switch that can automatically close a circuit when an overload or short circuit condition is detected. A circuit
breaker can also be switched manually.

Power supplies in HP servers include circuitry that minimizes inrush current. You can also
segment circuits and stagger segment activation to further reduce the effects of inrush current.

Leakage current
Leakage current (typically measured in milliamps) is residual current that originates in power
supply filters and flows from chassis ground to the phase and neutral power conductors.
Leakage current is cumulative across components within a power distribution circuit and can
become a hazard if proper grounding procedures are not used.

Ventilation and cooling

Proper ventilation and a cool environment are essential for keeping your servers running. For
example, HP recommends that servers be operated in an environment between 50°F and
95°F (10°C to 35°C).
HP servers after G6 include an array of sensors designed to monitor critical components, such
as the CPU and RAM. The sensors adjust the fan strength based on how much heat is being
generated by these components. This is important because the more heavily a component is
used, the more heat it generates.
The Gen8 servers include enhanced power and cooling management technologies that are
enabled through the HP 3D Sea of Sensors.

Measuring power requirements

An HP power supply has a marked nameplate on the chassis that shows the following data:
Input requirement: the AC input voltages (or ranges) and associated maximum current
Output power: the DC voltage, maximum current (amperage), and maximum power
The input requirement reflects the input voltage that is required when the power supply is
operating at full power. Most power supplies do not operate at their rated capacity. Therefore,
using these numbers to estimate power requirements can result in excessive power
infrastructure costs. Because of this discrepancy, we will now consider an example so that we
can better estimate power requirements in a real-world setting. Table 2-11 compares the
nameplate ratings and the actual operating needs of a rack that contains 20 ProLiant DL380
G6 servers with 750W power supplies.
Table 2-11: Nameplate Ratings and Actual Operating Needs

                         Nameplate ratings    Actual operating needs
Wattage per power
supply unit              750 W                300 W
AC input current per
power supply unit
(measured at 208 VAC)    4.5 A                1.38 A
Total rack wattage
Total input VA           15-17 kVA            6-7 kVA

The efficiency of a power supply is determined by how much AC input power is needed to
produce a given amount of output power. A power supply that requires 300 watts input to
produce 250 watts of output is operating at approximately 83 percent efficiency (250 / 300 =
.83). The 50-watt energy differential between the input and output wattage is mostly lost as
heat, which must be removed by the cooling equipment.
Power supply efficiency is not linear or flat across the output range, and most power supplies
achieve maximum efficiency when they operate in the mid-to-upper range of their rated
capacity. A 750-watt power supply providing 300 watts (40 percent capacity) is less efficient
than a 460-watt power supply providing the same 300 watts of power (65 percent capacity). In
choosing the most efficient (right-sized) power supply for a server, an accurate power
consumption estimate for that server is important, particularly if a data center has a number of servers.
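The efficiency and capacity figures in the last two paragraphs are easy to recompute (a sketch using only the numbers given in the text):

```python
# Power supply efficiency = output watts / input watts.
# Percent capacity = output watts / rated watts.

def efficiency(output_watts, input_watts):
    return output_watts / input_watts

def percent_capacity(output_watts, rated_watts):
    return output_watts / rated_watts * 100

print(round(efficiency(250, 300), 2))     # 0.83
print(round(percent_capacity(300, 750)))  # 40 -- 750 W supply at 300 W
print(round(percent_capacity(300, 460)))  # 65 -- 460 W supply at 300 W
```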
The most accurate power consumption predictions are those obtained by pre-configuring and
measuring actual systems under load. This method of obtaining data is usually impractical for
customers since it would require purchasing, setting up, configuring, and running of each
component at top capacity to acquire the measurements. HP has tested server products under
various configurations and loads to determine actual power requirements. The results of these
tests were used as the foundation to create the HP Power Advisor.

HP Power Advisor
The HP Power Advisor utility reduces the research and guesswork required to estimate power
requirements for ProLiant-based systems. An IT administrator can use the HP Power Advisor to
build a complete system, component-by-component and rack-by-rack, assembling a complete configuration.
The HP Power Advisor is available at:
We will now quickly summarize the HP Power Advisor's features.
The first time you access the HP Power Advisor, you will be prompted to accept the HP Power
Advisor license agreement (Figure 2-8).

Figure 2-8: HP Power Advisor License Agreement

Click I Accept to continue. You will be prompted for the input voltage, as shown in Figure 2-9.
Select the input voltage used at the data center facility from the drop-down list and click Go.

Figure 2-9: HP Power Advisor Input Voltage

Now you are ready to assemble your configuration. Start by dragging your rack model onto the
canvas, as shown in Figure 2-10.

Figure 2-10: HP Power Advisor Empty Rack

You should notice that the HP Power Advisor shows that this is a 22U (half-size) rack. It also
shows the rack's current weight. The power summary at this point shows all the levels at zero
for the VA rating, BTU HR, Input System Current, and Wattage. These levels are all at zero
because no servers or other components that draw power have been added to the rack.
Click the Servers tab. Drag a DL120 G7 server to the rack. Now the canvas looks like Figure 2-11.

Figure 2-11: HP Power Advisor -- Rack Populated with One Server

Now we will add a second DL120 G7 server, as well as one DL160 G6 server, to the rack. Now
the canvas should look like Figure 2-12.

Figure 2-12: HP Power Advisor with Rack Populated with Three Servers

You also need to define the configuration of each individual server. To do so, select a server
and click Config. A form like the one shown in Figure 2-13 will be displayed.

Figure 2-13: Server Configuration

On this form, you can specify the CPU, memory, and hard drive configuration for the server. If
you scroll down, you can also add expansion cards and power supplies. In addition, you can
specify the percent utilization of the server to indicate its typical load. A sample server
configuration is shown in Figure 2-14.

Figure 2-14: Sample Server Configuration

After you have finished configuring the server, click Save. Now you can see, in Figure 2-15,
that the Power Summary to the right and the Configuration Result below both show values.
These values are specific to the selected server.

Figure 2-15: Individual Server Configuration Values

Configure the other servers and then select the rack to show cumulative values for all the
servers in the rack, as shown in Figure 2-16.

Figure 2-16: Cumulative Values

These values tell you that this particular configuration would require the following:
417.48 VA total apparent power that the power distribution components must transfer
1418.03 total BTU/hr that will need to be removed by the cooling system
4.15 amperes total current drawn by the three servers
415.83 total watts dissipated by the equipment
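These reported values are internally consistent with the BTU formula given earlier in the chapter, which you can confirm with a quick sketch:

```python
# Cross-check the Power Advisor totals against BTUs/hr = watts x 3.41.
watts = 415.83
volt_amperes = 417.48

btu_hr = watts * 3.41
print(round(btu_hr, 2))  # 1417.98, in line with the reported 1418.03

# The ratio of real to apparent power (the power factor) is close to 1:
power_factor = watts / volt_amperes
print(round(power_factor, 3))  # 0.996
```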

HP Power Advisor recommendations

For some configurations, the Recommendations tab provides users with the following
types of recommendations:
General Purpose: configuration with the best balance of performance, cost, and efficiency
Performance: enhanced configuration for maximum performance
Low Cost: economical configuration achieving good performance
High Efficiency: configuration using less power with a possible sacrifice in performance

HP Power Advisor BOM

The BOM tab allows you to generate a simplified bill of materials (BOM) that contains the
products you specified in your configuration.
bill of materials
A list of components or raw materials required to create a product. In the context of servers, the BOM is the list of servers,
racks, and peripherals in the configuration.

HP Power Advisor Power Report

The Power Report tab allows you to enter the cost of electricity per kWh at the data center and
the number of years you anticipate the servers will be in use. It then calculates the cost of power
for the configuration, as well as the total cost of ownership (TCO), including heating and
cooling. A sample report is shown in Figure 2-17.

Figure 2-17: Power Report

total cost of ownership (TCO)
The complete cost of purchasing and maintaining a server, application, or infrastructure over a number of years. In this
example, only power costs are included in the TCO. However, TCO usually takes into account purchase cost, maintenance
costs, and other factors.
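The arithmetic behind such a power-cost report can be sketched as follows. The function name and the cooling multiplier below are illustrative assumptions, not Power Advisor's internal model:

```python
def power_cost(watts, cents_per_kwh, years, cooling_factor=1.5):
    """Estimate the power portion of TCO: energy cost plus cooling overhead.

    cooling_factor is a hypothetical multiplier; 1.5 means cooling adds
    50% on top of the IT load itself. Real facilities vary widely.
    """
    hours = years * 365 * 24
    energy_kwh = watts / 1000 * hours                # kilowatt-hours consumed
    direct_cost = energy_kwh * cents_per_kwh / 100   # dollars for electricity
    return direct_cost * cooling_factor

# Example: a 400 W server at 15 cents/kWh kept for 3 years
print(round(power_cost(400, 15, 3), 2))   # 2365.2
```
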

Protecting the security of a server is another crucial concern. A company's servers perform
essential operations and, in many cases, store sensitive data. This makes it especially critical
to take precautions to prevent unapproved users from accessing the data center. These

precautions might be simply a lock on the data center door or an electronic access card system
for authorized entry. Whichever type of system you choose, you should make sure that keys or
access cards are issued only to personnel who need access to the data center. You should
also train users who have access to the data center to recognize the importance of keeping the
door closed at all times and not allowing other users to follow them inside.
You should also ensure that the server room does not have alternative points of access,
including windows or ceiling panels.
Finally, you need to take precautions against fire, including installing a reliable smoke detector.
HFC-236fa fire extinguishers can be used safely on computer equipment because they do not
leave any residue. The NFPA 75 Standard for the Protection of Information Technology
Equipment requires that all server rooms be equipped with a sprinkler system. Using a pre-action
sprinkler system will help prevent accidental water damage because the pipes in this
system do not fill with water until the alarm is sounded. With a wet sprinkler system, water
resides in the pipes at all times.
pre-action sprinkler system
A sprinkler system that has two response phases when a fire is detected: 1. issue alarm and fill pipes; 2. release water.

Another key consideration when you are creating a data center is ensuring that you have
qualified personnel to support and maintain the data center. Some tasks that will need to be
performed include:
Monitoring server performance
Monitoring power and cooling
Monitoring network performance
Troubleshooting problems
Managing backups
Upgrading hardware
Upgrading operating systems
Installing and updating applications
Managing users, groups, and permissions
Supporting users
Planning for business continuity
If the company operates around the clock, the servers must operate around the clock too, and
that means someone qualified to troubleshoot problems needs to be on call at any hour.
Some SMBs will hire one or more IT personnel to perform these functions. Others will
outsource their IT support. In many cases, routine IT functions will be performed by a small staff,
but more complex issues will be outsourced to experts.
When researching which server is best for a data center, you should consider whether the
support agreements will address your needs. For example, you might consider purchasing an
optimized care package, such as the HP Hardware Support Onsite Service package.

HP Hardware Support Onsite Service

HP Hardware Support Onsite Service provides both remote assistance and onsite support for
any covered hardware, helping you to improve product uptime.
You can choose among multiple predefined service levels. A service level defines the
combination of onsite response time, call-to-repair time, and coverage window.
Service-level options with specified call-to-repair times provide IT managers with support
specialists who will quickly begin troubleshooting the system and help return the hardware to
operating condition within a specified timeframe.
Features of the service include:
Remote problem diagnosis and support
Onsite hardware support
Replacement parts and materials
Service-level options with different coverage windows
Service-level options with different onsite response times
Work to completion
Escalation management
Access to electronic support information and services
HP electronic remote support solution (for eligible products only)
Accidental damage protection (optional; for eligible products only)
Defective media retention (optional; for eligible products only)
Call-to-repair time commitment in lieu of onsite response times for hardware support
(optional; for eligible products only)

Management access
Because most servers require only occasional interactive access, most of the time they should
operate headless (without a monitor). The advantages of this approach are that it reduces:
Power consumption
Equipment costs
The likelihood of accidental or malicious tampering
headless server
A server that is not attached to a monitor.

You have the following options for managing a headless server:

Attach a monitor and input devices
Connect through a console port
Connect through the network
The last two options are covered later in the chapter. We will first look at a special piece of
equipment that allows you to connect multiple servers to the same keyboard and input devices:
a console switch.

Console switch
Many data centers use a console switch, also called a KVM (keyboard, video, mouse) switch, to
manage two or more servers using a single monitor, keyboard, and mouse. A console switch
can be connected to multiple servers. An administrator can toggle to whichever server needs to
be managed.
One such console switch is the HP Server Console switch, shown in Figure 2-18. The HP
Server Console switch is available with either 8 ports or 16 ports. It also has serial, PS/2, USB,
and BladeSystem interface adapters. The PS/2 and USB ports allow you to connect peripheral
devices that can be monitored by the switch, such as printers, scanners, and storage devices.
The 16-port version allows two users to access the servers at the same time.
If you need to be able to manage additional servers or workstations, you can attach an 8-port
expansion to each port. Because these can be tiered two levels deep, you can manage up to
256 servers through a console switch.

Figure 2-18: 8-port HP Console Switch

The HP Server Console Switch is a 1U rack-mount device. It uses CAT5 cabling that can be
routed through the back of the rack to ensure proper airflow and cooling.
The HP Server Console Switch with Virtual Media allows cascading up to three levels. Up to
two local keyboards, mice, and monitors can be connected to the switch. It has a switchable
USB pass-through that allows you to connect USB storage devices for performing software
upgrades or installations when used with Virtual Media Interface Adapters.
Virtual Media Interface Adapter
A connector that has an RJ-45 female connector on one end and both a 15-pin HD-15 male connector and a Type A USB
male connector on the other end.

Another console switch option is the HP IP Console Switch G2 with Virtual Media. This console
switch allows both local management and management across a LAN or WAN. It supports data
encryption and smart card/Common Access Card authentication. It is shown in Figure 2-19 with
a PS/2 Interface connector on the left and a Virtual Media Interface Connector on the right.
data encryption
The act of applying a cryptographic algorithm to data to protect its confidentiality.
smart card authentication
Two-factor authentication using a card that contains a security certificate and a PIN or password.

Common Access Card

A smart card issued by the United States Department of Defense.

Figure 2-19: HP IP Console Switch G2

You can cascade Console Switches beneath IP Console Switches, but you cannot
cascade an IP Console Switch beneath a Console Switch.

Hosted Data Centers

As you can see, a lot goes into creating an on-premises data center. For many SMBs, a more
cost-effective solution is to subscribe to a hosted data center. Hosted data centers offer various
levels of service and generally provide:
Environmentally controlled spaces
24x7 technical staff
When using a hosted data center, a company can subscribe to a service that provides the
processing power and storage currently required. Expanding to a higher level of service or
contracting to a lower one is relatively easy. The major drawbacks of hosted data centers are a
lack of control over the servers and their configuration, and some uncertainty about where your
company's data is stored.
Scenario: Stay and Sleep
What are the environmental considerations for the Stay and Sleep data center?

HP Integrated Lights-Out (iLO)

IT organizations of all sizes today must have the capacity to remotely manage their servers so
that they can meet their business demands for efficiency and responsiveness.
HP remote management gives the administrator a virtual presence. That means that the
administrator has complete control as if he or she were in front of servers in data centers or at
remote sites. HP Integrated Lights-Out is included with all HP servers and provides the
following benefits:
Simplified remote management
Reduced operational costs
Improved IT productivity
Increased system availability
Lights-Out technology is an autonomous management subsystem that resides on a host server.
It consists of an intelligent processor and firmware. Therefore, it can be used to manage the
server at any time, including:
during initial power-on testing
before the operating system is loaded
while the operating system is functional
throughout an operating system failure

Figure 2-20: Server States Managed by iLO

All servers include iLO Standard remote management functionality, which provides basic
system board management functions and diagnostics.
Advanced functionality, referred to as integrated Lights-Out Advanced, can be licensed
separately. iLO Advanced offers sophisticated virtual administration features for full control of
servers in data centers and remote locations.

iLO advantages

HP Lights-Out technology provides these advantages:

Easy setup and use
Enables setup without additional software by using a ROM-based configuration.
Setup can also be done using the browser interface over the network.
Group administration and action
Enables an administrator to easily configure both network and global settings for a
group rather than one at a time.
Integrated Management Log (IML)
Maintains a copy of the host server IML, which can be accessed using a standard
browser, even when the server is not operational.
iLO Event Log or Remote Insight Event Log
Provides a detailed event log that records user actions such as turning server power
on and off, resetting the server, changes in user configurations, clearing the event
log, and successful and unsuccessful login attempts. A supervisor can use this log to
audit user actions.
Headless server deployment
Simplifies cable management by reducing the required cables from five to three per
server. A monitor, keyboard, mouse, and switch box are not needed in every rack.
Auxiliary power
Allows the iLO management processor to obtain its power through a separate
connection to the auxiliary power plane of the server. As long as the server is
connected to a power source, the iLO management processor can power itself up
and remain fully functional even if the server is powered down. If the server provides
redundant power supplies, iLO uses redundant power and continues operation in the
event of a power supply failure.
iLO allows administrators to create a virtual media volume mapped to a folder or file on the
management server that appears as a floppy drive or DVD drive on the server being managed.
Virtual media can be used to do the following:
Run local applications on remote host servers
Apply firmware upgrades to remote servers
Deploy an operating system on remote servers from a Virtual CD network drive
Perform disaster recovery of failed operating systems
Install applications on the remote server from a Virtual CD
G5 and G6 servers were shipped with the iLO 2 management processors. G7 servers include
the iLO 3 management processor. All Gen8 servers include the iLO 4 management processor.

HP iLO Standard
The advanced remote management capabilities for ProLiant servers are based on the iLO
processors embedded in most current ProLiant servers.
HP iLO provides essential remote management capabilities that enable administrators to
diagnose and configure remote servers. Features of the iLO Standard include:

Simple, text-based command line interface to execute tasks securely

Virtual power button to turn the power on or off remotely
Server diagnostics with detailed power-on self-test (POST) tracking, integrated
management log access, and server status
Virtual indicators that enable an administrator to view or change the status of the unit
identification light on the server
Shared network port
Simple Network Management Protocol (SNMP) alerting and alert forwarding through
HP Systems Insight Manager
Simple Network Management Protocol (SNMP)
A TCP/IP protocol used to manage and monitor remote devices.
HP Systems Insight Manager (HP SIM)
A utility that allows you to monitor and manage multiple servers through a single console.

iLO Advanced Pack

The iLO Advanced Pack expands on the features of integrated Lights-Out with the purchase of
a software key. This software key enables a virtual presence with remote graphical console
capabilities and virtual media support. Other features enabled by the iLO Advanced Pack
include:
Directory services support
Terminal services pass through
Power monitoring
Shared Remote Console
Record boot and fault buffers
Dynamic power capping
For more information about the iLO Advanced Pack, see the HP website.

iLO 3 user interface

iLO 3 is accessible through the following user interfaces:
Web-based interface
Scripted interface (XML)
Secure Shell (SSH)
A TCP/IP protocol that allows you to connect to a command-line console on a remote server through an encrypted
connection.

SSH interface
The SSH interface of iLO 3 enables administrators to use the most important iLO 3 features
from a text-based console. The VT connection in Figure 2-21 shows the prompt that is
displayed after you have connected to an SSH session.

Figure 2-21: iLO3 SSH Interface

The screen identifies the name of the user who is logged in; the DNS name and IP address of
the server; the version, build number, and date of iLO installed on the server; the name of the
server; and basic information such as whether the server is powered on or not.
Type a question mark (?) to display information about the SSH commands available. The HP
CLI commands are shown in Figure 2-22.

Figure 2-22: HP CLI Commands

The power command is used to view and change the power status of the server. To learn more
about the power command, type help power and press the Enter key. After you have done this,
iLO displays the power command options as well as information about the permissions
required to run the power command, as shown in Figure 2-23.

Figure 2-23: Power Command Information

You can learn about each of the other commands by typing help and the command name.

Using the iLO 3 web interface

The iLO 3 web interface provides more functionality than is available through the SSH
interface. Because it is a graphical user interface, it is also easier to use. We will now explore
some of the features of the iLO 3 web interface.
When you access the iLO 3 web interface, you will be prompted to log in, as shown in Figure 2-24.

Figure 2-24: iLO Login Screen

You can use two types of login credentials to access the iLO 3 web interface: local user
accounts and domain user accounts. Local user accounts, including the default Administrator
account, are stored in iLO 3 memory.
domain user account
A user account that has been created in an Active Directory domain database.

Information capabilities of iLO 3

The iLO 3 web interface displays the information page with system status and status summary
information, and provides access to health information, system logs, and Insight Agent
information. The options available in the Information section are:
System Information
iLO 3 Event Log
Integrated Management Log
Insight Agents
Direct access to remote consoles

Overview page
The Overview page is displayed first. This page provides an overview of the server status, as
shown in Figure 2-25, including the product name, server serial number, product ID, and
system ROM.

Figure 2-25: iLO Overview with Server Status

The overview page also displays information about iLO, including whether standard or
advanced iLO is licensed and the firmware version. It also shows key status indicators. The
System Health indicator displays OK if all is well. The Server Power indicator is shown as
green when power is on. The UID indicator shows whether the Unit ID light is lit. You can
manage power and the UID light using the controls at the bottom of the screen, as shown in
Figure 2-26.

Figure 2-26: Power and UID Light Control

You should only cycle power on a server if you need to do so to correct a problem or
perform an approved change. Disrupting server power can have serious consequences for
applications that are running on the server.

System Information
The iLO 3 microprocessor monitors various subsystems and devices while the server is
powered on, during server boot, operating system initialization, and normal operation.
Monitoring even continues during an unexpected operating system failure. System Information
displays the health of the monitored system.
The System Information Summary tab, shown in Figure 2-27, is displayed by default and
provides administrators with a quick way to check that all subsystems are operational.

Figure 2-27: System Information: Summary

System Information also offers the following health tabs to display more detailed information:
Fans, Temperatures, Power, Processors, Memory, NIC Information, and Drives.
We will examine most of these health tabs at various points during this course. For now, we will
look at the Temperatures tab as an example of the type of detail provided. Figure 2-28 shows
that the iLO processor reads the ambient temperature of the server, as well as the temperature
for various components. If any reading exceeds the threshold, that component will no longer
show a status of OK.

Figure 2-28: Temperature Tab

Event Log
The iLO 3 Event Log page is a record of significant events detected by iLO 3. It is shown in
Figure 2-29. Logged events include major server events, such as a server power outage or a
server reset, and iLO 3 events such as unauthorized login attempts. Other logged events
include successful or unsuccessful browser and remote console logins, virtual power and
power cycle events, clear event log actions, and some configuration changes such as creating
or deleting a user.

Figure 2-29: iLO Event Log

Remote Console
If you need to access the console of a server, you can do so by using the Integrated Remote
Console. The Remote Console allows you to access the server's console even if no operating
system is installed.
You can only access a booted server using Remote Console if you have licensed iLO
Advanced Pack.
As you can see in Figure 2-30, Integrated Remote Console requires Microsoft .NET Framework
3.5 on the management computer. There is also a Java Integrated Remote Console. It requires
Java Runtime Environment, Standard Edition 6.0.

Figure 2-30: Remote Console Page When .NET Framework 3.5 Is Not Installed

If the correct version of the .NET Framework is installed, the Remote Console page will look
like Figure 2-31.

Figure 2-31: Remote Console page with .NET Framework 3.5 Installed

Click Launch to open the remote console. The first time Integrated Remote Console is
launched, you are prompted to install the application, as shown in Figure 2-32.

Figure 2-32: Integrated Remote Console

Click Run to install the application. After it is installed, the Remote Console window opens and
the server's desktop is displayed. Figure 2-33 shows the console of a new server that has not
yet been set up. The server is attempting to boot, but it has not found an operating system.

Figure 2-33: Remote Console - No Operating System Installed

Scenario: Stay and Sleep

The Stay and Sleep data center is also a storage facility for towels, bedding, and cleaning
supplies. A number of non-technical people have access to the room. Stay and Sleep has a
single IT person who supports the desktop computers and the network. The company is
planning to hire a database administrator and a programmer.
Discuss the best management option for the database server.

In this chapter, you learned:
Servers come in three form factors: tower, rack-optimized, and blade
A data center requires:
Sufficient clean power
Proper ventilation and cooling
Physical security measures
Qualified personnel
Management access
You use a component's VA rating when determining the power requirements for a data center.

Cooling requirements are measured in BTUs, based on the power consumed by a component.

A Server Console switch allows multiple devices to be managed through a single
keyboard, monitor, and mouse.
iLO allows you to remotely manage a server in any state, including:
Throughout power-on testing
Before the operating system is loaded
While the operating system is functional
During an operating system failure

Review Questions
1. What is the form factor of the ProLiant ML110 G7 server?
2. What are three drawbacks to high power consumption?
3. How does input current compare to inrush current?
4. Which console switch provides access to a headless server over a LAN?
5. Which iLO option allows you to view the health status of various subsystems and devices?

Match the form factor to the description.
Form Factors


a. Tower

___ Most space efficient

b. Rack-optimized

___ ML350e Gen8

c. Blade

___ Height measured in U

___ Servers share power and cooling
___ BL460c G7

Fill in the blank

1. _____ is the standard for measuring the capacity of cooling systems.
2. The inrush current of a system is _____ its input current.
3. _____ current can become a hazard if proper ground procedures are not used.
4. A(n) _____ fire extinguisher can be safely used on computer equipment.
5. You can use _____ to connect to a server and manage it using HP CLI commands.

True or false
1. Power supply efficiency is linear across the output range.
2. TCO includes purchase price, power consumption costs, and maintenance costs.
3. Headless servers can be managed only through the network.
4. The HP Hardware Support Onsite Service package always defines a call-to-repair time of
24 hours.

Essay questions
1. Explain the relationship between power consumption and cooling.
2. Describe the three variables you should consider when choosing the service level for HP
Hardware Support Onsite Service.
3. Compare an HP Console Switch with an HP IP Console Switch.

Scenario questions
An electric car manufacturer has committed to green technology. Their R&D facility is located
in an area that does not have high-speed Internet access.
They need a server that will provide file and print services. In six months, they plan to add a
domain controller to their network. After they release their first car in one year, they expect to
rapidly expand. Within 2 years, they expect to have 600 employees and require 6 servers.
The company does not have any onsite IT expertise and wants to ensure that employees do
not tamper with the servers.

1. Which server form factor will best meet their requirements? Explain why.

2. Would you recommend installing headless servers and managing them using iLO, or using a
console switch? Explain why or why not.

Research Activity: Investigating Server Recommendations

Equipment Required
A computer with Internet access for each student

A private detective agency has six agents, a receptionist, and an accountant. They need a file
server. The desktop computers are running the Linux operating system and the company wants
consistent operating systems across the business.

1. Open your web browser.
2. Navigate to
3. Compare the Tower, Rack, and Blade suggestions for the File and Print server
running Red Hat Linux.
4. Which server would you recommend if the customer's primary requirement was
5. What is the maximum memory supported by the HP ProLiant BL460c G7 Server series?
6. Which remote management technology is supported by the HP ProLiant BL460c
G7 Server series?
7. Which blade is recommended for a Linux file server that will be running Red Hat Linux?
8. Which tower would be recommended for supporting 25-75 users?

Optional Research Activity: Use the HP Power Advisor

Equipment Required
A computer with Internet access for each student

You are preparing a proposal for a file server solution for a customer. You are researching the
power requirements of the proposed server.

1. Open your web browser.
2. Navigate to
3. Click I Accept to accept the license agreement.
4. Choose 220 as the Input Voltage and click Go.
5. Drag the 10622 G2 rack onto the staging area.
6. Click the Servers tab.
7. Drag the DL360 G7 server into the rack.
8. Select the server in the rack (it will highlight in blue) and click the Config button.
9. Specify the configuration shown in Figure 2-34.
10. Click Save.

Figure 2-34: Configuration

11. Record the information about the configuration in the 100% utilization row of the table.

12. Click Config.

13. Change the utilization to 50% and record the values in the table.
14. Click Power Report.
15. Suppose power costs 15 cents per kilowatt hour and the customer expects to keep the
server for 3 years.
a. What is the hardware-driven cost of ownership over the life span of the server?
b. Assuming the default cooling cost per watt, what is the total cost of ownership
for heating and cooling?
16. Close the Power Advisor.


Chapter 3: Inside a Server


In the last chapter, you learned about the components of a data center. Of course, the most
critical components in a data center are the servers. Servers differ in very important ways from
desktop PCs. In this chapter, we take a look at the architecture of a server to illustrate how a
server is designed to meet its workload requirements.
We begin with a quick discussion of how server workloads differ from desktop PC workloads.
Next, we delve into processor and memory architectures, focusing primarily on the Intel Xeon
and AMD Opteron processors. From there, we look at other key server components: expansion
buses and the Smart Array controller, including an introduction to RAID. We wrap up our
discussion with a look at the various product series in the G7 and Gen8 ProLiant product lines.
Redundant Array of Independent Disks (RAID) array
A set of physical disks that are seen as a single logical volume by the operating system.

In this chapter, you will learn how to:
Describe processor technologies.
Describe memory technologies.
Describe common server system architectures.
Describe the role of a Smart Array controller.
Describe the characteristics of various RAID implementations.
Describe the features of the ProLiant G7 and Gen8 product series.

Why Servers Are Different

The obvious difference between a server and a desktop PC is the operating system. Although
you can install a server operating system on a desktop PC, you will achieve better performance
when you run server workloads on a computer that is especially designed as a server. Before
we look at the architecture of a server, we should first take a look at the typical workloads of two
different types of servers: a file server and a database server.
workload
The amount of work performed by a system.

File servers
As you learned in Chapter 1, a file server's primary responsibilities are:
Storing files
Authorizing access
Uploading files
Downloading files
Figure 3-1 provides a simple illustration of the steps that need to occur each time a user copies

a file to or from the file server.

Figure 3-1: File Server Steps

As you can see, a file server performs a lot of network input/output (I/O) and storage I/O for each
request. When you multiply this by the number of requests the server must process, you can
see that a busy file server needs to be optimized for both network and storage access. This
does not mean that the processor is not important. The processor works to help the operating
system make decisions about authorization and determine whether the file has been
processed. If encryption is being used, the processor also needs to perform the necessary
encryption and decryption calculations to process the file access requests.
decryption
The reversal of the encryption process to enable the data to be read.
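The per-request work described above can be sketched in a few lines of Python: check authorization (CPU), read from storage (storage I/O), then send the data over the network (network I/O). Every name and value here is a hypothetical illustration, not a real file-server implementation:

```python
# Minimal sketch of the per-request work a file server performs.
# The ACL, DISK, and function names are hypothetical illustrations.
ACL = {("alice", "report.txt")}              # (user, path) pairs allowed to read
DISK = {"report.txt": b"quarterly numbers"}  # simulated storage

def handle_download(user, path):
    if (user, path) not in ACL:              # CPU: authorization decision
        raise PermissionError(path)          # reject unapproved access
    return DISK[path]                        # storage I/O; network send follows

print(handle_download("alice", "report.txt"))
```
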

Database server
Like a file server, a database server needs to be optimized for both network and storage I/O.
However, a database server requires significantly more processing power and memory as well.
The exact requirements will depend on whether the server primarily performs transactional or
retrieval operations.

Figure 3-2 shows which resources are used when you execute a simple query to retrieve the
manager hierarchy for a specific employee from a database. As you can see in the Object
column on the right of the illustration, the processor, memory, and storage are all utilized. Keep
in mind that this level of resource utilization reflects a single operation. Some database servers
are responsible for performing many operations per second. Another consideration is that this
query is based on an Employees table that has only 290 records. Some database tables have
thousands of records. The more records a table has and the more complex the query, the more
resources are consumed executing the query. Given the level of resource utilization the server
handles, you can see why it is essential for a database server to have the necessary resources.

Figure 3-2: Resources Used by a Query

CPU Architectures
The central processing unit (CPU) is the workhorse of the server. A server contains one or more
CPUs that must be able to efficiently manage a large number of instructions. When choosing a
CPU configuration, you need to be concerned with performance, scalability, and power
consumption.
instruction
A programmatic operation. Common instructions include calculations and moving values between memory locations.

All the processors discussed in this course are x64 processors. You might sometimes
see the term EM64T (Extended Memory 64 Technology) used to refer to a 64-bit Intel processor
and AMD64 to refer to a 64-bit AMD processor. Legacy computers use 32-bit processors. A
limitation of a 32-bit processor is that it can only use 4 GB of RAM.
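The 4 GB ceiling follows directly from the address width: a 32-bit processor can name at most 2^32 distinct byte addresses. A quick check:

```python
# Address space reachable with 32-bit versus 64-bit pointers.
print(2 ** 32 // 2 ** 30)   # 4 GiB: the 32-bit RAM ceiling
print(2 ** 64 // 2 ** 30)   # 17179869184 GiB for a 64-bit address space
```
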
Both AMD Opteron and Intel Xeon processors are used in servers. To understand the

advantages offered by a Xeon processor, we will now take a brief look at the history of Intel
architectures and features. Then we will look at the architecture of an AMD Opteron processor.

Intel processors
The primary components of an Intel processor are execution units, caches, and communication
buses. As its name suggests, an execution unit is responsible for performing instructions. The
caches are locations where data and code are stored. The L1 cache is the cache nearest the
execution unit. The L1 cache is SRAM that is actually part of the CPU and provides high-speed
access. The L2 cache is further away from the execution unit and is much larger than the L1
cache. However, accessing the L2 cache takes more time. Also, in some processor
architectures like the Intel Core 2 Duo, if there are multiple cores, the L2 cache is shared
between them, as shown in Figure 3-3.
Static Random Access Memory (SRAM)
A type of volatile memory that provides fast access times and utilizes little power.

Figure 3-3: Intel Core 2 Duo Processor Architecture

Beginning with the Intel Core 2 Duo processor up until the Core i7 processor, any
communication between the CPU and external components, such as the Memory Controller
Hub (MCH), occurs through the Front Side Bus (FSB) unit.
The MCH has a Direct Media Interface (DMI), which is a dedicated series of four serial links to
the I/O Controller Hub (ICH). The DMI supports 2.5 GT/s.

GT/s
Gigatransfers per second.

The serial links are called lanes.

As illustrated in Figure 3-4, the DMI supports various types of peripherals, including PCIe, USB,
SATA, and IDE, as well as APIC power management, timers and interrupts, and fan
speed control.

Figure 3-4: DMI and ICH

The ICH communicates with hard disk drive controllers, including SATA and IDE, as well as
other peripherals. It is also responsible for managing reset and power cycling events.
General Purpose I/O (GPIO)
Used for system customization, interrupts, and wake events.
System Management Bus (SMBus)

A bus used for low-bandwidth devices, such as on/off switches, voltage sensors, and fan controllers.
Serial Peripheral Interface (SPI)
Interface used for BIOS flash device. A BIOS flash device contains initialization code and boot firmware.
Low Pin Count Interface (LPC)
Low bandwidth interface used to connect PS2 keyboard/mouse controls, serial ports, and other low speed I/O devices.
Real Time Clock (RTC)
The computer's internal clock.
Advanced Programmable Interrupt Controller (APIC)
Component that directs interrupts to a particular processor.

Understanding the chipset

The processor is one chip in a chipset that also includes the Northbridge and Southbridge
chips. The Northbridge chip is responsible for memory access and AGP video. The
Southbridge chip is responsible for peripheral buses.
Accelerated Graphics Port (AGP)
A point-to-point bus used to connect a video card.

Intel Core i7 architecture

The Core i7 processor is based on a completely new architecture, known as the Sandy Bridge
architecture, shown in Figure 3-5.

Figure 3-5: Sandy Bridge Architecture

The Core i7 supports up to four cores. The cores support Hyper-Threading, which allows two
threads to execute simultaneously on each core. The Core i7 also supports Turbo Boost, a
technology that automatically accelerates instruction execution to accommodate peak loads.
thread
An object responsible for executing instructions.

Each core has its own L2 cache. There is an additional L3 cache that is shared between all
cores. The Core i7 processor also supports three DDR3 memory channels that support more
than 28 Gbps bandwidth and operate at 1333 MT/s.
Double Data Rate (DDR)
A type of RAM that transfers two chunks of data each clock cycle.

Another key difference between Core i7 processors and earlier processors is that in Core i7
processors, the FSB has been replaced by a point-to-point link interface called Intel Quick Path
Interconnect (QPI). QPI uses high-speed differential signaling and supports transfer rates up to
6.4 GT/s for both input and output. Its peak bandwidth is approximately 2.5 times as fast as the
best performing FSB. QPI also offers better reliability than FSB.
In addition, the Core i7 supports Intel Virtualization Technology (VT-x), which optimizes running
virtual machines on a single physical computer.
virtual machine
An operating system that runs inside an isolated simulated hardware environment.
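The claim that QPI's peak bandwidth is roughly 2.5 times that of the best-performing FSB can be sanity-checked with some simple arithmetic. The sketch below is illustrative only; it assumes QPI carries 16 data bits (2 bytes) per transfer in each direction, and that the fastest FSB moved 8 bytes per transfer at 1.333 GT/s (neither figure is stated in the text).

```python
# Hedged back-of-the-envelope check of the QPI-vs-FSB bandwidth claim.
# Assumed (not from the text): 2 bytes per QPI transfer per direction,
# 8 bytes per FSB transfer.

def qpi_bandwidth_gbs(transfer_rate_gts, bytes_per_transfer=2, directions=2):
    """Peak QPI bandwidth in GB/s, counting both directions."""
    return transfer_rate_gts * bytes_per_transfer * directions

def fsb_bandwidth_gbs(transfer_rate_gts, bytes_per_transfer=8):
    """Peak FSB bandwidth in GB/s (single shared bus)."""
    return transfer_rate_gts * bytes_per_transfer

qpi = qpi_bandwidth_gbs(6.4)    # 6.4 GT/s QPI -> 25.6 GB/s
fsb = fsb_bandwidth_gbs(1.333)  # 1333 MT/s FSB -> ~10.7 GB/s
print(qpi, fsb, qpi / fsb)      # ratio comes out near the "2.5 times" claim
```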

Xeon processors
Like the Core i7, a Xeon processor has three caches: core-specific L1 and L2 caches and a
shared L3 cache. Most Xeon processors support Hyper-Threading and Turbo Boost. The Xeon
processors offer several advantages over the Core i7 processor, including:
24x7 dependability
The Xeon processor is optimized to run continuously without reboot.
Error Correction Code (ECC) memory protection
Detection and correction of a high percentage of memory errors.
Improved security
Various features for improving malware protection and performing cryptographic
operations, such as encryption.
More memory capacity
Memory capacity depends on the processor and the number of CPUs installed.
The 5500 series Xeon processors support more efficient power usage through:
Integrated Power Gates
Automatically powers down unused cores.
Automated Lower-Power States
Operates the processor and memory at the lowest possible power state to meet
workload requirements.
In addition, you can purchase a motherboard that has multiple Xeon processors.

Table 3-1 shows the characteristics of various Xeon processors. All processors listed in the
table, except the E5502, E5504, E5506, and L5506, support Turbo Boost.
Table 3-1: Xeon Processor Characteristics

Xeon E5-2600 series

Figure 3-6: Xeon E5-2600 Series Processor

ProLiant Gen8 servers feature Xeon E5-2600 Series processors (Figure 3-6) with 2/4/6/8 cores
that deliver up to 80% higher performance than their predecessors. They provide more cores,
cache, and memory capacity, along with bigger, faster communication pathways to move data
more quickly. Two key technologies deliver additional high-value performance boosts:
Faster performance for peak workloads: Intel Turbo Boost Technology
automatically increases processor frequencies to take advantage of power and
thermal headroom.
Up to 2x performance gains for floating point operations: Intel Advanced Vector
Extensions provides new instructions that can significantly improve performance for
applications that rely on floating point or vector computations.
thermal headroom

The difference between the amount of heat produced by a system and the maximum operating temperature.
floating point operation
A calculation that involves numbers with a variable number of decimal places.
vector computation
A computation that involves one or more arrays of values.

In addition, this processor also features a 24 MB cache, 8.0 GT/s QPI (two ports), and 80 PCIe
lanes. Xeon E5-2600 Series processors provide a scalable, energy-efficient platform. Intel
Trusted Execution Technology ensures that servers boot only into cryptographically verified
known good states.
Peripheral Component Interconnect (PCI)
A shared bus that is used to communicate between components.
lane
A communication path. In general, the more lanes available, the faster data can be transferred between components.

Learning about your Intel processor

If you need to obtain information about a server's processor and the features that it supports,
you can download the Intel Processor Identification Utility. There are versions available for
Windows and Linux, as well as a bootable version.
The Intel Processor Identification Utility allows you to view detailed information about your
processor and the features it supports. The Frequency Test tab, shown in Figure 3-7, allows
you to view the reported CPU speed, system bus speed, size of the L3 cache, number of
threads, and number of cores. Using these values, you can compare the speeds at which your
processor is operating to the speed at which the processor was designed to operate.

Figure 3-7: Frequency Test

The power management settings can impact the reported processor speed.
The CPU Technologies tab, shown in Figure 3-8, reports the technologies supported by the
processor. As you can see, the Core i5-2450M CPU is a 64-bit processor, and it supports Intel
VT and Hyper-Threading.

Figure 3-8: CPU Technologies

The CPUID Data tab, shown in Figure 3-9, gives you detailed information about the
processor's type, family, model, stepping, and revision. It also lists the size of each cache,
provides information about the types of packaging, the embedded graphics processor, and the
chipset identifier.

Figure 3-9: CPUID Data

When a CPU model is first introduced, the stepping and revision values are 0. The
stepping or revision value is incremented each time a change is made in the CPU
manufacturing process, either to fix a problem or to make an enhancement. Stepping values are
incremented for major changes, and revision values are incremented for minor changes.
When a stepping increments, the revision number resets to 0 within that stepping.
You can also save a copy of the report.

AMD64 chipset
An AMD64 chipset has a slightly different architecture than an Intel chipset, as shown in Figure
3-10. The processor has an integrated memory controller that it uses to transfer data to and from
RAM. The processor uses up to three HyperTransport buses to communicate with I/O
subsystems and between processors. A HyperTransport link supports up to 19.2 GB/s of peak
bandwidth per processor.

Figure 3-10: Opteron Architecture

CPU power requirements

Most CPUs require a much lower voltage than that provided by the power supply. A voltage
regulator module (VRM) on the motherboard identifies the amount of power a CPU requires
and supplies only the necessary voltage. You will sometimes see a VRM referred to as a
processor power module (PPM).
Modern CPUs also include features that dynamically adjust the amount of power drawn by the
CPU according to usage. These include:
Intel SpeedStep
Integrated Power Gates
Automated Lower Power States
AMD PowerNow
AMD Dual Dynamic Power Management
Scenario: Stay and Sleep
The CEO of Stay and Sleep wants to install a database application on a desktop computer.
The database application will be used to track reservations made by agents through an online
reservation program.
The database will need to handle several transactions during business hours, but not many
during off hours.
Explain why purchasing a server to run the database application is a better solution.

Memory is a critical component in a server. Many performance issues can be attributed to
insufficient memory. Modern servers all support DDR3 memory. Legacy servers support DDR
or DDR2 memory. All three are types of SDRAM.
Synchronous Dynamic Random Access Memory (SDRAM)
A type of RAM that uses a clock signal for synchronization.
Dual in-line memory module (DIMM)
A series of SDRAM chips on a circuit board that has distinct electrical contacts on each side.

DDR, DDR2, and DDR3 memory is packaged as a DIMM. A DDR3 DIMM is shown in Figure 3-11.
A DDR3 DIMM can be composed of one, two, or four ranks of either 9 or 18 SDRAM chips.
The number of ranks determines the DIMM's capacity. A DIMM delivers 64 bits of data
and 8 bits of ECC in parallel to the CPU's memory bus.

Figure 3-11: DDR3 DIMM

DIMM configurations
Each SDRAM chip on a DIMM provides either 4 bits or 8 bits of a 64-bit data word. Chips that
provide 4 bits are called x4 (by 4), and chips that provide 8 bits are called x8. Eight x8 chips or
sixteen x4 chips make a 64-bit word, so at least eight chips are located on one or both sides of
a DIMM. However, a standard DIMM has enough room to hold a ninth chip on each side. The
ninth chip stores 4 bits or 8 bits of ECC.
An ECC DIMM with nine DRAM chips on one side is single-sided, and an ECC DIMM with nine
DRAM chips on each side is double-sided (Figure 3-12). A single-sided x8 ECC DIMM and a
double-sided x4 ECC DIMM each create a single block of 72 bits (64 bits plus 8 ECC bits). In
both cases, a single chip-select signal from the memory controller activates all the chips on the
DIMM. In contrast, a double-sided x8 DIMM (bottom illustration) requires two chip-select signals
to access two 72-bit blocks on two sets of DRAM chips.
chip-select signal

A signal sent to a specific chip on a bus.
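The chip-count arithmetic above (eight x8 or sixteen x4 chips per 64-bit word, plus ECC chips) can be expressed as a minimal sketch; the function name is illustrative, not from the text.

```python
# How many DRAM chips form one 72-bit ECC rank (64 data bits + 8 ECC bits)?
def chips_per_ecc_rank(bits_per_chip):
    data_chips = 64 // bits_per_chip  # chips supplying the 64-bit data word
    ecc_chips = 8 // bits_per_chip    # chips supplying the 8 ECC bits
    return data_chips + ecc_chips

print(chips_per_ecc_rank(8))  # x8 chips: 8 data + 1 ECC = 9 chips per rank
print(chips_per_ecc_rank(4))  # x4 chips: 16 data + 2 ECC = 18 chips per rank
```

These counts match the "9 or 18 SDRAM chips" per rank mentioned earlier for DDR3 DIMMs.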

Along with single-sided and double-sided configurations, DIMMs are classified by rank. A
memory rank is an area or block of 64 bits (72 bits for ECC memory) created by using some or
all of the DRAM chips on a DIMM.

Figure 3-12: Memory Sides and Ranks

A single-rank ECC DIMM (x4 or x8) uses all of its DRAM chips to create a single block of 72
bits, and all the chips are activated by one chip-select (CS) signal from the memory controller
(top two illustrations in Figure 3-12). A dual-rank ECC DIMM produces two 72-bit blocks from
two sets of DRAM chips on the DIMM. A dual-rank ECC DIMM requires two chip-select signals.
The chip-select signals are staggered so that both sets of DRAM chips do not contend for the
memory bus at the same time. Quad-rank DIMMs with ECC produce four 72-bit blocks from four
sets of DRAM chips on the DIMM. Quad-rank DIMMs with ECC require four chip-select
signals. Like dual-rank DIMMs, the memory controller staggers the chip-select signals.
Memory ranks have become more important because of new chipset and memory technologies
and larger memory capacities. Dual-rank DIMMs improve memory capacity by placing two
single-rank DIMMs on one module. The chipset considers each rank as an electrical load on
the memory bus. At slower bus speeds, the number of loads does not degrade bus signal
integrity. For faster memory technologies, the chipset can drive only a certain number of ranks.
For example, if a memory bus has four DIMM slots, the chipset may be capable of supporting
only two dual-rank DIMMs or four single-rank DIMMs. If you install two dual-rank DIMMs, then
the last two slots must remain empty. To compensate for the reduction in the number of slots,
chipsets use multiple memory buses.
If the total number of ranks in the populated DIMM slots exceeds the maximum number of loads
the chipset can support, the server may not boot properly, or it may not operate reliably. Some
systems check the memory configuration while booting to detect invalid memory bus loading. If

the system detects an invalid memory configuration, it stops the boot process to avoid
unreliable operation.
To prevent such memory-related problems, HP advises customers to use only HP-certified
DIMMs, which are available in the memory option kits for each ProLiant server.
Another important difference between single-rank and dual-rank DIMMs is cost. Memory costs
generally increase with DRAM density. For example, the cost of an advanced, high-density
DRAM chip usually runs more than twice that of a conventional DRAM chip. Because large
capacity, single-rank DIMMs are manufactured with higher-density DRAM chips, they typically
cost more than dual-rank DIMMs of comparable capacity.

Comparing DDR, DDR2, and DDR3

Some key differences between the three types of memory are summarized in Table 3-2.
Table 3-2: Memory Types

DDR2 and DDR3 are notched differently. They are not interchangeable. A DDR2 DIMM
can only be used in a DDR2 slot. A DDR3 DIMM can only be used in a DDR3 slot.
The original specification defined the DDR3 frequency and transfer rates shown in the table.
JEDEC has extended the DDR3 specification to define additional memory speeds of 1866 MHz
and 2133 MHz. G6 and G7 ProLiant servers support a maximum DDR3 DIMM speed of 1333
MHz. However, Gen8 servers support memory speeds up to 1600 MHz and will support speeds
up to 1866 MHz once processor chipsets that support it are available.
Joint Electron Device Engineering Council (JEDEC)
The organization responsible for creating and maintaining standards related to memory. Their Web site is www.jedec.org.

We will now take a closer look at memory characteristics and their impact on memory
performance and efficiency.

Memory speed and frequency

The maximum frequency at which memory operates is noted in its name. For example,
DDR3-1333 is rated to operate at 1333 MHz. This value is known as the DDR clock and is double the
real clock rate. If you want to express the maximum transfer rate in MB/s, you should multiply the
frequency by 8. For example, DDR3-1333 memory has a maximum transfer rate of 10,664 MB/s.
These are theoretical data transfer maximums. They do not take into account the clock
cycles used to send control messages between memory and the memory controller.
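The multiply-by-8 rule of thumb follows from the 64-bit (8-byte) word a DIMM delivers per transfer. A minimal sketch (function name is illustrative):

```python
# Peak transfer rate = DDR clock (MT/s) x bus width in bytes.
def peak_transfer_mb_s(ddr_clock_mts, bus_width_bits=64):
    return ddr_clock_mts * bus_width_bits // 8  # bits -> bytes

print(peak_transfer_mb_s(1333))  # DDR3-1333 -> 10664 MB/s
print(peak_transfer_mb_s(1600))  # DDR3-1600 -> 12800 MB/s
```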

Memory latency
Another important indicator of memory performance is Column Address Strobe (CAS) latency.
This indicator is sometimes abbreviated as CL. CAS latency is the number of clock cycles a
memory controller must wait before receiving the requested data from DDR. To calculate the
latency in nanoseconds, you need to first calculate the number of nanoseconds for each clock cycle:
clock cycle = 1/real clock rate

Next, you multiply the result by the CAS latency.

1 ns = 0.000000001 seconds
So, for example, if a DDR3-800 memory chip had a CAS latency of 7, you would calculate the
latency in nanoseconds as follows:
clock cycle = 1/400 = 2.5 ns
latency = 2.5 ns * 7 = 17.5 ns
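The two-step calculation above can be captured in a small helper; the function name is illustrative, not from the text.

```python
# CAS latency in nanoseconds = clock cycle time x CAS latency in cycles.
def cas_latency_ns(real_clock_mhz, cas_latency_cycles):
    clock_cycle_ns = 1000.0 / real_clock_mhz  # 1/MHz expressed in ns
    return clock_cycle_ns * cas_latency_cycles

# DDR3-800: DDR clock 800 MHz, real clock 400 MHz, CL7
print(cas_latency_ns(400, 7))  # 17.5 ns, matching the worked example
```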

Bank interleaving
SDRAM divides memory into two to four banks for simultaneous access to more data. This
combination of division of memory and simultaneous access to memory is called interleaving.
Two-way interleaving is similar to dividing each page in a notebook into two parts and having
two assistants retrieve a different part of the page. Even though each assistant must take a
break, breaks are staggered so that at least one assistant is working at all times. Both
assistants retrieve the data much faster than a single assistant does, especially since the single
assistant does not access data when taking a break. This means that while the processor
accesses one memory bank, the other bank stands ready for access. The processor can initiate
a new memory access request before the previous access completes. This approach to
memory retrieval results in continuous data flow.

A memory chip has two storage areas: the memory array, where the majority of data is stored,
and the I/O buffer, where data is stored awaiting transfer to the memory controller. On a DDR
chip, only two bits of data can be transferred from the memory array to the buffer each clock

cycle. Therefore, if the external real clock speed is 200 MHz, the internal data path must also
have a clock speed of 200 MHz.
With DDR2, the internal data path width is doubled, allowing 4 bits of data to be transferred
each clock cycle. Doubling the internal data path width means that the internal data path only
needs half the frequency of the external data path to meet demand. As shown in Figure 3-13, a
DDR2 chip's internal data rate only needs to be 100 MHz to support a 200 MHz real clock speed.

Figure 3-13: Prefetch Comparison

With DDR3, the data path was doubled again to 8 bits. This means that with a 100 MHz internal
data path, a DDR3 chip can support a real clock speed of 400 MHz and a DDR clock speed of
800 MHz.
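The prefetch relationship described above can be sketched as follows; the prefetch widths are from the text, while the function and constant names are illustrative.

```python
# Each DDR generation doubles the internal data-path (prefetch) width,
# so the memory array can run at a lower clock while still feeding the bus.
PREFETCH_BITS = {"DDR": 2, "DDR2": 4, "DDR3": 8}

def internal_clock_mhz(generation, ddr_clock_mhz):
    real_clock = ddr_clock_mhz / 2  # DDR moves data on both clock edges
    # Prefetch width relative to DDR's 2 bits sets the slowdown factor.
    slowdown = PREFETCH_BITS[generation] // 2
    return real_clock / slowdown

print(internal_clock_mhz("DDR", 400))   # real clock 200 MHz -> array at 200 MHz
print(internal_clock_mhz("DDR2", 400))  # real clock 200 MHz -> array at 100 MHz
print(internal_clock_mhz("DDR3", 800))  # real clock 400 MHz -> array at 100 MHz
```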

Types of DIMMs
DDR3 initially supported two types of DIMM memoryUnbuffered DIMMs (UDIMMs) and
Registered DIMMs (RDIMMs). ProLiant Gen8 servers support a third type of memory called
Load Reduced DIMM, or LRDIMM. DDR2 supports UDIMMs and Fully Buffered DIMMs (FBDIMMs).
All memory on a server must be of the same type.

Unbuffered DIMMs
With UDIMMs, all address and control signals and all the data lines connect directly to the
memory controller across the DIMM connector. Without buffering, each additional UDIMM that
you install on a memory channel increases the electrical load. As a result, UDIMMs are limited
to a maximum of two dual-rank UDIMMs per memory channel. In smaller memory
configurations, UDIMMs offer the fastest memory speeds with the lowest latencies.

Fully Buffered DIMMs

FBDIMMs buffer all memory signals (address, control, and data) through an Advanced Memory
Buffer (AMB) chip on each DIMM. FBDIMM architecture supports more DIMMs on each memory
channel than UDIMMs, but FBDIMMs are more costly, use more power, and have increased
latency. The greater number of memory channels in server architectures beginning with the
ProLiant G6 and G7 servers eliminated the need for an FBDIMM solution, and for this reason,
FBDIMMs are not part of the DDR3 specification.

Registered DIMMs
RDIMMs lessen direct electrical loading by having a register on the DIMM to buffer the Address
and Command signals between the DRAMs and the memory controller. The register on each
DIMM bears the electrical load for the address bus to the DRAMs. This reduces the overall load
on the address portion of the memory channel. The data from an RDIMM still flows in parallel
as 72 bits (64 data and 8 ECC) across the data portion of the memory bus. With RDIMMs, each
memory channel can support up to three dual-rank DDR3 RDIMMs or two quad-rank RDIMMs.
With RDIMMs, the partial buffering slightly increases power consumption and latencies. Table
3-3 compares RDIMMs and UDIMMs.
Table 3-3: RDIMM and UDIMM Comparison

Characteristic                                    RDIMM                     UDIMM
DIMM sizes available                              2 GB, 4 GB, 8 GB, 16 GB   1 GB, 2 GB
Low power version of DIMMs                        Yes                       Yes
ECC support                                       Yes                       Yes
Advanced ECC support                              Yes                       Yes
Address parity                                    Yes                       No
Memory Mirroring and Lockstep Mode support        Yes                       Yes
Relative cost                                     Higher                    Lower
Maximum capacity on a server with 16 slots        256 GB                    128 GB
Maximum capacity on a Gen8 server with 24 slots   384 GB                    128 GB

Address parity
An error checking method in which the register chip calculates parity on the DRAM address lines and compares it to the
parity bit from the memory controller.
Memory mirroring
A memory protection mode in which the memory subsystem writes identical data to two channels simultaneously. If memory

read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically
retrieves the data from the other channel.
Lockstep mode
A memory protection mode in which two channels operate as a single channel: each write and read operation moves a
data word two channels wide. If there are three channels, the third channel is not used.

Load Reduced DIMMs

LRDIMMs buffer the memory address bus and the data bus to the memory controller by adding
a full memory buffer chip to the DIMM module. Unlike the FBDIMMs, LRDIMMs deliver data to
the memory controller in parallel rather than using a high-speed serial connection.
Because LRDIMMs are completely buffered, they have both advantages and disadvantages
when used in a server. These include:
Support for the maximum amount of memory per channel.
You can install up to three quad-rank LRDIMMs on a single memory channel.
Increased power consumption.
The addition of the buffering chip and the higher data rates increase DIMM power consumption.
Increased latency.
For equivalent memory speeds, buffering requires adding clock cycles to memory
reads or writes. This increases the latency of LRDIMMs relative to single and dual-rank RDIMMs.
The increased overhead of an LRDIMM is offset by the fact that on fully populated memory
channels, LRDIMMs operate at a higher speed than quad-rank RDIMMs. This higher operating
speed allows an LRDIMM to have lower overall latency than a quad-rank RDIMM has.
Table 3-4 compares the operating characteristics of a 32 GB low-voltage quad-rank LRDIMM
with those of an equivalent RDIMM.
Table 3-4: Operating Characteristics of Quad-rank LRDIMM vs. RDIMM

LRDIMMs allow you to configure large capacity systems with faster memory data rates than

those of quad-rank RDIMMs. ProLiant Gen8 servers support LRDIMMs, as well as UDIMMs
and RDIMMs, although you cannot mix DIMM types within a single server. G6 and G7 servers
do not support LRDIMMs.

HyperCloud DIMMs
HyperCloud DIMMs (HDIMMs) are high capacity DIMMs that are used in DL380p and DL360p
Gen8 servers. A detailed discussion of HDIMMs is beyond the scope of this course.

HP SmartMemory
HP SmartMemory is a technology introduced for ProLiant Gen8 servers that unlocks certain
features only available with HP-qualified memory. A unique signature written to the memory
serial presence detect (SPD) on each DIMM verifies to Gen8 servers that SmartMemory is
installed and has passed HP's rigorous qualification and testing process.
Low voltage (1.35V) SmartMemory can operate at 1333 MT/s with three DIMMs per channel
(3DPC), but third-party memory must run at 1.5V to achieve that rate. This equates to up to 20%
less power with no reduction in performance. In addition, the industry supports UDIMM at two
DIMMs per channel at 1066 MT/s. SmartMemory supports two DIMMs per channel at 1333
MT/s, which is approximately 25% greater bandwidth than is supported by UDIMM at two
DIMMs per channel.
SmartMemory enables key features, such as complete unit history information, for proactive
notification through HP Active Health. Active Health monitors changes to the server hardware
configuration to enable lifecycle monitoring of memory health status. While Pre-Failure Alert
simply notifies the administrator of an impending failure, SmartMemory provides precise
information on memory-related events, such as multi-bit errors or configuration issues.
When used with HP SIM, the Smart-capable firmware enables fault prediction capabilities.
Before potential problems develop in one of the DIMMs, HP SIM lets you know in advance so
that you can have the DIMM replaced before it fails, perhaps while it is still under warranty.
HP Active Health
A component of the HP iLO Management Engine that provides continuous monitoring of system parameters.
HP Systems Insight Manager (SIM)
A suite of management tools that allow you to manage multiple servers from a single console.

NUMA Architecture
Now that you have a general understanding of memory and processors, we will review
how the Non-Uniform Memory Access (NUMA) server architecture and DDR3 memory work
together to address memory throughput and latency issues that limited system performance
under older multiprocessing architectures.
AMD Opteron-based servers have used NUMA architecture since their inception, first with
DDR1 and later with DDR2 memory. The AMD-based ProLiant G7 servers use an updated
NUMA architecture that supports DDR3 memory. Starting in G6 and G7, Intel-based HP
ProLiant servers began incorporating NUMA architecture along with other new features. All
ProLiant Gen8 servers use NUMA architecture.

Uniform memory access architecture

Figure 3-14 shows the typical architecture for a two-processor (2P) server that uses the
traditional memory architecture. With this general design, known as uniform memory access,
memory controllers and memory channels are located on a centralized system chipset. Each
processor uses the same pathway to access all of the system memory, communicating with the
memory controllers across the front side bus. The controllers then access the DIMMs on the
memory channels, returning the requested data to the processors. The architecture supports
two memory controller functions. Each of these functions manages two memory channels for
four memory channels per system. The system supported larger memory footprints by allowing
up to four DDR2 FBDIMMs per channel.

Figure 3-14: Traditional Two-processor Uniform Memory Architecture

This architecture gives each memory channel a maximum raw bandwidth of 9.6 GB/s for
systems supporting PC2-6400 fully buffered DIMMs. The memory channels of systems that use
registered DIMMs can support a maximum bandwidth of 6.4 GB/s. With four memory channels
per system, the theoretical maximum memory bandwidth for these systems is 38.4 GB/s and
25.6 GB/s, respectively. There are, however, factors that limit the achievable throughput:
The maximum bandwidth of the front side bus is a performance choke point.
Larger memory footprints require fully buffered DIMMS, which increases memory
latency and decreases memory throughput and performance.
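The system-level bandwidth figures above are simply per-channel bandwidth times the channel count. A minimal sketch (function name is illustrative):

```python
# Theoretical system memory bandwidth = channels x per-channel bandwidth.
def system_bandwidth_gbs(channels, per_channel_gbs):
    return channels * per_channel_gbs

print(system_bandwidth_gbs(4, 9.6))  # PC2-6400 FBDIMMs: 38.4 GB/s
print(system_bandwidth_gbs(4, 6.4))  # registered DIMMs: 25.6 GB/s
```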

DDR3 and NUMA architecture

Although they vary slightly in their implementation details, Intel and AMD NUMA architectures
share a common design concept. With NUMA, each processor in the system has separate

memory controllers and memory channels. In addition to having more memory controllers and
memory channels in the system, each processor accesses its attached memory directly. Direct
memory access eliminates the bottleneck of the front side bus and reduces latency. A
processor accesses the system memory attached to a different processor through high-speed
serial links that connect the primary system components. In Intel-based systems, the QuickPath
Interconnect (QPI) provides these high-speed serial links. For AMD-based systems, the
HyperTransport technology performs the same function. Beginning with the HP ProLiant G6, all
HP ProLiant servers use DDR3 memory to help increase memory throughput. Figure 3-15
illustrates the NUMA architecture for a typical ProLiant server with two processors.

Figure 3-15: ProLiant 2P NUMA Architecture

NUMA architecture solves the two related problems that emerged as system complexity grew:
It eliminates bottlenecks in the memory subsystem that constrain system memory throughput.
It supports larger memory footprints without significantly lowering memory performance.
throughput. For example, when the architecture for the Intel-based 2P ProLiant G6 servers uses
DDR3 memory with six channels, it has a maximum theoretical memory bandwidth of 64 GB/s,
65% greater than that of the older architecture using DDR2 memory.
For ProLiant Gen8 servers, the number of memory channels in 2P systems increases to four
per processor. Memory speeds increase to support up to 1600 MT/s initially and 1866 MT/s
later. When the Gen8 2P systems use quad-rank LRDIMM, the maximum memory capacity will
increase to 768 GB. This increase in memory capacity occurs in both Intel-based and AMD-based server designs.
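The NUMA bandwidth figures quoted here follow from channel count times DDR3 data rate, with 8 bytes moved per transfer. A minimal sketch (function name is illustrative):

```python
# Theoretical NUMA memory bandwidth in GB/s:
# processors x channels per CPU x data rate (MT/s) x 8 bytes per transfer.
def numa_bandwidth_gbs(processors, channels_per_cpu, mt_per_s):
    return processors * channels_per_cpu * mt_per_s * 8 / 1000

print(numa_bandwidth_gbs(2, 3, 1333))  # 2P G6, six DDR3-1333 channels: ~64 GB/s
print(numa_bandwidth_gbs(2, 4, 1600))  # 2P Gen8 at 1600 MT/s: 102.4 GB/s
```

The second result matches the Gen8 theoretical maximum listed in Table 3-5.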

NUMA support in 4 processor systems

For ProLiant 4P server architectures, Intel-based designs take a slightly different approach to
the memory subsystem than AMD-based systems take. Each design approach has its own set
of benefits, and both rely on DDR3 memory.

Intel 4P architecture
Figure 3-16 shows the processor and memory architecture for the 4P HP ProLiant G7 servers

that use Intel Xeon 5600 series processors.

Figure 3-16: ProLiant G7 Intel 4-way Architecture

While the basic NUMA architecture is evident in 4P HP ProLiant G7 servers, there are distinct
differences between its design and the design of 2P systems. The most significant difference
for the 4P server is that it possesses separate memory buffers between the CPU and the
memory channels. These buffers use a proprietary, high-speed serial link to transport memory
data between themselves and the CPU while providing a standard memory bus interface to the
DDR3 DIMMs. Using this approach, each memory controller supports two memory channels of
two DIMMs each. In addition, the 4P architecture uses four memory controllers per CPU rather
than three. Taken together, these design choices allow the Intel-based 4P systems to support
up to 64 DIMMs, or 2 TB of memory using 32 GB DIMMs.
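The DIMM-count arithmetic for this 4P design can be sketched as follows; the function name is illustrative, not from the text.

```python
# Total DIMM slots and capacity for the Intel 4P layout described above:
# 4 CPUs x 4 memory controllers x 2 channels x 2 DIMMs per channel.
def max_memory_gb(cpus, controllers_per_cpu, channels_per_controller,
                  dimms_per_channel, dimm_size_gb):
    dimms = (cpus * controllers_per_cpu *
             channels_per_controller * dimms_per_channel)
    return dimms, dimms * dimm_size_gb

dimms, capacity = max_memory_gb(4, 4, 2, 2, 32)
print(dimms, capacity)  # 64 DIMMs, 2048 GB (2 TB) with 32 GB DIMMs
```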

With the memory buffering used in this architecture, the system memory operates at 1066 MT/s
for all memory configurations, including fully populated systems.
For ProLiant Gen8 servers, this architecture remains relatively unchanged, although the
memory speed is increased to 1333 MT/s.

AMD 4P architecture
AMD-based HP ProLiant servers have used NUMA architecture since their inception. However,
ProLiant G7 servers are the first generation to use DDR3 memory. Figure 3-17 shows
processor and memory architecture for an AMD-based 4P ProLiant G7 server with three DIMM
sockets per memory channel.

Figure 3-17: ProLiant G7 AMD 4-Way Architecture

Each processor has four memory controllers, and each memory controller has a channel
supporting either two or three DDR3 DIMMs, depending on the system. The three DIMM
memory channels are configured electrically as a T, with one DIMM installed at the center of
the T and the other two DIMMs on the ends. This design provides improved signal integrity by
keeping the lengths of the electrical paths to the DIMMs as close to the same length as
reasonably possible. In order to help maintain this symmetry, you install DIMMs on both ends of
the T before installing the third DIMM in the center.
This architecture allows the memory subsystem to support DDR3 operating at 1333 MT/s when
the memory channels are not fully populated. The absence of external memory buffering also
results in lower overall memory latency. Without buffering, however, the architecture only
supports a maximum of 48 DIMMs, or 1 TB of system memory.
For ProLiant Gen8 servers, the AMD architecture does not change significantly. Without
memory buffering, these systems support DDR3 memory operating at 1600 MT/s in smaller
memory configurations. With the availability of LRDIMMs capable of supporting three quad-rank
DIMMs per channel, the maximum memory footprint will increase to 1.5 TB at 667 MT/s.

NUMA impact on bandwidth

By removing the front side bus and moving the memory controllers onto the processors, the
newer system architectures eliminate some of the previous memory bottlenecks. The maximum
theoretical memory bandwidth is unattainable in practice because it represents an idealized
scenario in which all memory channels operate at full throughput all the time. Using NUMA
architectures, 2P ProLiant servers can achieve improved measured memory throughput relative
to their theoretical maximums, as shown in Table 3-5.
Table 3-5: Memory Throughput for 2P ProLiant Servers

Server                           Theoretical maximum        Measured maximum
                                 memory bandwidth           memory throughput
Intel-based 2P ProLiant G5       25.6 GB/s (RDIMMs)         12 GB/s
                                 38.4 GB/s (FBDIMMs)
Intel-based 2P ProLiant G6/G7    64 GB/s                    40 GB/s
Intel-based 2P ProLiant Gen8     102.4 GB/s                 88.6 GB/s
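The theoretical figures in Table 3-5 follow directly from the channel count and transfer rate: each 64-bit DDR3 channel moves 8 bytes per transfer. The following sketch assumes four channels of DDR3-1600 per Gen8 processor and three channels of DDR3-1333 per G6/G7 processor, consistent with the architectures described earlier:

```python
def theoretical_bandwidth_gbs(sockets: int, channels_per_socket: int, mts: int) -> float:
    """Peak memory bandwidth in GB/s: a 64-bit DDR3 channel moves 8 bytes per transfer."""
    bytes_per_transfer = 8
    return sockets * channels_per_socket * mts * bytes_per_transfer / 1000

# 2P Gen8 (assumed: 4 channels of DDR3-1600 per processor)
print(theoretical_bandwidth_gbs(2, 4, 1600))   # 102.4
# 2P G6/G7 (assumed: 3 channels of DDR3-1333 per processor)
print(theoretical_bandwidth_gbs(2, 3, 1333))   # ~64
```

The Gen8 result matches the 102.4 GB/s theoretical maximum in the table exactly.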

Memory Configuration on an HP ProLiant Server

When designing and implementing a server configuration, you should configure memory to
achieve the best possible performance. Next, we will look at some best practice guidelines and
the DDR3 Memory Configuration Tool.

Memory configuration best practices

DDR3 memory delivers improved performance over DDR2 memory. With the NUMA
architectures, the way DDR3 DIMMs are installed in the system affects performance.

Maximizing system throughput

The key to maximizing system throughput is to populate as many of the system memory
channels as possible. This helps to ensure that the memory bandwidth of all channels is
available to the system. With 2P ProLiant G6 servers based on the Intel Xeon 5500 series
processors, you must install a minimum of six DIMM modules (one in each memory channel) to
ensure that the memory bandwidth of all channels is available to the system.

Minimizing memory latency

You can optimize memory latency, particularly loaded memory latency, by running at the
highest data rate. For systems that are capable of supporting the higher data rates, achieving
this memory speed depends on the number of the DIMMs installed in each channel, as well as
their rank.

Using balanced memory configurations

For nearly all application environments, the optimal configuration for DDR3 memory is to
balance installed memory across memory channels and across processors. Balancing installed
memory across memory channels on a processor optimizes channel and rank interleaving, and
this interleaving ensures maximum memory throughput.
Balancing installed memory across processors ensures consistent performance for all threads
running on the server. If you have installed more memory on one processor, threads running on
that processor will achieve significantly higher performance than threads on the other
processor. A performance imbalance can degrade overall system performance, particularly in
virtualization environments.
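The balancing guideline above can be expressed as a simple round-robin placement of identical DIMMs across processors and channels. This is a hypothetical illustration, not HP's configuration tool:

```python
def balanced_placement(dimm_count, processors=2, channels_per_processor=3):
    """Assign DIMMs round-robin across (processor, channel) pairs so that
    no channel holds more than one DIMM more than any other."""
    slots = [(p, c) for p in range(processors) for c in range(channels_per_processor)]
    placement = {slot: 0 for slot in slots}
    for i in range(dimm_count):
        placement[slots[i % len(slots)]] += 1
    return placement

# Six DIMMs on a 2P Xeon 5500 system: one per channel, matching the
# minimum-population guideline for maximizing throughput.
print(balanced_placement(6))
```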

DDR3 Memory Configuration Tool

The DDR3 Memory Configuration Tool is available for download from the HP website.
When you launch the DDR3 Memory Configuration Tool, you first need to accept the license
agreement. A page is displayed that explains a little about the types of DDR3 memory. Click
Next to continue.
You are then prompted to select whether you have a build-to-order (BTO) model part number. If
you do, click Yes. Otherwise, click No. If you click No, you are prompted to select an HP
ProLiant Server Series, as shown in Figure 3-18.

Figure 3-18: Server Selection

Select a series and then expand the Select your ProLiant server dropdown list. Choose the
server model from the list. You are then prompted to answer two questions: how many
processors the system has installed and whether memory is installed in the server. For the sake
of this demonstration, we choose 2 and answer No, as shown in Figure 3-19.

Figure 3-19: Server Configuration

Click Next to continue. Enter the amount of memory that you need or drag the slider to select
the appropriate value. Choose whether you want to optimize the server for performance, power
efficiency, low cost, or general purpose, as shown in Figure 3-20.

Figure 3-20: Specify Requirements

Click Next to continue. You are shown configuration options, with those that are recommended
shown as Optimal, as shown in Figure 3-21.

Figure 3-21: Memory Configuration Options

Click the Select button for an option to display the part numbers and a diagram to show how
the DIMMs should be installed. The diagram for Option 1 above is shown in Figure 3-22.

Figure 3-22: Parts List and Installation Diagram

Scenario: Stay and Sleep

In your meeting with Stay and Sleep's CEO, you learn that the company expects to launch
a television advertising campaign after the website goes live. The database server and the
web server will both need to be able to scale up if the advertising response overloads the
initial configuration.
What should you consider when choosing memory now to allow for scalability in the future?

Expansion Buses
The most common types of expansion slots on a server motherboard are PCIe expansion slots.
Some servers also support PCI and PCI-X expansion slots. These expansion slots can be used
to connect graphics adapters, network adapters, and other peripheral cards.
We will now take a closer look at each of these standards.


PCI

The PCI standard is a shared bus topology that allows a single bus to be shared by up to 5
peripheral devices. A PCI card has 47 pins. A 32-bit PCI card that operates at 33 MHz can
support data transfers of up to 133 MB/s, and a 64-bit PCI card can support data transfers of up
to 533 MB/s at 66 MHz.
A PCI slot can be used to connect most peripherals. However, it is not fast enough to support
graphics cards or other cards that require high bandwidth.
PCI cards can have power requirements of 7W, 15W, or 25W.

PCI Hot-Plug specification

The original PCI specification did not support inserting and removing PCI cards without
shutting down the computer. However, the PCI Hot-Plug supplement describes the steps
hardware and driver developers need to take to make a device hot-pluggable.
Hot-plug devices are supported in Windows operating systems starting with Windows 2000, as
well as most current Linux distributions.
To be hot-pluggable, a device must support Plug and Play. It also needs to be able to execute
without setting any CMOS options.
plug and play
The process by which an expansion card or other peripheral is automatically detected and configured. Prior to plug-and-play
support, devices had to be manually configured with an IRQ and I/O address. Manual configurations led to conflicts when
two devices were assigned the same value.
Interrupt Request (IRQ)
A number assigned to a device and used by the processor to signal to the device that it has something to communicate.
I/O address
Memory address that the CPU uses to communicate with the device.
Complementary Metal Oxide Semiconductor (CMOS) options
Configuration settings that are stored on a non-volatile memory chip.

PCI-X

PCI-X (also called PCI Extended) is also a shared bus. However, it is 64 bits wide instead of 32
bits wide. It is available with a 66 MHz clock rate or a 133 MHz clock rate. At 133 MHz, it
supports data transfers up to approximately 1 GB/s.

PCIe

PCIe (also called PCI Express) is a serial connection technology that uses point-to-point
switched connections to facilitate direct communication between devices.
A PCIe card has one or more 4-wire lanes that are used to transmit data. The number of lanes
determines the width of the slot or card. In the first generation of PCIe, each lane supports data
transfer rates of 250 MB/s in each direction. This means that a 16-lane PCIe card can have a
data transfer rate of 4 GB/s in each direction. The notation that indicates the number of lanes is
x1, x2, x4, x8, and x16 (pronounced "by 16").
Another advantage of PCIe is that it can supply up to 75W of power. This is especially
important for high-powered graphics cards.

You can install a PCI or PCI-X card in a PCIe slot by using a PCIe to PCI adapter card. This is
important if you have a legacy peripheral that you need to use in a newer server that does not
have a PCI slot. You can also install a PCIe card into a PCIe slot that is wider (has more lanes)
than the card. However, only the lanes of the card will be used.
You cannot install a PCIe card into a PCI slot or into a PCIe slot with fewer lanes.
There are three different generations of PCIe. Their characteristics are covered in Table 3-6.
Table 3-6: PCIe Characteristics

Generation   Transfer rate   Encoding    Bandwidth per lane (each direction)
PCIe 1.x     2.5 GT/s        8b/10b      250 MB/s
PCIe 2.0     5 GT/s          8b/10b      500 MB/s
PCIe 3.0     8 GT/s          128b/130b   ~985 MB/s

The bandwidth for PCIe 3.0 is double that for PCIe 2.0, but the transfer rate is less than double.
The primary reason for this is that PCIe 3.0 has a much lower overhead for encoding data for
transmission.
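The effect of encoding overhead on usable bandwidth can be checked with a quick calculation. The generation figures below come from the published PCIe specifications:

```python
def lane_throughput_mbs(gigatransfers: float, payload_bits: int, coded_bits: int) -> float:
    """Usable throughput per lane per direction in MB/s, after line encoding.
    Each transfer carries one bit on the wire; encoding reduces the payload share."""
    return gigatransfers * 1000 * payload_bits / coded_bits / 8

print(lane_throughput_mbs(2.5, 8, 10))     # PCIe 1.x (8b/10b):    250.0
print(lane_throughput_mbs(5.0, 8, 10))     # PCIe 2.0 (8b/10b):    500.0
print(lane_throughput_mbs(8.0, 128, 130))  # PCIe 3.0 (128b/130b): ~984.6
```

Note that going from 5 GT/s to 8 GT/s is only a 1.6x raw increase, yet the usable bandwidth roughly doubles because 128b/130b encoding wastes far less of each transfer than 8b/10b.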

Internal Storage
One of the primary functions of a server is to store and provide access to data. Therefore, the
storage devices on a server must have the following characteristics:
Sufficient storage capacity
Optimal access times
Redundancy to protect data against hard disk failure
HP servers have a Smart Array processing engine to allow you to connect internal or external
Serial Attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA) hard drives
over a PCIe bus.
Serial Advanced Technology Attachment (SATA)
An interface standard that evolved from the Integrated Device Electronics (IDE) standard. It calls for serial communication
and offers data rates higher than those provided by IDE, but lower than those supported by SAS.
Serial Attached SCSI (SAS)
An extension of SCSI interface technology that allows higher data transfer rates and better reliability for server hard
disks than those provided by SATA.
Small Computer System Interface (SCSI)

A set of standards that define parallel interface connectivity for devices, including storage devices, printers, and scanners.

Smart Array controller features

A Smart Array controller transforms the read and write requests made by an application into
individual instructions required by the hard disk or RAID array.
Two Smart Array controllers are shown in Figure 3-23.

Figure 3-23: HP Smart Array Controllers

A Smart Array controller includes a cache module that improves I/O performance over that
offered by a standard hard disk controller. It provides both read-ahead caching and write-back
caching.

Read-ahead caching
The HP Smart Array controller family uses an adaptive read-ahead algorithm that anticipates
data needs to reduce wait time. It can detect sequential read activity on single or multiple I/O
threads and predict when read requests for sequential data will follow. The algorithm then
reads ahead from the disk drives. When the read request occurs, the controller retrieves the
data from high-speed cache memory in microseconds rather than from the disk drive in
milliseconds. This adaptive read-ahead scheme provides excellent performance for sequential
small block read requests.
sequential data
Data stored on adjoining sectors of a hard disk.

The controller automatically disables read-ahead when it detects nonsequential read activity.
HP Smart Array controller adaptive read-ahead caching eliminates issues with fixed read-ahead
schemes that increase sequential read performance but degrade random read performance.
random read
The process of reading data from diverse areas of the drive.
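The core idea of adaptive read-ahead can be illustrated with a minimal sketch. The actual Smart Array algorithm is proprietary; this only shows how sequential access might be detected from recent logical block addresses:

```python
def is_sequential(requests, run_length=3):
    """Treat a read stream as sequential when the last few requests hit
    consecutive logical block addresses."""
    if len(requests) < run_length:
        return False
    tail = requests[-run_length:]
    return all(b == a + 1 for a, b in zip(tail, tail[1:]))

print(is_sequential([100, 101, 102]))  # True  -> enable read-ahead prefetching
print(is_sequential([100, 57, 312]))   # False -> disable read-ahead
```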

Write-back caching

HP Smart Array controllers use a write-back caching scheme that lets host applications
continue to perform other tasks without waiting for write operations to the disk to complete. A
controller without a write-back cache returns completion status to the OS only after it writes the
data to the drives. Because writing to a drive is more time-consuming than writing to cache
memory, this causes applications to wait and performance to suffer.
A controller with write-back caching can post write data to high-speed cache memory and
immediately return completion status to the operating system. As far as the operating system is
concerned, the write operation completes in microseconds rather than milliseconds. The
controller then writes data from its write cache to the disk later, at an optimal time for the controller.
Once the controller locates write data in the cache, subsequent reads to the same disk location
come from the cache. This is known as a read cache hit. Subsequent writes to the same disk
location will replace the data held in cache. Thus, write-back caching improves bandwidth and
latency for applications that frequently write and read the same area of the disk.
The write cache will typically fill up and remain full most of the time in high-workload
environments. The controller uses this opportunity to analyze the pending write commands and
improve their efficiency. The controller can use write coalescing to combine small writes to
adjacent logical blocks into a single larger write for quicker execution. The controller can also
perform command reordering, rearranging the execution order of the writes in the cache to
reduce the overall disk latency. With larger amounts of write cache memory, the Smart Array
controller can store and analyze a larger number of pending write commands, increasing the
opportunities for write coalescing and command reordering while delivering better overall
performance.
ECC DRAM technology protects the data while it is in cache. Smart Array battery-backed or
flash-backed cache backup mechanisms protect the cache data against a server crash and
power loss.
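Write coalescing, as described above, can be sketched as merging pending writes to adjacent logical blocks into fewer, larger operations. This is a simplified model of what the controller does, not its actual firmware logic:

```python
def coalesce(writes):
    """Merge pending writes to adjacent or overlapping logical blocks.
    Each write is (start_block, block_count); returns the merged list."""
    merged = []
    for start, count in sorted(writes):
        if merged and start <= merged[-1][0] + merged[-1][1]:
            prev_start, prev_count = merged[-1]
            # Extend the previous write to cover this one.
            merged[-1] = (prev_start, max(prev_count, start + count - prev_start))
        else:
            merged.append((start, count))
    return merged

# Three small adjacent writes become one larger write; the distant write stays separate.
print(coalesce([(10, 2), (12, 2), (14, 2), (40, 4)]))  # [(10, 6), (40, 4)]
```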
The controller disables caching when battery-backed or flash-backed cache is not
installed. You can override this behavior, but doing so opens a window for possible data loss.

RAID

RAID technology combines multiple physical disk drives into a logical disk drive to improve
resiliency and increase performance of computer storage.
There are several RAID levels in common use, each with its own strengths and weaknesses,
and each designated by a number. Some RAID levels improve performance (striping), some
improve resiliency (parity), and some improve both. In this section, we will examine five RAID
levels that are commonly implemented:
RAID 0
RAID 1
RAID 5
RAID 6
RAID 1+0

However, first we will define a couple of terms.

RAID set
The term RAID set, also called a RAID array, is used to refer to a collection of drives that are
combined and configured to work together as a logical drive, with a RAID configuration applied
to them.
Figure 3-24 shows a simple RAID set containing five disks that have been combined into a
single logical disk.

Figure 3-24: Simple RAID Set

It is common practice when representing storage in technical and architectural

diagrams to draw disk drives and logical drives as cylinders.

RAID controllers
A RAID controller is dedicated hardware that takes physical disks and performs RAID functions
on these disks. These RAID functions include: creating logical disks (referred to as Logical
Units and usually abbreviated as LUNs), striping, mirroring, parity calculation, and recovering
from failures. Figure 3-25 shows how a RAID controller fits into the architecture.

Figure 3-25: RAID Controller

Internal RAID controllers, such as HP Smart Array RAID controllers, are installed inside
physical servers and control the physical disk drives installed in that server. Internal RAID
controllers sit on the PCIe bus of the server.
A hard disk attached to an internal controller is known as Direct Attached Storage (DAS).

Striping

Striping is a technique used by many RAID algorithms to improve storage performance. At a
high level, this improved performance is achieved by aggregating the performance of all drives
in the RAID set (sometimes referred to as the stripe set). This is shown in Figure 3-26.

Figure 3-26: Striping

The following simple example highlights the performance benefits of striping. Imagine that you
are saving a 10 MB MP3 file to disk and that your disk can write (save data) at 1 MB/sec. Saving
your MP3 file will take 10 seconds. However, if you had four of those disk drives and combined
them into a stripe set, you would be able to save your MP3 file in 2.5 seconds. This is because
each drive is capable of 1 MB/sec, and creating a stripe set combines the four drives into a
single logical drive with a write performance of 4 MB/sec.
Striping also allows for increased capacity. In the above diagram, the logical disk has the
capacity of all four of the physical disks used in the stripe set.
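The MP3 example above reduces to simple arithmetic, sketched here for an ideal stripe set where every drive contributes its full write rate:

```python
def save_time_seconds(file_mb: float, drive_mb_per_sec: float, drives: int) -> float:
    """Ideal write time when data is striped evenly across all drives in the set."""
    aggregate_rate = drive_mb_per_sec * drives
    return file_mb / aggregate_rate

print(save_time_seconds(10, 1, 1))  # 10.0 seconds on a single drive
print(save_time_seconds(10, 1, 4))  # 2.5 seconds on a four-drive stripe set
```

Real-world throughput scales less than perfectly, but the principle of aggregating drive performance holds.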

Parity

Parity is a technique used in RAID algorithms to increase the resiliency of a RAID set. Some
RAID algorithms reserve a percentage of the capacity of the entire RAID set to store parity data
that will enable the RAID set to recover from certain failure conditions, such as a failed drive.
Figure 3-27 shows how a single disk drive can be reserved in a RAID set for parity data.

Figure 3-27: Parity Drive

The above RAID set has a single drive for parity and can therefore suffer a single drive failure
(any drive in the RAID set) and still continue to function.
Figure 3-28 shows RAID sets with a single parity drive and how they deal with single and
multiple failed data drives.

Figure 3-28: Impact of Drive Failure on RAID Set

In the bottom illustration, the RAID set is failed (data is lost) because two drives have failed and
there is only a single parity drive. If this RAID set had contained two parity drives, it would have
been able to survive two failed data drives.
Now that we have covered some concepts that are fundamental to RAID, we will take a look at
some common RAID configurations.

RAID 0

In the opinion of some purists, RAID 0 is not true RAID because it does not provide the R
(Redundant) in RAID. Any failure to any drive in the RAID set will cause the entire set to fail
and data to be lost.
RAID 0 does, however, provide increased capacity and improved performance by combining
multiple disk drives into a single logical drive.
RAID 0 is also known as striping, or striping with no parity.
Figure 3-29 shows a server writing data (D1, D2, D3, D4, ...) to the logical drive that is made up
of 4 physical drives. As you can see, the data blocks (D1, D2, D3, ...) are evenly striped across
all 4 drives in the stripe set.

Figure 3-29: RAID 0

RAID 1

RAID 1 is also known as mirroring without striping.
A RAID 1 set contains a minimum of two drives. As the term mirroring suggests, half of the
RAID set is used as a mirror copy of the other half. This is almost always implemented as one
drive being a mirror copy of the other. If either of the drives in the RAID set fails, the other can
be used to service both reads and writes.
RAID 1 is considered a very safe RAID level, but it comes at a cost. The cost of RAID 1 is that
you must always use half of your capacity to protect against failure. This means that the cost
per terabyte of usable storage will be double that of unprotected storage. For example, to store
250 GB of data on a RAID 1 set, you would need to purchase two 250 GB drives.
Figure 3-30 shows a RAID 1 set created from two 600 GB drives. You can see that the resulting
logical drive is also 600 GB, exactly half of the total capacity of the drives in the RAID set. One
drive is a mirror copy of the other.

Figure 3-30: RAID 1 Mirror Set

RAID 5

RAID 5 reserves the equivalent of one disk's worth of space for parity data, so that a RAID set can
recover from a single drive failure. No matter how many disks are in the RAID set, only the
space of a single drive is reserved for parity, and only a single drive can fail without the RAID
set failing and losing data.
RAID 5 is also known as block level striping with distributed parity.
Common RAID 5 configurations include:
RAID 5 (3+1): three drives' worth of capacity for data and one drive's worth of capacity
for parity data
RAID 5 (7+1): seven drives' worth of capacity for data and one drive's worth of capacity
for parity data
RAID Notation:
RAID configurations are generally expressed as RAID X (Y+Z), where:
X = RAID level
Y = number of data spindles/drives
Z = number of parity spindles/drives
For example, a RAID 5 configuration with 7 drives' worth of data and a single drive's worth of parity would be expressed as
RAID 5 (7+1).

Although the above are popular RAID 5 configurations, other configurations are also possible,
including configurations such as RAID 5 (2+1) and RAID 5 (15+1). As always, only a single
drive failure can be tolerated before losing data.
Figure 3-31 shows a RAID 5 (3+1) RAID set.

Figure 3-31: RAID 5 (3+1)

Because RAID 5 can have different configurations, each configuration has a different RAID
overhead. For example, RAID 5 (3+1) has a RAID overhead of 25% (there are 4 drives in the
RAID set and 1 is used for parity: 1/4 x 100 = 25, so 25% of the capacity of the RAID set is
lost to parity). In RAID 5 (7+1), this overhead is reduced to 12.5%.
Normally RAID 5 uses an exclusive OR (XOR) Boolean logic operation to create parity
and to recover data from parity.
exclusive OR (XOR)
An operation in which bits that are the same result in 0 and bits that are different result in 1. For example:
10110000 XOR 01000000 = 11110000
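The XOR property that makes RAID 5 recovery possible is easy to demonstrate: XORing the parity block with all surviving data blocks reproduces the missing block. A small sketch with three hypothetical data blocks:

```python
from functools import reduce

data_blocks = [0b10110000, 0b01000000, 0b00001111]  # contents of three data drives
parity = reduce(lambda a, b: a ^ b, data_blocks)     # contents of the parity drive

# Simulate losing the second drive and rebuilding it from parity + survivors.
survivors = [data_blocks[0], data_blocks[2]]
rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)
print(rebuilt == data_blocks[1])  # True
```

Because XOR is its own inverse, the same operation serves both to create parity and to recover data from it.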

RAID 6

RAID 6, shown in Figure 3-32, is very similar to RAID 5 in that it performs block level striping to
increase performance and capacity and it also uses parity to protect and recover data. The
major difference between RAID 5 and RAID 6 is that RAID 6 provides two sets of parity and can
therefore recover from two failed disks.
RAID 6 is also known as block level striping with double parity.
Common RAID 6 configurations include:
RAID 6 (6+2)
RAID 6 (14+2)
As with RAID 5, other configurations of RAID 6 are possible. However, no matter how many
disks form a RAID 6 set, a maximum of two failed drives can be tolerated before data is lost.

Figure 3-32: RAID 6

Most common RAID levels do not dedicate a parity drive. Instead they use a distributed or
rotating parity scheme as shown in Figure 3-32.

RAID 1+0
RAID 1+0, often referred to as RAID 10 (pronounced RAID ten), is a hybrid of RAID 1 (mirror
sets) and RAID 0 (stripe sets) and brings the benefits of both mirroring and striping.
Figure 3-33 shows how a RAID 10 set is created.

Figure 3-33: RAID 10

RAID 10 is a good option in many situations because it provides good reliability (mirroring) as
well as performance (striping). A RAID 10 array can recover from a failure of multiple drives, as
long as no two failed drives are in the same mirrored pair. For example, a RAID array with 4
drives (the minimum) can recover from failure of two drives in different mirrors. However, the
drawback to RAID 10 is its cost. As with RAID 1, 50% of purchased capacity has to be reserved
for mirroring.
There are other RAID configurations possible, but the most commonly deployed are those
discussed here.
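The capacity trade-offs across the levels discussed above can be summarized in a small helper. The overhead rules are taken from the descriptions in this section; this is an illustrative sketch, not a configuration utility:

```python
def usable_drives(level: str, n: int) -> int:
    """Drives' worth of usable capacity in an n-drive RAID set."""
    if level == "0":
        return n        # striping only: no redundancy, full capacity
    if level in ("1", "1+0"):
        return n // 2   # half the capacity is reserved for mirroring
    if level == "5":
        return n - 1    # one drive's worth of parity
    if level == "6":
        return n - 2    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level, n in [("0", 4), ("1", 2), ("5", 4), ("6", 8), ("1+0", 8)]:
    print(f"RAID {level} with {n} drives: {usable_drives(level, n)} drives' worth usable")
```

For example, RAID 5 (3+1) yields 3 of 4 drives usable (25% overhead), while RAID 10 on 8 drives yields only 4, matching the 50% mirroring cost noted above.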
Scenario: Stay and Sleep
The CEO is concerned about losing reservation data stored in the database. Discuss the
available RAID options and their impact on resiliency, performance and cost.

ProLiant Server Family

The HP ProLiant family provides many innovative advantages. It offers Intel- and AMD-based
servers that are designed for a wide range of uses. HP positions ProLiant servers by two
criteria, line and series, to help customers choose the server that best fits their requirements.

The ProLiant family includes four lines: ProLiant ML, DL, SL, and BL.
The ML, DL, SL, or BL prefix indicates the type of environment for which the
server is best suited.
The ProLiant ML and DL lines are divided into five series scaled in terms of
availability and performance: 100 series, 300 series, 500 series, 700 series,
and 900 series. The DL line also offers multinode servers such as the HP
ProLiant DL2000 server.
You learned about the various server lines in Chapter 2. Now, we will look at the numbering
conventions for the G7 and Gen8 series servers.

ProLiant G7 series
ProLiant ML, DL, and SL lines are divided into series. Each series is defined by performance
level and server availability.

100 series (1P and 2P)

The 100 series offers an affordable server optimized for:
Clustered solutions for high-performance technical computing applications that
support demanding workloads
Entry-level, general-purpose servers

300 series (1P and 2P)

The servers in the 300 series are best suited for applications such as:
File/print and domain servers
Web server functions
Small databases and infrastructure applications

500 series (4P)

Increased availability and performance make the 500 series servers ideal for:
Complex web applications
Large databases
Critical file server applications
Multi-application tasks and server consolidation

700 series (up to 8P)

The scalable ProLiant 700 series servers support up to eight processors with quad-core
processing power. They deliver outstanding flexibility, scalability, and performance for
customers' growing enterprise-class database, consolidation, and virtualization environments.
They are excellent in running the following types of applications:

High performance computing (HPC)

Electronic design automation (EDA)/Semiconductor
Large database applications
Enterprise resource planning (ERP) and customer resource management (CRM)
Life science and material science
Video rendering

900 series (4P or 8P)

The ProLiant 900 series is an example of a scale-up x86 workhorse because it is designed for
superior performance and improved server efficiency and utilization. It is the ideal choice for
enterprise-class database, consolidation, and virtualization environments that need:
Outstanding performance, flexibility, and scalability
Easy integration and management, with all the familiar industry-leading ProLiant management tools

ProLiant DL2000 Multi Node Server (1P and 2P)

The DL2000 Multi Node Server was designed to double the density to maximize data center
floor space, increase performance while lowering energy consumption, and provide flexible
configurations that fit into existing industry-standard racks.
The DL2000 solution consists of up to four independent DL170e G6 servers in the 2U HP
ProLiant e2000 G6 chassis. The servers share power supplies and fans, providing greater
power and cooling efficiencies. The DL170e G6 server is optimized for efficiency, density, and
flexibility. One server node can be serviced individually without impacting the operation of other
server nodes sharing the same chassis.
Features include:
Flexible design for integrated solutions
4-in-1 efficiency
Double the density in an industry-standard design

Other ProLiant model numbering conventions

The BL series follows similar logic with model numbering within a class, where the numbering
reflects the performance scalability features.
The naming conventions for ProLiant servers indicate whether the server uses Intel or AMD
processors. Servers with a zero as the last number in the name (xx0) are based on Intel
processors. Servers with a five as the last number in the name (xx5) are based on AMD
processors.
Gen8 ProLiant server family

With the ProLiant Gen8 servers, HP is realigning the ProLiant server platform naming strategy
around three major areas, as shown in Figure 3-34.

Figure 3-34: Gen8 Categories

These categories are described as follows:

Mainstream and SMB
In the mainstream area, the ProLiant DL500 and DL900 families represent the
scale-up x86 portfolio of rack-mount servers. The ProLiant DL300 family
represents the HP dual-processor family of performance and value-oriented
rack-mount servers.
ProLiant x86 blades include BL600 servers, which represent the HP scale-up
portfolio of server blades, and BL400 servers, which represent the dual-processor
family of server blades.
The ProLiant 200 series servers are performance-oriented hyperscale solutions.
The ProLiant 100 series are more value-oriented hyperscale solutions.
As with the G7 servers, a 5 in the last digit of the model number indicates an
AMD Opteron processor, and a 0 indicates an Intel processor.

In this chapter, you learned:
Servers require architectures that can handle a large number of requests and remain
operational 24x7.
The Intel Xeon and AMD Opteron are processors frequently used on servers.
The NUMA processing architecture allows you to associate DIMMs with a specific processor.
You cannot mix UDIMMs, RDIMMs, and LRDIMMs on a server.
UDIMMs provide faster performance, but support lower capacity than RDIMMs.

LRDIMMs are only available on Gen 8 servers.

PCIe buses can be up to 16 lanes wide.
PCIe 3.0 provides the highest bandwidth.
RAID allows you to combine multiple physical disks into a single logical volume.
A Smart Array controller performs caching and acts as a RAID controller.
The model number of an HP server allows you to determine its form factor, as well as
the type of workload it is designed to handle.

Review Questions
1. Which Intel processor supports the NUMA architecture?
2. You are planning to install UDIMMs on a server with 16 DIMM slots. What is the maximum
memory capacity?
3. How does the bandwidth of a PCIe 3.0 bus compare with that of a PCIe 2.0 bus?
4. For what type of activity does read-ahead caching improve performance?
5. Which type of RAID can protect against failure of two hard disks?
6. Which ProLiant G7 series is most appropriate for an entry-level general purpose server?

1. _____________ is an Intel Xeon processor feature that automatically powers down
unused cores.
2. On an AMD Opteron processor, _____________ buses are used to communicate with I/O
subsystems and between processors.
3. On a DDR3 DIMM, the bandwidth of the internal data path is _____________ bits.
4. The maximum size of an RDIMM is _____________ GB.
5. PCIe 3.0 has _____________ encoding overhead than PCIe 2.0.
6. _____________ maximizes both performance and resilience, but reduces capacity.
7. The ProLiant DL385 G7 server has a(n) _____________ processor.

Essay questions
1. Explain the advantages provided by the NUMA architecture.
2. Compare the RAID configurations shown below. Discuss the resiliency and capacity
provided by each. When discussing capacity, make sure to specify the percentage
available for storage.
RAID 5 (3+1)
RAID 5 (5+1)

RAID 10 - 8 disk configuration

Research assignment
Stay and Sleep
You are researching possible configurations for Stay and Sleep's web server and database
server. You have gathered the following information:
* The minimum memory required for the database server is 12 GB
* The minimum memory required for the Web server is 8 GB
* The database server must be able to scale to 40 GB of RAM
* The web server must be able to scale to 20 GB of RAM
* The company wants to lower power consumption.
Research server options and select a server that will meet the requirements. Use the Memory
Configuration Tool to select an optimal memory package. Save your results to turn in along
with your server recommendation.
Create alternative recommendations for performance and cost.


Chapter 4: Installing a Rack Server

In the last chapter, you learned about the internals of a server. In this chapter, we begin our
discussion about implementing a basic data center. Our focus in this chapter is a rack
configuration and basic server setup.
We will begin by exploring the features of various HP rack series and overview the steps for
installing a rack at a facility. Next, we will discuss power distribution units and rack options.
From there, we move on to discussing the general procedure for installing a rack-optimized
server. We conclude with a discussion of bare-metal configuration, including ROM-based setup
and configuring internal storage.
bare-metal configuration
The configuration performed prior to installing an operating system.


In this chapter, you will learn how to:

Describe the features and options of various rack series.
Identify and describe power distribution units.
Install a rack and its accessories.
Assemble system hardware.
Verify that components were installed correctly.
Manage system firmware.
Perform ROM-based setup.
Configure internal storage.

Comparing Racks
HP has a number of rack models available to meet a wide range of requirements. In this
section, we will compare the features of the following rack series:
HP Value Series Racks
Rack 10000 G2 Series
HP Intelligent Series Racks
Modular Cooling System (MCS)

HP Value Series racks

The HP V142 100 Series rack, shown in Figure 4-1, is a 19" (48.3 cm) Value Series rack that
can hold up to 2000 lbs (907 kg) of static load. It meets the industry standard for 19" racks and
can host all HP rack-mounted devices, as well as third-party equipment that is designed to
mount in a 19" rack.

Figure 4-1: HP V142 100 Series Rack

The HP V142 100 Series rack has the following features:

Front and rear doors
Locks on the front and rear doors
2-part side panels
You can also purchase the following accessories:
Cable management
Rack stabilizer kit
Baying kit
Grounding kit
A rack stabilizer kit is used to provide stability and support when you install, remove, or access
equipment within the rack. A stabilizing kit is also recommended when only a single rack is
installed.
A baying kit is used to join two or more racks. The 100 series baying kit can be used to join two
100 series racks, but it cannot be used to join a 10000 series or 10000 G2 series rack to a 100
series rack.

10000 G2 Series racks

The HP 10000 G2 Series racks, as represented in Figure 4-2, are available in sizes from 14U
to 47U. They have the following features:
Interchangeable perforated front and split rear doors
Cable access panel on the rear door
Perforated rack top with egress slot
egress slot
A slot through which cable can be routed.

Figure 4-2: 10647 G2 Rack

The HP 10000 G2 Series rack can be used to house any HP or industry standard rack-mounted
device. Optional kits can be purchased for baying, grounding, stabilization, cable management,
cooling, and supporting monitors and a console switch. You can also purchase a Monitor/Utility
Shelf kit that allows you to install a shelf inside the rack unit, as shown in Figure 4-3. Such a
shelf is typically used to hold a monitor that is connected to a KVM switch.

Figure 4-3: Rack Utility Shelf

HP Intelligent Series rack

The HP Intelligent Series (i-Series) rack is compatible with all HP rack mounted products,
including ProLiant and Integrity servers, as well as HP Storage products. An empty i-Series
rack is shown in Figure 4-4.

Figure 4-4: i-Series rack

The i-Series rack integrates with iLO to provide better support for monitoring and management
than other racks. The i-Series rack:
Uses 7% less computer room cooling energy.
Communicates the location of each server.
Offers total annual energy savings when used with ProLiant Gen8 servers.
Optimizes cable management accessories and brackets.
Enhances workload placement, enabled by automated system location data
combined with graphical power, temperature, and performance display.
Eliminates manual, error-prone tracking; system administrators no longer need to use
manual, inaccurate, and tedious asset collection and tracking approaches.
Provides 24-hour maximum observed temperature in each rack.
Allows identification of hotspots and poor air flow.
Enables you to drill down to the rack level.
Features six casters to support up to 3000 lbs.
Features a flexible chimney on all racks.

Improves security through front and rear handle lock bars.

Eases manageability and mounting through side panels that ship in three parts.

HP Location Discovery Services

HP Location Discovery Services are software technologies built into i-Series racks that enable a
server to determine where it is in the data center racks and rows, which power distribution units
(PDUs) it is using, and what the thermal and power conditions are like around it.
With Location Discovery Services, ProLiant Gen8 servers self-identify and inventory
themselves, providing the rack identification number and precise U position to HP Insight Control
software, along with power and temperature data.
Location Discovery Services provides detailed server information by location, helping you
avoid hours of tedious manual data entry into spreadsheets. The combination of this location
data with real-time, auto-populated power and thermal data enables optimal workload placement.

HP Thermal Discovery Services

The ProLiant Gen8 family builds on the efficiency achievements of ProLiant G7 servers, which
were the industry's first servers to earn the ENERGY STAR Qualification rating by
implementing Thermal Discovery Services to deliver more compute capacity per watt.
With Thermal Discovery Services, a 3D array of temperature sensors allows server fans to be
precisely controlled. This directs cooling and reduces unnecessary fan power.
With high-efficiency HP Platinum Plus Power Supplies, a 3D Sea of Sensors, and
SmartMemory, you can power and cool 20 ProLiant Gen8 servers for the same cost as 18
ProLiant G6 servers.

HP Modular Cooling System G2

Figure 4-5: HP Modular Cooling System

The HP Modular Cooling System (MCS) G2, as shown in Figure 4-5, is designed for data
centers that have reached the limit of their cooling capability or that need to reduce the effect of
the heat produced by high-density racks within their facilities. The MCS G2 allows the use of
fully populated high-density racks while eliminating the need to increase the facility's air
conditioning. With three times the kilowatt capacity of a standard rack, the MCS G2 extends the
life of the data center considerably. The standard MCS G2 enclosure, shown in Figure 4-6,
consists of a cooling unit and an empty, modified HP 10000 Series G2 rack. The cooling unit
includes three fan controllers that control six high-volume, hot-swappable fans. The heat
exchanger is an air-to-water heat transfer device that discharges cold air to the front of the rack
via a side portal. The heat exchanger receives chilled water from the facility's chilled water
system or a dedicated chilled water unit. It supports cooling two HP 42U racks at a time at
17.5 kW per side or 35 kW in a single rack.
The MCS G2 can operate from a single AC power source. It also supports power redundancy
for facilities that offer AC redundancy through a transfer switch module that accepts AC power
from two sources.

Figure 4-6: MCS G2

MCS G2 air distribution

The MCS G2 supports the front-to-back cooling principle used in most server designs.
An HP 10000 Series G2 rack with an attached HP MCS G2 requires approximately 1.5
times the width and 1.25 times the depth of a standard server rack to allow for the fan and heat
exchanger modules and front and rear airflow.
The MCS G2 evenly distributes cold supply air at the front of the rack of equipment. Each server
receives adequate supply air, regardless of its position within the rack or the density of the rack.
The servers expel warm exhaust air out the rear of the rack. The fan modules redirect the warm
air from the rear of the rack into the heat exchanger. The air is cooled and then circulated to the
front of the rack. Any condensation that forms collects in each heat exchanger module and
flows through a discharge tube to a condensation tray integrated in the base assembly.
For the MCS G2 configured with an expansion rack, the left side panel of the MCS G2 cooling
unit is removed to allow cool air from the heat exchanger to be evenly distributed to both IT
equipment racks. The capability of the MCS G2 to cool 35 kW of IT equipment is, therefore,
evenly divided to 17.5 kW for each of the two racks.
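The capacity split described above is a simple division, which can be checked with a short worked example (the function name below is just illustrative):

```python
# Worked check of the MCS G2 capacity split: 35 kW of total cooling
# capability divided evenly across the racks it serves.
def cooling_per_rack(total_kw, racks):
    """Cooling capacity available to each rack when the side panel is
    removed and the airflow is shared evenly."""
    return total_kw / racks

print(cooling_per_rack(35, 1))  # 35.0 kW for a single rack
print(cooling_per_rack(35, 2))  # 17.5 kW per rack with an expansion rack
```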
For controlled airflow, the MCS G2 enclosure should be closed during normal operation. All
rack space must be either filled by equipment or covered by blanking panels so that the cool air
is routed exclusively through the equipment and cannot bypass through or around the rack.

An Automatic (emergency) Door Release Kit is included with every HP MCS G2. The
Automatic Door Release Kit is designed to open the HP MCS G2 front and rear doors in the
case of a sudden increase in the temperature inside the HP MCS G2. The open doors will
allow the IT equipment to cool using the air from the data center.

Rack Installation
Before you can install components in a rack, you need to install the rack itself. First, access and
read the documentation for the rack unit that you plan to install, the accessories that you need,
and the rack-optimized devices. Installation guides are available from HP Help and Support.
In the United States, HP can make arrangements to have qualified guaranteed service
providers install your rack system, from unpacking the components to routing cables and
running a system test.

Preparing to install
A successful installation requires sufficient preparations. You need to:
Choose the proper location for your rack.
Plan for power and grounding requirements.
Assemble the necessary tools.

Choosing a location
A rack installation requires sufficient airflow to ensure that the equipment inside it can operate
safely and reliably. You can consult the documentation for the servers and other rack-mountable components that you plan to install to determine the Maximum Recommended
Ambient Operating Temperature (TMRA). Remember that because of the heat generated by
electronic components, the temperature inside the rack will be higher than the temperature of
the room outside the rack.
Another primary concern is to ensure sufficient space around the rack. The requirements are to
allow clearance of:
At least 48 in (122 cm) all around the pallet and above the rack to allow removal of
packaging materials.
At least 34 in (86 cm) in front of the rack to allow the door to open all the way.
At least 30 in (75 cm) in the rear of the rack to provide access to components.
At least 15 in (38 cm) around a power supply to facilitate servicing.
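The clearance minimums above can be captured in a small validation sketch. The planned figures fed to it below are example inputs, not values from any real site survey:

```python
# Sketch: validating planned clearances against the minimums listed
# above (values in inches).
MIN_CLEARANCE_IN = {
    "unpacking (all around)": 48,
    "front": 34,
    "rear": 30,
    "power supply": 15,
}

def check_clearances(planned):
    """Return the list of areas whose planned clearance falls short of
    the documented minimum."""
    problems = []
    for area, minimum in MIN_CLEARANCE_IN.items():
        if planned.get(area, 0) < minimum:
            problems.append(area)
    return problems

planned = {"unpacking (all around)": 50, "front": 36,
           "rear": 28, "power supply": 18}
print(check_clearances(planned))  # ['rear'] - only 28 in, needs 30
```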
Another important point is to never block airflow at the front or rear of the rack. Most HP rack-mountable products draw in cool air through the front and exhaust warm air out through the rear
of the rack, as illustrated in Figure 4-7. Therefore, the ventilation apertures in the front and rear
of the rack must not be blocked.
ventilation apertures
Narrow openings that allow air to flow into and out of the rack.

Figure 4-7: Airflow in and out of Devices

Planning for power and grounding

Another critical part of planning your rack installation is to ensure that you can supply sufficient
power to the components that will be installed in the rack. When you plan power, keep in mind
the following guidelines:
Distribute power usage evenly between the branch circuits.
Do not attach components that consume more than 80% of the total current supplied
by a circuit.
Ensure that the AC Voltage Selector Switches are configured to match your AC line voltage.
Make sure to wire your data center in accordance with local and regional electrical codes.
Ensure that all components are properly grounded.
branch circuit
A circuit associated with a specific circuit breaker.
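The 80% guideline above is easy to apply arithmetically. A minimal sketch, using made-up breaker ratings and device draws:

```python
# Minimal sketch of the 80% branch-circuit rule described above.
def max_continuous_load(breaker_amps, derating=0.8):
    """Components on a branch circuit should not draw more than 80% of
    the total current supplied by the circuit."""
    return breaker_amps * derating

def circuit_ok(device_amps, breaker_amps):
    """Check whether the combined device draw stays within the limit."""
    return sum(device_amps) <= max_continuous_load(breaker_amps)

# A 30 A branch circuit may carry up to 24 A of continuous load.
print(max_continuous_load(30))          # 24.0
print(circuit_ok([6.0, 6.0, 6.0], 30))  # True: 18 A <= 24 A
print(circuit_ok([9.0, 9.0, 9.0], 30))  # False: 27 A > 24 A
```

Distributing loads evenly between branch circuits, as the guidelines recommend, keeps each circuit comfortably under this ceiling.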

Assembling the necessary tools

You will need the tools described in Table 4-1 to install a rack and its components.
Table 4-1: Rack Installation Tools



Flat-bladed screwdriver
Phillips screwdrivers
Torx screwdrivers
Adjustable wrench
Allen wrench
Cage nut fitting tool

The cage nut fitting tool is included with the original rack hardware kit.
Phillips screwdriver
A screwdriver that has a tip with four points.
Torx screwdriver
A screwdriver that has a tip in the shape of a six-pointed star.
Allen wrench
A wrench that has a hexagonal surface and is used to tighten or loosen bolts or screws that have a hexagonal socket.
cage nut
A nut that has a winged cage. The wings are compressed to allow the cage to be pressed into square holes in the rack.

If you have battery-operated tools available, use them to make installation easier.

Preparing the rack

Now you are ready to unpack the rack. Verify the contents of the package against the packing

list to ensure that you received all of the components. To prepare the rack for component
installation, perform the following steps:
1. Remove the rack doors.
2. Remove the side panels, if they are installed.
3. Stabilize the rack.

Removing the rack doors

You must remove the front and rear rack doors. To remove the front door:
1. Unlock the door and press the handle release button.
2. Lift the handle to open the door.
3. Remove the top hinge pin, as shown in Figure 4-8.
4. Lift the door up (2) and pull it out to remove it from the bottom hinge (3).

Figure 4-8: Removing the Front Door

To remove the rear doors:

1. Rotate the handle to the right and then pull it to open the door.
2. Open the hinge brackets by pulling up on the top hinge and down on the bottom hinge for
each door (1), as shown in Figure 4-9.
3. Lift the door out and away from the rack (2).

Figure 4-9: Removing the Rear Doors

Store the doors in an upright position in a location where they are protected from damage.

Removing the side panels

If the rack has optional side panels, you must remove them before you can install mounting
brackets or other hardware. The procedure for doing so is shown in Figure 4-10.

Figure 4-10: Removing the Side Panels

To remove the side panels:

1. Unlock the panel (1).
2. Lift up to unhook the side panel from the hangers bolted on the rack frame (2).
3. Remove the side panel from the rack.
As with the doors, store the side panels upright.

Stabilizing the rack

It is essential that you properly stabilize the rack before installing any components. Improper
stabilization can result in personal injury and equipment damage if a component falls out of the
rack or the rack itself falls. The procedure for stabilization differs depending on whether you are
installing a single standalone rack or two racks connected by a baying kit.

Standalone rack
Before you can stabilize a rack, you must ensure that it is situated in its final location. A rack
has leveling feet next to each caster that allow you to compensate for uneven surfaces. Use the
adjustable wrench to extend the leveling feet into the leveling bases. The full weight of the rack
should be on the leveling feet instead of on the casters.
Casters are not designed to bear weight.
You should also attach one full-sized stabilizing foot to the front and one to each side of the
rack. The three full-sized stabilizing feet are included with the Stabilizer Rack Option Kit that is
included with the rack.

Multiple racks
If you have more than one rack, you should connect them with baying kits, as shown in Figure 4-11.

Figure 4-11: Connecting Multiple Racks

There are three mounting brackets in the baying kit. Use two T-30 Torx screws in each bracket
to connect the racks.
You must remove the front and rear doors and the side panels before you can bay racks.
However, if a rack has a rear extension, it does not need to be removed.
rear extension
A kit that increases the depth of the rack to accommodate cabling and PDUs in densely populated racks.

If you install an HP rack between two third-party racks, you will also need to attach a modified
stabilizing foot to the front of the HP rack. The Stabilizer Rack Option Kit contains two modified
stabilizing feet.

Heavy Duty Stabilizer Kit

If you have a stand-alone rack with a single rack-mountable component that exceeds 99.8 kg
(220 lb), or if you have three or fewer bayed racks with a single rack-mountable component that
exceeds 99.8 kg (220 lb), you must use the heavy duty stabilizer. Instructions for installing the
Heavy Duty Stabilizer Kit are available in the HP 10000 G2 Series Rack Options Installation
Guide.
Scenario: Stay and Sleep
You are preparing to install a rack configuration for Stay and Sleep. Discuss the
environmental concerns that you need to address first.

Installing Power Distribution Units

Before you install components in a rack, you need to install the Power Distribution Units
(PDUs) that will supply power to them. We will now look first at the PDU options available from
HP.
HP power distribution units

PDUs provide power to multiple objects from a single source. In a rack, the PDU distributes
power to the servers, storage units, and other peripherals.
HP monitored and modular PDUs provide power to multiple objects from a single source. PDU
systems provide the following advantages over individual power cords:
Address issues of power distribution to components within the computer cabinet.
Reduce the number of power cables coming into the cabinet.
Provide a level of power protection through a series of circuit breakers.
Benefits of HP PDU systems include:
An increased number of outlet receptacles.
Flexible 1U/0U rack-mounting options.
Easy accessibility to outlets.
When choosing PDUs, you must consider the number of C13 and C19 receptacles. These are
the receptacles used to plug in devices. C13 receptacles can supply up to 10 A. C19
receptacles can handle loads up to 15 A. A C13-to-C14 or C19-to-C20 jumper cable is used to
connect a server's power supply to the receptacle in the PDU. The power connections
supported by various HP servers are shown in Figure 4-12.

Figure 4-12: AC Input Connection by ProLiant Server Model
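The receptacle limits given above (C13 up to 10 A, C19 up to 15 A) can be applied with a small helper. This is an illustrative sketch; the device draws are hypothetical examples:

```python
# Sketch: matching a device's power draw to the smallest PDU receptacle
# type that can carry it, using the C13/C19 limits described above.
RECEPTACLE_LIMIT_AMPS = {"C13": 10, "C19": 15}

def pick_receptacle(draw_amps):
    """Return the smallest receptacle type that can carry the load,
    or None if neither fits."""
    for rtype in ("C13", "C19"):
        if draw_amps <= RECEPTACLE_LIMIT_AMPS[rtype]:
            return rtype
    return None

print(pick_receptacle(8))   # C13
print(pick_receptacle(12))  # C19
print(pick_receptacle(20))  # None
```

In practice you would also count the receptacles of each type on the candidate PDU against the number of devices to be plugged in.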

You must also choose a model that supports the right input power plug for your data center.
Figure 4-13 shows the available options.

Figure 4-13: Supported Input Plugs

The following types of HP PDUs are available:
Modular PDUs
Fixed-cord PDUs
Monitored PDUs
Intelligent PDUs
We will now look at the features of each.

HP Modular PDUs
HP Modular PDUs, shown in Figure 4-14, have a modular architecture designed specifically for
data center customers who want to maximize power distribution and space efficiencies in the
rack.

Figure 4-14: Modular PDU

Modular PDUs consist of two building blocks:

Control unit (core)
The Control Unit is 1U/0U. It contains the main power switch, circuit breakers, load
groups, and power on light.
Extension bars (sticks)
You can plug between one and four extension bars into the control unit. They extend
the outlet receptacles along the length of the rack. You can mount extension bars
directly to the frame of the rack or to a rack extension.
Available models range from 16A to 48A current ratings, with output connections ranging from
four outlets to 28 outlets.
Benefits of modular PDUs from HP include:
An increased number of outlet receptacles.
Superior cable management.
Flexible 1U/0U rack mounting options.
A limited three-year warranty.

Fixed-cord PDUs
Cable management and rack airflow are increasing concerns in any rack-mount environment.
The HP Fixed Cord Extension Bars and Fixed Cord PDUs, shown in Figure 4-15, can
significantly help you manage cable clutter in the back of the rack, which results in improved
airflow through the rack.

Figure 4-15: Fixed Cord PDU

These PDUs are designed exclusively for 1U fixed-rail servers. Fixed Cord PDUs offer the
following features, which are different from the modular PDUs:
Output cords are attached to the PDU extension bar (stick), which means one less
possible point of accidental disconnect.
Each output cord is only 13 inches long (33 cm).
Each extension bar has seven output cords.
The Fixed Cord Extension Bars and Fixed Cord PDU are available in two configurations:
Modular PDU core with extension bar sticks
Extension bar sticks only
fixed-rail server
A server mounted on rails that are not designed to slide in and out of the rack.

HP Monitored PDUs
The monitored vertical rack-mount PDUs, shown in Figure 4-16, provide both single- and three-phase monitored power, as well as full-rack power utility ranging from 4.9 kVA up to 22 kVA.

Figure 4-16: HP Monitored PDU

Available monitored PDUs include:

Full-rack models with 39 or 78 receptacles and half-rack versions.
Three-phase models with 12 C-19 receptacles.
Single-phase models with 24 C-13 and 3 C-19 receptacles.

HP Intelligent PDUs
The HP Intelligent PDU (iPDU) is a power distribution unit with full remote outlet control, outlet-by-outlet power tracking, and automated documentation of power configuration. An iPDU can
be recognized by its blue outlets, as shown in Figure 4-17.

Figure 4-17: Rear View of 12-Outlet iPDU

HP iPDUs track outlet power usage at 99% accuracy, showing system-by-system power usage
and available power. The iPDU records server ID information by outlet and forwards this
information to HP Insight Control, saving hours of manual spreadsheet data-entry time and
eliminating human wiring and documentation errors.
HP Insight Control
A management tool that receives data from various devices and can be used to monitor and manage multiple devices on the network.

HP iPDUs provide power to multiple objects from a single source. In a rack, the iPDU
distributes power to the servers, storage units, and other peripherals.
The iPDU uses the same core-and-stick architecture as the HP modular PDU line. The iPDU
monitors power consumption at the core, load segment, stick, and outlet level with unmatched
precision and accuracy. Remote management is built in. The iPDU offers the power cycle
ability of individual outlets on the Intelligent Extension Bars.
An iPDU:
Helps you track and control power that other PDUs cannot monitor, with 99%
accuracy for loads greater than 1 watt.
Gathers information from all monitoring points at half-second intervals to ensure the
highest precision.
Measures current draws of less than 100 mW, so it can detect a new server even
before it is powered on.
Discovers and maps servers to specific outlets, ensuring correlation between
equipment and power data collected, as a function of Intelligent Power Discovery.
The iPDU is the key element of HP Power Discovery Services.
When combined with the HP line of Platinum-level high-efficiency power supplies, the iPDU

communicates with the attached servers to collect asset information for the automatic mapping
of the power topology inside a rack. This capability greatly reduces the risk of human errors that
can cause power outages.
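The monitoring behavior described above can be illustrated with a short sketch. This is not iPDU firmware or an HP API, only a toy example of averaging per-outlet power samples and applying the sub-100 mW detection idea; the sample values are invented:

```python
# Illustrative only: averaging per-outlet power samples the way a
# monitored/intelligent PDU reports usage.
from statistics import mean

# Half-second samples (watts) per outlet over a short window.
samples = {
    "outlet-1": [412.0, 415.5, 409.8, 414.2],
    "outlet-2": [0.05, 0.06, 0.05, 0.07],  # server plugged in but off
}

for outlet, watts in samples.items():
    avg = mean(watts)
    # Draws under 0.1 W (100 mW) can indicate a connected but
    # powered-off device, as described above.
    state = "standby/off" if avg < 0.1 else "active"
    print(f"{outlet}: {avg:.2f} W ({state})")
```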

Power Distribution Rack

The HP Power Distribution Rack (PDR), shown in Figure 4-18, improves power management
in the data center by moving power distribution to the row level.

Figure 4-18: HP Power Distribution Rack

Decentralizing power improves cable management, decreases diagnostic time for problems,
and saves installation costs by reducing the size and number of long power feeds required to
reach from large wall-mounted distribution units. Housed in a single HP 10000 G2 42U Rack,
the PDR also saves floor space and allows you to move heat-generating transformers off the
data center floor to improve cooling.
The HP PDR is capable of delivering 400 amps redundantly. Therefore, it can power several
high-density racks with shorter cable runs than conventional site-level power distribution
systems. Fully redundant inputs and outputs provide dependable power while protecting
valuable IT hardware with high-quality circuit breakers. Individual branch circuit monitoring and
redundant management modules provide the status and power consumption for each attached
device.
Scenario: Stay and Sleep
You are selecting a PDU for a rack. Discuss the advantages and disadvantages of each type
of PDU.

Installing Rack Components

After the rack is stable and the PDUs are installed, you are ready to begin installing
components. Regardless of which components you are installing, you should follow these
general guidelines:
Install the heaviest components at the bottom and work your way up.
Install the bottom components first.
Balance the weight load between bayed racks.
Follow appropriate electrostatic discharge (ESD) protection guidelines.
Use blanking panels in empty slots to ensure proper air flow.
Do not extend more than one component at a time.
blanking panel
A panel used in an empty slot to ensure proper airflow.
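The heaviest-at-the-bottom guideline above amounts to a simple ordering. A sketch, with example component weights that are purely hypothetical:

```python
# Sketch: ordering components heaviest-first so the heaviest land at
# the bottom of the rack, per the guidelines above.
components = [
    {"name": "1U server", "weight_kg": 15},
    {"name": "UPS", "weight_kg": 60},
    {"name": "4U storage shelf", "weight_kg": 40},
]

# Fill the rack from the bottom (position 0) upward, heaviest first.
placement = sorted(components, key=lambda c: c["weight_kg"], reverse=True)
for position, comp in enumerate(placement):
    print(f"bottom slot {position}: {comp['name']} ({comp['weight_kg']} kg)")
```

With bayed racks, the same idea extends to balancing total weight between the racks as well as within each one.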

ESD protection
Static electricity can damage system boards and other devices. Static electricity can be
transferred from your finger or other conductive material. Therefore, you should take the
following precautions:
Transport and store components in static-safe containers.
Place components on a grounded surface before removing them from their
containers.
Avoid touching pins, leads, or circuitry.
Make sure that you are properly grounded before touching static-sensitive
components.
There are several methods of grounding. Use one or more of the following methods when you
handle or install components that are sensitive to static electricity:
Use an ESD wrist strap connected by a ground cord to a grounded workstation or
computer chassis.
Wear heel straps, toe straps, or boot straps and stand with both feet on a conductive
floor or static-dissipating floor mat.
Use conductive field service tools.
Use a portable field service kit with a folding static-dissipating work mat.

Installation steps
The following steps outline the sequence for installing rack-mountable components in a rack.
You should install any 0U devices prior to performing these steps.
1. Use the template included with the component to mark the rack for correct placement of
installation hardware.
2. Install cage nuts into the rack.
3. Prepare and install rails.
4. Install and secure the component in the rack.
5. Attach and route cables.

Marking installation hardware placement

A rack-mountable component includes a template that is used to mark the location of the
mounting hardware on the mounting rails of the rack, as shown in Figure 4-19.

Figure 4-19: Measuring with the Template

Each template has tabs marked with a star. Push them back and place them in the correct holes
in the mounting rails. Match up the pattern indicated on the sides of the template with the
pattern of holes in the mounting rails.
If another component is already installed in the position immediately beneath the one where
you plan to install the component, place the template on top of the previously installed
component and against the front mounting rails.
Use the front of the template to mark the attachment points for brackets, rails, components, or
cage nuts on the front mounting rails, and use the back of the template to mark the attachment
points on the rear mounting rails.

Installing cage nuts

Install the cage nuts on the inside of the mounting rails using the cage nut insertion tool, as
illustrated in Figure 4-20. To do so:
1. Hook the bottom lip of the cage nut in the square rail perforation.
2. Insert the tip of the insertion tool through the perforation and hook the top lip of the cage
nut.
3. Use the insertion tool to pull the cage nut through the hole until the top lip snaps into
place.

Figure 4-20: Cage Nut Installation

Installing rails
There are two types of rails:
Adjustable fixed rails, which are designed to slide into place at initial installation and
rarely be extended.
Sliding rails, which are designed to be slid in and out of the rack frequently.
In general, adjustable fixed rails require you to install the rail into the rack, but they do not
require any component preparation. Sliding rails include a sliding rail assembly, which is
installed on the rack and a component rail, which is installed on a component.
Refer to the component installation documentation for detailed instructions about how to install
the rails for that component.

Installing the cable management arm bracket

If the component requires a cable management arm, attach the bracket to the component using
two 6/32 x 1/4 screws, as shown in Figure 4-21.

Figure 4-21: Installing the Cable Management Arm Bracket

Installing components
After you have installed the necessary rack-mounting hardware, you are ready to install the
component. If a component is heavy, be sure to have sufficient help lifting and stabilizing it
during installation. If the component has a pluggable power supply, you can reduce its weight
by removing it.
If you are installing the component onto sliding rails, use the following steps:
1. Fully extend the sliding rails.
2. Lift the component up and align the component rails with the sliding rails.
3. Press the component rail release latches on either side of the component and slide the
unit all the way back into the rack. You may need to apply some pressure to loosen the
ball bearings.
4. Secure the component by tightening the thumbscrews on the front of the component into the cage nuts.
To install a component onto adjustable fixed rails, simply insert the component and then tighten
the thumbscrews.

Managing cables
A cable management arm can help you route component cables in an orderly manner to reduce
airflow obstruction and decrease the likelihood that a cable will become tangled or accidentally
disconnected. When you slide a component mounted on a sliding rail in and out of the rack, a
cable management arm will contract and extend.
To attach a cable management arm to the bracket you mounted onto the component prior to
installation, use the following steps:
1. Extend the cable management arm and bend the hinged bracket to the right, as shown in
Figure 4-22.
2. Attach the arm to the bracket using two M6 x 12 Phillips screws.

Figure 4-22: Attaching the Cable Management Arm to the Component

3. Align the screw retaining plate behind the rack mounting rail at the rear of the rack and
attach the cable management arm to the rack with two 10-32 x 5/8 screws, as shown in
Figure 4-23.

Figure 4-23: Attaching the Cable Management Arm to the Rack

4. Attach the cables to the component.

Remember to set the input voltage selection switch to the correct position before
attaching the power cord.
5. Bundle the cables, including the power cord.
6. Secure the cables to the extended management arm with the fasteners provided. Make
sure to leave enough slack in the cables so that you can easily bend the arm.
7. Route the bundled cables over the top of the cable management arm and down the cable
conduit, if one is present.
8. Remove the cable access panel from the left rear door, if one is installed.
9. Connect the power cords to the PDU or to a grounded AC wall or floor outlet.
It is important to properly label cables to make configuration changes and troubleshooting easier.

Installing Rack Options

Now you are ready to install the rack options, including the KVM and cooling options. As with
the rack components, you should consult the product documentation for detailed installation
instructions.
It is a good idea to test a KVM by plugging it into a power source before you install it in the rack
and turn it on. When the KVM is powered on, the activity light should be lit.
A KVM can be installed using a standard installation, a cantilever installation, or a side-mount
installation. With a standard installation, shown in Figure 4-24, the brackets are mounted to
each side of the KVM, and the console switch is slid in through the rear of the rack.
each side of the KVM, and the console switch is slid in through the rear of the rack.

Figure 4-24: KVM Standard Installation

With a cantilever installation, the KVM is attached to the rack using cage nuts instead of a rail,
as shown in Figure 4-25.

Figure 4-25: KVM Cantilever Installation

In a side installation, the KVM is attached to the side of the rack using cage nut screws, as
shown in Figure 4-26. In a side installation, it is important for the rear of the KVM to point toward
the ceiling, not the floor.

Figure 4-26: KVM Side Installation

After you have installed the KVM, you are ready to connect the cables to the rear of the KVM,
connect the KVM to the power source, and power it on. The components of an HP KVM Server
Console Switch G2 are shown in Figure 4-27.

Figure 4-27: HP KVM Server Console Switch G2 Components

The cooling options required for your rack will depend on the ambient temperature of the room,
and also on the components that you plan to install in the rack. Some cooling options include:
Fan kit
HP Modular Cooling System (MCS) G2
The procedures for installing cooling components will depend on the component that you
choose. For example, when installing the HP MCS G2, you must ensure that the data center
has a raised floor that can support the load. You also must consider where and how the chilled
water will be provided. You can choose from three potential chilled water choices:
Direct connection to the buildings chilled water system
A dedicated chilled water system
A water-to-water heat exchanger unit connected to a chilled water or building water
system
In a chilled-water system, an external refrigeration system cools water typically to between
40F and 45F (4.4C and 7.2C).
You also must ensure that space and power requirements are met.
For the fan option kit, you must determine the impact on power and rack space requirements,
but you will not need to provide a chilled water system.
Scenario: Stay and Sleep
You are installing a rack. Discuss the importance of proper cable management. Identify best practices.

Configuring the Server

After the rack has been configured, you are ready to begin configuring servers. The general
steps for server configuration are shown in Figure 4-28.

Figure 4-28: Setup Configuration Flowchart for a ProLiant Server with RBSU and an Array Controller

To deploy a ProLiant server, perform the following tasks:

1. Verify and update system and option ROMs.

2. Configure the BIOS.
3. Configure the iLO card.
4. Configure the array controller (if it is not configured automatically during the
boot process).
5. Install the operating system.
6. Install the SPP.
This chapter focuses on the steps defined in gold in Figure 4-28. You will learn about the
SmartStart CD and operating system installation later in this course.

HP ROM overview
Server read-only memory (ROM) is the system component that stores most of the basic server
functionality. HP ROM provides for essential initialization and validation of hardware
components before control is passed to the operating system. The ROM also provides the
capability of booting from various fixed media (hard drive, CD-ROM) and removable media.
HP ROM is digitally signed using the HP Corporate Signing Service. This signature is verified
before the flash process starts, reducing accidental programming and preventing malicious
efforts to corrupt system ROM.
HP ROM performs very early configuration of the video controller, which allows monitoring of
the initialization progress through an attached monitor. If configuration or hardware errors are
discovered during this early phase of hardware initialization, suitable messages are displayed
on the connected monitor. Additionally, these configuration or hardware errors are logged to the
Integrated Management Log (IML) to assist in diagnosis.
ProLiant ROM is used to configure the following:
Processor and chipset status registers
System memory, memory map, and memory initialization
System hardware configuration (integrated PCI devices and optional PCIe cards)
Customer-specific BIOS configuration using the ROM-Based Setup Utility (RBSU)

ROM-Based Setup Utility

The HP ROM-Based Setup Utility (RBSU) is a configuration utility embedded in ProLiant
servers that performs a wide range of configuration activities. The purpose of RBSU is to help
you configure server hardware settings and prepare a server for an operating system
installation. RBSU enables you to do the following:
Configure system devices and installed options
Display system information
Select the primary boot controller
Configure memory options
View and establish server configuration settings during the initial system startup

Modify the server configuration settings after the server has been configured
The default language for RBSU is English.
The ROM on the G7 and Gen8 ProLiant servers contains the functionality provided by the
system partition utilities on older ProLiant servers. RBSU provides the same functions,
eliminating the need for a system partition on the primary drive and the use of boot diskettes.

Default configuration settings

Default configuration settings are applied to the server at one of the following times:
Upon the first system power-up
After defaults have been restored
Default configuration settings are sufficient for typical server operation, but configuration
settings can be modified using RBSU. The system prompts you for access to RBSU with each
power-up. You access RBSU by pressing the F9 key during the POST sequence.
The BIOS Serial Console allows you to configure the serial port to view POST error messages
and run RBSU remotely through a serial connection to the server COM port. The server that you
are remotely configuring does not require a keyboard and mouse.
You can also access RBSU through Remote Console or Integrated Remote Console, as long
as iLO Advanced licenses are installed on the server.

Using RBSU
The main menu of RBSU, shown in Figure 4-29, shows a list of setting categories on the left.
When System Options is selected, information about the server is displayed, including the
model, serial number, product ID, BIOS version, backup BIOS version, amount of RAM, and
processor information.

Figure 4-29: RBSU System Options

You can press the Tab key to display additional information about the server, such as the MAC
addresses for the NICs and whether user-defined defaults have been enabled, as shown in
Figure 4-30.

Figure 4-30: Displaying Additional Information

You can navigate through the menu system using the arrow keys. To enter a settings category,
select an option and press the Enter key. Press the Esc key to display a confirmation to exit, as
shown in Figure 4-31, and then press the F10 key to restart the server with the new
configuration settings. For help about a specific option, press the F1 key.

Figure 4-31: Confirm Exit

RBSU automatically saves settings when you press the Enter key. The utility does not
prompt you for confirmation of settings before you exit the utility. To change a selected setting,
you must select a different setting and press the Enter key.
Be aware that RBSU can look different on different servers. Different versions of the system
ROM can also affect the RBSU display.

Power Management Options menu

The Power Management Option menu, shown in Figure 4-32, has the following options:
HP Power Profile
HP Power Regulator
Redundant Power Supply Mode
Advanced Power Management Options

Figure 4-32: Power Management Options

HP Power Profile
As shown in Figure 4-33, the HP Power Profile option enables the user to select the
appropriate power profile based on power and performance characteristics.

Figure 4-33: HP Power Profile

The available options are described as follows:

Balanced Power and Performance
Provides the optimum settings to maximize power savings with minimal
performance impact for most operating systems and applications.
Minimum Power Usage
Enables power reduction mechanisms that could negatively affect performance.
This mode guarantees a lower maximum power usage by the system.
Maximum Performance
Disables all power management options that could negatively affect performance.
Custom
Combination of user settings that do not match the three pre-set options.

HP Power Regulator
The HP Power Regulator option is used to determine whether processors run to maximize
performance or power efficiency. The possible settings are shown in Figure 4-34.

Figure 4-34: HP Power Regulator Settings

The available settings are described as follows:

HP Dynamic Power Savings Mode
This option automatically varies processor speed and power usage based on
processor use to reduce overall power consumption with little or no impact to
performance. Dynamic Power Savings Mode does not require operating system support.
HP Static Low Power Mode
This option reduces processor speed and power usage while guaranteeing a lower
maximum power usage for the system. Servers with higher processor utilization will
suffer a more severe impact on performance under this option.
HP Static High Performance Mode
Processors run in the maximum power and performance state, regardless of the
operating system power management policy.
OS Control Mode
Processors run in the maximum power and performance state, unless the operating
system enables a power management policy.

Redundant Power Supply Mode

The Redundant Power Supply Mode setting allows you to configure how the system handles
redundant power supply configurations, as shown in Figure 4-35.

Figure 4-35: Redundant Power Supply Mode

When a server is configured for Balanced Mode, power delivery is shared equally between all
installed power supplies. In High Efficiency Mode, half of the power supplies are kept in
standby mode to conserve power usage levels. You can choose High Efficiency Mode (Auto) to
cause the standby power supplies to be selected using a semi-random process, based on the
serial number of the server. As an alternative, you can select all odd or all even power supplies
to be used as standbys.
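The odd/even standby split described above can be modeled as a simple selection rule. This is a hypothetical illustration only — the bay numbering and function name are ours, not HP firmware behavior:

```python
def standby_supplies(bays, mode):
    """Return the power-supply bays placed in standby for a given mode.

    bays -- list of installed power-supply bay numbers (1-based)
    mode -- "balanced", "odd", or "even"
    """
    if mode == "balanced":
        return []                                 # all supplies share the load
    if mode == "odd":
        return [b for b in bays if b % 2 == 1]    # odd-numbered bays on standby
    if mode == "even":
        return [b for b in bays if b % 2 == 0]    # even-numbered bays on standby
    raise ValueError(f"unknown mode: {mode}")

print(standby_supplies([1, 2, 3, 4], "odd"))   # [1, 3]
```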

Advanced Power Management Options

The Advanced Power Management Options menu, shown in Figure 4-36, configures the
following features:
Intel QPI Link Power Management
Minimum Processor Idle Power State
Maximum Memory Bus Frequency
Memory Interleaving
PCI Express Generation 2.0 Support
Dynamic Power Savings Mode Response
Collaborative Power Control
Turbo Boost Optimization

Figure 4-36: Advanced Power Management Options

Most of these features are beyond the scope of this course. For more information about these
features, refer to the HP ROM-Based Setup Utility User Guide at the HP website. A few of these
features are discussed later in this course.

PCI Settings
The PCI IRQ Settings option allows you to view and modify the Interrupt Request Line (IRQ)
settings assigned to various controllers, as shown in Figure 4-37. These include storage device
controllers, the iLO controller, processor and virtual memory, network adapters, and video controllers.

Figure 4-37: PCI IRQ Settings

The PCI Device Enable/Disable option, shown in Figure 4-38, allows you to enable and
disable network adapters and storage controllers.

Figure 4-38: PCI Device Enable/Disable

Modifying boot order

The Standard Boot Order (IPL) allows you to select the order in which storage and network
devices will be checked for boot files during server startup. The default boot order is shown in
Figure 4-39.

Figure 4-39: Default Boot Order

To change the boot order, select a device and press the Enter key. You will be prompted to
select the new boot number for the device, as shown in Figure 4-40.
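Assigning a new boot number to a device amounts to moving it to a new position in the IPL list. The sketch below models that reordering; the device names and function are illustrative, not taken from RBSU:

```python
def set_boot_position(ipl, device, position):
    """Move `device` to the 1-based `position` in the boot order list."""
    order = [d for d in ipl if d != device]   # remove the device from its slot
    order.insert(position - 1, device)        # re-insert at the new boot number
    return order

# Illustrative default IPL order (device names are examples only)
default = ["CD-ROM", "Floppy", "USB", "Hard Drive", "PXE NIC"]
print(set_boot_position(default, "Hard Drive", 1))
```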

Figure 4-40: Setting the IPL Device Boot Order

The Boot Controller menu option allows you to select the order in which storage controllers
are accessed when the server searches for boot files. As with the IPL devices, you can change
the order by selecting a device and pressing the Enter key, as shown in Figure 4-41.

Figure 4-41: Boot Controller Order

Date and Time

You can use the Date and Time menu option to configure the server's clock, as shown in
Figure 4-42.

Figure 4-42: Date and Time

System Default Options menu

The System Default Options menu, shown in Figure 4-43, allows you to restore a default configuration.

Figure 4-43: Restore Defaults

The three choices are described as follows:

Restore Default System Settings
Resets all configuration settings to their default values. All RBSU changes that have
been made are lost.
Restore Settings/Erase Boot Disk
Resets the date, time, and all configuration settings to default values. Data on the
boot disk drive is erased, and changes that have been made are lost.
User Default Options
Enables you to define custom default configuration settings. When the default
configuration settings are loaded, the user-defined default settings are used instead
of the factory defaults. To save the configuration as the default configuration,
configure the system and then select User Default Options. You will be prompted to
either Save User Defaults or Erase User Defaults, as shown in Figure 4-44. Select
Save User Defaults. You will be prompted to confirm your action.

Figure 4-44: Saving a Configuration

Scenario: Stay and Sleep

You are configuring the server that will run SBS. Discuss the advantages and disadvantages
of various power options you could configure using RBSU.

Configuring Storage Subsystems

After configuring the system BIOS, you can configure a storage subsystem differently from the
default by using Option ROM Configuration for Arrays (ORCA) or the Array Configuration Utility
(ACU).
ORCA supplies basic configuration settings during initial setup and assists users
who have minimal requirements.
ACU provides a more comprehensive set of configuration options than ORCA. It can
be used for initial setup and for modifying the configuration.
Although most recent HP Smart Array controllers can be configured using ORCA or ACU, older
HP Smart Array controllers support only ACU.

Option ROM Configuration for Arrays (ORCA)

ORCA is a basic ROM-based configuration utility that runs automatically during initial startup. It
executes from the option ROM that is located on the array controllers. ORCA can also be run
manually. To do so, on a ProLiant DL360, press the Ctrl+S keys during startup to enter system
configuration (Figure 4-45) and then press the F8 key to launch the ORCA menu-driven
interface or the F6 key to launch the ORCA command-line interface.

Figure 4-45: Configuration Menu

The Main Menu for the menu-driven ORCA interface is shown in Figure 4-46.

Figure 4-46: ORCA Main Menu

The ROM-based ORCA does not require diskettes or CDs to run and allows you to:
View the current logical drive configuration.
Create, configure, and delete logical drives and assign an online spare for the
created logical drives.
Configure a controller as the boot controller if it is not set already. ORCA enables you
to set the boot controller order during initial configuration only. When more than one
controller exists, other controllers do not have to be array or Smart Array controllers.
Specify a RAID level.
Register a Smart Array Advanced Pack license.
Manage cache settings.
Because ORCA is designed for users who have minimal configuration requirements, ORCA
does not support the following:
Drive expansion
RAID level migration
Setting the stripe size or controller settings

If the boot drive has not been formatted and the boot controller is connected to six or fewer
physical drives, ORCA runs as part of the auto-configuration process when the new server is
first powered up. During this auto-configuration process, ORCA uses all of the physical drives
on the controller to set up the first logical drive.
The RAID level used for the logical drive depends on the number of physical drives:
One drive = RAID 0
Two drives = RAID 1+0
Three or more drives = RAID 5
If the drives have different capacities, ORCA locates the smallest drive and uses the capacity of
that drive to determine how much space to use on each of the other drives.
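The auto-configuration rules above — default RAID level by drive count, and sizing every drive to the smallest drive's capacity — can be sketched as follows. The function names are ours; this only models the behavior described in the text (which applies when the boot drive is unformatted and six or fewer drives are attached):

```python
def orca_default_raid_level(num_drives):
    """Default RAID level ORCA picks for the first logical drive,
    based on the number of physical drives on the boot controller."""
    if num_drives == 1:
        return "RAID 0"
    if num_drives == 2:
        return "RAID 1+0"
    return "RAID 5"          # three or more drives

def per_drive_capacity(drive_sizes_gb):
    """ORCA uses the smallest drive's capacity on each of the other drives."""
    return min(drive_sizes_gb)

print(orca_default_raid_level(2))             # RAID 1+0
print(per_drive_capacity([146.8, 300.0]))     # 146.8
```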
If the boot drive has been formatted, or if there are more than six drives connected to the
controller, you are prompted to run ORCA manually. For more complex configurations, use
ACU, which has more features than ORCA and allows you to configure more parameters.
The default configuration for a system with two drives is shown in Figure 4-47.

Figure 4-47: Default Configuration with Two Physical Drives

You can view details of the logical drive configuration by pressing the Enter key. As you can
see in Figure 4-48, there are two SAS HDDs in the configuration. Each has a capacity of 146.8
GB, so when they are configured in a RAID 1+0 configuration, half the capacity is required for
resilience and the other half is used for storage.
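The capacity split for this mirrored pair can be checked directly. A minimal sketch using the figures from the example above:

```python
# Two 146.8 GB drives mirrored as RAID 1+0: half the raw capacity
# holds the mirror copies (resilience), half is usable storage.
drive_gb = 146.8
drives = 2

raw = drive_gb * drives      # 293.6 GB raw
usable = raw / 2             # 146.8 GB usable
print(usable)
```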

Figure 4-48: Logical Drive Details

Deleting logical drives

You can also use ORCA to delete a logical drive. To do so, use the up and down arrow keys to
select Delete Logical Drive, as shown in Figure 4-49 and press the Enter key.

Figure 4-49: Delete Logical Drive

The next screen displays the available logical drives. Select one using the up and down arrow
keys and then press the F8 key to delete the drive.
You will be warned that deleting the logical drive will result in complete data loss, as shown in
Figure 4-50.

Figure 4-50: Warning

Press the F3 key to delete the logical drive or the Esc key to cancel. If you press the F3 key, a
message showing that the configuration has been saved will be displayed, as shown in Figure 4-51.

Figure 4-51: Configuration Change Confirmation

Press the Enter key to return to the main menu.

Creating logical drives

To create a logical drive, select Create Logical Drive from the Main Menu screen, as shown in
Figure 4-46, and press the Enter key.
A screen like the one shown in Figure 4-52 will be displayed.

Figure 4-52: Create Logical Drive

A check in a box indicates a configuration setting for the logical drive that will be created when
you press the Enter key. You can select a drive or configuration setting by navigating to that
setting using the Tab key to move between rectangles and the up and down arrow keys to
move between items within a rectangle. Press the Space Bar to toggle the selected setting.
Toggling a setting will sometimes cause other settings to be changed if the new setting would
make the configuration invalid. For example, if you remove one of the drives from the volume,
the RAID configuration will automatically be changed to RAID 0, as shown in Figure 4-53,
because RAID 1+0 cannot be performed with only a single drive.

Figure 4-53: Settings Changed

When you finish configuring the logical drive, press the Enter key. You will be prompted to
confirm your change, as shown in Figure 4-54. Press the F8 key to save the configuration or the
Esc key to cancel.

Figure 4-54: Confirmation

Press the Enter key to return to the main menu.

Selecting the boot volume

The boot volume is the logical drive that the server will check for boot files during startup. You
can configure a logical drive as the boot volume by choosing Select Boot Volume, as shown
in Figure 4-55, and then pressing the Enter key.

Figure 4-55: Select Boot Volume

You will be prompted to select either Direct Attached Storage or Shared Storage. Direct
Attached Storage (DAS) is a drive attached directly to the server. Shared Storage is a storage
array that is attached to the network. For this example, we will use Direct Attached Storage, as
shown in Figure 4-56.

Figure 4-56: Direct Attached Storage

Press the Enter key to select Direct Attached Storage. The screen shown in Figure 4-57 will
show the logical drives that are configured as Direct Attached Storage. In this example, only
one logical drive exists. To configure it as the boot volume, press the Enter key.

Figure 4-57: Logical Drives Configured as Direct Attached Storage

Once again, you will be prompted to confirm your change, as shown in Figure 4-58. Press the
F8 key to save the volume as a boot volume or the Esc key to cancel the change.

Figure 4-58: Confirmation

After you confirm the change, you are prompted to either press the F8 key to boot from the LUN
that you configured as the boot drive or press the Enter key to go back to the main menu, as
shown in Figure 4-59.

Figure 4-59: Boot Selection Saved

When you are finished using ORCA, press the Esc key to exit.
Scenario: Stay and Sleep
You are configuring the server that will run SBS. The server has three SAS drives installed on
a single Smart Array controller. All of the drives have the same capacity.
What is the default configuration that ORCA will create? Which other configurations would be possible?

In this chapter, you learned:
A baying kit is used to join two or more racks.
A rack stabilizer kit is used to provide stability and support.
A monitor/utility shelf kit allows you to install a shelf inside a rack unit.
The iSeries rack uses less energy and simplifies monitoring and management.
The Modular Cooling System G2 has a heat exchanger that uses chilled water to
provide extra cooling.
You need to leave sufficient space around a rack to ensure proper airflow.
Before installing rack components, you need to remove doors and side panels and
stabilize the rack.
A ProLiant model power supply can use either a C14 or C20 AC input connection.

The following PDUs are available:

You can install a KVM as a cantilever installation or a side-mount installation, in
addition to a standard installation.
The RBSU is a configuration utility that is embedded in ProLiant servers and can be
used to configure boot order, power management, and system defaults.
The first time a server starts up, ORCA configures a default logical drive configuration
based on the number of drives attached to a Smart Array controller.

Review Questions
1. Which type of option kit is recommended when you have only a single rack?
2. What is the purpose of a blanking panel?
3. Are casters designed to bear weight?
4. What are the two building blocks of a modular PDU?
5. How can you launch RBSU?
6. How can you configure a customized default that can be used to revert a system to a
specific configuration?
7. What is the default configuration for a server that has four hard disk drives?

1. A(n) ___________ is used to join two or more racks.
2. Most servers draw in ______________ air from the ___________ and expel __________
from the ___________.
3. A 42U rack has 30U populated. You need to use ___________ to ensure proper airflow.
4. A(n) ___________ circuit is one that is associated with a specific breaker.
5. The full weight of the rack should be on the ___________.
6. When you are installing 1U servers, using ______________ PDUs can improve airflow
and reduce the chance of accidental disconnect.
7. A(n) ___________ is designed to slide into place at initial installation and rarely be moved.
8. When you install a KVM in a side-mount installation, the rear of the KVM should face the
___________.
9. Pressing the F8 key during startup launches ___________.

10. When a server with five drives attached to a Smart Array controller is first booted, one
___________ logical drive is created.

Identify the component

For each component shown, write its name and purpose.

Essay question
You are assessing a customer's site for installation of a rack enclosure and several rack components.


1. Describe the desired characteristics of the room that will accommodate the rack.
2. Will you be able to install the rack yourself? Why or why not?
3. List the steps that you will take to install the rack, a server, a PDU, a keyboard, and a monitor.
4. What steps will you take to ensure sufficient airflow?
5. What steps would you take to configure a server to boot from a USB drive, then a CD-ROM, and then the hard disk drive?

Scenario question
Scenario: FI-Print
You are planning to install a rack configuration for FI-Print.
Research racks and PDUs on the HP website.

1. Write an essay describing the advantages provided by an iSeries rack and Intelligent PDUs.


Chapter 5: Solution Stacks

Up to now, we have focused primarily on server hardware, but hardware is only part of the
picture. A server can only function when the physical components that make up a server are
merged with the software that provides a server with the instructions that tell it what to do. So, a
server can only truly become a useful part of your network solution once it has software: an
operating system and server applications. Server software will be our main focus starting with
this chapter.
The operating system is what brings the computer to life. Server applications are the reason
that servers are deployed and sit at the core of network solutions. Server solutions provide the
necessary support for everything from user authentication to business-critical applications.
In this chapter, we will look at two families of server solutions. The chapter first briefly
addresses available solutions as an introduction to the topic. We then take a look at operating
system choices, specifically Windows Server and Linux, including licensing models.
We will next look at Microsoft Windows Server, exploring Windows Server editions, license
requirements, and the more common solution applications. After that, we move on to open
source and the Linux operating system. Many of the same topics we addressed in our
discussion of Windows Server editions will be revisited in our discussion of Linux, including
popular Linux distributions, the open source licensing model, and Linux-based solutions.
Finally, we will address how you should select an appropriate solution for your environment.

In this chapter, you will learn how to:
Identify Windows Server editions.
List and describe popular Linux distributions.
Compare Windows licensing and open source licensing options.
Compare proprietary and open source solutions.

About Server Solutions

Server solutions are a critical part of any network solution. Earlier in this course, you were
introduced to several server applications, including:
Network Infrastructure servers
Directory service
File and print servers
FTP servers
Web servers
Proxy servers
Database servers
Messaging servers
Authentication servers
Terminal servers
Virtual server host
In this chapter, we look at some of these technologies in more detail and provide detailed
information about some specific products. We will be exploring solutions for:
Directory services (Active Directory and eDirectory)
Web services (IIS and Apache)

Mail services (Exchange and Sendmail)

Database services (SQL Server and MySQL)
Collaborative services (SharePoint)
File and print services
Notice that two server applications are available for most of the items in the above list. In many
cases, you can choose between proprietary commercial software from the Microsoft
Corporation or open source products from various sources.
proprietary software
Software that is exclusively licensed under the rights of the software's copyright holder.
open source
Software that is made available for use or modification.

It should be noted that many of these server solutions, like web services and file and print
services, come bundled as part of the server operating system. Others, like database and mail
services, are typically purchased separately, though not always.

Commercial vs. open source

Before we start to look at server solutions in detail, we need to spend a little time assessing
software licensing models. Most people are more familiar with the proprietary commercial
software model. This model includes products like Microsofts Exchange and SQL Server.
With proprietary software, the manufacturer holds the exclusive license and retains complete
control over the software. You purchase a right to use the software but not to modify it. With
these rights, you can customize the software to some extent to meet your needs, but you are
limited to configuration parameters provided to you by the manufacturer and, in some cases,
add-ons that run on top of the software. You are not allowed to make any modifications to the
underlying software code.
Many people have a misconception about what open source software is. It is often equated with
freeware and shareware software. While freeware and shareware are often open source,
server-level open source applications are commercial applications, just under a different
licensing model than proprietary software.
freeware
Software that can be freely used and distributed without any charge.
shareware
Software that is freely distributed, but users are expected to make a donation for its use.

Proprietary software and open source software have one major difference: With open source
software you have access to the source code, whereas with proprietary software, you do not.
You can study open source software code, and you are allowed to make changes and
improvements to it. However, if you do make changes and distribute the modified software, you
must also pass on the ability to modify the code to those who purchase the software that
contains your modifications.
Some open source products are developed by commercial manufacturers, such as SUSE
Linux, which is sold by and licensed through Novell, Inc. Others are developed in a
collaborative model that does not involve commercial enterprise.

Operating Systems
At the core of any computer is its operating system. The operating system is the software that
controls the computer and manages its interactions with the user and with connected devices.
An amazing amount of functionality is built into modern operating systems, from user interface,
to file management, to network communications. Much of this functionality started out as
separate applications, but these separate applications were gradually built into the operating system.
By its classic definition, an operating system allocates hardware resources under
software control.
As mentioned earlier, this added functionality has included integration of server solution
applications. Without exception, a computer operating system provides support for file and print
services. You will find directory services and web services as part of Windows Server, as well
as many Linux distributions.
Linux distribution
Refers to different Linux releases from various manufacturers, development groups, and individuals.

We will start our look at server solutions by examining operating system options.

Windows Server
Windows Server was designed from the ground up as a network operating system and a
platform for server applications. It has evolved through many generations over the years and
has grown into a powerful, flexible, and mature operating system. In most installation scenarios,
Windows Server provides an easy-to-use GUI desktop environment (Figure 5-1).

Figure 5-1: Windows Server 2008 R2 Desktop

The available functionality varies somewhat by Windows Server edition. Also, many features
(referred to as server roles), such as IIS, are not enabled by default. However, they can be
enabled during or after setup. The Add Roles Wizard is provided to lead you through the
process of configuring server roles (Figure 5-2).

Figure 5-2: Add Roles Wizard

We will now take some time to examine the available Windows Server 2008 R2 editions and
the capabilities of each.
Only enable those roles that are actually needed on the server. It is strongly suggested
that you disable unused and unneeded services and features as a security measure. Disabling
unused and unneeded services will also make more resources available to critical applications
and help improve server performance.

Windows Server editions

The fact that Windows Server is available in different editions is one of its strengths. You can
select an edition best matched to application requirements. The price structure is such that the
fewer features you need, the less the server operating system will cost.
Windows Server 2008 R2 has three general-purpose editions designed to act as the working
backbone of your server solutions. Their general-purpose nature makes them well suited for a
wide variety of solution scenarios, and just one server can often take on multiple roles.
Currently supported server editions are:
Windows Server Standard is a full-featured server platform that can meet a wide
variety of solution needs. Standard edition includes bundled support for web
services, virtualization, remote applications, and remote management. Standard
edition's communication capabilities also make it an optimal choice for remote office
applications. This server edition also has a server core installation option that installs
only those capabilities necessary to support targeted applications.

server core installation

Installation option that installs Windows Server without a desktop interface and with a reduced attack surface.

Windows Server Enterprise is an enhanced server platform that includes all the
functionality of Standard edition. Enterprise edition is designed to meet the needs of
mission-critical applications like database and messaging applications. Enterprise
edition includes features that help ensure improved uptime and enhanced scalability
through failover clustering configurations that can include up to 16 nodes.
failover clustering
Configuration in which one computer can take over automatically for a failing computer.

Windows Server Datacenter is Microsoft's enterprise-class server solution platform. It
expands the capabilities of Enterprise edition to support even more computing
power, as well as greater availability and scalability. Datacenter edition is designed
and optimized for large-scale virtualization applications. It includes unlimited
virtualization rights, which means you could, in theory, run an unlimited number of
Hyper-V virtual machines. A single server can support up to 256 logical processors
(up to 64 physical processors) and up to 2 TB of RAM. The actual number of virtual
machines you can support will depend on system resources and the resources
required by each of the virtual machines.
virtual machine
Software that enables a single physical computer to run multiple independent operating systems.
Effectively a computer within a computer, emulating a separate physical computer.

In addition to the general editions, Microsoft offers three special purpose editions. These target
specific solution requirements. The supported special purpose editions are:
Foundation
Windows Server Foundation is an entry-level solution operating system that includes
file and print sharing, remote access, and security. Foundation is designed to be a
first server and a platform to
support small business applications. The distribution model for Foundation differs
from other editions in that individuals cannot purchase it directly. It is available
through an original equipment manufacturer (OEM) only, typically bundled with new
equipment sales.
High Performance Computing (HPC)
As the name implies, Windows Server HPC is designed to provide a high-powered
computing platform to those who need it, such as engineers and analysts. This is
accomplished by configuring multiple servers in an HPC cluster, in which each
server performs specific tasks. This gives you a way to leverage off-the-shelf
computer hardware for applications that previously required specialized
supercomputing hardware.
Web Server
Windows Server Web Server is designed specifically as a platform to host websites,
web applications, and web services. Web Server comes with the necessary

infrastructure technologies built-in, like IIS 7.0, ASP.NET, and the Microsoft .NET
Framework. It provides a platform that is both easy to use and easy to manage while
helping keep related costs to a minimum. Because of its specialized nature, some
Windows Server functionality is not supported.
The different Windows Server 2008 R2 editions have some variations in their capabilities.
Table 5-1 gives you a partial list of server functionality, including the edition on which each
function is supported.
Table 5-1: Windows Server Functionality

Server role | Embedded feature | Supported on
Directory server | Active Directory | Standard, Enterprise, and Datacenter edition. Limited support on Foundation and HPC.
File server | File services | Enterprise and Datacenter edition. Limited support for Distributed File System (DFS) on Standard, Foundation, and HPC.
Print server | Print and Document Services | Foundation, Standard, Enterprise, and Datacenter edition. Limited support on HPC.
Network services | DHCP and DNS | Supported on all editions, except Web edition does not support DHCP server.
Directory server | Active Directory Federation Service | Foundation, Standard, Enterprise, and Datacenter edition. Active Directory Federation Service supported on Enterprise and Datacenter only.
Web and FTP server | Internet Information Services (IIS) | All editions.
Remote access | Routing and Remote Access | Standard, Enterprise, Datacenter, Foundation, and HPC.
Terminal server | Remote Desktop Services (RDS) | Enterprise and Datacenter. Supported, with limited connections, on Standard, Foundation, and HPC.
Virtualization host | Hyper-V | Standard, Enterprise, Datacenter, and HPC.

Distributed File System (DFS)

File management system designed to ease user access to geographically dispersed files.

Additional functionality is available through server applications, including those applications
available from both Microsoft and other manufacturers.
Exchange Server and Microsoft SQL Server are supported on the Standard, Enterprise,
and Datacenter editions.
Microsoft also offers an additional server solution targeted directly at the small business,
Windows Small Business Server (SBS).

Windows SBS
Windows Small Business Server (SBS) 2011 is designed to meet the server platform
requirements of most small businesses in a cost-effective package. SBS is designed to be easy
to deploy and maintain.
Windows Server 2008 R2 is the core operating system in SBS. SBS comes with server
applications preinstalled and preconfigured. Management is simplified through available
management consoles (Figure 5-3).

Figure 5-3: Windows Small Business Server 2011 Standard

Preinstalled applications include support for email, contacts, and calendars. SBS provides
Internet connectivity and the ability to maintain internal websites. It also leverages the power of
the cloud, integrating with both collaboration and line-of-business (LOB) applications.
In addition to giving small businesses a platform for server applications, SBS helps improve
client PC management through automatic PC backups. It also monitors client computers for
security health, including compliance with updates and antivirus status.
The specific features SBS provides vary somewhat by edition. The Essentials edition supports
up to 25 users. It also offers support for:

Cloud services
PC Backup
Active Directory
File and print server roles
The Essentials edition is targeted at small businesses and home networks. Users need
minimal IT experience and knowledge to deploy and maintain Essentials edition.
The Standard edition supports all of the functionality of the Essentials edition, plus some
additional features. The Standard edition can support up to 75 users and other client devices. In
addition to the features provided through the Essentials edition, Standard edition supports:
Microsoft Exchange Server 2010 with SP1
Microsoft SharePoint 2010 Foundation
Microsoft SQL Server 2008 R2 Express
Windows Server Update Services (WSUS) 3.0
Remote Web Access
Standard edition is meant for use as the foundation of a small business network. Its intended
users are expected to have some understanding of strategic uses of technology.
Windows Small Business Server cannot be used as a Hyper-V host platform.
The Windows SBS Premium Add-on lets you deploy an additional server in your SBS network.
In addition, Premium Add-on edition supports:
Active Directory
SQL Server 2008 R2 for Small Business
Hyper-V virtualization
Remote Desktop Services
Premium Add-on gives you a cost-effective way of deploying additional servers in a Windows
SBS network. Premium Add-on can be deployed with Standard or Essentials edition.

Windows licensing
Microsoft has two basic license options. These are:
Server license
A single server license supports any users connecting to the server up to the
maximum number available for the edition.
Client access license (CAL)
A CAL is required for each client connection to the server.
There are two types of CALs. One option is per-device licensing. Per-device licensing allows
users to connect from only one device, but any user can connect through that device. The other
is per-user licensing. Per-user licensing allows the licensed user to connect from any number of
devices.

Windows Server 2008 R2 Foundation and Windows SBS 2011 Essentials are both licensed
under a server license. Windows Server 2008 R2 Foundation supports 1 to 10 simultaneous
users. Windows SBS 2011 Essentials supports 1 to 25 users.
Products requiring CALs are known as volume licensing products. Windows Server 2008 R2
Standard, Enterprise, and Datacenter editions, as well as Windows SBS 2011 Standard
edition, require CALs for user access. In addition, Premium Add-on for SBS requires client
licenses for SQL Server access.
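The arithmetic behind choosing a CAL model is straightforward: count named users for per-user licensing, count devices for per-device licensing, and license whichever is smaller for your usage pattern. The sketch below illustrates this; the counts are invented and no actual Microsoft pricing or licensing terms are implied.

```python
# Hypothetical illustration of the two CAL models. Counts are invented;
# actual licensing terms and prices vary by product and agreement.

def cal_counts(num_users, num_devices):
    """Return (per-user CALs, per-device CALs) needed for full access."""
    per_user = num_users      # one CAL per named user, connecting from any device
    per_device = num_devices  # one CAL per device, used by any number of users
    return per_user, per_device

# Example: 30 shift workers sharing 12 kiosk PCs favor per-device licensing.
per_user, per_device = cal_counts(30, 12)
print(per_user, per_device)  # 30 12
```

For a mobile sales force, where each user connects from a laptop, a phone, and a home PC, the comparison reverses and per-user licensing is usually the better fit.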
Microsoft provides an online License Advisor that lets you research licensing products,
programs, and pricing (Figure 5-4).

Figure 5-4: Microsoft License Advisor

When you choose the Quick Quote option, the first page of the License Advisor prompts you for
some basic information, such as licensing program and product information. You can also
select to generate a full quote by product or license program. You can also choose a guided
quote, which can launch a wizard to lead you through the process (Figure 5-5) or allow you to
view products associated with different IT solution scenarios.

Figure 5-5: Guided Quote

If you choose the Step-by-Step Wizard option, you are prompted for program information about
your organization and your product license needs (Figure 5-6).

Figure 5-6: Program Selection

Next, you are prompted for your organization type (Figure 5-7).

Figure 5-7: Organization Type

The next screen is the product selection screen. You can pick the product or products that you
want to license (Figure 5-8).

Figure 5-8: Product Selection

Next, you are prompted to supply CAL information, including the type of license and the number
of CALs you need (Figure 5-9).

Figure 5-9: Core CALs

The next screen reviews your selections and displays a quote based on the selections that you
have made (Figure 5-10).

Figure 5-10: License Quote

The final screen displays a report based on your selections. You can save the report online,
download the report, or print a copy (Figure 5-11).

Figure 5-11: Final Report

You have the option of generating a report that is based on multiple products so that you can
get an idea of your total costs over time.

Open source and Linux

The Linux operating system is based on an operating system known as UNIX that was
originally developed before the advent of PCs. In its original form, Linux closely mirrored UNIX
and its command-line user interface.
Linux started out with a reputation as a hobbyist and computer enthusiast operating system with
limited appeal. However, Linux has strengths and benefits that helped it move quickly into the
mainstream, including:
Open source license
The open source license allows you to modify the Linux source code and even distribute
your customized version, within the limits of the GNU General Public License.
GNU General Public License
The license model on which the Linux open source license is based.

Mature, familiar technology

UNIX was used widely throughout the industry and on college campuses before the
introduction of the PC and Microsoft operating systems. Because of this familiarity, a
large base of users, programmers, and administrators were already available when
Linux was introduced.
Multiple distributions

Because Linux was released from its inception under an open source license, a very
large number of distributions (different flavors of Linux) have been developed,
supporting nearly any computing scenario you can imagine.
Anti-Microsoft bias
Some individuals and groups in the computer industry have an anti-Microsoft attitude.
Linux has provided a non-Microsoft option for PCs and other microcomputers.
GNU is another UNIX-like operating system, developed by the Free Software
Foundation (FSF) starting in 1983.
Most modern Linux distributions include a GUI desktop interface, giving it a look and feel very
similar to the Microsoft Windows family (Figure 5-12).

Figure 5-12: Linux Desktop

In addition, most Linux distributions come with a wide variety of built-in applications that often
exceed those that are supplied with Windows. However, when using Linux, a Windows user
will still find many of the applications familiar because some of these applications, such as Adobe
Reader, are available in both Linux and Windows versions (Figure 5-13).

Figure 5-13: Sample Bundled Applications

Open source versions of server solutions are also readily available. These provide much the
same functionality as Windows and other proprietary solutions (Table 5-2).
Table 5-2: Open Source Solutions

Function | Open source solution
Web server | Apache
Database server | MySQL, Postgres
Directory service | Novell eDirectory
File sharing | Network File System (NFS)

Many of these solutions, like Apache and MySQL, are available in both Linux and Windows
versions. However, in this case, both versions are distributed under an open source license.
Many even have free versions available that support a subset of the functionality supported by
paid versions.
Several manufacturers have purchased the rights to produce and distribute some of the most
popular Linux distributions as commercial (but still open source) products. Many manufacturers,
including HP, offer product options with Linux preinstalled and preconfigured. We will now
briefly examine different Linux distributions.

Linux distributions
If you do an Internet search for Linux, you will find a staggering variety of Linux distributions,
with implementations ranging from freeware desktop to high-end professional servers. This
variety has helped lead to the rapid growth in Linux use in all sorts of situations. You can find
distributions that will run on nearly any computer. Linux has helped to breathe new life into
older computers with limited hardware resources.
You can find distributions that:
Operate in a command-line, rather than GUI, environment.
Run on obsolete hardware, including non-PC computers.
Boot and run as a fully functional operating system from CD or USB flash drive.
Are targeted at particular groups or organizations, including branding with logos and
other information.
Are designed for specialized applications like process control and manufacturing.
During this course, we will focus on commercially produced and distributed Linux distributions.
Some of the most popular include:
Red Hat Enterprise Linux (RHEL)
Novell SUSE Linux Enterprise Server (SLES)
Oracle Linux
Examples and screen samples used in this course are based on Novell SUSE Linux
Enterprise Server (SLES) 11. SUSE Linux severed its ties with Novell in 2011, but HP
continues to provide SUSE-based solutions.
HP has Linux solutions based on both RHEL and SLES. These solutions range from hand-held
devices up through server clusters. HP also provides information about Linux compatibility for
its server platforms on its website.
A portion of the compatibility matrix for SLES is shown in Figure 5-14.

Figure 5-14: SLES DL Compatibility Matrix

Other Linux distributions can also run on HP platforms, but they do not receive official
support from HP.
In addition to preconfigured Linux solutions, you can install a Linux server solution on your
existing HP hardware. When doing so, you need to make sure that the destination computer
meets or exceeds the Linux distribution's minimum system requirements. You also need to
ensure that system devices are compatible with Linux, including:
Storage devices
Tape devices
Network adapters
HP also provides several online references to help you deploy and maintain your Linux
solution. You can also access a list of open source organizations (Figure 5-15).

Figure 5-15: Open Source Resources

These links give you access to additional resources, including technical documentation and
development assistance, as well as a vast user community.

Open source licensing

Open source licensing can be a bit confusing. The first thing to remember is that open source
does not mean free. Even though you might be able to modify and redistribute a commercial
open source product, you do not have the rights to give away the product itself.
License agreements for commercial open source products are based on standard open source
licensing (and the GNU General Public License), but they typically include stipulations specific
to the manufacturer or distributor. These include standard clauses that you would expect to see
in a license agreement, like terms of use or limits on concurrent users. One thing to keep in
mind is that when you receive a Linux distribution as a part of a bundled system package, you
are typically limited to installing that copy on that specific machine only.

Operating system summary

As you can see, both Windows SBS and Linux can have a place in an SMB networking
environment. You may find that a mix of the two is the best fit for your needs, using the strengths
of each operating system to form the best solution. Both are flexible, powerful platforms and
both have a proven track record of performance and reliability.
One factor that can help you make this decision will be the server applications that you want
and need in your network environments. If specific applications meet your needs better than
others, specifically Windows applications instead of open source, this will obviously influence
your decisions. Another factor to consider is the support network available to you. If you do
have support people in your organization, they may be more experienced in one operating
system than another.

Table 5-3 provides a comparison between Windows Server and Linux.
Table 5-3: Comparison between Windows Server and Linux

Windows Server | Linux
Proprietary operating system | Open source operating system
Single source for purchase and support | Multiple sources for purchase and support
Cannot modify operating system source | Can modify and redistribute operating system source
Multiple licensing options | Multiple licensing options
Currently seen as the broad-based industry standard | Often considered the best choice for some applications, such as web hosting

Scenario: BCD Train

BCD Train provides both live classroom and online training. Students are given temporary
user names so that they can access resources stored on the network. Counting students, no
more than 50 users are connected to the network at any time. Currently, all of the company's
computers run either Windows Server or Windows 7 Professional.
The company wants to switch from an outside mail provider to an on-site mail server. They
also want to implement collaboration software to help manage the course development
process. They want to replace the current Active Directory domain controller running an older
Windows version and implement a new Active Directory domain name at the same time.
Discuss the operating system options that would be most cost effective for the company. Be
sure to include license requirements.

Infrastructure Server Solutions

Infrastructure server solutions are in most cases mission-critical applications. If they fail, your
business operations and possibly your network as a whole will come to a complete halt. They
support critical network activities, like logon and authentication, provide a basis for
communication, and provide the underlying communication structure to LOB applications.
It is important to understand the services and functionality provided by each of these server-level applications. It is also important to understand when each application can and must be
implemented as part of the network infrastructure. In most instances, we will look at both
proprietary Windows and open source solutions.
You can also deploy a solution that integrates both Windows Server and Linux.

Directory services
Directory services is a system that organizes network objects in a directory and controls access
to object information. Directory services has become the standard for network access management.
Under a directory services system, everything on the network is treated as an object, including
users, computers, files, and folders. Even management entities like policies are treated as
objects. Each object is described through its object properties, known as attributes. In the
classic definition of directory structure:
Each object is stored as an entity, made up of a set of attributes.
Each attribute has a name and an associated value.
Each entity has a uniquely identifying name known as its Distinguished Name (DN).
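The classic entity/attribute/DN structure can be sketched in a few lines. The DN format and attribute names below follow common LDAP conventions, but the entry itself is an invented illustration, not taken from any specific directory.

```python
# A minimal sketch of the classic directory model: each entry is a set of
# named attributes and is identified by a unique Distinguished Name (DN).
# The entry below is illustrative; names follow common LDAP conventions.

entry = {
    "dn": "cn=Jane Doe,ou=Sales,dc=example,dc=com",
    "attributes": {
        "cn": ["Jane Doe"],          # common name
        "objectClass": ["person"],   # the kind of object this entry is
        "mail": ["jdoe@example.com"],
    },
}

def rdn(distinguished_name):
    """The leftmost component (the relative DN) names the entry itself;
    the remaining components identify its containers, most specific first."""
    return distinguished_name.split(",", 1)[0]

print(rdn(entry["dn"]))  # cn=Jane Doe
```

Reading the DN right to left walks down the hierarchy: the dc components name the domain, the ou names the organizational unit, and the cn names the object.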
Directory services was defined by the Open Systems Interconnection (OSI) initiative as part of
the standards it described to allow interoperability in a network environment. The fundamental
building blocks of modern directory service applications are the X.500 standard and the
Lightweight Directory Access Protocol (LDAP). Two common implementations of X.500/LDAP
are Microsoft's Active Directory (shown in Figure 5-16) and Novell, Inc.'s eDirectory.
Lightweight Directory Access Protocol (LDAP)
TCP/IP protocol designed to enable accessing and maintaining directory services over an IP network.

Figure 5-16: Windows AD Users and Computers

The directory's hierarchical structure makes it relatively easy to organize and maintain your
network. You can set up logical divisions within the network and control management access
within these divisions. At a lower level, you can apply management policies at the container
level, also known as the organizational unit (OU) level, applying them to all of a container's
members, or even down to the individual object level. You can also use this hierarchical
organization to facilitate delegating management responsibilities.
When you organize your network, another important piece of the puzzle is Domain Name
Service (DNS), which is used in TCP/IP networking environments and the Internet. DNS is the
standard for naming devices in TCP/IP networks and in domains, which act as security
boundaries in an Active Directory network.
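A DNS name itself encodes the hierarchy the text describes: labels are read right to left, from the most general (the top-level domain) down to the specific host. The sketch below uses an invented host name to show the decomposition.

```python
# Sketch of how a DNS name encodes hierarchy. The host name is a made-up
# example; real names depend on your domain structure.

def dns_hierarchy(fqdn):
    """Return a name's labels ordered from most general to most specific."""
    return list(reversed(fqdn.rstrip(".").split(".")))

print(dns_hierarchy("dc1.corp.example.com"))
# ['com', 'example', 'corp', 'dc1'] - TLD, domain, subdomain, then host
```

In an Active Directory network, the domain portion of such a name (example.com, or corp.example.com) typically corresponds to an AD domain, which is why DNS planning and domain planning go hand in hand.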

Active Directory
Active Directory Domain Services (AD DS) is Microsoft's proprietary implementation of the
X.500 specification in Windows networks. In AD DS, each network is organized into one or
more domains containing the network objects. Each domain has one or more domain
controllers. A domain controller is the managing entity for the domain. Computers running
Windows Server 2008 R2 can act as domain controllers. In Windows SBS 2011, AD DS can be
configured during or after setup.
Active Directory gives a network a central location in which to manage configuration
information, authentication requests, and access permissions. You can control access down to
the individual user and directory object, such as access to a specific file. You can
even safeguard your data when it is distributed outside your network, if you choose to
implement Active Directory Rights Management Services (AD RMS). AD RMS requires both a
server and a client component.
AD also integrates with your directory-enabled applications through Active Directory
Lightweight Directory Services (AD LDS). It lets you store application configuration and directory
information separately from your network's directory database, adding a layer of security. AD
LDS information is stored only on those servers supporting the application, reducing the
background traffic needed to keep it up to date.
AD LDS can be used in an environment that does not have a domain controller.
However, if a domain controller is available, it can be used to authenticate users for access to
AD LDS applications.

eDirectory
Novell, Inc. has its own open source implementation of directory services, eDirectory. Novell
designed eDirectory to support a broad range of architectures, including its own NetWare
products, as well as Windows, Linux, and even many UNIX versions. Like Microsoft's AD,
eDirectory is an X.500-compatible directory service with support for Lightweight Directory
Access Protocol (LDAP) applications.
With regard to features and functionality, eDirectory is very similar to Microsoft's AD. They even
use the same authentication protocol for user (and device) authentication.
Key features include:
Secure authentication, including support for secure wireless networks.
Network management and support, including the ability to delegate management
responsibilities.
Role-based auditing, reporting, and monitoring capabilities.

If the features supported sound much like those offered by AD, that is because the need to
support the same directory environment leads to similar functionality. Both directory services
organize objects in a hierarchical directory structure. Some of the terms used to describe
directory objects and structures differ, but their functions are the same. Both are scalable from
small business needs to large enterprises. Even the core background services underlying
eDirectory perform the same activities as those supporting Windows AD DS.
In recent releases, eDirectory has evolved into the base of a set of Novell products
known as Identity management products.
The most significant difference between eDirectory and AD DS is that eDirectory is licensed
under an open source license. The same is true of the built-in database engine used to store
eDirectory objects and information.

Database services
Many, if not most, businesses see database services as a mission-critical component. The
basic roles of database services are data storage, retrieval, and manipulation. In nearly all
applications, database services are based around a relational database management system
(RDBMS). In nearly every case, these use the standard Structured Query Language (SQL) for
data operations.
relational database management system (RDBMS)
Database system where data is stored in related tables. Relationships are based on table columns.
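The relational model and standard SQL operations described above can be sketched with Python's built-in sqlite3 module. SQLite stands in here for a server RDBMS, and the table and column names are invented for illustration.

```python
import sqlite3

# Data lives in related tables; the relationship is expressed through a
# column (orders.customer_id refers to customers.id), and standard SQL
# performs the data operations. Schema and data are invented.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),  -- the relating column
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders VALUES (100, 1, 250.0), (101, 1, 99.5);
""")

# A join follows the customer_id relationship between the two tables.
row = conn.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
""").fetchone()
print(row)  # ('Acme', 349.5)
```

The same SQL statements would run largely unchanged against SQL Server or MySQL; what differs between products is scale, administration, and the surrounding tooling rather than the core relational operations.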

Most database applications fall into one of two categories. These are:
Online transaction processing (OLTP)
Online analytical processing (OLAP)
OLTP applications are characterized by the need to quickly insert, modify, or delete large
amounts of data. An example of this type of application is a retail application, where the
database has to process changes such as:
Customer purchases
Customer balances
Inventory changes
Many OLTP applications span multiple computers and even cross multiple networks. In most
cases, the databases used to support these applications are optimized for manipulating
existing data and writing new data into the database.
OLAP applications are also commonly known as business intelligence applications. They are
optimized to retrieve, analyze, and compare data to find relationships. A common example is
data mining, in which you look at the data in different ways to find trends and other information
in the data. Budgeting and financial forecasting applications are also types of OLAP
applications. In these applications, the data is refreshed as new data becomes available, but
typically the type of data collected seldom changes.
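The two workload styles can be contrasted in miniature. In the sketch below (schema and data invented, with SQLite standing in for a production RDBMS), the OLTP side is many small writes, while the OLAP side is a read-heavy aggregate over the accumulated data.

```python
import sqlite3

# OLTP vs. OLAP in miniature. Schema and data are invented; SQLite stands
# in for a production database server.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (item TEXT, qty INTEGER, price REAL)")

# OLTP style: many small, fast transactional writes
# (e.g., recording customer purchases as they happen).
with conn:  # the context manager wraps the inserts in one transaction
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                     [("widget", 2, 9.99), ("gadget", 1, 24.50),
                      ("widget", 5, 9.99)])

# OLAP style: read-heavy analysis over the accumulated data
# (e.g., finding which items drive the most revenue).
top = conn.execute("""
    SELECT item, SUM(qty * price) AS revenue
    FROM sales GROUP BY item ORDER BY revenue DESC
""").fetchone()
print(top[0], round(top[1], 2))  # widget 69.93
```

A real deployment would tune differently for each style: indexes and short transactions for the OLTP load, and pre-aggregated or denormalized structures for the analytical queries.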
You will also find database servers used simply for data storage. You might have all of the data

for a website, even the code for web pages, stored on a database server. This reduces storage
requirements on the web server because the web server can retrieve what it needs as it is
needed.
Database services also include the administrative and maintenance tools that let you access
databases directly (Figure 5-17).

Figure 5-17: SQL Server Management Studio

The database server, which is at the core of database services, does more than just store and
retrieve data. It can be configured to automatically perform periodic activities, like backups and
other maintenance activities. It can pass updates between different database servers, letting
you distribute data to where it is needed. You can even write custom executable routines to
support data applications.
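One of the routine maintenance tasks mentioned above, backing up a live database, can be sketched with SQLite's online backup API. This stands in for the scheduled backup jobs a full server product would run on its own; the schema is invented.

```python
import sqlite3

# Sketch of an automated maintenance task: an online copy of a live
# database to a backup target. SQLite's backup API stands in for the
# scheduled backup jobs of a server RDBMS; the schema is invented.

live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
live.execute("INSERT INTO accounts VALUES (1, 100.0)")
live.commit()

backup = sqlite3.connect(":memory:")  # in practice, a file on backup storage
live.backup(backup)                   # online copy; readers are not blocked

print(backup.execute("SELECT balance FROM accounts").fetchone()[0])  # 100.0
```

In a server product, a scheduling component would run this kind of job periodically, alongside other maintenance such as index rebuilds and integrity checks.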
You could broaden the definition of database services to go beyond the database server to
other supporting resources. Many database applications are implemented as multi-tier
applications (Figure 5-18).

Figure 5-18: Multi-tier Application

In a multi-tier (also called n-tier) design, the application is spread out, with different parts of the
application implemented where they make the most sense. For example, you would put things
like the data entry interface and initial data validation on the client as part of the client
application. More extensive validation intelligence, or some data manipulation and formatting,
might be implemented at an application server. Low-level data operations, which can also
include manipulation and formatting, take place at the database server. Requests, commands,
and incoming data filter down, and results and retrieved data filter up.
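The tiered flow just described can be sketched as three small functions, one per tier. Every name here is invented for illustration; a real application would put these tiers on separate machines with network calls between them.

```python
# A toy sketch of the multi-tier flow: the client validates input, the
# application tier applies business rules, and the data tier performs the
# low-level storage work. All names are invented for illustration.

def client_tier(raw_order):
    # Data-entry interface: initial validation before anything leaves the client.
    if raw_order["qty"] <= 0:
        raise ValueError("quantity must be positive")
    return raw_order

def application_tier(order, price_list):
    # Business logic: enrich and format the request for the data tier.
    order["total"] = round(order["qty"] * price_list[order["item"]], 2)
    return order

def data_tier(order, table):
    # Low-level data operation: persist the row and return the stored record.
    table.append(order)
    return order

orders = []  # stands in for the database server's storage
result = data_tier(
    application_tier(client_tier({"item": "widget", "qty": 3}),
                     {"widget": 9.99}),
    orders)
print(result["total"])  # 29.97
```

The request filters down through the tiers and the stored result filters back up, mirroring the flow in Figure 5-18.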
We will now investigate two database solutions: the proprietary Microsoft SQL Server and open
source MySQL. These solutions are only two out of a vast selection of database products
available that are designed to meet the needs of applications at every level.

SQL Server
SQL Server is Microsoft's premier RDBMS product. It has all of the features that you would
expect in a database server and is available in several different editions to meet different
application needs. The editions supported with SQL Server 2012 are:
Enterprise edition is at the top of the list as Microsoft's high-end database application
platform. Enterprise edition supports more features and greater functionality than any
other edition (except Developer, see below). It is also the highest performance

edition and designed to support both OLTP and OLAP applications.

Business Intelligence
The Business Intelligence edition is targeted more at business intelligence
applications. While not as full-featured as Enterprise edition, it gives you the tools
that you need to build a business intelligence backend.
Standard edition offers basic database, reporting, and data analysis functionality. It is
suitable as the back-end support for a wide variety of database applications. One
major difference between the Standard and Enterprise editions is that Standard does
not support some of the advanced security and high-availability features built into
Enterprise edition.
Enterprise edition is strongly recommended for business-critical applications. Standard
edition should be limited to non-critical applications.
The Developer edition is a full-featured SQL Server edition, but it is designed for
developers rather than for use as the foundation for live applications. It provides a
cost-effective platform for building, testing, and demonstrating database applications.
Web edition is available to third party software service providers only. It is targeted for
use on public websites, providing a powerful and highly scalable database platform.
Unlike the other editions already mentioned, licensing is through a Service Provider
License Agreement (SPLA), giving service providers a cost model based on low
monthly fees.
Express edition is designed to be embedded and distributed with client applications.
While it is a fully functional RDBMS, this product has built-in limitations that prevent
its use with higher-end applications, such as limiting database size to no more than
10 GB. Despite these limits, Express edition can be used to build sophisticated
database applications.
Compact 4.0
SQL Server Compact is, like Express, freely distributable. It is designed to aid in the
development and distribution of desktop applications and websites that need to
support less of a load than those using Web edition.
Azure is a cloud-based SQL Server RDBMS. It gives you a means for developing
database applications based in the cloud. Pricing is month-to-month and based on
database size.
Even though it is licensed as a proprietary commercial product, you still have some ability to
customize SQL Server to your particular requirements. While you cannot modify the source
code, you can create and store executable routines that can be called and run by other
applications.
For a detailed look at the features supported by the various SQL Server editions, consult
Microsoft's website.

The Enterprise, Standard, and Web editions support a per-core licensing option. Server plus
client CAL licensing is available for the Business Intelligence, Standard, and Developer
editions.

MySQL
As with SQL Server, MySQL is a full-featured RDBMS. Unlike SQL Server, MySQL is licensed
under an open source license. MySQL is flexible and powerful and is considered a very
cost-effective database solution. It is consistently rated as the world's most popular open source
database solution.
There are versions of MySQL that run on Windows, Mac OS, Linux, and various UNIX
versions, including HP-UX. There are also several editions available, designed to meet the
solution requirements ranging from small desktop applications to mission-critical enterprise
applications and data warehouses.
Hewlett-Packard UNIX (HP-UX)
HPs proprietary implementation of the UNIX operating system.
data warehouse
Data store used for analysis and reporting. A data warehouse lets you consolidate data from multiple sources.

Current supported editions of MySQL are:

MySQL Cluster Carrier Grade Edition (CGE)
The Cluster CGE and Enterprise are both designed for high-end, mission-critical
applications. Cluster CGE includes MySQL Cluster Manager to help ensure high
performance and high availability for your most critical applications. When deployed
in a cluster configuration, Cluster CGE can be expected to deliver 99.999%
availability.
MySQL Enterprise
Enterprise is designed to meet the database solution needs of most enterprises and
provide support for critical applications. It is similar to SQL Server Enterprise in that it
is a scalable, secure solution that provides a full set of monitoring and management
tools. Enterprise does not include the MySQL Cluster Manager but does support use
in Windows failover clusters when deployed on a Windows Server platform.
MySQL Standard
Standard edition is designed to provide a scalable, high-performance platform for
OLTP applications. It is designed to supply the levels of performance needed to
support these applications while keeping total cost of ownership (TCO) to a minimum.
MySQL Classic
Availability of MySQL Classic is limited to ISVs, OEMs, and VARs to license as an
embedded database for their applications. It is designed to be quick and easy to
install. It is also designed to act as a zero administration database, which is important
when distributing the database with applications.
independent software vendor (ISV)
A company that specializes in developing and selling software.
value-added reseller (VAR)

A company that adds features to an existing product and then resells the new product configuration.
zero administration database
A database deployment that is designed to not require any ongoing administrative support.

MySQL Embedded
The Embedded editions are made available to ISVs, OEMs, and VARs so that they
can embed MySQL in their applications. Standard, Classic, Enterprise, and Cluster
Carrier Grade editions are available for embedding. The costs related to MySQL are
lower than most other database options, making it possible for ISVs and OEMs to
keep the price point lower on their custom applications. MySQL Embedded is
scalable and reliable enough to use as the basis for demanding applications. It is
also designed as a zero-administration platform after deployment, keeping support
costs to a minimum.
MySQL Community
MySQL Community is a freely downloadable and freely distributable version of
MySQL. As with other editions, it is distributed under an open source license, giving
you the option of modifying the source code to help meet your specific requirements.
Even though it is free, it is still a full-featured and powerful database solution.
However, rather than being targeted as a server solution, it is primarily a desktop solution.
One thing that makes MySQL popular as a base for applications is its multiple platform support,
which allows it to run on over 20 operating system platforms. This makes it much easier for
ISVs to port their applications between different platforms, increasing their potential customer base.
In addition, at the time of this writing, HP has HP Cloud Relational Database for MySQL in beta
testing. HP Cloud Relational Database for MySQL is a managed, web-based service that
provides you with on-demand access to your application data stored in a relational
structure. One advantage of HP Cloud Relational Database is that HP manages your database
administration tasks so that you can focus on developing and managing your applications and
your business.
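To illustrate the kind of relational storage and retrieval an RDBMS provides, here is a minimal sketch. It uses Python's built-in sqlite3 module as a stand-in so it runs anywhere; a MySQL deployment would use a separate driver (such as MySQL Connector/Python), but the SQL pattern is much the same. The table and data are hypothetical:

```python
import sqlite3

# In-memory database as a stand-in; a MySQL server would be reached over the network.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, course TEXT)")
conn.executemany(
    "INSERT INTO students (name, course) VALUES (?, ?)",
    [("Ana", "Networking"), ("Raj", "Databases")],
)
# Parameterized query: the driver substitutes values safely.
rows = conn.execute(
    "SELECT name FROM students WHERE course = ?", ("Databases",)
).fetchall()
print(rows)  # [('Raj',)]
conn.close()
```

The same create/insert/select pattern underlies applications built on SQL Server or MySQL; what changes is the driver and the connection details.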
For a MySQL features comparison, go to
Scenario: BCD Train
BCD Train is implementing an application to manage student enrollment and track student
progress. Customers will be able to access the information through a dedicated website that
will host the database and its data. The application requires a SQL Server database for data
storage and retrieval.
Discuss the database options that would best support this project. Discuss any possible
issues with licensing and support.

Web services
What do you think of when you think about web solutions and web services? Websites typically

first come to mind for most people, but this is far from the complete picture. Websites are
the most common type of web solution, but web-based technologies have grown to become
a major part of the world's business and technical infrastructure. These include:
Web services
Web-based applications that allow interactions between computers. Web services
can be simple applications like currency converters or sophisticated transaction
validating and processing applications.
Web applications
Applications hosted on the web server and accessible through a client browser.
Depending on the application, the code can be executed on the client, the server, or both.
Collaborative services
Applications that make it easier to share information and work together.
FTP servers and services
A traditional mainstay of TCP/IP network technologies. FTP support is now typically
implemented as part of a web solution suite.
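A "currency converter" style web service like the one mentioned above can be sketched as a tiny HTTP endpoint. This is a hypothetical illustration built on Python's standard library, not any particular vendor's product; the fixed exchange rate is an assumption for the demo:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import json, threading, urllib.request

RATES = {"USD_EUR": 0.92}  # hypothetical fixed rate; a real service would look this up

class ConvertHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse ?amount=... from the request URL and return JSON.
        qs = parse_qs(urlparse(self.path).query)
        amount = float(qs.get("amount", ["0"])[0])
        body = json.dumps({"result": round(amount * RATES["USD_EUR"], 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Start the service on an ephemeral port and call it as a client would.
server = HTTPServer(("127.0.0.1", 0), ConvertHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/convert?amount=100" % server.server_port
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
print(data)  # {'result': 92.0}
server.shutdown()
```

The point is the interaction pattern: another program, not a person, makes the request and consumes the structured response.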
When websites first appeared on the Internet, they were little more than an interesting curiosity.
They have since evolved into a technical and social revolution. Any business without a website
immediately loses credibility. Online shopping continues to grow into a major part of the retail
sales mix.
Web services often go unnoticed because many are targeted to very specific markets, such as
providing order processing support for a supply chain or facilitating financial processing.
Web applications serve an important role in the growing cloud computing environment. For
example, Microsoft now provides access in the form of web applications to its Office application
suite and other productivity applications.
Some companies and individuals may choose to use a web hosting service for their websites
and other Web-related services. They find this advantageous because these dedicated hosts
can usually ensure high availability and have a staff available 24/7 to handle any problems that
might arise. However, some companies find it is better to implement these services internally,
either as their complete solution or to supplement externally hosted websites. One benefit of
hosting internally is that a company can maintain complete control over the web solution,
including content, management, and security.
Security can become a critical issue for publicly accessible websites. Websites are
often targeted by hackers for attack, sometimes just for the fun of it.
It is important to remember that even though these are commonly referred to as Internet
technologies, they are not implemented solely on the Internet. You will also find these
technologies used on internal business networks, often as a way to share information and
facilitate communication within the company. The term intranet is often used to refer to Internet
technology-based solutions deployed internally.
intranet
Implementation using Internet technologies designed to share data and services on a private internal network.

The server applications used to support web solutions are often simply referred to as web
servers, but this description falls far short of capturing modern Internet services applications. While
website hosting remains a central role, server applications are also designed to support FTP
access, web services, web applications, and other web-based technologies.
Two of the most common solutions are:
Microsoft Internet Information Services (IIS)
Apache HTTP Server
IIS is a proprietary web solution that is included with Microsoft Windows Server and other
Windows family products, including client operating systems. Microsoft also makes updates to
IIS readily available for download and installation. Apache HTTP Server, often simply referred
to as Apache, is an open source product developed and maintained by the Apache Software
Foundation (ASF).

Internet Information Services (IIS)

Microsoft provides web solutions by including IIS with its Windows Server operating systems,
including Windows Server SBS. IIS has evolved over the years to become a broad-based web
solution foundation. Microsoft makes IIS updates available as free downloads as they come out.
IIS is designed as a secure platform for hosting websites, web applications, and web services.
It unites Microsoft's web platform technologies, which include ASP.NET, Windows
Communication Foundation (WCF) web services, and Windows SharePoint Services.
IIS comes with a complete set of development and deployment tools. IIS is also designed to
make site administration as easy as possible. You can delegate site-level configuration,
management, and control to developers or to the sites content owners. You can also share
configuration information across multiple servers in a web farm.
web farm
Web server configuration with multiple servers working cooperatively to support web services and content.

It is also possible to customize IIS through its modular architecture. You can add, remove, or
replace built-in or third-party modules to adjust IIS to suit your specific needs. You can also
customize content delivery. For example, you can create content playlists, such as targeted
advertisements, to control additional content delivered with web pages.
There is also a web playlist extension to IIS that lets you manage media playlists.
Microsoft continues to work toward making IIS as secure as possible and, in the process,
providing better security for your websites. Microsoft has reduced the server footprint, which
reduces the potential attack surface, and improved request filtering rules. The advanced
filtering helps prevent potentially hazardous requests from reaching the server. You can also
securely publish content through FTP and WebDAV.
Web Distributed Authoring and Versioning (WebDAV)
An extension to the HTTP protocol that makes it possible for clients to publish, lock, and manage web-based resources.

IIS has often been the target of hackers and other Internet-based attacks. Many of the
security updates regularly provided by Microsoft relate to IIS and the services that it provides.
As mentioned earlier, Microsoft makes an edition of Windows Server specifically targeted at the
web server market, the Windows Server 2008 R2 Web edition. However, you have the option of
enabling and using IIS with any Windows Server edition.
Microsoft also has an IIS Express version optimized for developers. It is a self-contained,
lightweight version of IIS. It gives developers easy access to the latest IIS version for
development and testing. In addition to core IIS functionality, it includes features to help ease
development and deployment tasks.

Apache HTTP Server

With a history going back nearly 20 years, the Apache web server is a stable, mature product and
has been generally accepted as the most popular web server in the world since 1996. In 2009, it was
the first web server to reach the 100 million deployments mark. Apache web server is a project
of the Apache Software Foundation (ASF). ASF is a development group responsible for the
development and distribution of Apache and other web solution products.
One reason for Apache's popularity is that Apache continues to be completely free: free to
download and free to use. It is also well supported through ASF and through a vast network of
Apache developers and users. The goal of the Apache HTTP Server Project all along has been
to provide a secure, efficient, and extensible server platform.
Apache server gives you much the same functionality as IIS. One of the factors that makes
Apache so popular is that, like many open source offerings, it can be implemented on a wide
variety of operating system platforms, including Linux, UNIX, Mac OS, Novell NetWare,
Windows, and others. There is also a large volume of documentation available from ASF and
from different user groups.

Mail and messaging services

Electronic mail (email) has roots that actually go back to before the advent of the Internet. Early
bulletin board sites would offer email along with other services such as file downloads and
discussion forums. As access to the Internet became available to home users, Internet Service
Providers (ISPs) were some of the first to offer public access to email that could transfer
between servers. In those early days, many businesses used ISP-provided email services as
their business email. As websites and web hosting became popular, many web hosts also
began offering email service, using mail addresses based on the websites URL.
web host
A business that provides dedicated website hosting and maintenance support services for a subscription-based fee.

The environment has changed over the years. The use of email and messaging has become
nearly universal. It is nearly impossible for a business to maintain any credibility without an
email contact address. Email has become a critical part of business communication, in some
cases replacing telephone communication completely. Email and messaging were once nice
amenities, but today they have become critical to any business.
The critical nature of email communication is one of the reasons email services are
often targeted by viruses and worms: immobilizing email services can do a great deal
of damage in a very short time.
The reliance on email is not limited to businesses. Even outside of business, people exchange
email addresses like they once exchanged postal mail addresses (now often called "snail
mail") and phone numbers.
These days, a number of email options are available. ISP or web host email still works for
some, but an internal email system gives you more control over your mail services. More
recently, a popular option is to use cloud-based online mail services. With cloud-based
services, the initial cost is significantly less, and there is a per-month fee pricing structure.
There are even hybrid implementation models, combining internally hosted and cloud-based services.
The single term "mail services" can be a little misleading because it fails to describe the full
scope of services provided. Most include a calendar function, which can include a private
calendar for users and a public calendar for scheduling meetings and facilities resources, like
conference rooms. They also let users keep contact lists, including contact information and any
notes they might want to keep about the contact. Some mail solutions also offer instant
messaging services. This gives users the ability to chat, either text-only or with video, making
instant communication not only possible, but easy to implement.
Email solutions have also become more flexible over the years, with the expectation now that
they will be able to support both PCs and smartphones as mail clients. PC users will usually
have the option of accessing mail services through a web browser or a mail client, such as
Microsoft Outlook. There are also a number of open source mail clients available, such as
Mozilla Thunderbird.
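Whichever client is used, what it ultimately hands to a mail server is a structured message. The sketch below builds one with Python's standard email library; the addresses and attachment are hypothetical:

```python
from email.message import EmailMessage

# Hypothetical sender, recipient, and attachment; a mail client would hand
# this finished message to an SMTP server for delivery.
msg = EmailMessage()
msg["From"] = "student@example.com"
msg["To"] = "registrar@example.com"
msg["Subject"] = "Enrollment confirmation"
msg.set_content("Please confirm my enrollment in the networking course.")
msg.add_attachment(b"dummy schedule data", maintype="application",
                   subtype="octet-stream", filename="schedule.bin")

print(msg["Subject"])          # Enrollment confirmation
print(msg.get_content_type())  # multipart/mixed (text body plus attachment)
```

The headers, body, and attachments travel together as one MIME structure, which is what both PC clients and smartphone clients exchange with the mail server.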
Two of the most popular messaging solutions are Microsoft Exchange Server (and, more
recently, Exchange Online) and Sendmail, a popular open source solution.

Exchange Server
Microsoft Exchange Server is Microsoft's messaging solution. It gives users access to email,
calendar, contacts, voice mail, and SMS text messages, all through the same universal
mailbox. Users can access Exchange from their PCs using either a mail client or a browser,
and they can also access their mail using portable devices like smartphones. Browser support
is provided through the Outlook Web App built into Exchange. This browser support gives
employees the ability to work from nearly anywhere.
Exchange supports a variety of browser options, including Internet Explorer, Firefox,
Safari, and Chrome.
Exchange Server is a messaging solution that you can implement and manage yourself. It can
be deployed on Windows Server Standard, Enterprise, or Datacenter editions. However, it
requires a 64-bit processor and a 64-bit version of Windows. It also comes embedded as part of
Windows SBS 2011. Exchange Online is hosted by Microsoft as a cloud application, but you
still have control of the settings. You even have the option of deploying a combination of
on-premises and cloud services.
Instant messaging and unified communication support is provided through Microsoft Lync
Server. Lync Server is designed to bring together the different ways people communicate into

one solution. It works hand-in-hand with Microsoft's Office suite and with mobile communication
applications. In addition to instant messaging, Lync Server provides support for voice mail
systems and online conferencing.
One important improvement to Exchange has been its security enhancements. Mail servers are
often targeted by hackers, both because of the severity of business interruptions that doing so
can cause and because of the vast amount of information stored on the servers. Exchange has
advanced antivirus and anti-spam protection to help keep information safe and secure. With each
release, Microsoft has improved Exchange security. Microsoft also releases security updates on an
as-needed basis.

Sendmail

Sendmail is a popular open source messaging solution. Even though the percentage of
publicly accessible mail servers on the Internet that use Sendmail has declined in recent
years, it continues to be one of the most popular options. As with many other open source
solutions, you can deploy Sendmail on a variety of operating system platforms.
Sendmail began as a project of the free and open source software and Unix communities, but
the project branched off into a corporate entity early in its history. This did not bring an end to
the open source Sendmail community. Sendmail, Inc. continues to support open source
solutions and continues to develop the Sendmail message transfer agent (MTA). As with other
open source products, developers are free to distribute applications they have built based on
the Sendmail MTA, and they also have the option of modifying and redistributing the MTA itself.
Commercial versions are currently produced and distributed by Sentrion, a part of Sendmail,
Inc. Sentrion also offers a preinstalled mail platform known as the Sentrion Email Infrastructure
Platform. It is based on a Red Hat Linux operating system platform. Sentrion has VMware-ready
virtual mail solutions that are ready for deployment within an existing network infrastructure.
Once again, the functionality provided by Sendmail is much the same as that provided by
Microsoft Exchange. This includes the ability to filter and secure messages, hybrid
deployments including on-premises and cloud mail servers, protection against data loss or
disclosure, and secure storage and delivery. Sendmail includes a send option called RPost
Registered Email that provides verifiable proof of sending and delivering attachments without
needing any special modifications to email client applications.
HP has a number of messaging solutions based on the Sendmail product line. HP
recommends three Sendmail product lines as ways of implementing a mail solution:
Sendmail Mailstream Manager
Ensures the reliable and secure flow of business-critical email inside an organization
and across the Internet while enabling content policy controls and creating a
predictable model to scale the system. Email is scanned for viruses, spam, and other
malicious code, filtered against enforcement policies, and transmitted through
secure channels to its destination, all controlled from a central administration point.
Sendmail Mailcenter
Combines all the components necessary to deploy a complete email system of any size.
Sendmail High Volume Mail
A high-performance, fault-tolerant email messaging system for businesses that send
large volumes of unique messages to opt-in subscribers.
The above list includes only a few of the available Sendmail versions, many of which are
redistributed versions based on modifications made under the open source license.
Although it is generally considered open source software, there are some proprietary
licensed Sendmail versions.
Scenario: BCD Train
BCD Train plans to use Microsoft Exchange for their company messaging solution. They want
to have mailboxes available for both employees and their customers, as well as repeat
students. Currently, they do not have anyone on staff experienced in implementing and
configuring Exchange. They are trying to choose the most cost-effective solution.
Discuss their implementation options. Include potential benefits and drawbacks of each option.

Collaboration services
Many businesses consider providing an environment where users can more easily work
together and share information to be a major part of the justification for networking PCs. Early
efforts toward this goal included file sharing and internal websites and FTP servers.
Unfortunately, these fell short of what was really needed.
This need led to the development of collaborative solutions (also known as groupware) that not
only made it easier for people to work together but also made it easier to manage and control
the process. The impact of collaboration solutions has been felt throughout businesses, as they
help facilitate all sorts of projects. It is now possible to more easily and more securely control
access to files, such as documents or application source code, and to ensure that only one
person at a time can make any changes to files. In most cases, you can have versions of files
saved automatically so you can go back to previous versions of a file if necessary.
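The check-out locking and automatic versioning described above can be sketched in a few lines; this is an illustrative model, not how any particular collaboration product implements it:

```python
class VersionedFile:
    """Minimal sketch of lock-on-edit plus automatic version history."""

    def __init__(self, content=""):
        self.versions = [content]  # every check-in is kept, oldest first
        self.locked_by = None      # only one editor at a time

    def check_out(self, user):
        if self.locked_by is not None:
            raise RuntimeError("already checked out by %s" % self.locked_by)
        self.locked_by = user

    def check_in(self, user, new_content):
        if self.locked_by != user:
            raise RuntimeError("not checked out by %s" % user)
        self.versions.append(new_content)
        self.locked_by = None

doc = VersionedFile("draft v1")
doc.check_out("ana")
try:
    doc.check_out("raj")  # a second editor is refused while the file is locked
except RuntimeError as e:
    refused = str(e)
doc.check_in("ana", "draft v2")
print(doc.versions[-1], "|", refused)
```

Because every check-in appends to the history rather than overwriting it, rolling back to a previous version is simply a matter of reading an earlier entry.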
Collaboration software has deep roots, going back to collaborative game software
(multiuser gaming) developed as early as 1975.
We will now look at one solution option: Microsofts SharePoint. However, the features and
functionality supported by SharePoint are mirrored in other collaborative solutions, such as:
Google Apps
Novell GroupWise
Oracle Beehive
IBM Lotus Domino
Saba Software Saba Central
Zing Technologies
The majority of collaborative solutions are licensed under proprietary commercial licenses,
though there are a small number of open source solutions available.

Microsoft SharePoint gives you an easy-to-deploy collaborative environment. You can use
SharePoint within your internal network, access it from the Internet, or do both at once. It is
designed to be simple to install and configure. SharePoint is built around an administration
model that lets you manage anything from a single server to a server farm from one central console.
SharePoint includes several features that make it easy for network users to find information and
work together on projects. Some key features include:
Collaboration tools to make it easy to share ideas and work together
Access to information in databases, reports, and business applications
Search tools to help users find both information and contacts
Content management tools and automatic content sorting and organization
Tools for creating no-code, do-it-yourself business solutions
Microsoft SharePoint Server is just one product out of a suite designed to offer collaboration
solutions matched to your particular needs. Other related products include:
SharePoint Foundation 2010
A freely downloadable version of SharePoint. It provides a way for small
organizations to implement a secure, manageable, web-based collaboration solution.
SharePoint Designer 2010
A platform for quickly developing SharePoint applications. It helps you quickly
develop applications to meet business needs.
SharePoint Workspace 2010
A platform that enables easy access to Microsoft team sites, no matter what the
location. It also gives you a way to synchronize SharePoint Server 2010 document libraries.
team site
Business website created for team collaboration and to coordinate team activities.

You can also help secure your SharePoint implementation by deploying Forefront Protection
2010 for SharePoint. It helps protect SharePoint from viruses, unwanted files, and inappropriate content.
You can access a link to the SharePoint 2010 Overview Evaluation Guide at
HP has a business intelligence appliance solution called the HP Business Decision Appliance
that is optimized for use with Microsoft SQL Server and SharePoint, giving you a hardware
platform designed to support a collaboration solution. It is designed to let you have a business
intelligence (BI) solution up and running in less than an hour.
business intelligence
Using available information and resources to generate analytic views of past and present operations and future directions.

BI applications are designed to search through your warehoused data to give you a detailed
analysis of your business practices and to help provide future directions. SQL Server provides
the warehousing and data mining. SharePoint helps users bring together resources from inside
and outside an organization and analyze them. SharePoint also makes it easier to share and
distribute the analysis results.
data mining
The process of attempting to identify patterns in large data sets.

File and print services

We will now look at file and print services as one general category, rather than break them
down into proprietary and open source solutions. It is easier to look at both categories at once
because most common file and print solutions are built into the basic functionality of the
operating system. In fact, some level of support for file and print services is an expectation for
both desktop and server operating systems.
In the early days of networking, the greatest driving forces were a need for file security and a
need for users to share files. Businesses also wanted to be able to easily share printer
resources between users.
These have continued as core components of any network solution. File servers remain a
critical resource, and various services have grown up around their support. Shared printer
support has grown to include other devices, like shared scanners and networked fax services.

File services
Both Windows Server and Linux solutions provide extensive file services. It is important to note
that NFS servers allow shared file storage between disparate operating systems. This shared
file storage capability makes Linux-based file servers a good choice in very mixed environments.
Key features of file services are file sharing and access, with the ability to control access rights
and audit access to sensitive files. Centralized file storage also helps to make sure that critical
files get backed up regularly. Other features include:
File encryption
Encryption options help prevent accidental disclosure of files on the file server and
can also protect files when they are distributed out of the network.
File compression
File compression is used to minimize the space needed to store files on disk. As
storage media prices have gone down, this has become less of an issue, but the
service is still offered.
Storage quotas
Sometimes businesses will want to limit how much space a user can have on a
shared file server. You can set global limits or per-user limits.
Distributed file systems (DFS)
In a routed network or WAN environment, you can reduce network traffic by keeping
files physically near the users that need them. DFS achieves this reduction in
network traffic, letting you keep copies in different locations and even keeping the file
versions synchronized. Using DFS, you can also logically group files that are
physically hosted in different locations so that they appear to be part of the same
hierarchical structure to users.
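A per-user storage quota check of the kind described above can be sketched as a directory walk; real file servers enforce quotas in the file system or a resource manager, so this is only an illustration with a hypothetical limit:

```python
import os
import tempfile

def usage_bytes(path):
    """Total size of all files under path (a minimal per-user usage check)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

QUOTA = 1024  # hypothetical 1 KB per-user limit for the demo

# Simulate a user's home share with one 600-byte file.
home = tempfile.mkdtemp()
with open(os.path.join(home, "report.txt"), "wb") as f:
    f.write(b"x" * 600)

over = usage_bytes(home) > QUOTA
print(usage_bytes(home), over)  # 600 False
```

A quota service runs this kind of accounting continuously and can either warn the user or block further writes once the limit is exceeded.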

These services are managed in various ways, but the end result is much the same. Windows
Server, for example, provides a File Server Resource Manager to handle some management
tasks (Figure 5-19).

Figure 5-19: File Server Resource Manager

For mission-critical files, you can implement high availability through the use of server clusters.

Print services
At one time, it was thought that networking would bring about the paperless office. So far, this
has not happened, but at least you can get tight controls on printing through the print services
provided through networked operating systems.
Print services give a business a way to take control over printing and related expenses.
Features supported by server solutions include:
Print queuing to prevent delays while documents are prepared.
Management of queued print jobs to change print order, reprint jobs, place jobs on
hold, or delete them from the queue.
The ability to control access to printers, such as limiting access to specialty printers.
The capacity to set times when printers are available and defer some print jobs until
after normal business hours.
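The queue-management features above amount to a priority queue of print jobs. A minimal sketch (not any vendor's print spooler; job names and priorities are hypothetical):

```python
import heapq

# Each job is (priority, submit_order, name); lower priority number prints first,
# and submit_order breaks ties so equal-priority jobs print in arrival order.
queue = []
counter = 0

def submit(name, priority=5):
    global counter
    counter += 1
    heapq.heappush(queue, (priority, counter, name))

submit("quarterly-report")            # normal priority
submit("shipping-label", priority=1)  # rush job jumps the queue
submit("draft-flyer", priority=9)     # low priority, deferred until last

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['shipping-label', 'quarterly-report', 'draft-flyer']
```

Holding a job corresponds to leaving it in the queue unpopped, and deleting it corresponds to removing its entry, which is exactly the control a print server exposes to administrators.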
HP also offers extensive print management services. HP provides solutions for all sizes of
environments, with expected print cost savings of up to 30%. HP also offers HP ePrint Public
Print for supported smartphones, which lets users print directly at public locations such as
FedEx and UPS store locations and many hotels, airport lounges, and public libraries.

Choosing a Solution
So, how do you choose a solution appropriate to your needs? To some extent, it will come
down to the preferences of those who run the business. However, the following are some
selection criteria that you should consider when making your decision:
Solution applications needed
Operating system versions and editions
Number of users/clients supported
Disk space requirements (including space for shared files)
Printing requirements
Available management utilities
Licensing options
It is important to collect technical requirements for the business today and for future growth
plans before evaluating your options. For smaller businesses, you might build the solution
around a preconfigured product like Windows Server SBS. With larger businesses, you will
probably need to pull together different pieces and configure the components.
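One way to structure such an evaluation is a simple weighted checklist. The criteria weights, candidate options, and ratings below are entirely hypothetical, not HP's buyer-guide logic:

```python
# Hypothetical weights for the selection criteria listed above (higher = more important).
criteria = {"applications": 3, "user_capacity": 2, "management_tools": 2, "licensing": 1}

def score(option_ratings):
    """Weighted sum of 0-5 ratings against the criteria above."""
    return sum(criteria[c] * option_ratings.get(c, 0) for c in criteria)

# Illustrative ratings for two candidate approaches.
windows_sbs = {"applications": 5, "user_capacity": 3, "management_tools": 4, "licensing": 3}
custom_mix  = {"applications": 4, "user_capacity": 5, "management_tools": 3, "licensing": 5}

print(score(windows_sbs), score(custom_mix))  # 32 33
```

The value of the exercise is less the final number than the discipline of writing down requirements and weights before comparing products.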
To help you make your selection, HP provides a server buyer guide, organized by solution
(Figure 5-20).

Figure 5-20: Server Buyer Guide

Find a scenario that matches your solution needs, and the buyer guide returns one or more
recommended server platforms (Figure 5-21).

Figure 5-21: Recommended Server

HP's optimal solution is displayed by default. Recommendations will usually also include those
identified as standard solutions.
This server buying guide is available at

In this chapter, you learned:
Server solutions provide critical resources to networks.
Windows Server 2008 R2 is available in several editions to meet differing
implementation needs.
Windows SBS 2011 is based on Windows Server 2008 R2 with preinstalled and
preconfigured solution software.
Linux is an open source solution available in several distributions and from several
Direct modifications to source code are not allowed under a proprietary license.
Open source products can be modified and redistributed.
Directory services solutions organize, manage, and control access to network resources.
Database solutions provide RDBMS support to a network and its applications.
Web solutions provide for websites, web applications, web services, and FTP services.
Messaging solutions provide email, calendars, contacts, voice mail, and instant messaging.
Collaborative solutions make it easier for users to find information and work together.
File and print solutions control file and print resources and make them available to network users.
Review Questions
1. Windows Server SBS 2011 Essentials Edition supports up to how many users?
2. What are the two basic Windows licensing options?
3. What is Microsoft's collaboration solution?
4. What is the Novell, Inc., directory services solution?
5. Which Windows Server 2008 R2 edition does not support acting as a DHCP server?
6. What is the world's most popular web server?
7. Which MySQL editions are limited to ISVs and OEMs only?
8. Support for using a browser as a mail client is through what application built into Microsoft
Exchange?
1. When a product has an open source license, this means that you can modify the
_________________ and redistribute the modified software.
2. HP ePrint Public Print lets supported ____________ print at public locations.
3. Windows Server provides high-availability through the use of ______________ clusters.
4. Linux is based on the _____________ operating system.
5. Windows Server 2008 R2 Foundation is licensed under a ________ ________.
6. A ______ _______ installation installs Windows Server without a desktop interface and
with a reduced attack surface.
7. ____________ database applications are characterized by the need to quickly insert,
update, and/or delete large amounts of data.
8. MySQL ____________ is a freely downloadable and freely distributable version of MySQL.

1. All open source software is free to download and use.
2. Proprietary solutions are always more popular and more dependable than equivalent
open source solutions.
3. Microsoft makes a SQL Server edition available that can be embedded and freely
distributed with custom applications.

4. The requirement for file and print solutions is rarely a justification for implementing a network server.
5. Sendmail is distributed in both proprietary and open source versions.
6. You can run MySQL on Windows, Linux, and other operating systems.
7. HP is the sole distributor for SUSE Linux.

Essay question
Write at least two paragraphs comparing and contrasting the use of proprietary and open
source server solutions. Include issues relating to compatibility, implementation options, and cost.

Scenario questions
Scenario: Stay and Sleep
Stay and Sleep is considering implementing a reservation web server in-house. They have
not yet determined whether to use an Apache web server or an IIS web server. They expect
traffic of no more than 400 users per minute.

1. Access and compare
the optimal, standard, and minimum rack server configurations for each web server. What
advantages are provided by each?
2. Research the options provided by SQL Server. Obtain recommendations for two different
concurrent connection levels. Explain how the recommendations differ.


Chapter 6: Installing and Configuring Windows Small Business Server
You were introduced to server solutions in the last chapter. In this chapter, we focus more closely on one of those solutions.
Consider this situation. A small business wants to deploy its first Windows network server.
They have a limited budget, limited in-house technical skills, and only a few users. What they
need is an easy-to-manage, cost-effective solution.

What to do? Microsoft Windows Small Business Server 2011 (Windows SBS) was developed
exactly for situations like this. HP makes the solution even easier by offering servers with
Windows SBS already installed so that small business customers can have it up and running in
minimal time.
You have already been introduced to Windows SBS. We dig in deeper in this chapter, focusing
on Windows SBS implementation and use. This includes a look at installation, configuration
procedures, an introduction to available management tools, and a few notes on managing
Windows services.

In this chapter, you will learn how to:
Compare Windows SBS edition options.
Install Windows SBS.
Manual installation
Gen8 Provisioning
Recognize and use embedded management utilities.
Configure network and Active Directory support.
Manage services.
Set up and use file and print services.

About Windows SBS

We will start by reviewing a few key points about Windows SBS. Windows SBS 2011 is built on
the Windows Server 2008 R2 operating system. The most significant difference between
Windows SBS and traditional Windows Server is that Windows SBS comes with some of the
more popular solutions pre-enabled or preinstalled. The features supported depend on which
Windows SBS edition you select.
Windows SBS Essentials comes with support for:
Cloud services
PC Backup
Active Directory
File and print server roles
Cloud services
Hardware and software resources delivered as a service by way of the Internet or from an internal server.

SBS Essentials is also limited in its client support, with no more than 25 users allowed.
Standard edition lets you have up to 75 users, and also supports:
Microsoft Exchange Server 2010 with SP1

Microsoft SharePoint 2010 Foundation

Microsoft SQL Server 2008 R2 Express
Windows Server Update Services (WSUS) 3.0
Remote Web Access
Windows Server Update Services (WSUS)
Process whereby Windows updates are downloaded from Microsoft to a local server and then redistributed through the local network.
Remote Web Access
A service that enables a remote user to access a network computer, as well as shared files and folders, from a remote
location through Internet access.

This bundled approach and its intuitive GUI-based management tools make Windows SBS an
ideal first server for many (if not most) small business networks. In addition, the pricing and
licensing structure is designed to make Windows SBS a cost-effective solution.
Windows SBS also provides features to support employees who need to access the network
from remote locations. Employees have easy access to information that they need. They can
also access their office computers (including documents and files) from a remote location
through a web browser. Each user has a personalized web address that allows access to his or
her computer.
Most Windows SBS implementations are based around a single server. The bundled approach
used with Windows SBS makes this possible. However, you have the option of implementing
an additional server through the SBS Premium add-on.

Windows SBS interface

The presence of a GUI interface is nothing new to Windows users, but you will find that the
Windows SBS interface is structured a little differently than other Windows products. By default,
after you log in, Windows SBS takes you directly into the Windows SBS 2011 Standard
Console (Figure 6-1).

Figure 6-1: Windows SBS Standard Console

The Standard Console gives you quick access to an array of management tools and utilities.
These selections include:
Home
Provides a checklist of post-installation tasks and displays a summary of network status.
Users and Groups
Lets you manage users, user roles, and groups.
User role
Defined set of access permissions, mail quotas, remote access permissions, and disk quotas that can be applied to users as
a way to simplify user management.

Network
Lets you manage network computers and connectivity settings.
Shared Folders and Web Sites
Used to create and manage shared folders and websites.
Backup and Server Storage
Allows you to configure server backup options and manage data storage locations.
Reports
Lets you configure and manage report definitions, including custom reports.
Security
Allows you to view virus and malware protection and firewall settings for clients, as well as server antispam and firewall settings.

The taskbar has links to other useful utilities. One is Server Manager. Server Manager lets you
manage roles and Windows features, use diagnostic utilities, configure tasks and services, and
manage disk storage (Figure 6-2).

Figure 6-2: Server Manager

You can also launch a Windows PowerShell prompt on the desktop (Figure 6-3).

Figure 6-3: Windows PowerShell

Windows PowerShell is a command-line execution shell and scripting language. PowerShell is

designed as an additional system administration tool for IT professionals. Along with a powerful
scripting language that lets you perform custom tasks, there are also several built-in
commands, known as cmdlets, that perform specific management tasks.
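For example, a few commonly used cmdlets look like this (a brief illustrative sketch, not taken from this book; the output depends on the server's configuration):

```powershell
# List services whose names start with "W", sorted by running state.
Get-Service -Name W* | Sort-Object Status | Format-Table Name, Status

# Chain cmdlets with the pipeline: find automatic-start services that are
# not currently running -- a quick health check on a new server.
Get-WmiObject Win32_Service -Filter "StartMode='Auto' AND State<>'Running'" |
    Select-Object Name, State

# Get-Help displays usage information for any cmdlet.
Get-Help Get-Service
```

Cmdlets follow a consistent verb-noun naming convention (Get-Service, Stop-Service, and so on), which makes new cmdlets easy to discover.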
You can also launch Windows Explorer from the taskbar (Figure 6-4).

Figure 6-4: Windows Explorer

This is the standard Windows Explorer, the same as that used in other members of the
Windows family.
The Windows SBS desktop itself is effectively the same as for other Windows family products.
Initially, the only desktop shortcuts are for the Recycle Bin and Windows SBS Console (Figure 6-5).

Figure 6-5: Standard Desktop

Additional commands and utilities are available through the Start menu (Figure 6-6).

Figure 6-6: Start menu

The menu itself is much the same as for any other Windows family product. Default menu items
are shown in Figure 6-6.

HP and Windows SBS

HP offers preinstalled solutions based on either Windows SBS edition. For example, you can
purchase the HP ProLiant MicroServer with Windows SBS Essentials preinstalled to allow for
fast deployment (Figure 6-7).

Figure 6-7: HP ProLiant MicroServer and Windows Small Business Server 2011 Essentials

This product comes configured with RAID disk storage, which helps ensure data availability.
HP also provides selector tools that help you choose appropriate server platforms for Windows
SBS. Select an operating system to view compatible servers (Figure 6-8).

Figure 6-8: Server Selector

You can then click to display detailed information about any particular server (Figure 6-9).

Figure 6-9: Server Details

This makes it easy to compare products before you make your final selection.
You can access a Windows operating system selector tool at

Scenario: BCD Train

BCD Train provides both live classroom and online training. They are preparing to open
several field office training locations, which will vary somewhat in size and support
requirements. They will be small offices with minimal support requirements, so BCD Train
has decided to base each office's network around a Windows SBS deployment.
Discuss the information needed when you select the OS edition and hardware platform to be
used in each of the offices. Discuss the selection tools provided by HP and how they can aid
in making deployment decisions.

A reliable deployment begins with software installation. Although some HP products come with
Windows SBS already installed, you also have the option of purchasing a bare metal server
and installing Windows SBS yourself. The advantage of purchasing a preinstalled solution is
that your requirements to get the server up and running are minimal. However, more
customized configurations require that you install the operating system yourself. You can install
Windows SBS using a manual installation or by using SmartStart. To get a good idea of what is
involved, we will now look at a manual installation first.

Manual Installation
Most of the initial steps in a manual installation are automatic. First, you insert the installation
media in the drive and restart the computer. The setup program attempts to detect mass storage
devices that exist in your computer. If you have one or more unpartitioned hard disks, you are
prompted to create a destination partition, or to let setup automatically partition all available
space on the hard disk as an installation destination.
If you have a SCSI controller that is not on the Windows Hardware Compatibility List
(HCL) or a controller that was not detected, you must manually install appropriate drivers from a
manufacturer's driver disk.
The installer copies and expands the Windows files, installs features, and installs updates
(Figure 6-10).

Figure 6-10: Initial Installation Steps

After installing the updates, if any, the installer updates the Windows registry (Figure 6-11).

Figure 6-11: Registry Update

You are informed that the initial installation steps are complete (Figure 6-12).

Figure 6-12: Initial Steps Complete

At this point, installation becomes an interactive process, and you are prompted for information
needed to configure your server.
First, you must choose whether this is a new (clean) installation or a migration. With a clean
installation, you create a Windows Active Directory (AD) domain as part of the installation
process. During a migration, you add the server to an existing domain (Figure 6-13).

Figure 6-13: Installation Options

Next, you are prompted for clock and time zone settings. This lets you adjust the date and time
and select your local time zone (Figure 6-14).

Figure 6-14: Clock and Time Zone Settings

You are then prompted for basic network configuration parameters. The default is to configure
the server to use DHCP to automatically configure network settings (Figure 6-15). You can also
manually enter the server's IP address and default gateway address. The prompts during
installation ask for IPv4 address information. It is a best practice to manually assign server
addresses based on a previously designed addressing scheme. You might need to work with

the network administrator to obtain the IP address for the server.

Figure 6-15: Network Configuration
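If you prefer to script the static address assignment, Windows Server 2008 R2 supports doing so with netsh from an elevated command or PowerShell prompt. The adapter name and addresses below are placeholders only; substitute values from your own addressing scheme:

```powershell
# Assign a static IPv4 address, mask, and default gateway to the adapter
# named "Local Area Connection" (all values here are example placeholders).
netsh interface ipv4 set address name="Local Area Connection" source=static address=192.168.16.2 mask=255.255.255.0 gateway=192.168.16.1

# Point the server at a DNS server (often the SBS server itself after setup).
netsh interface ipv4 set dnsservers name="Local Area Connection" source=static address=192.168.16.2 register=primary

# Verify the resulting configuration.
netsh interface ipv4 show config name="Local Area Connection"
```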

After you configure network address information, setup configures network connectivity and
downloads and installs available updates. If you choose to install updates, setup needs to
reboot the computer after the updates have been installed (Figure 6-16).

Figure 6-16: Connecting the Server

Now, you are prompted for the company name and address. This information is optional (Figure 6-17).

Figure 6-17: Company Information

You need to enter a server name and the network domain name. If you entered company
information, default computer and domain names will be based on the company name (Figure 6-18).

Figure 6-18: Server and Domain Name

WARNING! You cannot change this information after installation.

You need to create an Administrator account by specifying first and last name, administrator
user name, and password. You will use this account to manage Windows SBS (Figure 6-19).

Figure 6-19: Administrator Account

The password must meet the default length and complexity criteria. It must be at least eight
characters long and include characters from at least three of the four character types: uppercase letters, lowercase letters, numbers, and symbols.
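The three-of-four rule is easy to express in code. The following PowerShell sketch is an illustration only, not Microsoft's actual validation logic, and the function name is made up:

```powershell
# Check a candidate password against the default SBS-style rule: at least
# 8 characters, drawn from at least 3 of 4 categories (uppercase letters,
# lowercase letters, numbers, symbols).
function Test-SbsPassword([string]$Password) {
    if ($Password.Length -lt 8) { return $false }
    $categories = 0
    if ($Password -cmatch '[A-Z]') { $categories++ }  # -cmatch is case-sensitive
    if ($Password -cmatch '[a-z]') { $categories++ }
    if ($Password -match '[0-9]')  { $categories++ }
    if ($Password -match '[^A-Za-z0-9]') { $categories++ }
    return ($categories -ge 3)
}

Test-SbsPassword 'Passw0rd!'   # True  -- upper, lower, digit, and symbol
Test-SbsPassword 'password'    # False -- long enough, but one category only
```

The two sample calls show a password that satisfies all four categories and one that satisfies only one.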
Setup displays a summary of the information you entered. At this point, you still have the option
of going back and making changes.
After you click Next, the installation finishes and you will be unable to change the server or
domain names (Figure 6-20).

Figure 6-20: Final Screen

Setup completes the process (Figure 6-21). It may be necessary to restart the computer multiple times.

Figure 6-21: Final Installation

As you approach the end of the installation (after the last restart), you are prompted to enter a
problem ID and comment for the restart. You can enter any information you want. This
information is logged, as are any subsequent shutdowns, for reference and troubleshooting
(Figure 6-22).

Figure 6-22: Shutdown Information

Setup reports that the installation is complete (Figure 6-23).

Figure 6-23: Successful Installation

Click Start using the server to display the login prompt. You are prompted to log in with the
default Administrator account (Figure 6-24). That account will have the same initial password
as the password you specified for the administrator account you created during installation.

Figure 6-24: Log-in Prompt

The first time you log in, you are taken to the Standard Console that you saw earlier in the
chapter (Figure 6-25).

Figure 6-25: Standard Console

Now that you have seen what is involved in a full installation, we will look at setting up a
system using the HP SmartStart DVD that comes with an HP server.

SmartStart is a single-server deployment tool designed to simplify HP ProLiant G7 server (and
earlier) setup and provide a repeatable way to deploy consistent configurations. It is included in
the HP Insight Foundation suite for ProLiant.
Insight Foundation Suite
A set of management solutions that are designed to simplify server installation, deployment, configuration, and maintenance
throughout a server product's life cycle.

SmartStart products are at end-of-life (EOL), with no further development or updates coming; the last servers they ship with are ProLiant G7 servers.
SmartStart offers an easy-to-use, intuitive graphical interface that guides you through preparing
your system for the OS installation process. During this process, SmartStart performs
dependency and validity checks to ensure that all supported ProLiant software can be installed.
There are a number of assisted manual activities that can be performed through SmartStart as
part of your installation. Assisted activities include:
Performing validity checks to ensure you have the appropriate hardware
configuration before the OS installation.
Configuring server and array controller hardware using ROM-based setup utilities
(accessed either by pressing the F8 and F9 keys or by launching from SmartStart).
Validating the hardware configuration by booting from the SmartStart CD and
walking through interview questions to prepare your server for OS installation.
Completing the OS installation and automatic installation of the ProLiant Support
Pack server support software from the SmartStart CD.
You can also create a driver diskette from your SmartStart CD. You can create driver diskettes
from the latest ProLiant Support Packs (PSP) or Service Pack for ProLiant (SPP) available from
HP. You can use the driver diskettes to apply the most recent device drivers during the
installation process. For example, you will need to provide appropriate drivers if your mass
storage device is not recognized by the drivers provided with your Windows installation set.
Refer to for supported servers and
You might see the term PSP used in some places and SPP used in others. PSP was
the name HP previously used to refer to a package containing updates. HP now calls the
package containing updates an SPP. The latest SPP can be downloaded from
The best way to see what SmartStart can do is to take a look at it in action.

SmartStart installation
SmartStart gathers information about your installation, starting with language (Figure 6-26). The
language and keyboard settings configured here will apply to the SmartStart tool itself, as well
as the operating system you will install.

Figure 6-26: Language Selection

Next comes the end user license agreement (Figure 6-27). You must agree to the license
agreement to continue.

Figure 6-27: License Agreement

You are then prompted to select the operation that you want to start (Figure 6-28).

Figure 6-28: Select Operation

You can choose to:

Install the operating system.

Run a saved installation.
Perform system maintenance.
Perform a system erase, which cleans the hard disks and takes the system back to a base state.
Reboot the system.
If you choose to install an operating system, you are prompted to select a destination drive
(Figure 6-29).

Figure 6-29: Installation Destination

You need to identify the type of installation source, the operating system you are installing, and
the source of the installation files. For the type of installation source, you can choose either HP-Branded Media or Retail Media (Figure 6-30).

Figure 6-30: Media Type

In this case, we are installing HP-branded media. From here, you can expand the selection list
and choose a specific OS (Figure 6-31).

Figure 6-31: OS Selection

We are going to install an HP-branded version of Windows SBS. SmartStart displays potential

issues as warnings before proceeding with the installation (Figure 6-32). For example, SmartStart
has detected less than the recommended amount of physical memory, but you have the option
of continuing the installation.

Figure 6-32: Memory Concern

With some OS choices, you are prompted for additional information. For example, if you choose
Microsoft Windows Server 2008 R2, you are prompted to choose a specific edition (Figure 6-33).

Figure 6-33: Windows Server Standard Edition

You are prompted for the media source and media format for the installation, such as DVD
(Figure 6-34).

Figure 6-34: Media Selection

You are also prompted to make disk partitioning choices. You need to choose the file system,

which defaults to NTFS for a Windows installation. NTFS is required to install Windows Active
Directory. You also have to set the boot partition size (Figure 6-35).

Figure 6-35: Disk Partition Selections

You are prompted for product setup information that will be used to configure the OS during
installation (Figure 6-36).

Figure 6-36: Product Setup Information

This is information that you are also asked to provide during a manual installation. Here, the
information is requested first and applied automatically.
SmartStart asks whether you want to install SNMP and, if you do, prompts you for configuration
information. Here we are installing SNMP and the HP Insight Management WBEM providers
(Figure 6-37).
Simple Network Management Protocol (SNMP)
Industry standard protocol that provides monitoring and management functionality for IP network devices.
Web-based Enterprise Management (WBEM)
A standard for managing server hardware, operating systems, and applications over an enterprise network.
Insight Management WBEM providers
Software that uses WBEM to send information about servers and devices to HP SIM.
HP Systems Insight Manager (SIM)
A utility that allows you to monitor and manage multiple servers.

Figure 6-37: Install SNMP

If you choose to install SNMP, you must also provide SNMP configuration information (Figure 6-38).

Figure 6-38: SNMP Configuration

If you want to enable remote access to the system through Windows Management
Instrumentation (WMI), you must also allow SmartStart to configure Windows Firewall (Figure 6-39).
Windows Management Instrumentation (WMI)
Microsoft's implementation of WBEM for accessing and sharing management information over an enterprise network.

Figure 6-39: Windows Firewall Configuration

You are prompted to choose an option for applying ProLiant Support Packs (PSP) (Figure 6-40).
Typically, you will choose Express.

Figure 6-40: PSP Installation

SmartStart displays a summary of installation information and prompts you to start the
installation (Figure 6-41). Clicking Next begins the installation.

Figure 6-41: Ready to Install

SmartStart begins by partitioning the hard disk and then keeps you informed of the installation
progress (Figure 6-42).

Figure 6-42: Installation Progress

SmartStart reboots the computer after the installation is complete.

Gen8 provisioning
Instead of SmartStart, ProLiant Gen8 servers use Intelligent Provisioning, which can be
initiated during the server boot (POST) process. The goal of Intelligent Provisioning is to enable
you to configure and deploy HP ProLiant Gen8 servers and server blades more rapidly. The
tools you need to set up and deploy the servers are built in. Intelligent Provisioning also helps
to simplify system maintenance updates.
Rather than coming on a separate CD like SmartStart, Intelligent Provisioning comes
embedded on a flash chip on ProLiant Gen8 servers and Gen8 server blades. In addition, all
drivers and firmware are also embedded on the same flash chip. You can even connect to HP
during setup to install firmware updates during operating system installation. On an existing
installation, if the system software is out-of-date, Intelligent Provisioning automatically
downloads the latest update for you.
Pressing the F10 key during POST launches the Intelligent Provisioning software and gives
you access to configuration utilities. This makes the utilities even more readily available than
they were with SmartStart. Also, because the utilities provided are fully integrated with your
system, server installation goes even faster.
Intelligent Provisioning provides several enhancements over SmartStart, including:
The ability to update your drivers and system software by connecting directly to HP, performing firmware updates and installing an OS in the same step
The ability to provision a server remotely using HP iLO 4
Remote Support registration
A revised and simplified user interface (UI)
In addition to supporting a simplified manual operating system installation, the new UI gives
you automated installation options:
Recommended Install - express installation process that has HP-recommended
preset defaults and performs a software and firmware update if the network is
available at your location
Customized Install (Guided or Assisted install) - full HP interview with a start-to-finish
wizard for deploying HP ProLiant Gen8 servers
Because software, drivers and tools are already embedded, you can simply run the installation.
You do not need to locate installation and driver CDs/DVDs to complete your installation.
Intelligent Provisioning supports system configuration and firmware updates, but not
operating system installation, on ProLiant Gen8 SL servers.
Scenario: BCD Train
BCD Train needs to determine the most appropriate installation option to use when deploying
Windows SBS in each of its field offices.
Discuss key factors that will help determine the best installation option. Be sure to include

factors that could limit or exclude the use of a particular installation option.

Management Tools
It is important to know what management tools you have available and what they can do for
you. Windows SBS includes standard Windows utilities, plus the Windows SBS Console as an
easy-to-use management tool.
You had a brief introduction to some of the available management tools at the beginning of the
chapter. Now we will take a little closer look at some of these, along with a few additional
management options.

Windows SBS Console

As you have already seen, the Windows SBS Standard Console launches automatically when
you boot Windows SBS. There is also an Advanced Mode version of the console that lets you
perform some tasks not supported by the Standard Console.
Console Home (Figure 6-43) displays key status information about your network in the right-hand pane. The left pane lists common tasks for completing Windows SBS configuration and
setup. Click a task to see information and links to the appropriate tools. You will also be
told if you need to meet additional prerequisites.

Figure 6-43: Console Home

In this example, Updates is identified as critical because the computer has not received and
applied the most recent Windows updates. Regular updates are important to provide OS and
security fixes. Backup shows a warning because, in this case, regular backups have not been
configured for the computer.
The drop-down arrow to the right of each status item provides additional details.

You can also click Frequent Tasks and Community Links to get easy access to common
tasks, like creating new users or adding a shared folder.
Click Users and Groups to manage user accounts, groups, and user roles (Figure 6-44).

Figure 6-44: Users and Groups

Users and Groups and most of the remaining selections are organized in much the same way.
The left-hand pane lists objects of the type you are managing. On the right are task-related
items and help links.
Click Network to view network items and related tasks. Management options are located on
tabs named Computers (including client computers), Devices, and Connectivity (Figure 6-45).

Figure 6-45: Network Management

The Devices tab lets you manage printers and fax machines.
The Connectivity tab brings up a list of connectivity-related tasks (Figure 6-46).

Figure 6-46: Connectivity Tasks

Shared Folders and Web Sites gives you the tools to manage shares and websites hosted on

the server (Figure 6-47). You can also redirect user folders from client computers to the server.

Figure 6-47: Shared Folders and Web Sites

The Web Sites tab lets you browse websites, enable or disable websites, view site properties,
and manage permissions (Figure 6-48).
Three websites are created when you install Windows SBS:
Remote Web Access
Outlook Web Access
Internal Web site
Remote Web Access was introduced earlier in this course. Through the Remote Web Access
website, you can remotely access network computers and shared resources. The Outlook Web
Access Web site lets clients access and manage their e-mail through a standard web browser.
The Internal Web site is a standard website you can use like any other website, except that it is
targeted to your internal users. For example, you might use it for internal company announcements.

Figure 6-48: Web Sites

Backup and Server Storage lets you manage server backup and manage installed hard disks
(Figure 6-49).

Figure 6-49: Backup and Server Storage

From the Backup tab you can back up the server or restore from the installed backup. You
must have at least one removable hard disk to act as the backup destination.
Microsoft recommends two removable hard disks, so that one can be stored as an
offsite backup.
The Server Storage tab lets you manage mass storage devices and stored data (Figure 6-50).

Figure 6-50: Server Storage

You can use available tasks to move data to a different location, including:
Exchange Server data
SharePoint Foundation data
Users' shared data
Users' redirected Documents data
Windows Update repository data
Reports displays reports already defined on the server and lets you create new reports (Figure 6-51).

Figure 6-51: Reports

Select a report to:

View report properties.
Generate a copy of the report.
Generate the report and have it emailed.
The Security tab lets you view status information for network clients and servers (Figure 6-52).

Figure 6-52: Security

Status information is taken from the utilities used to manage security, such as the Windows
Action Center and Windows Firewall.
The Updates tab displays information about updates and lets you change the update settings.

Control Panel
The Control Panel in Windows SBS is the same as in other Windows products. The initial
screen lists general categories of settings that you can configure (Figure 6-53).

Figure 6-53: Control Panel

The settings categories are:

System and Security
Allows you to view computer status information, configure Windows Firewall,
manage updates and update settings, configure power options, and access Windows
Administrative tools, including hard disk management tools.
Network and Internet
Allows you to view and manage network settings, including network adapter and
TCP/IP configuration parameters. The Internet options let you manage browser
settings, not IIS.
Hardware
Allows you to manage devices and printers, set AutoPlay options, and manage sound,
attached display(s), and power management options.
Programs
Allows you to uninstall programs and view installed updates. You can enable or
disable Windows optional features. You can also change default settings for files and
programs, such as the default program used to open a type of file.
User Accounts
Allows you to manage local accounts, those local to this server rather than part of the
Active Directory. You can also manage access security for Windows CardSpace and
the Windows Credential Manager.
Appearance
Allows you to adjust display settings like screen resolution and desktop background,
customize the Start menu and taskbar, configure ease of access settings, manage
folder options, and manage installed fonts.

Clock, Language, and Region
Allows you to manage regional information like date and time, time zone, region-specific settings, and supported languages.
Ease of Access
Manage accessibility settings for visual display, sounds, mouse, and keyboard.
Many of the tasks in the Windows SBS Console link back to settings in the Windows Control Panel.

Server Manager
Server Manager is a Windows Server 2008 R2 management utility that is also supported with
Windows SBS. A detailed look at Server Manager is beyond the scope of this course, but we
will look at a few key areas.
By default Server Manager displays summary information about the server. It also provides
links to some management and configuration tasks (Figure 6-54).

Figure 6-54: Server Manager

The Navigation tree in the left pane lists management areas. These are:
Roles
Allows you to manage enabled roles, like Active Directory, File services, and Web services.
Features
Allows you to manage installed features, such as group policy and message queuing (MSMQ).
Diagnostics
Provides access to diagnostic tools including the Event Viewer, Performance (the
performance monitor), and Device Manager.
Configuration
Allows you to manage services, define and manage tasks, and configure advanced security settings for Windows Firewall.
Storage
Allows you to configure and manage backup and manage installed hard disks.
Group policy
A means by which users, groups, and Active Directory can be managed through predefined policy settings.
Message queuing (MSMQ)
A feature that allows applications that are running at different times to communicate across heterogeneous networks and
with computers that are temporarily offline.

Many of the management options are available through other management utilities, such as
Windows SBS Console. For example, you have links to configure server backup from both
Server Manager and Windows SBS Console. Most of the management links open utilities that
can also be launched directly from the Administrative Tools submenu in the Start menu.
For example, if you expand Web Server (IIS) and select Internet Information Services
Manager (IIS), IIS Manager opens in the right-hand pane (Figure 6-55).

Figure 6-55: Server Manager - IIS

While we are discussing Server Manager, we will look at role management, an important part of
managing a Windows SBS server.
Click Roles in the navigation pane to view Roles summary information (Figure 6-56). The

status information is taken from the Windows event logs. You can also add and remove roles.

Figure 6-56: Server Manager Roles

You can click on a role that displays an error or warning for more information about that error or
warning (Figure 6-57). Error information is taken from the Windows event logs.

Figure 6-57: Error Events

Double-click the error to view the Event log entry for more detailed information about the error
(Figure 6-58).

Figure 6-58: Error Details

You can select a role to view summary information. In addition to events, you can:
View related system services and their status.

Run the Best Practices Analyzer to view suggestions for the role.
View installed role services, if any.
Link to advanced management tools.
There are also links to appropriate references and help files (Figure 6-59).
The roles shown here are the roles enabled by default during Windows SBS installation.

Figure 6-59: Role Management

Expand the role to access available management utilities. In this example, expanding Active
Directory Domain Services (AD DS) gives you access to Active Directory Users and Computers
and Active Directory Sites and Services. These are the two primary utilities for managing AD.
Before leaving Server Manager, we need to take a look at the Best Practices Analyzer. Part of
making sure that you optimize both security and performance is verifying that roles are
configured properly. The Best Practices Analyzer scans the role and reports any errors or noncompliance (Figure 6-60).

Figure 6-60: Best Practices Analyzer

As mentioned before, most of the utilities linked to Server Manager can be launched directly
from the Start menu's Administrative Tools submenu.

Other administrative tools

The Administrative Tools menu links you to Windows tools and utilities. The list of utilities
available for Windows SBS is very similar to what you would see for other members of the
Windows Server family (Figure 6-61).

Figure 6-61: Administrative Tools Menu

Sometimes, when you are searching for the appropriate tool, just seeing it in the tools list will
help you remember what you need to use.

Configuring Windows SBS

Windows SBS is an extensive subject, and far more than we could cover completely in this
chapter. Complete books have been written on the subject. For this course, we will limit our
coverage to a few key areas that will aid in making sure that your implementation is secure,
reliable, and performing at optimal levels. We start with a couple of general areas before
looking at some specific server solution roles.

About devices and drivers

We begin our discussion with devices and drivers. Windows maintains information about
system devices and connected peripherals in Device Manager (Figure 6-62).

Figure 6-62: Device Manager

Expand a device category to see the actual devices installed. You may see some devices listed
in an Other devices category. These are devices that Windows was not able to recognize and
configure automatically.
Each device has an associated device driver that enables the operating system to
communicate with and control a device. In fact, you may have multiple device drivers available.
When a new device driver update is installed, the old driver is kept on the hard disk in case you
need to roll back to an earlier version.
Device driver
Software that places a device under control of the operating system. A device driver provides an interface that the operating
system can use to communicate with the device.

The Device Manager lets you update device drivers and also disable or uninstall a device.
Disabling a device makes it unavailable, but does not remove the device driver and can
be a useful troubleshooting tool.
To view and manage device drivers, right-click a device, click Properties, and click the Driver
tab (Figure 6-63).

Figure 6-63: Device Driver Properties

Click Update Driver to locate and install an updated device driver. If you experience problems
after installing an updated driver, you can use Roll Back Driver to return to the previous driver.
Roll Back Driver is disabled unless there is an older version available for rollback.

Managing services
Sitting at the core of any operating system is the kernel. It is the software layer sitting between
applications and the system hardware. Software services supplement and extend the kernel.
Many background server activities are implemented through services.
Kernel
Main component of a computer's operating system.
Service
Operating system or third-party software that runs in the background in support of operating system and application functions.

Service management is an important part of protecting and optimizing a computer. It is

recommended that you disable any services that are not needed, both to free up computer
resources and to reduce the potential attack surface.
Services are managed through the Services utility. The initial screen lists installed services,
and for each presents a brief description and the status, startup type, and user context under
which the service logs on when it starts. Click a service to view a more detailed description of
that service (Figure 6-64) in the Extended pane.

Figure 6-64: Services

From here, you can start a stopped service; stop, pause, or restart a running service; or resume a paused service.
Right-click a service and select Properties to view and manage service properties (Figure 6-65).

Figure 6-65: Service Properties

You set the startup type from the General tab. You can choose:
Automatic (Delayed Start)
Automatic
Manual
Disabled

Setting the startup type to Disabled prevents the service from starting.
The Log On tab lets you specify the account used for log on when the service starts (Figure 6-66). This sets the security context for the service while it is running.

Figure 6-66: Log On Tab

The Recovery tab lets you specify the actions to be taken when a service fails (Figure 6-67).
You can choose to take no action, launch a program, restart the service, or restart the computer.

Figure 6-67: Recovery Tab

Before disabling a service, you need to check the Dependencies tab (Figure 6-68).

Figure 6-68: Dependencies

The Dependencies tab identifies other services that this service depends on, as well as
services that depend on it.
If you disable a service, any services that depend on that service will not be able to start.
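The check that the Dependencies tab supports can be sketched in a few lines of Python. The service names and dependency relationships below are hypothetical examples, not the actual dependencies on your server:

```python
# Hypothetical dependency map: each service lists the services it depends on.
deps = {
    "Workstation": ["Network Store Interface Service"],
    "Netlogon": ["Workstation"],
    "Computer Browser": ["Workstation", "Server"],
}

def dependents(service, deps):
    """Return every service that directly or indirectly depends on `service`,
    i.e. everything that would fail to start if `service` were disabled."""
    found = set()
    changed = True
    while changed:
        changed = False
        for svc, needs in deps.items():
            # svc breaks if it needs the disabled service, or needs a broken one
            if svc not in found and (service in needs or found & set(needs)):
                found.add(svc)
                changed = True
    return found

# Disabling Workstation would also break Netlogon and Computer Browser.
print(sorted(dependents("Workstation", deps)))  # ['Computer Browser', 'Netlogon']
```

Running the same check for a leaf service such as Netlogon would return an empty set, meaning it is safe to disable from a dependency standpoint.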

Network configuration
Windows SBS is designed to be the base around which you can build a small business
network. Because of this, network configuration is a very important issue. Exactly what you
need to do will depend on the roles that you plan for Windows SBS to fill. Some configuration
settings can be accessed through Server Manager, others through the Control Panel, and still
others directly from the Start menu.
Configuration requirements primarily relate to TCP/IP parameters and support services. These include:
Basic communication properties: IP address, subnet mask, and default gateway.
Name services: DNS and, for older clients, WINS.
Address management: automatic address assignment through DHCP.
Subnet mask
Value used to identify the portion of an IP address that is used to define the network address and the client computer

Default gateway
In a TCP/IP network, the default router through which packets are sent when destined for a different subnetwork.
Windows Internet Naming Service (WINS)
Windows service used to manage NetBIOS names.
NetBIOS name
Older computer naming system that uses names no longer than 15 characters.
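The roles the subnet mask and default gateway play can be illustrated with a short Python sketch using the standard `ipaddress` module. The addresses here are invented examples, not values from any particular network:

```python
import ipaddress

# Hypothetical host configured with an IP address and subnet mask.
host = ipaddress.ip_interface("192.168.16.2/255.255.255.0")

# The mask splits the address into a network portion and a host portion.
print(host.network)          # 192.168.16.0/24 -- the network address
print(host.network.netmask)  # 255.255.255.0

# A host uses the mask to decide whether a destination is on the local
# subnetwork or must be forwarded to the default gateway.
local_dest = ipaddress.ip_address("192.168.16.50")
remote_dest = ipaddress.ip_address("10.0.0.5")
print(local_dest in host.network)   # True  -> delivered directly
print(remote_dest in host.network)  # False -> sent to the default gateway
```

This is exactly the decision described in the default gateway definition above: packets for a different subnetwork go to the router.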

While setting up your network, you need to keep performance and security in mind. You want to
make sure that you do not inadvertently do anything that would make the server susceptible to
attack or degrade server performance.

TCP/IP configuration
Windows SBS is configured for automated IPv4 address assignment during installation by
default. You typically configure a Windows SBS server with a static IP address to ensure that it
has a known address. You can modify the IP configuration after installation. You can access
TCP/IP properties (Figure 6-69) through the Control Panel.

Figure 6-69: Default Addressing

Figure 6-70 shows a static IP address configuration for a server.

Figure 6-70: Static IP Address

There is an advantage to using a static address. Windows SBS, by default, will be the DNS
server for your network. When configuring network clients, you will use this address as the
primary DNS address for client computers. In this example, the Preferred DNS server is this
Windows SBS computer, which is a DNS server configured with an Active Directory-integrated
DNS zone.
Active Directory-integrated zone
A DNS zone that is stored in the Active Directory database and is replicated to all domain controllers, along with other Active
Directory data.

If you assign Windows SBS a static address, be sure to remove that address from the
DHCP scope so that DHCP does not accidentally try to assign it to another computer.
DHCP scope
IP addresses available for assignment from a DHCP server. You can also include additional IP configuration settings as part
of the scope.

Click Advanced to configure advanced TCP/IP configuration parameters. These include:

Additional static IP addresses
Additional static gateway addresses
DNS servers, in order of use

WINS servers
You must configure the server with a static IP address before you can configure
additional IP addresses and default gateways.

Network support services

The DNS role is enabled and configured with default parameters during setup. If you select
either the DNS or DHCP role from the Configuration pane in Server Manager, you can view additional
configuration recommendations. For example, the configuration settings available for the DNS
role are shown in Figure 6-71.

Figure 6-71: DNS Recommendations

You can manage DNS from Server Manager or launch DNS from the Administrative Tools
menu. From there, you can view and manage DNS configuration and database
information, as well as manually add new resource records associating resources with IP
addresses (Figure 6-72).

Figure 6-72: DNS Utility

This example shows a default DNS configuration. This is what you would see initially after
installation. The only computer entries are for the Windows SBS server, which is configured as
the DNS server and the Active Directory (AD) domain controller.
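Conceptually, the zone shown here is a table of resource records mapping names to addresses. A minimal Python sketch follows; all names and addresses are invented for illustration:

```python
# A toy DNS zone as a table of A records, similar in spirit to the records
# shown in the DNS utility. Names and addresses are hypothetical.
zone = {
    "server.contoso.local": "192.168.16.2",    # the Windows SBS server itself
    "printer.contoso.local": "192.168.16.10",  # a manually added resource record
}

def resolve(name, zone):
    """Look up the IP address registered for a host name."""
    try:
        return zone[name]
    except KeyError:
        return None  # a real DNS server would forward the query upstream

print(resolve("printer.contoso.local", zone))  # 192.168.16.10
print(resolve("unknown.contoso.local", zone))  # None
```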
Even though they usually are not needed in a small business network, you have the option of
deploying additional DNS servers. Because your Windows SBS server is the primary server for
the domain, any additional DNS servers you deploy would receive their DNS information from
Windows SBS.
Primary server
Name server storing the definitive versions of resource records.

Remember, you cannot change the computer name or the fully qualified domain name
(FQDN) after installation.
fully qualified domain name (FQDN)
The computer name that includes the host name and the domain name, with each component separated by dots.

The DHCP server role is enabled, but not configured, during installation. As with DNS, Server
Manager can provide you with configuration recommendations (Figure 6-73).

Figure 6-73: DHCP Recommendations

Launch the DHCP utility to add a scope and configure other parameters for DHCP (Figure 6-74).

Figure 6-74: DHCP Utility

To define a new scope, select New Scope from the Action menu. This launches the New
Scope Wizard. The Wizard will step you through the process of defining a scope.
The process begins with a Welcome page (Figure 6-75). Click Next to continue.

Figure 6-75: New Scope Wizard

You are prompted for a scope name and description (Figure 6-76). Give the scope a name that
makes it easy to identify in case you need to edit it later.

Figure 6-76: Scope Name and Description

You are next prompted to enter the starting and ending IP addresses and the subnet mask
(Figure 6-77). This information identifies the addresses that are available to lease to clients.

Figure 6-77: IP Address Range

You can also enter address exclusions (Figure 6-78). These are addresses from within the
scope range that will not be assigned to DHCP clients. For example, these might be assigned
as static addresses on network computers or printers.

Figure 6-78: Address Exclusions

The lease duration sets how long a client can use the address before the lease expires and the
client must renew the lease or lease a new address (Figure 6-79). The lease duration defaults
to 8 days.

Figure 6-79: Lease Duration

DHCP configuration options are additional settings that will be applied to a client
that leases an address (Figure 6-80).

Figure 6-80: DHCP Options

Available options include defining one or more router addresses, and a DNS parent domain,
DNS server address, and WINS server address.
DNS Parent Domain
The DNS domain name that clients append to their host names when registering and resolving names.

Finally, you will be prompted to activate the scope (Figure 6-81). At that point, the DHCP server
can start leasing addresses to clients.

Figure 6-81: Activate Scope
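The scope the wizard builds can be modeled in a few lines of Python. The name, address range, exclusion, and lease values below are hypothetical examples:

```python
from datetime import timedelta
from ipaddress import ip_address

# A sketch of a DHCP scope as defined by the New Scope Wizard.
scope = {
    "name": "Office LAN",
    "start": ip_address("192.168.16.100"),
    "end": ip_address("192.168.16.200"),
    "exclusions": {ip_address("192.168.16.150")},  # e.g. a printer's static address
    "lease": timedelta(days=8),                    # default lease duration
}

def next_free(scope, leased):
    """Offer the lowest address in range that is neither excluded nor leased."""
    addr = scope["start"]
    while addr <= scope["end"]:
        if addr not in scope["exclusions"] and addr not in leased:
            return addr
        addr += 1
    return None  # scope exhausted

leased = {ip_address("192.168.16.100")}
print(next_free(scope, leased))  # 192.168.16.101
```

Note how an excluded address is simply skipped during allocation, which is exactly why the server's own static address should be excluded from the scope.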

Other configuration settings

We should make quick mention of some other options that you may want or need to configure
for Windows SBS. Things that you may need to consider are:
SNMP settings
eDirectory compatibility
RADIUS authentication
LDAP support
If you plan to use an SNMP management server to help manage your network, SNMP settings
go from optional to mandatory. SNMP properties you will need to configure include:
SNMP Agent
Configure the contact name, location, and service-based options.
SNMP Traps
Configure SNMP notification events, as well as the device to which the traps are
sent, known as the trap destination.
SNMP Security
Configure security parameters such as SNMP community names and authentication.
You can specify one or more community names along with management permissions
associated with that community.
Community name
Name used to group managed devices and to authenticate SNMP management requests.

It is important that you coordinate SNMP settings with IT management. The settings
need to be consistently applied to all devices managed in the community.
If your network connects to the Internet, you need to make sure that you block SNMP-related
traffic into and out of your network.

Active Directory Domain Services (AD DS) Configuration

Windows SBS setup installs, enables, and configures AD support for you during installation.
AD is configured in a default configuration with a single domain based on the domain name
that you specify and a single default site.
Active Directory site
AD sites are used to represent the physical topology of a domain and manage communication between physical locations.

Server Manager gives you direct access through the Navigation tree to Active Directory Users
and Computers and to Active Directory Sites and Services (Figure 6-82).

Figure 6-82: AD DS in Server Manager

As with other roles, Server Manager provides you with a list of configuration recommendations.
Some apply to AD DS configuration in general, rather than focusing directly on a small
business environment. Many are more applicable to an enterprise network environment. Others
apply in situations that may occur, but are uncommon in a small business, such as a network
that is spread across a wide geographic area (Figure 6-83).

Figure 6-83: AD DS Configuration Recommendations

Server Manager also gives you access to AD tools and directories through links in the AD DS
window. Each is listed along with a brief description (Figure 6-84).

Figure 6-84: AD Tools

We will now take a look at some of the tools you are most likely to need while administering a
small business network. These are the same tools you use to configure and administer AD DS

on a computer running Windows Server 2008 R2.

Active Directory Administrative Center

First is the Active Directory Administrative Center. The Administrative Center lets you navigate
through your AD structure and manage user and computer objects (Figure 6-85). You also have
the option of customizing the Administrative Center to help meet your particular needs.

Figure 6-85: Overview

The Administrative Center opens at the Overview screen. From here you can change any
domain user's password and search for objects in AD. You also have links to useful information
about managing AD.
Click the domain name in the Navigation tree to view the organizational units (OUs) and built-in
containers defined in the domain. A list of tasks that you can perform is provided at the right
(Figure 6-86).

Figure 6-86: Computers Container

An OU has special properties a container does not have. For example, you can attach a
Group Policy object to an OU, but not to a container.
Organizational unit (OU)
Container object used to hold other AD objects. Used to categorize and manage objects.

Double-click an OU or container to view and manage its contents (Figure 6-87). The tasks that
you can perform are somewhat object-type specific.

Figure 6-87: Users Container Contents

You can also double-click an object to view and manage its properties (not shown). You can
also create new objects from this screen. In this example, you can create new domain users.
The Global Search page lets you search AD for objects (Figure 6-88). You can specify search
criteria to narrow the scope of the search.

Figure 6-88: Global Search

The task list, in this case, depends on the objects that are returned by the search.

Active Directory Users and Computers

You can also use Active Directory Users and Computers to manage AD objects (Figure 6-89).
As you saw with the Administrative Center, you can navigate through the AD structure, manage
OUs and containers, and manage objects.

Figure 6-89: Active Directory Users and Computers

The SBSUsers OU is the default OU for creating new users. When migrating to Windows SBS,
existing users are placed in the SBSUsers OU.
Rather than prompting with a task list, Active Directory Users and Computers makes the tasks
that you can perform available through the Action menu. One task that bears special mention,
available whenever an OU is selected, is Delegate Control. This launches the Delegation of Control Wizard, which lets you
delegate management of a domain object to a specified user or group, including the specific
tasks that you want to delegate.

Active Directory Sites and Services

If the network is spread across multiple physical locations, you may find it appropriate to
configure sites through AD Sites and Services (Figure 6-90).

Figure 6-90: AD Sites and Services

During installation, one default site is created that represents the network as a whole. After
installation you can create additional sites. One advantage of this in a geographically dispersed
network is that you can control replication of AD information between the sites. Sites can also
be used to direct clients to the physically nearest domain controller. Microsoft Exchange Server
also uses site definitions for mail routing.
Site definitions are independent of domain and OU structure.

Joining client computers

You can add a client computer to a domain through the computer properties. You must be
logged on as a local Administrator on the client computer to add it to a domain. You need to first
open the System properties, either from the Control Panel or by right-clicking Computer in the
Start menu and selecting Properties.
Figure 6-91 shows the System window.

Figure 6-91: System Window

Click Change settings to view and modify the domain and workgroup settings. Enter the

domain name to join the domain.

Scenario: BCD Train
Each field training office will be managed as a separate AD domain. The office will be able to
configure its domain to meet its particular requirements. Access to the main office will be
supported through computers that are not configured as part of the local domain, but as
remote VPN clients to the main office.
Discuss the types of configuration decisions that must be made at each field office. Describe
the tools available for configuring the offices.

Security management
Security is a critical issue for any network. It is vital that you protect your network and, central to
that, Windows SBS server. We have mentioned some things that you can do to help secure
Windows SBS. Here is a checklist of some other items that can help you secure your server
and network:
Authentication security
This is a first line of defense. At the very least, set minimum password length and
complexity requirements. Disable user accounts that are not currently in use. Monitor
security logs for unusual activity. Regular monitoring can often alert you to attacks
being organized from inside your company or to permission assignments that are too broad.
Windows SBS supports two authentication options. The default is to use Kerberos to
authenticate users and computers. Kerberos is an industry standard protocol that
makes functionality such as mutual authentication (a client and server authenticating
each other) possible. Computers running Windows NT 4.0 and earlier do not support
Kerberos. For these clients, Windows SBS supports an older authentication method,
NTLM authentication.
Securing network portals
Network attacks can originate from internal and external sources. If you are
connected to the Internet, you are at some risk of attack. You need to ensure that your
internal network is protected by firewalls filtering all traffic between your network and
the Internet.
Windows Firewall
In addition to configuring firewalls to protect your network, you need to configure
Windows Firewall to protect SBS server. It gives you a way to control the traffic into
and out of the server. If you need to provide access to an application outside of the
default configuration, you need to configure an exception for that application.
Anti-malware software
Every computer on the network should have up-to-date anti-malware software. This
is the best protection against viruses and other infections. However, for anti-malware
software to be effective, it must be kept up to date.
Network monitoring
Monitoring network activity is a great way to identify potential problems. You can

often detect attempted attacks early on and head them off before they become a
critical problem. Watch for sudden spikes in activity, especially with traditional targets
like web servers.
Windows Defender
Windows Defender is a free download from Microsoft. It provides real-time protection
against spyware and adware. These are sometimes discounted as less serious
threats, but both can put your computer and network at risk. Spyware, as the name
implies, collects information and sends it to a third party. This often includes personal
information that could lead to identity theft, but it could be information about your
network that could leave it vulnerable.
Windows Update
It is important to configure Windows Updates after installation. This ensures that you
receive critical updates, including regular security updates, to your operating system.
Microsoft releases updates on an as-needed basis. You can choose whether or not
critical updates are installed automatically.
Backup and Restore
One of the most important things that you can do to protect your server is to back it up
on a regular basis.
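The minimum length and complexity requirements mentioned under authentication security can be sketched as a simple check. The three-of-four character-class rule approximates the Windows complexity policy, and the eight-character minimum is an assumed value, not a mandated one:

```python
import string

def meets_policy(password, min_length=8):
    """Check a password against a simplified Windows-style policy:
    minimum length plus characters from at least three of four classes."""
    classes = [
        any(c in string.ascii_uppercase for c in password),  # upper case
        any(c in string.ascii_lowercase for c in password),  # lower case
        any(c in string.digits for c in password),           # digits
        any(c in string.punctuation for c in password),      # symbols
    ]
    return len(password) >= min_length and sum(classes) >= 3

print(meets_policy("Summer2024!"))  # True
print(meets_policy("password"))     # False: long enough, but one class only
```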
Tools like the Windows SBS console, Server Manager, and Action Center can help you monitor
your server and keep it safe. The Action Center reports any issues that it detects relating to
security or maintenance (Figure 6-92).

Figure 6-92: Action Center

Another tool for configuring server security is the Security Configuration Wizard. The Security
Configuration Wizard steps you through the process of securing important aspects of your
server. It does this by creating a security policy (or modifying an existing security policy) that
can be applied to any server on your network. You can also apply or roll back a security policy.

You start by choosing a server to act as the baseline for the policy, defaulting to the current
server. From there, you:
Identify the roles that the server is running to enable services and open needed ports.
Identify installed features to enable necessary services.
Select administrative options to be applied.
Select any additional services that you need to install on the server.
Select and enable firewall rules to control traffic filtering.
Configure registry security settings.
Configure LDAP security options.
Select authentication methods supported for client computers.
Configure audit policy.
The security policy will be created at this point. You then need to apply it manually.

File and print services

File and print services encompass a number of subjects. Just in file services, you have issues such as:
File systems
File shares
File and folder security
Distributed file system (DFS)
Disk quotas
Print services is also a major support area for most networks. Shared printers help reduce a
business's operational cost. In fact, HP even has printer solutions specifically designed to
reduce print-related costs.
The ability to share files has traditionally been one of the prime motivations for implementing a
network environment. Because of this, it is important to have an understanding of file services
and some of the underlying technologies as well. We will start this discussion with an overview
of a few disk fundamentals.

Disk fundamentals
Especially when you are faced with disk problems, it can be helpful to understand a little about
disk file systems. The file system controls how data is organized on the disk and the features
that are supported, like encryption and local access permissions.
Share permissions can be applied to shares on any file system.
The file systems supported by Windows SBS are:
File Allocation Table (FAT)
Extended File Allocation Table (exFAT)
New Technology File System (NTFS)

CD-ROM File System (CDFS)

Universal Disk Format (UDF)
The FAT file system is primarily used to support upgrades from earlier Windows versions and
to support multiboot configurations. FAT is also used for formatting flash drives and memory cards.
Multiboot configuration
Computer configured to selectively boot from different operating system versions.

There are three versions of FAT: FAT12, FAT16, and FAT32. FAT12 and FAT16 are older
versions. FAT12 uses a 12-bit cluster identifier and supports volumes up to 32 MB. FAT16 uses
a 16-bit identifier that supports volume sizes up to 4 GB, but 2 GB or less is considered optimal.
FAT32 uses a 32-bit identifier and can support up to 8 TB volumes in theory. However,
Windows limits FAT32 volume size to 32 GB. Maximum file size is 4 GB. You cannot configure
file and folder permissions on a FAT partition.
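FAT16's 4 GB ceiling follows directly from its 16-bit cluster identifier, assuming the 64 KB maximum cluster size:

```python
# How FAT16's volume limit follows from its cluster identifier width:
clusters = 2 ** 16            # distinct cluster numbers a 16-bit field can address
max_cluster_size = 64 * 1024  # largest supported cluster size, in bytes

print(clusters * max_cluster_size)          # 4294967296 bytes
print(clusters * max_cluster_size == 4 * 1024 ** 3)  # exactly 4 GB
```

The same arithmetic with a 12-bit or 32-bit identifier (and the respective cluster sizes) yields the FAT12 and FAT32 limits discussed above.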
The exFAT file system, also known as FAT64, was designed for use with flash drives. In theory,
it can support volumes and files up to 16 exabytes and allows for over 1000 files in a directory.
It is designed to improve write and delete performance. It also supports assignment of local file
access permissions.
Exabyte (EB)
10^18 bytes

NTFS is the default file system for Windows SBS hard disks. Microsoft limits NTFS volumes to
slightly less than 256 TB. Microsoft also limits the maximum file size to 16 TB. NTFS offers
support for local access security. A number of other advanced file system features are
supported only on NTFS volumes, including:
File permissions
File encryption
File compression
NTFS also provides better reliability than other file systems. Errors such as unexpected power
loss can result in file and directory data being corrupted on a FAT file system. NTFS logs
changes to help ensure recovery after a failure. File and directory structure is preserved, but file
content may be lost. NTFS is also able to make minor repairs to the file system without losing
file access or requiring a system reboot.
NTFS lets you set access permissions at the file and folder level, through the Security tab of
file and folder properties dialogs (Figure 6-93).

Figure 6-93: NTFS Permissions

A file or folder will, by default, inherit permissions from its parent folder. That is, a new file or
folder will have the same permissions as the folder that contains it by default. You can override
these by assigning permissions directly to the file or folder. Permissions can be defined for
users and groups. Permissions assigned to a group are inherited by the group members.
It is important to note that you can allow or deny permissions to a resource. A denied
permission takes precedence over allowed permissions. For example, you might allow Full
control to all users for a file, and then deny the Write permission to a specific user. That user
would not be able to modify the file.
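The precedence rule can be sketched in Python. The ACL format and the names below are invented for illustration only:

```python
# A toy ACL: allow Full control to all Users, then deny Write to one user.
acl = [
    ("allow", "Users", {"read", "write", "full"}),
    ("deny",  "jsmith", {"write"}),
]

def effective(user, groups, acl):
    """Compute a user's effective permissions: union of the allows that
    apply to the user or their groups, minus any denies. Deny always wins."""
    principals = {user} | set(groups)
    allowed, denied = set(), set()
    for kind, who, perms in acl:
        if who in principals:
            (allowed if kind == "allow" else denied).update(perms)
    return allowed - denied

# jsmith inherits the group's allows but the explicit deny removes write.
print(sorted(effective("jsmith", ["Users"], acl)))  # ['full', 'read']
```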
CDFS is implemented as a read-only file system for use with CDs. It supports a maximum file
size of 4 GB (larger than a CD) and no more than 65535 directories. While CDFS is still
supported, the industry has moved to the UDF format for optical media implementations.
UDF is a vendor-neutral file system specification used with CD, DVD, and newer optical disc
formats. It is a read-write specification supporting both recordable (writable) and rewritable
optical storage format.
Additional file systems supported by other operating systems, including Linux, include
EXT2, Reiser, JFS, HPFS, and NFS.
A feature common to bootable disks is the master boot record (MBR). It is located on the first

sector of a storage disk, such as a hard disk, and has the code to initiate the operating system
boot process. MBR also refers to the partition table used with older systems. An MBR disk can
have up to four primary partitions or three primary partitions and one extended partition. An
extended partition can support multiple logical volumes.
In current operating systems, MBR has been replaced by the globally unique identifier (GUID)
partition table (GPT) for tracking disk partition information. This is the method used for
managing disk partitions with modern operating systems such as current Windows versions,
MacOS, and most Linux distributions. This is a redundant partition table, written to both the
physical beginning and end of the disk. GPT does not impose MBR's limit on the number of
partitions. When configuring hard disks, operating systems allow you to choose between MBR and GPT.
The MBR actually has two purposes. It tracks disk partition information and acts as a boot
loader for starting the operating system. As hard disk capacities increased, it became
necessary to replace MBR because it is limited to supporting hard disks no larger than 2 TB.
GPT was designed to allow for larger hard disks, up to 9.4 ZB (9.4 × 10^21 bytes).
Zettabyte (ZB)
10^21 bytes
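The 2 TB MBR limit mentioned above follows from simple arithmetic: the partition table stores sector counts in 32-bit fields, and legacy disks use 512-byte sectors:

```python
# Why MBR tops out near 2 TB:
sectors = 2 ** 32    # largest sector count a 32-bit field can hold
sector_size = 512    # bytes per sector on a legacy disk
max_bytes = sectors * sector_size

print(max_bytes)                   # 2199023255552
print(max_bytes / 10 ** 12, "TB")  # about 2.2 decimal terabytes
```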

Disk management
Now we will talk about disk management and maintenance. Before a hard disk can be used, it
must be partitioned. Partitions are logical divisions of a physical hard disk. Once a partition is
created, it is formatted with a file system, usually NTFS.
By default, the hard disk is partitioned as a single disk partition during Windows SBS
installation. You can view and manage disk partition information through Disk Manager (Figure
6-94). You can run Disk Manager inside Server Manager or as a separate utility.

Figure 6-94: Disk Manager

Disk Manager lets you view and manage disks. It can be used to create and format new disk
partitions from available space. You can also scan disks and repair minor partition errors.
Disk Manager can be used to create software RAID implementations for data reliability.
Windows supports two fault tolerant RAID configurations:
Disk mirroring
Disk striping with parity
As you will recall, disk mirroring, known as RAID 1, requires space on two hard disks (Figure 6-95). Data is written to both disks. Should either disk fail, you still have access to your data.

Figure 6-95: Disk Mirroring

Disk striping with parity, known as RAID 5, requires disk space on at least three physical hard
disks. Data and parity information are written across all three disks (Figure 6-96).

Figure 6-96: Disk Striping with Parity

Parity information is generated during disk writes. Should any one disk fail, the parity information on the surviving disks can be used to reconstruct the data from that disk. When you replace the failed disk, the parity information is used to regenerate the data.
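Parity here is a bitwise XOR of the corresponding data blocks: XOR-ing the surviving blocks regenerates whatever was on the failed disk. A toy illustration (three-byte "blocks", not a real RAID implementation):

```python
# Parity for a stripe is the XOR of the data blocks on the other disks.
disk1 = bytes([0x10, 0x2A, 0xFF])
disk2 = bytes([0x55, 0x00, 0x0F])
parity = bytes(a ^ b for a, b in zip(disk1, disk2))  # written to the third disk

# Simulate losing disk2: XOR of the survivors reconstructs its contents.
rebuilt = bytes(a ^ p for a, p in zip(disk1, parity))
print(rebuilt == disk2)  # True
```

The same property holds per stripe in a real RAID 5 set; the controller performs this at disk-block granularity, with the parity block rotated across the disks.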
There are also hardware RAID disk solutions that typically have higher performance
than software RAID.
As mentioned earlier, your best protection against data loss is to back up data on a regular basis. Should a hard disk or multiple disks in a RAID set fail, a backup might be your only chance of recovering the data.

Network file services

File sharing has been one of the primary justifications for implementing a network. File and
directory sharing is an assumed feature in modern operating systems. Windows SBS supports
file sharing, as well as other file servers. We are going to take a look at file sharing and security,
Distributed File System (DFS), and disk quotas.

File and Folder Sharing

File sharing is a bit of a misnomer. Shares are configured at the folder (directory) level rather than at the file level. One way to define shares is through a folder's Sharing properties (Figure 6-97).

Figure 6-97: Sharing Properties

Click Share to identify users who have remote access to this folder. Available permissions are limited to read and read/write only. Click Advanced Sharing to configure custom share permissions, as well as caching options.
Caching
Making files from a shared folder available to users offline.

Share permissions can occasionally become something of a management issue. When the file
is located on an NTFS disk partition, a user's effective permission is based on both local
access permissions and share permissions. The more restrictive of the two applies.
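The "more restrictive of the two applies" rule can be modeled as a set intersection: a user ends up with only the rights granted by both the share and NTFS. A sketch with simplified permission names (not the actual Windows ACL model):

```python
# Effective access = intersection of share permissions and NTFS permissions.
def effective(share_perms, ntfs_perms):
    """Return only the rights granted by BOTH mechanisms."""
    return share_perms & ntfs_perms

share = {"read"}                     # share grants read only
ntfs = {"read", "write", "delete"}   # NTFS grants full modify rights

print(effective(share, ntfs))  # {'read'} — the more restrictive set wins
```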

Distributed File System (DFS)

You were introduced to DFS in the last chapter. DFS lets you take folders from different
locations, from different computers, and even from different geographic locations and combine
them in the same virtual folder. When shared, this folder gives you a way to make related
folders easily accessible to users. The folder is identified by its DFS namespace, for example:

DFS namespace
Path to a shared DFS folder.

You can also make the files available by hosting copies on remote servers. In
a geographically dispersed network, this gives you a way to keep the folders physically near
the users who need to access them. Windows SBS replicates the folder contents whenever
changes are made, keeping all copies of the data synchronized.
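Conceptually, a DFS namespace is a lookup table from virtual folder paths to one or more physical share targets; replication keeps multiple targets in sync. A minimal sketch (all server and share names below are hypothetical):

```python
# A DFS namespace maps virtual folders to the physical shares behind them.
dfs_namespace = {
    r"\\example.local\public\reports": [r"\\server1\reports"],
    r"\\example.local\public\projects": [r"\\server2\projects",
                                         r"\\branch-srv\projects"],  # replicated copy
}

def resolve(path):
    """Return the list of physical targets backing a DFS folder."""
    return dfs_namespace.get(path, [])

print(resolve(r"\\example.local\public\projects"))
```

Users only ever see the namespace path; the client is referred to whichever target is closest or available.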

Disk quotas
You were also introduced to disk quotas briefly in the last chapter. Disk quotas can be defined
on disk volumes formatted with the NTFS file system only. When you establish quotas,
Windows SBS tracks the storage space taken up by user files. Quotas are managed through
the Quotas tab of the volume properties (Figure 6-98).

Figure 6-98: Quota Properties

Click Show Quota Settings to view and manage quota settings (Figure 6-99).

Figure 6-99: Quota Settings

You can set whether users are prevented from exceeding their limit or just warned when they
reach the limit. You can set both the limit and the point at which a user starts receiving
warnings. You can also track quota limits by having events logged when users reach the
warning point or maximum limit. Click Quota Entries to identify which users and groups will be
limited by quotas.
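The warning and limit thresholds described above amount to a simple classification of a user's storage use. A hypothetical helper sketching that logic (NTFS performs the real tracking):

```python
def quota_status(used_mb, warn_mb, limit_mb, enforce=True):
    """Classify a user's storage use against the quota thresholds."""
    if used_mb >= limit_mb:
        # With enforcement on, writes are denied; otherwise only logged.
        return "denied" if enforce else "over limit (logged)"
    if used_mb >= warn_mb:
        return "warning"
    return "ok"

print(quota_status(80, 90, 100))                  # ok
print(quota_status(95, 90, 100))                  # warning
print(quota_status(120, 90, 100))                 # denied
print(quota_status(120, 90, 100, enforce=False))  # over limit (logged)
```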

Provision a Shared Folder Wizard

The Provision a Shared Folder Wizard, which you can use to configure folder shares as well as
related properties, is accessible from the Home page of the Windows SBS Console. Click Add
a new shared folder to launch the wizard (Figure 6-100).

Figure 6-100: Shared Folder Location

Start by selecting the folder that you want to share. From here, you can decide:
Whether to change the NTFS permissions or leave them unchanged.
Which protocols users can use to access the folder, defaulting to server message
block (SMB), and how those protocols are configured.
Server message block
Application layer protocol that enables clients to access shared files, printers, and miscellaneous communication devices.

Whether or not quota settings apply to the share.

Whether to apply an optional file screen to limit the types of files that can be saved in
the folder.
Whether to optionally publish the share as part of a DFS namespace.
At that point, you are prompted to review your share choices and create the share.

Print services
Print services are designed to give users easy access to shared printers and to centrally manage printing. Central management of printing has been shown to help reduce related expenses.
Modern print services implementations support other shared devices as well, including
shared scanners and faxes.
You can use the Windows SBS server to add shared printers and faxes. You can also manage
shared devices through the Control Panel Devices and Printers utility (Figure 6-101).

Figure 6-101: Devices and Printers

Click Add a printer to create and configure a shared printer. This can include:
Printers connected to the local computer
Printers connected to different computers
Network-connected printers
Wireless printers
Bluetooth printers
Shared printers are not necessarily physical devices. For example, the XPS Document Writer generates an XML document and writes it to a disk file rather than printing a copy of a document.
You can manage printer properties and control user access to shared printers. You can also set
the times that printers are available.
You can define different parameters for the same printer by adding multiple definitions
for the printer under different names.
Print jobs are written to a print queue before they print. The Windows spooler manages the print
queue and ensures that the documents are sent to the appropriate printer. You can change the
order in which jobs will print; pause, resume, or restart documents; and delete print jobs.
Print job
A ready-to-print document.


In this chapter, you learned:

Windows SBS 2011 is a bundled small business solution based on Windows Server
2008 R2.
Installation options include manual installation, SmartStart, and Gen 8 provisioning.
Windows SBS Console is a general-purpose management tool that is unique to
Windows SBS.
Control Panel and Server Manager are effectively the same utilities as on Windows
Server 2008 R2.
Device drivers enable the operating system to communicate with hardware devices.
Use Service Manager to configure service startup and disable unneeded services.
Network configuration after installation includes TCP/IP, DNS,
DHCP, and SNMP parameters.
DNS and AD are configured using default parameters during setup.
AD management utilities include AD Administrative Center, AD Users and
Computers, and AD Sites and Services.
File and print services support shared resources including files, printers, and faxes.
File systems supported by Windows SBS are FAT, exFAT, NTFS, CDFS, and UDF.

Review Questions
1. Which ProLiant Gen8 servers do not support OS installation through Intelligent Provisioning?
2. Power management options are configured through which Control Panel category?
3. Where are devices that Windows was not able to recognize and configure automatically
listed in Device Manager?
4. Which Windows SBS edition(s) support Windows Server Update Services (WSUS) 3.0?
5. What single-server deployment tool is designed to simplify installation and configuration
of HP ProLiant G7 (and earlier) servers?
6. What term is used to refer to the main component of a computer's operating system?
7. Which server role is installed automatically to provide support for automatic assignment of
IP addresses to client computers?
8. Which file system is used by default when preparing a hard disk for Windows SBS installation?

Fill in the blank
1. The ________ Web Access, __________ Web Access, and ________ website are
created during Windows SBS installation.

2. Windows SBS Standard edition lets you have up to __________ users.

3. Windows PowerShell is a command-line execution shell and ___________ language.
4. After you select the OS you want to install, SmartStart will prompt you for the media
_______ and media ________ for the installation.
5. Automated installation and maintenance features are supported for HP Gen8 servers
through __________ ___________.
6. To prevent a system service from starting, set its startup type to __________.
7. ____________ ____________ gives you a way to control the traffic into and out of the server.
8. Configuring RAID 5 through software requires at least ____________ physical hard disks.

True or false
1. Windows SBS Server Storage lets you move SharePoint Foundation data to a different
storage location.
2. Many of the tasks in the Windows SBS Console link back to settings in the Windows
Control Panel.
3. You can use Server Manager roles to define operational roles for users.
4. The HP ProLiant MicroServer and Windows Small Business Server 2011 Essentials
comes configured with RAID storage.
5. Windows SBS Essentials supports manual and SmartStart installation only.
6. A Windows Active Directory (AD) domain is created during a clean or migration installation.
7. You cannot change internal domain name information after installation is complete.

Essay question
Write at least one paragraph describing improvements of Intelligent Provisioning over SmartStart.

Scenario question
Scenario: Stay and Sleep
Stay and Sleep is trying to determine the appropriate Windows SBS edition to use in a new
installation. You need to identify features that would require them to deploy Windows SBS
Standard edition.

1. What features would require them to deploy Windows SBS Standard edition?


Chapter 7: Installing and Configuring Linux

Now that we have looked at Windows Small Business Server (SBS) in some detail, we will
move on to Linux. There are variations between Linux distributions, so we will focus our
discussion on one particular distribution, SUSE Linux Enterprise Server.
Although Linux distributions have similar architecture, different distributions can have
distinctly different interfaces.
Linux has been a growing segment of the server market over the past several years.
Implementations range from the small business market up through the largest enterprises.
Because it can run on different hardware platforms, Linux adds flexibility to your network
options. Its flexibility makes it relatively easy to integrate Linux with other network operating systems.
During this chapter, we will cover many of the same topics that we discussed with Windows
SBS. We will look at installation first, comparing different installation methods. After that, we will
take a quick tour of available management tools. We finish with some configuration and
management topics.

In this chapter, you will learn how to:
Describe basic features of Linux.
Install Linux using the following installation methods:
Manual installation
Gen8 Provisioning
Recognize and use built-in management utilities.
Manage users.
Configure network and directory services settings.
Manage services.
Manage files and file services.

About Linux
One of the reasons for the strength and stability of the Linux operating system is that it comes

from sturdy roots. The Unix operating system, which served as a basis for Linux, had been in
common use on a wide variety of platforms. It was (and is) so well known and so stable that
even its command bugs are fully documented. In fact, programmers have come to use them as features.
Linux was developed separately from Unix, down to its kernel, but was purposely written to look
and act just like Unix. Early Linux distributions were limited to a command line-based interface,
known as a shell. However, Linux quickly evolved to add a graphical user interface (GUI) and additional functionality.
Unix roots meant that Linux had a ready-made support network. Unix has been a popular
choice on college campuses, in businesses, and among computer hobbyists for years.
Programmers have experience building applications for Unix in several languages, and in
writing device drivers and other support libraries.
Another reason why Linux has gained widespread commercial acceptance is that it is
supported by a large number of major vendors. This includes both hardware and software vendors.
Linux has the ability to evolve quickly to meet changing business requirements. Manufacturers
and developers even have the option of modifying a Linux distribution to meet their particular
needs. Linux also brings a level of flexibility to your network environment through its support for
a wide variety of hardware platforms, including ARM, SPARC, and MIPS architectures, and
mobile devices.
Advanced RISC Machine (ARM)
A CPU designed for low power consumption, commonly used in smartphones and tablets.
Scalable Processor Architecture (SPARC)
A processor architecture developed by Sun Microsystems that is designed to be able to use the same instruction set for
processors with a wide variety of uses. SPARC is now an open source architecture.
Microprocessor without Interlocked Pipeline Stages (MIPS)
A computer architecture that uses a minimal set of instructions. Typically used in embedded systems, including routers and
video game consoles.

The Linux desktop is very similar in look and functionality to the Windows desktop (Figure 7-1).

Figure 7-1: Linux Desktop

Linux actually has multiple GUI desktops available. Most distributions default to using either
XWindows or KDE, both of which are very similar to Windows and keep the learning curve for
Windows users to a minimum. Most distributions also give you the option of the Gnome
desktop, which is more like a Macintosh computer in its style. XWindows, KDE, and Gnome
also refer to suites of interface-specific desktop applications.
XWindows
Popular Windows-style GUI interface, typically rated as one of the three most popular Linux GUIs.
K Desktop Environment (KDE)
Windows-style GUI interface typically rated as the most popular Linux GUI.
Gnome
Macintosh-style GUI interface, typically rated as one of the three most popular Linux GUIs.

Rather than trying to keep all of your open application windows on one desktop, you can swap
between multiple desktops by clicking the window icon (Figure 7-2).

Figure 7-2: Swapping between Desktops

You can click on the window icon to move to a different desktop.

You are logged on as the same user in each of the desktops.

For most distributions, Linux installation is relatively easy. The setup program steps you
through the process, asking about configuration information as necessary. Some distributions
make more installation decisions automatically for you than others.
HP makes installation even easier through its SmartStart and Gen8 provisioning. These are
designed to help you get your Linux server up, running, and configured to meet your needs
even faster.
We are going to take a quick look at installation procedures, starting with a manual installation.

We will start with the procedures for a manual install. We are using SUSE Linux Enterprise 11
as our example. You are prompted first with an installation menu. Select Installation (Figure 7-3).

Figure 7-3: Installation Menu

There is a delay while the initial installation files are loaded. Once the files are loaded, the
installation wizard prompts for language and keyboard layout information. You also must agree
to license terms before you can continue (Figure 7-4).

Figure 7-4: Welcome Screen

You are prompted to verify the installation media (Figure 7-5).

Figure 7-5: Media Check

You are prompted to identify the installation as a new installation, an update, or a repair of an installed system. The default is to use automatic configuration and let the installation wizard set basic
configuration parameters for you (Figure 7-6).

Figure 7-6: Installation Mode

You are prompted for time, date, and time zone information (Figure 7-7).

Figure 7-7: Date, Time, and Time Zone

You are prompted to create your initial user account. You have the option of using the
password that you enter here as the system administrator (root) password (Figure 7-8).

Figure 7-8: User Information

After you review the installation settings and make the changes you need (if any), you are ready
to continue with the actual installation (Figure 7-9).

Figure 7-9: Installation Settings

You are prompted to confirm the package license for any software add-ons you are installing
(not shown).
The installation wizard graphically displays the installation progress (Figure 7-10). Clicking
Details displays detailed information about what is being installed along with the progress bar.

Figure 7-10: Installation Progress

The system reboots after the basic installation portion completes. If you chose automatic
configuration, that is the next step in the installation (Figure 7-11).

Figure 7-11: Automatic Configuration

During SUSE Enterprise installation, you are prompted with the Novell Customer Center Configuration. You have the option of configuring this now or after installation (Figure 7-12).

Figure 7-12: Novell Customer Center Configuration

Configuration requires an Internet connection. If you cannot connect to Novell's server, you will
be prompted to configure later.
The final screen reports that your installation is complete (Figure 7-13).

Figure 7-13: Installation Complete

The computer restarts when you click Finish. After it starts up, you are prompted for a
username and password to log in (Figure 7-14).

Figure 7-14: Login Prompt

From there, you are taken to the default desktop (Figure 7-15).

Figure 7-15: Default Desktop

At this point, you are ready to start using the computer.

SmartStart is provided for the same purpose and works the same way with Linux installation as
it does with Windows installation. As with Windows, SmartStart prompts you for the information
needed to assist with automated Linux installation.
Assisted activities include:
Performing validity checks to ensure that you have the appropriate hardware
configuration before the OS installation.
Configuring server and array controller hardware using ROM-based setup utilities (by
pressing the F8 and F9 keys or launched from SmartStart).
Validating the hardware configuration by booting from the SmartStart CD and
walking through interview questions to prepare your server for OS installation.
Completing the OS installation and automatic installation of the ProLiant Support
Pack server support software from the SmartStart CD.
When working with SmartStart for a Linux installation, you can still create a device driver CD,
but this time, with Linux-specific device driver files. SmartStart will create driver diskettes from
the latest PSP or SPP available from HP for Linux. Available SPP versions include:
Red Hat Enterprise Linux 4

Red Hat Enterprise Linux 5

SUSE LINUX Enterprise Server 10
SUSE LINUX Enterprise Server 11
Refer to for supported servers and
The information prompts will be much the same as you saw in the previous chapter, including:
Operating system to install
Installation source
Disk partitioning
Computer name and password information
SmartStart software for Linux can be downloaded from the HP website.

Gen8 Provisioning
Intelligent Provisioning is also supported for servers that ship with HP supported versions of
Linux. As with Windows SBS, the goal of Intelligent Provisioning is to enable you to configure
and deploy HP ProLiant Gen8 servers and server blades more rapidly and to simplify applying
maintenance updates.
Setup software, device drivers, and firmware come embedded on a flash chip installed in the
server. As mentioned in the last chapter, Intelligent Provisioning can connect to HP during the
installation to download and apply the most recent firmware updates.
Improvements over SmartStart include:
Ability to update drivers and system software by connecting directly to HP to perform firmware updates and install an OS in the same step.
Ability to provision a server remotely using HP iLO 4.
Remote Support registration.
Revised and simplified user interface (UI).
The same installation options are supported as for Windows SBS:
Recommended Install - Express installation process that has HP-recommended
preset defaults and performs a software and firmware update if the network is
available at your location.
Customized Install (Guided or Assisted install) - Full HP interview with a start-to-finish wizard for deploying HP ProLiant Gen8 servers.
Once again, because the software, drivers and tools are already embedded, you can simply run
the installation. You do not need to locate installation and driver CDs/DVDs to complete your deployment.
Scenario: BCD Train

BCD Train is planning to deploy two new Linux servers as part of an existing Windows Active
Directory network. The customer has ordered Gen8 servers with identical hardware
configurations. Discuss available installation options and the best option in this scenario.

Management Tools
Linux comes with a wide variety of built-in management tools. The exact set of tools depends
somewhat on the Linux distribution. Distributions designed to run on hardware with limited
resources will likely have fewer tools. Those used as server platforms might include more
advanced server tools.
We will be looking at the management tools selection provided with SUSE Enterprise 11. The
tool mix that you encounter with other distributions will be similar.
To see a list of available applications, click More Applications from the main menu (Figure 7-16).

Figure 7-16: Main Menu

This opens the Application Browser. Applications are organized by category for easy access
(Figure 7-17).

Figure 7-17: Application Browser

The application categories are:

Audio & Video
Use the audio and video players, sound recorder, and CD/DVD burner.
Browse with the File and Internet browsers.
Communicate with Mail, instant messaging, and FTP tools.
Play assorted games.
Use the scanner, webcam application, image viewers, and drawing tools.
Use personal productivity tools.
Use system utilities and configuration tools.
Use assorted management and configuration tools.
Additional tools are available from the Control Center (Figure 7-18).

Figure 7-18: Control Center

Once again, the tools are organized into categories:

Configure and manage peripheral devices, including mouse and keyboard.
Look and Feel
Manage main menu and desktop appearance.
Configure assistive technologies, encryption, and keyboard shortcuts.
Manage system settings, like updates, date and time, power, and so forth.
Many of these tools are the same as those available through the Application Browser. They are
just organized differently.
One System selection that deserves special mention is YaST Control Center, which can be
launched by clicking the YaST icon (Figure 7-19).

Figure 7-19: Launching YaST

This opens the YaST Control Center with many of the same utilities plus some additional
utilities (Figure 7-20).

Figure 7-20: YaST Control Panel

This time, the groups are:

Manage various hardware components.
Configure automatic installation and view logs.

Network Devices
Configure network devices and network settings.
Network Services
Configure network services and Windows domain membership.
Novell AppArmor
Manage application profiles.
Security and Users
Manage the firewall and security center.
Manage software and software updates.
Configure system settings and backup and restore.
Install and manage virtualization support.
Access additional support links.
As you can see, these different application groups give you a number of ways to access
management tools and utilities.
Some tools and utilities require you to be logged on with system administrator
privileges. The default system administrator user is named root.
Our discussion focuses on a few key tools as examples. For now, the discussion is limited to a
brief overview, but we will be looking at some of the tools in more detail later in the chapter.

File tools
Linux provides several file tools. The File Manager lets you set file management preferences
(Figure 7-21).

Figure 7-21: File Manager

File Manager lets you control how files and folders are displayed. You can also specify the
default actions to take for media files.
File Browser lets you navigate the file system, much the same as Windows Explorer in the
Windows operating system (Figure 7-22).

Figure 7-22: File Browser

You can manage files and folders in the File Browser. You can also view and manage folder
and file properties. Right-click a file or folder and click Properties.

File and folder properties

You can manage file and folder settings and permissions through file and folder properties. We
will start with folder properties (Figure 7-23).

Figure 7-23: Folder Properties

The Emblems tab (Figure 7-24) lets you define additional information about a file or folder and,
if you want, change the icon for the item.

Figure 7-24: Emblems

The Permissions tab lets you manage permissions for the folder and the files it contains. The
default permissions for the usr folder are shown in Figure 7-25.

Figure 7-25: Folder Permissions

You can set permissions by owner, group, and other users. You can also change the folder's owner. Permissions can be set as:
List files only
Access files
Create and delete files
If you want, you can have the folder-level permissions applied to all of the files that the folder contains.
You must be logged on as the file or folder's owner or as root to modify permissions.
The Notes tab (not shown) is simply a text window that lets you enter any notes you want about
the folder. This gives you a way of documenting a folder or leaving notes to yourself.
The Share tab (Figure 7-26) lets you share a folder and its contents to the network.

Figure 7-26: Share Tab

The default share name is the folder name. You can allow other users to write to the folder. If
you want, you can also allow guest access for individuals who do not have a user account on
the computer, such as Windows network users.
File properties are very similar to folder properties. The file permissions are different in that you
control file permissions only. There is also an additional Open With tab. The remaining tabs
are the same as for folders.
File permissions (Figure 7-27) let you set access permissions for individual files. These can
override permissions inherited from the folder.

Figure 7-27: File Permissions

You can set the permission as read-only or read-write. Once again, permissions can be set for
the owner, a group, and for all other users.
The Open With tab (Figure 7-28) lets you identify the default application used to open the file.

Figure 7-28: Open With Tab

Linux lists different applications for different file types.

Personal File Sharing

Launch Personal File Sharing (Figure 7-29) to configure file sharing properties for the current user.

Figure 7-29: Personal File Sharing Properties

You can configure properties for sharing files over the network or over a Bluetooth connection.
You have the option of password-protecting network shares.
We finish our look at file tools with a look at the Expert Partitioner. When you first launch Expert
Partitioner, it displays a warning dialog (Figure 7-30).

Figure 7-30: Warning Dialog

Changing disk partitioning, if it is done improperly, can render the system unusable. Click Yes
to continue to the Expert Partitioner (Figure 7-31).

Figure 7-31: Expert Partitioner

One choice that you have to make when partitioning a hard disk is the file system(s) for each
partition. The one exception to this is the Swap partition, which is used to support virtual
memory. Linux formats the partition with some data structures, but it is not considered a file
system because it does not store any files in the traditional sense.
Unless you have a good understanding of Linux partitions, file systems, and their use,
you should avoid making partition changes.

File systems
The most commonly used Linux file systems are known as journaling file systems. They track changes to the disk partition, which helps avoid the lengthy disk integrity checks that earlier Linux file systems ran when restarting after an unexpected shutdown or power failure. Because changes are tracked in the journal, this check is not necessary; changes can be rolled back or rolled forward automatically as needed.
Journaling file system
File system that tracks partition changes to directories and files to assist in recovery if a system reboots or is shut down unexpectedly.
The Linux-specific journaling file systems you are most likely to see include:
Third extended filesystem (ext3fs)
Reiser File System (ReiserFS)
Extent Filesystem (XFS)
Journaled Filesystem (JFS)
Of these, ext3fs and ReiserFS are the most stable and most commonly used. XFS and JFS are
more advanced and are not commonly used.

Development on ReiserFS has ceased.

Linux also supports a number of non-Linux file systems, including:
File Allocation Table (FAT) used primarily by Windows.
New Technology File System (NTFS) the current default file system for Windows.
High-Performance File System (HPFS) similar to NTFS, developed for IBM's OS/2.
Unix Filesystem (UFS) used by several versions of Unix.
Hierarchical Filesystem (HFS) used by Mac OS.
ISO-9660 used with CDs/DVDs.
Joliet used with CDs/DVDs.
Universal Disk Format (UDF) used with CDs/DVDs.
One reason for including non-Linux file systems is to support a multiple boot configuration. After
booting Linux, you will be able to access files on both Linux and non-Linux file systems.
Multiple boot
Computer configured with multiple OS installations and OS versions installed, supporting the option of choosing the OS
during startup.

File access security

File access is tracked through a 10-character file access control string. The first character
identifies the file type. The remaining characters identify the file access permissions, three each
for the owner, group, and other users. Most files are identified as normal data files, but you can
also use the file type to identify a hardware device used to transfer data by blocks or character
by character.
Supported file permissions are:
r (read)
Allow read access to the file.
w (write)
Allow users to write to or delete the file.
x (execute)
Allow file execution.

These permissions are a carry-over from the Unix operating system.
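The 10-character string is what `ls -l` displays, for example `-rwxr-x---`: one file-type character followed by `rwx` triplets for the owner, group, and other users. A small parser sketch showing how the string maps to the octal mode used by `chmod` (simplified; it ignores setuid/setgid/sticky variants):

```python
def parse_mode(s):
    """Convert an ls-style string such as '-rwxr-x---' to its octal mode."""
    assert len(s) == 10, "expected file type + nine permission characters"
    ftype, perms = s[0], s[1:]
    octal = 0
    for ch in perms:                  # each granted bit shifts in; '-' means denied
        octal = (octal << 1) | (ch != "-")
    return ftype, oct(octal)

print(parse_mode("-rwxr-x---"))  # ('-', '0o750')
print(parse_mode("drwxr-xr-x"))  # ('d', '0o755')
```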

Scenario: BCD Train
BCD Train is planning to deploy one training server in each classroom. The servers will be
set up in a dual-boot configuration so that you can start up either Linux or Windows Server.
Discuss the file system options that you should use when setting up the computer.

Hardware tools
Several tools are available to help you manage system hardware and peripherals. Most are
easy to use, like the Keyboard and Mouse utilities that let you set parameters relating to
keyboard and mouse use and responsiveness. You can also adjust parameters, such as color
depth and resolution, for your computer's monitor. Other utilities control power use and manage print services.

Viewing hardware information

You can use the YaST Hardware Information module to view information about installed
hardware (Figure 7-32).

Figure 7-32: Hardware Information

You have the option of saving this information to a file as a way of documenting the system configuration.

Power management
The Power Management utility lets you adjust power management settings. You can use these
settings to optimize power use (Figure 7-33).

Figure 7-33: Power Management

You can control when the display and computer are put to sleep to minimize power usage.
The General tab (Figure 7-34) lets you control the actions when the user presses the power or
suspend button.

Figure 7-34: General Power Properties

The Scheduling tab (Figure 7-35) lets you configure a computer to wake up automatically from
a suspended or hibernation state.

Figure 7-35: Scheduling Properties

You can set both the time and days for wake up.

Printer configuration
Printers can be managed through the Control Center Printer utility and through YaST. When
you open the Control Center Printer utility, it displays detected printers (Figure 7-36).

Figure 7-36: Printer Configuration

To manage a printer, open the printer properties. Select Settings to display general information
about the printer (Figure 7-37).

Figure 7-37: Printer Settings

Select Policies to control whether or not a printer is shared and if it is accepting print jobs
(Figure 7-38).

Figure 7-38: Printer Policies

Selecting Access Control lets you control user access to the printer. You can either allow
access or deny access for all but specifically identified users (Figure 7-39).

Figure 7-39: Printer Access Control

Selecting Job Options sets the default options for print jobs for the printer (Figure 7-40).

Figure 7-40: Printer Job Options

These include options such as the number of copies, default orientation, image scaling, and
text options.
The YaST Printer utility (Figure 7-41) lets you configure system-wide options as a print server.
By default, Linux setup does not create a print queue to support print server operations.

Figure 7-41: YaST Printer Utility

Click Add to create a print queue for a shared printer. You can also create a print queue that
holds jobs for a shared network printer.
Selecting Print via Network (Figure 7-42) lets you configure connections to Common Unix
Printing System (CUPS) printers.

Figure 7-42: Print via Network

Common Unix Printing System (CUPS)
Unix-shared printer management system.

You can use Share Printers (Figure 7-43) to allow or deny access to printers configured on the
computer. This applies to all printers configured on the local system.

Figure 7-43: Share Printers

Use Policies (Figure 7-44) to set error and standard operation policies for printers.

Figure 7-44: Policies

If an error occurs, you can stop the printer and either save the print job for later or delete the job.
You can also choose to wait and then resend the job.
You can use Autoconfig Settings (Figure 7-45) to determine how USB printers attached to the
computer are automatically configured by default.

Figure 7-45: Autoconfig Settings

Configuring and Managing Linux

Linux provides a wide variety of tools for system configuration and management. These include
tools for managing Linux itself, network configurations, users, and security.

System management
System configuration information is stored in files stored in the /etc/sysconfig directory (Figure

Figure 7-46: Sysconfig Folder

These configuration files are text files. In early Linux versions, it was necessary to directly edit
the text files. Current versions make this editing process easier, through use of the YaST
/etc/sysconfig Editor (Figure 7-47).

Figure 7-47: /etc/sysconfig Editor

When you select a configuration setting, you are prompted with:

The current configuration value.
Supported values.
A brief description.
Most configuration changes are not applied until the next time that you restart Linux.
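Under the hood, these settings are ordinary KEY="value" lines, so they can also be changed from the command line. The sketch below works on a scratch copy rather than a real /etc/sysconfig file, and the variable shown is only an example:

```shell
# Build a scratch sysconfig-style file, then change one setting with sed.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
## Description: console keyboard mapping
KEYTABLE="us"
EOF

# Replace the value while keeping the KEY="value" format intact.
sed -i 's/^KEYTABLE=.*/KEYTABLE="uk"/' "$tmpfile"

grep '^KEYTABLE=' "$tmpfile"   # prints: KEYTABLE="uk"
rm "$tmpfile"
```

As with the YaST editor, an edit like this normally takes effect the next time the relevant service or the system restarts.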
You can also manage how a Linux computer boots, and which applications are loaded
automatically at startup. The YaST Boot Loader lets you select the default image used to boot
the computer, as well as the boot loader itself (Figure 7-48).
Boot Loader
Program that controls system startup by loading a selected OS image file.

Figure 7-48: YaST Boot Loader

The Section Management tab lists available boot images and lets you select the default
image. Linux lets you have multiple images installed on a computer. These could be different
versions of Linux.
The Boot Loader Installation tab lets you choose the boot loader that will be used to start up
the system. You can also choose the boot loader location, which is the partition from which you
want to load the boot image file.

Figure 7-49: Boot Loader Installation
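On systems of this vintage the boot loader is typically GRUB, and the entries that Section Management displays correspond to stanzas in its menu file. The excerpt below is a sketch only; the device, partition, and kernel paths are placeholders:

```
# /boot/grub/menu.lst (excerpt, GRUB legacy)
default 0        # boot the first title entry unless the user intervenes
timeout 8        # seconds to wait before booting the default image

title Linux
    root (hd0,1)                      # partition holding the boot files
    kernel /boot/vmlinuz root=/dev/sda2
    initrd /boot/initrd
```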

The Control Center Startup Applications module lets you choose the applications that are
launched when you boot Linux (Figure 7-50).

Figure 7-50: Startup Applications

Checked applications will start automatically. Click Add to add more programs to the default
list. The Options tab (not shown) lets you choose to have any applications that were running
when a user logs out restart automatically when the user logs in again.

Software updates
You can configure Linux to automatically check for software updates through the Software
Updates module (Figure 7-51).

Figure 7-51: Software Updates

The default is to check for updates daily. You can have all updates installed automatically,
install security updates only, or not install any updates automatically.

Managing services

The YaST System Services (Runlevel) module is similar to the Windows SBS Services utility in
that it lets you manage services that run at the system level (Figure 7-52).

Figure 7-52: System Services Simple Mode

When in Simple Mode, you can enable or disable a service. Exercise caution because
disabling a service will cause any services that depend on that service to fail.
You have more control over services when in Expert Mode (Figure 7-53).

Figure 7-53: System Services Expert Mode

Not only can you enable and disable services, but you can also manually start, stop, or restart a
selected service. You can also configure the runlevels at which the service starts. Runlevels
control whether or not the service runs as a multiuser application, if the service has access to
network services, and so forth.
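On SysV-style systems, the default runlevel itself comes from /etc/inittab. A minimal excerpt (the explanatory comments are ours):

```
# /etc/inittab (excerpt) -- the initdefault entry selects the runlevel
# entered at boot. Runlevel 3 is full multiuser with networking but no
# graphical login; runlevel 5 adds the graphical display manager.
id:5:initdefault:
```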

Security management
The YaST Security Center and Hardening module lets you view and manage system security.
When you launch Security Center and Hardening, it displays an overview of system security
settings (Figure 7-54).

Figure 7-54: Security Overview

Configurable security settings include:

Predefined Security Configuration
There are three predefined configurations: Home Workstation, Networked
Workstation, and Network Server. You can also choose a custom configuration.
Password Settings
You can use these settings to control password length, password complexity,
the number of passwords to remember, the minimum and maximum password ages,
and the encryption method used to encrypt passwords.
Boot Settings
You can choose whether Linux reboots when the Ctrl+Alt+Del keys are pressed and
configure the behavior of the Shutdown command by user.
Login Settings
You can specify whether to have an account temporarily locked after a set number of
failed login attempts (the default is three). You can also choose to have successful
logins logged.
User Addition
You can configure the minimum and maximum numeric values used internally by
Linux for tracking user and group IDs.

Miscellaneous Settings
These settings let you set default file permission information.
Linux, like Windows SBS, supports the option of configuring a local firewall on a computer.
Unlike Windows SBS, the firewall is disabled by default (Figure 7-55).

Figure 7-55: Firewall Configuration

The Start-Up tab lets you enable or disable the firewall and start the firewall if not running. If
you want to immediately apply any changes to firewall settings without restarting Linux, click
Save Settings and Restart Firewall Now.
Additional available firewall settings include:
Interfaces
Choose the network interface(s) for which the firewall is configured.
Allowed Services
Open the necessary ports for services that you want to support, such as DHCP and
various network client and server types.
Masquerading
Configure masquerading to protect an internal network. Masquerading requires at
least two network interfaces, one connected to the internal network and the other
connected to external resources.
Masquerading
The process of hiding an internal network behind the firewall, while providing the internal network access to external
services, such as the Internet. This is similar to Windows NAT.
Broadcast
Configure support for forwarding broadcasts. Settings are available for the internal
network, DMZ, and external resources. By default, broadcasts are not accepted from
external resources.
Demilitarized Zone (DMZ)
A physical or logical subnetwork that protects a company's internal IT infrastructure. Firewalls separate the DMZ from the
internal network and from the Internet. A DMZ is also called a perimeter network.
Internet Protocol Security (IPsec)
A protocol that provides encryption and authentication for IP network traffic.

IPsec Support
Enable or disable IPsec support.
Logging Level
Choose logging options for accepted and rejected packets. You can choose to log all
packets, critical packets only, or not log any packets.
Custom Rules
Define custom rules for accepting packets based on source network, protocol, source
port, and destination port.
Custom rules are configured separately for the internal network, external network, and DMZ.
You must restart the firewall (or restart Linux) before any changes are applied.
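The YaST firewall module stores its decisions in /etc/sysconfig/SuSEfirewall2. The excerpt below sketches the kind of variables involved; the interface names and port numbers are examples, and variable names can differ between releases:

```
# /etc/sysconfig/SuSEfirewall2 (excerpt)
FW_DEV_EXT="eth0"          # interface in the external zone
FW_DEV_INT="eth1"          # interface in the internal zone
FW_SERVICES_EXT_TCP="22"   # TCP ports opened toward external resources
FW_MASQUERADE="yes"        # hide internal addresses behind the firewall
```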

User management
The Control Center User and Group Administrator module lets you create, delete, and manage
user and group accounts. You can also configure authentication settings.
The Users tab lets you view and manage users (Figure 7-56).

Figure 7-56: Users Tab

Click Add to create a new user account. You are initially prompted for the user's full name, user
name, and password (Figure 7-57). This is the minimum information that you must provide to
create a user account.

Figure 7-57: New User

The Details tab (Figure 7-58) lets you enter additional information about the user that you are creating or editing.

Figure 7-58: Details Tab

From here, you can define:

The user ID
User home directory and permissions
Login shell
Default group and additional group membership
Login Shell
The initial user operating environment, also defining the default command line prompt.

The Password Settings tab (Figure 7-59) lets you override the default password settings seen
earlier in this chapter. This lets you define exceptions as needed for specific users.

Figure 7-59: Password Settings

The Plug-Ins tab (not shown) lets you associate additional applications with a user account.
You can also view, create, delete, and manage groups (Figure 7-60).

Figure 7-60: Groups Tab

You are prompted for the same information when creating a new group or editing an existing
group (Figure 7-61).

Figure 7-61: New Local Group

You are prompted for group name, group ID, and password. You can also manage group
membership. One of the primary purposes of groups is for permission management. Note that
users, applications, and services can be group members, and you can set their access security
within the file system.
Use Defaults for New Users (Figure 7-62) to configure default parameters that should be
applied when creating new users.

Figure 7-62: Defaults for New Users

New user defaults can be overridden on a user-by-user basis.
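These defaults have a file-level counterpart in /etc/default/useradd, which the command-line tools consult when creating accounts. The values below are typical examples, not requirements:

```
# /etc/default/useradd (excerpt)
GROUP=100          # default primary group ID
HOME=/home         # base directory for new home directories
SHELL=/bin/bash    # default login shell
INACTIVE=-1        # days after password expiry before lockout (-1 = never)
```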

You use Authentication Settings (Figure 7-63) to configure login authentication for users. As
you can see, Linux supports a wide variety of authentication options, including Kerberos.

Figure 7-63: Authentication Settings

Click on an authentication type to configure authentication. You can also modify configuration
settings (Figure 7-64). In the following figure, you see the configuration settings prompts for
Kerberos authentication.

Figure 7-64: Kerberos Authentication

To have Linux users authenticated by Windows Active Directory, you would enter the Active
Directory domain information. The KDC server address would be an Active Directory domain
controller.
Key Distribution Center (KDC)
The service responsible for issuing resource access keys in an environment that uses Kerberos authentication.

The configuration parameters shown here are the same as those you will see when
using the YaST Kerberos Client module.
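The settings collected by this dialog end up in /etc/krb5.conf. A minimal sketch for an Active Directory realm follows; the realm and server names are placeholders:

```
# /etc/krb5.conf (excerpt)
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = dc1.example.com          # an AD domain controller acts as the KDC
        admin_server = dc1.example.com
    }
```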
Scenario: BCD Train
BCD Train wants to use temporary user accounts to provide students with access to select
resources on training workstations. You need to make sure that students change the current
password the first time they log on. The accounts should not be available for logon between
classes. Discuss management options to support this scenario.

Network configuration
There are several configuration utilities available to view and manage network connectivity.
Linux supports wired, wireless, mobile broadband, VPN, and DSL connectivity. You can
choose whether IPv6 support is provided in addition to IPv4.
Internet Protocol version 6 (IPv6)
An address format developed to extend the address space by providing a 128-bit address, represented as a series of
hexadecimal numbers, to each device.

Network connections
The Control Center Network Connections module lets you manage network connections. Click
the connection type to add a new connection or edit an existing connection (Figure 7-65).

Figure 7-65: Network Connections

We will start by looking at wired connection parameters (Figure 7-66).

Figure 7-66: Wired Settings

The MAC address shown is the MAC address that is hard coded on the network interface. You
can override this value with one that you enter. You can also set the size of the maximum
transmission unit (MTU).
Maximum transmission unit (MTU)
Sets the maximum packet size for the network interface.

You have the option of enabling 802.1x security for the connection (Figure 7-67).

Figure 7-67: 802.1x Security

This gives you the option of selecting the authentication type and applying security certificates.
You can also identify a file containing private key information for the connection.
The IPv4 settings let you configure IPv4 address management parameters for the connection
(Figure 7-68).

Figure 7-68: IPv4 Settings

The default is to use automatic address assignment through DHCP. You can also use manually
assigned addresses. In addition, there is an address option that is not supported by Windows
SBS. You can configure the connection for link-local addressing only, which means it will be
able to communicate on the local subnet only.
You can configure wireless connections in addition to, or instead of, wired connections (Figure
7-69).
Figure 7-69: Wireless Settings

You are initially prompted for:

Service Set ID (SSID)
Text string used to identify the wireless access point (AP) to wireless clients.
Access Point (AP)
Device supporting connectivity for wireless clients operating in infrastructure mode.

Mode
The mode can be set to Infrastructure or Ad hoc. In infrastructure mode, wireless
clients connect through an AP and can be connected through the AP to a wired
network. Ad hoc mode uses peer-to-peer connections established directly between
wireless devices.
Basic Service Set ID (BSSID)
An alternate way of identifying the AP using the AP's MAC address.
MAC address
The wireless client adapter's MAC address. You can override this value.
MTU
Sets the maximum packet size.
By default, the connection is configured for the current user only.
There are several wireless security options available for wireless clients. The configuration
prompts that are displayed depend on the security option that you choose (Figure 7-70).

Figure 7-70: Wireless Security

The IPv4 configuration settings for a wireless client are the same as you saw earlier for wired
clients (Figure 7-71).

Figure 7-71: IPv4 Settings

As before, you can use automatic or manual IP address definition.

Network settings
You can define additional networking configuration information through the YaST Network
Settings module (Figure 7-72).

Figure 7-72: YaST Network Settings Module

You can control network settings through an optional applet named NetworkManager or
through traditional management methods. In many situations, such as when supporting multiple
network interfaces, you must use traditional management. The Global Options tab lets you
enable or disable IPv6 support and enter DHCP client options.
The Overview tab (Figure 7-73) lists network interfaces that are installed in the computer.
Select an interface for an overview of interface settings.

Figure 7-73: Overview Tab

The Hostname/DNS tab (Figure 7-74) lets you define the computer's FQDN and enter DNS
server information. You can identify up to three DNS servers.

Figure 7-74: Hostname/DNS Tab

You can also specify additional domain name suffixes to use when qualifying a machine name.
You can also create a HOSTNAMES file on the computer that contains name resolution
information. This is a manual way of associating machine names with IP addresses.
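Static entries of this kind follow the classic hosts-file format: one IP address per line, followed by the names it resolves to. The addresses and names below are examples only:

```
# Static name-to-address mappings (hosts-file format)
127.0.0.1      localhost
192.168.1.20   trainsrv1.bcdtrain.com   trainsrv1
```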
The Routing tab (Figure 7-75) lets you enter the default gateway and manually enter routing
table information for the computer.

Figure 7-75: Routing Tab

Check Enable IP Forwarding if you want this computer to act as a network router.
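Outside YaST, the same forwarding switch is a kernel parameter that can be made persistent in /etc/sysctl.conf (shown here as a sketch):

```
# /etc/sysctl.conf (excerpt) -- enable routing between interfaces.
# Apply without a reboot with: sysctl -p
net.ipv4.ip_forward = 1
```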

Network integration
Linux is designed to let you integrate Linux servers and clients into various network
environments. These include:
Network Information Service (NIS) client Client access to Sun Microsystems' NIS.
Network File System (NFS) client Client access to shared Linux/Unix resources
through NFS protocol.
Lightweight Directory Access Protocol (LDAP) client Client access for LDAP
distributed directory information services.
Windows Domain Membership Client access to a Microsoft Windows Active
Directory domain.
We will look at configuring Windows Active Directory membership as an example (Figure 7-76).

Figure 7-76: Windows Domain Membership

You can specify a domain or workgroup name for membership. You can also specify
authentication options and directory sharing for Linux users who are logged onto the AD
domain.
Click Expert Settings to specify the supported range of user and group IDs that can have AD
access. You can also specify WINS name resolution options and shared directories to be
mounted when connecting to the AD (Figure 7-77).

Figure 7-77: Expert Settings

Monitoring Linux
Linux provides tools for system monitoring and maintains two logs that can provide useful
management and troubleshooting information.
System Monitor provides information about the system and its activities (Figure 7-78).

Figure 7-78: System Monitor

The System tab provides an overview of system information, including the Linux and BIOS
version information.
The Processes tab (Figure 7-79) gives you information about running processes. To stop a
process, click the process and click End Process.

Figure 7-79: Processes Tab

The Resources tab (Figure 7-80) gives you an overview of resource usage on the computer.

Figure 7-80: Resources Tab

You can view information about CPU and memory usage, as well as network traffic.
The File Systems tab (Figure 7-81) lists detected file systems, excluding the Swap partition.
Double-click a disk partition to open the partition in the File Browser.

Figure 7-81: File Systems Tab

The Hardware tab (Figure 7-82) provides an overview of the system hardware configuration.

Figure 7-82: Hardware Tab

The boot log (Figure 7-83) records system activity during system startup. Log information can
be useful when startup errors occur.

Figure 7-83: Boot Log

There is also a system log (Figure 7-84) that gives you information about system activity.

Figure 7-84: System Log

You can copy and paste the log contents into a text file for future reference.

In this chapter, you learned:
Linux is an open source operating system based on the Unix operating system.
The Linux desktop is very similar to the Windows desktop.
You can install Linux manually, using SmartStart, or using Intelligent Provisioning.
Management tools are available through the Control Center, YaST Control Center,
and the main menu More Applications.
YaST Control Center and many management tools require you to enter the root
password for access.
Available file tools include File Manager, File Browser, Personal File Sharer, and
Expert Partitioner.
Linux supports file systems specific to Linux, as well as non-Linux file systems.

Hardware tools include Hardware Information, Power Management, and Printer Configuration.

System configuration information is stored in files maintained in the /etc/sysconfig directory.
Use the /etc/sysconfig Editor to edit configuration parameters.
System Services (Runlevel) lets you view and manage services.
Security Center and Hardening and Firewall let you manage system security.
The Users and Group Administrator module lets you create, delete, and manage user
and group accounts.
You can manage network connectivity through Network Connections and Network Settings.
System Monitor provides summary information about the system and its activities.

Review Questions
1. PSPs are currently available for what versions of SUSE Linux?
2. Which type of file system tracks partition changes to directories and files to assist in
recovery if a system reboots or is shut down improperly?
3. What management options are supported for System Services (Runlevel) running in
Simple Mode?
4. Which value identifies an AP to clients using the AP's MAC address?
5. Through which utility can you enable IPv6 support?
6. What is the purpose of the Enable IP Forwarding option?

Fill-in-the-blank questions
1. Running More Applications from the main menu opens the __________________
2. The ___________ and ReiserFS are the most stable and most commonly used Linux file systems.
3. ______________ protects an internal network by hiding internal addresses from an
external network.
4. Automatic IPv4 address assignment is provided through ________.
5. Windows Active Directory membership lets you set up membership in a Windows
_________________ or AD domain.

True or false questions
1. Any user has access to the utilities in the YaST Control Center.
2. The Linux firewall supports settings for the internal network, DMZ, and external network.

3. A computer running Linux can be configured as a router.

4. You cannot configure IPsec support for Linux Firewall.
5. Password protection parameters can be configured at the system level only.
6. Linux groups can include both users and applications as members.

Essay question
Write at least a paragraph describing configuration options for Linux firewall.

Scenario question
Scenario: Stay and Sleep
Stay and Sleep wants to add a second wired adapter to an existing Linux server. You will
need to configure the adapter with a MAC address other than the one encoded on the adapter
and have it use a static IPv4 address.

1. Which utility should you use, and what are the basic steps?


Chapter 8: HP System Management

This chapter introduces HP system management. You were introduced to iLO earlier in the
course. However, there are other HP utilities that facilitate server management. We will
introduce the basics in this chapter, but we will be using management utilities and concepts
introduced here at various times throughout the rest of the course.
The chapter opens with a closer look at iLO, including how to configure iLO using the HP Lights
Out Configuration Utility. Next, we provide an introduction to the ProLiant Essentials
Foundation Pack and its contents. After that, we discuss System Management Homepage
(SMH). From there, we cover centralized management using HP Systems Insight Manager
(SIM), including installation, configuration, and basic functionality. We conclude the chapter
with a brief look at Gen8 ProActive Management. The tools that we discuss in this chapter are
part of a larger set of tools for complete IT infrastructure management.


In this chapter, you will learn how to:

Describe the HP Insight Foundation Suite for ProLiant.
Prepare for HP SIM installation.
Install and configure HP SIM.
Identify key features of HP SIM.
Describe the functionality built into System Management Homepage (SMH).
Describe the HP Lights-Out Configuration Utility.

More about Integrated Lights-Out (iLO)

HP Integrated Lights-Out (iLO) management processors for HP ProLiant servers virtualize
system controls to help simplify server setup, engage health monitoring and power and thermal
control, and promote remote administration across the HP ProLiant Server product line.
For customers looking for HP iLO Advanced functionality, such as graphical remote console,
multi-user collaboration, and video record/playback, these and many more advanced features
can be activated with the optional licensing for HP iLO Advanced or HP iLO Advanced for
BladeSystem licenses.
HP iLO Advanced or HP iLO Advanced for BladeSystem functionality is activated using a
license key. Software updates are not required.

HP Lights-Out Online Configuration Utility

The HP Lights-Out Online Configuration Utility allows you to view and configure iLO. It enables
you to manage and apply iLO configuration settings without requiring a system reboot. Click
Summary and then click iLO Summary to display an overview of configuration information
(Figure 8-1).

Figure 8-1: iLO Configuration Summary

Clicking Summary | Reset iLO/Set to Factory Defaults allows you to either reset the iLO
firmware or reset to factory default settings.

The Network menu allows you to manage network configuration settings. You can choose to
manage either Standard network settings or Advanced network settings. Standard network
settings (Figure 8-2) include IP address information, gateway address, and DNS name. These
settings apply to the iLO network adapter and are used to connect the iLO management
processor to the management network.
A dedicated management network helps improve security by segregating management
traffic from normal business network traffic.

Figure 8-2: Standard Network Settings

Additional configuration information can be provided through the Advanced Network Settings
(Figure 8-3).

Figure 8-3: Advanced Network Settings

Clicking the User menu allows you to view and modify accounts that can manage iLO
configuration information and access iLO functionality, such as Remote Console or virtual
media (Figure 8-4).

Figure 8-4: iLO Configuring Users

Click Add to create new iLO user accounts (Figure 8-5). You can also specify the management
rights assigned to the user.

Figure 8-5: New User Account

Click the Settings menu to manage additional iLO settings (Figure 8-6).

Figure 8-6: iLO Configuration Settings

Click Directory to manage directory services (Figure 8-7). This includes user account, LDAP,
and LOM object information.

Figure 8-7: Directory Configuration

Lights-Out Management (LOM) Object Distinguished Name
The object name that refers to Lights-Out Management in the directory schema. Not all directory services require the schema
to be extended to include an LOM object.
Schema
The structure of a database, including a directory database.

Click Settings | Configure iLO to display Global Settings (Figure 8-8), which can be used to
configure global iLO configuration features, including enabling or disabling iLO functionality.

Figure 8-8: Global Settings

Click Settings | Command Line Interface (CLI) to configure CLI settings (Figure 8-9). This
screen allows you to enable or disable the serial interface and configure its speed. You can
also determine whether the iLO command interface can be accessed over SSH and specify an
SSH port.

Figure 8-9: CLI Settings

Click Settings | Capture/Restore Configuration to write the iLO configuration information to a
file or retrieve and apply a stored iLO configuration file (Figure 8-10).

Figure 8-10: Capture and Restore Configuration
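The Lights-Out configuration tools script iLO through RIBCL, an XML command format, and captured configuration files follow the same conventions. The fragment below is an illustrative sketch only; the credentials and address are placeholders:

```xml
<!-- RIBCL sketch: log in to iLO and set a static management address. -->
<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="admin" PASSWORD="password">
    <RIB_INFO MODE="write">
      <MOD_NETWORK_SETTINGS>
        <DHCP_ENABLE value="No"/>
        <IP_ADDRESS value="192.168.100.10"/>
      </MOD_NETWORK_SETTINGS>
    </RIB_INFO>
  </LOGIN>
</RIBCL>
```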

Finally, click Settings | Remote Console Performance Tuning to set remote console
performance options (Figure 8-11).

Figure 8-11: Remote Console Performance Tuning

HP Insight Foundation Suite for ProLiant

The HP Insight Foundation Suite for ProLiant provides management solutions that are
designed to simplify server installation, deployment, configuration, and maintenance throughout
the product lifecycle and provide customers with higher levels of operational efficiency and
highly reliable solutions.
The HP Insight Foundation Suite has downloadable versions to support both Windows and
Linux. In each case, the features and functionality supported are essentially the same.
Components include:

Server deployment and software tools.

Smart systems software maintenance.
HP Systems Insight Manager (SIM).

Server deployment and software tools

Server deployment and software tools include:
Service Pack for ProLiant (SPP)
This includes both 32-bit and 64-bit versions, as well as SPP for Windows and Linux.
SmartStart Scripting Toolkit (SSSTK)
Provides a flexible way to create standard server configuration scripts. These scripts
are used to automate many of the manual steps in the server configuration process.
Subscription Services
This is available to keep customers up to date. Subscription updates include regular
enhancements and upgrades.

Smart systems software maintenance

Smart systems software maintenance includes a number of tools, such as:
HP Smart Update Firmware DVD
Combines the Firmware Maintenance CD, HP BladeSystem Firmware Deployment
Tool (FDT), and Online Firmware Bundles into a single DVD.
HP Smart Update Manager (SUM)
SUM helps to reduce the complexity of provisioning and updating HP ProLiant
Servers, options, and Blades within the datacenter. It is included with the Firmware
DVD, Service Packs, and Easy Setup CDs.
Intelligent firmware and software deployment
Both GUI and command line scriptable interfaces are available to deploy software
and firmware on Windows and Linux. Dependency checking is performed
automatically, and only necessary component updates are applied (including the
latest component versions, which are downloaded from the web).
Flexible firmware and software deployment
Provides support for both offline and online deployments and makes it possible for
software and firmware to be deployed together.
HP USB Key Utility
This utility allows system administrators to easily store, transport, and deploy
SmartStart and the Firmware DVD. All of the media can be put on a single USB key
for portability to allow you to eliminate CD and DVD swapping.

HP Systems Insight Manager (SIM)

The goal of HP SIM is to provide a focal point for proactive system management, rather than
always having to react to service and support requirements. To this end, the Insight Foundation
Suite includes the Management DVD containing HP SIM. HP SIM helps maximize system
uptime and performance and reduces infrastructure maintenance costs by providing proactive
notification of problems before those problems result in costly downtime and reduced performance.

HP SIM provides proactive fault and inventory management to prevent unplanned downtime
and helps maximize system uptime and performance. It also includes HP SIM Version Control
Agents and Version Control Repository, which provide system software version control
capabilities to make it easier to maintain consistent software baselines across ProLiant servers.

System Management Homepage

The System Management Homepage (SMH) is an operating system-independent browser-based tool for managing a single server. You can also configure SMH to deploy updates to
servers by using the Version Control Agent (VCA). When you first launch SMH, you are
prompted for user and password information (Figure 8-12).

Figure 8-12: System Management Homepage Login

Then you get to the main screen (Figure 8-13).

Figure 8-13: System Management Homepage

Commands are available through both onscreen prompts and through drop-down menus. We
will look at these in some detail.

You can access the Version Control Repository Manager (VCRM) by clicking HP Version
Control Repository Manager under the Version Control category.
VCRM, shown in Figure 8-14, allows you to manage updates and support packs.

Figure 8-14: Version Control Repository Manager

The catalog tab (Figure 8-15) allows you to manage repository contents, including configuring
components and moving and deleting components.

Figure 8-15: The catalog Tab

The reports tab (Figure 8-16) displays managed system status information.

Figure 8-16: The reports Tab

The archive tab allows you to control automatic archiving of components. You can archive
based on the number of versions stored or the age of the component (Figure 8-17).

Figure 8-17: The archive Tab

The log tab (Figure 8-18) allows you to view log information.

Figure 8-18: The log Tab

You also have access to online help through the help tab.

You can also access the Version Control Agent (VCA) from the Version Control category of
SMH. The Version Control Agent (VCA) (Figure 8-19) provides access to installed software
version information.

Figure 8-19: Version Control Agent

You can limit the information displayed to information about a particular computer. VCA also
provides access to logged information and online help.
Click the log tab to view entries written to the log by VCA tasks (Figure 8-20).

Figure 8-20: Version Control Log

Network adapter information

The Network category of SMH (Figure 8-21) lists network adapters installed in the computer.

Figure 8-21: SMH Network section

Clicking an adapter allows you to view detailed configuration information for that adapter
(Figure 8-22).

Figure 8-22: Network Adapter

Storage entries
The Storage category (Figure 8-23) provides information about installed storage controllers and
the devices attached to them.

Figure 8-23: Storage Category

Figure 8-24 shows information for the Smart Array controller. As you can see, the physical
drives, logical drives, and tape drives associated with the controller are listed in the left-hand pane.

Figure 8-24: Storage

When you click an item in the left pane, detailed information is provided for that component. For
example, Figure 8-25 shows detailed information for the drive in Bay 1.

Figure 8-25: Storage Details

Figure 8-26 shows the configuration and options available for the SATA storage controller,
which you obtain by clicking Standard Dual Channel PCI IDE Controller on the home page.
As you can see, there are no drives attached to this controller.

Figure 8-26: SATA Storage Controller Information

System category
Items under the System category report detailed information about various system components
(Figure 8-27).

Figure 8-27: SMH System Category

Click Show all to view the complete list. The reports available to you include:
Cooling
Management processor
Memory
Power
Processors
Temperature
Auto Server Recovery
PCI Devices
System Board
System Summary
Unit Identification Device (UID)

Cooling
The Cooling report shows information about the installed fans, including their speed, the
number of fans installed, and the minimum number of operational fans required for the system
to function (Figure 8-28).

Figure 8-28: SMH Cooling Report
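The relationship between operating fans and the required minimum can be sketched as a simple classification. This is a hypothetical illustration, not SMH's actual status logic:

```python
def cooling_status(operating_fans, required_fans):
    """Classify fan health from counts like those in the Cooling report.

    Hypothetical sketch: below the required minimum the system cannot
    run safely; one or more spare fans means cooling is redundant.
    """
    if operating_fans < required_fans:
        return "insufficient"
    if operating_fans > required_fans:
        return "redundant"
    return "ok"
```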

Management processor
The Management Processor report (Figure 8-29) displays information about the iLO
management processor, including the version, NIC and connection information, whether to
report the iLO status, and the license type.

Figure 8-29: Management Processor Report

Memory
The Memory report (Figure 8-30) shows information about installed memory, including the
amount and the memory protection mode. Detailed information about each memory board is
also listed, including the status and removal conditions.

Figure 8-30: Memory Report

Power
The Power report, shown in Figure 8-31, displays information about each power supply,
including the current power usage. It also displays redundancy information.

Figure 8-31: Power Report

Processors
The Processors report (Figure 8-32) shows information about each installed processor.

Figure 8-32: Processors Report

Temperature
The Temperature report, shown in Figure 8-33, displays the current temperature and the
caution threshold for the room, as well as for each component.

Figure 8-33: Temperature Report
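The comparison the Temperature report presents, each sensor's current reading against its caution threshold, can be sketched as follows. The sensor names and the function are hypothetical, invented purely to illustrate the check:

```python
def over_caution(readings, thresholds):
    """List sensors whose reading meets or exceeds the caution threshold.

    Hypothetical sketch of the per-sensor comparison shown in the
    Temperature report; a sensor with no threshold is never flagged.
    """
    return [sensor for sensor, celsius in readings.items()
            if celsius >= thresholds.get(sensor, float("inf"))]
```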

Auto Server Recovery

The Auto Server Recovery report shows the server's recovery configuration, as shown in
Figure 8-34.

Figure 8-34: Automatic Server Recovery Report

PCI Devices
The PCI Devices report, shown in Figure 8-35, displays information about each embedded PCI
device, including iLO management controllers, NICs, mass storage controllers, and the video controller.

Figure 8-35: PCI Devices Report

System Board
The System Board report, shown in Figure 8-36, displays information about the system board,
including the ROM version and the bus type. It also displays information about I/O devices.

Figure 8-36: System Board Report

System Summary
The System Summary report, shown in Figure 8-37, displays various system configuration
settings, as well as the operating system version that is installed.

Figure 8-37: System Summary Report

Unit Identification Device (UID)

You can use the UID page (Figure 8-38) to turn the system's UID light on or off. However, you
cannot change the status of the UID light when the light is blinking.

Figure 8-38: UID State Cannot Be Changed
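The blinking restriction amounts to a simple guard on the toggle operation. The following is a hypothetical sketch of that rule, not SMH's implementation:

```python
def toggle_uid(state):
    """Toggle the UID light between on and off.

    Hypothetical sketch of the rule in Figure 8-38: the UID state
    cannot be changed while the light is blinking.
    """
    if state == "blinking":
        raise ValueError("UID state cannot be changed while blinking")
    return "off" if state == "on" else "on"
```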

Operating System Storage Volume Management

Figure 8-39 shows additional categories on the SMH main page.

Figure 8-39: SMH More Options

The Storage Volume Management link in the Operating System category allows you to view
information about configured storage volumes (Figure 8-40).

Figure 8-40: Storage Volumes

Finally, the Software category allows you to view firmware (Figure 8-41) and software (not
shown) version information.

Figure 8-41: Firmware Version Information

Settings menu categories

Clicking the Settings menu gives you access to additional categories (Figure 8-42).

Figure 8-42: Settings Menu Selections

Auto Refresh settings

Under Auto Refresh, you can configure page refresh information (Figure 8-43). This allows you
to set the refresh interval for SMH.

Figure 8-43: Page Refresh

SMH data source

Under Select SMH Data Source, you can manage the source selections for management data
(Figure 8-44).

Figure 8-44: Select Data Source

Notice that SNMP and WBEM are available as data sources. However, you must choose one
or the other as your data source. In this case, WBEM has been selected as the data source.

The Security option, under the System Management Homepage category, provides access to a
wide variety of security configuration settings and options (Figure 8-45).

Figure 8-45: Security

The Security settings available are described in Table 8-1.

Table 8-1: Security Settings



Anonymous/Local access
Allows you to enable anonymous access to unsecured pages. You can also enable automatic local access, which allows the user who is logged in to access SMH without being prompted for credentials. Both of these options are disabled by default.

IP binding
Allows you to limit the subnets on which SMH will be available. If no subnets are defined, only local access will be available.

IP restricted login
Allows you to restrict login access based on the IP address of the system. You can define both permitted and restricted addresses. A user at a restricted address will not be allowed to log in even if the address is also listed in the permitted list. If there are no addresses in the permitted list, any address that is not restricted can be used to access SMH.

Local server certificate
Allows you to use a certificate issued by a certificate authority instead of the self-signed certificate created by SMH.

Port 2301
Determines whether access to the HP Web-Enabled System Management Software is supported.

Timeouts
Allows you to set session timeout and UI timeout values. The session timeout indicates the number of minutes that a user can remain idle before the SMH session is terminated. The UI timeout indicates the number of seconds that SMH waits for data requested from webapps.

Trust mode
Allows you to configure the type of security used to authenticate a server running HP SIM. Options include authentication by trusted certificate, trusting only HP SIM servers with a specified name, or trusting all systems.

Trusted management servers
Allows you to manage the certificates for trusted servers.

Kerberos Authentication
Used to configure Kerberos authentication in an Active Directory network.

User groups
Allows you to assign SMH access levels of Administrator, Operator, or User to Windows or Linux groups. Users with Administrator access have permission to view and configure all settings. Operators can view and configure most values. Users can only view information.
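The precedence rules for IP restricted login, restricted always wins, and an empty permitted list means "allow everything not restricted", can be sketched as follows. This is a hypothetical model for illustration, not SMH's implementation:

```python
def login_allowed(addr, permitted, restricted):
    """Decide whether an address may log in to SMH.

    Hypothetical sketch of the IP restricted login rules: a restricted
    address is always refused, even if it also appears in the permitted
    list; an empty permitted list admits any non-restricted address.
    """
    if addr in restricted:
        return False
    if not permitted:
        return True
    return addr in permitted
```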

The UI Properties (Figure 8-46) option under System Management Homepage allows you to
configure the appearance of the user interface. The settings shown in Figure 8-46 are the
default settings for all users.

Figure 8-46: UI Properties

The User Preferences option under System Management Homepage (Figure 8-47) allows you
to configure appearance preferences for the current user only.

Figure 8-47: User Preferences

Use Send Test Indication (Figure 8-48) under Test Indication to test event handling.

Figure 8-48: Test Indication

Server Configuration
Under the Tasks menu, you have one selection, Server Configuration (Figure 8-49).

Figure 8-49: Tasks Menu

Through Server Configuration, you can manage server configurable properties, as shown in
Figure 8-50.

Figure 8-50: Server Configuration

The Logs menu gives you access to an assortment of logs containing useful system
information (Figure 8-51).

Figure 8-51: Logs Menu

Available logs include:

Version Control Agent
Version Control Repository
Integrated Management Log
System Management Homepage Log
The System Management Homepage Log is shown as an example (Figure 8-52). It shows
configuration changes, as well as successful and failed login attempts.

Figure 8-52: System Management Homepage

Click the Webapps menu (Figure 8-53) to access links to installed Webapps, as well as HP
Web-enabled System Management Software.

Figure 8-53: Webapps


The Support menu provides access to support and forum links with additional information
(Figure 8-54).

Figure 8-54: Support

The Help menu, not shown, lets you access available help files.

HP Systems Insight Manager (SIM) Introduction

HP Systems Insight Manager (SIM) is designed to facilitate managing HP servers and storage
by being the easiest, simplest, and least expensive way for HP system administrators to
maximize system uptime and health. Key benefits include the following:
Provides hardware level management for HP ProLiant, Integrity, and HP 9000
servers, HP BladeSystem, storage array, and networking products
Integrity server
An HP server built on the Itanium processor and designed for enterprise applications.
HP 9000 server
An enterprise server that was discontinued in 2008.

Integrates with HP Insight Remote Support, provides contract and warranty
management, and automates remote support
Enables control of Windows, HP-UX, Linux, OpenVMS, and NonStop environments
Open Virtual Memory System (OpenVMS)
An operating system that runs on Alpha and Itanium processors, including HP Integrity blade servers.
NonStop server
A highly scalable HP server that provides 24x365 availability for enterprise applications.

Integrates easily with Insight Control and Matrix Operating Environment to enable
you to proactively manage the health of physical or virtual servers and to deploy
servers quickly, optimize power consumption, and optimize infrastructure confidently
with capacity planning
Insight Control
Management functionality that is integrated with HP SIM that allows you to proactively monitor server health and
performance, deploy and migrate servers, and configure servers remotely.
Matrix Operating Environment
HP cloud management software

Table 8-2 provides an overview of HP SIM features.

Table 8-2: HP SIM Features



Inventory Management
Discovers and identifies devices and stores inventory data in a database. Users can generate various reports based on the latest industry standards.

Health Management
Provides proactive notification of actual or impending component failure alerts. Automatic Event Handling allows users to configure policies to execute scripts, forward events, and notify appropriate users of failures via e-mail, pager, or SMS.

HP Version Control
Automatically downloads the latest BIOS, driver, and agent updates for ProLiant and Integrity servers running Windows and Linux. Identifies systems running out-of-date software and enables system software updates across groups of servers.

HP Insight Remote Support
Enhances HP SIM's event monitoring capabilities by sending automatic hardware event notification securely to HP, including entitlement, acknowledgement, and status returns. It extends the intelligent analysis and diagnosis of events, optimizing availability and reducing manual intervention. Available at no extra cost as a feature of systems under warranty or contract obligation.

Warranty and Contract
Ability to automatically retrieve and download warranty and contract details. Provides predefined and custom reports. Automatic notification 90, 60, and 30 days before a warranty or contract expires. Requires HP Insight Remote Support to be installed.

Easy and rapid installation
The First Time Wizard provides step-by-step, online guidance for performing the initial configuration of HP SIM. Helps you configure HP SIM settings on the Central Management Server (CMS).

Consistent multisystem management
A single command from a CMS can initiate a task on multiple systems or nodes, eliminating the need for tedious, one-at-a-time operations on each system.

Secure remote management
Leverages OS security for user authentication and Secure Sockets Layer (SSL) and Secure Shell (SSH) to encrypt management communications.

Role-based security
Allows effective delegation of management responsibilities by giving systems administrators granular control over which management operations users can perform on selected devices.

Tool definitions
Simple XML documents that allow customers to integrate off-the-shelf or custom command line and web-based applications or scripts into the HP SIM user interface.

Data collection and inventory
Performs comprehensive system data collection and enables users to quickly produce detailed inventory reports for managed devices. Saves reports in multiple formats for easy incorporation into popular reporting tools.

Snapshot comparisons
Allows users to compare configuration snapshots of up to four different servers or configuration snapshots of a single server over time.

Two user interfaces
Provides a web browser graphical user interface (GUI) and a command line interface (CLI) to make it easy to incorporate HP SIM into your existing management processes.

Web-Based Enterprise Management (WBEM)
Allows users to subscribe and unsubscribe to WBEM indications for HP-UX, Linux, and Storage Management Initiative Specification (SMI-S) devices through the GUI or through the command line interface (CLI).
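The Warranty and Contract feature's 90/60/30-day notification schedule is easy to express concretely. The function below is a hypothetical illustration of when those warnings would fire; HP SIM computes this internally:

```python
from datetime import date, timedelta

def notification_dates(expiry):
    """Dates on which expiry warnings fire: 90, 60, and 30 days ahead.

    Hypothetical sketch of the Warranty and Contract notification
    schedule described in Table 8-2.
    """
    return [expiry - timedelta(days=d) for d in (90, 60, 30)]
```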

Windows and Linux versions of HP SIM are available as separate downloads from HP.
Short Message Service (SMS)
A protocol used to send text messages.
Central Management Server (CMS)
The server responsible for collecting data from and deploying configuration and updates to other servers. You install HP SIM on the CMS.
Secure Sockets Layer (SSL)
A protocol used to encrypt data being transmitted over an IP network. Typically used to encrypt data transmitted between a browser and a Web site.
Storage Management Initiative Specification (SMI-S)
A specification that defines how a SAN communicates with a management system using WBEM.

A management domain consists of the CMS, which runs HP SIM, and a collection of
managed systems. The CMS maintains a database of persistent objects that can reside locally
or on a separate server. The CMS typically manages itself, but it can be managed by a
separate CMS in a different management domain.
A managed system can be any device on the network that can communicate with HP SIM.
However, it must be running a management agent. Agents include the Insight Management
Agents for ProLiant servers and can be either WBEM or SNMP based.
A collection is a grouping of systems that share common attributes, such as operating systems
or types of hardware. There are default collections, but you can also create custom collections.
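Deriving a collection from a shared attribute amounts to grouping inventory records by key. The sketch below is a hypothetical illustration of that idea; the record fields are invented and bear no relation to HP SIM's actual data model:

```python
from collections import defaultdict

def build_collections(systems, attribute):
    """Group managed systems into collections by a shared attribute.

    Hypothetical sketch: each system is a dict of inventory fields,
    and the collection key is the value of `attribute` (for example,
    the operating system).
    """
    collections = defaultdict(list)
    for system in systems:
        collections[system[attribute]].append(system["name"])
    return dict(collections)
```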

Management agents
HP SIM allows you to manage systems through SNMP and by receiving incoming SNMP trap
events. HP SIM provides tools to integrate third-party (non-HP) SNMP v1/v2 MIBs into HP SIM
to provide support for processing and displaying traps from other systems. This includes MIB
syntax extensions that are supported by HP SIM and provide additional value in customizing
specific trap information.
Tools that are provided include:
mcompile
Verifies the syntax of all MIBs that will be loaded into the system and resolves all
MIB dependencies. Where necessary, it converts SNMP v2 MIBs into v1 format for
loading into the HP Systems Insight Manager database.
Management Information Base (MIB)
A database that manages the objects that relate to the characteristics of a managed device.

mxmib
Registers MIBs into HP SIM. This tool can also list all registered
MIBs, display the traps contained in each individually registered MIB, and
unregister MIBs that the user or the system has previously registered.
HP SIM includes a set of MIBs to provide out-of-the-box support for many HP
systems. Refer to the product documentation for the key MIBs that ship with HP SIM.
In addition, to take full advantage of the capabilities of HP SIM, you can install WBEM providers
on your managed systems. Each hardware/operating system platform has its own set of
providers. HP SIM uses the information returned by these providers to:

Identify the operating system on the managed systems.

Associate the virtual or physical instances with their respective containers.
Inventory and report on hardware and software configurations.
Monitor the system remotely for hardware faults.
Available WBEM providers include:
Storage providers
Scenario: BCD Train
BCD Train wants to install a centralized monitoring and management solution. The network
includes both Windows and Linux servers and 10 network switches.
Discuss the advantages to implementing a solution based on HP SIM.

HP SIM Installation
Our discussion of HP SIM begins with a look at SIM installation. However, before you can
install SIM, you must ensure that the target system environment will support HP SIM
installation. Once these requirements are met, you can begin the installation process.

The first screen displayed when you start HP SIM installation is the Insight Management
license agreement (Figure 8-55).

Figure 8-55: End User License Agreement

When you launch installation, the Welcome screen prompts you to run the Insight Management
Advisor, which checks the target system to see if installation prerequisites are met (Figure 8-56).

Figure 8-56: Welcome Screen

When you run the HP Insight Management Advisor, it checks for installation prerequisites and
reports any potential problems, as shown in Figure 8-57.

Figure 8-57: HP Insight Management Advisor

You must correct major errors, and you should correct minor errors, before starting the
installation. You can filter results by severity to make it easier to find issues that need to be
resolved, as shown in Figure 8-58. You can also filter by component to ensure that only issues
reported for the product you are installing are displayed.

Figure 8-58: Filter Selection

You can double-click on any issue reported by the Management Advisor for additional details
(Figure 8-59).

Figure 8-59: Additional Details for Windows Firewall Test

This provides you with additional information about the prerequisite being tested, potential
problems, and recommended solutions.

Installation and configuration

After resolving issues relating to installation prerequisites, you are ready to start the installation.
We will limit the discussion to HP Systems Insight Manager installation, but the installation set
also supports HP Insight Control and HP Matrix Operating Environment installation (Figure 8-60).

Figure 8-60: Installation Selection

You can customize the details of your installation choices (Figure 8-61). This allows you to
choose the HP SIM components to be installed.

Figure 8-61: Select Components

You are presented with a list of installation prerequisites and a reminder to check
that they have been met (Figure 8-62).

Figure 8-62: Prerequisites

You are then prompted for the installation destination. On a computer running Windows, the
default is a subdirectory of the Program Files directory (Figure 8-63).

Figure 8-63: Installation Directory

You also must identify the service account associated with HP SIM (Figure 8-64). If HP SIM is
already installed, you must specify the current user credentials.

Figure 8-64: Service Account

The account used as the service account must have a password, or an error is reported (Figure 8-65).

Figure 8-65: Password Error

You are also prompted for database information. You can specify an existing database or
choose to install SQL Server Express and create a database for HP SIM (Figure 8-66).

Figure 8-66: Database Configuration

If HP SIM is already installed on the computer, the existing database is used by this installation.
Any detected errors are reported by the installation program. These will need to be corrected
before you continue (Figure 8-67).

Figure 8-67: Database Error

To make it easier to complete the installation, you can configure HP Insight Management to
sign in automatically after reboot and continue (Figure 8-68).

Figure 8-68: Automatic Sign-In Configuration

HP SIM requires Internet access if you want to automatically download updates to be deployed
to servers. If a proxy server is needed to access the Internet, you must provide proxy server
information (Figure 8-69).

Figure 8-69: Proxy Configuration

You are also prompted for the port to use for WMI data collections, discovery, and identification
(Figure 8-70).

Figure 8-70: WMI Port Configuration

You are prompted to determine whether to enable automatic download of HP system software
(Figure 8-71). You cannot enable automatic download without an Internet connection.

Figure 8-71: Automatic Download

An installation summary is displayed before the physical installation begins (Figure 8-72).

Figure 8-72: Installation Summary

When you click Install, additional installation notes are displayed (Figure 8-73).

Figure 8-73: Installation Notes

The installation program reports when installation is complete (Figure 8-74).

Figure 8-74: Completed Installation

You are also presented with post-installation steps.

Scenario: BCD Train
BCD Train has selected a server running Windows Server on which it will install HP SIM. You
need to verify that the computer meets the installation prerequisites.
Discuss using the Management Advisor to identify and help correct problems with
installation prerequisites.

SIM Desktop
The first time that you launch the HP SIM desktop interface, you are prompted to register HP
SIM (Figure 8-75).

Figure 8-75: Registration

After you register the server or click Register Later, the desktop interface of HP Insight Control
starts. The Home page is displayed with links to configuration activities necessary to complete
the installation (Figure 8-76) and begin using HP SIM.

Figure 8-76: Insight Control - Home

There are also links to many of the monitoring and management features that you will use. The
left pane organizes the managed systems in collections.
You can return to the Home page at any time by clicking the Home link in the upper-right corner.

For now, we will look at the additional configuration steps you will need to take to begin using HP SIM.

Additional configuration
Click discovery and credentials to configure the automatic discovery process (Figure 8-77).

Figure 8-77: Discovery

While discovery is running, the discovery target information and results are displayed (Figure 8-78).

Figure 8-78: Discovery Results

Next, you can launch the Managed Systems Setup Wizard from the Home page to locate and
configure systems for management (Figure 8-79). You start by selecting the type of systems for
which you are searching, such as all servers or all systems.

Figure 8-79: Managed System Setup Wizard

Next, verify the target systems (Figure 8-80). You can also add other target system types.

Figure 8-80: Target Systems

Next, an introduction to the setup wizard is displayed (not shown), which describes the process.
Then, you are prompted to choose management features for the target systems (Figure 8-81).

Figure 8-81: Select Features

Next, you are prompted to choose management options (Figure 8-82).

Figure 8-82: Options

The wizard then analyzes the selected systems (Figure 8-83).

Figure 8-83: System Analysis

The analysis results are reported, including whether there are any detected problems. For
some systems, it may be necessary for the wizard to automatically configure the system for
management.
Next, you are prompted with license information for target systems (Figure 8-84). You can click
Add License to add a license for a management capability.

Figure 8-84: System Licenses

A summary of actions that the wizard will perform is displayed (Figure 8-85).

Figure 8-85: Summary

The Results screen has the same target system list but will report configuration results (Figure 8-86).

Figure 8-86: Results

You can click Add new users on the Home page to identify users authorized to use HP SIM
(Figure 8-87).

Figure 8-87: Add Users

You are initially prompted with a list of configured users and groups. Click New to add a new
user authorization (Figure 8-88).

Figure 8-88: New User

You are prompted for information about the user, including the user's name, domain, full name,
phone, and e-mail address. You can also add IP address restrictions that limit the addresses from
which the user can log in and add pager contact information.
You can click email or paging on the Home page to enter e-mail and paging information
for messages generated by HP SIM (not shown). When doing this, you must enter mail server
and e-mail address or pager number information.
Finally, you can click automatic event handling on the Home page to configure autoevent
handler information as part of your initial configuration (Figure 8-89).

Figure 8-89: Event Name

First, you need to enter the event name. Next, select the events for which you are configuring
automatic event handling (Figure 8-90).

Figure 8-90: Select Events

You are prompted to identify the target systems for event handling (Figure 8-91).

Figure 8-91: Target Systems

You also must select the actions to take in response to the event (Figure 8-92).

Figure 8-92: Event Action

You can specify a time filter to keep the event from repeatedly firing over a short period of time
(Figure 8-93).

Figure 8-93: Event Timer
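The time filter's suppression behavior can be sketched with a small class. This is a hypothetical model of the rule described above, not HP SIM's implementation:

```python
import time

class EventTimeFilter:
    """Suppress repeat firings of the same event within a window.

    Hypothetical sketch of the automatic event handler's time filter:
    once an event fires, further occurrences of that event within
    `window` seconds are ignored.
    """

    def __init__(self, window):
        self.window = window
        self.last_fired = {}

    def should_fire(self, event_name, now=None):
        # `now` can be injected for testing; defaults to wall-clock time.
        now = time.time() if now is None else now
        last = self.last_fired.get(event_name)
        if last is not None and now - last < self.window:
            return False
        self.last_fired[event_name] = now
        return True
```

Each event name is tracked independently, so a fan event does not suppress a disk event.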

A summary is displayed before the event handler is created (Figure 8-94).

Figure 8-94: Automatic Event Handler Summary

After the event handler is created, it is then listed with other handlers (Figure 8-95).

Figure 8-95: Event Handlers

HP SIM commands
For the desktop interface, management tasks are accessible through drop-down menus (Figure 8-96).

Figure 8-96: Management Tasks

We are not covering all menu selections in detail, but we will review the menus and the types of
commands accessible through each.
The Tools menu (Figure 8-97) allows you to view system information and gives you access to
command line tools and Insight management.

Figure 8-97: Tools Menu

The Deploy menu (Figure 8-98) allows you to deploy drivers, firmware, and management
agents. You can also manage licenses.

Figure 8-98: Deploy Menu

The Configure menu (Figure 8-99) gives you access to the Managed System Setup Wizard,
which was reviewed earlier in this chapter. You can also set disk thresholds, manage
communications, configure and repair agents, and replicate agent settings.

Figure 8-99: Configure Menu

The Diagnose menu (Figure 8-100) provides access to the ping command and to performance
management, if it is installed. Performance management supports both online and offline
analysis.
Figure 8-100: Diagnose Menu

The Reports menu (Figure 8-101) allows you to create, view, and manage reports.

Figure 8-101: Reports Menu

The Tasks & Logs menu (Figure 8-102) gives you access to configured tasks and available
information logs.

Figure 8-102: Tasks & Logs Menu

The Options menu (Figure 8-103) gives you access to a wide variety of SIM configuration
options and wizards.

Figure 8-103: Options Menu

Finally, the Help menu (Figure 8-104) allows you to access available help for HP SIM.

Figure 8-104: Help Menu

For most of the available help, you will need Internet access.

Viewing System Information

Before we conclude our discussion of HP SIM, we will take a quick look at how you can obtain
information about managed systems. As you can see in Figure 8-105, the systems are
organized in collections. Click All Systems to show a list of all the systems that can be
managed by HP SIM. You can manage various types of systems including switches,
management processors, storage devices, printers, racks, and BladeSystem enclosures.

Figure 8-105: All Systems

The Health status for each system is shown in the HS category. You can obtain more detailed
information by clicking the icon in the HS category or by clicking the link to the system.

Clicking the icon in the HS category for a server displays the SMH.
The MP column reports the health status for the management processor. This column will only
contain an icon if a device has a management processor.
For example, as you can see in Figure 8-106, the servers have MP icons, but the other systems
do not.

Figure 8-106: Management Processor Icons

The SW icon allows you to track software and firmware versions for the system.
The PF icon reports performance statistics and will only be populated if a performance
management report has been run on the system.
The ES icon indicates whether there are events that have been logged for the system. You can
click the icon to view events.

ProActive Insight Architecture (Gen8)

The HP iLO Management Engine and iLO 4 technologies in HP ProLiant Gen8 servers
incorporate HP Agentless Management. This makes it possible to manage core hardware
monitoring and alerting without the use of Agents. HP SIM communicates with the HP iLO
Management Engine to collect hardware and OS management data without the need to install
Agents on the managed nodes. This is ideal for lights-out data centers, branch offices, retail
locations, or other locations that seek to minimize data center traffic or do not have onsite IT staff.
Key features include:
HP Agentless Management

The base hardware monitoring and alerting capability is built into the system (running
on the HP iLO chip set) and starts working the moment that a power cord and an
Ethernet cable are connected to the server.
HP iLO Mobile App
HP iLO 4 brings additional efficiency and effective remote management to your
fingertips by providing secure access to your server from your smartphone or
tablet. HP currently supports Apple's iOS (iTouch, iPhone4, iPad) and Android
smartphones and tablets running Android 2.3.
HP iLO multi-language support
Provides customers with the ability to read the HP iLO GUI in English, Japanese, and
simplified Chinese.
HP Sea of Sensors 3D
Adds approximately 28 additional thermal sensors on HP branded networking and
storage PCI cards, backplanes, and mezzanine cards for a 3D view of system
cooling that automatically tracks thermal activity (heat) across the server. When
temperatures get too high, sensors can kick on fans and make other adjustments to
reduce energy usage.
Scenario: BCD Train
You created management user accounts for the network administration staff. The staff wants
your recommendations on tools to use for management and monitoring network computers.
Discuss interface options and the advantages of each.

In this chapter, you learned:
HP Insight Foundation Suite includes downloadable versions to support both
Windows and Linux.
Smart systems software maintenance includes a number of tools.
HP SIM is designed to facilitate proactive system management from a CMS.
HP SIM supports SNMP and WBEM as management agents for ProLiant systems.
HP Insight Management Advisor checks to ensure that a computer meets HP SIM
installation prerequisites.
After HP SIM installation, you should run discovery and credentials and the Managed
Systems Setup Wizard to configure managed systems.
SMH is a browser-based interface for managing a single server.
HP Integrated Lights-Out (iLO) management processors for HP ProLiant servers
virtualize system controls.
The HP Lights-Out Online Configuration Utility allows you to view and configure iLO.
The HP iLO Management Engine and iLO 4 technologies in HP ProLiant Gen8

servers incorporate HP Agentless Management.

Review Questions
1. Which SMH category includes reports for various system components?
2. What restriction must you set to limit users to accessing HP SIM from specific computers?
3. After configuring discovery, what do you use to search for managed systems?
4. HP Insight Management Advisor has an exclamation point inside a yellow triangle. What
does this signify?
5. Which SMH utility allows you to upload and manage support packs?

1. Supported data sources for SMH are ______ and _________.
2. HP SIM Version Control Agents and _______________
__________________________________ help to provide system software version
control capabilities.
3. Use the ________________ ________________ Setup Wizard to locate and configure
systems for management.
4. You can archive components in the VCRM based on number of versions stored or on the
_______ of the component.

1. The Lights-Out Online Configuration Utility allows you to save and restore iLO
configuration information.
2. You are required to use WBEM management agents with computers running Linux.
3. SMH only allows you to view managed system information.
4. A G7 managed system requires an agent.
5. The Managed Systems Setup Wizard can automatically configure target systems.

Essay question
1. Explain how you can ensure that a server only acts as a managed system if the CMS has
a trusted certificate.
2. Describe the access levels for SMH.

Scenario question
Scenario: Stay and Sleep

Stay and Sleep has a database server and a web server. The database administrator needs
to be able to log on to the database server using Remote Console and transfer files using
virtual media. The web developer needs the same permission on the web server.

1. Explain how you would configure the necessary permissions.


Chapter 9: Configuring Networking

Now that you have the server up and running, you are ready to attach it to the network so that it
can begin servicing client requests. In this chapter, you will learn how to configure a network
adapter. We will begin with a discussion of basic NIC functionality. Next, we will look at several
features available with multifunction NIC cards that can help you configure your server to
efficiently handle requests from multiple clients.

In this chapter, you will learn how to:
Discuss NIC configuration.
Describe basic NIC features and functionality.
Identify added functionality present in multifunction network adapters.

NIC Configuration
To understand what you can do in today's network environment, you need to know something
about the features and functionality provided by network adapters.
We will start with a discussion about NIC configuration. Some advanced features will be
introduced during this discussion and explained in greater detail later in this chapter. We will
look at adapter properties, basic configuration settings, and advanced configuration for
multifunction adapters.

Sample network adapter properties

You can access network adapter properties through Windows Device Manager or through the
Local Area Connection Properties for the adapter (Figure 9-1).

Figure 9-1: Local Area Connection Properties

The General tab provides a general description of the adapter and reports the adapter status
(Figure 9-2).

Figure 9-2: General Tab

The Advanced tab lets you view and manage adapter configuration settings (Figure 9-3).

Figure 9-3: Advanced Tab

When you select a property, you are prompted with the current value. Values are specific to
each property, and many are simply either enabled or disabled.
The Driver tab lets you view and manage the device driver associated with the adapter
(Figure 9-4).
Figure 9-4: Driver Tab

You can view detailed information about the driver, update the driver, or revert to an earlier
version if the driver has already been updated. You also have the option of removing
(uninstalling) the driver. You can click Disable to disable the device.
The Details tab lets you view detailed information about device properties (Figure 9-5).

Figure 9-5: Details Tab

Click the Property drop-down arrow to view a list of supported properties (Figure 9-6).

Figure 9-6: Properties Selection List

Finally, the Power Management tab lets you configure power management for the adapter
(Figure 9-7).

Figure 9-7: Power Management Properties

For this adapter, the default is to allow the computer to turn off the adapter when it is not in
use to save power.

TCP/IP configuration
You can manage IPv4 properties for a network adapter by selecting Internet Protocol Version
4 (TCP/IPv4) on the Networking tab and clicking Properties, as shown in Figure 9-8.

Figure 9-8: Configuring IPv4 Properties

The standard IPv4 properties are shown in Figure 9-9.

Figure 9-9: IPv4 Properties

If your network includes a DHCP server, you can configure the server to obtain its IP
configuration automatically. Otherwise, you need to manually define values for the parameters
described in Table 9-1.
Table 9-1: IPv4 Configuration Parameters

IP address
The address used to identify the NIC. The IP address must be unique on the network.

Subnet mask
The value that differentiates the host portion of the address from the network portion of the address.

Default gateway
The router used to access resources outside the subnet.

DNS server
The server used to resolve a fully-qualified domain name to an IP address.
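The subnet mask arithmetic described above can be checked with Python's standard ipaddress module; the addresses below are hypothetical examples chosen for illustration.

```python
import ipaddress

# A hypothetical host configuration: IP address plus subnet mask.
iface = ipaddress.ip_interface("192.168.10.25/255.255.255.0")

print(iface.network)   # network portion: 192.168.10.0/24
print(iface.ip)        # host address:    192.168.10.25

# Two hosts can exchange traffic without the default gateway only if the
# mask places them in the same network.
peer = ipaddress.ip_address("192.168.10.200")
print(peer in iface.network)   # True -> same subnet, no router needed
```

When the membership test is False, the host hands the traffic to the default gateway instead.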


You can click Advanced to configure advanced settings in the Advanced TCP/IP Settings
dialog. The IP Settings tab, shown in Figure 9-10, allows you to assign multiple IP addresses
to the NIC. A NIC with multiple IP addresses is called a multihomed NIC. For example, you
might configure a web server with a multihomed NIC if you want to assign a different IP
address to each hosted website. You can also assign multiple default gateways.

Figure 9-10: Advanced TCP/IP Settings

The DNS tab, shown in Figure 9-11, allows you to configure multiple DNS servers. It also
allows you to configure name resolution settings that impact how suffixes are appended when
the server attempts to resolve the name of a resource. You can also determine whether the
server will dynamically register itself with the DNS server.

Figure 9-11: DNS Tab

The WINS tab, shown in Figure 9-12, allows you to configure a WINS server. The WINS tab
also provides other options that can determine how NetBIOS names are resolved to IP
addresses. You can also disable NetBIOS over TCP/IP. You should only do so if you know that
you do not have any services or applications on the network that use NetBIOS.

Figure 9-12: WINS Settings

HP Network Configuration Utility

HP provides a Network Configuration Utility that lets you configure advanced properties for
multifunction adapters. Select HP Network Configuration Utility, and then click Properties to
manage adapter properties (Figure 9-13).

You can also launch the Network Configuration Utility from Control Panel.

Figure 9-13: HP Network Configuration Utility

HP Network Configuration Utility displays a Welcome screen when you first launch it (Figure 9-14). Click OK to dismiss the Welcome screen.

Figure 9-14: Welcome Screen

The initial configuration screen displays the installed adapters (Figure 9-15).

Figure 9-15: Initial Configuration Screen

From here you can manage properties for an adapter and configure adapter teaming.
adapter teaming
The process of grouping multiple NICs into a single logical channel to allow them to share the load.

General adapter settings are configured through the Settings tab (Figure 9-16).

Figure 9-16: Adapter Properties Settings Tab

From here, you can configure the adapter speed and choose whether to use half-duplex or
full-duplex communication. You can also set a local Ethernet address, enable iSCSI, and
configure a local iSCSI address.
A local Ethernet address is a MAC address that overrides the one specified by the
NIC's hardware.
Media Access Control (MAC) address

A hexadecimal value that uniquely identifies a NIC.
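Overriding the burned-in address is legitimate because IEEE 802 reserves a flag for it: bit 1 of a MAC address's first octet marks the address as locally administered (software-assigned) rather than burned in. A small sketch, with hypothetical addresses:

```python
def is_locally_administered(mac: str) -> bool:
    """Bit 1 (value 0x02) of the first octet marks a locally
    administered (software-assigned) MAC address."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

# Hypothetical addresses for illustration:
print(is_locally_administered("00:1B:78:11:22:33"))  # False - burned-in style
print(is_locally_administered("02:00:00:AA:BB:CC"))  # True  - locally administered
```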

half-duplex communication
Communication in which the hosts take turns sending and receiving packets.
full-duplex communication
Communication in which the packets are sent and received simultaneously.
iSCSI
A protocol used to connect to a SAN over a TCP/IP network.
Storage Area Network (SAN)
A shared storage device attached to the network.

The Advanced Settings tab lets you manage various Ethernet properties (Figure 9-17).
Adjusting these settings can sometimes help improve network performance. For example, you
can enable the TCP Connection Offload option to reduce the load on the server's processor
and improve both network and server performance.

Figure 9-17: Advanced Settings Properties

Select a property to view its current value. You are prompted with possible values for the
property. Click Restore Default to return a property to its default value.
The VLAN tab lets you specify VLAN connections for both Ethernet and iSCSI (Figure 9-18).

Figure 9-18: VLAN Properties

Virtual Local Area Network (VLAN)
A logical grouping that segments a network to reduce collisions.
collision
A networking event that occurs when multiple devices try to communicate over the same cable at the same time. When a
collision occurs, the data must be retransmitted.
switch
A network device that is used to attach multiple devices to the network and direct traffic to the correct device or segment.

When you use VLANs, switches are used to segment the network, and segmentation is usually
done by port. A VLAN can be made up of ports assigned from a single switch or ports gathered
from multiple switches. Each VLAN will have a different ID number. A VLAN can be associated
with multiple subnets (Figure 9-19).

Figure 9-19: Switch with Two VLANs

VLANs have become a popular segmentation option for LANs. Routers are still the primary
means of segmenting over a wider area and between wide area links.
router
A network segmentation device that is used to connect two or more subnets. Some switches, called routing switches, are
also routers.

On the VLAN tab, click Add for Ethernet or iSCSI, and you are prompted for a VLAN name and

ID (Figure 9-20).

Figure 9-20: VLAN Name and ID

You can also enter multiple VLANs. In this example, we have added one for Ethernet (Figure 9-21).

Figure 9-21: Ethernet VLAN

The Statistics tab displays summary statistics for the adapter (Figure 9-22).

Figure 9-22: Adapter Statistics

The TOE Statistics tab displays TCP offload engine statistics (Figure 9-23).
TCP offload engine (TOE)
Technology used to offload TCP/IP protocol stack processing completely to the network adapter.

Figure 9-23: TOE Statistics

The Information tab displays additional detailed information about the adapter (Figure 9-24).
Information shown here includes the following:
Current and burned in MAC address
Information about installed drivers
Information about the physical cable
Information about the bus the NIC is attached to
IP address configuration

Figure 9-24: Adapter Information

The Diagnostics tab is used to run adapter diagnostics (Figure 9-25).

Figure 9-25: Diagnostics

You can click Continuous to run tests in a continuous loop. This gives you a good way to look
for intermittent problems.

Network Adapter Features

We will now take a little closer look at a few network adapter features. Feature support is one of
the primary criteria for selecting one network adapter over another. We will start with features
that are standard to most network adapters and then move on to features found on multifunction
adapters.
Standard adapter features

Two features deserve special mention because they are found on virtually all current network
adapters. These features are:
Wake On LAN (WoL)
The ability to wake up a computer after receipt of a message from the LAN.
Preboot eXecution Environment (PXE)
The ability to boot a computer from files stored on the network rather than locally on
the computer.

WoL can be implemented in a wired or wireless network environment. When you use WoL, the
computer is in a stand-by state, and the network adapter remains powered. The adapter
monitors for a wake-up packet, sometimes called a magic packet, containing its MAC address.
When the wake-up packet is received, the adapter will cause the computer to power up.
In traditional implementations, the wake-up packet is treated as a broadcast packet, and it can
be sent through the local subnetwork only. However, Subnet Directed Broadcasts (SDB) can
be used to send the packet to other subnets, if routers are configured to support SDB.
Subnet Directed Broadcasts (SDB)
Packets routed to a remote subnetwork and then treated as a broadcast packet at that destination.
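The wake-up ("magic") packet described above has a fixed layout: six 0xFF bytes followed by the target MAC address repeated 16 times. A sketch of building and broadcasting one, assuming a hypothetical MAC address; actually waking a machine also requires WoL to be enabled in the NIC and BIOS:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake on LAN magic packet: six 0xFF bytes followed by
    the target NIC's MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255") -> None:
    """Send the packet as a UDP broadcast; port 9 (discard) is conventional."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(magic_packet(mac), (broadcast, 9))

print(len(magic_packet("00:1B:78:AA:BB:CC")))  # 102 bytes (hypothetical MAC)
```

For Subnet Directed Broadcasts, you would pass the remote subnet's broadcast address instead of 255.255.255.255, provided the intervening routers permit it.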

PXE gives you a way of supporting remote boot for client computers. When you use PXE,
startup files are located on a server referred to as a PXE boot server rather than being stored on
the local computer. The adapter uses TFTP to download the files from the server.
Trivial File Transfer Protocol (TFTP)
A TCP/IP protocol used to send and receive small files on the network.

PXE is often used for remote operating system installation from a network source. The target
computer loads the boot files and then the installation source files from a network server.
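The TFTP transfer that PXE depends on starts with a read request (RRQ), whose layout is fixed by RFC 1350: a two-byte opcode, then the filename and transfer mode as NUL-terminated strings. A minimal sketch of building one (the boot filename is a typical but hypothetical example):

```python
def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read request (RRQ) as defined in RFC 1350:
    opcode 1, then the filename and transfer mode, each NUL-terminated."""
    opcode = (1).to_bytes(2, "big")
    return opcode + filename.encode("ascii") + b"\x00" + mode.encode("ascii") + b"\x00"

# A PXE client's first request might look like this:
pkt = tftp_rrq("pxelinux.0")
print(pkt)  # b'\x00\x01pxelinux.0\x00octet\x00'
```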

Multifunction network adapters

The remaining features discussed in this section are found on HP multifunction network
adapters. Most of these are configured through the HP Network Configuration Utility, seen
earlier in this chapter. The features we will be discussing include:
TCP offload engine (TOE)
Receive-side scaling (RSS)
Remote direct memory access (RDMA)
iSCSI support
Adapter teaming
We will now take a quick look at each of these.

TCP Offload Engine (TOE)

The increased bandwidth of Gigabit Ethernet networks increases the demand for CPU cycles to
manage the network protocol stack. This increased demand for CPU cycles means that the
performance of even a fast CPU will degrade when it simultaneously processes application
instructions and transfers data to or from the network. Computers that are most susceptible to
this problem are application servers, web servers, and file servers that have many concurrent
connections.
ProLiant TOE for Windows speeds up network-intensive applications by offloading TCP/IP-related
tasks from the processors onto the network adapter. TOE network adapters have onboard
logic to process common and repetitive tasks relating to TCP/IP network traffic. The
capacity of onboard logic to process tasks effectively eliminates the need for the CPU to
segment and reassemble network data packets. Eliminating this work significantly increases
the application performance of servers attached to gigabit Ethernet networks.
TOE is included on integrated Multifunction Gigabit Ethernet adapters and optional
multifunction mezzanine cards. It is supported on Microsoft Windows Server 2003 when the
Scalable Networking Pack is installed and on Windows Server 2008.
TOE is enabled through the Network Configuration Utility on the Advanced Settings tab
(Figure 9-26).

Figure 9-26: TOE Enabled

As you saw earlier in the chapter, the TOE Statistics tab displays TOE statistics information.

Receive-side scaling (RSS)

RSS is also enabled through the Network Configuration Utility on the Advanced Settings tab
(Figure 9-27).

Figure 9-27: RSS Enabled

RSS balances incoming short-lived traffic across multiple processors while preserving packet
delivery in the correct order. Additionally, RSS dynamically adjusts incoming traffic as the
system load varies. As a result, any application with heavy network traffic running on a multiprocessor server will benefit. RSS is independent of the number of connections, so it scales
well. RSS is particularly valuable to web servers and file servers that handle heavy loads of
short-lived traffic.
For RSS support on servers running Windows Server 2003, Scalable Networking Pack must be
installed. Windows Server 2008 supports RSS as part of the operating system.

Remote direct memory access (RDMA)

RDMA is a feature of many network adapters that allows the adapter to directly access
computer memory without involving the operating system. This direct access allows for
significantly faster data transfers between the network adapter and memory, which helps to
improve performance of the network and network-related activities. It is especially helpful in
reducing network communication latency.
RDMA is enabled by default and is not accessed through the Network Configuration
Utility Advanced Settings.

iSCSI support

Multifunction adapters provide support for iSCSI acceleration.
Accelerated iSCSI offloads the iSCSI function to the NIC rather than placing the burden on the
server CPU. Accelerated iSCSI is enabled by the HP ProLiant Essentials Accelerated iSCSI
Pack, which is used with certain embedded Multifunction NICs in Windows and Linux
environments.
iSCSI support is enabled through the General tab of the Network Configuration Utility
(Figure 9-28).
Figure 9-28: Network Configuration Utility General Tab

Click iSCSI IP Settings to manage IP address assignment for iSCSI (Figure 9-29).

Figure 9-29: iSCSI IP Settings

When you enable iSCSI, the default is to receive IPv4 and IPv6 addresses automatically.

Network adapter teaming

We end this section with a quick look at network adapter teaming. You can increase a server's
network bandwidth through network adapter teaming, also known as NIC teaming (Figure 9-30).

Figure 9-30: Network Adapter Teaming

NIC teaming is also referred to as link aggregation. NIC teaming allows two adapters to work
together, sharing the bandwidth load and effectively doubling the bandwidth.
Unfortunately, the current version of Windows SBS does not in itself support NIC teaming.
There is a solution, however. Many NIC manufacturers (including HP) offer software solutions
to support NIC teaming for their adapters.
You must have two adapters from the same manufacturer, and usually the same model,
to configure NIC teaming with third-party software.
Network adapter teaming is configured from the first screen of the Network Configuration Utility
(Figure 9-31).

Figure 9-31: Configure Network Adapter Teaming

Select the adapters for which you want to configure teaming and click Team. The Network
Configuration Utility now indicates that the adapters are teamed (Figure 9-32).

Figure 9-32: Teamed Adapters

To end adapter teaming, select the team and click Dissolve.

Click Properties to configure adapter teaming properties (Figure 9-33).

Figure 9-33: Teamed Adapter Configuration Properties

Basic team settings are configured through the Teaming Controls tab. The team type
selections are described in Table 9-2. You can also select a load balancing method.
Table 9-2: Team Type Selections




Automatic
The system automatically selects the type of team based on the adapter configuration. This
setting is the default setting and is recommended, except when the team includes an
iSCSI-enabled adapter. A team that includes an iSCSI-enabled adapter cannot use automatic
configuration.

802.3ad Dynamic with Fault Tolerance
Team members are placed into a port-trunk/channel by the switch. Transmit packets are load
balanced between all team members by the teaming device driver. Receive packets are load
balanced by the switch. A team that includes an iSCSI-enabled adapter cannot use this
configuration.

Switch-assisted Load Balancing with Fault Tolerance (SLB)
Transmit packets are load balanced between all team members by the teaming device driver.
Receive packets are load balanced by the switch. There is no Primary adapter.

Transmit Load Balancing with Fault Tolerance (TLB)
Transmit IP packets are load balanced between all team members by the teaming device
driver. Non-IP packets are transmitted by the Primary adapter. Receive packets are received
by the Primary adapter.

Transmit Load Balancing with Fault Tolerance and Preference Order
Transmit IP packets are load balanced between all team members by the teaming device
driver. Non-IP packets are transmitted by the Primary adapter. Receive packets are received
by the Primary adapter. The user can set the priority of each adapter, which is used to select
which adapter should be considered primary.

Network Fault Tolerance Only (NFT)
Traffic is not load balanced. The Primary adapter sends and receives packets. The Secondary
adapters are used for failover only.

Network Fault Tolerance Only (NFT) with Preference Order
Traffic is not load balanced. The Primary adapter sends and receives packets. The Secondary
adapters are used for failover only. The user sets the priority of the adapters.

The Settings tab lets you view and configure advanced settings for the teamed adapters
(Figure 9-34). These settings are similar to the advanced settings for a single NIC, but they
apply to the team as a whole.

Figure 9-34: Team Settings

You can also manage the MAC address assigned to the team. To the network, the adapter
team will appear as a single adapter with one MAC address.
The VLAN tab lets you configure VLAN information for the adapter team (Figure 9-35).

Figure 9-35: VLAN Connections

As you saw earlier in the chapter for Ethernet and iSCSI, you can click Add to add VLAN name
and ID information.
Scenario: BCD Train

BCD Train is looking to upgrade its network infrastructure. It wants to be able to maximize
throughput and performance. The solution should be as flexible as possible.
Discuss network adapter solution options.

In this chapter, you learned:
Network adapter properties allow you to view and manage properties and the active
device driver.
HP Network Configuration Utility allows you to configure advanced properties for
multifunction adapters.
Advanced functionality implemented with multifunction adapters includes TCP
offload engine (TOE), Receive-side scaling (RSS), Remote direct memory access
(RDMA), iSCSI support and network adapter teaming.

Review Questions
1. What utility can you use to configure adapter teaming for HP multifunction adapters?
2. How many MAC addresses do teamed adapters expose to the network?
3. Which feature can help improve server performance by allowing the network adapter to
take load off the CPU?
4. Which feature allows a server to be powered up from standby mode if a network packet is
received?
5. Which IP configuration setting differentiates the host part from the network part of the IP
address?
6. Why would you create a VLAN?

1. In network adapter properties, click _______ to make an adapter unavailable.
2. TCP offload engine (TOE) is a technology used to offload TCP/IP protocol stack
processing completely to the ___________.
3. _______ provides the ability to boot a computer from files stored on the network rather
than locally on the computer.
4. NIC teaming is also referred to as _______.
5. RSS is particularly valuable to servers handling heavy loads of _______-lived traffic


1. IP forwarding prevents traffic from being exposed unnecessarily to the outside network.
2. When it is implemented on a multifunction adapter, iSCSI cannot be associated with a
specific VLAN.
3. Wake on LAN is implemented on multifunction adapters only.
4. You can assign multiple IP addresses to a single NIC.
5. RDMA allows a network adapter to directly access a hard disk drive without operating system
involvement.

Essay question
1. What is a VLAN and why would you create one?
2. How can you assign a different MAC address to a NIC?

Scenario question
Scenario: Stay and Sleep
Stay and Sleep's web server has four NICs, plus the iLO NIC. The web server accesses a
database server on the internal network. Customers access the web server through an
Internet-accessible IP address. Eighty percent of the server traffic is web traffic.

1. Explain how you can maximize performance. Explain why.


Chapter 10: Storage Technologies

Storage is a critical component of all IT environments. Storage and storage design serve
central roles in the performance, resiliency, and scalability of all IT solutions. Understanding the
fundamentals of storage technologies will help you design, plan, and deliver storage solutions
that meet business requirements.
In this chapter, we will explore the fundamentals of storage technologies and learn how to
deploy them appropriately.


In this chapter, you will learn how to:

Compare and contrast the performance, reliability, and compatibility of ATA and
SCSI technologies.
Describe and contrast DAS, NAS, and SAN storage implementations and their
implications on customer needs.
Explain storage configuration and redundancy options and their implications on
customer needs.
Identify and describe storage adapters.
Identify and describe hard disk drive offerings.
Describe S.M.A.R.T.

Storage Fundamentals
The term storage is typically used to refer to persistent storage, also sometimes referred to as
non-volatile storage. Persistent and non-volatile both mean that storage does not lose its
contents when the power is turned off. You can remove the power to a disk drive for years, and
when you turn it on again, the disk drive will still retain its data.
The most common type of storage in use is the electromechanical disk drive. It is common for
the term to be shortened to disk drive, hard drive, or hard disk drive (HDD). Throughout this
chapter we will use the term disk drive to refer to electromechanical hard disk drives.
Businesses, from the smallest local law firms to the largest banks and social media companies,
use disk drives as their primary means of storing data. Large organizations usually have
thousands of disk drives in their data centers.
Besides the disk drive, other types of storage media are in common use in business IT.
These other types of media include but are not limited to:
Magnetic tape
Optical media such as CD and DVD media
Solid State Drives (SSDs)
In general, tape is used for long-term storage of backup data and archived data. Tape usually
offers very high capacity at a good price point. Tape is well suited for sequential access to data
but not random access. Accessing data in random locations on tape (random access) requires
the tape drive to perform multiple fast-forward and rewind operations.
Optical media used to be quite common for storing read-only data, for example, data that had to
be kept for long periods of time in a tamper-proof format. This kind of tamper-proof requirement
is often referred to as Write Once Read Many (WORM). However, disk drives and tape are still
used far more often than optical media.
By far, the most common form of storage is the disk drive.

Hard disk drives

The disk drive is more than 50 years old, and the basic design of drives we use today is

fundamentally very similar to the original IBM 350 RAMAC disk drive that first came into use
over 50 years ago.
The IBM 350 RAMAC drive that shipped in 1956 had 50 x 24-inch platters, had to be
lifted by a forklift truck, and had a capacity of approximately 4MB. Compare that to some of
today's drives that are 3.5 inches and store 3TB (~3 million MB!).
As shown in Figure 10-1, a disk drive has multiple magnetically coated platters (think of a
platter as a thin disk, similar to a CD/DVD) that are used as recording surfaces. Data is read
from and written to the platter surfaces by the read/write heads that charge tiny areas of the
platter surface with positive or negative charges (positive charge = 1, negative charge = 0).

Figure 10-1: Hard Disk Drive with Cover Removed

Figure 10-2 shows the major components of a disk drive. The spindle rotates the platters
clockwise. Data is written as sectors that run along a circular track. A sector is typically
composed of 512 bytes. The actuator motor is responsible for moving the actuator arms, which
position the read/write heads.

Figure 10-2: Parts of a Disk Drive
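A drive's raw capacity follows directly from this geometry: cylinders x heads x sectors per track x bytes per sector. The sketch below uses the classic ATA CHS limit geometry as an illustrative example, not the geometry of any particular drive:

```python
# Classic CHS capacity arithmetic (geometry values are illustrative):
cylinders, heads, sectors_per_track, sector_bytes = 16383, 16, 63, 512
capacity = cylinders * heads * sectors_per_track * sector_bytes
print(f"{capacity / 1024**3:.2f} GiB")  # ~7.87 GiB - the old ATA CHS ceiling
```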

In modern disk drives, the distance between the platters and the read/write heads is
less than the width of a smoke particle or the height of the oil left by a fingerprint.

As shown in Figure 10-3, the read/write head of a disk drive hovers extremely close to the
platter but does not touch it. If the read/write head ever does make contact with the platter, this
is known as a head crash. If a head crash occurs, the drive platter will almost certainly be
damaged and data will be lost. Disk drives that have suffered a head crash typically cannot be
repaired.
Figure 10-3: Comparison between Distance between Head and Platter and Other Small Objects

The disk drive, optical drives, and cooling fans are the only mechanical components in most
modern servers. All other components are silicon based. As a result, the disk drive is usually by
far the slowest component in most computers and servers. In some scenarios, the disk drive
can be over 1,000 times slower than DRAM memory.
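The "1,000 times slower" figure is easy to sanity-check with order-of-magnitude latencies; the values below are typical round numbers chosen for illustration, not measurements:

```python
# Typical access latencies (rounded, illustrative):
dram_ns = 100       # DRAM access: ~100 nanoseconds
hdd_ms = 10         # HDD random access (seek + rotation): ~10 milliseconds

ratio = (hdd_ms * 1_000_000) / dram_ns
print(f"HDD random access is ~{ratio:,.0f}x slower than DRAM")
```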
Most of the advancements made in disk drive technology over the last 50 years have increased
the capacity of drives, not the performance. As a result, today's disk drives can store massive
amounts of data compared to the disk drive of 50 years ago, but they are not much faster.

Solid state drives (SSDs)

An SSD stores data on flash memory chips, as shown in Figure 10-4. Unlike a mechanical disk
drive, an SSD has no mechanical moving parts. Therefore, it has some advantages over
mechanical drives. These include:
Better performance
Quieter operation
Resilience to physical shock
Lower power consumption
Lower heat generation

Figure 10-4: 32 GB SSD with Cover Removed

However, SSDs also have disadvantages:

Smaller capacities
More expensive to purchase
Length of lifespan limitations (finite number of read/write cycles)
Dollars per terabyte ($/TB), or sometimes dollars per gigabyte ($/GB), is a way of
measuring the acquisition cost of a disk drive. A simple example would be a 100 GB drive that
costs $100. This drive would have a $1 per gigabyte acquisition cost. If that same 100 GB
drive cost $50, it would have a $/GB cost of 50 cents per gigabyte.
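The note's arithmetic can be captured in a one-line helper:

```python
def cost_per_gb(price_dollars: float, capacity_gb: float) -> float:
    """Acquisition cost in dollars per gigabyte."""
    return price_dollars / capacity_gb

print(cost_per_gb(100, 100))   # 1.0 -> $1.00/GB
print(cost_per_gb(50, 100))    # 0.5 -> $0.50/GB
print(cost_per_gb(150, 3000))  # 0.05 -> e.g., a 3 TB drive at a hypothetical $150
```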
Scenario: Stay and Sleep
Stay and Sleep has decided to upgrade the storage capacity for its reservation database.
They currently have a server with two internal hard disk drives. One disk drive has the
operating system and Microsoft SQL Server files. The other drive stores the database.
What questions should you ask to help the customer decide between HDD and SSD drives?

Disk Drive Characteristics

When choosing storage media, you need to consider various characteristics of a drive that
affect how well it meets your requirements. These characteristics include:
Connection interface
Communication protocol
Form factor
An interface is the physical plug that allows disk drives to be plugged into a server. Disk drives
are often plugged in to the backplanes or midplanes on servers. Attaching a disk drive to a
server in this way is sometimes referred to as mating.
backplane
A circuit board that contains sockets used to connect components and that distributes power and sometimes communication
signals.
midplane
A circuit board that contains sockets used to connect components and that distributes communication signals.

Parallel versus serial communication

Historically, disk drives implemented what are referred to as parallel interfaces and parallel
communication processes. With parallel communication, several streams of data (usually bits)
are sent across multiple channels simultaneously. Figure 10-5 illustrates parallel
communication across an 8-bit wide bus.

Figure 10-5: Parallel Communication

Parallel communication worked fine when disk drives were installed only using relatively short
cables inside of servers and computers that had low clock speeds. However, as clock speeds
and cable lengths increased, the parallel interface showed its weakness. In these
circumstances, the parallel interface could not keep transmissions synchronized and, as a
result, it is now very rare to see disk drives with parallel interfaces.
Many modern data centers include disk drives located outside the server chassis,
requiring even longer cables.

Figure 10-6: Out-of-sync Parallel Communication

Figure 10-6 shows the same 8-bit wide bus, but with two sets of transmissions (Seq1 = D0 to D7,
and Seq2 = D8 to D15). As you can see, the two streams are starting to overlap each other,
making it difficult to determine which transmission, Seq1 or Seq2, the bits belong to.
Parallel communication has been superseded by serial communication in almost all disk drives.
Serial methods of communication (Figure 10-7) send only a single bit at a time in series
(sequentially). Serial communication methods do not suffer from synchronization difficulties,
and they are far simpler to implement and far better suited to high-speed and long-distance
transmission.

Figure 10-7: Serial Communication

Common examples of serial interfaces include Serial Advanced Technology Attachment
(SATA) and Serial Attached SCSI (SAS). Traditional rotating magnetic drives, as well as more
modern SSD drives, are available in both SATA and SAS flavors. However, very few SSD
drives have parallel interfaces.
SCSI devices cannot connect to and communicate with devices that use the ATA
protocol and vice versa.
We will now take a look at each of these types of interfaces.

Serial Advanced Technology Attachment (SATA) is both a protocol and a physical interface.
A protocol is a set of commands and communication rules.

SATA is common in personal computers and laptops. SATA replaced the older Parallel ATA
(PATA) standard that used to be common in home computers. PATA was also sometimes called
Integrated Drive Electronics (IDE).
While SATA started in home computers and laptops, it has also become widely used in servers
and storage. SATA drives are synonymous with cheap, low-speed, high-capacity drives.
There are different versions of SATA. Table 10-1 shows the specifics of each.
Table 10-1: SATA Versions

Version                     Transfer speed    Maximum cable length

PATA (for comparison)       133.5 MBps        18 inches (0.46 meters)

SATA 1.0 (1.5 Gbps)         150 MBps          39.37 inches (1 meter)

SATA 2.0 (3 Gbps)           300 MBps          39.37 inches (1 meter)

SATA 3.0 (6 Gbps)           600 MBps          39.37 inches (1 meter)

eSATA                       300 MBps          78.74 inches (2 meters)

Serial Attached SCSI (SAS) is also commonly used to refer to both a protocol and a physical
interface. SAS is based on the SCSI (Small Computer Systems Interface) protocol, which has
been used for many years. SAS is more suited to high-performance, mission-critical workloads.
It has a richer command set than SATA, and it is typically used in more expensive higher-speed
drives. SAS and other SCSI-based products are generally accepted as being more reliable
than SATA.

Due to the wide use of SCSI in IT, especially in storage, we will now take a closer look at some
of the major features and characteristics of SCSI.
SCSI is pronounced "skuz-ee."
The term SCSI can be used to refer to both the SCSI protocol and the physical SCSI interfaces.
In a SCSI relationship, there are two components: the initiator and the target. A common
example of a SCSI initiator is a server, and a common SCSI target is a disk drive. (Technically
speaking, the SCSI initiator is a SCSI device inside of the server.)
A SCSI target can be many things including disk arrays, tape drives, and even printers.
Most of the time, the initiator sends commands to the target, and the target executes those
commands and returns the results. An example of these commands are Read and Write
commands. A SCSI initiator issues Read commands to a SCSI target, and the SCSI target
reads the relevant data and returns it to the SCSI initiator.
Figure 10-8 shows a SCSI initiator issuing a command to a SCSI target. In this example, the

SCSI initiator is a server, and the SCSI target is an HP P2000 G3 MSA disk array. However,
the SCSI target could have been a disk drive installed inside of the server.

Figure 10-8: SCSI Communication

SCSI addressing
For SCSI initiators and targets to talk to each other, they both need addresses. SCSI is a
relatively old (some would say mature) technology, with its roots in the early days of
computing. As a result, SCSI initiators and SCSI targets (referred to as simply initiators and
targets when used in context) communicate over a bus, called the SCSI bus. Historically that
SCSI bus was a ribbon cable inside of a server, but SCSI communication can be implemented
over many different cable types, and even across networks. A server can have multiple SCSI
buses, over which it communicates with targets.
Each device on the bus is assigned a unique SCSI ID. Each target can have one or more
Logical Units normally referred to as a LUN (Logical Unit Number). Commands are normally
targeted at the LUN.
The three address components (bus, target, and LUN) can be used to identify any device on a
SCSI bus. For example, the address 0:2:6 identifies bus 0, target 2, LUN 6:

Figure 10-9: Naming Example

Figure 10-9 shows an example with three targets on Bus 0; in other words, the server is
connected to three SCSI targets. But what about the third component of the SCSI address,
the LUN? Suppose that
Target 0 is a P2000 G3 MSA disk array that has 12 disk drives. The drives are configured as six
RAID 1 sets numbered from 0 to 5. If the server needs to access LUN 5, it would use the
address 0:0:5 (bus 0, target 0, LUN 5).
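The bus:target:LUN addressing scheme can be sketched with a pair of small Python helpers. These are hypothetical illustrations, not part of any real SCSI stack:

```python
def parse_scsi_address(address):
    """Split a 'bus:target:lun' string into its three numeric components."""
    bus, target, lun = (int(part) for part in address.split(":"))
    return bus, target, lun

def format_scsi_address(bus, target, lun):
    """Build the conventional bus:target:LUN address string."""
    return "{}:{}:{}".format(bus, target, lun)

# The example from the text: bus 0, target 0, LUN 5 on the P2000 array.
assert parse_scsi_address("0:0:5") == (0, 0, 5)
assert format_scsi_address(0, 2, 6) == "0:2:6"
```
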

SCSI standards
The older SCSI standards support three electrical digital signaling systems:
Single Ended (SE)
High Voltage Differential (HVD)
Low Voltage Differential (LVD)
Although these are now considered legacy standards and are becoming rare, you should have
a general idea of how they function.
SE was the first signaling method used by SCSI. It suffered from noise (electrical interference
and signal loss), and it did not support very long cables. SE was essentially superseded by
HVD, which maintained signal integrity better than SE and supported longer cables. LVD, in
turn, surpassed HVD because it allowed faster transfer rates at lower power consumption.
SE and HVD are very rare these days.
You should not plug an LVD SCSI drive into an HVD bus.
Figure 10-10 shows one type of SCSI cable.

Figure 10-10: SCSI Cable

Table 10-2 lists the parallel SCSI standards and some of their more important and meaningful characteristics.
Table 10-2: Parallel SCSI Standards

Table 10-3 lists some of the serial implementations of SCSI that are used more commonly in
modern data centers.
Table 10-3: Serial SCSI Standards

Gbps
Gigabits per second. Standard notation for expressing networking speeds.
Fibre Channel Arbitrated Loop (FC-AL)
A loop topology, now uncommon, that connects SCSI initiators and targets over Fibre Channel networks.
Fibre Channel
A high-speed, low-latency network technology commonly used for sharing storage. Fibre Channel networks are normally
used only for storage traffic.
Serial Storage Architecture (SSA)
A serial transport protocol that supports bidirectional cabling and performs automatic reconfiguration in response to cable failures.

Drive size
More often than not, when referring to a drive size, you will be referring to the drive's capacity.
Capacity refers to how much data the drive can hold and is generally expressed in Gigabytes
(GB) or Terabytes (TB).
However, drive size can also refer to the physical dimensions of a drive, or more correctly the
diameter of its platter.
When referring to how much data a drive can hold, you should use the term capacity
or drive capacity. When referring to the physical dimension of a drive, you should use the term
form factor. (For example, a 2.5-inch form factor drive with 600 GB of capacity)
The two most common drive form factors in today's market are:
Small Form Factor (SFF) - 2.5-inch
Large Form Factor (LFF) - 3.5-inch
Both of the above form factors are available in a wide range of capacities. However, as might
be expected, 3.5-inch form factor drives can have larger capacities than 2.5-inch form factor
drives. More surface area = more space to store bits.

The smallest capacity drive normally seen in today's market is the 146 GB drive, and the
largest stores 3 TB and above.
The larger capacity drives are often SATA drives (big and slow), and the smaller drives are
usually SAS or Fibre Channel (FC), with some overlap of drive types in the middle sizes.
Traditional rotating magnetic drives and more modern SSD drives are available in both 3.5-inch
and 2.5-inch form factors.

Drive speed
Another important characteristic of a drive is its speed, more correctly referred to as its rotational
speed or spindle speed. The rotational speed of a drive is usually expressed as revolutions per
minute (RPM).
Common rotational velocities are shown in Table 10-4.
Table 10-4: Common Rotational Velocities

Rotational velocity

Common drive types

7,200 RPM


10,000 RPM

SAS or Fibre Channel

15,000 RPM

SAS or Fibre Channel

When a drive has a higher RPM, its performance is typically higher and so is its cost. A drive
with a higher RPM usually costs more than a drive with a lower RPM for two reasons: its
initial acquisition cost is higher, and its operating cost is higher because the faster spin
consumes more power and generates more heat.
Although drives spin exceptionally fast, the seek time is noticeable to a computer, which
works at speeds far faster than a human being.
Because SSDs do not have a rotating platter and never have to wait for the drive to spin to
the correct location, they perform exceptionally well on random workloads (especially
random reads). They also cost less to operate because no power is required to spin the platters.
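The impact of spindle speed can be made concrete with a small calculation: a platter spinning at R RPM completes one revolution in 60,000/R milliseconds, and on average the head must wait half a revolution for the target sector to arrive. A quick sketch (rotational latency only; seek time is extra):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half of one full revolution, in milliseconds."""
    full_revolution_ms = 60_000 / rpm  # 60,000 ms per minute / revolutions per minute
    return full_revolution_ms / 2

# Faster spindles wait less for data to arrive under the head.
assert round(avg_rotational_latency_ms(7_200), 2) == 4.17   # ~4.17 ms
assert round(avg_rotational_latency_ms(15_000), 1) == 2.0   # 2.0 ms
```

This is one reason 15,000 RPM SAS drives command a premium over 7,200 RPM SATA drives for random workloads.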

Sequential versus random workloads

A sequential workload, or sequential access pattern, is one in which the data is accessed in the
same order as it was stored. A non-technical example would be reading an alphabetized class
roster in alphabetical order. This is usually the easiest way to read such a list. The same is true
for reading and writing to storage, especially when that storage is magnetic disk.
Consider the disk drive diagram in Figure 10-11. The track is made up of continuous sectors.
We will assume that our track has sectors numbered 0-49. A sequential read or write operation
would read or write to those sectors in order, starting at sector 0 and finishing at sector 49. With
sequential access patterns like this, the drive can read or write the entire track in a single
revolution of the disk (with the disk only spinning all the way around once). Also, the read/write
heads will not have to move.

Figure 10-11: Sequential Access

We will now look at a random access pattern. This time our workload needs to read from the
sectors in the following order:
40, 48, 12, 18, 3, 42, 47, 44, 45, 31.
Assuming a starting position of sector 0, the above random access pattern would require six
revolutions of the disk, as illustrated in Figure 10-12. Therefore, the random workload would
take six times as long as the sequential workload, yet only read one fifth of the amount of data.

Figure 10-12: Random Access
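A simplified model can approximate the rotational cost of that random pattern: the head can only read sectors as they rotate past, so moving from sector a to sector b costs (b - a) mod 50 sectors of rotation. The sketch below ignores head settling and per-sector read timing, so it gives a lower-bound estimate rather than the exact count illustrated in the figure, but the contrast with the sequential case is the same:

```python
SECTORS_PER_TRACK = 50

def rotation_cost(pattern, start=0):
    """Total sectors of rotation needed to visit the pattern in order."""
    position, total = start, 0
    for sector in pattern:
        total += (sector - position) % SECTORS_PER_TRACK
        position = sector
    return total

random_pattern = [40, 48, 12, 18, 3, 42, 47, 44, 45, 31]
sequential_pattern = list(range(50))

random_cost = rotation_cost(random_pattern)          # 231 sectors of rotation
sequential_cost = rotation_cost(sequential_pattern)  # 49 sectors, under one turn
assert random_cost == 231 and sequential_cost == 49
```

Even in this optimistic model, the random pattern costs several full revolutions to read one fifth of the data that the sequential pattern reads in less than one.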

Things get even more complicated if the random workload starts skipping between tracks,
because this requires the read/write head to move or seek the data.

seek
The process of moving the read/write head to a different track.

seek time
The amount of time required to move the read/write head to a different track.

As a result, mechanical disk drives struggle with random workloads, even though they perform
reasonably well with sequential workloads.
Interestingly, there are more sectors per track on the outer tracks of a platter/disk than there are
on the inner tracks. This discrepancy is due to the fact that there is less platter space per track
near the center than there is on the outer edges. As a consequence, for sequential workloads,
data on the outer tracks of a disk will perform faster than data on the inner tracks. We will now
assume that the outer tracks of a disk have 100 sectors whereas the inner tracks only have 50.
This means that for every rotation of the disk 100 sectors pass under the read/write head on
outer tracks but only 50 on inner tracks.
However, it is usually not possible to control where data is placed on a disk drive.

Input/Output Operations per Second (IOPS) is a disk drive performance measurement that is
normally used to express a disk drive's performance under random workloads.
Examples of random workloads include:
Database table transactions
Message Queuing systems
IOPS is pronounced "eye-ops."

Megabytes per second (MBps) is another disk drive performance measurement, normally used
to describe a disk drive's performance under sequential workloads.
Examples of sequential workloads include:
Streaming media
Database logging
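The two metrics are related through the I/O size: throughput in MBps is simply IOPS multiplied by the transfer size of each operation. A quick illustrative calculation (the figures below are hypothetical, not vendor specifications):

```python
def mbps_from_iops(iops, io_size_kb):
    """Throughput in MBps implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1024  # 1024 KB per MB

# Small random I/O: a high IOPS figure can still mean modest throughput.
assert mbps_from_iops(10_000, 4) == 39.0625

# Large sequential I/O: modest IOPS yields high throughput.
assert mbps_from_iops(500, 1024) == 500.0
```

This is why IOPS is quoted for random workloads and MBps for sequential ones: each metric highlights what its workload actually stresses.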


DAS, SAN, and NAS all refer to how a computer system connects to its storage.

DAS stands for Direct Attached Storage. As the name suggests, storage is usually directly
connected to the computer or server.
DAS storage is either plugged directly into a server through a bus interface on the backplane of
the server or connected through an external cable that is an extension of the server's internal bus.
Examples include:

Disk drives that are plugged directly into SATA plug/port on a server backplane.
Disk drives that are installed in an external disk drive cage that is connected to the
server via an external cable. These cables are used to extend the server's
motherboard and buses outside of the physical chassis of the server.
The diagram below shows the front bezel of an HP ProLiant DL360 Gen8 server with eight
2.5-inch form factor disk drives. You can also see the burgundy colored release tabs on each
of the eight drives, indicating that these are hot-swappable components.

Figure 10-13: DL360 Gen8 Server with Eight SFF Drives

Disk drives installed in the front of servers like those pictured above are examples of DAS. Only
this server can access these disks because they are directly connected to the backplane of the
server.
Hot Swappable
Hot-pluggable. A drive that can be added or removed without powering down the computer.

A characteristic of DAS storage is that it is not shared, meaning that the disk drives in a DAS
configuration are only accessible from a single computer.
Traditionally, DAS configurations were common in smaller businesses that could not afford a
SAN or a NAS system. However, DAS systems are becoming popular again for certain
application-specific workloads because all of the performance of the storage in a DAS system
is dedicated to the application and not shared with other applications that could potentially
impact performance. One example is the HP E5000 Messaging System for Microsoft Exchange,
shown in Figure 10-14. It has four DAS drives, which are dedicated to storing Microsoft
Exchange data.

Figure 10-14: E5000 Messaging System

A Storage Area Network (SAN) provides block-level sharing of storage between multiple
computer systems.

To the technical purist, the term SAN refers only to the network components and not the
disk arrays. However, it is common practice to refer to either or both the storage networking and
the disk array components as the SAN.
block-level storage
A volume that provides raw storage capacity. Various operating systems can implement their file system on top of block-level
storage. Data is written to and read from the storage device by referencing a block address.

In a SAN, a large number of disk drives are pooled together in storage arrays and are
simultaneously accessed by multiple computer systems over a dedicated network. Figure 10-15 shows an oversimplified example of how multiple servers access shared SAN storage. We
will discuss shared SAN storage in more detail a little later.

Figure 10-15: Multiple Servers Accessing a SAN

The colors show that certain servers have access to certain disks and, in some cases, only
certain portions of certain disks. It is possible for an entire disk to be dedicated to servers or
only parts of disks to be dedicated to servers.
Although the storage is accessed over a dedicated network and shared by multiple computers,
it is made to appear as if it is locally attached so that the operating system can access it the
same way that it accesses DAS. As far as operating systems and applications are concerned,
the storage from a SAN looks just the same as storage plugged directly into the backplane of
the server. This makes it relatively simple to implement SANs in existing environments.
It is possible to boot servers from SAN disks in a configuration known as Boot From SAN
(BFS). In a BFS configuration, a Windows C: drive is really a LUN from the SAN that looks
exactly the same as if it were a physical disk installed in the chassis of the server.
There are three major types of SANs:
Fibre Channel (FC)
Fibre Channel over Ethernet (FCoE)
Fibre Channel Protocol (FCP)
A protocol used to transport SCSI commands over FC networks.
Internet SCSI (iSCSI)

Pronounced eye-scuz-ee. An IP-based storage network used for sharing storage. iSCSI carries SCSI traffic over traditional
TCP/IP networks, making it very flexible and relatively cheap to deploy.

FC SANs are generally thought to be higher performance and higher cost than iSCSI SANs.
FC SANs are common in large established businesses, and they are commonly used for
mission-critical applications. iSCSI SANs are usually cheaper and lower performance than FC,
although manufacturers of iSCSI products will contest the performance claim.
Fibre Channel over Ethernet (FCoE) is an emerging technology that allows Fibre Channel
Protocol (FCP) to run over Ethernet networks. FCoE also has limitations that are similar to
those of FC.
One of the potential negatives of using a dedicated network to access shared storage is cost. A
separate network usually requires its own cables, switches, network cards (called Host Bus
Adapters), rack space, power, cooling, and administration team.
FCoE allows FC traffic to run over dedicated logical lanes within an existing Ethernet network,
allowing TCP/IP and FC traffic to share the same physical network infrastructure.
However, FCoE is still a relatively immature technology. As a result, most organizations are
taking a cautious approach to deploying it.
Examples of HP SAN storage arrays include:
HP 3PAR Storage Systems (Fibre Channel)
HP P9500 and HP XP Disk Arrays (Fibre Channel)
HP EVA/P6000 Disk Arrays (Fibre Channel)
HP LeftHand/P4000 Disk Arrays (iSCSI)
HP MSA/P2000 Disk Arrays (Fibre Channel or iSCSI)
The front of an HP StorageWorks P2000 G3 Fibre Channel LFF Smart Array is shown in Figure 10-16.

Figure 10-16: HP StorageWorks P2000 G3 Fibre Channel LFF Smart Array

The rear of the P2000 is shown in Figure 10-17. As you can see, there are two controllers: one
on top of the other.

Figure 10-17: Rear of P2000 - Two Controllers

In addition to sharing storage resources, SANs can also provide advanced data services, such as:
Remote replication
Thin Provisioning
Advanced data resiliency (RAID)
Removable media backup
RAID has already been discussed, but we will review some aspects of RAID later in this
chapter. Archival and backup are discussed later in the course. We will now take a quick look
at the other services mentioned above.

Replication is the act of sharing and synchronizing the same data across two or more storage
arrays or servers. With replication, data is transmitted to the replica on a scheduled basis.
Typically only changes to the data are transmitted across the network. Figure 10-18 shows
SAN replication. New York is the primary site, and most data (all except that allocated to the
yellow server) is replicated to Boston. In the event of a disaster or other failure of services in
New York, all servers with replicated storage can operate from Boston.

Figure 10-18: Replication

Cloning is the act of creating a mirror image of a LUN on a second LUN. Capacity is used on
both storage arrays.

A snapshot is a copy that does not use any disk space when it is first created. As data is
changed, only the changes are made to the snapshot, a process known as copy-on-write.
Snapshots allow a restore to a particular point in time, but only if the original LUN is available.
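The copy-on-write behavior described above can be sketched with a toy block store: the snapshot holds nothing when it is created, and a block's original contents are preserved only the first time that block is overwritten. This is a simplified model for illustration, not a real array's implementation:

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block address -> data on the live LUN
        self.snapshot = None        # preserved original blocks

    def take_snapshot(self):
        self.snapshot = {}          # empty: uses no space initially

    def write(self, address, data):
        # Copy-on-write: preserve the old block the first time it changes.
        if self.snapshot is not None and address not in self.snapshot:
            self.snapshot[address] = self.blocks.get(address)
        self.blocks[address] = data

    def read_snapshot(self, address):
        # Unchanged blocks are read from the live volume, which is why a
        # snapshot restore requires the original LUN to still be available.
        if self.snapshot is not None and address in self.snapshot:
            return self.snapshot[address]
        return self.blocks.get(address)

vol = Volume({0: "a", 1: "b"})
vol.take_snapshot()
vol.write(0, "A")                   # only now is the old "a" preserved
assert vol.read_snapshot(0) == "a" and vol.read_snapshot(1) == "b"
assert len(vol.snapshot) == 1       # space consumed only for changed blocks
```
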

Thin provisioning
When thin provisioning is used, only the minimum required storage is allocated to a specific
server. Thin provisioning improves disk efficiency because a higher percentage of storage
capacity is actually used instead of sitting vacant.

Auto-tiering is another way to improve disk efficiency. Software automatically moves data that is
not frequently accessed to slower, less expensive storage.

Network Attached Storage (NAS) is another way to share storage between multiple computer
systems. NAS is conceptually very similar to SAN. In both NAS and SAN, a large number of
disk drives are pooled together in storage arrays and are accessed by multiple computer
systems. NAS arrays also provide advanced features similar to those provided by SAN storage
arrays, including:
Multiple drive types
Advanced data resiliency (RAID)
NAS has one major difference from SAN. Unlike SAN, NAS does not usually have a dedicated
storage network. Instead, computer systems usually access NAS storage over the shared
corporate network, the LAN or WAN, using TCP/IP.
NAS and SAN systems also have different protocols. NAS predominantly uses file sharing
protocols such as Network File System (NFS) and Common Internet File System (CIFS).
NFS is very common in UNIX, Linux, and VMware environments.
CIFS (pronounced "sifs") is common in Microsoft Windows environments. CIFS is more
properly referred to as Server Message Block (SMB). It was developed by the Microsoft
Corporation, and it is included in all versions of Windows.
The HP X1400 G2 4TB SATA Network Storage System, shown in Figure 10-19, is a NAS array
and supports both NFS and CIFS. It also includes iSCSI support. The HP X1500 G2 Network
Storage system, shown in Figure 10-20, has a tower form factor. The X1400 and X1500 are
models in the X1000 G2 Network Storage Systems family, which runs
Windows Storage Server 2008 R2 to provide file sharing. You can learn more about these
storage systems in the QuickSpec at

Figure 10-19: HP X1400 G2 4TB SATA NAS

Figure 10-20: HP X1500 G2 Network Storage System

Shared storage caveat

Technologies like SAN and NAS are commonly used to provide multiple servers with shared
access to a pool of disk drives. Still, we should note that even though each server has shared
access to the disk drives, each server does not necessarily access the same data on those
disk drives, as illustrated in Figure 10-21.

Figure 10-21: Access to Shared Storage

Providing multiple computer systems with shared access to the same data on shared disk
drives requires special technology called clustering technology.
Clustering technology is required to ensure that two computers do not try to update the same
file at the same time and therefore cause corruption or data loss. Imagine if Server A from
Figure 10-21 deletes a file that is important to an application on Server B. When Server B tries
to access the data, it will not be there, and its absence could cause the server or application to
fail.
Optimizing Disk Availability and Performance

When you implement a storage solution, you must make decisions to provide adequate data
availability and performance. In this section, we begin by reviewing RAID and examining how a
RAID configuration impacts both resilience and performance.

Next we will look at how caching can help improve performance. We will also examine ongoing
monitoring and maintenance.

Earlier in the course, you were introduced to RAID. RAID technology increases both
performance and resiliency of storage. As a review, common RAID levels are described in
Table 10-5.
Table 10-5: RAID Levels

RAID configurations that calculate parity to provide resiliency, such as RAID 5 and RAID 6,
have a performance overhead known as the write penalty, which means that RAID 5 and RAID
6 can suffer slower write performance than non-parity resiliency, such as mirroring. However,
this is only true for small writes. Consider the example in Figure 10-22. If the server writes
only to D2 and not to D1 and D3, in order to calculate the new parity for that top row, the RAID
controller must read the old D2, read the old P1 parity, calculate what the new parity will be,
and then write the new D2 and the new P1.

Figure 10-22: Write Penalty Example

However, if a server performs larger write operations in which it writes to the entire row (D1, D2,
and D3 in this example), there is no requirement for reading the blocks that are not being
updated. This approach to write operations is known as a Full Stripe Write (FSW). FSW avoids
the write penalty of parity-based RAID configurations. RAID 6 has a higher write penalty than
RAID 5, meaning that it suffers more of a performance drop than RAID 5 when dealing with
small writes.
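The arithmetic behind the penalty can be sketched with XOR parity: a small write XORs the old data out of the parity and the new data in, at a cost of two reads plus two writes, while a Full Stripe Write computes parity from the new data alone and performs only writes. A minimal sketch using single-byte blocks with hypothetical values:

```python
def rmw_parity(old_data, new_data, old_parity):
    """Small-write parity update: XOR the old data out and the new data in.
    Costs 2 reads (old data, old parity) + 2 writes (new data, new parity)."""
    return old_parity ^ old_data ^ new_data

# A three-data-disk-plus-parity stripe, as in the D1/D2/D3 example above.
d1, d2, d3 = 0b0001, 0b0010, 0b0100
p1 = d1 ^ d2 ^ d3                    # parity stored when the stripe was written

new_d2 = 0b1111
new_p1 = rmw_parity(d2, new_d2, p1)  # small write: 4 I/Os to update 1 block
assert new_p1 == d1 ^ new_d2 ^ d3    # matches a full recomputation

# Full Stripe Write: parity from the new data alone -- 4 writes, 0 reads,
# and 3 data blocks updated, so far fewer I/Os per block written.
fsw_p1 = 0b1000 ^ 0b0110 ^ 0b0011
assert fsw_p1 == 0b1101
```

The small write spends four I/Os to change one block; the FSW spends four I/Os to change three, which is why controllers try to coalesce writes into full stripes.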

Stripe size
On a RAID configuration that performs striping, the space of each physical drive is broken down
into stripes that can be as small as one sector (512 bytes) or as large as several megabytes.
Stripe size is often referred to as chunk size, with each physical disk being broken down into
chunks. Data writes are then distributed evenly across the stripes from all physical disks in the
RAID set.
I/O intensive environments (which are generally environments with a large number of small
random reads and writes) usually benefit from a large stripe size, allowing a single file/record to
fit entirely into one single stripe/chunk, and therefore a single spindle. This allows each
physical disk to be working on different I/O requests. This technique is called overlapping I/O,
and it is supported by Windows and most other operating systems.
In this context, the term spindle refers to an individual hard disk drive.

Low I/O environments that access large records often benefit from smaller stripe sizes so that a
single file/record is spanned across all physical drives in the array. This allows larger records to
be accessed much more quickly by using multiple spindles to read or write a single file. This
configuration is not conducive to overlapping I/O because all spindles in the array will be used
to access a single record. Candidates for small stripe arrays are applications that access large
audio or video files or scientific imaging applications that access large files.
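The way stripe (chunk) size determines which spindle services an I/O can be sketched as a simple address mapping. The round-robin layout below is a hypothetical illustration of the general idea, not any specific controller's scheme:

```python
def locate_block(byte_offset, chunk_size, disks):
    """Map a logical byte offset to (disk index, chunk number on that disk),
    assuming chunks are laid out round-robin across the stripe set."""
    chunk_index = byte_offset // chunk_size      # which chunk overall
    return chunk_index % disks, chunk_index // disks

# 64 KB chunks on a 4-disk stripe set.
CHUNK, DISKS = 64 * 1024, 4

# A 16 KB record at offset 0 fits in one chunk -> one spindle does the I/O,
# leaving the other spindles free for overlapping I/O.
assert locate_block(0, CHUNK, DISKS) == locate_block(16 * 1024 - 1, CHUNK, DISKS)

# With tiny 4 KB chunks, the same record spans several spindles, so all of
# them cooperate on (and are tied up by) a single large transfer.
assert locate_block(0, 4 * 1024, DISKS) != locate_block(16 * 1024 - 1, 4 * 1024, DISKS)
```
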

RAID controllers
RAID controllers are generally separated into two categories:

Internal RAID controllers

External RAID controllers
You learned about internal RAID controllers earlier in the course.
The most popular kinds of external RAID controllers tend to be SAN Storage Arrays such as:
HP 3PAR Storage Systems (Fibre Channel)
HP P9500 and HP XP Disk Arrays (Fibre Channel)
HP EVA/P6000 Disk Arrays (Fibre Channel)
HP LeftHand/P4000 Disk Arrays (iSCSI)
HP MSA/P2000 Disk Arrays (Fibre Channel or iSCSI)
These external RAID controllers provide hardware RAID. The RAID functions are performed by
the storage arrays and not by software installed on top of the operating system running on the
server.

Hardware versus software RAID

Both internal and external controllers perform hardware RAID. Hardware RAID requires
dedicated hardware that performs all of the RAID calculations and functions.
The alternative to hardware RAID is software RAID. In a software RAID setup, RAID functions
and calculations are performed by software running on the CPU(s) in the server. Software RAID
has a performance impact on the server because the server's CPU(s) must spend time
performing RAID functions when they could otherwise be performing application-based
functions. For this reason, software RAID is not commonly used.
Some operating systems natively support the ability to perform software RAID. There are also
third-party applications that perform software RAID. Table 10-6 compares software RAID with
Hardware RAID.
Table 10-6: Software and Hardware RAID Comparison

                    Software RAID        Hardware RAID

Cost                Less expensive       More expensive

Performance         Low performance      High performance

In Windows Server 2012, Microsoft introduced Storage Spaces. Storage Spaces allows
administrators to create storage pools from multiple disks. The pools can be configured to
support mirroring or parity for resilience. Storage Spaces is a form of software protection and
allows administrators to protect data without having to purchase RAID controllers. Storage
Spaces will not do away with RAID controllers. Each has its own place.


Many HP Smart Array RAID Controllers have a BBWC (Battery Backed Write-back Cache) or
FBWC (Flash Based Write-back Cache) or an option to add one. Both technologies can
improve the performance of RAID controllers.

caching
The practice of placing relatively small amounts of memory in front of disks so that frequently accessed data can be
accessed from cache memory rather than having to go to disk.

The concept is simple: you add a small amount of fast cache memory to a RAID controller and,
in the case of BBWC, protect it with a battery so that if the power goes out, the battery will keep
the data in the cache so that it is not lost. This cache memory performs an order of magnitude
faster than the spinning disks attached to the RAID controller and therefore improves the
performance of the RAID controller. Conceptually speaking, adding a BBWC is like adding
DRAM to the RAID controller in front of the disks, and it can give the impression that the disks
are operating at RAM-like speeds. Such configurations have their limits but are quite popular in
real-world deployments.
Flash Based Write-back Caches are becoming more and more popular and are replacing
BBWC. Flash memory is non-volatile (meaning that it does not lose its contents when the
power is turned off), so it does not need a battery. Most organizations are now choosing to
deploy FBWC.
Common sizes for BBWC and FBWC include 128 MB, 256 MB, 512 MB, and 1 GB, and these
sizes are getting larger all the time. Such upgrades and options increase both the cost and the
performance of RAID controllers.
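The benefit of controller cache can be reasoned about with the standard effective-access-time formula: even a modest hit rate at memory speed pulls the average latency well below raw disk latency. The figures below are illustrative assumptions, not measured values:

```python
def effective_access_time_ms(hit_rate, cache_ms, disk_ms):
    """Average latency when a fraction of requests is served from cache."""
    return hit_rate * cache_ms + (1 - hit_rate) * disk_ms

# Assume ~0.05 ms from controller cache vs ~8 ms from a spinning disk.
no_cache = effective_access_time_ms(0.0, 0.05, 8.0)
warm_cache = effective_access_time_ms(0.9, 0.05, 8.0)

assert no_cache == 8.0
assert round(warm_cache, 3) == 0.845   # nearly a tenfold improvement
```

This is also why adding more cache keeps paying off: a larger cache generally raises the hit rate, and every extra hit replaces a millisecond-scale disk access with a microsecond-scale memory access.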

Cache tuning
Many systems (on-board RAID controllers as well as some external array controllers, such as
the HP EVA) allow administrators to tune the cache for maximum performance. One common
cache-tuning parameter is read/write ratio. For some applications, it may be beneficial to
increase the amount of cache memory allocated to read operations, and in other situations it
may be more beneficial to increase the amount assigned to write operations. This can be a
delicate balance, so a cache should only be tuned by administrators who understand what they
are doing and who are acting in response to recommendations from vendors.
Some systems do not allow such tuning and have built-in algorithms to automatically
and dynamically tune cache settings so that administrators do not need to intervene.
A common way of improving storage performance is to increase the amount of cache in a
system. This applies to both on-board RAID controllers, such as HP Smart Array RAID
Controllers, and external RAID controllers. Generally speaking, the more cache a system has,
the higher its performance is.
Cache memory usually carries a high cost, and it is common for people working in storage to
use the phrase "cache is cash" to emphasize that cache is not cheap.

Hot spares
Having a hot spare in a storage configuration is the practice of having a spare standby disk that
can be used automatically by the system to replace a failed disk. For example, a system with
six disk drives could be configured as RAID 5 4+1 with the leftover disk configured as a hot
spare. In this configuration, when a drive fails, the system will remain up and working because
the RAID 5 configuration can survive a single failed disk. However, because one disk has
failed, the RAID set effectively becomes a stripe of four data disks. In order to regain parity
(protection), the hot spare disk can be dynamically added into the RAID configuration and the
parity can be rebuilt. It then becomes less urgent to replace the failed disk quickly. Without a
hot spare, it is urgent to replace the failed disk because a second failed disk would cause the
RAID set to fail.
Hot spares are sometimes referred to as online spares.
A hot spare can be associated with more than one RAID group. In fact, it is common for there to
be a high ratio of RAID groups to hot spares.
Some RAID controllers (on-board and external) allow administrators to specify a rebuild priority.
Giving rebuild operations a high priority will make sure that the failed drive is rebuilt to the hot
spare as quickly as possible. Prioritizing rebuild operations has the advantage of reducing the
amount of time during which you are exposed to potential data loss. (Data loss will occur in a
RAID 5 set if a second drive fails before the first is rebuilt.) However, high rebuild priorities can
result in slower performance during the rebuild. Therefore, setting a rebuild priority is a trade-off
between affecting overall performance and exposing the system to potential data loss.
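The rebuild-from-parity operation described above can be sketched in a few lines. The following is a toy model only, assuming single-byte blocks and the 4+1 layout from the example; real controllers perform this in firmware across full stripes:

```python
# Illustrative sketch (not controller firmware): rebuilding a failed RAID 5
# member onto a hot spare using XOR parity.

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Four data disks; the parity disk holds the XOR of all data blocks.
disks = [b"\x11", b"\x22", b"\x33", b"\x44"]
parity = xor_blocks(disks)

# Disk 2 fails. Its contents are recomputed from the survivors plus parity,
# then written to the hot spare. This is why a rebuild must read every
# surviving disk, making it far more expensive than a simple copy.
survivors = [d for i, d in enumerate(disks) if i != 2] + [parity]
hot_spare = xor_blocks(survivors)
assert hot_spare == disks[2]
```

The same XOR property explains why losing a second disk before the rebuild completes is fatal: with two unknowns, the equation can no longer be solved.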

JBOD
Just a Bunch of Disks (JBOD) is a collection of disks that are not formed into a RAID set.
They have no parity or mirroring and therefore no protection against failure. As a
result, JBODs are very uncommon, especially in server environments.
If you are using JBODs, it is highly recommended that you run some software RAID or some
form of mirroring to provide protection. Microsoft Windows Server 8 Storage Spaces may be a
good solution for protecting data on systems on which there is no hardware RAID.

S.M.A.R.T.
Self-Monitoring, Analysis, and Reporting Technology (SMART or S.M.A.R.T.) is an industry-wide
monitoring standard for hard disk drives that attempts to predict drive failures.
Predicting a drive failure is preferable to allowing a drive to fail and then having to rebuild it
from parity (as you do with RAID). If you can predict that a drive is about to fail, you can simply
copy its contents to a hot spare. However, if the drive fails, the hot spare has to be rebuilt from
parity. Building from parity is a computationally intensive task when compared to a data copy.
Therefore, the rebuild from parity takes longer and consumes more system resources.

Defragmentation
A common practice employed to improve system performance is defragmentation.
Most defragging tools, such as those that ship with some versions of the Microsoft Windows
operating system, are file system defragmenters. They attempt to place files in contiguous
addresses within the file system. We will now look at a simple example.
Our example file system has an address space of 0-99, and the file system is mounted as C:.
When we first write files to C:, they will be stored in contiguous address spaces. For example,
File-1 takes up 10 address spaces and will be written to address spaces 0-9. These are
contiguous. However, over time the file system fills up, and we delete 3 unwanted files (File-1,
File-6, and File-9). These deletions free up space, but the freed up space is not contiguous:
File-1 freed up 0-9.
File-6 freed up 53-55.
File-9 freed up 71-76.
Assume now that we want to write a new file named File-20 that requires 15 address spaces in
the file system. Although there are more than 15 free address spaces, these spaces are
scattered throughout the file system address space. If we were to save File-20, it would take up
addresses 0-9, 53-55, and 71-72.
File system defragmenting tools attempt to reorganize files within the file system so that files
occupy contiguous addresses and so that all free addresses can be grouped together. If we had
defragmented the file system prior to writing File-20, the defragmentation would probably have
grouped the three free address spaces together, allowing us to write File-20 to contiguous
addresses.
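The example above can be modeled in a few lines of code. This is a toy first-fit allocator over the three free extents left by the deletions; the data structure and function are invented for illustration:

```python
# Free extents left after deleting File-1, File-6, and File-9: (start, length)
free_extents = [(0, 10), (53, 3), (71, 6)]

def allocate(extents, needed):
    """Allocate 'needed' blocks, splitting across extents if necessary."""
    placement = []
    for start, length in extents:
        if needed == 0:
            break
        take = min(length, needed)
        placement.append((start, take))
        needed -= take
    return placement if needed == 0 else None

# File-20 needs 15 blocks; no single extent is large enough, so the file
# ends up fragmented across three extents: 0-9, 53-55, and 71-72.
print(allocate(free_extents, 15))   # [(0, 10), (53, 3), (71, 2)]
```

After defragmentation, the free extents would be merged into one contiguous region, and the same request would be satisfied with a single extent.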
Scenario: Stay and Sleep
One recommendation for how to configure a database server is to store the logs and database
on separate volumes. Given what you have learned about disk drives, explain the benefits of
doing so and the performance implications for configuring the volumes that will contain the
database and log files. What information will be helpful in making configuration decisions?

In this chapter, you learned:
A hard disk drive is a mechanical device that performs reasonably well for sequential
data access, but less well for random access.
An SSD is quieter, more power efficient, more resilient to physical shock, and better
at providing random access performance than an HDD is.
Modern drives communicate over a serial interface.
SATA drives are less expensive but slower than SAS drives.
A SCSI address is composed of the bus, the target, and the LUN components.
A DAS device can only be accessed by a single server.
A SAN is a storage array that provides block-level shared storage between multiple
servers.
A NAS is shared storage that is accessed over a file sharing protocol, such as NFS
or SMB.
The performance of a RAID volume can be optimized by adjusting stripe size and
adding a BBWC or a FBWC.
A hot spare is a standby spare that is rebuilt in the background when a drive in an
array fails.
S.M.A.R.T. monitors hard disk drives and attempts to predict drive failure.

Review Questions
1. Which type of drive performs better for random data access?
2. Which type of interface provides the best performance?
3. What is seek time?
4. A DAS device can only be connected directly to a server's backplane. (True or False)
5. _______ allows Fibre Channel Protocol to run over Ethernet networks.
6. A(n) _______ is a collection of disks that are not formed as a RAID set.

Fill in the blank
1. If a read/write head makes contact with the platter, a __________ occurs.
2. A SCSI device is addressed by its __________, __________, and __________.
3. When reading from an HDD, __________ workloads result in more seek time.
4. A __________ provides block-level storage that can be shared by multiple servers.
5. __________ allows access to a SAN over a traditional TCP/IP network.
6. __________ RAID offers better performance than __________ RAID.

True or false
1. The primary difference between a SAN and a NAS is that a NAS can only be accessed
by Windows servers.
2. Both HDD and SSDs are available with SAS interfaces.
3. A NAS provides only block-level data storage.
4. Spindle speed can be used to rate HDDs, but not SSDs.
5. Adjusting the stripe size of a RAID 6 array can help reduce the write penalty.
6. One drawback of FBWC is that cached data is lost during a power failure.

Essay questions
1. A customer needs to choose between purchasing a server with HDDs and purchasing
one with SSDs. Describe the advantages and disadvantages of each.
2. Compare the SATA and SAS interfaces in terms of performance, cable length, and cost.
3. Explain how a DAS, NAS, and SAN differ. Include an example of when each would be
used.

Scenario question
Scenario: FI-Print
FI-Print has hired you to design a file sharing solution. They have Macintosh and Windows 7
computers. They currently have a rack with three servers: an Active Directory domain

controller, a web server, and a database server.

Files are currently shared from DAS storage on a legacy tower server. The server has a RAID
1 volume that is 4 TB. They are constantly running out of storage capacity. Also, they
anticipate that their storage needs will grow by 20% each year over the next three years.
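As a rough sizing check (a sketch only; the 4 TB starting point and 20% growth rate come from the scenario), the projected capacity can be computed:

```python
# Project FI-Print's capacity needs: 4 TB today, growing 20% per year.
capacity_tb = 4.0
for year in range(1, 4):
    capacity_tb *= 1.20
    print(f"Year {year}: {capacity_tb:.2f} TB")
# By year three the requirement is roughly 6.9 TB, so any proposal should
# leave headroom beyond the current 4 TB volume.
```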

1. Research NAS options on the HP website. Prepare a proposal that gives two storage
device options: one that is the least expensive and one that will provide the
best performance.


Chapter 11: Configuring Storage

You were introduced to storage configuration using HP Option ROM Configuration for Arrays
(ORCA) earlier in the course. In this chapter, we will examine how to configure storage using
other utilities.
We will begin the chapter with a look at how to configure DAS storage using the Array
Configuration Utility (ACU). Next, we will take a more in-depth look at iSCSI, Fibre Channel,
and FCoE SANs. We will conclude the chapter by examining how to configure a SAN using the
Storage Management Utility.

In this chapter, you will learn how to:
Explain the operation of fiber-optic technology.
Identify the components required to implement an FC-SAN.
Describe the components required to implement an iSCSI storage architecture.
Explain and recognize FCoE technologies.
Identify and describe Fibre Channel adapters.
Install and assemble external storage.

Using the Array Configuration Utility (ACU)

The ACU is a powerful utility that allows you to perform a number of actions, including those not
supported by ORCA. Table 11-1 compares the tasks you can perform with each tool.
Table 11-1: Comparison between ACU and ORCA (tasks compared):
Identify a device by causing its LED to illuminate
Create and delete arrays
Create and delete logical drives
Assign a RAID level to a logical drive
Change the size of a logical drive
Create multiple logical drives per array
Configure the stripe size
Assign a spare drive to an array
Share a spare drive among arrays
Assign multiple spare drives to an array
Set the spare activation mode
Migrate RAID level or stripe size
Expand an array
Extend a logical drive
Configure the expand priority, migrate priority, and accelerator ratio
Set the boot controller
Additional features are available with the HP Smart Array Advanced Pack (SAAP), but these
features require a registered license key. A discussion of these features is beyond the scope of
this course.
ACU supports a graphical user interface, a command-line interface, and a scripting language.
In this course, we will focus on the GUI.
On a G7 server, you can access the ACU GUI from the SmartStart maintenance menu, as
shown in Figure 11-1. To launch it, click HP Array Configuration and Diagnostics.

Figure 11-1: SmartStart Maintenance Operations

The ACU displays the Welcome screen, as shown in Figure 11-2. The three tabs at the top
allow you to choose between Configuration, Diagnostics, or Wizards. To get started, select a
device from the drop-down list immediately beneath the tabs. If you have recently installed a
device and it does not appear in the list, click the Rescan System button to the right of the
drop-down list.

Figure 11-2: ACU Welcome Screen

After you select a device, the ACU displays the configuration for that device, as shown in
Figure 11-3. The left pane shows the device and its children, shown in logical view by default.

Figure 11-3: ACU - Logical View

Logical view allows you to view and manage the logical drives for the arrays that are managed
by the controller.
You can select two other views from the drop-down list: Physical view and Enclosure view.
Physical view, shown in Figure 11-4, allows you to view information about the physical drives.

Figure 11-4: ACU - Physical View

The Enclosure view, shown in Figure 11-5, shows the drives grouped by internal drive cage.
internal drive cage
The casing that holds an internal hard disk drive.

Figure 11-5: ACU - Enclosure View

The View Status Alerts link in the left pane allows you to view any alerts that have occurred.
Alerts are identified by icon as being informational (white circle with a blue i), warning (yellow
circle with an exclamation point), or critical (red circle with an X).

Managing controllers
Figure 11-6 shows the configuration tasks available when you select the Smart Array P410i in
Embedded Slot device. You can manage the following:
Controller settings
Array accelerator settings
Physical drive write cache settings
License keys
You can also view more information for the selected device or clear the current configuration.

Figure 11-6: Smart Array Controller Available Tasks

You can view detailed information about the array controller's configuration by clicking More
Info. The ACU displays the screen shown in Figure 11-7. It reports information about the slot
where the controller is installed, the controller's status, and its current configuration settings.

Figure 11-7: More Information - Smart Array P410i

Figure 11-8 shows the controller settings that you can configure. From this screen, you can
adjust the percentage of the array accelerator cache used for read vs. write operations. You can
also configure the amount of priority given to rebuild and maintenance operations. These
settings are used to optimize performance and recoverability. More about these settings will be
covered later in the course.

Figure 11-8: Smart Array Controller Settings

The Array Accelerator settings are shown in Figure 11-9. Here you can select the logical drives
that have caching enabled. You can also enable write caching even if there is not a fully
charged battery. Enabling BBWC when no battery is present can cause data loss if a power
failure occurs.
Figure 11-9: Array Accelerator Settings

Array settings

Selecting an array reveals the tasks that you can perform on that array, as shown in Figure
11-10. These tasks include deleting the array and viewing information about the array.

Figure 11-10: ACU - Array Tasks

The More Information screen for the array is shown in Figure 11-11. As you can see, you can
view information about the status of the array, the type of drives in the array, the logical and
physical drives that comprise the array, spare drives, failed drives, and the array controller that
manages the array (indicated by Device Path).

Figure 11-11: ACU - Array Information

Logical drive settings

The tasks available for managing a logical drive are shown in Figure 11-12.

Figure 11-12: ACU - Logical Drive Available Tasks

As you can see, you can migrate the RAID and stripe size, delete the logical drive from the
array, or view more information.
The information screen is shown in Figure 11-13.

Figure 11-13: Logical Drive Information

Physical drive information

Selecting a physical drive and clicking More Information displays information about the
physical drive, as shown in Figure 11-14. You can learn information about the model and serial
number of the drive, as well as the firmware version. You can also monitor current
environmental and operational settings, including the current temperature, maximum
temperature, rotational speed, PHY count, and transfer speed.
PHY count
Identifies the number of physical ports in a drive.

Figure 11-14: ACU - Physical Drive

Drive cage information

Selecting the drive cage and clicking More Information displays the screen shown in Figure
11-15. On this screen, you can view whether the power supply is redundant, the number of
drive bays, and information about the drive cage location. The Device Path area lists the array
and the cage's controller.

Figure 11-15: ACU - Enclosure Information

Scenario: Stay and Sleep

You are setting up the web server for Stay and Sleep. It has four DAS hard disk drives. You
want to configure three of the drives as a RAID 5 array and one as a spare. You want to
create two volumes on the RAID 5 array.
List the steps that you would take to configure storage for the web server.

Configuring a SAN Device

As you will recall from the last chapter, a SAN is an array on the network that can handle
block-level I/O commands. Configuring a SAN device requires that you:
Set up a storage array system.
Configure iSCSI or Fibre Channel connectivity.
Provision storage.
We will discuss the procedure for configuring a SAN using a P2000 G3 storage device. But first,
we will take a more detailed look at how iSCSI and Fibre Channel operate and what equipment
is required to connect devices to a SAN.

iSCSI configuration
iSCSI is a protocol used to encapsulate SCSI commands so that they can be sent over an
Ethernet network, as illustrated in Figure 11-16.

Figure 11-16: iSCSI Device on a Network

The primary advantages of iSCSI are as follows:

iSCSI requires little investment.

iSCSI can operate over an existing Ethernet LAN, MAN, or even WAN.

How iSCSI works

As with SCSI, an iSCSI communication session involves an initiator and a target (Figure
11-17). SCSI commands, data, and status are encapsulated in an iSCSI wrapper. TCP and IP
information is then added to the message to allow for transport over the network. iSCSI
messages are known as protocol data units (PDUs).
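The layering can be visualized with a conceptual sketch. The dictionaries below are illustrative only (the field names are invented, not the actual PDU wire format); port 3260 is the standard iSCSI port:

```python
# A SCSI command wrapped in an iSCSI PDU, which in turn rides inside a
# TCP/IP segment on the Ethernet network.
scsi_command = {"opcode": "READ(10)", "lba": 2048, "blocks": 8}
iscsi_pdu = {"pdu_type": "SCSI Command", "payload": scsi_command}
tcp_ip_segment = {"dst_ip": "192.0.2.10", "dst_port": 3260, "data": iscsi_pdu}

# Peeling the layers back at the target recovers the original SCSI command.
assert tcp_ip_segment["data"]["payload"] is scsi_command
```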

Figure 11-17: iSCSI Initiator and Target

An iSCSI target has a unique IP address and port that identifies it on the network. It also has a
globally unique iSCSI name that consists of:
A type designator.
The naming authority.
A unique identifier assigned by the naming authority.
The unique identifier is known as the World Wide Name (WWN). You can also assign a
user-friendly Alias name.
An iSCSI initiator can either be configured with the group address for the storage, or it can
locate an iSCSI target by contacting an Internet Storage Name Service (iSNS) server, which
functions similar to DNS.
iSCSI sessions proceed through two phases: the login phase and the full feature phase. During
the login phase, the iSCSI initiator authenticates to the target and a session is established.
SCSI commands and data are passed during the full feature phase.
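The three-part name structure can be illustrated with the common "iqn" name format. The example name below is made up, and the parsing logic is a sketch rather than a validator:

```python
def parse_iscsi_name(name):
    """Split an iqn-format iSCSI name into its three components."""
    prefix, _, unique_id = name.partition(":")
    designator, _, authority = prefix.partition(".")
    return designator, authority, unique_id

# Type designator "iqn", naming authority (registration date plus reversed
# domain name), and a unique identifier assigned by that authority.
print(parse_iscsi_name("iqn.2012-04.com.example:storage.array1"))
# ('iqn', '2012-04.com.example', 'storage.array1')
```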

Supporting iSCSI
To act as an iSCSI initiator, a device needs an iSCSI-capable Ethernet adapter or an iSCSI
host bus adapter (HBA). When an Ethernet adapter is used, the operating system must perform
iSCSI processing. With an iSCSI HBA, iSCSI processing is offloaded to the HBA, providing
better performance.
You use the Network Configuration Utility (NCU) to enable iSCSI on an HP network adapter.
First select the NIC on which you want to enable iSCSI, as shown in Figure 11-18.

Figure 11-18: HP Network Configuration Utility

Click Properties. Check the Enable iSCSI box on the Settings tab, as shown in Figure 11-19.

Figure 11-19: Enabling iSCSI

You can type a MAC address in the Locally Administered Address box if you want the iSCSI
initiator to have a different MAC address for iSCSI traffic. You can also click iSCSI IP Settings
to configure an IPv4 or IPv6 configuration that the NIC should use when sending iSCSI
packets. The default is to use DHCP to assign the IP configuration, as shown in Figure 11-20.

Figure 11-20: iSCSI IP Settings

You need to click OK to close all dialogs and then click OK to close the NCU. When you
reopen the NCU, an iSCSI Devices tab will be available. This tab lists all NICs that have iSCSI
enabled, as shown in Figure 11-21.

Figure 11-21: iSCSI Devices Tab

You can select the NIC and click Properties to change the configuration and view information.
For example, you can view statistics on the Statistics tab, as shown in Figure 11-22.

Figure 11-22: iSCSI Initiator Statistics

You can also view iSCSI configuration information on the Information tab, as shown in Figure
11-23.

Figure 11-23: iSCSI Initiator Information

There is no iSCSI information available on the Diagnostics tab.

Fibre Channel configuration

Instead of operating over an Ethernet network, Fibre Channel SANs are dedicated storage
networks, as shown in Figure 11-24. Therefore, they are more expensive to implement because
you must purchase additional cable, special HBAs, and a Fibre Channel switch.

Figure 11-24: Fibre Channel Implementation

Fibre Channel offers better performance than iSCSI because Fibre Channel does not impose
the additional overhead of TCP/IP network communication. Some also consider Fibre Channel
more secure because the data is sent over a dedicated network.

Fibre Channel network

The Fibre Channel network can use either copper or optical cable. The transmission speed and
maximum distance for various cable options are described in Table 11-2.
Table 11-2: Supported Cable Types on a Fibre Channel Network

Fibre Channel connectors

Your choice of network media will determine the type of supported connectors you must use.
For copper cable, you can choose either a DB-9 or High Speed Serial Data Connector
(HSSDC). Each connector has its own advantages. The DB-9 connector is slightly less
vulnerable to EMI than the HSSDC. However, the HSSDC has better impedance control and a
lower profile. The contacts are also immune to stubbing problems typically encountered in pin
and socket connectors.

A number of optical connectors are available. The optical connector most commonly used for
HP HBAs and switches is the LC connector. The Lucent Connector (LC), shown in Figure
11-25, is a small form factor ceramic-based connector used with most 2-Gbps SANs.

Figure 11-25: Fibre Channel Cable and LC Connector

Fibre Channel ports

A port is an intelligent interface point on a Fibre Channel network. A port takes an active role in
transmitting, processing, and receiving data. Ports can recognize and communicate with other
ports to ensure that data is reliably transmitted and received. Devices and switches on a Fibre
Channel network have specific types of ports to allow them to function. The required ports will
vary according to the topology being supported.
An HBA allows you to create one or more ports. For example, the HP PCIe 8Gb Host Bus
Adapter, shown in Figure 11-25, allows you to connect to 2-, 4-, or 8-Gbps optical storage
networks. It has an LC connector and allows a single physical port to acquire multiple N_port
IDs.
N_port
A port that connects a node on a Fibre Channel network. A node can be a storage device or a server.

Fibre Channel topologies

A Fibre Channel topology can be simple or complex, depending on the requirements. In an
SMB, the requirements are likely to be fairly simple. Therefore, we will limit our discussion to
the following topologies:
Point-to-point
Arbitrated loop
Single switch fabric
fabric
A single switch or a set of switches connected to form a network. Fabric services manage device names and addresses,
timestamps, and other functionality for the switches.

Point-to-point
In a point-to-point configuration, a single server is connected to a single storage device. A
single connection between N_ports exists, as shown in Figure 11-26.

Figure 11-26: Point-to-point Connection

The distance between the nodes will be governed by the limitations of the media that are used
to connect the devices. This topology is inexpensive to configure and offers high performance
because data is sent across a dedicated link. The drawback is that only a single server can
access the SAN storage.

Arbitrated loop
In an arbitrated loop topology (FC AL), the nodes communicate by passing messages between
themselves in a loop. The nodes can be connected directly, using L_ports or NL_ports, as
illustrated in Figure 11-27.

Figure 11-27: FC AL Topology

L_port
A port used to connect nodes in an arbitrated loop.
NL_port
A port used to connect nodes to an arbitrated loop that connects to a fabric.

An alternative is to connect each node to a Fibre Channel hub (FC Hub). Similar to a network
hub, an FC Hub is physically wired as a star, but it logically acts as a loop. The benefit of using
a hub over directly connected nodes is that hubs include circuits to bypass a node that has
failed. In a directly connected loop, any node failure prevents communication.
A hub does not have intelligent ports. It only serves as a connection point for the
physical devices.

Single-switch fabric
When you connect nodes to a switch, you establish a fabric. This configuration is also referred
to as FC SW. A fabric can have one switch or multiple switches. In a switch topology, the
N_ports on each node connect to an F_port on the switch, as shown in Figure 11-28.

Figure 11-28: Single Switch Topology

You can link multiple switches together by connecting the E_ports on each switch. The
supported configurations and limitations depend on the switches involved. A discussion of
supported configurations and limitations is beyond the scope of this course.
E_port
Extension port. Port used to connect multiple switches in a single fabric.

HP has a wide range of Fibre Channel switches available. One example is the HP SN3000B
Fibre Channel Switch, shown in Figure 11-29. It is a 1U switch that supports between 12 and
24 ports at rates of 4, 8, or 16 Gbps. It requires either 8Gb or 16Gb SFP+ transceivers. The
SN3000B switch also has a D-port that allows you to connect a device to perform
diagnostics.

Figure 11-29: SN3000B Fibre Channel Switch

transceiver
An interface between the system board of a network device and a network cable.

Enhanced Small Form Factor Pluggable (SFP+)

A transceiver that supports up to 10 Gbps Ethernet or 8 Gbps Fibre Channel transmission.
Small form Factor Pluggable (SFP)
A small form factor transceiver that replaced the Gigabit interface converter (GBIC) transceiver used to connect legacy Fibre
Channel and Gigabit Ethernet networks.
D-port
Diagnostic port.

Because a switched fabric uses intelligent ports and allows multiple switches to be linked
together, it is a highly scalable solution. However, it is more expensive than point-to-point or FC
AL. It can also be configured with built-in redundancy to provide failover and high availability of

Fibre Channel over Ethernet (FCoE)
Fibre Channel over Ethernet (FCoE) is an emerging standard that allows both TCP/IP and
Fibre Channel traffic to be carried over the same Ethernet network. To support FCoE, a network
must meet specific requirements, including:
10 Gigabit Ethernet
Support for jumbo frames
Lossless Ethernet
jumbo frames
An Ethernet frame that can carry more than 1500 bytes of payload.
Lossless Ethernet
An Ethernet fabric that can guarantee timely data packet delivery even if the network is congested.
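The jumbo-frame requirement follows from simple arithmetic: a full-size Fibre Channel frame does not fit in a standard Ethernet payload. The 2112-byte data field and 24-byte header figures come from Fibre Channel framing; FCoE adds further encapsulation overhead on top:

```python
# A maximum-size FC frame exceeds the standard 1500-byte Ethernet payload,
# which is why FCoE networks must support jumbo frames (~2.5 KB).
FC_MAX_DATA_FIELD = 2112   # bytes of payload in a full FC frame
FC_HEADER = 24             # FC frame header
STANDARD_MTU = 1500        # standard Ethernet payload size

fc_frame = FC_HEADER + FC_MAX_DATA_FIELD
assert fc_frame > STANDARD_MTU
print(f"FC frame: {fc_frame} bytes vs standard Ethernet payload: {STANDARD_MTU}")
```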

Figure 11-30 shows an example of an FCoE network. Devices on an FCoE network require a
Converged Network Adapter (CNA), which is an interface card that can handle both LAN traffic
and FCoE traffic. The network also needs an FCoE switch.

Figure 11-30: FCoE

Fibre Channel traffic is encapsulated in FCoE packets to be transferred directly over Ethernet.
FCoE performance is better than iSCSI because FC packets are not carried over IP, so
there is less processing overhead associated with the traffic.

About the P2000 G3

The HP P2000 G3 MSA Array Systems are 2U dual-controller storage systems available in
SFF and LFF models. The SFF models support 24 drives and up to 24 TB of storage. The LFF
models support 12 drives and up to 36 TB of storage.
Models are available that support connectivity through SAS, 1 Gbps iSCSI, 8 Gbps Fibre
Channel, or both iSCSI and Fibre Channel. The combination model will be used for the
examples in the rest of this chapter.
The HP P2000 G3 FC MSA Dual Controller Virtualization SAN Starter Kit includes the
following components:
HP P2000 G3 FC chassis with 24 SFF drive bays
Two 16-port 8 Gbps FC switches
Six 8 Gbps HBAs, SFPs, and cables
Server Optimization Application (SOA) software
Drives are purchased separately. Bundles that include a specific amount of storage are also
available.

Installing the P2000 G3 MSA Array System

In this section, we will look at some general steps for installing the P2000 G3 MSA Array
System and the cabling guidelines for supported configurations.
Before installing any storage array system, make sure to consult the documentation for
that system.

Installation steps
The P2000 G3 MSA Array System is a rack-optimized system that can be installed and
configured using the following steps:
1. Install equipment in the rack.
2. Install hardware controllers and drives.
3. Connect drive enclosures to the P2000 G3 MSA array controller enclosure.
4. Connect the P2000 G3 MSA System to data hosts.
5. Connect the P2000 G3 MSA System to a remote management host.
6. Connect two P2000 G3 MSA Systems to replicate volumes (if applicable to your
configuration).
7. Power on the components.
8. Update the firmware.
9. Configure the MSA.
We will now take a closer look at the steps.

Installing the equipment in the rack

The P2000 G3 MSA Array System is supported in HP 10000 series racks. Although it might
work in other racks, it has not been tested for other configurations yet. Consult the racking
instructions poster or the instructions provided with the device to determine placement within
the rack.

Installing controllers and drives

Installing the controllers and disk drives in the system is the next step in the process. You
should consult the documentation for the specific options that you are installing.
You can only install the specific type of controller supported by the array system. For
example, you can only install a Combo FC/iSCSI MSA Controller in a P2000 G3 MSA Array
System. Similarly, you can only install an iSCSI MSA Controller in a P2000 G3 iSCSI Array
System.

Connecting drive enclosures

After the controllers and disk have been installed, the drive enclosures should be connected to
the array controller enclosure. The HP drive enclosures listed in Table 11-3 are supported.
Table 11-3: Supported Drive Enclosures

Mixing 3 Gbps drive enclosures and 6 Gbps devices might limit throughput to 3 Gbps.
You can mix LFF and SFF drive enclosures. However, no more than 149 drives can be
attached to a controller. Drive enclosures are connected to the controller using either a
Mini-SAS to Mini-SAS or SAS to Mini-SAS connector. Consult the documentation to determine
which is supported for your configuration.
A drive enclosure can be connected to a single controller or to both controllers. Connecting a
drive enclosure to both controllers protects against controller failure. Figure 11-31 shows how
you would connect a P2000 G3 dual controller array enclosure to a P2000 drive enclosure that
has dual I/O modules. The connections are established using two mini-SAS to mini-SAS
cables.

Figure 11-31: Redundant Connection between Controllers and Drive Enclosures

Connecting to data hosts

A P2000 MSA Storage Array can be attached directly to a server or can connect through a
switch. The host ports for each model are shown in Table 11-4.
An FC/iSCSI MSA controller can support connections using both Fibre Channel and
iSCSI. However, the same LUN cannot be presented through both protocols.
Table 11-4: Supported Connections

Controller model                          Single controller   Dual controllers
P2000 G3 FC MSA Controller                2 FC                4 FC
P2000 G3 Combo FC/iSCSI MSA Controller    2 FC, 2 iSCSI       4 FC, 4 iSCSI
P2000 G3 SAS MSA Controller
P2000 G3 iSCSI MSA Controller
We will now look at a few supported configurations. We will start with a configuration in which a
P2000 G3 Combo FC/iSCSI MSA has redundant connections to a single server and to the
iSCSI network for replication, as shown in Figure 11-32. This configuration requires two Fibre
Channel cables and two Ethernet cables.

Figure 11-32: Redundant Connections to a Server and to iSCSI

Alternatively, you can connect the P2000 to one or more switches, and then connect the
servers to the switches. For redundant connections, use two Fibre Channel switches and/or two
iSCSI switches. A redundant configuration for FC is shown in Figure 11-33. Note that this
configuration requires two FC switches and eight Fibre Channel cables.

Figure 11-33: Redundant FC Configuration with Switches

Connecting the management host

You can connect the P2000 to a management network using an Ethernet cable to the port
referenced by #6 in Figure 11-34.

Figure 11-34: P2000 Ports

Powering on the components

The drive enclosures and array controller have separate power cords and must be powered on
in the following sequence:
1. Connect the primary power cords from the rack to separate external power sources.
2. Plug in each attached drive enclosure power supply module to one power source in the
rack and power on if necessary.
3. Wait one minute to ensure the drives are powered on.
4. Plug in and power on the P2000 G3 array controller.
If the servers were shut down for maintenance, power them on now.

Updating the firmware

After the array controller is powered on, you should verify that all controller modules, drive
modules, and disk drives have the latest firmware.

You can view and update the firmware using the Storage Management Utility. To do so, right-click
the system in the Configuration View panel and select Tools | Update Firmware. A screen
like the one shown in Figure 11-35 is displayed.

Figure 11-35: Storage Management Utility - Update Firmware Screen

P2000 LEDs
One way to verify that the P2000 and the disk drives are functional is to examine the LEDs. The
P2000 has LEDs in the front panel and the rear panel. We will now look at each set of LEDs
and examine what they can tell us.

Front panel LEDs

The front panel LEDs are identified in Figure 11-36.

Figure 11-36: LFF Front Panel LEDs

The LEDs referenced by 2 and 3 are disk drive LEDs. These are oriented as shown for LFF
drive enclosures. In SFF drive enclosures, the left-hand LED is #2, as shown in Figure 11-37.
The Enclosure ID LED allows you to correlate the actual enclosure with the logical view you
see in the Storage Management Utility.

Figure 11-37: SFF Front Panel LEDs

Table 11-5 describes the various states of the LEDs and their meanings.
Table 11-5: Front Panel LED Descriptions

The disk drive LED combinations can be interpreted as shown in Table 11-6.
Table 11-6: Disk Drive LEDs

Online/Activity (green) | Fault/UID (amber/blue)    | Meaning
On                      | Off                       | Normal operation; the drive is online, but not active.
Blinking irregularly    | Off                       | Drive is active and operating normally.
Off                     | Amber; blinking regularly | Offline; the drive is not being accessed. A predictive failure alert may have been received.
Blinking irregularly    | Amber; blinking regularly | Online; possible I/O activity. A predictive failure alert may have been received.
On or blinking          | Amber; blinking regularly | Active. A predictive failure alert may have been received.
Off                     | Amber; solid              | Offline; no activity. A failure or critical fault condition has been identified.
Off                     | Blue; solid               | Offline. The drive has been selected by a management application.
On or blinking          | Blue; solid               | The controller is driving I/O to the drive and it has been selected by a management application.
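The decoding logic of the drive LED combinations can be sketched as a simple lookup. This is an illustrative aid only; the key and function names are my own, and the wording follows the table rather than any HP tool or API.

```python
# Illustrative decode of the disk-drive LED combinations in Table 11-6.
# Keys pair the Online/Activity (green) LED state with the Fault/UID
# (amber/blue) LED state. Names and state strings are illustrative.

DRIVE_LED_MEANINGS = {
    ("on", "off"): "Normal operation; drive is online, but not active",
    ("blinking irregularly", "off"): "Drive is active and operating normally",
    ("off", "amber blinking"): "Offline; not being accessed; possible predictive failure alert",
    ("blinking irregularly", "amber blinking"): "Online; possible I/O; possible predictive failure alert",
    ("on or blinking", "amber blinking"): "Active; possible predictive failure alert",
    ("off", "amber solid"): "Offline; no activity; failure or critical fault",
    ("off", "blue solid"): "Offline; selected by a management application",
    ("on or blinking", "blue solid"): "Driving I/O; selected by a management application",
}

def decode_drive_leds(online_activity, fault_uid):
    """Map a pair of LED states to the description from Table 11-6."""
    return DRIVE_LED_MEANINGS.get((online_activity, fault_uid), "Unknown combination")
```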

Rear panel LEDs

The rear panel LEDs differ somewhat depending on the P2000 model. We will look at the LEDs for the
FC/iSCSI combo model, as shown in Figure 11-38.

Figure 11-38: P2000 G3 Combo FC/iSCSI MSA Controller Rear Panel LEDs

The LEDs and their meanings are described in Table 11-7.

Table 11-7: LED Meanings

The cache flush and self-refresh mechanism is an important data protection feature. Essentially
four copies of user data are preserved: one in each controller's cache and one in each
controller's CompactFlash.
CompactFlash
A memory module used to cache data that has not been written to disk. In the P2000, the CompactFlash is transportable,
which allows it to be moved to a replacement controller if a controller fails and the cache status LED is on or blinking.

Scenario: BCD Train

BCD Train has purchased a Dual Controller P2000 G3 iSCSI MSA Array System and an HP
D2700 drive enclosure. They plan to use the Array System to support virtual classrooms in 14
training centers that are connected through 1 Gbps Ethernet links.
List the steps required to install the P2000. How will you connect it to protect against failure of
a controller or switch?

Using the Storage Management Utility

The HP Storage Management Utility is a web-based utility that allows you to configure and
monitor a SAN storage device. Like the ACU, it has a pane on the left-hand side that allows you
to select logical and physical devices, as shown in Figure 11-39.

Figure 11-39: SMU

The menu options depend on the item selected. When the root node is selected, you can
view information about the storage device, provision storage, and manage configuration for the
P2000. In this section, we will look at how to provision storage and set some configuration
options.

Viewing information
When the storage device node is selected, the System Overview screen shows the health,
capacity, and storage space for various components. As you can see in Figure 11-40, a health
status icon is shown for the System, Enclosures, and Vdisks. It also shows the number of
each item, the storage capacity, and a graphical representation of how storage space is used.
Vdisk
A RAID array from which pools of storage can be provisioned.

Figure 11-40: System Overview

You can select a listed item to display detailed information about that item. For example,
selecting System shows detailed information about the system itself, as shown in Figure 11-41.
This includes information about system health, the system name, location, contact, and other
system information, the vendor name, product ID, product brand, SCSI Vendor ID and Product
ID, and supported locales. Information about system redundancy is also displayed, as well as
the status of each controller.

Figure 11-41: System Information

Selecting Enclosure displays the information shown in Figure 11-42, including the enclosure's
WWN, vendor, model, and the number of disk slots.

Figure 11-42: System Overview - Enclosure

Selecting Disks displays information about each disk, as shown in Figure 11-43, including the
enclosure ID and the slot where it resides, the serial number, the model number, the revision
number, the drive's type and capacity, as well as the way the drive is being used and its status.
In this example, three of the six disks are being used as a Vdisk and one is being used as a
spare. The other two disks are not assigned.

Figure 11-43: System Overview - Drive Details

Selecting Vdisks lists the Vdisks, their configuration, and status. As you can see in Figure 11-44, there is one Vdisk configured. It has a total size of 600 GB, and it has 299.3 GB of free
space that has not been assigned to either a volume or a snap pool.
snap pool
A storage area that has been allocated for snapshots of data volumes.

Figure 11-44: System Overview - Vdisks Details

Selecting Volumes allows you to view information about the volumes provisioned on the
system, as shown in Figure 11-45. As you can see, a single 250 GB volume named vd01 has
been created.

Figure 11-45: System Overview - Volumes

You can view information about the snap pools that have been created on the system by
selecting Snap Pools. As you can see in Figure 11-46, a single 50 GB snap pool has been
created for vd01. That snap pool contains one snapshot.

Figure 11-46: System Overview - Snap Pools

Similarly, you can select Snapshots to view information about individual snapshots, as shown
in Figure 11-47. You can also view information on any scheduled tasks, if there were any
defined. You can define schedules for replication, snapshot creation, and volume copy.

Figure 11-47: System Overview - Snapshots

Selecting Configuration Limits allows you to view the maximum number of Vdisks, volumes,
LUNS, disks, and host ports supported by the system, as shown in Figure 11-48.

Figure 11-48: System Overview - Configuration Limits

You can select Licensed Features to view information about the features that are supported on
the system based on the current licenses, as shown in Figure 11-49.

Figure 11-49: System Overview - Licensed Features

Licensing is required for:

Volume Copy
Virtual Disk Service (VDS)
Volume Shadow Copy Service (VSS)
Storage Replication Adapter (SRA)
Virtual Disk Service (VDS)
A service that enables host-based applications to manage Vdisks and volumes.
Volume Shadow Copy Service (VSS)
A service that enables host-based applications to manage snapshots.

Storage Replication Adapter (SRA)

A host-software component, installed on Microsoft Windows Server, that enables disaster recovery management (DRM)
software on the host to control certain aspects of the replication feature in storage systems connected to the host.

Selecting Versions displays versioning information for both controllers, as shown in Figure 11-50. This information is useful when troubleshooting problems.

Figure 11-50: System Overview - Versions

Viewing the Event Log

You can select View | Event Log to view a log of informational, warning, error, and critical
events that have occurred on the system. By default, all event severity levels are displayed, as
shown in Figure 11-51.

Figure 11-51: Event Log - All Events

However, you can select a specific event severity level to show all events logged at that level.
For example, you might want to view critical and error events periodically to learn about
problems that require attention.
Vdisk overview
You can view more detailed information about a specific Vdisk by selecting that Vdisk in the left
pane, as shown in Figure 11-52.

Figure 11-52: Vdisk Overview - Vdisk

As with the System Overview, this screen allows you to view details about the items that
belong to the Vdisk.
Selecting Disks shows information about the disks included in the Vdisk's RAID array and the
spares, as shown in Figure 11-53.

Figure 11-53: Vdisk Overview - Disk Information

You can also view information about the volumes and snap pools that belong to the Vdisk. The
same information is available through System Overview, so we will not show it here.

Viewing volume information

You can select a volume in the left-hand pane to view information about the specific volume, as
shown in Figure 11-54. If maps, schedules, replication addresses, or replication images had
been defined, you would be able to view those here as well.

maps
A list of hosts that have been given access to the volume. By default, all hosts have read/write access to a volume. You can
specifically assign read-only or no access instead.

Figure 11-54: Vdisk Overview - Volume

Viewing enclosure information

To view information about an enclosure, select the enclosure in the left pane. The Front
Graphical enclosure view shows each disk and whether it is allocated to a Vdisk, configured
as a spare, or available. It also shows the status of each disk and detailed information about the
enclosure itself, as shown in Figure 11-55. You can click one of the components, such as a
disk, to view detailed information about it.

Figure 11-55: Enclosure - Front Graphical View

After you create a Vdisk, its status will be shown as VDISKINIT while it is being initialized. This
is shown in Figure 11-56.

Figure 11-56: Vdisks Status Shown as VDISKINIT

You can view the same information in a tabular format by clicking the Front Tabular tab, as
shown in Figure 11-57. The tabular format shows additional information, including the size of
each disk and its serial number. It also allows you to select a specific disk to view detailed
information about it, including its RPM and transfer rate.

Figure 11-57: Enclosure Overview - Front Tabular View

The Rear Graphical tab provides an overview of the status of all components that are available
at the rear of the enclosure, as shown in Figure 11-58. The table that is shown below the
graphic reports the detailed information for the enclosure, unless another component had been
selected on one of the tabular lists or by clicking a component.

Figure 11-58: Enclosure Overview - Rear Graphical View

You can click a specific component to view detailed information about it. For example, Figure
11-59 shows the rear graphical view with the left power supply enclosure selected.

Figure 11-59: Enclosure Overview - Rear Graphical View, Power Supply Selected

In this example, there are no FC or iSCSI ports connected, as indicated by the blue circle with
the white question mark.
Figure 11-60 shows the rear graphical view when iSCSI port A3 is connected. You can select
the port to view details about the port's status, including its WWN (referenced by Target ID), its
configured speed, and its actual speed.

Figure 11-60: iSCSI Port A3 Connected and Operational

The Rear Tabular view displays a list of ports and components on the rear of the system, along
with information about them. You can select an individual component for more detailed
information, as shown in Figure 11-61.

Figure 11-61: Enclosure Overview - Rear Tabular View

Provisioning storage
You can provision storage either by running the Provisioning Wizard or by selecting individual
commands from the Provisioning menu. The commands available in the Provisioning menu
are determined by the object selected in the left pane. For example, you can create a Vdisk
only when the P2000 or the Vdisk node is selected. To create a volume, you need to select a
specific Vdisk.
We will first look at the procedure for provisioning storage using the wizard. Next, we will
overview the operations supported under each context.

Provisioning wizard
You launch the Provisioning Wizard by selecting the P2000 and choosing Provision |
Provisioning Wizard. An introduction screen listing the steps necessary to provision storage is
displayed, as shown in Figure 11-62.

Figure 11-62: Provisioning Wizard - Introduction

Click Next to continue. The first step in provisioning storage is to configure the Vdisk. As you
can see in Figure 11-63, you need to assign a name and a RAID level. You can optionally
assign the Vdisk to a specific controller or allow the controller to be selected automatically. If a
RAID level that involves striping is selected, you can also set the chunk size. When RAID 10 or
RAID 50 is selected, you will also be able to select the number of sub-Vdisks. The sub-Vdisks
are the number of RAID 1 mirrors used in the stripe set configuration. The Vdisk's name and
controller can be changed later. The RAID level and chunk size cannot.

Figure 11-63: Provisioning Wizard - RAID Level, Name, and Chunk Size

The next screen allows you to select the disks that will be included in the Vdisk. As shown in
Figure 11-64, you can select the number of RAID disks and the number of spares. In the case of
RAID 1, the number of RAID disks will always be 2.
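When deciding how many disks to select, it helps to estimate the usable capacity each RAID level yields. The sketch below is illustrative only: it ignores the metadata the array reserves and any spares, so actual usable figures are somewhat lower; the function name is my own.

```python
# Approximate usable capacity for common RAID levels, ignoring the
# metadata overhead the array reserves and any spare disks. A rough
# planning aid only, not an HP-provided calculation.

def usable_capacity_gb(disk_gb, n_disks, level):
    if level == "RAID0":
        return disk_gb * n_disks            # striping, no redundancy
    if level == "RAID1":
        return disk_gb                      # two-disk mirror
    if level == "RAID5":
        return disk_gb * (n_disks - 1)      # one disk's worth of parity
    if level == "RAID6":
        return disk_gb * (n_disks - 2)      # two disks' worth of parity
    if level == "RAID10":
        return disk_gb * n_disks // 2       # stripe across two-disk mirrors
    raise ValueError("unsupported level: " + level)
```

For example, four 500 GB disks yield roughly 1500 GB at RAID 5 but only 1000 GB at RAID 10, which tolerates more failure patterns.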

Figure 11-64: Provisioning Wizard - Select Disks

Next you will be prompted to define the volumes. As shown in Figure 11-65, you can define the
number of volumes to create and then set the size and base name for the volumes. You can
later add, delete, and expand volumes.

Figure 11-65: Provisioning Wizard - Define Volumes

The next screen allows you to map specific LUNs to a particular port and assign access. Figure
11-66 shows a base mapping that assigns read-write access to iSCSI A3.

Figure 11-66: Provisioning Wizard - Mapping (Graphical View)

You can set the mapping using a graphical view or a tabular view of the ports.
With the final screen, shown in Figure 11-67, you can confirm your configuration. You can use
the Previous button to go back and change a value. When you click Finish, the Vdisk and any
defined volumes are created.

Figure 11-67: Provisioning Wizard - Confirmation

Provisioning operations
The provisioning operations available depend on the item selected in the left pane. We will
look at the available options at each level.

System options
When the P2000 system node is selected, you can perform the provisioning operations shown
in Figure 11-68.

Figure 11-68: System Provisioning Options

Note that if you delete Vdisks or volumes, you will lose the data stored there. A global spare is
one that can be shared between multiple Vdisks. Figure 11-69 shows how you could configure
one disk as a global spare.

Figure 11-69: Managing Global Spares

When you add a host, you are prompted for the information shown in Figure 11-70. This
information includes the host name and the Host ID, which will be either a WWN or an iSCSI
IQN depending on whether you are using Fibre Channel or iSCSI. You can also select a
Standard, OpenVMS, or HP-UX profile. A discussion of OpenVMS and HP-UX is
beyond the scope of this course.
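The two Host ID formats can be told apart by their shape: an iSCSI Qualified Name begins with "iqn.", while a Fibre Channel WWN is 16 hexadecimal digits, sometimes written with colon separators. The helper below is a heuristic sketch of my own; the SMU performs its own validation.

```python
import re

# Rough classification of the two Host ID formats mentioned above: an
# iSCSI Qualified Name (e.g. iqn.1991-05.com.microsoft:host1) or a
# 16-hex-digit Fibre Channel WWN (sometimes colon-separated).
# Heuristic sketch only; not HP's validation logic.

def host_id_kind(host_id):
    s = host_id.strip().lower()
    if s.startswith("iqn."):
        return "iSCSI IQN"
    if re.fullmatch(r"[0-9a-f]{16}", s.replace(":", "")):
        return "FC WWN"
    return "unknown"
```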

Figure 11-70: Adding a Host

Provisioning Vdisks
The provisioning options supported when the Vdisks node is selected are shown in Figure 11-71.

Figure 11-71: Provisioning Objects for the Vdisks Node

The provisioning options supported when a specific Vdisk is selected are shown in Figure 11-72. Notice that you can create volumes and volume sets at this level.

Figure 11-72: Provisioning Options for a Specific Vdisk

Provisioning volumes
The provisioning options for a specific volume are shown in Figure 11-73.

Figure 11-73: Volume Provisioning Options

When you create a volume copy, you create a new volume and copy the contents of an existing
volume to it. You can either perform the volume copy immediately, or schedule it, as shown in
Figure 11-74.

Figure 11-74: Volume Copy

You can copy a volume or a snapshot to a new standard volume. The destination volume must
be in a Vdisk owned by the same controller as the source volume. If the source volume is a
snapshot, you can choose whether to include its modified data (data written to the snapshot
since it was created). The destination volume is completely independent of the source volume.
The first time a volume copy is created of a standard volume, the volume is converted to a
master volume and a snap pool is created in the volume's Vdisk. The snap pool's size is either
20% of the volume size or 6 GB, whichever is larger. Before creating or scheduling copies,
verify that the Vdisk has enough free space to contain the snap pool.
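The default sizing rule above can be written out directly. The function name is my own; only the 20%-or-6-GB rule comes from the text.

```python
# Default snap-pool sizing rule described above: 20% of the volume size
# or 6 GB, whichever is larger. Function name is illustrative.

def default_snap_pool_gb(volume_gb):
    return max(volume_gb / 5, 6.0)   # 20% expressed as one fifth

# The 250 GB volume vd01 shown earlier would get a 50 GB snap pool,
# matching Figure 11-46.
```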

For a master volume, the volume copy creates a transient snapshot, copies the data from the
snapshot, and deletes the snapshot when the copy is complete. For a snapshot, the volume
copy is performed directly from the source. This source data may change if modified data is to
be included in the copy and the snapshot is mounted/presented/mapped and I/O is occurring to it.
To ensure the integrity of a copy of a master volume, unmount/unpresent/unmap the volume or
at minimum perform a system cache flush and refrain from writing to the volume. Since the
system cache flush is not natively supported on all operating systems, it is recommended that
you unmount/unpresent/unmap temporarily. The volume copy is for all data on the disk at the
time of the request, so if there is data in the operating-system cache, that data will not be copied
over. Unmounting/unpresenting/unmapping the volume forces the cache flush from the
operating system. After the volume copy has started, it is safe to remount/re-present/remap the
volume and/or resume I/O.
To ensure the integrity of a copy of a snapshot with modified data, unmount/unpresent/unmap
the snapshot or perform a system cache flush. The snapshot will not be available for read or
write access until the volume copy is complete, at which time you can remount/represent/remap
the snapshot. If modified write data is not included in the copy, then you may safely leave the
snapshot mounted/presented.
During a volume copy using snapshot modified data, the system takes the snapshot offline, as
shown by the Snapshot Overview panel.

Figure 11-75: Rolling Back a Volume

Choosing Roll Back Volume allows you to revert the content of a volume to match a snapshot.
You are prompted for the volume and the snapshot volume, as shown in Figure 11-75. You also
have the option of including its modified data (data written to the snapshot since it was created).
For example, you might want to take a snapshot, mount/present/map it for read/write, and then
install new software on the snapshot for testing. If the software installation is successful, you
can roll back the volume to the contents of the modified snapshot.
Before rolling back a volume you must unmount/unpresent/unmap it from data hosts to
avoid data corruption. If you want to include snapshot modified data in the roll back, you must
also unmount/unpresent/unmap the snapshot.
All data modified since the snapshot was taken will be lost.
Scenario: BCD Train

You need to provision storage on the P2000. The drive enclosure has 6 TB of storage
capacity divided evenly among 12 disks. They currently require 2 TB of storage and expect
their storage requirements to grow by 25% each year. There are two types of training centers:
one for standard courses and another for courses that require security clearance. Only
training centers with security clearance should have access to the LUN that contains the
security courses.
Discuss the provisioning options supported. Which will offer the best level of performance and
resiliency? What can you do to ensure that a failed drive can be replaced without downtime?
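One way to sanity-check the scenario's numbers is to project the 25% annual growth against the 6 TB of raw capacity. The helper below is an illustrative sketch; only the 2 TB starting point, the growth rate, and the raw capacity come from the scenario, and raw capacity is an upper bound since RAID overhead and spares reduce what is actually usable.

```python
# Rough capacity projection for the BCD Train scenario: 2 TB used today,
# growing 25% per year, against 6 TB of raw capacity. Raw capacity is an
# upper bound; RAID parity and spares reduce the usable figure.

def years_until_exceeded(current_tb, annual_growth, capacity_tb):
    """Count the compounding years until demand first exceeds capacity."""
    years = 0
    while current_tb <= capacity_tb:
        current_tb *= 1 + annual_growth
        years += 1
    return years

# 2 * 1.25**5 is about 6.1 TB, so even raw capacity is outgrown within
# five years -- sooner once RAID overhead is subtracted.
```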

In this chapter, you learned:
The ACU allows you to configure DAS storage.
The ACU allows you to do everything you can do with ORCA, plus the following:
Change the size of a logical drive.
Create multiple logical drives per array.
Configure the stripe size.
Share a spare drive among arrays.
Assign multiple spare drives to an array.
Set the spare activation mode.
Migrate RAID level or stripe size.
Expand an array or logical drive.
Configure performance attributes.
Set the boot controller.
The iSCSI protocol encapsulates SCSI commands so that they can be sent over an
Ethernet network.
Fibre Channel SANs are dedicated storage networks that operate over copper or
fiber optic cable.
Fibre Channel SANs can be point-to-point, arbitrated loop, or switch-based networks.
FCoE allows both TCP/IP and Fibre Channel traffic to be carried over the same
Ethernet network.
FCoE requires 10 Gbps lossless Ethernet and support for jumbo frames.
A P2000 G3 MSA Array System can be attached to servers or switches.
LEDs provide status and troubleshooting information for a P2000 G3 MSA Array
System and the disk drives it contains.
The Storage Management Utility (SMU) is a web-based utility that allows you
to configure and monitor a SAN storage device, including the P2000 G3 MSA Array
System.

Review Questions
1. Which tool can you use to assign a spare drive to multiple DAS arrays?
2. What is required for a computer to be an iSCSI initiator?
3. What type of port is used to connect a device to an FC SW SAN?
4. What are the requirements for implementing FCoE?
5. Can a LUN be presented to both iSCSI and Fibre Channel hosts?
6. What does it indicate when a P2000 controller LED is blue?

True or false
1. Enabling BBWC can safely improve write performance with or without a battery.
2. An iSCSI SAN can operate over a LAN, MAN, or WAN.
3. FC AL requires two or more Fibre Channel switches.
4. FCoE can operate over any Ethernet network.
5. You cannot use an FC controller in an FC/iSCSI P2000.
6. When you create a LUN, the default mapping allows that LUN to be seen by all hosts.

1. You can use the _______ to obtain information about the current temperature and
rotational speed of a DAS drive.
2. An iSCSI unique identifier is known as a(n) _______.
3. In an FC SW topology, you can connect multiple switches together using _______ ports.
4. You connect a drive enclosure to a P2000 controller using a _______ or _______ cable.
5. When powering on a newly installed P2000, you should power on the _________ before
the _______.
6. A solid amber drive light on a drive in a P2000 Array System indicates _______.
7. You can use _______ to view detailed information about the status of the physical disks
in a P2000 using a Web browser.

1. Explain how you can view alerts related to a DAS drive enclosure.
2. Sketch the iSCSI protocol stack and explain how iSCSI works.
3. Explain the benefits of creating an FC fabric over using FC AL.
4. Explain the relationship between Vdisks and volumes.

Scenario questions

Scenario: BCD Train

You have configured a P2000 G3 FC/iSCSI storage array for BCD Train. You are using
Storage Management Utility to verify your configuration. Use the exhibits shown in Figures
11-76 through 11-79 to answer these questions.

1. How many volumes exist on the RAID5 Vdisk?
2. How many volumes exist on the RAID1 Vdisk?
3. What is the status of the RAID5 Vdisk?
4. How much storage capacity is not allocated to a volume?
5. Where does this storage capacity reside?
6. Which Vdisk is protected by a spare?
7. What SAN protocol is enabled?
8. How many management interfaces are connected?
9. How can you access the Storage Management Utility for controller A?

Figure 11-76: Front Enclosure Overview

Figure 11-77: Rear Enclosure Overview

Figure 11-78: vd01 Overview

Figure 11-79: vd02 Overview


Chapter 12: Business Continuity Planning

Many businesses depend on their servers to operate. When something happens to disrupt
server operation, the business risks losing money and possibly irreparable damage to its
reputation.
Earlier in the course, you were introduced to RAID, and you learned how it could be used to
provide resiliency against hard disk failure. In this chapter, you will learn additional techniques
and technologies for increasing resiliency against server failure and disaster.
In this chapter, we begin with an overview of business continuity planning. We will examine key
considerations for eliminating a single point of failure. Next, we will look at measures that
companies can take to protect against power failure. From there, we will examine how to
design and implement a backup strategy. We will conclude the chapter with a look at more
advanced fault tolerance methods, including virtualization, clustering, and load balancing.

In this chapter, you will learn how to:
Design and implement a business continuity plan.
Design and implement a backup strategy.
Describe and implement a power protection strategy.
Describe clustering and load balancing solutions.

Business Continuity Planning

Business continuity planning is a complex process that involves the following basic steps:
Identifying threats that could occur
Assessing the risk associated with those threats
Prioritizing risks based on the likelihood and cost of an occurrence
Identifying steps to take to mitigate the risk
In this chapter, we will focus on a very small area of business continuity planning: mitigating the
risk of server downtime and data loss.

Server-related business continuity objectives

When you develop a business continuity plan, the primary objectives concerning servers are:
Identifying possible vulnerabilities.
Implementing redundancy measures to reduce the impact of failure.
Ensuring that services can be restored as quickly as possible.
Minimizing the likelihood and impact of data loss.
When planning redundancy measures, you should keep in mind that servers sometimes need

to be taken offline for planned maintenance.

Identifying vulnerabilities
The most vulnerable aspects of a server infrastructure are single components that are critical to
the functioning of the server and that have no redundant backup. These components are known
as single points of failure (SPOF).
Consider the rack in Figure 12-1. How many possible SPOFs can you identify? What additional
information do you need to know to identify whether less obvious SPOFs also exist?

Figure 12-1: Single Points of Failure

The most obvious SPOFs are the servers themselves. If a problem causes an entire server to
go down, the service it performs would be inaccessible. If the domain controller goes down,
users would be unable to connect to the company network. If the web server goes down,
customers would not be able to make purchases or even learn about the company. If the
database server goes down, no purchases could be made at all.
Connections between servers can also be SPOFs. For example, this network has only a single
switch. If the switch fails, no device would be able to communicate on the network, and
operations would cease. Another SPOF is the network adapter in most of the servers.
Storage can also represent an SPOF. In this example, the NAS is configured as JBOD. This
means that there is no resiliency against failure of a disk drive. Also, the figure does not tell you
whether the NAS connects with multiple paths or just a single path. Neither the web server nor
the database server has resilient storage. The web server has a single internal disk drive. The
database server's drives are configured as a RAID 0 array, which will fail if a single disk fails.

Other components inside the systems might be SPOFs as well. How many CPUs are installed
in each server? Are memory protection options configured? What about the bus configuration?
What about power? Is the environment protected against power failure? If all components
reside on a single circuit, failure in that power circuit will take the components down.
Finally, the location itself might be a single point of failure because it is vulnerable to
environmental disasters or fire.
The key to correcting all these SPOFs is to implement redundancy within the system. We will
talk about server redundancy later in the chapter. For now, we will take a look at the fault
tolerance options available for eliminating the SPOFs that are associated with server
components and interconnects.
fault tolerance
The redundancy integrated into a system that makes it resilient to the failure of a component

Memory fault tolerance

Earlier in the course, you learned that ECC features can automatically correct memory errors.
Advanced ECC offers additional protection against memory failure.
We will now take a closer look at how data is written to memory, what types of errors can occur,
and how ECC and Advanced ECC operate to detect and correct errors.

How data is read and written to memory

Data is written to the memory cells in a DRAM chip as a series of bits. Each cell has an
electrical charge of either on (1) or off (0), as shown in Figure 12-2. Data is written one word at
a time. On a system that has a 64-bit bus, a word is 64 bits.

Figure 12-2: A Word Is a Series of Bits

word
A block of data. The number of bits in a word is determined by the width of the bus.

Two types of errors can occur when reading and writing data. A soft error occurs when an
electrical signal is misinterpreted (a 1 is misread as a 0 or vice versa). A hard error occurs when
there is a physical problem with a chip. A single-bit error is one that affects only one bit in the
word. A multi-bit error is one that affects more than one bit in the word.
As memory is created with higher capacity and lower voltage, errors become more
likely because the memory cells are smaller and the differential between on and off becomes
smaller.
HP introduced ECC memory in 1993 and Advanced ECC in 1996. All ProLiant servers support
Advanced ECC.

Basic ECC
Basic ECC calculates an 8-bit checksum, which is written with the data. When the memory
controller reads the data, it recalculates the checksum and compares it with the checksum that
was written. If they are different, ECC identifies the bit that is different and corrects it. This
process is illustrated in Figure 12-3.

Figure 12-3: Basic ECC

ECC can also detect multi-bit errors in which two to four bits are incorrect on the same DRAM
chip. However, it cannot correct them because if more than one bit is incorrect, ECC will not be
able to identify which bits are incorrect based only on the checksum values. Instead, it issues a
Non-Maskable Interrupt (NMI) message to the operating system. This message causes the
computer to halt.
Non-Maskable Interrupt (NMI)
A signal to the processor that cannot be ignored. It is generally used to indicate a serious, unrecoverable error.
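The detect-and-correct cycle described above can be illustrated with a small Hamming-style code. This is a teaching sketch only, not HP's implementation: real ECC hardware protects a 64-bit word with an 8-bit checksum, whereas this toy protects an 8-bit word with 4 check bits.

```python
# Illustrative Hamming-style single-bit error correction, in the spirit
# of ECC memory. NOT HP's implementation: real ECC protects a 64-bit
# word with an 8-bit checksum; this toy protects 8 bits with 4.

def encode(data_bits):
    """Place 8 data bits into a 12-bit codeword (positions 1-12, check
    bits at the power-of-two positions) so the XOR of the positions of
    all 1-bits is zero."""
    code = [0] * 13                      # 1-indexed; slot 0 unused
    data_positions = [p for p in range(1, 13) if p & (p - 1)]  # non-powers of two
    for pos, bit in zip(data_positions, data_bits):
        code[pos] = bit
    syndrome = 0
    for pos in range(1, 13):
        if code[pos]:
            syndrome ^= pos
    for check in (1, 2, 4, 8):           # set check bits to zero the syndrome
        if syndrome & check:
            code[check] = 1
    return code

def correct(code):
    """Recompute the syndrome; a nonzero value is the position of a
    single flipped bit, which is corrected in place, as ECC does for a
    single-bit soft error."""
    syndrome = 0
    for pos in range(1, 13):
        if code[pos]:
            syndrome ^= pos
    if syndrome:
        code[syndrome] ^= 1
    return code, syndrome

word = [1, 0, 1, 1, 0, 0, 1, 0]
stored = encode(word)
corrupted = list(stored)
corrupted[7] ^= 1                        # simulate a single-bit soft error
repaired, position = correct(corrupted)  # syndrome pinpoints position 7
```

Flipping two bits produces a nonzero syndrome that points at the wrong position, mirroring the text's point that a plain checksum can detect but not correct multi-bit errors.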

Advanced ECC
Advanced ECC provides multi-bit error correction as long as the cells with errors are located on
the same chip, protecting against the failure of an entire DRAM device. Advanced ECC works by associating four ECC
devices with each DRAM chip. Each chip contributes four bytes to the word, distributing them
between the ECC devices, as shown in Figure 12-4.

Figure 12-4: Advanced ECC

Whereas each ECC device can correct single-bit errors, Advanced ECC can correct an error
that involves multiple bits stored on the same DRAM chip.

Memory protection
Advanced ECC does not provide failover capability if a DIMM fails. Instead, if a DIMM fails, you
must either continue to operate with less memory, causing possible performance degradation,
or you must shut down the server to replace the failed DIMM, which results in loss of service.
There are three memory failure recovery technologies that provide failover. These are:
Online Spare Memory mode.
Mirrored Memory mode.
Lockstep Memory mode.
Memory Failure Recovery can only be used if all of the DIMMs associated with a memory
controller have the same type, size, and rank.

Online Spare Memory mode

In Online Spare Memory mode, you can designate a populated channel as the spare, making it
unavailable for normal use as system memory. If a DIMM in the system channel experiences
correctable memory errors that exceed a defined threshold, the channel goes offline, and the
system copies data to the spare channel, as illustrated in Figure 12-5. This duplication prevents
data corruption, a server crash, or both. You can replace the defective DIMM at your
convenience during a scheduled shutdown.

Figure 12-5: Online Spare Memory Mode

Online Spare Memory mode does not require operating system support or special software. It
only requires a BIOS that supports it. It can be implemented in the following configurations:
A memory controller that has at least two populated memory channels
A memory controller that has a single memory channel populated with dual-rank DIMMs
HP SIM can be configured to log messages related to Online Spare Memory mode events as
long as the operating system has an agent that supports Advanced Memory Protection.
One disadvantage of Online Spare Memory mode is that memory capacity is reduced. A system
that has two populated memory channels and has enabled its Online Spare Memory mode will
have its memory capacity reduced by 50%. A system that has three populated memory
channels will have its memory capacity reduced by one third.
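The capacity arithmetic above can be sketched as a quick check. This is an illustrative helper, not HP software, and it models channel-level sparing only:

```python
def usable_memory_gb(total_gb: float, populated_channels: int) -> float:
    """Usable capacity when one populated channel is reserved as the online spare."""
    if populated_channels < 2:
        raise ValueError("channel sparing needs at least two populated channels")
    return total_gb * (populated_channels - 1) / populated_channels

# Two channels: capacity is reduced by 50%; three channels: by one third.
print(usable_memory_gb(64, 2))  # 32.0
print(usable_memory_gb(96, 3))  # 64.0
```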
Although Online Spare Memory technology reduces the chance of an uncorrectable error
bringing down the system, it does not fully protect your system against uncorrectable memory
errors.

Mirrored Memory mode

Mirrored Memory mode provides better fault tolerance than Online Spare Memory mode
because it protects against both single-bit and multi-bit uncorrectable errors.
With Mirrored Memory mode, the memory subsystem writes identical data to two channels
simultaneously. If a memory read from one of the channels returns incorrect data due to an
uncorrectable memory error, the system automatically retrieves the data from the other channel,
as shown in Figure 12-6.

Figure 12-6: Mirrored Memory

A transient or soft error in one channel does not affect mirroring, and operation continues
unless a simultaneous error occurs in exactly the same location on a DIMM and its mirrored
DIMM. Mirrored Memory mode reduces the amount of memory available to the operating
system by 50% because only one of the two populated channels provides data.
On a memory controller with three channels, Mirrored Memory mode actually reduces the
potential memory capacity of the server by two thirds because the third channel must remain
unpopulated.

Lockstep Memory mode

Lockstep Memory mode uses two memory channels at the same time and offers you an even
higher level of protection. In lockstep mode, two channels operate as a single channel. Each
write and read operation moves a data word two channels wide, as shown in Figure 12-7. Both
channels split the cache line to provide 2x 8-bit error detection and 8-bit error correction within
a single DRAM. Reduction in memory capacity is the same as for Mirrored Memory.

Figure 12-7: Lockstep Memory Mode

Configuring memory protection

You can configure memory protection using RBSU. To do so, select System Options from the
main menu, as shown in Figure 12-8, and press the Enter key.

Figure 12-8: Main Menu - Select System Options

Select Advanced Memory Protection. As you can see in Figure 12-9, the default value is
Advanced ECC Support.

Figure 12-9: Advanced Memory Protection - Current Value

Press the Enter key to display a list of values, as shown in Figure 12-10. Select the protection
level appropriate for the server that you are configuring and press the Enter key.

Figure 12-10: Memory Protection Options

If you select an option that is not supported by the current memory configuration, a warning will
be displayed, as shown in Figure 12-11.

Figure 12-11: Memory Configuration Warning

Network fault tolerance

Depending on how the network and server are configured, several different points of failure are
possible along the communication path. These points of failure include:
PCIe slot failure
Network adapter or port failure
Upstream network failure
As with other fault tolerance methods, implementing network fault tolerance involves adding
redundancy to the network infrastructure. One way to add redundancy is through dual homing.
The other is through network teaming. We will look at dual homing first.

Dual homing
You can dual home a server by installing multiple network adapters in the server and assigning
a unique IP address to each. Connecting all adapters to the same switch provides protection
against network adapter or cable failure, as shown in Figure 12-12.

Figure 12-12: Dual-homed Server - Single Switch

You can provide additional fault tolerance by deploying multiple switches and distributing the
network adapter connections between them, as shown in Figure 12-13.

Figure 12-13: Dual-homed Server - Multiple Switches

This configuration provides fault tolerance against network adapter, cable, switch, or upstream
network failure.
The problem with dual homing is that although it provides good fault tolerance for outbound
traffic, it does not provide very good fault tolerance or automatic failover for inbound traffic.
Client computers send packets to a specific IP address. Until the client becomes aware that the
IP address has changed, the client will continue to attempt to access the server using the IP
address where the fault exists.

HP ProLiant Network Adapter Teaming

As you learned earlier in the course, Network Adapter Teaming allows you to associate
multiple network adapters with the same IP address, as shown in Figure 12-14. Doing so
provides fault tolerance, as well as failover for inbound traffic. In this case, Network Adapter
Teaming also provides failover for outbound traffic because failover is transparent to clients.

Figure 12-14: Network Adapter Teaming

Network Adapter Teaming allows you to associate between two and eight network adapters
with a single IP address. Regardless of how it is configured, Network Adapter Teaming
provides the following protections:
Fault tolerance against network adapter or cable failure for inbound traffic
Fault tolerance against network adapter or cable failure for outbound traffic
Depending on how it is configured, Network Adapter Teaming might also provide load
balancing, the ability to choose a network adapter preference order, and/or fault tolerance
against switch failure and upstream network failure.
load balancing
The act of distributing a workload across multiple components.

Table 12-1 shows the available Network Adapter Teaming types and the features that they
support. You would use a type that allows you to select preference order if some network
adapters provided better performance than others.
Table 12-1: Network Adapter Teaming Types

Switch Assisted Load Balancing can only be used with a switch that supports Port Trunking.
Dynamic Fault Tolerance requires a switch that supports IEEE 802.3ad Link Aggregation
Control Protocol (LACP). With Dynamic Fault Tolerance, the switch ports to which the teamed
ports are connected must also be configured with LACP enabled.
Scenario: Stay and Sleep
The Stay and Sleep reservation system includes a web server and a database server. The
web server is in the company's perimeter network, and the database server is on the internal
network, as illustrated in Figure 12-15. The web server reads from and writes to the database.
Discuss options to protect against the failure of a switch. Assuming that each server has two
network adapters, what additional equipment would you need?

Figure 12-15: Stay and Sleep Server Connections

Power Protection
Ensuring that the server is protected against power failure, power surges, and spikes is another
essential part of business continuity planning. You have already been introduced to how to
configure power redundancy. However, to protect your servers against power failure and power
spikes and surges, you should invest in an uninterruptible power supply (UPS).

Rack-mountable UPS models

HP rack-mountable UPSs, like the R1500 G3 shown in Figure 12-16, are designed for dense
data center environments.

Figure 12-16: R1500 G3 North America

These UPSs offer the following features:

Industry-leading power density (more watts per U-space)
More true power (measured in watts) in smaller form factors (measured in rack U-space)
More performance while saving valuable rack space
Remember that apparent power (VA) is not the same as true power (watts) on an AC
circuit. To calculate watts, multiply the VA by the power factor. The industry-standard power factor is 0.6,
but it might be different for a specific device.
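The calculation in the note above can be worked through in a few lines (an illustrative helper, with the 0.6 factor taken from the text):

```python
def true_power_watts(apparent_power_va: float, power_factor: float = 0.6) -> float:
    """True power (W) = apparent power (VA) x power factor."""
    return apparent_power_va * power_factor

# A 1500 VA UPS at the industry power factor of 0.6 delivers 900 W of true
# power; a device rated with a 0.9 power factor delivers more from the same VA.
print(true_power_watts(1500))       # 900.0
print(true_power_watts(1500, 0.9))  # 1350.0
```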
These UPSs are supported by Extended Runtime Modules (ERMs) that increase the amount of
time the UPS can supply power to the components after power failure. They are also bundled
with the free HP Power Manager software.

The rack mountable UPS portfolio includes the UPS systems shown in Table 12-2.
Table 12-2: Some Rack-mountable UPSs

A number of other UPS models are available that are larger and can supply more
wattage than those shown here. The model chosen will depend on a data center's
requirements.
Online On Demand hybrid technology

The HP rack-mountable UPSs use Online On Demand hybrid technology, which combines the
high efficiency of a line interactive UPS with the stability of a double conversion online UPS
when power fluctuates beyond acceptable limits.
Online On Demand operates in line interactive mode during general use to maximize efficiency
and minimize heat output. In line interactive mode, all power passes through the
inverter/converter, as shown in Figure 12-17, providing protection against surges, spikes, and
other noise. During normal operation, excess power is used to charge the battery. When power
fails, the battery is discharged, and power is converted back to AC power and used to power
connected devices.

Figure 12-17: Line Interactive Mode

If input voltage fluctuates outside of the established range, such as if a generator comes online,
the UPS immediately switches to double conversion online mode to provide the cleanest power
possible. In double conversion online mode, power is converted to DC, the battery is charged,
and then power drawn from the battery is converted back to AC, as shown in Figure 12-18.

Figure 12-18: Double Conversion Online Mode

The Online On Demand hybrid technology provides 97% efficiency in standard mode, even
with output loads as low as 40% of maximum. Hot-swappable batteries and electronics
modules and an automatic bypass system reduce downtime if service is needed.

Enhanced battery management

Batteries that are constantly trickle-charged (a constant voltage feeding a low current to the
battery) reach the end of their useful life in less than half the time of those charged using
advanced techniques such as enhanced battery management technology.
HP enhanced battery management technology incorporates an advanced three-stage battery
charging technique that doubles battery service life, optimizes battery recharge time, and
provides up to a 60-day advanced notification of the end of useful battery life.
HP enhanced battery management provides the following features:
Intelligent battery charging
Advance notification of battery replacement
Superior voltage regulation

Intelligent battery charging

Most manufacturers use a trickle-charging method that dries the electrolytes and corrodes the
plates, thereby reducing potential battery life by up to 50%. The three-stage intelligent battery
charging technique allows the UPS battery to be charged efficiently. First, the HP
UPS rapid charges the battery to 90%. A constant voltage continues until the battery reaches
full capacity. The charger is then turned off, and the HP UPS goes into a rest mode, preserving
the battery for future power failures.

Advance notification of battery replacement

Because UPS batteries are valve-regulated, sealed, and contain lead-acid cells, there has not
been a practical way to provide users with an advanced notification of a battery failure. The
only way to determine that batteries needed replacement was to wait until the power failed,
which would take the servers and computers down with it. Enhanced battery management is
the only technology available now that reliably provides advance notification of battery failure.
A microprocessor tracks the charge and discharge characteristics of the battery and compares
these characteristics to an ideal battery state. By monitoring the battery, the user receives
advance notice when battery replacement is necessary.

Superior voltage regulation

Most UPS devices correct input voltage variations as low as 25% but transfer to battery when
a surge or sag must be filtered in the system. This type of voltage regulation shortens the
battery service life of the UPS.
Innovative HP buck/double-boost voltage regulation ensures consistent input voltage to the
load by automatically bucking it if the voltage is too high or boosting it if it is too low. Voltage
variations that are as low as -35% or as high as +20% of the nominal voltage are corrected
without transferring to the battery. As a result, the number of charge/recharge cycles is reduced,
and the life of the HP UPS battery is extended.
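The buck/boost window described above can be expressed as a simple range check (a hypothetical helper using the -35%/+20% limits from the text):

```python
def corrected_without_battery(nominal_v: float, input_v: float,
                              sag_pct: float = 0.35, surge_pct: float = 0.20) -> bool:
    """True if the deviation can be bucked or boosted without a battery transfer."""
    return nominal_v * (1 - sag_pct) <= input_v <= nominal_v * (1 + surge_pct)

# For a 230 V nominal supply, the correction window is 149.5 V to 276 V.
print(corrected_without_battery(230, 160))  # True: sag within -35%, boosted
print(corrected_without_battery(230, 290))  # False: surge beyond +20%, battery transfer
```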

UPS options
HP UPS systems allow you to scale up to the capacity that you require. To scale UPS systems
together, you connect additional UPS devices to the original UPS through the UPS card slot.
HP UPS systems also offer additional modules to control them, including:

Extended Runtime Module (ERM)

UPS Management Module

ERM option
For applications requiring extended battery backup times, optional ERMs are available for most
rack-mountable HP UPSs. An ERM can extend backup times from minutes to hours, depending
on the load and model used. The HP UPSs that are ERM capable are the R/T3000, R5000,
R7000, R8000/3, R12000/3, and RP36000/3.
The ERM option extends the ability of the UPS to power equipment during a failure. At the
recommended 80% load, one ERM can extend the available UPS run time up to 30 minutes.
An ERM acts as an extra battery for a UPS and attaches to a power receptacle located on the
UPS rear panel. A UPS can support up to four ERMs. You must install an ERM at the bottom of
a rack with the UPS directly above it.
The ERM Configurator ensures that accurate run-time predictions are reported to any network
software communicating with the UPS. Network software uses run-time information to conduct
a timely shutdown of attached servers.

HP rack and power management software

HP offers the following rack and power management software (including sizing tools):
HP Power Manager
HP Power Protector UPS Management Software
HP UPS Management Module
HP Power Advisor

HP Power Manager
HP Power Manager is a web-based application that enables you to manage a single HP UPS
from a browser-based management console. It is standard with all HP UPSs, and it provides
the flexibility to monitor power conditions and to control a single UPS locally or remotely. It
enables you to broadcast alarms, perform orderly shutdowns in the event of power failures, and
schedule power-on and power-off to the UPS and attached equipment.
Power Manager software uses load segment control to schedule startups and shutdowns of
less critical devices, which extends the operation time of the mission-critical devices.
The software continuously manages and monitors HP UPSs. A familiar browser interface
provides secure access to management servers anywhere on the network. Administrators can
define UPS load segments for maximum uptime of critical servers, as shown in Figure 12-19.

Figure 12-19: Adding a Device to a Load Segment

For most UPSs, the receptacles on the rear panel are divided into one or more groups. These
groups, which are called load segments, can be controlled independently. When a load
segment that is connected to less critical equipment is shut down, the runtime for more critical
equipment is extended, providing additional protection.
Administrators can then configure different power failure settings for each load segment, as
shown in Figure 12-20.

Figure 12-20: Power Fail Settings

The exact options available will differ by UPS. These screenshots illustrate one set of
configuration options.
Power Manager also allows you to view the status of a UPS, as shown in Figure 12-21.

Figure 12-21: Power Manager Status

You can also view any active alarms and the date and time at which they occurred, as shown in
Figure 12-22.

Figure 12-22: Power Manager Alarms

HP Power Protector UPS management software

For newer UPSs, HP Power Protector has replaced HP Power Manager. HP Power Protector is
a web-based application that enables administrators to manage an HP UPS from a browser-based
management console. Administrators can monitor, manage, and control a single UPS or
a redundant UPS pair locally and remotely.
HP Power Protector can be used to manage the following UPSs:
R1500 G3
R/T3000 G2
T750 G2
T1000 G3
T1500 G3
A familiar browser interface provides secure access to the UPS Administrator Software and
UPS Client Software from anywhere on the network. Administrators can configure power failure
settings and define UPS load segments for maximum uptime of critical servers. Like Power
Manager, Power Protector can be used to configure extended runtimes for critical devices
during utility power failures.
Power Protector has two components:
HPPP Administrator - Installed on a server connected to the UPS serial or USB port
HPPP Client - Installed on any computer powered by the UPS
A single HPPP Administrator can manage up to 35 HPPP Clients. An example configuration is
shown in Figure 12-23.

Figure 12-23: Power Manager with HPPP Administrator Server Connected by USB

An alternative to installing the HPPP Administrator software on a server is to install a UPS
Network Module in the UPS. This configuration is illustrated in Figure 12-24.

Figure 12-24: Power Manager with a UPS Network Module

You can eliminate the UPS as an SPOF by configuring two identical UPSs in a redundant
configuration, as shown in Figure 12-25.

Figure 12-25: Power Manager Redundant Configuration

This configuration requires a UPS Network Module in each UPS. Also, the combined load must
not exceed the supported load of a single UPS.
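The sizing rule for a redundant pair can be sketched as follows. This is an illustrative check, not part of any HP software: because either UPS must be able to carry everything alone, the combined load is compared against a single unit's rating.

```python
def redundant_pair_ok(loads_watts: list[float], ups_rating_watts: float) -> bool:
    """In a redundant pair, either UPS must carry the entire load by itself."""
    return sum(loads_watts) <= ups_rating_watts

# Two 400 W servers behind a pair of 1000 W UPSs leave headroom;
# adding a third server would overload the surviving unit after a failure.
print(redundant_pair_ok([400, 400], 1000))       # True
print(redundant_pair_ok([400, 400, 400], 1000))  # False
```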
Figure 12-26 shows the user interface for the Power Protector software. The Power Source
node allows you to view information about the UPS. This information includes the amount of
charge remaining, the input and output frequency, input and output voltage, output current,
apparent power, active power, and battery output voltage. A diagram displaying health
information is also displayed.

Figure 12-26: HP Power Protector

UPS Management Module

The UPS Management Module, shown in Figure 12-27, provides the ability to perform
simultaneous network and out-of-band communications. It enables network administrators to
monitor UPS devices and reboot network devices remotely. It provides SNMP functionality,
including power event alerts, network power diagnostics, and remote UPS reboot and testing.
When you use the UPS Management Module in conjunction with HP SIM or other SNMP
capable network management software, power-related problems on the network are quickly
discovered and remedied.

Figure 12-27: UPS Management Module

The UPS Management Module enables you to monitor and manage power environments
through comprehensive control of HP UPS devices. The module can support a single UPS
configuration or provide additional power protection with support for dual-redundant UPS
configurations for no single point of failure. The additional serial ports provide power
management control and flexible monitoring.

The UPS Management Module provides remote management of a UPS by connecting the UPS
directly to the network. You can configure and manage the UPS through a standard web
browser, through a Telnet session, or by connecting to a serial port. The browser interface is
shown in Figure 12-28.

Figure 12-28: UPS Management Module Browser Interface

The management software is already embedded in the management module, eliminating the
need for a management server. You can use this software to:
Customize alerts
Send email notification messages
Send SNMP traps
Monitor and manage UPS devices
Manage independent UPS load segments to provide separate power control of
connected equipment
Display text logs for analysis
Scenario: Stay and Sleep
Discuss the best way to configure power protection for Stay and Sleep's reservation
management system.

Backing up data

Businesses depend on data. In fact, in recent years, the amount of data generated and used by
businesses has grown exponentially. Designing a backup strategy that ensures that a business
can access the data it needs when it needs it requires careful planning. You need to consider:

What data needs to be backed up?

How long does the data need to be stored?
What type of data is it?
What are the security requirements for the data?
How much data loss is tolerable?
What types of restore scenarios must be supported?
Where is the data located?
How much data needs to be backed up?
Many options are available for backing up and retaining data. In this section, we begin with a
look at some ways to characterize data. Next, we examine different strategies for backing up
data. Finally, we pull it all together to look at how to devise a backup strategy that meets
defined business requirements.

About data
Businesses use many types of data generated by different sources. Some types of data include:
Documents created by employees
Emails sent and received by employees
Database transactions of sales, shipping and receiving, accounting, and human
resource activities
Different types of data have different backup options available. Database management
systems, mail servers, and other transactional systems include special capabilities for backing
up data.

Data location
You must also consider where the data is stored. Is the data located on a server at the main
office? On a server at a branch office? Or on an employee's computer?
If the data is stored on employee computers or at a location where there are no on-site IT
personnel, you must ensure that the backup strategy includes automated hands-off backup.

Data retention requirements

Different types of data also have different retention requirements. Some data, such as financial
data, must be kept for a specific length of time to meet compliance requirements. Such data
also must be stored in a secure off-site location to protect against site-level disaster and theft.
Make sure to consider the regulatory requirements for your industry. For example, if a company
is a healthcare provider in the United States, it must consider HIPAA requirements. Some
regulatory documents are listed in Table 12-3.
Table 12-3: Regulatory Documents

The compliance requirements will differ depending on industry and region. Make sure to
consider national, regional, and city requirements.

Data security requirements

The same security precautions must be taken for backups as for primary data sources. Data
backups need to be protected against unauthorized access. Also, compliance regulations for
some types of data require assurance that the data has not been modified.

Data availability
Another important factor to consider is how quickly and under what circumstances data will
need to be restored. Will you need to restore individual files due to user error? Will you need to
get the database for an online web application up and running quickly if the hard disk crashes
or another critical failure occurs? At some time in the far-off future, will you need to recover files
that were backed up when the backup device you are currently using has become obsolete?

Backup window
Another important consideration is the amount of data that needs to be backed up and the
amount of time available to do so. Backups are resource intensive. They require data to be read
from and written to disk. Also, with most types of backup, a file or database record will be
locked and unavailable while it is being backed up. If the backup is being performed across the
network, it will also consume bandwidth. For this reason, most companies require backups to
occur outside of peak business hours.
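A back-of-the-envelope feasibility check for the backup window might look like this (the data size and throughput are hypothetical numbers for illustration):

```python
def backup_hours(data_gb: float, throughput_gb_per_hr: float) -> float:
    """Hours needed to stream the data set at a sustained throughput."""
    return data_gb / throughput_gb_per_hr

# 2 TB of data at a sustained 250 GB/hr takes 8 hours -- exactly filling a
# 22:00-06:00 window, so data growth would soon force a different strategy.
hours = backup_hours(2000, 250)
print(hours, hours <= 8.0)  # 8.0 True
```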

Data tiers
One way to classify data with regard to backup requirements is to think of it in tiers. HP defines
three data classification tiers, as shown in Figure 12-29.

Figure 12-29: Data Tiers

Tier 1 data is characterized as data that is required for a company to operate. For a company
that relies on online sales, this data might be its product catalog and order database. For a
company that manufactures a product, this would be the data that allows the manufacturing
process to occur, including parts inventory, quality measurement data, and assembly
instructions. Tier 1 data is used frequently and requires fast restore times if a failure occurs.
These examples of how specific types of data are categorized are generic. Mission-critical
data in one business might be tier 2 data in another business and vice versa.
Tier 2 data is characterized as data that is used daily, but that is not essential for mission-critical
operations. It requires a restore time of less than four hours. Such data might include
payroll data, human resources data, accounts payable or receivable, and customer contact
information.
Tier 3 data is data that is considered inactive, but that must be archived for regulatory
requirements or other reasons. A next-day restore time is typically sufficient for Tier 3 data.
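The tier definitions above can be summarized as a mapping from required restore time to tier. The one-hour boundary for tier 1 is an assumption for illustration; the text specifies only fast restores for tier 1, under four hours for tier 2, and next-day for tier 3:

```python
def data_tier(max_restore_hours: float) -> int:
    """Map a required restore time to an HP data tier (illustrative thresholds)."""
    if max_restore_hours < 1:    # mission-critical: near-immediate restore (assumed cutoff)
        return 1
    if max_restore_hours <= 4:   # daily-use data: restore in under four hours
        return 2
    return 3                     # inactive/archive data: next-day restore is sufficient

print(data_tier(0.5), data_tier(4), data_tier(24))  # 1 2 3
```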

Backup devices and media

One of the critical decisions you will need to make when designing a backup strategy is the
type of media on which you will store the backups. The following options are available:
Backing up to a network share
Backing up to a dedicated internal disk drive
Backing up to an external hard disk
Backing up to DVD-R or removable media
Using a tape backup system
Using a disk-to-disk (D2D) backup system
Many organizations use a mix of backup systems to meet their requirements.

Network share
If a company has only a couple of servers, it might make sense to back up to a dedicated
backup hard disk drive on one of the servers. Some operating systems, such as Windows
Server 2008, provide backup software that can perform a backup to a network share.
Windows Server backup can perform a manual one-time backup or a scheduled backup to a
network share.
One disadvantage of using Windows Server backup software to back up to a network share is
that you can only store one instance of a backup. When you perform a backup, any backups
stored at the location are erased.

Dedicated internal hard disk drive

If you are only concerned with backing up the data on a single server, you might be able to
manage using a dedicated internal disk drive as the backup location. Windows Server backup
software allows you to create a scheduled or manual backup to an internal disk drive. If you
dedicate that disk to backups, it will not appear in Windows Explorer.
Make sure that you are backing up to a different physical disk drive than the drive
where the data is stored. Backing up to a different logical volume will not protect the data
against disk failure.

External hard disk

Backing up to an external hard disk drive is similar to backing up to an internal disk drive. The
benefit to backing up to an external drive is that you can store the backups offsite to protect
against a site-level disaster, such as a fire.
The HP StorageWorks RDX Removable Disk Backup System is an easy-to-use alternative for
performing backups that can be stored offsite. Data cartridges are available in 320 GB, 500 GB,
or 1 TB capacities. The latest version connects to a server using USB 3.0 and provides a 360
GB/hr transfer rate.
The HP RDX appears as a drive in Windows Explorer and supports drag-and-drop file access.
The HP RDX Continuous Data Protection Software backs up the entire system automatically,
performs file deduplication, and maintains multiple file versions. It also supports bare metal
recovery.

deduplication
A compression method that works by removing duplicate copies of data.
bare metal recovery
The process of recovering only the files necessary to start the operating system.

The HP RDX is available in internal, external, and rack-mount models.

DVD and removable media

Another option for performing backups that will be stored offline is to back up to a DVD or other
optical media. Windows Server backup allows you to perform a manual backup to DVD.
However, you cannot restore individual files from a DVD backup. When using Windows Server
backup, the DVD option is the only option that compresses the backup file.

Tape backup systems

Physical tape drives are still very common in modern IT environments, in large part because
they provide affordable long-term storage. The cost per TB is less than other storage options,
particularly when you must store a lot of data. Tape is scalable because you simply need to
purchase additional tape cartridges as the data storage requirements grow. Tape also has a
much lower TCO than disk storage because of lower power requirements. Tape is also well-suited
for offline backups.
Tape is fast at handling sequential workloads. Most backup environments generate extremely
sequential workloads. As you back up a server to tape, you literally stream the contents of the
server to tape. There is no need to constantly stop, rewind, or fast forward the tape drive during
a normal backup.

DAT tape drives

Digital Audio Tape (DAT) media is a tape technology introduced by Sony in 1987. It is still used
today. HP offers internal, external, and rack-mount DAT tape drives. The capacity ranges
from 72 GB to 320 GB. The characteristics of the various recording technologies are described
in Table 12-4.
Table 12-4: DAT Characteristics

Write Once Read Multiple (WORM)

A backup media that allows data to be written once and not modified. Some types of data must be archived to WORM to
meet compliance requirements.

You must purchase Digital Data Storage (DDS) tapes instead of DAT tapes because
DDS offers higher data integrity. DAT should be used only for audio tape. HP DAT drives will
reject DAT audio tapes.
USB offers the advantage of plug-and-play immediate accessibility. SAS allows you to back up
multiple servers to a single tape drive. DAT tape drives are supported on ProLiant ML and DL
servers, but not on ProLiant BL or SL servers.

LTO tape drives

HP is a leading member of the Linear Tape Open (LTO) consortium.
In addition to being used to refer to the consortium, the term LTO is often used to refer to the
type of tape or drive being used. It would be common to refer to a tape drive or tape cartridge
that conforms to the LTO standards as an LTO drive or an LTO tape.
Current LTO standards are shown in Table 12-5.
Table 12-5: Current LTO Standards

As Table 12-5 shows, an LTO-5 tape can natively store ~1.5 TB of data and potentially up
to 3 TB with compression. It can also transfer data at 140 MB/s and encrypt data. There are
other non-LTO tape standards, but LTO is the most common, especially in the open systems
space.
HP has committed to ensuring that its tape devices can read two LTO generations
back. Therefore, a tape drive that writes LTO-5 can read LTO-3, LTO-4, and LTO-5.
Open Systems
Open Systems refers to most of the computer systems seen today, including Microsoft Windows, Linux, and UNIX systems.
Open systems computers generally run x86 or x64 platforms, such as HP ProLiant. Open Systems are in contrast to
Mainframe and Minicomputers.
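The Table 12-5 figures for LTO-5 imply a minimum time to fill a cartridge, which is useful when planning backup windows. This is a rough calculation; real throughput depends on the data and the host:

```python
def full_tape_hours(native_tb: float, mb_per_s: float) -> float:
    """Hours to fill a tape at its native transfer rate (1 TB = 1,000,000 MB here)."""
    return native_tb * 1_000_000 / mb_per_s / 3600

# LTO-5: ~1.5 TB native at 140 MB/s takes about three hours to fill. With 2:1
# compression both capacity and throughput double, so the wall-clock time to
# fill the (now ~3 TB) cartridge stays roughly the same.
print(round(full_tape_hours(1.5, 140), 1))  # 3.0
```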

The Linear Tape File System (LTFS), which was introduced with LTO-5, allows users to access
files on a mounted tape using standard file operations.

Ultrium tape drives

HP Ultrium tape drives are LTO tape drives. Available models support LTO-3, LTO-4, and
LTO-5. Internal, external, and rack-mount options are available. An external StorageWorks
LTO-5 Ultrium SAS tape drive is shown in Figure 12-30.

Figure 12-30: External StorageWorks LTO-5 Ultrium SAS Tape Drive

Table 12-6 shows the options available.

Table 12-6: Ultrium Tape Drives

You can connect an Ultrium tape drive to a ProLiant server by using a host bus adapter or a
Smart Array controller.
Ultrium tape drives include HP Library and Tape Tools (LTT) and HP TapeAssure.

HP TapeAssure
TapeAssure monitors backups and sends alerts when reliability standards are not met. It stores
information about drive and tape history in a database. You can export that database as a .CSV
file and import it into a spreadsheet program, such as Microsoft Excel. You can then analyze
the data to obtain information about media and drive utilization, as well as backup performance.
TapeAssure provides information about when you need to perform proactive maintenance,
such as:
Cleaning drives
Retiring tapes
Retiring drives
If you have multiple tape drives, TapeAssure can consolidate health information into a single

Tape libraries
A tape library is a specialized computer system that contains multiple tape drives and can
house hundreds and sometimes thousands of tapes. Large tape libraries can be taller than a
human being and several meters in length. Within these tape libraries there are:
Cartridge slots for keeping tape cartridges that are not in use.
Tape drives for reading and writing to tapes.
A robot, sometimes referred to as a picker, for moving tapes between cartridge slots
and tape drives.
The robot roams up and down inside the library, moving tapes in and out of drives and storing
them in slots when they are not in use. The HP MSL Tape Libraries are designed for SMBs.
They range in size from 2U to 8U, and they have between 24 and 96 cartridge slots. Figure 12-31 shows the range of MSL tape libraries.

Figure 12-31: MSL Tape Libraries

Models with Fibre Channel, SAS, and Ultra320 LVD SCSI interfaces are available.

Tape Autoloaders
Tape Autoloaders are designed for small and remote offices and are considerably smaller than
MSL offerings.
The Autoloader pictured in Figure 12-32 is the HP StorageWorks 1/8 G2 Tape Autoloader. It
has two sets of four cartridge slots, giving it eight cartridge slots. The cartridge slots are shown on the left and
right of the front bezel.

Figure 12-32: HP StorageWorks 1/8 G2 Tape Autoloader

The 1/8 G2 tape autoloader can be used with Ultrium tape cartridges. The specifications for
three Ultrium autoloaders are shown in Table 12-7.
Table 12-7: Ultrium Autoloader Specifications

Compression is 2:1. Therefore, capacity and transfer rates will be doubled for compressed data.
You cannot use a Smart Array or other RAID controller with a tape autoloader. You
need to use an HBA instead.

One button disaster recovery (OBDR)

OBDR is a patented HP technology that is available with all modern ProLiant servers and HP
tape drives. OBDR allows administrators to quickly and easily restore an entire ProLiant server
(OS and all applications). In an OBDR situation, the tape drive emulates a CD-ROM drive and
boots the server from tape. Once the boot has completed, the tape drive continues to restore
applications and data.

D2D StoreOnce Backup Systems

A Disk to Disk (D2D) StoreOnce Backup System, like the D2D 2503i shown in Figure 12-33,
allows you to automatically back up the data on multiple servers to hard disk.

Figure 12-33: D2D 2503i

During backup, the system performs HP deduplication, which compares blocks of data. If a
block of data already exists, only a pointer to the data is stored in the backup, as illustrated in
Figure 12-34. Deduplication increases the usable capacity by up to 20 times, with typical data yielding a ratio of approximately 5:1.
pointer
A programming object that references a block of data by its address in memory or storage.

Figure 12-34: Deduplication

The actual compression ratio will vary based on the nature of the data. Some data has
more redundancy than other data.
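The block-comparison idea can be illustrated with a minimal sketch. The fixed-size blocks and SHA-256 index here are illustrative assumptions; StoreOnce's actual chunking and hashing scheme is proprietary and more sophisticated:

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4):
    """Split data into fixed-size blocks; store each unique block once
    and record an ordered list of block references (the 'pointers')."""
    store = {}   # hash -> unique block bytes
    refs = []    # ordered references that reconstruct the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block      # new block: store it once
        refs.append(digest)            # existing block: pointer only
    return store, refs

def rehydrate(store, refs):
    """Reassemble the original stream from the pointers (as is done
    before copying deduplicated data out to tape)."""
    return b"".join(store[d] for d in refs)

data = b"AAAABBBBAAAABBBBCCCC"         # 5 blocks, only 3 unique
store, refs = deduplicate(data)
print(len(refs), "blocks referenced,", len(store), "stored uniquely")
assert rehydrate(store, refs) == data
```

Here five logical blocks are stored as three physical blocks plus pointers; the savings grow with the amount of redundancy in the data.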
A D2D system can be configured to emulate tape storage by establishing a virtual tape library.
Backing up to a virtual tape library provides the best performance. If a network already has a
tape infrastructure, backing up to a virtual tape library can allow you to use existing tape
automation licensing with your backup application.
As an alternative, you can configure the D2D Backup System as a NAS target. This allows you
to use it with any backup application that can back up to a file share. Some disadvantages to
using a NAS target include the following:
Deduplication only occurs on files larger than 24 MB.
Only one user at a time can access a file on the NAS target.
There are limitations on the number of servers that can be backed up to the same file
share concurrently.

One advantage to using a D2D StoreOnce Backup System, particularly if you have remote
sites, is that you can replicate data between backup systems at different sites. Figure 12-35
shows a company that has a main office and a branch office. The StoreOnce system at the
branch office replicates data to the StoreOnce system at the main office.

Figure 12-35: D2D Replication

Because data is deduplicated, only changes to the data will be replicated across the WAN after
the initial full data replication occurs.

Storing to tape
Another advantage of a D2D StoreOnce Backup System is that you can periodically transfer
data from the StoreOnce system to a tape cartridge. Doing so allows you to meet offsite storage
and compliance requirements.

Data will be rehydrated before it is saved to tape, so the storage capacity required on the
tape will be significantly larger than the disk storage capacity.
rehydration
The process of adding the redundant copies of data removed by deduplication back into the backup.

Backup types
As you can imagine, backing up the entire server each time you perform a backup uses a lot of
storage capacity and requires a significant amount of bandwidth, either on the HBA or across
the network. As you learned earlier, one method of dealing with this is to perform deduplication.
However, deduplication can only be used when performing a D2D backup.
When backing up to tape, you often need to combine various types of backups. These include:
Full backup
Differential backup
Incremental backup
Synthetic full backup
Zero downtime backup (ZDB)
Snapshot backup
System state backup

Full backup
A full backup (also called a normal backup) is a complete backup of a system or all files in the
backup set. The backup set might be the entire volume or only a subset of data, for example, a
set of folders or a database.
Both differential and incremental backups require a full backup as a basis.

Differential backup
A differential backup is one that backs up all files in the backup set that have changed since the
last full backup. The size and duration of a differential backup starts out small, but as the length
of time since the last full backup increases, the size and duration of the next differential backup
becomes greater. The amount of redundant data stored is less than if you relied on full
backups, but there is still quite a bit of redundancy, as illustrated in Figure 12-36.

Figure 12-36: Differential Backup

When your backup strategy includes differential backups, you need to first restore the full
backup. Next, you need to restore the most recent differential backup. The restore chain of a
differential backup always consists of two backups. This is illustrated in Figure 12-37.

Figure 12-37: Differential Restore

restore chain
The backups that must be restored to recover the system or the data that it contains. A restore chain always begins with a
normal backup.
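The selection rule for a differential restore can be sketched as follows; the backup records and day labels are made-up examples, not a backup-application API:

```python
def differential_restore_chain(backups):
    """Given backups ordered oldest to newest, each a ('full' or
    'differential', label) pair, return the restore chain: the last
    full backup plus the most recent differential taken after it."""
    last_full = max(i for i, (kind, _) in enumerate(backups) if kind == "full")
    chain = [backups[last_full]]
    diffs_after = [b for b in backups[last_full + 1:] if b[0] == "differential"]
    if diffs_after:
        chain.append(diffs_after[-1])  # only the most recent differential
    return chain

week = [("full", "Sun"), ("differential", "Mon"),
        ("differential", "Tue"), ("differential", "Wed")]
print(differential_restore_chain(week))
# -> [('full', 'Sun'), ('differential', 'Wed')]
```

No matter how many differentials have accumulated, the chain never exceeds two backups.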

Incremental backup
An incremental backup is one that backs up all the files in the backup set that have changed
since the last full or incremental backup. Incremental backups require less time and storage
capacity than differential backups. However, they take more time to restore. When you restore a
computer from a backup set based on full and incremental backups, you need to restore the full
backup and then restore the incremental backups in order, as illustrated in Figure 12-38.

Figure 12-38: Incremental Backup and Restore

On a Windows server, files are marked as changed using the Archive bit. When an
incremental backup is performed, the Archive bit is cleared. When a file is modified, the Archive
bit is set.
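The archive-bit bookkeeping can be modeled in a few lines (a simplification: Windows stores the bit as a per-file attribute, modeled here as a dictionary; a differential backup would read the bit without clearing it):

```python
# True = archive bit set (file changed since last full/incremental backup)
files = {"report.doc": True, "budget.xls": True}

def modify(name):
    files[name] = True                 # any write sets the archive bit

def incremental_backup():
    """Back up files whose archive bit is set, then clear the bit."""
    backed_up = [n for n, changed in files.items() if changed]
    for n in backed_up:
        files[n] = False
    return backed_up

first = incremental_backup()           # captures both files
modify("report.doc")
second = incremental_backup()          # captures only the modified file
print(first, second)
# -> ['report.doc', 'budget.xls'] ['report.doc']
```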

Synthetic full backup

A synthetic full backup is one that combines any number of incremental backups with a full
backup to create a single backup that can be restored in the event of failure. Creating a
synthetic full backup requires a backup application like HP Data Protector. The process for
creating a synthetic full backup is:
1. Create a full backup to disk or tape.
2. Create incremental backups to disk.
3. Create a synthetic full backup and store it to either disk or tape.
The process requires three agents: two Restore Media Agents (RMAs) and a Backup Media
Agent (BMA). As illustrated in Figure 12-39, the RMA reads the full backup from the backup
medium. The data is sent to another RMA. This RMA reads and merges the data from the
incremental backups and sends the consolidated data to the BMA. The BMA then writes the
synthetic full backup to the backup medium.

Figure 12-39: Synthetic Full Backup Process

The synthetic full backup can later be merged with subsequent incremental backups to create a
new synthetic backup.
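Conceptually, the consolidation performed by the agents is a "latest version wins" merge. The file-to-version maps below are hypothetical stand-ins for backup contents, not the Data Protector agent protocol:

```python
def synthesize_full(full, incrementals):
    """Merge a full backup with its incrementals (oldest first): each
    incremental's file versions overwrite earlier ones, producing a
    single image that can be restored in one step."""
    merged = dict(full)
    for inc in incrementals:
        merged.update(inc)
    return merged

full = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
inc1 = {"a.txt": "v2"}                 # a.txt changed on Monday
inc2 = {"b.txt": "v2", "d.txt": "v1"}  # b.txt changed, d.txt added Tuesday
print(synthesize_full(full, [inc1, inc2]))
# -> {'a.txt': 'v2', 'b.txt': 'v2', 'c.txt': 'v1', 'd.txt': 'v1'}
```

The result can itself serve as the full backup for the next round of incrementals, which is what allows the cycle to repeat without another conventional full backup.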

Virtual full backup

A virtual full backup is similar to a synthetic full backup, except that all backups are stored in the
same file library, as shown in Figure 12-40. The library must use the distributed file media format.

Figure 12-40: Virtual Full Backup

distributed file media format
A media format, available with the file library, that supports virtual full backup.

A virtual full backup requires less storage capacity than a synthetic full backup.

Zero downtime backup (ZDB)

Conventional methods of backing up to tape are not well suited for large database applications.
Either the database has to be taken offline or, if the application allows it, the database has to be
put into hot-backup mode while data in it is streamed to tape. The first can cause major
disruption to the application's operation. The second can produce many large transaction log
files, which puts extra load on the application system.
hot backup mode
A mode in which all changes to a database are written to transaction logs instead of to the database itself. The database is
updated from the transaction logs after it is fully functional.
transaction log
A file that stores a list of changes made to a database. Changes are saved to the database after they have been committed.
Changes can be rolled back if all actions associated with a specific transaction could not be made successfully.

Zero downtime backup (ZDB) uses disk array technology to minimize the disruption. In very
general terms, a copy or replica of the data is created or maintained on a disk array. This is very
fast and has little impact on the application's performance.
replica
An exact copy of data.

The replica can be the backup, or it can be streamed to tape without further interruption to the
application's use of the source database, as shown in Figure 12-41.

Figure 12-41: Zero Downtime Backup Process

Snapshot backup
A snapshot backup is one that copies data as it changes in near real-time. A copy-on-write
snapshot backup copies the old data to a new volume before it is overwritten. A redirect-on-write snapshot writes new data to a different location and preserves the old data.
The advantage to a snapshot backup is that a restoration is very fast, and at the same time, it
allows minimal or no data loss. However, when using a snapshot backup, you must ensure that
you monitor storage capacity carefully.
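The difference between the two snapshot styles comes down to which copy moves when a write arrives. The block-array model below is a toy illustration; real arrays work at the volume level with allocation maps:

```python
def cow_write(volume, snapshot, index, new_value):
    """Copy-on-write: the old block is copied to the snapshot area
    before the original location is overwritten."""
    if index not in snapshot:
        snapshot[index] = volume[index]   # preserve old data first
    volume[index] = new_value

def row_write(volume, redirects, index, new_value):
    """Redirect-on-write: new data goes to a different location; the
    untouched original blocks serve as the snapshot."""
    redirects[index] = new_value          # original volume unchanged

vol, snap = ["A", "B", "C"], {}
cow_write(vol, snap, 1, "B2")
print(vol, snap)       # volume updated in place, old block saved aside

vol2, redir = ["A", "B", "C"], {}
row_write(vol2, redir, 1, "B2")
print(vol2, redir)     # volume preserved, new block written elsewhere
```

Copy-on-write pays its copy cost on every first write to a block; redirect-on-write defers that cost but makes reads of current data follow the redirect map. Either way, changed blocks consume extra capacity, which is why monitoring matters.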

System state backup

A system state backup is one that backs up only the files necessary to restore the operating
system to a functioning state. It does not include backups of data files or applications.
When you perform a system state backup of an Active Directory domain controller, the
Active Directory database is backed up in addition to operating system files.

Backup strategy
Now that you have a general understanding of the backup technologies available, we will look
at some guidelines for devising backup strategies.

Defining requirements
Defining objectives and constraints of your backup strategy requires answering questions, such
as those discussed here.

What are your organizational policies regarding backups and restores?

Some organizations already have defined policies on archiving and storing data. Your backup
strategy should comply with these policies.

What types of data need to be backed up?

List all types of data existing in your network, such as user files, system files, web servers, and
large relational databases.

How long is the maximum downtime for recovery?

The allowed downtime has a significant impact on the investments into network infrastructure
and equipment needed for backups. For each type of data, you should list the maximum
acceptable downtime for recovery, that is, how long specific data can be unavailable before it is
recovered from a backup. For example, user files may be restored in two days, while some
business data in a large database would need to be recovered in two hours. Recovery time
consists mainly of the time needed to access the media and the time required to actually restore
data to disks. A full system recovery takes more time because some additional steps are required.

How long should specific types of data be kept?

For each type of data, list how long the data must be kept. For example, you may only need to
keep user files for three weeks, while information about company employees should perhaps
be kept as long as five years or more.

How should media with backed up data be stored and maintained?

For each type of data, you should list how long the media with data must be kept in a vault, a
safe, or other external location, if you use one. For example, user files typically are not stored in
a vault at all, while order information may be kept for five years, with verification of each storage
medium after two years.

How many media sets should the data be written to during backup?
You should consider writing critical data to several media sets during backup to improve the
fault tolerance of such backups or to support storing the data at multiple locations. Object
mirroring increases the time needed for backup.

How much data needs to be backed up?

You should list the estimated amount of data to be backed up for each type of data. The
estimated quantity of data influences the time needed for backup, and this information helps to
choose the right backup devices and media for backup.
When a very fast and large disk must be backed up on a slower device, you should consider
the possibility of backing up one hard disk through multiple concurrent Disk Agents. Starting
multiple Disk Agents on the same disk accelerates the backup performance considerably.

How often does data need to be backed up?

For each type of data, you should list how often the data needs to be backed up. For example,
user work files should probably be backed up on a daily basis, system data on a weekly basis,
and some database transactions twice a day.

Factors influencing your backup strategy

A number of factors influence how you should implement your backup strategy. You need to
understand all of these factors before preparing your backup strategy. For example, you should consider:
Your company's backup and storage policies and requirements.
Your company's security policies and requirements.
Your physical network configuration.
The computer and human resources available at different sites of your company.

Preparing a backup strategy plan

For the most critical systems, you must decide whether the backup should be stored at a remote
location to protect against site-level disaster. You also must devise a restore and recovery
process. Finally, you should define security for the backups.
You should also list the company's types of data and the way you want to combine those data
types in backup specifications, including the time frames available for backups. The company's
data can be divided into categories like company business data, company resource data,
project data, and personal data. Each of these types of data can have its own specific backup requirements.
Next, you need to determine how backups are scheduled. You should consider using the
staggered approach, whereby full backups are scheduled for different servers on different days
to avoid network load, device load, and time window issues.
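The staggered approach can be generated mechanically by assigning each server's weekly full backup to a different night. This round-robin sketch is illustrative (the server and day names are made up); any scheduler or backup application can express the same policy:

```python
def stagger_full_backups(servers, days):
    """Assign each server's weekly full backup to a night in
    round-robin order so that no single night carries every full
    backup at once."""
    schedule = {day: [] for day in days}
    for i, server in enumerate(servers):
        schedule[days[i % len(days)]].append(server)
    return schedule

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
servers = ["file1", "file2", "db1", "mail1", "web1", "web2"]
print(stagger_full_backups(servers, days))
# Mon carries file1 and web2; every other night carries one full backup.
```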
You also need to plan for data retention and storing information about backups. Your plan
should protect data from being overwritten by newer backups for a specified amount of time.
You should define the period of time that you should store information about backup versions,
the number of backed up files and directories, and messages stored in the database. When
Data Protector is used, this information is stored in the Catalog Database. As long as this
catalog protection has not expired, backed up data is easily accessible.
You should determine which devices to use for backups. Connect the backup devices to
systems with the largest amount of data so that as much data as possible is backed up locally
and not over the network. This approach increases the backup speed.
If you need to back up large amounts of data, you should consider using a library device or a
disk backup. D2D devices decrease the amount of backup time required, support synthetic and
virtual backups, and decrease the required storage capacity through deduplication.
You also must determine what type of media to use, how to group the media into media pools,
and how to place objects on the media. Decide whether to store media in a vault. Consider
duplicating backed up data during or after backup for storing in a vault.
Finally, you must identify the users who will act as backup administrators and operators, as well
as the rights those users will require.

Advanced Fault Tolerance

A backup strategy is not always sufficient, particularly when a company has mission-critical
servers that must be operational constantly throughout the day and night. In these cases, a
business needs multiple instances of mission-critical servers so that if one server fails, another
server can pick up the load. This process is known as failover. Depending on the server's role,
there are three possible ways to achieve this type of server availability:
Failover clusters
Network load balancing (NLB)
Replication

Replication is used primarily with database applications and directory services. Data is
transferred between servers so that redundant copies of data exist. A variety of replication
models are used, all of which vary by product and are beyond the scope of this course.
Some additional uses for replication include:
Keeping data and services geographically close to the users who need them.
Dividing the load between servers based on fairly static requirements. For example,
you might have one copy of the database used for analysis and a different copy used
for transactions.
A common use of replication is to replicate Active Directory data between domain controllers.

Clustering was mentioned earlier in the course as a way of providing automated failover
support, enhanced computing capabilities, and the ability to balance the computing load
requirements between multiple servers. To understand these technologies, we will now take a
quick look at how they are implemented on Windows Server.
Windows Server 2008 R2 supports two basic cluster models. These are:
Failover cluster
Network Load Balancing (NLB) cluster
Each has a particular role in supporting server solutions, so we will take a quick look at each.

Failover cluster
You can get the highest degree of availability for server solutions by using failover clustering. A
failover cluster is made up of two or more solution servers, typically referred to as nodes, with
shared disk storage (Figure 12-42).

Figure 12-42: Failover Cluster

Shared storage can be implemented as a NAS, SAN, or DAS. Nodes use unicast messages
between each other to test and ensure that other servers in the cluster are still working properly.
In a failover cluster, when one server fails, another takes over providing the service. Failover
occurs with minimal disruption of service. This fast failover without interrupting network
operations makes failover clusters an excellent support option for mission-critical applications.
Typical uses include:
Messaging applications
Database applications
Critical file and print servers
Failover clusters are supported on Windows Server 2008 R2 Enterprise and Datacenter
editions only.

NLB cluster
Another technology that can help with server availability and help optimize the performance of
some network applications is network load balancing (NLB). Support is provided for TCP/IPbased services, including:
Web servers
FTP servers
VPN servers
Streaming media services
When setting up NLB, the servers taking part are configured as a virtual cluster. Windows
Server 2008 R2 allows you to include up to 32 nodes in a cluster (Figure 12-43).

Figure 12-43: NLB Cluster

You can add or remove nodes from the cluster as load requirements change. Implementing
NLB clustering does not require any changes to the server hardware or to supported applications.
In an NLB cluster, each server runs a copy of the supported application or applications. The
cluster is accessed through a single cluster IP address, but each machine within the cluster still
has its own unique IP address. The NLB cluster distributes client requests between the
computers making up the cluster. If a node fails, the load is redistributed among the remaining
operational nodes.
One advantage of this configuration is that you can take a server offline for
maintenance and then return it to the cluster without disturbing operations.
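The distribution behavior can be sketched as a deterministic hash of the client address across the live nodes. This is a simplification: Windows NLB actually runs a distributed filtering algorithm on every node rather than a central dispatcher, but the redistribution effect on node failure is similar:

```python
def hash_ip(ip):
    """Simple deterministic hash of a dotted-quad address (toy example)."""
    return sum(int(octet) for octet in ip.split("."))

def pick_node(client_ip, nodes):
    """Map a client deterministically onto one of the operational
    nodes; removing a failed node redistributes its clients."""
    live = sorted(nodes)
    return live[hash_ip(client_ip) % len(live)]

nodes = ["node1", "node2", "node3"]
served_by = pick_node("10.0.0.7", nodes)
print(served_by)              # the client always lands on the same node
nodes.remove(served_by)       # that node fails or is taken offline
print(pick_node("10.0.0.7", nodes))   # client redistributed to a survivor
```

Because the mapping depends only on the client address and the set of live nodes, a node can be drained for maintenance and re-added without reconfiguring clients.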

HP Linux cluster solutions

In addition to Windows-based solutions, HP has cluster solutions based on ProLiant G7 or
Gen8 servers. Rather than Windows, these solutions come with Linux preinstalled (Figure 12-44).

Figure 12-44: HP Cluster Platforms

At the time this course was written, HP cluster platform information was available on the HP website.

Scenario: Stay and Sleep
Compare server fault tolerance options for the Stay and Sleep reservation system. Which
options are supported for each server? What are the advantages and disadvantages of each?

In this chapter, you learned:
Identifying and removing single points of failure is an important part of a business
continuity plan.
Advanced Memory Protection allows you to eliminate memory as an SPOF.
Dual-homing and Network Adapter Teaming are two ways to add fault tolerance to
the network infrastructure.
A UPS protects against power failure, power spikes, and surges.
The design of a backup strategy requires you to consider data security requirements,
tolerance for data loss, archival requirements, amount of data, and the location of the data.
An HP StorageWorks RDX Removable Disk Backup System is an external drive that
has removable data cartridges.

A tape autoloader has slots for multiple cartridges but is smaller than a tape library.
A tape drive emulates a CD-ROM drive with OBDR and boots the server from tape.
A D2D StoreOnce Backup System automatically backs up data using deduplication.
A differential backup backs up all data that has changed since the last full backup.
An incremental backup backs up all data that has changed since the last full or
incremental backup.
A synthetic full backup is a backup method that combines any number of incremental
backups with a full backup to create a single backup that can be restored in the event
of failure.
A virtual full backup is similar to a synthetic full backup, except that all backups are
stored in the same file library.
A zero-downtime backup creates a database replica on a disk array.
Replication, network load balancing, and clustering are three ways to provide server availability.

Review Questions
1. Which memory protection mode provides the best fault tolerance?
2. With On-line On Demand technology, when does a UPS use double conversion?
3. What is required to implement a redundant UPS configuration using R1500 G3 UPSs?
4. Which technology minimizes the amount of storage capacity required to store backups?
5. A backup plan includes a weekly full backup and daily differential backups. Which
backups do you need to restore if a hard disk fails?
6. Which type of cluster requires shared storage?

True or false
1. A JBOD does not have any single points of failure.
2. Adding an ERM to a UPS increases the amount of time the servers connecting to the
UPS can operate after power failure.
3. Up to 35 servers can be connected to a single UPS as HPP Clients.
4. The backup strategy should be applied to all data stored on company servers.
5. Tape backup uses less power than D2D backup.
6. Data stored on a D2D StoreOnce Backup System must be rehydrated before it can be
stored to tape.
7. An NLB cluster requires shared storage.


Fill in the blank
1. You can use _____________ to enable Advanced Memory Protection on a ProLiant server.
2. Dual-homing provides fault tolerance for ____________ traffic but not for
_______________ traffic.
3. The _____________ allows you to create policies that provide for the orderly shutdown of
servers in the event of power failure of a circuit connected to an R1500 G3 UPS.
4. Ultrium tape drives read and write data _______________.
5. OBDR can be used to boot a server from _____________.
6. A ____________ uses deduplication.
7. A ____________ backup must be the base backup when using incremental backups.
8. A ___________ backup merges full and ____________ backups onto disk or tape.

Essay questions
1. Write an essay comparing the three memory protection modes available with Advanced
ECC. Discuss the level of protection each provides, any drawbacks, and requirements.
2. Explain the benefits of On-line On Demand technology.
3. Choose synthetic full, virtual full, or ZDB backups. Explain the advantages and
configuration of the backup strategy and apply it to a customer scenario.

Scenario questions
Scenario: MedDev
MedDev has four offices: the main office, a manufacturing facility on the lowest floor of the
main office, and three branch offices. The branch offices each have a tower file server that
contains data used by field engineers and sales. Data is stored in an internal hard disk on
each server. Each branch office has a switch and a router that connects the branch office to
the Internet and acts as a VPN endpoint. All devices connect to surge protectors.
A database server at the main office stores accounting, HR, and customer records. The main
office also houses a file server that contains product designs. The manufacturing facility has
two database servers: one responsible for manufacturing control and the other for inventory
management. Data at the main office and the manufacturing facility is stored on an iSCSI
SAN that has six hard disks, configured as a RAID 5 volume. There are two switches at the
main office. One is used to connect the servers and the other to connect the client devices. A
router connects the main office to the Internet and acts as a VPN endpoint. There is one
switch at the manufacturing facility. The main office and manufacturing facility equipment is
rack-optimized. The PDUs connect directly to the same AC outlet circuit.
The owner of MedDev has contracted you to design a business continuity strategy that meets
the following requirements:
* The manufacturing process must be shut down in an orderly fashion if power fails.
* Manufacturing cannot be offline due to equipment failure.
* All accounting, HR, inventory, and customer data must be recoverable within 24 hours.

* Accounting and field engineer data archives must be kept for 5 years.
* Product designs must be accessible in the event of a site-level disaster.
* Storage space used for archives must be kept to a minimum.
* Total cost of ownership should be minimized.

1. Make a list of additional questions that you should ask to design a business continuity
strategy for MedDev.
2. Identify the SPOFs in the current configuration.
3. Research current products on the HP website.
4. Design two possible business continuity strategies and identify the advantages and disadvantages of each.


Chapter 13: Configuration Management

Keeping a data center running smoothly requires vigilance and ongoing change. Components
wear out, failures occur, workloads change, and new requirements emerge. As a server
administrator, you will need to respond to changes by performing upgrades and updates of
hardware, software, and firmware.
We begin this chapter with a look at the things that you need to consider when planning an
upgrade. Next, we look at the specific steps for upgrading various server components. From
there, we look at the steps required to update server firmware and software. We will use an HP
DL360 G7 server for most of our examples. Then, we look at the importance of version
management and the process of managing versions using HP tools. We conclude the chapter
with a look at upgrading a P2000 storage system.

In this chapter, you will learn how to:
Describe HP version control solutions.
Install and configure HP version control solutions.
Install and configure server components.
Perform hardware upgrades.

Perform firmware upgrades.

Perform software upgrades.

Planning for Upgrades

There are many reasons for upgrades. Sometimes, an upgrade project is initiated by the IT
department in response to a failing component, new version of an application, or performance
issue. Other times, it is required as part of a project initiated by the business side of an
organization. In either case, the upgrade will require appropriate planning and preparation to
have as little negative impact as possible.
Although there might be some variation, depending on the reason for the upgrade and the
business environment, as a general rule, the planning and preparation phase should include
the following steps:
1. Research upgrade options and their ramifications.
2. Obtain approval for purchasing the required components.
3. Develop an upgrade plan and schedule.
4. Obtain approval for the upgrade plan and schedule.
5. Notify users of the impact the upgrade will have.

Researching options
The first step that you need to take to prepare for any upgrade is to research the options
available and propose the best solution. During your research, you need to answer the
following questions:
Which upgrade options are available to meet the technical requirements?
Which upgrade options are supported by the server?
Do additional components or subsystems need to be upgraded?
What is the cost of the proposed upgrade?
There might be other questions that you need to answer, so make sure that you make a list of
questions before beginning your research.
There are a number of resources available for researching upgrades. A good place to start is
the manufacturer's website. The HP website has a number of documents and sizers available
to help you research available options, including:
White papers.
Rack and Power Sizing Tool.
User guides and other manuals.
Product reviews.

Budget approval

After you have identified the components required for the upgrade, you need to put together a
document showing their purchase price and any costs for installation. In most companies, the
purchase needs to be approved by a manager and possibly also by executive-level personnel.

Upgrade plan and schedule

Creating an upgrade plan requires more research. At this point, you must estimate how much
time it will take to complete the upgrade and determine whether additional personnel will be
required. You also need to identify any preparations that need to be made before the upgrade.
Make sure to evaluate the impact of the upgrade on power and cooling requirements, device
drivers, and firmware versions.
The upgrade plan should include the following elements:
Estimated server downtime
Test plan
Rollback plan
Test plan
A set of steps taken to verify that the upgrade was successful.
Rollback plan
A set of steps that will be taken if the upgrade is not successful.

Whenever possible, the upgrade should be scheduled at a time that will have the least possible
impact on business operations. When 24x7 availability is required, you will need to deploy a
spare server before the upgrade to avoid downtime. If clustering is used, you can failover to the
passive server while upgrading the active server.
You will need to ensure that the upgrade plan is approved before the implementation date. All
users who will be impacted by the upgrade or the downtime it causes should be given as much
advance notice as possible.

Preparing for the upgrade

After the upgrade plan has been approved, you can begin making preparations for the upgrade.
Depending on the nature of the upgrade, these preparations might include:
Backing up the server.
Updating the firmware.
Upgrading the operating system software.
Upgrading the power and/or cooling infrastructure.
Distributing and posting notices of downtime.
Consult the user guide and product documentation to know exactly which preparations will be required.

Hardware Upgrades

Before you begin any hardware upgrade, read the user guide and product documentation
carefully. Next, assemble the tools necessary to perform the upgrade. Finally, take the
necessary ESD precautions any time that you need to open a server.

Safety procedures
Before removing and replacing, reseating, or modifying any system component, be sure to read
and follow all safety procedures. Pay attention to warnings, cautions, rack safety guidelines,
and the use of blanking panels to protect equipment from damage and yourself from harm.
Failure to follow safety guidelines can result in electrical shock, burns, or injury.

Powering down the server

Before removing and replacing any non-hot plug component, you should completely remove all
power from the system. To do so, you must disconnect all power cords. A server in Standby
mode is not off. Standby mode provides auxiliary power and removes power from most of the
electronics and drives. However, portions of the power supply, the system interlock circuitry,
and some internal circuitry remain active.

Rack safety guidelines

When working on a rack-mounted device, it is important to follow rack safety guidelines. As a
reminder, the rack safety guidelines are the following:
Ensure that leveling feet extend to the floor.
Ensure that the full weight of the rack rests on the leveling feet.
Attach stabilizing feet to the rack if it is a single-rack installation.
Couple racks together in a multiple-rack installation.
Extend only one component at a time to prevent the rack from becoming unstable.
Prevent casters from touching the floor. Lower the leveling feet to the ground so that
the casters are raised and the rack does not move.
Distribute weight properly and ensure that weight requirements are met.
Install ballast kits, if necessary, to meet minimum requirements.
ballast kit
An assembly that is typically installed near the bottom of a rack to provide extra stability.

Blanking panels and heat dissipation

You should always fill empty server hard drive and rack slots with blanking panels to ensure
proper air flow and avoid overheating. Install any heat-dissipation accessories that were
included with the upgrade kit to prevent overheating.

ESD protection
It is essential to protect sensitive components against damage from electrostatic discharge
(ESD). Follow recommended procedures for ESD protection before opening a server. This
includes wearing a metal ESD wrist strap.

Paper and cloth ESD straps do not provide sufficient protection for working inside a server.

Inside the DL360 G7 server

Before we begin our discussion of specific hardware upgrade procedures, we should look at
the location of various components on a specific server. You can find a similar diagram for the
HP server model that you are servicing within the QuickSpecs or product documentation. The
inside and the front panel of the DL360 G7 server are shown in Figure 13-1.

Figure 13-1: DL360 Front

The components are described as follows:

1. Hood cover
2. Up to two Intel processors
3. Video connector
4. Slide-out System Insight Display (SID)
5. Removable fan modules
6. 18 DIMM slots
7. Redundant Hot Plug Power Supplies
8. 2 PCIe slots
The rear of the DL360 G7 server is shown in Figure 13-2.

Figure 13-2: DL360 Rear

The components are described as follows:

1. PCIe expansion slot 1, low profile
2. PCIe expansion slot 2, full-height, full-length X16
3. Power supply bay 1 (populated)
4. Power supply bay 2 (unpopulated)
5. iLO 3 NIC connector
6. Serial connector
7. Video connector
8. NIC 4 connector
9. NIC 3 connector
10. NIC 2 connector
11. NIC 1 connector
12. USB connector
13. USB connector

General component removal and replacement steps

Regardless of the component you are replacing, you must read the instructions that come with
the component. Generally speaking, there are two types of replacement procedures: one for
hot-plug components and another for non-hot plug components.
Remember, hot-plug components in HP equipment can be identified by a burgundy-colored tab.

Hot-plug component replacement

You can only use hot-plug removal when a device is a hot-plug component and meets the
redundancy requirements shown in Table 13-1.
Table 13-1: Hot-plug Component Redundancy Requirements

Fan: You can only remove one fan per zone at any given time. If two fans are removed from a zone, the system shuts down after one minute to prevent heat damage.
Power supply: A redundant power supply is installed (1+1 redundancy).
Hard drive: A redundant RAID configuration is required for any one drive in the array to be hot pluggable.
Memory: The system must meet configuration requirements for hot add or replace.
Expansion card: Requirements will differ by server but could include a hot-plug mezzanine card and hot-plug board option kit.
When replacing a hot-plug device that meets redundancy requirements and is not a hard drive,
perform the following steps:
1. Use the operating system procedure to stop the device.
2. Remove the device when prompted.
3. Insert the new device.
To remove a hot-plug hard drive, you need to use Option ROM Configuration for Arrays (ORCA) or the Array Configuration Utility (ACU) to remove the drive from the array before removing the hot-plug hard drive. Failure to do so could cause data loss.
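The redundancy rules above can be summarized in a short decision sketch. The component names and parameters below are an illustrative paraphrase of Table 13-1, not an HP API:

```python
# Illustrative sketch: may this component be removed while the server runs?
# Rules paraphrase the hot-plug redundancy requirements in Table 13-1.

def can_remove_hot(component, fans_removed_in_zone=0, redundant_power=False,
                   redundant_raid=False):
    if component == "fan":
        # Only one fan per zone may be out; removing a second triggers
        # a shutdown after one minute to prevent heat damage.
        return fans_removed_in_zone == 0
    if component == "power_supply":
        return redundant_power  # requires 1+1 power redundancy
    if component == "hard_drive":
        return redundant_raid   # requires a redundant RAID configuration
    return False  # anything else: treat as non-hot-plug and power down first

print(can_remove_hot("fan", fans_removed_in_zone=1))         # False
print(can_remove_hot("power_supply", redundant_power=True))  # True
```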

Non-hot plug component replacement

To remove a component that is not a hot-plug component or that is a hot-plug component in a
system that does not meet redundancy requirements, you typically will need to perform the
following steps:
1. Uninstall the operating system driver.
2. Shut down the server and remove all power cords.
3. Open the chassis cover.
4. Remove the component.
5. Replace any items removed during installation.
6. Close the chassis cover.
7. Connect the power cords.
8. Turn on the server.
9. Install necessary drivers.

Processor replacement and upgrade

A processor upgrade might involve adding an additional processor, replacing an existing
processor, or both. As with any upgrade or replacement, you should consult the documentation
for the specific server to determine which processors are supported. Some general procedures
are given here, but you should always follow the specific procedures for your server model.
Here are some guidelines to keep in mind when upgrading a processor:
All installed processors must be the same type and speed.
Some servers support a mix of processor steppings, core speeds, or cache sizes; others do not. Consult the documentation for your server.
Update server ROM to ensure that the server can recognize the new processor.
Install each processor in the correct slot or socket position.
Install the necessary Voltage Regulator Module (VRM) or Processor Power Module
(PPM), as required by the processor.

Perform any necessary memory installation, as required by the processor.

Voltage Regulator Module (VRM)
A component that converts the voltage supplied by the power supply to the lower voltage required by the CPU.
Processor Power Module (PPM)
Similar to the VRM.

Failure to flash the system ROM before installing processors or processor memory
boards can cause system failure.
The installation procedures will differ for Intel and AMD processors. You must follow the
configuration rules specific to the processor you are installing.

Intel processors
Intel-based ProLiant servers that have multiple processor slots require that slot 1 be populated
at all times. Figure 13-3 shows a ProLiant DL580 G5 server that supports four Intel processors.
The processor slots are numbered from 1 to 4. If the server has only one processor, it must be
installed in slot 1. The processor in slot 1 boots first.

Figure 13-3: Processor Slots in a ProLiant DL580 G5 Server

When installing a processor:

1. Correctly align the processor pins to seat the processor in the socket.
2. Carefully and correctly position any locking levers.
The locking levers for a DL580 G5 are shown in the expanded image at the right of Figure 13-3.
The locking levers are different in various server models. Handle the locking levers with care to
avoid damaging the processor, lever, or slot. Do not apply too much force.
Xeon 5500 and 5600 processors have an integrated memory controller. When installing one of
these processors, you also need to install a memory module next to the processor.

Intel processor installation steps

Although the exact steps will vary depending on the server model, here are a set of general
steps that you will perform when installing an Intel processor:
1. Power down and unplug the power cords from the server.
2. Remove the access panel.
3. Remove all hard drives and hard drive blanks.
4. Remove the hard drive backplane.
5. Remove the front panel/hard drive cage assembly.
6. Remove the heatsink blank. You can keep the blank for later use.
7. Open the processor retaining latch and the processor socket retaining bracket.
8. Remove the processor socket protective cover.
9. Reinsert the processor in the tool.
10. Align the processor installation tool with the socket and install the processor.
11. Press down firmly until the processor installation tool clicks and separates from the processor.
12. Remove the processor installation tool.
13. Close the processor socket retaining bracket and the processor retaining latch.
14. Remove the thermal interface protective cover from the heatsink.
15. Install the heatsink.
16. Install the front panel/hard drive cage assembly.
17. Install the hard drives and hard drive blanks.
18. Install the access panel.
heatsink
A component that regulates the temperature of a CPU by dissipating heat into the environment.

Opteron processor installation

Opteron processors have different requirements from Intel processors. For example, there are
different rules regarding memory configuration. With an Opteron processor, DIMMs are installed
on a separate processor memory board, instead of on the system board, as shown in Figure 13-4.

Figure 13-4: DL585 Processor Memory Board

An Opteron-based ProLiant server with multiple processors requires that processor memory
boards 1 and 2 are always installed. The system will not boot if either board is missing.
The integrated memory controller changes the way the processor accesses main memory and
results in increased bandwidth, reduced memory latency, and better processor performance.

Installing memory
Before performing a memory upgrade, it is essential to consult the documentation about the
configuration guidelines supported for a server. We will now discuss some general guidelines
that you need to follow.
Memory needs to be installed in banks of four DIMMs. Although some servers allow you to
install DIMMs with different capacities within the same bank, doing so is not always supported
and can result in errors. Some servers, such as the DL360 G7, allow you to mix DIMM speeds, but the memory bus will default to the lowest clock rate, even if the slower DIMM is on another channel.
You cannot mix UDIMMs and RDIMMs within the same server. If you are installing UDIMMs, you can only install two DIMMs per channel. If you are installing quad-rank DIMMs, you can only install two DIMMs on each channel for a specific processor. If a channel contains a quad-rank DIMM, you need to install that DIMM first.
Consult the system memory diagram located on the system hood or use the Memory
Configuration Utility (MCU) to determine where to install the DIMMs. Remember that memory
banks are associated with specific processors on a multiprocessor system. Do not install
DIMMs in a bank that does not have an installed processor.
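The population rules above lend themselves to a quick validation sketch. The DIMM attributes and channel layout below are simplified, invented examples, not an actual MCU data format:

```python
# Illustrative sketch: checking a planned DIMM population against the rules
# described above (no UDIMM/RDIMM mixing, per-channel limits, and the bus
# clocking down to the slowest installed DIMM).

def check_dimm_plan(dimms):
    """dimms: list of dicts with 'type', 'speed_mhz', 'channel', 'ranks'."""
    errors = []
    if len({d["type"] for d in dimms}) > 1:
        errors.append("Cannot mix UDIMMs and RDIMMs in the same server")
    per_channel = {}
    for d in dimms:
        per_channel.setdefault(d["channel"], []).append(d)
    for channel, installed in per_channel.items():
        if any(d["type"] == "UDIMM" for d in installed) and len(installed) > 2:
            errors.append(f"Channel {channel}: at most two UDIMMs per channel")
        if any(d["ranks"] == 4 for d in installed) and len(installed) > 2:
            errors.append(f"Channel {channel}: quad-rank limits the channel to two DIMMs")
    # The memory bus defaults to the lowest clock rate of any installed DIMM.
    effective_speed = min(d["speed_mhz"] for d in dimms)
    return errors, effective_speed

dimms = [
    {"type": "RDIMM", "speed_mhz": 1333, "channel": 1, "ranks": 2},
    {"type": "RDIMM", "speed_mhz": 1066, "channel": 2, "ranks": 2},
]
print(check_dimm_plan(dimms))  # ([], 1066): valid plan, bus runs at 1066 MHz
```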
You can also learn about the supported memory configurations for a server by consulting the QuickSpecs. The QuickSpecs document for a server model has a section on memory that gives you information about the supported configurations, including the DIMM rank, capacity, native speed, voltage, and maximum capacity.

Memory Configuration Utility (MCU)

The MCU provides you with recommendations for memory upgrades to meet specific
requirements and can help you plan and implement a successful memory upgrade.
You can access the MCU on the HP website. You need to agree to the HP DDR3 Memory Configuration License Agreement (Figure 13-5) before continuing.

Figure 13-5: MCU License Agreement

After you accept the license agreement, the MCU displays a screen that provides information
about the various types of memory supported and a summary of factors that affect how memory
is populated (Figure 13-6).

Figure 13-6: MCU Information Screen

After you click Next, you are prompted to select whether you have a pre-configured build-to-order (BTO) model part number, as shown in Figure 13-7. If you do, select Yes. If you do not,
select No. For this example, we will assume that we do not have a BTO number.

Figure 13-7: BTO Part Number Prompt

If you select No, you need to select the ProLiant server series that you are configuring. For this
example, we will select the 300 Series, as shown in Figure 13-8.

Figure 13-8: Select Server Series

Now you can select the server model from the Select your ProLiant server drop-down list.
After selecting the server, you are prompted for the number of processors that you have
installed or plan to install and whether there is currently memory installed in the system, as
shown in Figure 13-9.

Figure 13-9: Server Information

If you indicate that there is memory installed, you are prompted to specify whether your server is configured to work with Mirrored Memory, Lock-Step, or Online Spare, as shown in Figure 13-10.

Figure 13-10: Memory Configuration

The next screen (Figure 13-11) allows you to select whether you want to allow HP Insight
Diagnostics to detect the current memory configuration and generate an Insight Diagnostic
Memory Configuration File or whether you will enter the memory configuration manually. If you
plan to use Insight Diagnostics, the following software must be installed on the server (minimum version requirements apply):
HP SMH
HP iLO 2 Management Controller Driver
HP Insight Management Agents for Windows Server 2003/2008
HP Insight Diagnostics Online Edition for Windows Server 2003/2008

Figure 13-11: Memory Identification

To gather the memory information:

1. Launch System Management Homepage.
2. Log on using the Administrator username and password.
3. Click Webapps in the toolbar.
4. Click HP Insight Diagnostics, as shown in Figure 13-12.

Figure 13-12: Launching Insight Diagnostics

5. You are prompted to log in to SMH again. After you do, HP Insight Diagnostics scans the
system. A report like the one shown in Figure 13-13 is generated.

Figure 13-13: Insight Diagnostics Report

6. Select Memory from the Categories drop-down list. The memory configuration is
displayed as shown in Figure 13-14.

Figure 13-14: Insight Diagnostics Memory Report

The memory configuration is saved to a file named dimmconfig.log at c:\hp\hpdiags. You can
open that file and copy the contents to the MCU, as shown in Figure 13-15.

Figure 13-15: Pasting the Memory Configuration
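The log can also be read programmatically before pasting it into the MCU. This is a minimal sketch using the path given above; the file's internal format is treated as opaque text:

```python
# Minimal sketch: read dimmconfig.log (written by HP Insight Diagnostics) so
# its contents can be copied into the MCU form.
from pathlib import Path

def read_dimm_config(path=r"c:\hp\hpdiags\dimmconfig.log"):
    """Return the log contents, or None if the log has not been generated."""
    p = Path(path)
    if not p.exists():
        return None
    return p.read_text()

config = read_dimm_config()
if config is None:
    print("Run Insight Diagnostics first; dimmconfig.log has not been generated.")
else:
    print(config)  # paste this output into the MCU
```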

If you entered the memory configuration manually, the MCU displays a server diagram that you
must print (Figure 13-16). Next you must shut down the server that will be upgraded and
manually inspect the part numbers of the DIMMs and document them on the printout.

Figure 13-16: Server Diagram

Regardless of how you enter the current memory configuration, you are prompted to select the
desired amount of memory as shown in Figure 13-17.

Figure 13-17: Selecting Desired Memory Capacity

The green area of the slider bar indicates the supported memory with the configuration and the
number of processors in the system (in this case 2). If the system only had one processor, the
area of the slider bar over 192 GB would be red to indicate that you would have to upgrade the
system by adding a second processor before you could add more than 192 GB of RAM.
You can also select whether you want to optimize for performance, power efficiency, low cost,
or general purpose.
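The reason the ceiling moves with processor count can be shown with back-of-the-envelope arithmetic: DIMM slots are bound to specific processors, so the usable slots scale with the number of installed CPUs. The slot count and DIMM capacity below are example values chosen to reproduce a 192 GB ceiling, not DL360 G7 specifics:

```python
# Illustrative arithmetic: the maximum installable memory grows with the
# number of processors, because memory banks belong to specific CPUs.

def max_memory_gb(cpus, slots_per_cpu, largest_dimm_gb):
    return cpus * slots_per_cpu * largest_dimm_gb

# With example 16 GB DIMMs and 6 usable slots per CPU:
print(max_memory_gb(1, 6, 16))  # 96 GB ceiling with one processor
print(max_memory_gb(2, 6, 16))  # 192 GB once a second processor is added
```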
The next screen shows some recommended configuration options, as shown in Figure 13-18.

Figure 13-18: Configuration Options

After you select an option, the installation diagram is displayed, showing you how to install the
memory (Figure 13-19). You should print this diagram to ensure that you install the memory
correctly after it arrives.

Figure 13-19: DIMM Installation Diagram

Installing storage devices

As with other components, you can learn which storage devices are supported in a server by
consulting the QuickSpecs for that model. The QuickSpecs also provide information about
which configurations are supported. For example, as shown in Figure 13-20, the ProLiant
DL360 G7 supports between 1 and 8 SFF SAS hot-pluggable drives when there is no optical
drive installed, or 4 SFF SAS hot-pluggable drives along with a CD or DVD-ROM.

Figure 13-20: Storage Bays in a ProLiant DL360 G7

When selecting a hard disk drive, you must keep several points in mind. These are:
For SAS and SATA drives, the system automatically sets the drive number.
Each SCSI drive must have a unique ID, which the system automatically sets.

If one drive is used, install it in the bay with the lowest number.
Drives must be the same capacity to provide the greatest storage space efficiency
when they are grouped together in the same storage array.
ACU does not support mixing SAS and SATA drives in the same logical volume.
SCSI hard disk drives are found in legacy servers. Current servers use SAS and SATA
hard disk drives.
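The same-capacity guideline can be illustrated with simple arithmetic: in an array, each drive contributes only as much space as the smallest member, so mixing capacities strands space on the larger drives. The drive sizes below are invented example values:

```python
# Illustrative arithmetic: per-drive usable space in an array is limited by
# the smallest drive, so mixed capacities waste space.

def usable_per_drive_gb(drive_sizes_gb):
    return min(drive_sizes_gb)

drives = [300, 300, 600, 600]
usable = usable_per_drive_gb(drives) * len(drives)
wasted = sum(drives) - usable
print(usable, wasted)  # 1200 GB usable, 600 GB stranded on the larger drives
```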
The procedures for installing drives differ depending on whether they are hot-plug or non-hot
plug drives.

Installing internal non-hot-plug drives

The procedure for installing a non-hot-plug drive is the same whether the drive is a CD-ROM,
DVD-ROM, tape drive, or hard drive. You will generally need either a Torx or Phillips
screwdriver. As with any upgrade that requires opening the server, make sure to take proper
ESD precautions.
You can install a non-hot-plug drive using the following steps:
1. Shut down the server and unplug all power cords.
2. Open the chassis cover.
3. Remove any placeholders.
4. Consult the documentation to determine whether jumpers need to be set. If so, set them.
5. Mount the drive using the hardware (rails, screws, and so on) included in the kit.
6. Connect the power cable, drive cable, and other cables (if applicable).
7. Boot the server. The drive should be detected automatically during Power-On Self Test (POST).
The SATA and SAS specifications allow only one controller per cable. There is no need to wait
for data traffic to clear from the cable. Therefore, there are typically no jumpers or switches on
SATA/SAS drives or controllers. You can use a SATA/SAS expander to operate several
drives at the same time from a single controller.

Installing hot-plug drives

Installing a hot-plug drive does not require a shutdown, jumper settings, or restarts unless a
driver must be installed. To install a hot-plug drive, perform the following steps:
1. Choose a bay.
2. Insert the drive in the bay.
3. Secure the latch.
4. If the drive is a hard disk drive, configure it using ORCA or ACU.
Hot-plug drives, with the exception of hard disk drives, are detected and configured automatically.

Hot-plug drive LEDs

You can use drive LEDs to determine whether the drive was successfully installed and its
status. SATA and SAS hard drives combine the Activity and Online indicators into a single
LED. Figure 13-21 describes the LED values.

Figure 13-21: Activity/Online LED Values

SATA and SAS hard drives also combine the Fault and Unit Identification LEDs into a single
LED. The meanings of this LED are described in Figure 13-22.

Figure 13-22: Fault/Identification LED Values

RAID migration and expansion

After installing a new drive, you may want to add it to an existing volume. You can do so by
using the Array Configuration Utility (ACU).
To expand an array, select an array and click Expand Array. Select the drives you want to add
to the array and click Save. The array will be expanded. The operation might take a significant
amount of time, depending on the size of the drives.
Another option is to perform a RAID migration. A RAID migration allows you to change the
RAID level or, in a parity configuration, the stripe size. To perform a RAID migration, launch
ACU and select a volume, as shown in Figure 13-23.

Figure 13-23: Initiating a RAID Migration

Select the RAID level and the stripe size, as shown in Figure 13-24, and click Save.

Figure 13-24: Performing a RAID Migration

It will take some time to transform the array to the new configuration.
Although adding a new disk might prompt a RAID migration, that is not the only reason for
performing one. A RAID migration might also be required to:
Obtain more usable capacity.
Improve performance by aligning the RAID configuration with the usage profile.
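The capacity motivation can be quantified with a small sketch of usable space under common RAID levels for n equal drives. This is simplified, illustrative arithmetic; controller metadata overhead is ignored:

```python
# Illustrative arithmetic: usable capacity for n equal drives of size s
# under common RAID levels.

def usable_gb(level, n, size_gb):
    if level in ("RAID1", "RAID10"):  # mirroring: half the raw capacity
        return n * size_gb // 2
    if level == "RAID5":              # one drive's worth of parity
        return (n - 1) * size_gb
    raise ValueError(f"unsupported level: {level}")

# Four example 600 GB drives: migrating from mirroring to RAID 5 gains
# usable capacity while still tolerating a single drive failure.
print(usable_gb("RAID10", 4, 600))  # 1200
print(usable_gb("RAID5", 4, 600))   # 1800
```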
Scenario: BCD Train
The database server where all BCD Train content is stored has two hot-plug drives
configured as a RAID 1 volume. The budget has been approved to purchase two more hotplug drives. You want to maximize storage capacity while ensuring protection against failure
of a single drive. Explain the steps that you will take.

Rack-mounted tape drive installation

Many servers, including the ProLiant DL360 G7, support external tape drives that are mounted
in the rack. Tape drives must be installed in a tape enclosure like the 1U Rackmount Tape
Enclosure, which supports two half-height tape drives. SCSI, SAS, and USB connections are
It is recommended that you attach only one tape drive to the SCSI bus to prevent slow
Figure 13-25 shows the rear panel of the tape enclosure.

Figure 13-25: Rear Panel of Tape Enclosure

The ports and switches are described as follows:

1. AC power connector
2. SCSI connector (SCSI models only)
3. SCSI ID switch (SCSI models only)
4. USB connector (USB models only)
5. SAS connector (SAS models only)
The internal components of the SAS version of the tape enclosure are shown in Figure 13-26.

Figure 13-26: SAS Tape Enclosure Components

These components are described as follows:

1. Tape drive
2. Tape drive blank
3. Power supply

4. Fan assemblies (2)

5. SAS repeater board
The internal SAS cables in the enclosure are shown in Figure 13-27.

Figure 13-27: SAS Connectors

Connectors 1 and 3 are power connectors and should be used only with LTO-5 and DAT
drives. Previous versions of LTO require a separate power cord. Connector 2 is the SAS
connector that routes a device to external port 2. Connector 4 is the SAS connector that routes
a device to external port 1. Connector 5 is used to attach to the PC board within the enclosure.
To install a tape drive in the enclosure, perform the following steps:
1. Remove the top access panel of the enclosure, as shown in Figure 13-28.

Figure 13-28: Removing the Top Access Panel

2. Remove the drive blank by pulling on the spring-loaded button (1), sliding the assembly
forward, and then lifting up (2) (Figure 13-29).

Figure 13-29: Removing the Drive Blank

3. Remove the mounting brackets from the blank and install them on the sides of the tape
drive, as shown in Figure 13-30. Make sure to use the 6mm M3 screws and washers
provided with the drive. Do not overtighten the screws.

Figure 13-30: Installing the Mounting Brackets

4. Install the tape drive, as shown in Figure 13-31. Position the mounting bracket keyhole
slots over the mounting posts (1). Slide the drive toward the back of the enclosure (2). The
spring-loaded button automatically snaps into place (3).

Figure 13-31: Installing the Tape Drive

5. Attach the cables, as shown in Figure 13-32. Cable 1 is the power cable, cable 2 is the

data cable, and cable 3 (SCSI only) is the SCSI ID selector switch. Fold excess cable
length and secure it with the clips provided in the tape enclosure.

Figure 13-32: Cable Connections

6. Replace the top access panel.

Disk Subsystem Upgrades

External storage array systems also sometimes need to be upgraded or have components
replaced. As with servers, you should always read the documentation for the specific device
and component that you are replacing. To give you an idea of what is required to replace
components in a storage array system, we will look at the procedures for replacing the following
components in a P2000 G3 MSA Array System:
Hard drive
I/O module
Smart Array Controller
As with any hardware replacement procedure, you should take the appropriate ESD and other
safety precautions before opening the system.

Hard drive replacement

Before beginning a hard drive replacement procedure, verify that the disk is part of a fault-tolerant configuration and that it can be removed without data loss. If multiple drives need to be replaced, remove only one at a time to ensure proper airflow. You should also check the drive status using logs and LEDs.
When you are ready to remove the drive, as shown in Figure 13-33, perform the following steps:
1. Press the drive ejector button. If the drive is a hard disk drive, wait approximately 30
seconds or until the media stops rotating before removing the drive from the enclosure.
2. Pivot the release lever to the full open position.
3. Pull the drive out from the enclosure.

Figure 13-33: Removing the Drive

Wait approximately 30 seconds to allow the enclosure to recognize that a drive was removed.
Now you are ready to replace the drive, as illustrated in Figure 13-34.
1. Press the drive ejector button on the replacement drive and pivot the release lever to the
full open position.
2. Insert the replacement drive into the disk enclosure. Slide it in as far as it will go. As the
drive meets the backplane, the release lever automatically begins to rotate closed.

Figure 13-34: Installing a Drive

3. Press firmly on the release lever to ensure that the drive is fully seated.
4. Wait approximately 30 seconds for the system to discover the drive.
5. Use the Storage Management Utility to verify that the system found the drive. The drive
should have a status of Available.

6. Verify that the latest firmware is installed for the drive.

Expanding the array

If you add an additional drive to the enclosure, you may want to expand one of the Vdisks to
include the new drive. You can do so using SMU by selecting the Vdisk and choosing Tools |
Expand Vdisk.
A P2000 does not support RAID migration.

Replacing the I/O module

You only need to replace an I/O module that has failed. As a reminder, a failed I/O module will
have a solid amber fault LED. If an I/O module is connected to a controller enclosure with only
one controller, you need to shut down the controller first to prevent the Vdisks from going offline.
You can shut down the controller using SMU. To do so, choose Tools | Shut Down or Restart
Controller. Select Shut down from the Operation drop-down list, Storage from the Controller
Type list, and either A or B from the Controller drop-down list (Figure 13-35) and click Shut
down now.

Figure 13-35: Shutting Down the Controller

As a reminder, I/O module A is the top module and I/O module B is the bottom module, as
shown in Figure 13-36.

Figure 13-36: I/O Modules

Now you are ready to remove the module using the following procedure:
1. Disconnect the cables from the module. Label them to make it easier to reconnect them to
the new module.
2. Remove the module as illustrated in Figure 13-37. Turn the thumbscrews (1) until they
disengage from the module and rotate both latches downward (2) to disengage the
module from the internal connector. Pull the module straight out of the enclosure (3).

Figure 13-37: Removing the I/O Module

Now you can replace the module, as illustrated in Figure 13-38.

Figure 13-38: Replacing the I/O Module

1. With the latches in the open position, slide the replacement module into the enclosure as far as it will go. If necessary, press lightly on the top-center of the module to facilitate insertion.
2. Rotate the latches upward to engage the module with the internal connector.
3. Turn the thumbscrews until they feel tight.
4. Reconnect the cables.
Verify the installation by ensuring that the OK LED is green and the Fault LED is off. Install the latest firmware, if necessary.

Replacing a Smart Array Controller

The process for replacing a Smart Array Controller depends on whether the system has two
controllers or only one. In either case, you should record configuration information from the
following SMU screens before beginning the replacement procedure:
Configuration | System Settings | Date, Time
Configuration | System Settings | System Information
Configuration | Users | Modify User
Configuration | Services | Email Notification
Configuration | Services | SNMP Notification
View | Overview | Schedules
Hosts | View | Overview
Overview and map information for each host

To replace the controller in a single-controller system, use the following procedure:
1. Verify that the cache has been copied to the CompactFlash by checking that the Cache
Status LED is off.
2. Shut down the controller.
3. Verify that the OK to Remove LED is blue.
4. Power off the enclosure.
5. Disconnect cables connected to the module and label them to make reconnection easier.
6. Remove the controller.
A Smart Array controller is removed and installed using the same procedure as
the I/O module.
7. Remove the CompactFlash from the controller, as shown in Figure 13-39. Label it and set
it aside.

Figure 13-39: Removing the CompactFlash

8. Replace the CompactFlash in the new controller with the CompactFlash you removed in
step 7.
9. Replace the controller.
10. Reconnect the cables.
11. Start the system and verify that the Fault light stays off.
12. Verify configuration settings.
13. Update the firmware, if necessary.

HP recommends using Partner Firmware Update to ensure that both controllers have the most
recent firmware version. You can enable this feature in SMU by choosing Configuration |
Advanced Settings | Firmware (Figure 13-40).

Figure 13-40: Enabling Partner Firmware Update

You can replace a failed controller in a dual-controller configuration without shutting down the

enclosure. To do so, use the following procedure:

1. Shut down the failed controller.
2. Verify that the OK to Remove LED is blue.
3. Disconnect cables connected to the module and label them to make reconnection easier.
4. Remove the controller.
You should not transport the CompactFlash when replacing a controller in a dual-controller system. Doing so could cause data corruption.
5. Replace the controller.
6. Reconnect the cables.
7. Start the system and verify that the Fault light stays off.
8. Verify configuration settings.
If you need to replace the other controller, wait 30 minutes before beginning the procedure to ensure that Vdisk ownership is fully stabilized.

Firmware and Software Updates

Another key factor to consider in your configuration management plan is how and when
firmware and software updates are performed. HP offers several tools that allow you to manage
firmware and software updates. In this section, we will look at two of them:
HP Smart Update Manager (HP SUM)
HP SIM with Version Control Repository Manager (VCRM)
But first, we will look at two ways to view information about the installed firmware versions:
SMH and Insight Diagnostics.

Checking firmware versions with SMH

To check the firmware version using SMH, access the Home page and click Firmware
information, as shown in Figure 13-41.

Figure 13-41: Accessing Firmware Information in SMH

Information about the firmware versions installed for various components will be listed, as
shown in Figure 13-42.

Figure 13-42: Installed Firmware

Checking firmware versions with Insight Diagnostics

Like SMH, Insight Diagnostics provides you with information about the firmware versions
installed, including the BIOS version, redundant ROM version, the firmware version for each
NIC, power management controller, and the iLO firmware version (Figure 13-43).

Figure 13-43: Insight Diagnostics Firmware Report

It also provides you with information about the firmware version of all storage controllers and
devices, as shown in Figure 13-44.

Figure 13-44: Insight Diagnostics Firmware Report - Storage Devices
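The version data that these tools report can be checked against a known-good baseline in an automated way. The following is a minimal sketch of that idea; the component names and version strings are hypothetical, and neither SMH nor Insight Diagnostics exposes data in this form:

```python
# Compare reported firmware versions against a desired baseline.
# Component names and version strings below are made-up examples.

def find_outdated(installed, baseline):
    """Return components whose installed version differs from the baseline."""
    return {name: (installed.get(name), wanted)
            for name, wanted in baseline.items()
            if installed.get(name) != wanted}

installed = {"System ROM": "P67 05/05/2011", "iLO 3": "1.26", "Smart Array P410i": "5.06"}
baseline  = {"System ROM": "P67 05/05/2011", "iLO 3": "1.28", "Smart Array P410i": "5.06"}

for name, (have, want) in find_outdated(installed, baseline).items():
    print(f"{name}: installed {have}, baseline {want}")
```

A report like this makes it easy to see at a glance which components need the updates discussed in the rest of this section.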

HP Service Pack for ProLiant

The Service Pack for ProLiant (SPP) includes the firmware, drivers, and software updates for a
ProLiant server. You can obtain the latest SPP from the HP website. You can install the SPP in
the following ways:
Online interactive mode
Launch start.html, which is found at the root of the ISO, and follow the onscreen
instructions.
Online automatic mode
Insert a DVD or USB key containing the SPP and wait for 30 seconds. Firmware
installation will be performed automatically.
Offline automatic firmware update mode
Boot from the SPP DVD or USB key
Deploy using HP SIM or HP SUM
Booting the SPP DVD from iLO virtual media is only supported in automatic firmware
update mode.

Using HP SIM to perform updates

HP SIM can be used to update software and firmware on up to 10 servers as long as the HP
Version Control Repository Manager (VCRM) is correctly configured. VCRM is part of the HP
Foundation Pack.
Earlier versions of HP SIM required each server that receives updates from HP SIM to have the
HP Version Control Agent (VCA) installed. The VCA is part of HP SMH. As an alternative to
using the VCA, you can configure Software/Firmware Baselines in HP SIM 7.1 (or later).

Configuring VCRM
You can configure VCRM through the SMH Home page. To do so, click the HP Version
Control Repository Manager link under the Version Control category, as shown in Figure 13-45.

Figure 13-45: Launching VCRM

The home tab, shown in Figure 13-46, displays repository statistics and allows you to perform
the following tasks:
Upload a support pack
Create a custom software baseline
Change repository and update settings

Figure 13-46: VCRM Home Tab

Clicking Upload a support pack displays the dialog shown in Figure 13-47, which allows you
to browse to load a support pack.

Figure 13-47: Upload Support Packs

Select the support pack, as shown in Figure 13-48 and click OK.

Figure 13-48: Selecting a Support Pack

Click Upload to upload the support pack (Figure 13-49).

Figure 13-49: Upload the Support Pack

The files will be copied to the repository. This process takes some time. The progress is
reported in the upload dialog. After the upload is complete, information about the repository will
be displayed on the home tab, as shown in Figure 13-50. From here, you can upload a support
pack, create a custom software baseline, or change repository and update settings.

Figure 13-50: Repository Statistics

The catalog tab shows the components in the repository, as shown in Figure 13-51. You can
delete components, configure them, or copy them to another repository. You can also create a
custom software baseline, update the components with the latest versions from the HP website,
and rescan the repository and rebuild the catalog.

Figure 13-51: Catalog Tab

Creating a software baseline

You can click create a custom software baseline to create baselines for specific types of
servers in your environment. You are prompted for information about the baseline configuration,
as shown in Figure 13-52.

Figure 13-52: Creating a Custom Baseline

The next screen allows you to select the components that will be included in the baseline
(Figure 13-53).

Figure 13-53: Selecting Components

Changing Repository settings

You can click change repository & update settings to modify the repository settings. The first
screen allows you to change the directory that is used as the repository, as shown in Figure
13-54.

Figure 13-54: Changing the Repository Directory

The next screen allows you to select operating systems for which the support packs should be
downloaded during scheduled and manual updates, as shown in Figure 13-55.

Figure 13-55: Selecting Operating Systems

The next screen prompts you to configure automatic update settings, as shown in Figure 13-56.
Automatically updating the SPP stored in the repository will help keep your servers up-to-date.

Figure 13-56: Automatic Update Settings

HP Smart Update Manager (HP SUM)

HP SUM can be configured to install and update firmware and software on ProLiant servers. It
can be used to update as many as 100 servers at the same time. HP SUM is included with the
SPP.
HP SUM has an integrated hardware and software discovery engine that finds the installed
hardware and current versions of firmware and software in use on target servers. This capability
prevents unnecessary network traffic by sending only the required components to a target host.
HP SUM installs updates in the correct order and ensures that all dependencies are met before
deploying an update. HP SUM prevents an installation if there are version-based
dependencies that it cannot resolve.
HP SUM does not require an agent for remote installations because it copies a small, secure
SOAP server to the target for the duration of the installation. After the installation is complete,
the SOAP server and all remote files associated with the installation, except installation log
files, are removed. HP SUM copies the log files from the remote targets back to the system
where HP SUM is executed.
Simple Object Access Protocol (SOAP)
A protocol used to exchange XML data between services or applications.
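As an illustration of the kind of XML exchange that SOAP wraps, here is a minimal sketch using Python's standard library. The GetFirmwareVersion payload is a made-up example for illustration, not part of HP SUM's actual interface:

```python
# Build and parse a minimal SOAP 1.1 envelope with the standard library,
# to show the XML structure SOAP uses to wrap a request.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
# The request element below is a hypothetical example payload.
request = ET.SubElement(body, "GetFirmwareVersion")
request.text = "iLO"

xml_bytes = ET.tostring(envelope)
parsed = ET.fromstring(xml_bytes)
print(parsed.find(f"{{{SOAP_NS}}}Body/GetFirmwareVersion").text)  # iLO
```

In practice, the temporary SOAP server that HP SUM copies to each target handles envelopes like this on the installer's behalf; no agent needs to be installed.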

HP SUM has a graphical interface and also allows you to use scripts. There is also an Express
option that allows you to update software, but not firmware, on the local host. In this course, we
will focus on GUI deployments.

Using HP SUM to perform updates

You can launch the GUI for HP SUM by navigating to the \hp\swpackages directory on the
SPP and double-clicking hpsum.exe. After you launch HP SUM, you are prompted to select the
sources from which updates will be retrieved, as shown in Figure 13-57. The default source is
the folder where the HP SUM application is located. However, you can select other
repositories, including the HP FTP site or a repository that you downloaded from the
FTP site. The FTP site contains the latest versions of firmware and software
components available from HP.

Figure 13-57: Source Selection

Repository
A directory that contains the updates.

If the repository is on the local computer, you can click Configure Components to perform any
necessary configuration. A dialog that lists the configurable components in the repository is
displayed, as shown in Figure 13-58. Components can only be configured if they are in a
repository that allows write access.

Figure 13-58: Configure Components

You can also click Repository Contents to view the contents of a specific repository, as
shown in Figure 13-59.

Figure 13-59: Repository Contents

You can expand a bundle to view information about each component in the bundle, as shown
in Figure 13-60.

Figure 13-60: Components in a Bundle

Clicking the link related to a specific component displays information about that component,
including the description, installation notes, and a version history, as shown in Figure 13-61.

Figure 13-61: Component Information

If you plan to use the HP FTP site, you can click the Proxy Options link to enter necessary
proxy information for that repository. The dialog shown in Figure 13-62 is displayed, allowing
you to either detect a proxy server or specify the information manually.

Figure 13-62: Proxy Server Details

If you need more control over which updates are installed, you can click Add Repository to
specify a path to a repository that contains the installation packages for the components you
want to install on the target servers.
The dialog shown in Figure 13-63 is displayed. As you can see, you can select whether the
repository contains firmware updates, software updates, or both. You can specify a location on
the local server or on the network. To specify a location on the network, use a Universal
Naming Convention (UNC) path.
Universal Naming Convention (UNC) path
A network path that uses the format \\servername\directory or \\ip address\directory
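A path in this format can be checked with a short sketch. This is a simplified validation for illustration, not an exhaustive implementation of the UNC rules:

```python
# Minimal check that a path follows the UNC format \\server\share[\...].
# The pattern rejects characters that are not valid in server or share names.
import re

def is_unc_path(path):
    return re.match(r'^\\\\[^\\/:*?"<>|]+\\[^\\/:*?"<>|]+', path) is not None

print(is_unc_path(r"\\fileserver\updates"))   # True
print(is_unc_path(r"C:\updates"))             # False
```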

Figure 13-63: Adding a Repository

You can install components from multiple repositories. If you do, and the same component
exists in both repositories, HP SUM will choose the version according to the following rules:
The highest version number (most recent) bundle or component.
The local version, if the component exists in both a local repository and the FTP site.
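These selection rules can be sketched as a small function. The tuple-based version parsing and the sample data are simplifying assumptions for illustration, not HP SUM's actual logic:

```python
# Pick which copy of a component to deploy when it appears in several
# repositories: highest version wins; on a tie, prefer the local copy.

def parse_version(v):
    """Split a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def choose_component(candidates):
    """candidates: list of (version_string, is_local) tuples."""
    # Sorting key: version tuple first, then locality (True > False).
    return max(candidates, key=lambda c: (parse_version(c[0]), c[1]))

copies = [("5.06", False), ("5.06", True), ("5.02", False)]
print(choose_component(copies))  # ('5.06', True)
```

Note that tuple comparison handles multi-digit components correctly, so version "1.10" is treated as newer than "1.2", which a plain string comparison would get wrong.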
After you have finished configuring the repositories, click Next. You are prompted to select
targets, as shown in Figure 13-64.

Figure 13-64: Select Targets

You can click Find Targets to have HP SUM search for targets based on an IP address range,
port address, or an LDAP query. You define the search type in the dialog shown in Figure
13-65.

Figure 13-65: Search Type

Selecting IP Address Range allows you to specify the start and end IP address, as shown in
Figure 13-66. You can enter the address using either IPv4 or IPv6 formats. The start and end
address must be in the same class C address block. Also, no more than 255 computers can be
found through a search.
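The class C restriction on a search range can be expressed in a short sketch. This is a simplified IPv4-only model of the rule described above, not HP SUM's actual validation:

```python
# Validate an HP SUM-style search range: start and end must fall in the
# same class C (/24) block, which also caps the range at one block of
# addresses.
import ipaddress

def valid_search_range(start, end):
    a, b = ipaddress.ip_address(start), ipaddress.ip_address(end)
    # Dropping the last octet (8 bits) leaves the class C network portion.
    same_block = int(a) >> 8 == int(b) >> 8
    return same_block and a <= b

print(valid_search_range("192.168.1.10", "192.168.1.200"))  # True
print(valid_search_range("192.168.1.10", "192.168.2.10"))   # False
```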

Figure 13-66: Find Target by IP Address

Selecting Port address allows you to specify not only an IP address range, but a port as well.
For example, you would use a port scan and specify a port address of 80 if you wanted to
locate all web servers in a specific IP address range, as shown in Figure 13-67.
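The idea behind a port scan can be sketched with a basic TCP probe. This is a simplified illustration of the concept, not HP SUM's discovery mechanism:

```python
# Probe a TCP port to see whether a service (e.g. a web server on port 80)
# is listening at an address. A short timeout keeps the scan responsive.
import socket

def port_open(host, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

# Example: check a host before treating it as a web server target.
if port_open("127.0.0.1", 80):
    print("web server found")
```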

Figure 13-67: Find Address by Port

An LDAP search allows you to locate servers by using a directory server, such as an Active
Directory domain controller. You need to specify the name of the LDAP server as well as the
username and password of an account that has permission to retrieve information about
computer accounts, as shown in Figure 13-68.

Figure 13-68: Using an LDAP Server

You can also manually add a target by clicking the Add Target button. The dialog shown in
Figure 13-69 is displayed. You can specify the target's IP address, target type, and the
credentials used to perform the installation. You can also select what action to take if the
computer is already in the process of running an update.

Figure 13-69: Adding a Target

HP SUM allows you to create groups to make it easier to manage the updates to devices based
on their configuration. For example, you might want to group all database servers or all web
servers in a specific group. Click Manage Groups to add a group, as shown in Figure 13-70.

Figure 13-70: Manage Groups

You cannot update a target without entering the necessary credentials. To enter them, select
the target and then click Enter Credentials. HP SUM displays the dialog shown in Figure 13-71.
You can specify the username and password, or you can use the current domain credentials if
the target trusts the server that is running HP SUM. The credentials that you use must have
Administrator rights (root rights on a Linux server).

Figure 13-71: Enter Credentials

After you have entered credentials, HP SUM verifies them and then initiates self-discovery on
the computer. After self-discovery, the targets will have a status of Ready to proceed, as
shown in Figure 13-72.

Figure 13-72: Ready to Proceed

You can click Schedule Update to configure a scheduled time for the targets to be updated.
For example, it is usually best to perform updates during slow times or outside of business
hours. HP SUM displays a dialog that allows you to select a schedule that has already been
defined or create a new one (Figure 13-73).

Figure 13-73: Select or Create a Schedule

You can perform the update immediately or schedule it to be performed at a later time. If you
click Create Schedule, you are prompted for the name of the schedule. The schedule will be
created. You can set the date and time of the schedule and move targets from the Unscheduled
Targets list to the Scheduled Targets list, as shown in Figure 13-74.

Figure 13-74: Creating a Schedule

Click Save and Continue to create an additional schedule. Click Done after you have finished
creating schedules. If you click Done, you are prompted to save any unsaved changes to the
open schedule.
After the schedule has been defined, it is listed in the Status column for the targets with which it
is associated, as shown in Figure 13-75.

Figure 13-75: Scheduled Updates

Click Next to proceed to the review/install updates screen. The Review/Install Updates
screen, shown in Figure 13-76, lists the updates that are assigned to targets.

Figure 13-76: Review/Install Updates

You can click Select Bundles to view information about the bundles in the repository or
repositories that have been assigned to the target. You can expand the bundle to view the
status, installed version, active version, available component, and optional actions for each
component, as shown in Figure 13-77. Optional actions include:
Select Devices
Allows you to select which devices are affected by the update.
Installation Options
Allows you to set installation options for the component.
Analyze
Causes discovery to run again. You would run Analyze if an error or warning had
been detected but is now resolved.

Figure 13-77: Select Bundles

You can click Installation Options to enable a forced installation. A forced installation is one
that overwrites a component with the same or a newer version. You can force installation for an
individual component or set force options that apply to all bundles, all firmware, and/or all
software, as shown in Figure 13-78.

Figure 13-78: Installation Options

You can also configure reboot options, as shown in Figure 13-79, to determine how reboots are
handled after installation. You can reboot if needed or always. You can also specify a delay
and a message that will be displayed to warn users who are logged into the server that the
server is going to reboot.

Figure 13-79: Reboot Options

Generating reports
You can click Generate Report to generate one or more of the reports shown in Figure 13-80.
The reports can be generated as HTML or XML. You can select the path to which the reports
will be saved and select whether to view the report after it is run.
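As an illustration of what an XML report might contain, here is a minimal sketch. The element and attribute names are invented for the example and do not follow HP SUM's actual report schema:

```python
# Write a small XML inventory report, similar in spirit to HP SUM's
# XML report output. The schema here is hypothetical.
import xml.etree.ElementTree as ET

def build_report(targets):
    """targets: list of (name, firmware_version) tuples."""
    root = ET.Element("InventoryReport")
    for name, version in targets:
        t = ET.SubElement(root, "Target", name=name)
        ET.SubElement(t, "Firmware").text = version
    return ET.tostring(root, encoding="unicode")

xml_text = build_report([("server1", "1.26"), ("server2", "1.28")])
print(xml_text)
```

Because the output is well-formed XML, it can be post-processed by other tools, which is the main advantage of choosing XML over HTML when generating reports.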

Figure 13-80: Generating Reports

The supported reports are described in Table 13-2.

Table 13-2: Supported Reports



Inventory selections
Provides details of the contents in all selected repositories.
Target Firmware details
Provides firmware versions for the selected targets. You can
only generate this report after you have entered the target
credentials and HP SUM completes the discovery process on
the Select Targets screen.
Target Installables
Provides a list of updates available for the selected targets or
devices. HP SUM will collect all the information available for
this report on the Review/Install Update screen.
Failed Dependency
Provides details on any failed dependencies that will prevent an
update from succeeding. The Failed Dependency details report
is automatically generated when either Target Firmware details
or Target Installables details reports are generated.
Installed details
Provides details on the updates that HP SUM installed in this
session.

In this chapter, you learned:
Proper upgrade planning is essential to ensure a smooth upgrade process.
An upgrade plan should include a schedule, estimated server downtime, a test plan,
and a rollback plan.
Consult the product documentation for the exact steps and safety procedures when
performing a hardware upgrade.
A server in Standby mode is not off. You need to disconnect all power cords to turn a
server off.
A hot-plug removal is only supported for hot-plug components that meet the
necessary redundancy requirements.
Installation procedures differ somewhat for Xeon and Opteron processors.
You should only install memory into banks associated with an installed processor.
Consult the system memory diagram or use the MCU to determine where to install
memory.
You can use HP Insight Diagnostics to obtain the current memory configuration of a
server.
You should not physically remove a hot-plug drive until it has been removed from the
array.
After installing a drive, you can use ACU to expand an array or perform a RAID
migration.

A P2000 supports array expansion but not RAID migration.

You must shut down the controller before replacing an IO module in a P2000.
You must shut down the entire P2000 enclosure before replacing a controller in a
single-controller configuration.
You do not need to shut down the P2000 enclosure before replacing a controller in a
dual-controller configuration.
You can use SMH or Insight Diagnostics to determine the current firmware versions.
SPP includes the firmware, drivers, and software updates for a ProLiant server.
You can use HP SIM to deploy SPP updates to up to 10 servers, provided that the
VCRM is configured.
You can use HP SUM to deploy SPP updates to up to 100 servers.

Review Questions
1. You have extended a rack-mounted server to perform a replacement procedure. You
realize you need to open another rack-mounted device. What should you do first?
2. Which processor slot must be populated first in a quad-processor Intel-based ProLiant
server?
3. How can you learn which DIMM speeds and capacities are supported by a specific
server?
4. You replace an internal SAS drive. The activity light blinks at a steady 1Hz frequency.
What is the reason?
5. You install a tape drive enclosure in a rack. When will connectors 1 and 3 be used?
6. Under what circumstances would you replace the CompactFlash from the current Smart
Array controller into the new Smart Array controller?
7. How can you ensure that HP SIM deploys the most recent firmware and driver updates?
8. How can you obtain HP SUM?
9. What are three ways to discover servers in HP SUM?

Fill in the blank
1. You should always fill empty server hard drive slots with __________ to ensure proper
airflow.
2. In a server with a(n) _______________ processor, DIMMs are installed on a separate
processor memory board.
3. The _____________________ generates a set of memory upgrade options based on a
server model, current configuration, and desired configuration.
4. You install a non-hot-plug drive in a server. The drive will be detected
_______________.
5. A blinking amber Fault LED on a hot-plug SAS drive indicates that the drive
_______________.
6. Before you can deploy updates with HP SIM, you need to configure
_______________.
7. You can create a _______________ in VCRM to control which updates are applied to
specific servers.

True or false
1. You can power down a server by pressing the Standby button.
2. When replacing a hot-plug hard drive in a server, you must remove the drive from the
array before physically removing it from the server.
3. The DL360 G7 allows you to mix DIMM speeds, but the memory bus will default to the
lowest clock rate.
4. You need to manually configure the drive number when installing a SATA drive.
5. The only way to install the firmware from an SPP is to create a DVD or USB key and use
it to boot the server.
6. HP SUM does not require an agent for remote installations.
7. You can use the HP FTP site as a repository for HP SUM deployments.

Essay questions
You are planning to install more RAM in a server. List and describe the components
that should be included in the upgrade plan.
A fan in a ProLiant server has failed. Explain the circumstances under which you can
replace the fan without shutting down the server.
You are replacing a Smart Array controller in a P2000 system. How does the
replacement procedure differ between a dual-controller system and a single-controller
system?

Research activity
Review the Memory section of the QuickSpecs for the ProLiant DL360 G7 at the location listed
below. Answer the following questions.
a. How many slots can be populated when using UDIMMs?
b. If 2 1066 MHz DIMMs are installed in channel 1 and 5 1333 MHz DIMMs in each
other channel, will an error occur? If not, at what speed will memory access occur?
c. If you want to install 4 DIMMs in a single-processor server, which channels and
slots will you use?
d. What are the limitations for using Low Voltage DIMMs?
e. Can you achieve a higher memory capacity using Quad Rank, Dual Rank, or
Single Rank DIMMs?
f. You need to install 80 GB of RAM in a server with a single processor. Can you
achieve a higher speed using Quad Rank, Dual Rank, or Single Rank DIMMs?

Scenario questions
Scenario: BCD Train
BCD Train has six servers at the main office and two servers at each training center. One
server is an Active Directory domain controller. All other servers are domain members. Two of
the servers at the main office are web servers running SUSE Enterprise Linux. There is a file
and print server and a management server, both running Windows. There is a database
server at each location running Windows Server 2008 R2 and SQL Server 2005. The
database servers all listen on port 1433. Each training center also has a file server.
You need to configure updates to the servers using HP SUM. The updates to the database
servers should be performed only when classes are not in session.

1. Explain the steps that you will take.
2. How can you ensure that users are warned before a server shuts down?


Chapter 14: Troubleshooting

This chapter focuses on some of the things that you need to keep in mind when network
servers and resources stop working properly. No matter what you do, problems are going to
occur. Equipment breaks, errors occur or are introduced through improper configurations.
Troubleshooting is an inevitable part of server support. It is critical to understand the
troubleshooting process and actions to take when problems arise.
During this chapter, we look at the troubleshooting process, including the HP-recommended
6-step troubleshooting methodology and how to apply it to problems that occur. You will be
introduced to some of the tools that help you with troubleshooting and repair. We will also talk
about some common problems that you might encounter.

In this chapter, you will learn how to:

Use the HP 6-step troubleshooting methodology to resolve problems.

Identify troubleshooting tools.
Identify HP utilities that can assist with troubleshooting.
Identify Windows utilities that can assist with troubleshooting.
Recognize device-specific problems and possible solutions.

About Troubleshooting
When you are faced with network or server problems, it is best to have a systematic process in
place for isolating and resolving problems. Otherwise, you could waste your efforts by exploring
unproductive paths or repeating actions that you have already taken.
One of the challenges of troubleshooting a network problem is that the problem may
manifest itself far from the actual problem source.
HP has defined a 6-step troubleshooting methodology to help walk you through the process in
a logical manner. The HP 6-step troubleshooting methodology includes:
1. Gathering information.
2. Evaluating data to determine the problem.
3. Developing an action plan to resolve the issue.
4. Executing the action plan.
5. Testing to ensure that the problem is resolved.
6. Implementing preventive measures.
It is sometimes necessary to repeat a step more than once before you can clearly identify the
problem and its cause (Figure 14-1). In fact, in some cases, you need to go back to the data
collection step if your first attempt to solve the problem fails.

Figure 14-1: Troubleshooting Process

Oftentimes, it is necessary to repeat specific tests and other steps that you have taken to verify
that a problem is indeed resolved. The last step is always to implement preventive measures to
prevent the problem from occurring again.

Gathering information
Information gathering can often take more time than any other single step, but you should look
at this as time well invested. Time spent up front gathering information about the problem pays
off later in the process.
The process usually starts by asking open-ended questions, such as:
When did you first notice the problem?
Did it happen once or does it keep happening?
Can you reproduce the error?
For a recurring problem, are there any common factors, such as time of day or other
activities on the network?
What error messages (if any) were reported?
Once you have a general idea of the problem, you can start asking more specific yes or no
questions. However, because you may be talking to people with limited technical expertise, be
sure to avoid overly technical terms or technical jargon. Also, avoid asking questions that might
lead the person to answer in a way that points you to a cause you already suspect.

When you feel that you have a good understanding of the problem, you should try to reproduce
it yourself. This is also necessary later when you are trying to determine whether you have fixed
the problem.
You also need to collect information about the network and information logged or reported by
servers. This includes:
Hardware components
Installed software (including version and build information)
Error messages
Error or event log data
Device logs can be good sources of information. Warnings, faults, and error reports are written
to the log. You can see what problems have occurred, and how long and how often the
problems have been occurring. Many devices give you the option of forwarding log entries to a
central location. This makes it easier to check and compare log entries.
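The benefit of forwarding logs to a central location can be illustrated with a short sketch that merges entries from several devices into one timeline. The entry format and the messages are made up for the example:

```python
# Merge log entries collected from several devices into one timeline,
# which makes cross-device comparison easier.
from datetime import datetime

def merge_logs(*logs):
    """Each log is a list of (timestamp_string, message) tuples."""
    entries = [e for log in logs for e in log]
    return sorted(entries, key=lambda e: datetime.strptime(e[0], "%Y-%m-%d %H:%M:%S"))

switch = [("2012-03-01 10:05:00", "port 7 down")]
server = [("2012-03-01 10:04:58", "NIC 2 link lost"),
          ("2012-03-01 10:06:10", "failover complete")]

for ts, msg in merge_logs(switch, server):
    print(ts, msg)
```

Seen in order, the server's lost NIC link just before the switch port went down suggests where to look first, which is exactly the kind of correlation that separate per-device logs hide.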
HP makes several information sources available to you to help with information gathering. This
includes information such as:
BIOS setup
System log using Lights Out products
Library and Tape Tools (for Tape Drives)
System Management Homepage
Server health logs
Visual and audible indicators
There are also several operating system and third-party tools that can provide useful
information. The same is true of server software, such as anti-virus and malware checker
software. Some applications, such as backup and restore applications, can provide useful
information in specific areas.

Evaluating data
After you have all of the information, you need to determine how to proceed. You need to ask
yourself what the raw data is telling you, such as what kinds of symptoms you are seeing and
what factors seem to influence the problem. You should be able to:
Identify failing subsystems.
This includes both hardware and software failures.
Determine the components most likely to cause the problem.
Keep in mind that the problem could be caused by multiple components failing,
especially after an event such as a lightning strike. Whenever possible, try to isolate
the problem down to a field replaceable unit (FRU) subsystem.
Field replaceable unit (FRU)
Component subsystem designed to be replaceable in a field environment, such as at a customer site.

If you can determine that there was an event that probably caused the failure, you may also be
able to narrow down the failing components.
Keep in mind that you do have a number of resources available to help you in diagnosing
problems, including:
Subscribers Choice
Customer Advisories
Tech notes
Quick specs
Service Product Announcement
Reference and Troubleshooting Guide
Services Media Library
The HP website is one of the best sources for useful information and links to additional
resources.
Keep in mind that some reported problems might be false positives. In other words,
some reported issues may not be the actual problem that you are trying to resolve.

Developing an action plan

Once you have an idea about what is failing, you should decide what type(s) of corrective
action you should take. You should create an optimized action plan to organize your efforts. An
action plan should include the following steps:
1. Identify the root causes of failures.
2. Identify possible solutions for each.
3. Weigh the cost in money and time for each solution against the value of the solution.
4. Determine whether you can take the necessary actions or if escalation is necessary.
5. Identify the individual steps necessary to implement the solution.
6. Review each step to ensure that each action impacts only one variable at a time.
7. Generate an organized list of steps or actions that you plan to take as your action plan.
8. Inform the customer of your action plan and any progress in resolving the problem.
Escalation
Passing a problem on to personnel who are more qualified to resolve the issue.

It is very important that the steps of your plan allow you to change only one variable at a time
and then test. That way, you know which action corrected the problem.
Never be afraid to escalate a problem if it is beyond your ability to isolate or repair. Many
support organizations have support engineers whose primary job is to handle the tough
problems. Often, a support agreement sets a time limit for how long a problem can continue
before it is escalated. If you have a service contract or service level agreement (SLA), the types
of service performed, response times, and other performance details are defined as part of the
agreement.

service level agreement (SLA)

The portion of a service contract that defines levels of service.

Executing the action plan

Next, you must execute the steps that you have identified in your action plan. Carefully
document what you do and the results of each action (if there are results). Any additional
information that you gain is useful in isolating the problem and its root cause. During this
process, you may find it necessary to revise your action plan as new information becomes
available.
As you execute the steps, be sure that you change only one variable before going to the next
step. Otherwise, if you make multiple changes and resolve the problem, you still might not know
what actually fixed the problem.
When you replace a component, be sure to verify firmware and driver information. It may be
necessary to upgrade (or downgrade) a replacement component to ensure compatibility.

Testing

Testing is an ongoing process while you are executing the action plan steps. With each action,
you must determine whether you have resolved the problem. You must also test your solution
after you complete the action plan to verify that the problem is really fixed.
If you have not fixed the problem, you must go back through the previous steps, adding any
new information that you have gained while executing the action plan.
If the problem is fixed, you should clearly document the final results and inform the customer.
Even though it might be obvious to you, the problem is not really fixed until the customer knows
that it is fixed.

Implementing preventive measures

In many cases, it is appropriate to implement preventive measures that prevent the problem
from occurring again. Preventive measures can take several forms such as:
Additional hardware protection, such as putting a device on an Uninterruptible Power
Supply (UPS).
Periodic maintenance procedures.
New or updated procedures and processes.
User or operator training.
Security measures, such as better anti-malware software or firewalls.
Some preventive measures are obvious. For example, if the root cause is determined to be a
virus, you need to install antivirus software or ensure that virus definitions are being updated
Documentation is an important part of preventive measures. Any repairs should be clearly and
completely documented. You also want to update any history that you maintain for equipment
and, if necessary, update the hardware inventory and configuration documentation. Any special
requirements, especially those introduced or required by the solution, also need to be
thoroughly documented.

Troubleshooting Tools and References

An important part of troubleshooting is knowing and understanding the resources that are
available to you. This includes both troubleshooting tools and available documentation.
Product manuals are a good place to start and have answers to many of the more common
device problems. Support forums are also a good source. There is often a good chance that
someone has already seen and resolved the problem that you are trying to fix.

Product manuals
Product manuals are a good source for product- and model-specific troubleshooting assistance.
Most manuals have context-appropriate tips throughout the manual, but some manuals also
have separate troubleshooting sections. One example of this is the Maintenance and Service
Guide, which includes the following sections:
Illustrated parts catalog
Removal and replacement procedures
Diagnostic tools
Component identification
Another example is the HP ProLiant Servers Troubleshooting Guide, which provides the
following information:
Common problem resolution
Diagnostic flowcharts
Hardware problem troubleshooting
Software problem troubleshooting
Software tools and solutions
HP resources for troubleshooting
Error messages
Instructions for contacting HP
Information relating to specific issues can sometimes be found in white papers available from
the HP website.
One important piece of information that is provided in product manuals is a guide to interpreting
information codes in LED displays.

Guided Troubleshooting website

The Guided Troubleshooting website provides troubleshooting steps for a number of HP
products, including servers and rack infrastructure devices. Figure 14-2 displays the Guided
Troubleshooting website for ML and DL series servers.

Figure 14-2: Guided Troubleshooting Website

Technical support
Sometimes you may find that you need to contact HP technical support to get help with your
problem. Technical support can usually provide you with faster answers than you can find
through your own research. Often, they have already seen the problem and have a solution.
You can contact support personnel through the HP Customer Care site, HP Business Support
Center, HP Support Center, and through other links on the HP website. HP offers several
support options, including:
Online chat
Click on your preferred option. You can also submit new support cases or manage existing
cases (Figure 14-3).

Figure 14-3: Technical Support Links

Technical support asks for information about your product, such as:
Model number
Product number
Serial number
Support personnel have access to references and can escalate a problem to more experienced
personnel. They also have access to a wide array of support tools, including the ability to use
Remote Desktop to take control of a remote computer to work directly on the problem on your
system.

Care Pack service levels

You can purchase Care Packs to enhance and extend the product warranty. Care Packs provide 25
service levels across geographies and technologies. The HP ProLiant Care Pack service
levels include:
Hardware support
Provides a selection from 6-hour Call-to-Repair, 24 x 7, or 13 x 5 coverage with 4-hour on-site response or next-business-day response.
Software support
Provides 24 x 7 coverage for advisory and remedial support.
Integrated hardware and software support services
Includes proactive support with onsite services to maximize productivity and
minimize disruptions.
Startup and implementation

Ensures that servers are installed and configured to provide top performance and reliability.
6-hour Call-to-Repair
An SLA that guarantees that a system is repaired within six hours of the initial call.

HP utilities
HP servers include a number of utilities that allow you to gather data about the problem. You
have already been introduced to several of these, including Array Configuration Utility (ACU)
and Integrated Lights-Out (iLO). In this section, we look at these with a focus on
troubleshooting. We will also look at several other utilities that can be used to gather
information about a problem.

System Insight Display

The System Insight Display provides an easy front view of system component health and at-a-glance server diagnostics. It is available in most ProLiant servers.
The front panel health LEDs only indicate the status of the current hardware. In some situations,
HP SIM might report server status differently than the health LEDs because the software tracks
more system attributes. The System Insight Display LEDs identify components that are
experiencing an error, event, or failure.
The System Insight Display LEDs are described in Figure 14-4.

Figure 14-4: ProLiant DL360 G7 System Insight Display

The PS1 and PS2 LEDs correspond to the two power supplies. The Power Cap LED (1)
indicates the system's status with regard to the power cap settings. The status of the built-in
NICs is indicated by the LEDs referenced by (2). Advanced Memory Protection (AMP) status is
referenced by (3). The Interlock LED turns amber if the PCI riser board is not seated properly.
The LEDs numbered 1-9 correspond to DIMM slots. The processor LEDs are located between
the two sets of DIMM LEDs. The LEDs along the bottom indicate the status of system fans.
The LED combinations and their meanings are described in Table 14-1.
Table 14-1: ProLiant DL360 G7 LED Combinations
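The System Insight Display lends itself to simple automation. As a rough illustration (the LED names and the mapping below are assumed for the sake of the example, not taken from Table 14-1), a script could translate lit amber LEDs into the subsystems to investigate:

```python
# Hypothetical mapping from System Insight Display LED names to the
# subsystem each one implicates; a lit amber LED means "investigate".
LED_SUBSYSTEM = {
    "PS1": "power supply 1",
    "PS2": "power supply 2",
    "INTERLOCK": "PCI riser board seating",
    "AMP": "Advanced Memory Protection / DIMMs",
}

def components_to_check(lit_amber_leds):
    """Return the subsystems implicated by the currently lit amber LEDs."""
    return [LED_SUBSYSTEM.get(led, f"unknown LED {led!r}") for led in lit_amber_leds]

print(components_to_check(["PS2", "INTERLOCK"]))
```

In practice, the authoritative LED-to-component mapping is the one in the server's Maintenance and Service Guide.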

HP Insight Diagnostics Utility

The HP Insight Diagnostic utility provides diagnostic information to help administrators verify
server installations, troubleshoot problems, and validate repairs. It is available in Online and
Offline editions.

Online Edition
The HP Insight Diagnostics Online Edition is a web-based application that captures

configuration and performance data. It is available in a Windows version and a Linux version
and is part of the Service Pack for ProLiant (SPP) bundle.
You can launch the HP Insight Diagnostics Online Edition from the Windows Start menu, from
HP SIM, or from HP SMH.
When you first launch the HP Insight Diagnostics Online Edition, it scans the system, as shown
in Figure 14-5.

Figure 14-5: HP Insight Diagnostics Online Edition - Scanning Hardware

The Survey tab, shown in Figure 14-6, displays information on various system components.

Figure 14-6: HP Insight Diagnostics Online Edition - Survey Tab

You can filter the information shown by choosing a category. The available categories are

listed in Table 14-2. You can also choose whether to show summary or advanced information.
Table 14-2: Categories




All
Displays information about the system and all subsystems.
Overview
Displays general information about the system.
Architecture
Displays the type of bus as well as BIOS and PCI-related information.
Asset Control
Displays the product name, serial number, asset tag, and system identification number.
Communication
Displays information about parallel ports, serial ports, USB ports, and NICs.
Firmware
Displays information about firmware and BIOS versions for the system controllers and devices.
Graphics
Displays information about the graphics subsystem, including graphics card, graphics mode, ROM, and video memory.
Input devices
Displays information about the type of keyboard, mouse, and other input devices.
Internal conditions (not available on all servers)
Displays health information, including the fan, temperature, power supply, and health LED information.
Memory
Displays information about RAM.
Miscellaneous
Displays information obtained from CMOS, BIOS data area, Interrupt Vector table, TPM, and diagnostics component information.
Operating system
Displays information on the OS (available in Online mode only).
Remote Management (not supported on all servers)
Displays information about iLO.
Resources
Displays information about system resources, real-time clock, and operating settings for I/O and IRQs.
Storage
Displays information about storage controllers and storage devices.
System
Displays information about the system ROM, product type, processor type, speed, and coprocessor.

You can schedule a system survey to be performed periodically. You can also compare one
survey to another to note any differences.
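The survey comparison can be pictured as a simple dictionary diff. The sketch below (category names and values are illustrative) reports every category whose value changed between two snapshots:

```python
def survey_diff(old, new):
    """Compare two survey snapshots (category -> value) and report changes."""
    changed = {}
    for key in set(old) | set(new):
        if old.get(key) != new.get(key):
            # Record the before/after pair; None marks a missing category.
            changed[key] = (old.get(key), new.get(key))
    return changed

before = {"System ROM": "P68 05/05/2011", "Memory": "16 GB"}
after_ = {"System ROM": "P68 08/01/2012", "Memory": "16 GB"}
print(survey_diff(before, after_))
```

A diff like this surfaces firmware updates or hardware changes between two scheduled surveys.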
The Diagnose tab, shown in Figure 14-7, can be used to run diagnostics on power supplies
and logical drives that are attached to an HP Smart Array Controller.

Figure 14-7: HP Insight Diagnostics Diagnose Tab

To run a diagnostic test on one or more components, check the components that you want to
test and click Diagnose. The system performs diagnostics, as shown in Figure 14-8.

Figure 14-8: HP Insight Diagnostics - Diagnosis Status

You can view the progress and results of a diagnosis on the Status tab (Figure 14-9). You can
also cancel running a test or perform a retest from this tab.

Figure 14-9: HP Insight Diagnostics - Testing Complete

The possible test results and their meanings are described as follows:
Passed
The device is operating within specifications.
Canceled
The test was canceled.
Failed
The device failed the test and additional testing should be performed.
Further troubleshooting required
A communication problem exists.
Abnormal termination
The test terminated abnormally.
When testing a logical drive, information is shown for each physical drive in the volume, as
shown in Figure 14-10. You can cause the identity light on a drive to flash by clicking the Start
Drive Identity LED. You can stop the light from flashing by clicking the Stop Drive Identity
LED button.

Figure 14-10: Logical Drive Status Summary

You can view information about the history of diagnostic tests by examining the Diagnosis Log,
as shown in Figure 14-11.

Figure 14-11: Diagnosis Log

If any errors have occurred during testing, they are listed on the Error Log (Figure 14-12).

Figure 14-12: HP Insight Diagnostics Error Log

The Integrated Management Log (IML), shown in Figure 14-13, lists any system errors
discovered during POST or by the System Management driver during server operations. The
IML entries have dates, severity levels, and error counts. The following severity levels are used:
Information
General information about a system event.
Repaired
The issue resulting in the entry has been repaired.
Caution
A non-fatal error condition has occurred.
Critical
A device has failed.
Each IML entry also has a class that identifies the subsystem with which the event is associated.

Figure 14-13: IML
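Ranking the severity levels lets a script pull out only the entries at or above a chosen severity. A minimal sketch (entry shapes and class names here are illustrative):

```python
# Rank the IML severity levels so they can be compared.
ORDER = {"Information": 0, "Repaired": 1, "Caution": 2, "Critical": 3}

def entries_at_or_above(entries, severity):
    """Return entries whose severity is at least the given level."""
    return [e for e in entries if ORDER[e[0]] >= ORDER[severity]]

# Hypothetical IML entries shaped as (severity, class, description).
iml = [
    ("Information", "Maintenance", "Server reset"),
    ("Caution", "Environment", "Temperature threshold exceeded"),
    ("Critical", "Power", "Power supply failure"),
]
print(entries_at_or_above(iml, "Caution"))
```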

You need to manually change the severity level to Repaired by selecting an item and clicking
Set Selected Items To Repaired. Figure 14-14 displays a log with one item set to Repaired.

Figure 14-14: Repaired Problem

You can also add a note to an entry by clicking Add Maintenance Note. You can enter text in
the dialog shown in Figure 14-15 to leave information such as actions taken, additional
information needed, and so forth.

Figure 14-15: Adding a Maintenance Note

You can also save or clear the log.

The System Event Log (SEL) shown in Figure 14-16, records events detected by system
sensors. Each entry includes the following information:
Sensor name
Sensor type
Initial occurrence
Last occurrence
SEL events are only generated on systems that have an Intelligent Platform Management
Interface (IPMI) provider installed.
Intelligent Platform Management Interface (IPMI)
A standardized instrumentation architecture that allows components from various vendors to report status information to a
management application.
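The initial occurrence, last occurrence, and count fields suggest how repeated sensor events collapse into a single SEL entry. The following sketch (event shapes and sensor names are assumed for illustration) shows that aggregation:

```python
# Collapse repeated sensor events into one entry per (sensor, type) that
# records the initial occurrence, last occurrence, and a running count.
def aggregate_events(events):
    """events: iterable of (sensor_name, sensor_type, timestamp) tuples."""
    entries = {}
    for name, stype, ts in events:
        key = (name, stype)
        if key not in entries:
            entries[key] = {"initial": ts, "last": ts, "count": 1}
        else:
            entries[key]["last"] = ts
            entries[key]["count"] += 1
    return entries

log = aggregate_events([
    ("Fan 2", "fan", "2012-06-01 10:00"),
    ("Fan 2", "fan", "2012-06-01 10:05"),
    ("Temp 1", "temperature", "2012-06-01 10:07"),
])
print(log[("Fan 2", "fan")])
```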

Figure 14-16: System Event Log

You can clear or save the entries in the System Event Log.

Offline Edition
The HP Insight Diagnostics Offline Edition is accessed by booting the server using SmartStart.
Click Maintenance on the operation selection screen (Figure 14-17).

Figure 14-17: SmartStart Operation Menu

Click HP Insight Diagnostics on the maintenance operation selection screen (Figure 14-18).

Figure 14-18: SmartStart Maintenance Operation Menu

HP Insight Diagnostics scans the system. After the system is scanned, the System Survey
displays the results. The Offline edition has an additional tab, the Test tab, shown in Figure 14-19.

Figure 14-19: HP Insight Diagnostics - Test Tab

The Test tab allows you to select one or more components to test. You can click All Devices to
run a test on all devices. You can run an interactive test or an unattended test. An interactive
test requires user input. An unattended test does not require user input. As you can see in
Figure 14-20, the interactive tests are shown in blue when Interactive is selected.

Figure 14-20: HP Insight Diagnostics Quick Test

You can also choose to perform the following types of tests:

Quick Test
A predetermined test script that tests a sample of the component's functionality.
Complete Test

A predetermined test script that fully tests a component's functionality.

Custom Test
A test that allows you to select which tests are run and enter test parameters.
You can also select how many iterations of the tests to run or the amount of time that a test
should run.
Figure 14-21 displays an example of how to configure a custom test. You expand a subsystem
on the left, expand a component, check a test, and then define parameters.

Figure 14-21: HP Insight Diagnostics - Custom Test

Select the components that you want to test and click Begin Testing to execute a test. The test
status displays on the Status tab, as shown in Figure 14-22.

Figure 14-22: HP Insight Diagnostics - Status Tab

You can click Cancel to stop the testing. When the testing is complete, the results are shown in
Figure 14-23. The Status column has the same values as the diagnostic tests.

Figure 14-23: HP Insight Diagnostics - Test Status

After testing, the Log tab displays any errors that occurred, including the number of times the
failure occurred, the error code, and the recommended steps to take to repair the problem, as
shown in Figure 14-24. In this example, the optical drive registered two failures.

Figure 14-24: HP Insight Diagnostics - Error Log

The Test Log displays the number of times each test has been run, the number of times it has
failed, the duration of the test, and the last time the test was run, as shown in Figure 14-25.

Figure 14-25: HP Insight Diagnostics - Test Log

iLO allows you to access troubleshooting information remotely. Troubleshooting information is
available at the following locations:
System Information
iLO Event Log
Integrated Management Log

System Information
The System Information node allows you to view status information for various subsystems and
devices. The Summary tab is shown in Figure 14-26.

Figure 14-26: System Information - Summary

If a subsystem had a warning or failure, you can gather additional information about the failing

component by accessing the tab related to that subsystem. The Fans tab is shown in Figure
14-27. As you can see, the status and speed of the fans is reported.

Figure 14-27: System Information - Fans

The Temperatures tab allows you to view the ambient temperature and the temperature of
various components, as shown in Figure 14-28. This allows you to determine whether you have
a cooling problem and which component or components are affected.

Figure 14-28: System Information - Temperature

The Power tab, shown in Figure 14-29, allows you to view the status of each voltage regulator
module (VRM), the present power consumption, the status of each power supply, the power
supply redundancy mode, and the firmware version of the power microcontroller.

Figure 14-29: System Information - Power

The Processors tab allows you to view information about each CPU, as shown in Figure 14-30.

Figure 14-30: System Information - Processors

The Memory tab (Figure 14-31) allows you to see the memory that is installed.

Figure 14-31: System Information - Memory

The NIC Information tab, shown in Figure 14-32, displays the MAC address, device type, and
network port for each integrated NIC.

Figure 14-32: System Information - NIC Information

The Drives tab, shown in Figure 14-33, allows you to view the status for each drive, the product
ID of the drive, and whether the drive's UID light is on or off.

Figure 14-33: System Information - Drives

iLO Event Log

The iLO Event Log allows you to view events related to iLO. The severity, class, last update,
initial update, count, and description are logged, as shown in Figure 14-34. You can use the
iLO Event Log to audit configuration changes, remote access, and server power resets.

Figure 14-34: iLO Event Log

Integrated Management Log

You can also view the Integrated Management Log from within iLO, as shown in Figure 14-35.

Figure 14-35: Integrated Management Log

The Diagnostics screen, shown in Figure 14-36, reports the results of the self-tests that iLO
performs. It also allows you to reset iLO, generate a Non-Maskable Interrupt (NMI), and view
information about the ROM version and date for active ROM and backup ROM.

Figure 14-36: iLO - Diagnostics

ACU diagnostics
The Array Configuration Utility (ACU) has a Diagnostics tab that allows you to run diagnostic
reports to troubleshoot problems related to DAS. Some versions of ACU also support a
SmartSSD Wear Gauge report that allows you to assess the estimated remaining life of an
SSD. There are online and offline editions of ACU.
The Diagnostics/SmartSSD tab is shown in Figure 14-37.

Figure 14-37: Array Configuration Utility - Diagnostics

Click Run Array Diagnostic Reports and select a Smart Array controller to display a list of
reporting options, as shown in Figure 14-38.

Figure 14-38: Array Configuration Utility - Diagnostic Report Options

Click View Diagnostic Report to generate a diagnostic report like the one shown in Figure 14-39.

Figure 14-39: Array Configuration Utility Diagnostic Report

The report contains detailed information about the storage media, controller, cache
configuration, surface status, and monitor and performance violations. Any errors that occur are
consolidated in the Consolidated Error Report section at the top of the report.
Clicking Generate Diagnostic Report generates the same information, but does not display
the report. Instead, it prompts you to save the report.

Windows utilities
Windows Server also has a number of utilities you can use to troubleshoot problems. These include:

Event Viewer
Windows Memory Diagnostics
Task Manager
Performance Monitor
We will look at the first two in this chapter. Task Manager and Performance Monitor are covered
in Chapter 15.

Event Viewer
Event Viewer, shown in Figure 14-40, allows you to view various logs created by Windows and
by certain applications.

Figure 14-40: Event Viewer

The Windows logs are located in the Windows Logs folder. They are described in Table 14-3.
Table 14-3: Windows Event Logs




Application
Displays entries logged by various applications.
Security
Displays security audits logged by the operating system.
Setup
Displays events related to Windows installation.
System
Displays entries logged by the operating system and drivers.
Forwarded Events
Displays entries sent by other computers. You need to start the Windows Event Collector service and create a subscription to receive events from another computer.

Each log has properties that define the path at which the log file is stored, the maximum log
size, and the action to take when that size is reached. Figure 14-41 displays the properties for

the Security log.

Figure 14-41: Security Log Properties

In all logs except the Security log, each event entry has a level, date and time, source, event ID, and
task category. Figure 14-42 displays entries in the System event log.

Figure 14-42: System Log Entries

The following levels apply to entries:

Information
Displays information-only entries that usually do not require any action.
Warning
A potential problem has been detected, but you do not necessarily need to take any action.
Error
An error has been detected. If this error continues, you may need to take some action to correct the error.
Critical
A critical error has occurred that may require corrective action.
Event properties show more detailed information about the event.
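As a small illustration of working with these levels, the sketch below (sample sources and event IDs are illustrative) extracts the sources that have logged Error or Critical entries:

```python
# Sample entries shaped like Event Viewer rows: (level, source, event_id).
events = [
    ("Information", "Service Control Manager", 7036),
    ("Error", "disk", 7),
    ("Warning", "Time-Service", 36),
    ("Error", "disk", 7),
]

def sources_with_errors(entries):
    """Return the set of sources that logged Error or Critical entries."""
    return {src for level, src, _ in entries if level in ("Error", "Critical")}

print(sources_with_errors(events))
```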
The Security log, shown in Figure 14-43, does not assign a level to events. Instead, each event
is logged as an audit.

Figure 14-43: Security Log Entries

You can expand Custom Views to view events that are filtered based on whether they are
Administrative Events (Figure 14-44) or events related to a specific role.

Figure 14-44: Administrative Events

Attaching a task to an event

Some events are so important that you need to perform an action when the event occurs. To do
so, select the event and choose Action | Attach Task To This Event. The Create Basic Task
Wizard starts, as shown in Figure 14-45. The name of the event is filled in automatically, but
you can rename it. You can also fill in a description of the event.

Figure 14-45: Create Basic Task Wizard

After you click Next, the log, source, and event ID are displayed, as shown in Figure 14-46.
These are associated with the selected event. If you launched the Create Basic Task Wizard
without first selecting an event, you need to fill these in.

Figure 14-46: Create Basic Task Wizard - Event Information

The next screen prompts you for the action to take in response to the event. As shown in Figure
14-47, the supported actions are to start a program, send an e-mail, or display a message.

Figure 14-47: Create Basic Task Wizard - Action

The next screen prompts you for the information needed to describe the action. For example, if
you select Send e-mail, the screen shown in Figure 14-48 would be displayed to allow you to
specify the sender, recipient, subject, text, attachment, and SMTP server used to send the

Figure 14-48: Create Basic Task Wizard - Email Information

If you select Start a program, you are prompted for the path to the program or script, arguments
to pass to the program or script, and the working directory for the program (Figure 14-49).
working directory
The file system path that the application considers its default directory.

Figure 14-49: Create Basic Task Wizard - Program Information

If you select Display a message (not shown), you are prompted for the title and message that
should be displayed in a popup window when the event occurs.
The final screen displays a summary of the configuration, as shown in Figure 14-50. When you
click Finish, the task is created. You can display the Properties dialog to make additional
configuration changes.

Figure 14-50: Create Basic Task Wizard - Confirmation

Memory Diagnostic Tool

Windows Memory Diagnostic can analyze your computer and check whether there is a problem
with the memory. You can launch Windows Memory Diagnostic from Administrative Tools. The
first screen, shown in Figure 14-51, allows you to choose whether to restart the computer to
check for memory problems immediately or to do so the next time the computer restarts.

Figure 14-51: Windows Memory Diagnostic

If you want to change the type of test run, the number of times you want to repeat the test, or the
cache setting for the test, press the F1 key when the computer starts. After you configure the
Memory Diagnostic Tool, press the F10 key to begin the test.

Troubleshooting System Failures

Now that you have an understanding of general troubleshooting techniques and are familiar
with some of the tools available, we should look at some common troubleshooting scenarios.
First, we will look at a checklist of some quick checks that you can make to troubleshoot
common problems.

Quick checks
A large number of complaints turn out to be simple problems with quick fixes. Even though
these might seem obvious, you should keep a checklist as a reminder. It can be easy to
overlook the obvious when you are trying to troubleshoot problems quickly and efficiently. Many
of these relate back to the common problems mentioned previously.
User settings
Try to find out if users or internal support or administrative personnel have made any
configuration changes or added or removed any components. Even small changes, if
they are not made correctly, can lead to problems.
Software and firmware
Verify that software and firmware is up-to-date. This includes, but is not limited to,
operating systems and device drivers.
Ventilation
Blocked or covered vents can cause equipment to overheat. This can result in
intermittent failures and eventually lead to damage. Accumulated dust inside systems
can also become a problem. Dust acts as an insulator and prevents components from
dissipating heat properly.
Power
Make sure that devices are plugged in and powered on. If a device does not turn on,
do not immediately assume that it has failed. Check the outlet and make sure that the
device has power. It might be something as simple as a circuit breaker that has been
tripped, which indicates an interrupted circuit.
A tripped circuit breaker can be a symptom of a bigger problem. You could have too
many devices connected to a circuit, or a device that is beginning to fail could be drawing

excessive current.
Modules and expansion cards
Plug-in modules and internal expansion cards can sometimes become loose. If you
suspect that this might be the problem, power off the device, reseat the module or
card, and power the device back on to test it.
Cables
Make sure that cables are attached snugly and are not damaged. Even if a cable
looks fine from the outside, it might have a broken wire internally, especially if it is
routed through a high-traffic area.
Obviously, these suggestions cannot fix all of your problems, but they provide quick solutions
for some common failures.
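The quick checks above can be pictured as an ordered checklist that a technician, or a script, walks through and records failures from. A minimal sketch, with stand-in checks:

```python
# Each quick check is a (name, predicate) pair; a predicate returns True
# when the check passes. The checks below are stand-ins for illustration.
def run_quick_checks(checks):
    """Run each check in order and return the names of those that failed."""
    return [name for name, passed in checks if not passed()]

checks = [
    ("device powered on", lambda: True),
    ("vents unobstructed", lambda: False),   # simulated failure
    ("cables seated", lambda: True),
]
print(run_quick_checks(checks))
```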

Troubleshooting flowcharts
HP has designed several flowcharts that can be followed to systematically diagnose some
common failures. The flowchart for beginning problem diagnosis is shown in Figure 14-52. This
flowchart references other flowcharts that we will discuss throughout this section.

Figure 14-52: Start Diagnosis Flowchart

Power-on problems
You should use the Power-on problems flowchart (Figure 14-53) for help with power-on
problems. Typical symptoms of a power-on problem are:
The server does not power on.
The system power LED is off or amber.
The health LED is red, flashing red, amber, or flashing amber.
Power problems can be caused by the following issues:
Improperly seated or faulty power supply
Loose or faulty power cord
Power source problem
Improperly seated component or interlock problem

Figure 14-53: Power-On Problems Flowchart

The Power-on problems flowchart requires you to examine health LEDs. You need to consult
the server's documentation for the location of the referenced LEDs.

POST problem diagnosis flowchart

The Power-On Self Test (POST) is the first step that occurs during server boot. A POST is
finished when the system attempts to access the boot device. If the POST does not complete or
completes with errors, you should use the flowchart shown in Figure 14-54 to guide your
troubleshooting. You can generally find information about POST errors in product
documentation or on the vendor's website.
A POST problem is always related to a hardware issue, such as an improperly seated
or faulty internal component or a faulty KVM or video device.

Figure 14-54: POST Problems Flowchart

crash dump
A file created during a system crash or blue screen that contains the contents of the memory when the crash occurred. This
is also called a memory dump.
blue screen of death (BSOD)
The screen that appears after a STOP error occurs in Windows. The error code and other information are displayed on the screen.
STOP error
An error that the operating system cannot recover from. Also called a fatal error.

Operating system boot failure

If a server does not boot to the operating system, it might be due to a hardware failure, such as
the inability to access the boot drive, an incorrectly configured boot order, or an operating
system problem.

Windows configuration errors

If the server is running Windows Server, you can use Windows boot options to diagnose
whether the problem is due to a driver that is loaded during startup or a configuration problem.
Windows boot options are described in Table 14-4.
Table 14-4: Boot Menu Options

Boot option
Description
Last Known Good Configuration
Starts the server using the registry settings that were loaded the last time a login was successful.
Safe mode
Starts the server with the minimum device drivers necessary to boot, the standard VGA video driver, and no startup applications.
Safe mode with networking
Starts the server in safe mode, but also loads the drivers needed for network support.
Safe mode with command prompt
Starts the server in safe mode, but displays a command prompt instead of the desktop.

If these boot options do not work or you cannot get to the boot menu, you need to take the
additional steps shown in the operating system boot problem flowchart.

Operating system boot failure flowchart

The operating system boot problems flowchart, shown in Figure 14-55, is used to diagnose a
server that does not boot to any operating system boot mode or does not boot SmartStart.
Possible causes for failure include:
Corrupt operating system
Hard drive subsystem problem
Incorrect boot order settings
If you are running iLO Advanced Pack, you can use Remote Console and mount
SmartStart as a virtual DVD drive to attempt to boot to SmartStart.

Figure 14-55: OS Boot Problem Flowchart

Fault indications flowchart

If a server boots, but a fault event is indicated in the IML or the health LED is red or amber, you
need to use the fault indications flowchart, shown in Figure 14-56, to troubleshoot the problem.

Figure 14-56: Fault Indications Flowchart

General diagnosis flowchart

The General diagnosis flowchart, shown in Figure 14-57, provides a path to take when
troubleshooting a problem that cannot be solved by following other flowcharts or if the other
flowcharts do not resolve the problem.

Figure 14-57: General Diagnosis Flowchart

Hardware problems
Hardware problems can occur as a result of an upgrade or replacement or failed hardware
component. The steps for troubleshooting are different for each scenario.

New hardware component

If you have recently installed a hardware component, you should start your troubleshooting
using the following checklist:
1. Verify that the new component is supported on the server.
2. Review the hardware release notes for possible known issues.
3. Verify that the hardware is installed properly and that all requirements are met. Common
problems include:

1. Incomplete population of a memory bank.

2. Installing a processor without a corresponding PPM.
3. Improper termination or ID settings for a SCSI device.
4. Connecting the data cable, but not the power cable.
4. Verify that memory, I/O, and IRQ conflicts do not exist.
5. Make sure that there are no loose connections.
6. Verify that cables connect to the correct place and that cable lengths do not exceed limits.
7. Make sure that other components were not accidentally unseated during installation.
8. Ensure that all firmware and software updates have been completed.
9. Run RBSU to make sure that all system components recognize the changes.
10. Verify that any switch settings are correct.
11. Make sure that all boards are properly installed.
12. Run HP Insight Diagnostics.
13. Uninstall the new hardware.

Network controller problems

If the server cannot connect to the network, the problem might be caused by a failing network
controller, a bad driver, an incorrect IP configuration, or a problem somewhere else on the
network. You can troubleshoot the problem using the following steps:
Check the network controller LEDs.
Make sure that the network cable is securely connected.
Replace the network cable with a known working cable.
Verify that the server and operating system support the controller.
Verify that the controller is enabled in RBSU.
Check the PCI Hot Plug power LED to make sure that the PCI slot is receiving power.
Verify that the ROM is up-to-date.
Verify that the NIC drivers are up-to-date.
Verify that a valid IP address has been assigned.
Run HP Insight Diagnostics.
These are the steps to take, but not necessarily the order in which they should be taken.
For example, you might find it easiest to check the IP configuration first and then start looking at
hardware or driver problems.
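One of these checks, verifying that a valid IP address has been assigned, is easy to automate. In the sketch below, an address in the 169.254.0.0/16 link-local range is flagged because it usually means the NIC fell back to APIPA instead of reaching a DHCP server (the classification strings are our own):

```python
import ipaddress

def diagnose_ip(addr_string):
    """Classify an assigned address as a rough first NIC-troubleshooting step."""
    try:
        addr = ipaddress.ip_address(addr_string)
    except ValueError:
        return "invalid address"
    if addr.is_link_local:
        # 169.254.0.0/16: the OS self-assigned an address (APIPA).
        return "link-local (APIPA): DHCP likely unreachable"
    if addr.is_loopback:
        return "loopback only: no NIC address assigned"
    return "address looks valid"

print(diagnose_ip("169.254.10.7"))
```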

Unknown hardware problem

If you have not recently installed hardware and a problem is occurring and you cannot isolate
the subsystem or component causing the problem, you need to take the following steps:
1. Power down the system and disconnect all power cables.

2. Reduce the system to its minimum configuration by uninstalling the following
components:

a. All DIMMs, except those necessary to boot the server and successfully pass
POST. This is either one DIMM or two, depending on the server model.
b. All additional cooling fans, except those necessary to boot the system.
c. All power supplies except one.
d. All hard drives.
e. All optical drives.
f. All optional mezzanine cards.
g. All expansion boards.
3. Keep the video card and monitor connected.
4. Reconnect power, and power on the system.
a. If video does not work, check for video problems, such as the monitor cord
and cables, KVM issues, and video drivers.
b. If the system does not boot, the problem is caused by one of the minimum
configuration components.
c. If the system does boot, begin replacing components one-by-one until the
system fails.
Consult the server documentation to determine the minimum configuration.

Operating system and application problems

A number of problems can arise due to operating system and application issues. Performing
operating system and application troubleshooting could be a course of its own. However, we
will look at a few common causes of problems.

Device drivers
A failing or incompatible device driver is a common source of application or service problems.
One cause of device driver failure is replacing an OEM device driver with a more generic driver
from the Microsoft website.
You can typically resolve a device driver problem by either installing a new device driver from
the latest SSP or by rolling back to a previous driver.

Service configuration
A Windows service or Linux daemon might fail to start or not function correctly. Some possible
reasons include:
Service is disabled.
A service on which the service depends cannot start.
The account under which the service is configured to run does not have sufficient permissions.
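The dependency case above can be illustrated with a small sketch. The `find_blocked` helper is our own, not a Windows API, and the dependency map is a hypothetical example (it mirrors the HP ProLiant services discussed later in this chapter); it reports services that cannot start because something in their dependency chain is disabled:

```python
# Sketch: detecting services that cannot start because a dependency
# (direct or transitive) is disabled. Not a real Windows API.

def find_blocked(services, deps, disabled):
    """Return services blocked by a disabled dependency."""

    def dependency_disabled(name, seen):
        if name in disabled:
            return True
        if name in seen:                     # guard against circular chains
            return False
        seen.add(name)
        return any(dependency_disabled(d, seen) for d in deps.get(name, ()))

    return {s for s in services
            if s not in disabled and dependency_disabled(s, set())}

deps = {
    "HP ProLiant System Shutdown Service": ["HP ProLiant Health Monitor Service"],
    "HP ProLiant Health Monitor Service": [],
}
disabled = {"HP ProLiant Health Monitor Service"}
print(find_blocked(deps, deps, disabled))
```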

Shared Libraries
Updates can sometimes cause problems with shared libraries. Both Windows and Linux use a
number of shared libraries, as do many applications. With some application suites, like
Microsoft Office, different applications can share the same files in the shared library.
Shared libraries
Files shared by executables, but loaded at load or run time as necessary.

Shared libraries can get updated with fixes and service packs. Also, files get overwritten when
applications are updated to a newer version. Occasionally, these can lead to compatibility
problems. When such problems arise, you should check with the manufacturer for possible updates or patches.
You also need to be careful when uninstalling individual programs. It is possible to accidentally
delete a shared library file that is needed by another application.

Registry issues
The Windows registry stores configuration information about the system itself and about the
user desktop for each user who has logged in. The registry is a hierarchical database of hives
and keys.
hive
A container that stores keys or other hives.
key
A name-value pair that typically holds a setting for a parameter.

The top-level hives of the registry are described in Table 14-5.

Table 14-5: Top-Level Registry Hives




HKEY_CLASSES_ROOT
Stores information about installed components and shared files, such as file associations.

HKEY_CURRENT_USER
Stores data related to the current user's configuration.

HKEY_LOCAL_MACHINE
Stores configuration settings that apply to all sessions, including hardware and software settings.

HKEY_USERS
Stores configuration information for each user who has an account on the server and for the default user profile, which is used when a user logs on for the first time.

HKEY_CURRENT_CONFIG
Stores the configuration settings for the current session.

Sometimes a problem is caused by a registry configuration setting. Some types of problems can only be resolved by modifying a registry setting. Before you modify a registry setting, you should perform a backup of the registry. Incorrectly modifying a registry setting can cause the operating system to fail.

You can also back up the registry by launching Regedit and choosing File | Export. The file is
saved with a .reg extension. If you need to restore a registry saved in this manner, open
Regedit and run File | Import.
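Regedit's File | Export corresponds to the command-line `reg export` utility, which can be scripted. The sketch below builds such a command; the registry key and file paths are illustrative assumptions, and the command is only executed when actually running on Windows:

```python
import subprocess
import sys

def backup_registry_key(key, out_file):
    """Build (and, on Windows, run) a reg.exe export command for one key."""
    cmd = ["reg", "export", key, out_file, "/y"]   # /y overwrites an existing file
    if sys.platform == "win32":                    # reg.exe exists only on Windows
        subprocess.run(cmd, check=True)
    return cmd

# Hypothetical key and destination, for illustration only.
print(backup_registry_key(r"HKLM\SOFTWARE\HP", r"C:\backup\hp.reg"))
```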

In this chapter, you learned:
The steps in the HP 6-step troubleshooting methodology are:
1. Gathering information.
2. Evaluating data to determine the problem.
3. Developing an action plan to resolve the issue.
4. Executing the action plan.
5. Testing to ensure that the problem is resolved.
6. Implementing preventive measures.
You should consult the product manuals for guidance specific to the server that you are troubleshooting.
You need to obtain all necessary information about the server and the problem
before contacting technical support.
System Insight Display provides LEDs that alert you to predictive and actual failures.
HP Insight Diagnostics allows you to run tests for various components.
The Online Edition tests disk drives and power supplies only.
The Offline Edition tests all components.
The IML lists system errors discovered during POST or by the System Management
driver during server operation.
The SEL records events detected by system sensors.
iLO allows you to remotely view status information about various components, as
well as the IML and the iLO Event log.
ACU allows you to run diagnostic reports on storage volumes.
Windows Event Viewer allows you to view log entries created by the operating
system and certain applications.
The Windows Memory Diagnostic tool tests a server's memory.
Troubleshooting flowcharts help you to systematically diagnose common failures.

Review Questions
1. You have replaced a component in a failed server. What should you do next?
2. The LED for Processor 1 is amber. What should you do next?
3. Which tool can you use to run diagnostic tests for the fans and power supplies on a server?

4. Which tool can you use to obtain the current temperature reading for each component in a server?
5. Which event log contains entries made by various installed services and applications?
6. What must you do to run the Memory Diagnostic Tool?
7. A ProLiant DL360 G7 server completes POST, but cannot locate the operating system
files. What should you do?
8. A server does not boot using its normal configuration. You can start the server with a
minimal configuration. What should you do next?

Fill-in-the-blank questions

1. A(n) _________________ is a component that can be replaced at a customer site.
2. The _______________ provides common problem resolution and diagnostic flowcharts
for server problems.
3. A(n) ________________ provides service levels beyond those covered by a standard warranty.
4. The _________________ provides LEDs that indicate the status of each DIMM slot, each
processor, each power supply, and each fan.
5. A flashing amber AMP LED indicates _________________.
6. The _______________ contains events detected by system sensors.
7. The _______________ allows you to view the status of each voltage regulator module
and present power consumption.
8. The ________________ displays audits logged by the Windows operating system.

True/false questions

1. After executing an action plan, it is sometimes necessary to gather more information.
2. Never escalate a problem if you have an SLA.
3. It is not necessary to gather any information before contacting technical support.
4. An amber DIMM LED does not always mean that a DIMM has failed.
5. You can use HP Insight Diagnostics to set the status of an error to Repaired.
6. You can view the IML from HP Insight Diagnostics and from iLO.
7. HP Insight Diagnostics Online Edition allows you to run customized memory tests.
8. A server does not boot. The first step you should take is to boot to SmartStart and erase
the disk.
9. A POST problem can be caused by a hardware component or a corrupt operating system.

Essay questions

1. List the steps in the 6-step troubleshooting methodology and give an example of each.
2. Compare the Offline Edition of HP Insight Diagnostics with the Online Edition. Give an
example of when you would use each.
3. Compare the IML, the SEL, and the Windows System event logs.

Research activity
You will use the Guided Troubleshooting website to troubleshoot a problem.
Scenario: FI-Print
FI-Print has a DL360 G7 server running Windows that crashes intermittently. Use the Guided
Troubleshooting website to learn how to diagnose the problem.
1. Navigate to
2. Click Servers.
3. Click HP ProLiant DL Servers.
4. Click HP ProLiant DL360 Server.
5. Click HP ProLiant DL360 G7 Server.
6. Click Diagnosis/Troubleshooting.
7. Click Server Crash/Reboot.
8. Click Server Crash within the Operating System.
9. Click Diagnosing a Windows Crash.
10. Use the links to answer the following questions:
a. How can you obtain information about the cause of a STOP message?
b. What is the most likely cause of the following STOP message?
Stop message 0x00000050, Descriptive text: PAGE_FAULT_IN_NONPAGED_AREA
c. If the server has already been rebooted from the blue screen, how can you
determine the error?
d. How can you create a memory dump the next time the problem occurs?
Scenario: Stay and Sleep
The web server does not power on. The health LED is amber.
Describe the steps that you should take to isolate the cause of the problem.
Scenario: BCD Train
A file server does not execute POST. There is video output that displays a POST error message.

What should you do next?


Chapter 15: Optimization

In the last chapter, you learned how to troubleshoot problems. In this chapter, we take a look at another important part of a server administrator's job: tuning servers to keep them operating efficiently.
A server environment is dynamic. Workloads change due to new applications, more users, and
changing usage patterns. Optimizing a server involves gathering and analyzing usage data,
and then adjusting the server configuration to ensure that performance is adequate to meet the
needs of the users.
In this chapter, we will begin by discussing operating system architecture and the resources
that can impact performance. Next we will look at some of the tools available for gathering the
data that you need to analyze resource utilization. From there we will move on to discuss
various types of resource bottlenecks and ways that you can resolve them. We will also look at
some guidelines for resolving application and operating system performance issues.

In this chapter, you will learn how to:
Describe the fundamentals of operating system architecture and the ways it relates to performance.
Use various tools to determine whether performance is optimal.
Identify and resolve bottlenecks.
Check for known performance issues.
Tune performance.

Performance Overview
Every task a server performs requires resources. The actual resource requirements will differ by
task. Some require more memory, some more processing power. Some use a lot of network
bandwidth. Others are disk intensive.
Optimizing a computer's performance requires a general understanding of what happens behind the scenes when programs perform a task. While you do not need to know how to actually write code, you do need an understanding of how program execution happens. For this reason, we will start by looking at the details of program execution. We will use Windows Server in our examples. However, many of the same concepts apply to Linux server operating systems.

Applications vs. services

In general, a server can execute three types of code: applications, services, and scripts. Each
type is executed differently by the operating system.
Depending on the server, there might also be other types of executable code, including
database queries and websites.

Applications
An application is usually a file with an extension of .exe (for example, application.exe).
Applications are compiled executables that have a user interface and run as foreground
processes. Most users are familiar with a variety of applications, including Microsoft Word,
Adobe Photoshop, and Skype. The only foreground applications that are run on most servers
are management tools, such as Microsoft SQL Server Management Studio.
compiling
The process of changing human-readable programming statements into machine language.
foreground process
A process that a user can interact with.

Task Manager is a tool that shows the current state of a computer. Running applications can be
viewed on the Applications tab of Task Manager, as shown in Figure 15-1.

Figure 15-1: Task Manager - Application Tab

When you launch an application, the operating system creates a process. You can view the

running processes on the Processes tab. The process associated with SQL Server
Management Studio is highlighted in Figure 15-2.

Figure 15-2: Task Manager - Processes

The Image Name column identifies the name of the executable. In this case, you know that the
executable is a 32-bit program named Ssms.exe. You can also see that the program is running
in the security context of the Administrator account. This setting means that the program can do
anything that the Administrator account has permission to do. We will talk more about security
contexts in a minute. You can also view information about the resources that each process is
using. Here you can see that SQL Server Management Studio is not using any CPU cycles
while it is using 25,954 KB of memory. It is not using CPU cycles because it is idle.
If SQL Server Management Studio is the only application running, what are all those other
processes? A lot of them are operating system processes. Others are application processes
that start up automatically and run in the background. For example, you can see that the
spoolsv.exe image is running. This is the Spooler SubSystem used to send a document to a printer.
You can view a lot more information about each process by adding data columns. To do so,
click View | Select Columns. The Select Process Page Columns dialog is then displayed, as
shown in Figure 15-3.

Figure 15-3: Select Process Page Columns

We will select two: Base Priority and Threads, as shown in Figure 15-4.

Figure 15-4: Select Process Change Columns - Base Priority and Threads Selected

In Figure 15-5, you can see that a Base Priority column and a Threads column have been
added. We have also sorted the list by Base Priority.
You can sort the list by a specific attribute by clicking the column heading for that attribute.

Figure 15-5: Processes - Base Priority and Threads

A thread is responsible for executing a sequence of instructions. A thread can execute one
instruction at a time. Each process has at least one thread, but most have many more.
The Base Priority indicates the priority at which the operating system assigns CPU cycles to
the process. Nearly all processes execute at Normal priority. Some critical processes are
assigned High priority. For example, Task Manager is always assigned High priority so that it
can gain control if a process running at Normal priority is experiencing a hang and consuming a
large number of processor cycles.
You can change the priority of a process by right-clicking on it and choosing Set Priority and
then selecting the priority that you want assigned to the threads of that process. You can
associate a running process with a specific processor core by right-clicking that process and
choosing Set Affinity. By default, a process can be run on any processor core, as shown in
Figure 15-6.

Figure 15-6: Processor Affinity

Because a large number of threads are executing, the CPU needs to be able to share its time
between them. This allocation of resources requires a context switch each time one thread is
suspended and another resumes. Consider a very simple example with two threads: ThreadA
and ThreadB. To switch the execution from ThreadA to ThreadB, the kernel must do the following:
2. Add ThreadA to the end of the queue for its priority.
3. Locate the highest priority queue that has ready threads - in this case, we will assume that
is ThreadB.
4. Restore the saved state for ThreadB.
5. Execute ThreadB.
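The five steps above can be sketched as a toy scheduler. The thread names, the priority values, and the reduction of a thread's "state" to a small dict are illustrative simplifications; a real kernel saves registers, the stack pointer, and much more:

```python
from collections import deque

# One ready queue per priority; a higher number means a higher priority
# (13 and 8 roughly echo the High and Normal base priorities shown in
# Task Manager).
queues = {
    13: deque(),   # High
    8:  deque(),   # Normal
}

def context_switch(running):
    queues[running["priority"]].append(running)   # steps 1-2: save state, requeue
    for prio in sorted(queues, reverse=True):     # step 3: highest-priority queue
        if queues[prio]:
            nxt = queues[prio].popleft()          # step 4: restore saved state
            return nxt                            # step 5: (ready to) execute
    return None

queues[8].append({"name": "ThreadB", "priority": 8})
running = {"name": "ThreadA", "priority": 8}
nxt = context_switch(running)
print(nxt["name"])   # ThreadB runs next; ThreadA waits at the back of its queue
```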
A resource, like a file or memory address, can only be accessed by one thread at a time.
Therefore, sometimes threads have to wait for a resource that is being used by another thread.
Occasionally threads will wait on each other, resulting in a lock up. A lock up is also known as a deadlock or hang. When a deadlock happens, an application will be shown as Not Responding in Task Manager.
Figure 15-7 shows a simple illustration of a deadlock.

Figure 15-7: Deadlock

A deadlock can cause a big problem on a server because it can stop the server from
responding to requests. If a deadlock occurs, you can use Task Manager to terminate the
application or process causing the problem.
Deadlocks can occur in databases too. If you encounter a database deadlock, you
might need to ask a database administrator to help troubleshoot and resolve the problem.
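A circular wait like the one in Figure 15-7 can be modeled as a cycle in a "wait-for" graph. The helper below is our own sketch, not a Windows tool; it detects such a cycle in a mapping of which thread waits on which:

```python
# Sketch: a cycle in a thread -> thread wait-for mapping means deadlock.

def has_deadlock(waits_for):
    """Detect a cycle by walking each chain of waits."""
    for start in waits_for:
        seen, cur = set(), start
        while cur in waits_for:
            if cur in seen:
                return True          # we came back around: circular wait
            seen.add(cur)
            cur = waits_for[cur]
    return False

# ThreadA waits on a resource ThreadB holds, and vice versa.
print(has_deadlock({"ThreadA": "ThreadB", "ThreadB": "ThreadA"}))  # True
print(has_deadlock({"ThreadA": "ThreadB"}))                        # False
```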

Services
A service is a program that does not have a user interface and runs in the background. A
Windows service runs in a process created by svchost.exe. A service created by another

application might run in a process created by svchost.exe or within a different service host.
On a Linux computer, a service is known as a daemon.
The running services can be viewed on the Services tab of Task Manager, as shown in Figure 15-8.

Figure 15-8: Task Manager - Services

Notice that each running service is associated with a PID (Process ID) that allows you to
identify the process where the service is loaded. For example, the HP ProLiant System
Shutdown Service and the HP ProLiant Health Monitor Service are both running inside
Process 2476. To learn about the process hosting those services, click the Processes tab and
display the PID column, as shown in Figure 15-9.

Figure 15-9: Selecting the PID Column

When you locate PID 2476 on the Processes tab, as shown in Figure 15-10, you can see that
these two services are running in an instance of ProLiantMonitor.exe, which has 11 threads
and is using 7,080 KB of memory.

Figure 15-10: Locating the Process by PID

You can easily locate the services running under a specific process by right-clicking
that process and clicking Go to Service(s). Windows displays the Services tab, and the
services associated with that process will be highlighted.
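Conceptually, Go to Service(s) groups services by the PID of their host process, something like the following sketch. The service list is illustrative, echoing the PID 2476 example above:

```python
# Sketch: group running services by their host process ID, mirroring
# what Task Manager's "Go to Service(s)" shows. Data is made up.

def services_by_pid(services):
    by_pid = {}
    for name, pid in services:
        by_pid.setdefault(pid, []).append(name)
    return by_pid

running = [
    ("HP ProLiant System Shutdown Service", 2476),
    ("HP ProLiant Health Monitor Service", 2476),
    ("Print Spooler", 1304),
]
print(services_by_pid(running)[2476])   # both HP services share one process
```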
You can manage services through the Services utility, which you can access by clicking
Services on the Services tab or through the Administrative Tools menu.
The Services utility is shown in Figure 15-11. You learned a little about managing services
earlier in the course. Here we will take a closer look at some of the options.

Figure 15-11: Services Utility

We will now consider the HP ProLiant Health Monitor service. Its General properties are shown
in Figure 15-12. As you can see, the Path to the executable setting refers to the location of the
ProLiantMonitor.exe file. This is set in application code and cannot be modified.

Figure 15-12: HP ProLiant Health Monitor Service Properties

The Startup type is set to Automatic. This setting ensures that the service will be loaded when
the server boots. You want to keep the Startup type for this service on Automatic. However,
other services may require different settings. The available Startup type settings are described
in Table 15-1.

Table 15-1: Startup Type Settings




Automatic
Starts automatically when the operating system starts.

Automatic (Delayed start)
Starts automatically, but does so in the background. Set this option on important but non-critical services to improve startup time.

Manual
Starts only when it is invoked by a user, application, or another service.

Disabled
Does not start and cannot be started unless you first change the startup type. Disabling unnecessary services can help reduce the attack surface of a computer and improve performance because each running service consumes resources.

You can start, stop, or restart a service from the General tab or by using the net start and net
stop commands from a command prompt or within a script.
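For example, a maintenance script might stop and restart a service. The sketch below builds the real `net stop`/`net start` command lines (the Spooler service name is just an example) and only executes them when actually running on Windows:

```python
import subprocess
import sys

def restart_service(name):
    """Build (and, on Windows, run) net stop / net start commands."""
    cmds = [["net", "stop", name], ["net", "start", name]]
    if sys.platform == "win32":            # net.exe exists only on Windows
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds

print(restart_service("Spooler"))
```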
The Log On tab, shown in Figure 15-13, allows you to set the security context under which the
service will execute.

Figure 15-13: Service - Log On

You can set the service to execute under the context of one of three service accounts, or you
can configure a specific user account. The built-in accounts are described in Table 15-2.
Table 15-2: Built-in Service Accounts



Local System
An account that is granted permissions to do almost anything on the local computer and can access network resources.

Network Service
An account that has somewhat more permissions than a member of the Users group, but can also access network resources.

Local Service
An account that has the same access permissions as a member of the Users group. It can only access network resources that allow anonymous access.

The permission requirements will vary depending on the service. You should always consult
the software documentation before changing the security context for an installed service.
The Recovery tab, shown in Figure 15-14, allows you to configure an action to correct a
service failure. You can specify different actions for the first failure, second failure, and subsequent failures.

Figure 15-14: Service - Recovery

For each failure instance, you can select from the following actions:
Restart the service
Run a program
Restart the computer
If you choose Run a program, you need to specify the path to the program, as well as any
command-line parameters.

If you choose Restart the Computer, you can configure restart options, including sending a
message to computers on the network and delaying the restart by a specific number of minutes,
as shown in Figure 15-15. Doing so can help ensure that users save their work before the
computer restarts.

Figure 15-15: Restart Options

The Dependencies tab lists the services that must be running for this service to operate and
those that depend on this service. As you can see in Figure 15-16, the HP ProLiant Health
Monitor Service has no dependencies, but it must be running for the HP ProLiant System
Shutdown Service to operate.

Figure 15-16: Service Dependencies

Scripts
A script is code that is interpreted and run by a scripting host. Scripting hosts include:
Windows Scripting Host
Windows PowerShell

Bourne-Again shell (bash)

interpreting
The process of reading a user-readable programming statement and translating it to an executable task.

The language used to write scripts depends on the scripting host. Commonly used scripting languages include VBScript, PowerShell, and bash shell scripting.
A script is often used to automate a management routine.

Websites
A website is an application that is hosted by a web server, such as Internet Information Server
(IIS) or Apache. A web server typically needs to handle a lot of network requests. Some
websites are very dynamic and require the server to execute code, including database queries.
Others are pretty simple and only require the web server to send HTML to the requester.
As you can imagine, the resource requirements for a website will differ greatly based on the
type of content and the number of requests.

Database queries
A database query can be executed directly within the database management system (DBMS)
tools, but more commonly a query is sent to the database server by an application that is
running on the same server or on a different server. For example, HP SIM uses a database to
store and retrieve various types of information.
A database query can read or write data. Stored procedures are a set of statements that
combine queries and programming statements to perform a task on a database.
Most DBMSs have tools that allow you to view performance information about a specific query
or stored procedure. A discussion of these tools is beyond the scope of this course, but it is
important for you to know that they exist. If you suspect a database query is causing a
performance problem, you need to inform the database administrator about the problem.
Scenario: Stay and Sleep
The server that handles online reservations is experiencing periodic performance problems.
You suspect that it is related to a new service that was installed on the server.
Discuss the steps that you should take to gather information about the new service, including
its resource consumption and information about service failure.

Windows Server and Linux are both event-driven operating systems. In an event-driven system,
applications and services listen for events and then take action based on the events they
receive. An event might be triggered by one of the following:
A user action, such as a mouse-click or keystroke
Another executing application process
The operating system
The key point to remember is that these events do not happen in any order. Instead, they
execute asynchronously. A lot of processes, particularly those that run services, spend much of
their time waiting for an event to occur.
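A minimal sketch of the event-driven model: handlers register interest in event types and run only when such an event arrives, in whatever order events occur. The event names and handlers are made up for illustration:

```python
# Sketch: a tiny event dispatcher. Handlers do nothing until an event
# of the type they registered for is delivered.

handlers = {}

def on(event_type, handler):
    handlers.setdefault(event_type, []).append(handler)

def dispatch(event_type, data):
    results = []
    for handler in handlers.get(event_type, []):   # idle listeners just wait
        results.append(handler(data))
    return results

on("keystroke", lambda d: f"echo {d}")
on("timer", lambda d: f"tick {d}")

print(dispatch("keystroke", "a"))   # ['echo a']
print(dispatch("mouse-click", 1))   # [] -- nothing registered, nothing runs
```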

About Performance Optimization

Users sometimes report slow performance when they access a server. How do you make the
performance better? As with troubleshooting, you should follow a sequence of steps when
optimizing performance. These steps include:
1. Gather information about resource usage.
2. Compare the data to the performance baseline.
3. Analyze the data to identify a bottleneck.
4. Resolve the bottleneck.
5. Test the change.
6. Create a new performance baseline.
performance baseline
A set of data that indicates the performance profile of a server in a specific configuration state.

You should capture the initial performance baseline shortly after you deploy a new server. The
data should always be captured during normal operations so that it provides a valid comparison
when you later use it to evaluate performance. You should gather a new set of baseline data
after making any change, including new hardware, operating system upgrades, application
updates, and new applications or services.
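Step 2 of the sequence above, comparing current data to the baseline, can be sketched as follows. The counter names, the sample values, and the 25% tolerance are all illustrative assumptions, not prescribed thresholds:

```python
# Sketch: flag counters that have drifted well past their baseline
# values, as a starting point for bottleneck analysis.

def flag_deviations(baseline, current, tolerance=0.25):
    """Return counters whose value grew more than `tolerance` (fractional)."""
    flagged = {}
    for counter, base in baseline.items():
        now = current.get(counter, base)
        if base and (now - base) / base > tolerance:
            flagged[counter] = (base, now)
    return flagged

baseline = {"% Processor Time": 20.0, "Avg. Disk Queue Length": 1.0}
current  = {"% Processor Time": 22.0, "Avg. Disk Queue Length": 3.5}
print(flag_deviations(baseline, current))   # only the disk queue is flagged
```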

Performance monitoring tools

A performance monitoring tool is a tool that allows you to gather information about performance
and resource utilization. A number of tools are available, including operating system-specific tools, HP
tools, and those distributed by third-party companies. In this section, we will examine the tools
included in Windows Server, as well as some HP tools. We will start with the Windows tools.

Task Manager
As you already saw, Task Manager provides you with information about the processes that are
currently running. It also allows you to see a bird's-eye view of the server's resource utilization
by viewing the Performance tab.

Figure 15-17: Task Manager - Performance Tab

As you can see, this server is pretty idle, as evidenced by the CPU utilization being at 0%. Also,
more than half of the 8 GB of physical memory is still available.
You can click Resource Monitor to see more detailed information about resource
consumption, as shown in Figure 15-18.

Figure 15-18: Resource Monitor

Resource Monitor provides a graphical view of CPU, Disk, Network, and Memory utilization. It

also allows you to expand those categories to see how that consumption is divided among
running processes. We will examine each of these later in the chapter.

Performance Monitor
Performance Monitor also allows you to view real-time performance data. However, it provides
you with many more options for choosing how to view data.
Figure 15-19 shows a Performance Monitor graphic that monitors the % Processor Time for all
processors, as well as all the Logical Disk counters for volume C.

Figure 15-19: Performance Monitor

Various types of objects expose counters to Performance Monitor. The operating system
provides a set of objects, but applications can as well. For example, SQL Server provides sets
of performance objects that allow you to monitor various aspects of database performance. You
can add object counters by clicking the Add (+) button.
The Add Counters dialog is shown in Figure 15-20.

Figure 15-20: Performance Monitor - Add Counters

As you can see, you can select counters from the local computer or from another computer.
This gives you a way to compare the performance of two servers in the same graph.
You can also display the information as a histogram (bar graph), as shown in Figure 15-21, or in
a report, as shown in Figure 15-22.

Figure 15-21: Performance Monitor - Histogram View

Figure 15-22: Performance Monitor - Report View

Data collector sets

So far we have been focusing on how to obtain information about the current resource usage of
a server. Although this information will help you research immediate problems, it is not
sufficient for analyzing trends, saving baselines, or gathering information about sporadic
performance issues.
The Data Collector Sets node in Performance Monitor allows you to run, schedule, and create
data collector sets based on performance counters. As shown in Figure 15-23, there are two
pre-defined Data Collector Sets: System Diagnostics and System Performance.

Figure 15-23: Predefined Data Collector Sets

The System Performance Data Collector Set cannot be modified. You can right-click and
choose Start to start the data collection process. The collection process runs for 60 seconds.
You can view the results beneath the Reports | System Performance node, as shown in
Figure 15-24.

Figure 15-24: System Performance Report

A great deal of information is available in this report, including:

The date and time the data was collected
A summary of resource utilization
Detailed information about CPU, Network, and Disk utilization by process
Performance information for each processor core
System information, shown in Figure 15-25
Network traffic information for each process
Network traffic information for each NIC
Files causing the most disk I/O
Disk access statistics by application or service
Physical disk utilization statistics
Memory utilization by process
Memory utilization statistics

Figure 15-25: System Information

If you want more control over what type of information is collected, or if you want to run data
collection on a scheduled basis, you need to create a custom Data Collector Set. To do so,
right-click User Defined and choose New | Data Collector Set. You are prompted for the name
of the Data Collector set and asked to choose whether to create one from a template or to
create one manually, as shown in Figure 15-26. In this course, we will only cover how to create
one from a template.

Figure 15-26: Creating a New Data Collector Set

Next you will be prompted to choose a type of template, as shown in Figure 15-27. You can
select a listed template or browse to choose a user-defined template or one provided by an
application vendor.

Figure 15-27: Choose a Template

The default templates are as follows:

Basic - allows you to choose the Performance Monitor objects and counters to include in the Data Collector Set.
System Diagnostics - analyzes hardware resources, processes, configuration data, and system response times, and then makes recommendations for ways to optimize performance.
System Performance - gathers the same data as the predefined System Performance Data Collector Set.

For this discussion, we will choose System Diagnostics.
Next, you will be prompted to choose the directory where the data will be saved, as shown in
Figure 15-28.

Figure 15-28: Choose the Root Directory

The next screen allows you to choose the security context under which the data collector
should be run, as well as whether to open the properties, start the data collector set now, or

save it and close the dialog. For this discussion, we will open the Data Collector Set's
properties, as shown in Figure 15-29.

Figure 15-29: Data Collector Set Creation Options

Windows then opens the Properties dialog for the Data Collector Set.
The General tab, shown in Figure 15-30, shows the name, a description, and the keywords
associated with the Data Collector Set. You can add or remove keywords and change the
description. As you can see, the default is to run the Data Collector Set using the SYSTEM
(Local System) user account. As you will recall, this account has a high level of permissions, so
it can access all of the data that it needs to gather.

Figure 15-30: Data Collector Set - General

The Directory tab, shown in Figure 15-31, allows you to set the root directory and subdirectory.
It also lets you specify a naming convention to uniquely identify the data for subsequent runs of the Data Collector Set.

Figure 15-31: Data Collector Set - Directory

The Security tab, shown in Figure 15-32, allows you to view the permissions that have been
assigned to users and groups. You can select a user or group and then allow or deny various
permissions.

Figure 15-32: Data Collector Set - Security

The Schedule tab, shown in Figure 15-33, allows you to define one or more schedules for
starting the Data Collector Set.

Figure 15-33: Data Collector Set - Schedule

When creating a baseline, you should gather performance data during both normal and peak
workloads. When analyzing a performance problem, you should schedule the information
gathering to coincide with the time that the performance problem typically occurs. For example,
if users report that the accounting server has slow performance on the last two days of the
month, you should collect the performance data on those days.
As you can see in Figure 15-34, you can select the beginning date, expiration date, start time,
and the days of the week when data will be collected.

Figure 15-34: Adding a Schedule

The Stop Condition tab (Figure 15-35) allows you to specify how long the Data Collector Set
should run. You can specify the duration, as well as limits, such as the maximum size of the
data collected.

Figure 15-35: Data Collector Set - Stop Conditions

The Task tab, shown in Figure 15-36, allows you to specify a script that should run when data
collection is complete.

Figure 15-36: Data Collector - Task

HP Insight Control Performance Management (ICpm)

HP Insight Control includes HP Insight Control Performance Management (ICpm), which is
installed during a normal installation or can be chosen as an option during a custom
installation, as shown in Figure 15-37.

Figure 15-37: Installing HP Insight Control Performance Management

ICpm allows you to monitor the performance of managed systems from a central management
server. ICpm allows you to perform either online or offline analysis of a managed server.
ICpm requires a server license for each server that you want to manage.

Online analysis
Online analysis allows you to view real-time performance of a server. To run it, select
Diagnose | Performance Management | Online Analysis. You are prompted to select a target
system, as shown in Figure 15-38.

Figure 15-38: Online Analysis - Select Target System

You can select a collection from the drop-down list or choose Search to search for a system by
name. For this discussion, we will select All Servers from the drop-down list and click View
Contents. Then select a server by checking the box next to it, as shown in Figure 15-39.

Figure 15-39: Online Analysis - Selecting a System from a Group

The icons to the left of each server provide status information:

HS - indicates the overall system health.
MP - indicates the health of the management processor.
SW - indicates the software versioning status.

PF - indicates the performance status. It will be unknown until performance statistics are run.
ES - indicates the aggregate event status.

Select the servers whose performance you want to check and click Apply.
You are prompted to add additional target systems or event filters, as shown in Figure 15-40.

Figure 15-40: Verifying Target Systems

Click Run Now to open a screen that collects and displays the performance data, as shown in
Figure 15-41. Each component in the server has an icon to indicate its status. A green circle
indicates that the component is operating within performance thresholds. An orange triangle
indicates a resource that has exceeded established thresholds.

Figure 15-41: Performance Data

You can also view performance data in a table or in a graph, as shown in Figure 15-42. Here
you can see that a warning has been generated indicating that a memory bottleneck has
occurred.
Figure 15-42: Graphical Display - Memory Bottleneck

Identifying and Resolving Bottlenecks

Now that you have collected performance data, it is time to analyze it to determine which
resource or resources are the bottleneck.

bottleneck
The resource that limits performance.

The process of identifying a bottleneck is sometimes tricky. The apparent source of a bottleneck
can sometimes mask the true cause of a performance slowdown. To better understand how this
can happen, we will now review a few points about system architecture and the potential points
where a bottleneck might occur.
As you will recall, a server has a number of components, all of which communicate over a bus,
as shown in Figure 15-43.

Figure 15-43: Components Communicate over the Bus

At the very least, an application or operating system task requires communication between
RAM and the processor. In many cases, other components are involved as well. Because most
servers perform a large number of tasks at the same time, these tasks contend for the same
resources. The operating system has to juggle all the tasks and allocate resources to provide
the best possible performance. In this section, we will look at some performance indicators that
can help you isolate a bottleneck to a specific resource. We will also provide you with some
common fixes that can help resolve various bottlenecks.

Processor bottlenecks
A number of counters are available that can help you determine whether the number or speed
of processors in a server is causing a bottleneck.
One way to view processor utilization is through ICpm. Figure 15-44 shows normal processor
utilization.

Figure 15-44: Normal Processor Utilization

The % Processor Time counter indicates the percentage of time that the processor is not idle.
ICpm reports the %Processor Time for all processors as the Average Processor Utilization%.
Ideally, this number should remain below 80%. If the % Processor Time reading is greater than
80%, you need to do some further investigation to determine whether the processor is, indeed,
the bottleneck and to better understand the best course of action to resolve the bottleneck.
Figure 15-45 shows the overall system performance statistics for a server that has a processor
bottleneck. As you can see in this example, the average processor utilization is 100%, which
means that all processors are fully utilized.

Figure 15-45: Overall System Performance Statistics for a Performance Bottleneck

Another indicator of a processor bottleneck is the Processor Queue Length counter. This
counter indicates the number of threads that are waiting for a CPU time slice. If this value is
greater than the number of processor cores * 2, it means that multiple threads are competing for
the same processing resources. If this is the case, adding an additional CPU or replacing the
CPU with one that has more cores might improve performance.
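The two indicators above can be combined into a simple decision rule. The following is a minimal sketch, not an HP or Microsoft tool; the function name and thresholds simply restate the guidance in the text (sustained % Processor Time above 80% plus a Processor Queue Length greater than twice the number of cores):

```python
def cpu_bottleneck(pct_processor_time, queue_length, cores):
    """Return True when both processor-bottleneck indicators are met."""
    busy = pct_processor_time > 80.0      # sustained utilization rule
    queued = queue_length > 2 * cores     # queue-length rule
    return busy and queued

# A 4-core server at 95% utilization with 12 queued threads:
print(cpu_bottleneck(95.0, 12, 4))   # -> True
# The same utilization with a short queue is inconclusive:
print(cpu_bottleneck(95.0, 3, 4))    # -> False
```

When only one of the two conditions is met, further investigation of memory and I/O counters is warranted before concluding that the processor is the limiting resource.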
You should also check the number of context switches that occur. A high processor utilization
with a low number of context switches indicates that one thread is monopolizing the processor.
A high processor utilization with a large number of context switches could indicate that an
inefficient application has created too many threads or that a device driver problem has
occurred.
When symmetric multiprocessing is used, utilization should be approximately the same across
all processors. However, you can assign the threads of an application to a specific processor.
This is known as processor affinity. When processor affinity is used, you might run into a
situation in which the load is carried more heavily by certain processors than others.
The Resource Monitor CPU tab, shown in Figure 15-46, allows you to view total CPU usage,
CPU usage by services, and CPU usage for each CPU (NUMA node) and each core (CPU 0,
CPU 1, and so on).

Figure 15-46: Resource Monitor - CPU

ICpm also allows you to view the utilization of each processor. Figure 15-47 shows a situation
in which the processor load is possibly uneven. As you can see, the busiest processor is more
than twice as busy as the average processor, and there is a large difference in the loads carried
by individual processors.
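The uneven-load observation above can be expressed as a hypothetical check (the function name and the sample figures are illustrative, not from ICpm): flag a possible affinity imbalance when the busiest processor carries more than twice the average per-processor load.

```python
def load_is_uneven(per_cpu_utilization):
    """Compare the busiest processor against the average load."""
    avg = sum(per_cpu_utilization) / len(per_cpu_utilization)
    return max(per_cpu_utilization) > 2 * avg

print(load_is_uneven([90, 15, 10, 5]))   # -> True: one CPU dominates
print(load_is_uneven([40, 35, 45, 40]))  # -> False: evenly spread
```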

Figure 15-47: Uneven Load Distribution

You can identify which process is consuming resources by checking the %Processor
Time counter for specific processes. If the System is consuming a large percentage of
processor time, the problem could be caused by a failing device driver.
Some possible steps you can take to alleviate a processor bottleneck include:
Upgrade to a processor that has a larger cache.
Upgrade to a faster processor.
Upgrade expansion boards to versions that include special-purpose processors,
such as a network adapter with a TCP Offload Engine.
Adjust HP Power Regulator settings so that processors are able to fully utilize power.
Enable HyperThreading to allow more threads to execute simultaneously.
Disable HyperThreading to optimize sequential workloads.
sequential workload
A series of instructions that must be executed sequentially instead of asynchronously.

When you consider a processor upgrade, you should remember that performance benefits will
not be linear due to the overhead of context switching, memory utilization, and other system
factors. For example, upgrading from 4 cores to 8 cores will not double performance.
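One common way to reason about this non-linear scaling, not named in the text itself, is Amdahl's law, where p is the fraction of the workload that can run in parallel:

```python
def speedup(p, cores):
    """Amdahl's law: ideal speedup for a workload with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / cores)

# Even with 90% of the work parallelizable, going from 4 to 8 cores
# yields far less than a 2x gain:
s4 = speedup(0.9, 4)   # about 3.08
s8 = speedup(0.9, 8)   # about 4.71
print(round(s8 / s4, 2))   # -> 1.53, not 2.0
```

Real servers fall short of even this ideal figure because of the context-switching and memory overheads mentioned above.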
A performance problem that appears to be a processor bottleneck might actually be caused by
insufficient memory or a large number of interrupts due to disk or network I/O. Therefore, it is
important to analyze network and I/O counters before assuming that high processor utilization
indicates a processor bottleneck. We will look at memory bottlenecks next.

Memory bottlenecks
Issues with memory can be very common sources of bottlenecks, so common, in fact, that some
server administrators routinely attempt to fix any sort of performance problem by adding more
RAM. Although adding RAM is not always necessary and will not always resolve a
performance problem, RAM is so integral to most operations that insufficient RAM often
masquerades as another bottleneck, such as a disk bottleneck or processor bottleneck.
Before we talk about specific performance counters, we will first talk a little about how an
operating system allocates memory. As you will recall, each process is associated with a range
of memory addresses, which is divided into 4 KB pages. The data stored in a given page might
reside in RAM or in a virtual memory paging file on a storage volume, as illustrated in
Figure 15-48.
On a 32-bit platform, the address space is always 4 GB. Generally, addresses in the
lower 2 GB are user mode addresses, which can be accessed by the process. The top 2 GB
are kernel mode addresses, which are reserved for use by the operating system. On a 64-bit
platform, the address space can be as large as 8 TB.

Figure 15-48: Memory Allocation

It is the responsibility of the operating system to swap pages between the paging file and RAM.
When the operating system needs to read a page from the paging file, a hard page fault occurs.
Excessive paging can manifest itself as a storage bottleneck because of the increased
disk I/O.
To determine whether memory is a bottleneck using the Windows Performance tool, you should
monitor the counters described in Table 15-3.
Table 15-3: Monitoring Memory

Memory | Available Bytes - Indicates the amount of physical memory available. A low value
indicates a possible memory shortage.
Process (All processes) | Working Set - The number of pages assigned to processes. When
there is no memory shortage, allocated pages can remain in the working set longer.
Memory | Pages/sec - The number of hard page faults per second.

If you see a large number of hard faults and a low number of available bytes, it indicates that
you need to increase the amount of RAM.
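The rule above pairs a low Memory | Available Bytes reading with a high Memory | Pages/sec reading. The cutoff values in this sketch are illustrative assumptions, not figures from the text; appropriate thresholds depend on the server and workload:

```python
def needs_more_ram(available_mb, pages_per_sec,
                   low_mem_mb=100, high_fault_rate=1000):
    """True when low available memory coincides with heavy hard paging."""
    return available_mb < low_mem_mb and pages_per_sec > high_fault_rate

print(needs_more_ram(50, 2500))    # -> True: starved and paging hard
print(needs_more_ram(4096, 2500))  # -> False: faults, but plenty of RAM
```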
You can also monitor memory using ICpm. Figure 15-49 shows normal memory utilization. As
you can see, the number of page faults per second (shown in orange) is relatively low.

Figure 15-49: Graph of Normal Memory Utilization

Figure 15-50 shows a memory bottleneck. The amount of available memory is very low and the
number of page faults per second is high.

Figure 15-50: Graph of a Memory Bottleneck

Memory leaks
A memory leak is caused by an application or driver that allocates memory but does not free it.
Thus, the memory used by the process continues to grow. Some typical symptoms of a memory
leak include:
Performance that is good when the system starts up, but worsens over time
Virtual memory errors
Error messages that indicate system services have stopped
You can identify an application that has a memory leak by analyzing the following process-specific counters:
Private Bytes
Working Set
Page Faults/sec
Page File Bytes
Handle Count
Rises in these counters over time indicate a possible memory leak.
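A minimal sketch of this trend check follows: sample a counter such as Private Bytes at intervals and test whether it rises steadily. The function and the sample values are hypothetical, not part of any monitoring tool:

```python
def steadily_rising(samples):
    """True when every sample is at least as large as the one before it."""
    return all(b >= a for a, b in zip(samples, samples[1:]))

leaky = [100, 120, 145, 170, 210]   # Private Bytes climbing each interval
healthy = [100, 140, 95, 130, 100]  # memory is being freed again
print(steadily_rising(leaky))    # -> True: investigate for a leak
print(steadily_rising(healthy))  # -> False
```

In practice you would collect samples over hours or days, because a healthy process can also grow briefly while it warms its caches.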
Some processes, such as many system services and drivers, allocate memory that cannot be
paged. When a leak occurs in this type of memory, it is especially serious. To uncover a
memory leak in nonpaged memory, monitor the following counters:
Memory | Pool Nonpaged Bytes
Memory | Pool Nonpaged Allocs
Process (process name) | Pool Nonpaged Bytes

Increases in these counters indicate a memory leak in nonpaged memory. An increase in Pool
Nonpaged Bytes for a specific process identifies the process responsible for the leak. If you
discover a memory leak, contact the application vendor and check for an available patch.

Virtual memory configuration

In an ideal situation a server will have enough physical memory that it will not need to rely on
virtual memory. In reality, this is not always possible. However, the location of the paging file (or
files) can help reduce the impact of paging on performance. Some best practices for the virtual
memory paging file include:
Place the file on a RAID 0 volume.
Create multiple paging files on different physical disks.
Increase the initial size of the paging file.
Verify that there is sufficient capacity on the volumes where the paging files are stored.
File system cache

The file system cache is an area of physical memory where Windows stores data that has been
recently read from a storage volume. If a server has insufficient memory, the file system cache
might shrink and the cache hit rate might decrease.
The file system cache is especially important on servers that perform a large amount of data
access, such as database servers.

Large memory support

As you will recall from earlier in the course, a 32-bit processor uses 32-bit addresses. By
default, this supports an address space of only 4 GB. Several technologies are available that
allow a 32-bit server to support more than 4 GB of physical RAM.

Physical Address Extension (PAE) allows the use of 36-bit addresses. PAE is supported on
32-bit editions of Windows 2000 Server and Windows Server 2003, as well as some 32-bit
Linux distributions. When running multiple applications on one of these operating systems, you
can use PAE to take advantage of more physical memory and reduce paging. On a Windows
server, you can enable PAE by booting with the /PAE switch.

4 GB tuning
Another feature of 32-bit Windows Server editions is the ability to modify the virtual address
spaces so that 3 GB can be accessed by the application and only 1 GB is reserved for the
operating system. To enable 4 GB tuning, start Windows with the /3GB switch.

Address Windowing Extensions (AWE) is supported on 32-bit and 64-bit Windows operating
systems. It allows applications written to take advantage of it to allocate nonpaged memory,
which provides fast memory operations because the data is never paged to disk.

Support for large pages

Application code can also be written to use larger blocks of memory than the 4 KB standard
size. Doing so can improve performance, particularly for 64-bit applications.
Large page allocations are only supported for 64-bit applications when they are running
on servers with an Intel Itanium processor.

Storage bottlenecks
Optimizing storage is important for every server, but particularly for database servers and file
servers because they perform a significant amount of file I/O. One way to optimize storage is to
replace HDDs with SSDs. However, when a large amount of storage capacity is required or
when heavy write volumes would wear out an SSD too quickly, switching to SSDs is not always
a practical solution.
Monitoring storage involves monitoring the amount of disk space available, data transfer rates,
and the number of queued operations. Monitoring logical disk counters allows you to view
performance data on a volume.
Figure 15-51 shows logical disk statistics for an array that does not represent a bottleneck.

Figure 15-51: Monitoring the Logical Disk

Monitoring physical disk counters allows you to view performance data on the physical disks
that comprise a volume. Figure 15-52 shows the statistics for monitoring a physical disk.

Figure 15-52: Monitoring the Physical Disk

Some common disk-related counters are described in Table 15-4.

Table 15-4: Storage Performance Counters

Logical Disk | % Free Space - The percentage of free space that is available on the volume.
Logical Disk or Physical Disk | Avg. Disk Bytes/Transfer - The average size of read and write
operations. Large transfers indicate more efficient disk I/O.
Logical Disk or Physical Disk | Avg. Disk sec/Transfer - The average amount of time required
for a read or write operation. A high value could indicate that transfers are being retried
because of a long queue or disk failures.
Logical Disk or Physical Disk | Avg. Disk Queue Length - The average number of read or write
operations that are in progress or waiting. A value greater than 2 times the number of spindles
could indicate a bottleneck.
Logical Disk or Physical Disk | Current Disk Queue Length - The number of read or write
operations that are currently in progress or waiting.
Logical Disk or Physical Disk | Disk Bytes/sec - The transfer rate for bytes. Measures disk
throughput.
Logical Disk or Physical Disk | Disk Transfers/sec - The transfer rate for read and write
operations, regardless of size. Measures disk utilization.
Logical Disk or Physical Disk | % Disk Read Time - The percentage of time spent servicing
read requests. Includes the entire duration of the I/O request, not just the time the disk is busy.
Logical Disk or Physical Disk | % Disk Write Time - The percentage of time spent servicing
write requests. Includes the entire duration of the I/O request, not just the time the disk is busy.
Logical Disk or Physical Disk | % Idle Time - The percentage of time spent idle.

%Disk Read Time + %Disk Write Time + %Disk Idle Time might not equal 100 because
%Disk Read Time and %Disk Write Time include the entire duration of the I/O request.
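
The Avg. Disk Queue Length rule from Table 15-4 (a queue deeper than 2 times the number of spindles) can be sketched as a hypothetical helper; the function name and sample values are illustrative only:

```python
def disk_queue_suspect(avg_queue_length, spindles):
    """True when the average queue exceeds 2x the spindle count."""
    return avg_queue_length > 2 * spindles

# A 4-spindle array with an average queue of 10 warrants investigation:
print(disk_queue_suspect(10, 4))  # -> True
# A queue of 6 on the same array is within the guideline:
print(disk_queue_suspect(6, 4))   # -> False
```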
You should also monitor memory counters, particularly the paging file, when determining
whether storage is a bottleneck, because excessive paging causes a significant amount of
disk I/O.
One important thing to keep in mind is that the Physical Disk counters use the array controller
as the reference point. Therefore, the Physical Disk counters include any RAID overhead. The
Logical Disk counters, on the other hand, refer to the number of I/O requests the operating
system or application sends to the array controller. When using hardware RAID, the Logical
Disk counters do not include RAID overhead because hardware RAID is transparent to the
operating system.
Consider the example of a RAID 5 array. With a RAID 5 array, four physical I/O operations
(two reads and two writes) are required for each logical write operation. Therefore, if an
application sends 1,000 requests (750 reads and 250 writes), the counters will reflect the
values shown in Table 15-5.

Table 15-5: Logical vs. Physical Disk with Hardware RAID

                        Logical disk transactions    Physical disk
Application reads       750                          750
Application writes      250                          1,000
Total transactions      1,000                        1,750

RAID does not affect read performance. However, any type of RAID that includes parity stripes
does affect write performance. The performance penalty is shown in Table 15-6.
Table 15-6: RAID Write Penalty

RAID level | Physical I/O requests per logical write
RAID 0 | 1 - no penalty because there is no parity
RAID 1+0 | 2 physical writes for each logical write
RAID 5 | 2 reads and 2 writes for each logical write
RAID 6 | 6 physical requests for each logical write

If the write block is a known size, you can eliminate the write penalty by adjusting the
stripe size so that each write is aligned with an entire st