
IT 160 Ch. 8 Activities Worksheet
Note from the Instructor: Any missing or incorrect screenshots will result in either a point reduction
or no credit at all for the activity.

Make sure you are doing the activities on the correct machines and in the correct order or the
activities may not work.

When taking screenshots for each activity, make sure the VM number in the top left-hand
corner is in the screenshot or NO credit will be given for the screenshot.

Ch. 8 Activities Worksheet will be worth 700 pts!

The following is a checklist of the activities you will need to complete for Ch. 8:

☐Activity 8-1: Resetting Your Virtual Environment

The above activity is not graded but MUST be done for the activities to work properly.

☐Activity 8-2: Installing the Network Load Balancing Feature

☐Activity 8-3: Creating an NLB Cluster

☐Activity 8-4: Resetting Your Virtual Environment

The above activity is not graded but MUST be done for the rest of the activities to work.

☐Activity 8-5: Configuring Shared Storage for Failover Clustering

☐Activity 8-6: Configuring the iSCSI Initiators

☐Activity 8-7: Installing the Failover Clustering Feature and Validating a Cluster Configuration

☐Activity 8-8: Creating a Failover Cluster

☐Activity 8-9: Creating a File Server Failover Cluster

Activity 8-1: Resetting Your Virtual Environment


Objective: Reset your virtual environment by applying the InitialConfig checkpoint or snapshot.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: Apply the InitialConfig snapshot to ServerDC1-#, ServerDM1-#, and ServerDM2-#.

1. Be sure ServerDC1-#, ServerDM1-#, and ServerDM2-# are shut down.


2. In the VMware dashboard, click on the ServerDC1-# server but do not start it.
3. Click Revert to current snapshot.

4. Click Yes to confirm.


5. Repeat Steps 2 – 4 on all servers listed in the Required VMs section at the beginning of this
activity.
6. Continue to the next activity.

Activity 8-2: Installing the Network Load Balancing Feature


Objective: Install the network load balancing feature.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: In this activity, you install the NLB feature on ServerDM1-# and ServerDM2-#.
ServerDC1-# is the domain controller (DC) for the network and should be running during this activity.

1. Start ServerDC1-#. Start ServerDM1-# and sign into the domain as Administrator.
2. On ServerDM1-#, you will install Network Load Balancing by using the PowerShell cmdlet
Install-WindowsFeature NLB -IncludeManagementTools. After running the cmdlet, take a
screenshot of the results and paste it below. Make sure the VM number in the top left-hand
corner is in the screenshot for full credit for this step.
Screenshot:

3. Start ServerDM2-# and sign into the domain as Administrator. Remember to press Esc twice
and then select Other User. Once signed in, type PowerShell.
4. You will install the Network Load Balancing feature by running the cmdlet Install-
WindowsFeature NLB. You will be managing the NLB cluster from ServerDM1-#, so there is
no need to install the management tools. After running the cmdlet, take a screenshot of the
results and paste it below. Make sure the VM number in the top left-hand corner is in the
screenshot for full credit for this step.
Screenshot:

5. Stay signed in to both servers for the next activity and leave ServerDC1-# running.
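If you want to confirm the installation before moving on, the feature state can be checked from PowerShell on either server. This is an optional check, not a graded step:

```powershell
# Optional check (not graded): confirm the NLB feature state on this server.
# RSAT-NLB is the management tools feature installed only on ServerDM1-#.
Get-WindowsFeature -Name NLB, RSAT-NLB
```

NLB should show an Install State of Installed on both servers; RSAT-NLB should show Installed only on ServerDM1-#, where you included the management tools.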

Activity 8-3: Creating an NLB Cluster


Objective: Create an NLB cluster.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: In this activity, you configure DNS for the NLB cluster and then configure the NLB
cluster. You configure the NLB cluster hosts with two NICs using Multicast mode. The topology looks
like Figure 8-16, p.297 in the textbook, except that you don’t have NLB clients in your network.
1. First, you need to create the DNS record for the NLB cluster. Sign into ServerDC1-# as
Administrator. Open a PowerShell window and type Add-DnsServerResourceRecordA -
Name NLB -ZoneName MCSA2016.local -IPv4Address 192.168.0.100 and press Enter.
2. Verify the records in the zone by typing Get-DnsServerResourceRecord -ZoneName
MCSA2016.local and pressing Enter. You will see several records, but verify that you see the
NLB, ServerDC1-#, ServerDM1-#, and ServerDM2-# records. After running the cmdlet, take a
screenshot of the results and paste it below. Make sure the VM number in the top left-hand
corner is in the screenshot for full credit for this step.
Screenshot:

3. On ServerDM1-#, open Server Manager, and click Tools, Network Load Balancing Manager
from the menu.
4. Right-click Network Load Balancing Clusters and click New Cluster. Type ServerDM1-# in
the Host text box and click Connect. After you’re connected, the New Cluster: Connect dialog
box shows the available interfaces for ServerDM1-# (see Figure 8-17, p.297 in the textbook).
5. Click Ethernet (the adapter with address 192.168.0.2) to choose the adapter you want to use
for NLB traffic. The other interface will be used strictly for cluster communication between
cluster servers. Click Next.
6. In the New Cluster: Host Parameters dialog box (see Figure 8-18, p.298 in the textbook),
accept the default value 1 in the Priority (unique host identifier) text box. The Dedicated IP
addresses section lists the IP address used when an external device communicates directly
with the server, for example, for remote management purposes. The Default state option
specifies how this host should behave when it boots. The default state is Started, which means
this host participates in the cluster when the system boots. Click Next.
7. In the New Cluster: Cluster IP Addresses dialog box, click Add. Type 192.168.0.100 in the
IPv4 address text box and 255.255.255.0 in the Subnet mask text box. This is the virtual IP
address that client computers will use to access the cluster. Click OK, and then click Next.
8. In the New Cluster: Cluster Parameters dialog box, type nlb.mcsa2016.local in the Full
Internet name text box. This is the name that client computers use to access the cluster and
corresponds to the DNS record you created in Step 1. In the Cluster operation mode section,
click the Multicast option button as shown in Figure 8-19, p.298 in the textbook. Click Next.
9. In the New Cluster: Port Rules dialog box, read the port rule description for the default port
rule, and then click Finish.
10. To add ServerDM2-# as a second cluster host, right-click nlb.mcsa2016.local and click Add
Host To Cluster.
11. Type ServerDM2-# in the Host text box and click Connect. After the Ethernet interface is
listed in the Interfaces available for configuring the cluster list box, click the interface with
address 192.168.0.3 and click Next.
12. In the Add Host to Cluster: Host Parameters dialog box, leave the Priority setting at the default
value of 2, and click Next.
13. In the Add Host to Cluster: Port Rules dialog box, click Finish.
14. A correctly configured and working NLB cluster shows the status of both servers as
Converged, and both servers are outlined in green, as shown in Figure 8-20, p.299 in the
textbook. Take a screenshot showing BOTH servers in the NLB cluster converged and paste it
below. Make sure the VM number in the top left-hand corner is in the screenshot for full credit
for this step.
Screenshot:

15. To test that the cluster is working correctly, open a command prompt window on ServerDC1-#.
Type ping nlb and press Enter. You should get successful ping replies. The first ping might
time out, but this is normal.
16. Type arp -a and press Enter to see the ARP table. You should see an entry for 192.168.0.100
with a MAC address that begins with 03, which is a multicast MAC address. After running the
command, take a screenshot of the results and paste it below. Make sure the VM number in
the top left-hand corner is in the screenshot for full credit for this step.
Screenshot:

17. Stay signed in to all three servers if you’re continuing to the next activity.
 Note: Configuring an NLB cluster can be complex, and much can go wrong. DNS must
be set up correctly, NICs must be capable of dynamic MAC address changes, and the
IP configuration must be correct. If you believe you have everything set correctly but the
NLB Manager still reports errors, shut down both servers and restart them. Open the
NLB Manager after both servers have restarted to see whether the problem has been
solved.
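For reference, the same cluster can also be built from PowerShell with the NetworkLoadBalancingClusters module. This is a sketch only — the NLB Manager steps above are the graded path — and the interface name "Ethernet" is an assumption that must match the name of the 192.168.0.x adapter on each host:

```powershell
# Sketch: PowerShell equivalent of Steps 3-13 (not graded; the GUI is the
# required path). Run on ServerDM1-#. "Ethernet" is assumed to be the name
# of the 192.168.0.x adapter on each host -- adjust if yours differs.
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the local host with the virtual IP in multicast mode.
New-NlbCluster -InterfaceName "Ethernet" -ClusterName "nlb.mcsa2016.local" `
    -ClusterPrimaryIP 192.168.0.100 -SubnetMask 255.255.255.0 `
    -OperationMode Multicast

# Add ServerDM2-# as the second host (it receives the next priority, 2).
Add-NlbClusterNode -NewNodeName "ServerDM2-#" -NewNodeInterface "Ethernet"

# List both hosts and their convergence status.
Get-NlbClusterNode
```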
Activity 8-4: Resetting Your Virtual Environment
Objective: Reset your virtual environment by applying the InitialConfig checkpoint or snapshot.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: You are finished using NLB and will install Failover Clustering next. NLB and Failover
Clustering cannot be installed on the same server, so you will apply the InitialConfig checkpoint or
snapshot to ServerDC1-#, ServerDM1-#, and ServerDM2-# to reset your environment.

1. Be sure ServerDC1-#, ServerDM1-#, and ServerDM2-# are shut down.


2. In the VMware dashboard, click on the ServerDC1-# server but do not start it.
3. Click Revert to current snapshot.

4. Click Yes to confirm.


5. Repeat Steps 2 – 4 on all servers listed in the Required VMs section at the beginning of this
activity.
6. Continue to the next activity.

Activity 8-5: Configuring Shared Storage for Failover Clustering


Objective: Install the iSCSI Target Server role service and create shared iSCSI virtual disks for a
failover cluster.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-#

Description: In this activity, you install the iSCSI Target Server role service and then create two 10
GB iSCSI virtual disks and assign them to iSCSI targets.

Note: These instructions are largely the same as those from Activities 5-4 and 5-5 except that the
iSCSI Target Server is ServerDC1-# and both ServerDM1-# and ServerDM2-# will be configured as
iSCSI Initiators.

1. Start ServerDC1-# and ServerDM1-#. Sign in to ServerDC1-# as Administrator.


2. On ServerDC1-#, open a PowerShell window. To install the iSCSI Target Server and iSCSI
Target Storage Provider role services, type Install-WindowsFeature FS-iSCSITarget-
Server, iSCSITarget-VSS-VDS and press Enter. After the installation is complete, take a
screenshot of the results and paste it below. Make sure the VM number in the top left-hand
corner is in the screenshot for full credit for this step.
Screenshot:

3. Close PowerShell.
4. Sign in to ServerDM1-# as the domain Administrator. In Server Manager, click Tools,
iSCSI Initiator. When prompted to start the iSCSI service, click Yes. In the iSCSI Initiator
Properties window, click OK. This starts the iSCSI initiator service that is needed for later
steps.
5. Start ServerDM2-# and sign in as the domain Administrator. If you are prompted to enter
your password for Administrator, press Esc to switch users, then press Esc again, and
select Other user. Type mcsa2016\administrator, press Tab, type the password, and
then press Enter.
6. Type PowerShell and press Enter and then type Start-Service msiSCSI and press Enter
to start the iSCSI initiator service on ServerDM2-#. Type Set-Service -Name msiSCSI -
StartupType Automatic and press Enter to ensure that the service starts each time
Windows starts.
7. Switch to ServerDC1-#, and in the left pane of Server Manager, click File and Storage
Services. Click iSCSI.
8. In the right pane, click the To create an iSCSI virtual disk, start the New iSCSI Virtual
Disk Wizard link.
9. In the iSCSI Virtual Disk Location window, ensure the Select by volume option button is
selected. By default, the C: volume is selected as the location to store the iSCSI virtual
disks. Click Next.
10. In the iSCSI Virtual Disk Name window, type FOdisk1 in the Name text box and click Next.
11. In the iSCSI Virtual Disk Size window, type 10 in the Size text box, click the Dynamically
expanding option button, if necessary, and click Next.
12. Because there are no existing targets, accept the default option New iSCSI target in the
iSCSI Target window and click Next.
13. In the Target Name and Access window, type ServerDC1-#target and click Next.
14. In the Access Servers window, click Add. In the Add initiator ID dialog box, click the Query
initiator computer for ID option button, if necessary, and type ServerDM1-
#.MCSA2016.local in the text box. This step allows ServerDM1-# to access the iSCSI
target. Click OK.
15. The server queries ServerDM1-# to get its IQN, which is why you started the iSCSI service
on ServerDM1-# first. Repeat Step 14, replacing ServerDM1-# with ServerDM2-#. When
you have finished, the Access Servers window should look like the one in Figure 8-28,
p.311 in the textbook. After both initiators are added, take a screenshot and paste it below.
Make sure the VM number in the top left-hand corner is in the screenshot for full credit for
this step.
Screenshot:

16. Click Next.


17. In the Enable authentication service window, click Next because you will use Active
Directory for authentication. In the Confirm Selections window, take a screenshot of the
settings, and paste it below. Make sure the VM number in the top left-hand corner is in the
screenshot for full credit for this step.
Screenshot:

18. Click Create. After the iSCSI virtual disk is created, click Close.
19. In File and Storage Services, you see the new virtual disk and the iSCSI target. If you need
to make changes to either, you can right-click it and click Properties. Next, you’ll create
another virtual disk that will be used as the disk witness. In the right pane of File and
Storage Services, right-click in the empty space and click New iSCSI Virtual Disk.
20. In the iSCSI Virtual Disk Location window, ensure that the Select by volume option button
is selected. Click Next.
21. In the iSCSI Virtual Disk Name window, type FOdisk2 in the Name text box, and click
Next.
22. In the iSCSI Virtual Disk Size window, type 10 in the Size text box, click the Dynamically
expanding option button, if necessary, and click Next.
23. Because you already have a target defined and have assigned initiators, just click Next in
the iSCSI Target window. In the Confirm Selections window, take a screenshot of the
settings, and paste it below. Make sure the VM number in the top left-hand corner is in the
screenshot for full credit for this step.
Screenshot:

24. Click Create. After the iSCSI virtual disk is created, click Close.
25. Stay signed in to all servers and continue to the next activity.
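For reference, Steps 8–24 can also be done from PowerShell on ServerDC1-# with the iSCSI Target cmdlets. This is a sketch only — the wizard is the graded path — and the initiator IQNs and virtual disk folder shown here are assumptions based on the default Microsoft IQN naming; check the actual IQNs on your initiators:

```powershell
# Sketch: PowerShell equivalent of Steps 8-24 (not graded). Run on
# ServerDC1-#; replace # with your VM number. The IQNs below assume the
# default Microsoft initiator naming -- verify them on each initiator.
New-IscsiServerTarget -TargetName "ServerDC1-#target" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:serverdm1-#.mcsa2016.local",
                  "IQN:iqn.1991-05.com.microsoft:serverdm2-#.mcsa2016.local"

# Create the two dynamically expanding 10 GB virtual disks (the folder
# path is an assumption -- use any location on the C: volume).
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\FOdisk1.vhdx" -Size 10GB
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\FOdisk2.vhdx" -Size 10GB

# Assign both virtual disks to the target.
Add-IscsiVirtualDiskTargetMapping -TargetName "ServerDC1-#target" `
    -Path "C:\iSCSIVirtualDisks\FOdisk1.vhdx"
Add-IscsiVirtualDiskTargetMapping -TargetName "ServerDC1-#target" `
    -Path "C:\iSCSIVirtualDisks\FOdisk2.vhdx"
```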

Activity 8-6: Configuring the iSCSI Initiators


Objective: Configure the iSCSI initiators to access the virtual disks shared by ServerDC1-#.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: In this activity, you start the Microsoft iSCSI service and configure the iSCSI initiator to
connect to the iSCSI target that you configured in the previous activity.

1. Make sure ServerDC1-#, ServerDM1-#, and ServerDM2-# are running.


2. Sign in to ServerDM1-# as domain Administrator, if necessary, and open Server Manager.
3. Click Tools, iSCSI Initiator. In the iSCSI Initiator Properties window, type ServerDC1-
#.mcsa2016.local in the Target text box. Click Quick Connect. In the Quick Connect box,
you see the iqn for ServerDC1-# (see Figure 8-29, p.312 in the textbook). Click Done.
4. The Targets tab of the iSCSI Initiator Properties window shows the target as Connected.
Click the Volumes and Devices tab and click Auto Configure to automatically connect to all
available devices. The two volumes are listed in the Volume List box (see Figure 8-30,
p.312 in the textbook). After clicking Auto Configure, take a screenshot showing the
Volume List and paste it below. Make sure the VM number in the top left-hand corner is in
the screenshot for full credit for this step.
Screenshot:

5. Click the other tabs in the iSCSI Initiator Properties window to see other configuration
options. Click OK when you have finished.
6. Now, you must configure the iSCSI Initiator on ServerDM2-#. Because ServerDM2-# is
running Server Core, you'll perform all the iSCSI Initiator steps in PowerShell. Switch to
ServerDM2-#. Start PowerShell, if necessary. Type New-IscsiTargetPortal -
TargetPortalAddress ServerDC1-# and press Enter.
7. Next, type Get-IscsiTarget and press Enter. You'll see the IQN for ServerDC1-# in the
output. After running the cmdlet, take a screenshot of the results and paste it below. Make
sure the VM number in the top left-hand corner is in the screenshot for full credit for this
step.
Screenshot:

8. Type Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $True and press Enter to
connect to the target. The -IsPersistent parameter ensures that the iSCSI client will connect
to the target each time it restarts.
9. Now that you are connected, you can view the session details. Type Get-IscsiSession
and press Enter. After running the cmdlet, take a screenshot of the results and paste it
below. Make sure the VM number in the top left-hand corner is in the screenshot for full
credit for this step.
Screenshot:

10. To see the disks available, type Get-IscsiSession | Get-Disk and press Enter. You see
the virtual disks you created on ServerDC1-#. After running the cmdlet, take a screenshot
of the results and paste it below. Make sure the VM number in the top left-hand corner is in
the screenshot for full credit for this step. (You will refer to this screenshot for the disk
numbers in the next step.)
Screenshot:

11. Because the iSCSI disks are offline, you need to bring them online and initialize them. In
the following two commands, be sure that the disk numbers correspond with the disk
numbers you saw in Step 10. Type Set-Disk -Number 4 -IsOffline $false and press Enter.
To initialize it, type Initialize-Disk -Number 4 and press Enter. Repeat these two cmdlets,
replacing the disk number with the number of the second disk from Step 10 (see Figure 8-
31, p.313 in the textbook).
12. To verify the disks are online and initialized, type Get-Disk and press Enter. After running
the cmdlet, take a screenshot of the results and paste it below. Make sure the VM number
in the top left-hand corner is in the screenshot for full credit for this step.
Screenshot:

13. Switch to ServerDM1-# and open Server Manager, if necessary. Click File and Storage
Services, and then click Disks. Look for the two iSCSI disks (shown in the Bus Type
column).
14. Right-click each iSCSI disk and click Bring Online (see Figure 8-32, p.313 in the
textbook). Click Yes when prompted. After bringing BOTH disks online, take a screenshot
showing the disks and paste it below. Make sure the VM number in the top left-hand corner
is in the screenshot for full credit for this step.
Screenshot:

15. Right-click the first iSCSI disk and click New Volume. Follow the New Volume Wizard and
assign drive letter G: to the first volume and give it the volume label Cluster1. Leave all
other settings at the default. Repeat the process for the second iSCSI disk, assigning drive
letter H: and the volume label Cluster2. Click Close when you have finished.
16. Leave all three servers running and continue to the next activity.
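As an optional check (not graded), you can confirm from PowerShell on either cluster node that both iSCSI disks are connected and ready before moving on:

```powershell
# Optional check (not graded): list only the iSCSI-attached disks and the
# properties that matter for the cluster setup.
Get-Disk | Where-Object BusType -eq "iSCSI" |
    Select-Object Number, FriendlyName, OperationalStatus, PartitionStyle, Size
```

Both disks should report an OperationalStatus of Online and a partition style other than RAW once they have been initialized.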

Activity 8-7: Installing the Failover Clustering Feature and Validating a Cluster
Configuration
Objective: Install the Failover Clustering feature on two servers and validate a cluster configuration.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: The topology for this cluster configuration is shown in Figure 8-33, p.314 in the
textbook. In this topology, ServerDC1-# is the iSCSI target, which has iSCSI shared storage for
ServerDM1-# and ServerDM2-#, the cluster servers. The clients in the figure are not part of the
activity. You’ll install the Failover Clustering feature on ServerDM1-# and ServerDM2-#, and after the
feature is installed, you’ll validate the configuration.

1. Sign in to ServerDM1-# as the domain Administrator, if necessary.


2. Open a PowerShell window and install the Failover Clustering feature by typing Install-
WindowsFeature Failover-Clustering -IncludeManagementTools and pressing Enter.
3. Sign in to ServerDM2-# as the domain Administrator and install the Failover Clustering
feature as in Step 2 without the -IncludeManagementTools option.
4. Switch to ServerDM1-#, open Server Manager, and click Tools, Failover Cluster Manager
from the menu.
5. Click Validate Configuration in the Actions pane to start the Validate a Configuration
Wizard. Read the information in the Before You Begin window, and then click Next.
6. In the Select Servers or a Cluster window, type ServerDM1-#, and click Add. Then type
ServerDM2-#, and click Add again (see Figure 8-34, p.314 in the textbook). Click Next.
7. In the Testing Options window, leave the default option Run all tests (recommended)
selected, and click Next.
8. In the Confirmation window, review your validation settings, and then click Next.
9. The validation test runs, and each test reports results as it runs. This test will take several
minutes. AFTER the validation test has finished take a screenshot of the Summary window
and paste it below. Make sure the VM number in the top left-hand corner is in the
screenshot for full credit for this step.
Screenshot:

10. The Summary window (see Figure 8-35, p.315 in the textbook) has a button that you can
click to review the validation report when the tests are finished. If errors or warnings are
reported in this window, click View Report to get additional information. You are likely to
see a warning about the operating system installation option because one server is
installed with Desktop Experience and the other is Server Core. In addition, if Windows
Update is disabled, you will see a warning. Warnings are usually okay. If there are errors,
try to solve any problems and run the validation wizard again.
11. Be sure the Create the cluster now using the validated nodes option is cleared. You will
create the cluster in the next activity. Click Finish.
12. Leave the Failover Cluster Manager open and continue to the next activity.
 Note: While testing this activity, the networking tests failed the first time the validation
was run but succeeded on the second run. If you get an error in the validation but can’t
find an obvious reason, try running the test again.
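For reference, the validation wizard has a PowerShell equivalent in the FailoverClusters module. This is a sketch only — the wizard is the graded path:

```powershell
# Sketch: PowerShell equivalent of the Validate a Configuration Wizard
# (Steps 5-9). Run on ServerDM1-#; replace # with your VM number.
Test-Cluster -Node "ServerDM1-#", "ServerDM2-#"
```

When the tests finish, the cmdlet writes an HTML validation report and prints its path; review it for the same warnings described in Step 10.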

Activity 8-8: Creating a Failover Cluster


Objective: Create a failover cluster.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: Your cluster servers and network environment have been validated, so it’s time to
create the failover cluster.
1. Sign in to ServerDM1-# as the domain Administrator, if necessary.
2. Open the Failover Cluster Manager, if necessary.
3. Click Create Cluster in the Actions pane to start the wizard. Read the information in the
Before You Begin window, and then click Next.
4. In the Select Servers window, type ServerDM1-#, and click Add. Then type ServerDM2-#
and click Add. Click Next. (After clicking Next, it may take a minute to populate the
networks.)
5. In the Access Point for Administering the Cluster window, type Failover1 in the Cluster
Name text box. Click Click here to type an address, and type 192.168.0.100 (see Figure
8-36, p.316 in the textbook). A host record with the name and address is added to DNS,
and an Active Directory computer object is created. Clear the check box for the
192.168.1.0/24 Network if necessary. Click Next.
6. The Confirmation window shows the settings for creating the cluster, which include the
cluster name, IP address, and the nodes (servers) in the cluster. Also, be sure that the Add
all eligible storage to the cluster option is checked. Click Next.
7. If errors or warnings are reported in the Summary window, click View Report to get
additional information. If the cluster was created successfully, you see that the quorum
mode of Node and Disk Majority was selected automatically (see Figure 8-37, p.317 in the
textbook). Take a screenshot of the Summary window and paste it below. Make sure the
VM number in the top left-hand corner is in the screenshot for full credit for this step.
Screenshot:

8. Click Finish.
9. The next step is to review the cluster configuration in the Failover Cluster Manager. In the
middle pane, review the cluster summary. In the left pane, click to expand the cluster name
(Failover1.MCSA2016.local), and then click Nodes to view the servers and their status
in the middle pane. Click ServerDM1-# in the middle pane to see more details about this
node (see Figure 8-38, p.318 in the textbook).
10. In the left pane, click to expand Storage and then click Disks to see the disks that are
available for the cluster. The disks listed are shared storage from the iSCSI SAN. Notice
that Cluster Disk 1 has been assigned as Disk Witness in Quorum. Also notice the Owner
Node column, indicating which cluster server is currently owner of the storage. Take a
screenshot of the Disks and paste it below. Make sure the VM number in the top left-hand
corner is in the screenshot for full credit for this step.
Screenshot:
11. In the left pane, click Networks to review the cluster networks. You see two networks
listed. Cluster Network 1 is listed as Cluster and Client. Cluster Network 2 is listed as
Cluster Only. Click each network to see the network address information in the bottom
pane. Close the Failover Cluster Manager.
12. Sign in to ServerDC1-# as Administrator and open Server Manager. Click Tools, Active
Directory Users and Computers. Expand MCSA2016.local and click the Computers
folder. You should see the new computer account created for the cluster named Failover1.
Take a screenshot showing the new computer account Failover1 and paste it below. Make
sure the VM number in the top left-hand corner is in the screenshot for full credit for this
step.
Screenshot:

13. Close Active Directory Users and Computers.


14. Click Tools, DNS to open DNS Manager. Click to expand ServerDC1-#, Forward Lookup
Zones and click MCSA2016.local. You should see that the Failover1 A record has been
created. Take a screenshot showing the Failover1 A record and paste it below. Make sure
the VM number in the top left-hand corner is in the screenshot for full credit for this step.
Screenshot:

15. Close DNS Manager.


16. Stay signed in to all servers if you’re continuing to the next activity.
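For reference, Steps 3–8 can also be done with one cmdlet from the FailoverClusters module. This is a sketch only — the Create Cluster Wizard is the graded path:

```powershell
# Sketch: PowerShell equivalent of Steps 3-8 (not graded; the wizard is
# the required path). Run on ServerDM1-#; replace # with your VM number.
# By default, New-Cluster adds all eligible shared storage, matching the
# option you checked in the Confirmation window.
New-Cluster -Name "Failover1" -Node "ServerDM1-#", "ServerDM2-#" `
    -StaticAddress 192.168.0.100

# Confirm the quorum configuration and the cluster resources afterward.
Get-ClusterQuorum
Get-ClusterResource
```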

Activity 8-9: Creating a File Server Failover Cluster


Objective: Configure the File Server role on the failover cluster.

Required VMs: ServerDC1-#, ServerDM1-#, and ServerDM2-# (# being your VM number)

Description: Your cluster is up and running, so now it’s time to configure a role for high availability.
You will install the File Server role and then configure it in the failover cluster.

1. You need to install the File Server role on ServerDM1-# and ServerDM2-# so they can
participate in the cluster role.
2. Sign into ServerDM1-# as the domain Administrator, if necessary. Open a PowerShell
window, if necessary, and then type Install-WindowsFeature FS-FileServer and press
Enter. After the role is installed, close the PowerShell window. Repeat the process for
ServerDM2-#.
3. Switch to ServerDM1-# and open the Failover Cluster Manager, if necessary.
4. Click to expand Failover1.MCSA2016.local in the left pane. Right-click Roles and click
Configure Role to start the High Availability Wizard.
5. Read the information in the Before You Begin window, and then click Next.
6. In the Select Role window, click File Server, and then click Next. In the File Server Type
window, leave the default option File Server for general use selected. Notice that you also
have the option to create a Scale-Out File Server. Click Next.
7. In the Client Access Point window, type Failover1FS in the Name text box, and then click
Click here to type an address. Type 192.168.0.101 for the address of this cluster service.
Clear the check box for the 192.168.1.0/24 Network if necessary and then click Next.
8. In the Select Storage window, you select the storage volume you want to use. These
servers were set up to share two iSCSI volumes, and one of them is used as the witness
disk. Click the check box next to the cluster disk (probably named Cluster Disk 2), and
then click Next.
9. Review the information in the Confirmation window, and then click Next.
10. If any errors or warnings were generated, you could click View Report in the Summary
window to troubleshoot. Click Finish.
11. In the Failover Cluster Manager, click Roles and then Failover1FS to review the summary
information for the clustered service. Notice that one of the servers is designated as the
current owner. The other server is in passive or standby mode. Take a screenshot of the
Failover Cluster Manager showing the Failover1FS server under the Roles pane and paste
it below. Make sure the VM number in the top left-hand corner is in the screenshot for full
credit for this step.
Screenshot:

12. Click the Shares tab at the bottom of the Roles window. Notice that a default administrative
share is created. You can create new shared folders on the shared volume by using Share
and Storage Management or the Add File Share link in the Actions pane. Click the
Summary tab.
13. To test the failover configuration, click Roles in the left pane, right-click Failover1FS in the
middle pane, point to Move, and click Select Node.
14. In the Move Clustered Role dialog box, click ServerDM2-# or ServerDM1-#, whichever is
listed, and then click OK. The Summary window shows the new owner of the service. Take
a screenshot of the Failover Cluster Manager showing the new owner of the Failover1FS
server under the Roles pane and paste it below. Make sure the VM number in the top left-
hand corner is in the screenshot for full credit for this step.
Screenshot:

15. Close the Failover Cluster Manager.
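For reference, the High Availability Wizard steps and the failover test can also be done from PowerShell. This is a sketch only — the Failover Cluster Manager steps above are the graded path — and "Cluster Disk 2" is an assumption; use whichever cluster disk is not the disk witness in your environment:

```powershell
# Sketch: PowerShell equivalents of the High Availability Wizard (Steps
# 4-10) and the failover test (Steps 13-14). Run on ServerDM1-#.
# "Cluster Disk 2" is assumed to be the non-witness disk -- check yours.
Add-ClusterFileServerRole -Name "Failover1FS" -Storage "Cluster Disk 2" `
    -StaticAddress 192.168.0.101

# Move the role to the other node and confirm the new owner node.
Move-ClusterGroup -Name "Failover1FS" -Node "ServerDM2-#"
Get-ClusterGroup -Name "Failover1FS"
```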


 Note: Chapter 9 continues coverage of failover clustering, and you will need the current
configuration on ServerDC1-#, ServerDM1-#, and ServerDM2-#, so do not revert the
servers to an earlier snapshot.