
603: Deploying desktop virtualization using Citrix XenDesktop and HP Moonshot
Hands-on Lab Exercise Guide
Contents
Contents .................................................................................................................................... 1
Exercise 1 - Overview ................................................................................................................ 2
Exercise 1 — Installing the XenDesktop VDA ............................................................................ 6
Exercise 2 - Overview ...............................................................................................................22
Exercise 2 — Configuring DHCP scope options ........................................................................26
Exercise 3 - Overview ...............................................................................................................39
Exercise 3 — Installing the Provisioning Services Client ...........................................................43
Exercise 4 - Overview ...............................................................................................................70
Exercise 4 — Leveraging PowerShell for Moonshot and PVS ...................................................74
Exercise 5 - Overview ...............................................................................................................80
Exercise 5 — vDisk Administration............................................................................................84
Exercise 6 - Overview .............................................................................................................100
Exercise 6 — Creating Catalogs and Groups in Citrix Studio ..................................................104
Exercise 7 - Overview .............................................................................................................123
Exercise 7 — Testing a Moonshot XenDesktop ......................................................................127

| 1 |
Exercise 1 - Overview
Hands-on Training Exercise 1
Overview
In this exercise, students will have access to a Windows 7 x64 image on a bare-metal node in the
chassis. In the interest of time, a node with Windows 7 x64 already installed has been provisioned
for this lab. Students will install and configure the Citrix XenDesktop VDA and apply the updates
necessary for functionality with the CS100 environment.

Objective
After completing this lab, you will be able to:
 Understand the Moonshot driver injection and support process
 Install the Citrix Virtual Delivery Agent
 Update the Citrix Virtual Delivery Agent

Prerequisites
 Citrix Provisioning Services 7.1 ISO
 XenDesktop 7.1 ISO

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 2 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 4 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 5 |
Exercise 1 — Installing the
XenDesktop VDA
From your Thin Client click on the Desktop Remote Manager icon. We will use this
application to provide a consolidated view of RDP connections.

Expand the Nodes folder. Right click on the Win7 Master Node icon and choose
properties.

!!!!!!!!!PLEASE READ THIS SECTION VERY CAREFULLY!!!!!!!!!

2a. Enter the correct cartridge and node # (NetBIOS name) that matches the name of
the source node to which Windows 7 has been deployed. In this lab, Windows 7
Enterprise has already been deployed to a node for you.
The correct NetBIOS name of the node that each student will use is shown in the chart
below. Each student's lab has 4 nodes (PCs) per cartridge, and each student is assigned
ONLY 1 cartridge. For example, if you are Student 1 your cartridge is C1, and N1-4 means
Nodes 1-4 are the 4 nodes that you have access to. The node you will RDP into to create
the master image will always be N1; the cartridge # is unique for each student. To
determine the cartridge you are using for this lab, refer to the Master Node to RDP
column in the chart below. Your Student # is pre-assigned to your seat.

| 6 |
NOTE: If you would like to use a full screen RDP session you can simply change that for
each machine. In the Connection Tab choose Display and select External and click OK.

| 7 |
Select the Win7 Master Node and click the open session button to establish a RDP
session.

| 8 |
Note: After logging in via RDP you may see this Windows activation message appear.
Please ignore this message and click Close. We are using a volume-license, KMS-based
install of Windows 7 Enterprise; however, for security purposes we are not applying a
corporate key via our WDS deployment.

From the Windows 7 master image desktop click on Start Run and type the
path \\172.23.0.2\moonshot\iso\XenDesktop7_1.iso. The ISO will be automatically
mounted using the SlySoft virtual CloneDrive free utility.
If the ISO will not mount, you will need to install the Virtual Clone Drive Software from the
following location: \\172.23.0.2\moonshot\software\VirtualCloneDrive

| 9 |
Open the My Computer icon on the desktop and double click the XenDesktop ISO that
has been mounted.

The XenDesktop 7.1 ISO should be mounted at this point and the autorun will start. Click
on the Start button to begin the installation

| 10 |
The XenDesktop ISO metainstaller automatically detects the source OS it was
mounted on and only allows installation of the components that are supported,
which appear with white-colored text. Select Virtual Delivery Agent for Windows
Desktop OS.

Select the Create a Master Image option and click Next.

There are two types of VDAs, and it is important to understand the naming convention
and support for each. For traditional VDI, the XenDesktop Standard VDA provides virtual
machines, or in the case of the CS100 physical nodes, access to a GPU for certain
graphics engines such as DirectX 11. This is a new feature, available only in
XenDesktop 7.1 and above. More about this can be found here. (type out this link) In
the past, leveraging a GPU for a desktop required HDX 3D Pro. This is no longer the
case in XenDesktop 7.1, as the Standard VDA can use a GPU, but ONLY with applications
that use the DirectX 11 graphics engine, such as Microsoft Office and Google Earth.
Select the green No, install the standard VDA option and click Next. Don't select the
RED HDX 3D Pro option.

Accept the defaults and keep the Citrix Receiver checked and click Next.

| 12 |

On this screen we need to enter only the FQDN of the XenDesktop controller, not its
IP address or NetBIOS name. Enter XDC1.HP.LOCAL, then click Test connection.
A green checkmark should appear, verifying that the FQDN and configuration of the
XenDesktop controller were successfully found.

Click Add and the FQDN of the Controller should be added to the VDA setup. Click Next
to continue.

| 13 |
Accept the defaults for Features and click Next.

Accept the defaults for the Firewall and click Next.

| 14 |
Review the Summary of the installation and click Install.

During this time you will see the prerequisites and other components install. Depending
upon the write speeds this may take a few minutes.
Note: If you experience any issues with the VDA installing you can copy the ISO to local
C: of your master node and rerun the installer.
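As a side note (not part of the lab steps), the same Desktop OS VDA installation can be scripted from the mounted media instead of using the GUI. The sketch below mirrors the choices made above; the drive letter, media path, and flag spelling follow Citrix XenDesktop 7.x conventions and should be treated as assumptions rather than the lab's tested method.

    # Hypothetical unattended VDA install mirroring the GUI choices above
    # (Standard VDA, master image, controller XDC1.HP.LOCAL, open firewall ports).
    # The D: drive letter and media path are assumptions for this lab.
    & "D:\x64\XenDesktop Setup\XenDesktopVdaSetup.exe" /quiet `
        /components VDA,PLUGINS `
        /controllers "XDC1.HP.LOCAL" `
        /masterimage /optimize /enable_hdx_ports /noreboot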

| 15 |
Once the install has finished the node will need to be restarted. Click Finish and the
node will reboot and your RDP session will be disconnected.

Wait about 5 minutes after the reboot to continue with the next steps.
From the Remote Desktop Manager tree select the Win7 Master Node and click the
open session button to establish a RDP session.

| 16 |
Later in the labs we will use the Citrix Receiver app that was just installed to subscribe to
hosted XenApp applications via StoreFront. In this lab, a small registry modification will
need to be applied to our master node so that it can connect to the StoreFront portal
without using an SSL certificate. Using SSL is a Citrix best practice; however, to simplify
this lab we will not be using SSL.
In the taskbar, click the triangle in the far right corner to expose the hidden app icons.
Right-click the Citrix Receiver icon and choose Exit.

| 17 |
From the Windows 7 master image desktop Click on Start Run and type the path
\\172.23.0.2\moonshot\software\ Click OK.
Double click the ReceiverHTTP registry file

A pop up will appear. Click Run to start the Registry import process.

| 18 |
Click on Yes to continue the process.

Click Ok

Close out any open windows.
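For reference, the contents of the lab's ReceiverHTTP registry file are not shown in this guide. A common way to let Citrix Receiver add a StoreFront store over HTTP is to set AllowAddStore to A under the Dazzle key; the PowerShell sketch below shows that approach, with the key path and value labeled as assumptions about what the lab file actually imports.

    # Hypothetical equivalent of importing ReceiverHTTP.reg: allow Receiver to add
    # HTTP (non-SSL) StoreFront stores. On 64-bit Windows 7 the 32-bit Receiver
    # reads the Wow6432Node path shown here (assumption for this lab).
    $key = 'HKLM:\SOFTWARE\Wow6432Node\Citrix\Dazzle'
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'AllowAddStore' -Value 'A' `
        -PropertyType String -Force | Out-Null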


From the Windows 7 master image desktop Click on Start Run and type the path
\\172.23.0.2\moonshot\iso\XenDesktopHDX_Update\ XD710ICAWSWX64002.MSP. Click
OK.

Click Open.

| 19 |
This Citrix MSP applies a new AMD-optimized HDX build that enhances the WDDM
driver for the user experience on the CS100. This MSP is only applicable to the
Standard VDA and is not tested or certified by Citrix for use with HDX 3D Pro. Click
Next to start the installation. For future reference, the MSP can be downloaded directly
from http://support.citrix.com/article/CTX139622.

Click Update to install the HDX VDA improvements. This process takes a few minutes to
install.

Click Finish after the process has completed.

| 20 |
After clicking Finish the node will need to be rebooted. Click OK to reboot.

END OF EXERCISE 1

| 21 |
Exercise 2 - Overview
Hands-on Training Exercise 2
Overview
In this exercise, students will have access to a Citrix Provisioning Services server running the
DHCP service. Students will configure the DHCP server with the options necessary to allow the
Moonshot nodes to boot from the network and receive a streamed image from the PVS server.

Objective
After completing this lab, you will be able to:
 Understand how DHCP can be set up for Provisioning Services

Prerequisites
To complete this lab, you need:
 Windows administrator credentials
 Access to PVS VM
 Access to DHCP console on PVS

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 22 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 24 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 25 |
Exercise 2 — Configuring DHCP
scope options
1. From your Student VM click on the Desktop Remote Manager icon. We will use this
application to provide a consolidated view of RDP connections.

Note: The Remote Desktop Manager tool opens all RDP connections in a
tabbed view. If you would like to use a full screen RDP session you can change
that for each machine. Right click on the machine in the list you would like to
change and choose properties. In the Connection Tab choose Display and
select External and click OK.

2. Expand the Remote Desktop Manager tree and navigate to the Nodes folder. From
there select the pvs1.hp.local Master Node and click the open session button. A
RDP connection should be made to pvs1.hp.local.

| 26 |
3. From here navigate to the Windows 2012 Charms menu and click on the Start Menu
icon.
Note: If you’re not familiar with getting to the Windows 2012 Start Menu, you can use
the Remote Desktop Manager Toolbar which provides a nice shortcut as opposed to
using your mouse to getting the charms menu to appear. This can be done by
clicking on the arrow at the top of the Remote Desktop Manager Toolbar. Once this
toolbar in engaged the RDP Windows will shrink and a toolbar will appear on top.
Click on the Start Screen icon to access the Windows 2012 Start Menu.

| 27 |
4. From the Start menu click on the DHCP Console

| 28 |
5. The console will then open up in the Windows Desktop shell. Expand the MMC
console window to full screen
6. From the DHCP console click through the tree to navigate to the DHCP scope
settings. Make sure you’re NOT in the DHCP Server Options section.

7. Right click on the scope options and click configure options.

| 29 |
8. From the Scope Options window arrow down until you see the available options 66
and 67. Select each of those by placing a checkmark in the box as in the example
below.

9. In the Option 66 section we need to specify the IP address on the Provisioning
Services streaming network, which handles delivering the information for TFTP. In the
string value field, enter the address 172.24.0.8. Don't click Apply yet.

| 30 |
10. Using your mouse, click the Option 67 Bootfile Name section, which is used for
the PVS bootstrap that delivers the PVS server information via DHCP. In the
string value field, enter the value ARDBP32.BIN. Be sure that it is spelled
correctly, or the bootstrap will not be delivered to the node once it starts to
PXE boot.

11. Once you have confirmed the IP address and bootfile name click Apply.

12. The new Scope options changes are now displayed in the DHCP console.

| 31 |
13. To ensure these changes take effect the DHCP services will need to be restarted.
From the DHCP console simply right click on the pvs1.hp.local icon and navigate to
All Tasks and click on Restart.


15. The DHCP services will restart. Once this is finished you can close out of the DHCP
console.
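For reference, the same scope options can be configured and the service restarted from PowerShell on the Windows Server 2012 PVS VM using the DhcpServer module. The scope ID below is an assumption; substitute the scope shown in your DHCP console.

    # Set option 66 (boot server host name) and option 67 (bootfile name) on the
    # PVS streaming scope, then restart the DHCP Server service.
    # -ScopeId 172.24.0.0 is an assumption for this lab; use your actual scope.
    Import-Module DhcpServer
    Set-DhcpServerv4OptionValue -ScopeId 172.24.0.0 -OptionId 66 -Value '172.24.0.8'
    Set-DhcpServerv4OptionValue -ScopeId 172.24.0.0 -OptionId 67 -Value 'ARDBP32.BIN'
    Restart-Service DHCPServer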

16. Now that DHCP is configured we need to change our existing PVS server
configuration.
Note: In a customer environment where PVS is already configured, the following
steps will not have to be performed. We are only performing these steps to
provide an understanding on how PVS is configured in this lab and to address a
change for power management which is used later on in the lab.
17. From here navigate to the Windows 2012 Charms menu and click on the Start Menu
icon. If you're not familiar with getting to the Windows 2012 Start Menu, you can use
the Remote Desktop Manager Toolbar, which provides a convenient shortcut as opposed
to using your mouse to make the Charms menu appear. To do this, click the arrow at the
top of the Remote Desktop Manager Toolbar.

| 32 |
18. Once the toolbar is engaged, the RDP window will shrink and a toolbar will appear
on top. Click the Start Screen icon to access the Windows 2012 Start Menu.

19. From the Start Menu click on the Provisioning Services Configuration Wizard.
Note: If you leave the mouse pointer over an icon, it will display the full application
name.

20. Click Next

| 33 |
21. In the DHCP Services screen select The service that runs on this computer and
select Microsoft DHCP. Click Next.

22. In the PXE services windows select The service that runs on this computer and
click Next.

| 34 |
23. In the Farm Configuration window select Farm is already configured and click Next.

24. In the User Account window select the Specified user account and enter the domain
administrator user, domain name, and password then click Next.
Note: In a production environment there will be specific administrative accounts
or service accounts that will be used. To simplify this lab we are using the
domain administrator account.

| 35 |
25. In the Active Directory account password window check the Automate computer
account password updates and select 30 days for password updates. Click Next.

26. In the Network Communications window, check the box so that the streaming network
card has the 172.24.0.8 IP. In the Management network card section, ensure that
172.24.0.8 is also selected. Click Next.

| 36 |
27. In the TFTP window select the Use the Provisioning Services TFTP service and
accept the default path and click Next.

28. In the Stream Services Boot list window ensure that 172.24.0.8 is in the Server IP
Address column and the correct 16 bit 255.255.0.0 mask is there and click Next.
29. Confirm all the settings and click Finish.

| 37 |
30. If all the information and credentials were correct the services will show all green. If
there are any errors you may need to run the wizard again and ensure that all the
correct information is there.

31. Click Done to finish the wizard.

END OF EXERCISE 2
| 38 |
Exercise 3 - Overview
Hands-on Training Exercise 3
Overview
In this exercise students will have access to a Windows 7 x64 image on a bare-metal node in the
chassis. Students will be installing the Provisioning Services Client software on the bare-metal node
and creating a master image to use for XenDesktop.

Objective
After completing this lab, you will be able to:
 Understand how to install the Provisioning Services client
 Create a virtual disk (vDisk) from a master image
 Boot a node on the ProLiant m700 cartridge from a vDisk

Prerequisites
To complete this lab, you need:
 Windows 7 x64 Sp1 Enterprise deployed to a node in the chassis
 Access to the Provisioning Services (PVS) 7.1 ISO
 Windows Domain Administrator credentials to RDP into the node
 Access to the CM console
 Access to PVS Console

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 39 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 41 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 42 |
Exercise 3 — Installing the
Provisioning Services Client
1. From your Student VM click on the Desktop Remote Manager icon. We will use
this application to provide a consolidated view of RDP connections.

Note: The Remote Desktop Manager tool opens all RDP connections in a tabbed
view. If you would like to use a full screen RDP session you can change that for each
machine. Right click on the machine in the list you would like to change and choose
properties. In the Connection Tab choose Display and select External and click OK.

2. Expand the Remote Desktop Manager tree and navigate to the Nodes folder.
From there select the Win7 Master Node and click the open session button.

| 43 |
3. Before starting the PVS client installation, we need to confirm we are logged into
the node as the domain administrator, or as an account that has local administrator
rights on the OS and administrator rights on the Provisioning Services server and
console. In this lab we are using the HP\Administrator account to simplify things. To
ensure that we are logged into this node as the domain administrator, open a
command prompt and type WHOAMI

| 44 |
4. The output should read hp\administrator. If it doesn't, please log off the RDP
session and log in again with the domain administrator account, NOT the local
administrator account.

5. From the Windows 7 master image desktop, click Start > Run and type the
path \\172.23.0.2\moonshot\iso\ProvisioningServices 7_1.iso. The ISO will be
mounted automatically using the free SlySoft Virtual CloneDrive utility.

6. Open the My Computer icon on the desktop and double click the D: which
contains the Citrix Provisioning Services ISO that has been mounted.

| 45 |
7. Double click the autorun. Click on the Target Device Installation to begin the
installation

8. The Provisioning Services Target Device Installation will begin. On the next
screen click on the Target Device Installation one more time to continue the
process.

9. The installation will begin and an image like below will appear.

| 46 |
10. Click Next to Continue.

11. Accept the licensing agreement and click Next.

| 47 |
12. Accept the defaults and click Next to continue.

13. Accept the defaults for the installation location and click Next to continue.

| 48 |
14. Select Install to start the installation.

15. This process may take a few minutes. Once the installation is complete click
Finish. The imaging wizard process will begin.

16. Click Next to begin the imaging wizard.

| 49 |
17. Enter the IP address of the Provisioning Services machine, not its NetBIOS name
or FQDN. In some cases there may be multiple IP addresses on the
Provisioning Services machine, so be sure to enter the address used for
management and not the one used for the streaming service. In this case we will enter
the IP address 172.24.0.8 (I had to use 172.23.0.8), which is the management IP, and
leave the port number at the default of 54321. Leave Use my Windows
Credentials selected, since you have remoted into the node as the domain
administrator, who also has Provisioning Services administrator rights. In this lab
all RDP connections use the domain administrator account, who is also a Provisioning
Services administrator, so there is no need to enter any other credentials. Click Next
to continue.

18. Select Create new vDisk and click Next.

| 50 |
19. In the vDisk name field, enter a short name that describes the virtual
disk you are about to create. For example, since the source OS is Windows 7 x64
you can name the disk W7X64 and append a description like HDI; this helps with
understanding what the virtual disk types are later on. In this example I'm using
W7X64HDI to separate this from other vDisk types. In the Store section accept
the defaults, as Provisioning Services has only one store at this time. For the
vDisk type select Fixed and click Next. Dynamic is not recommended for this lab,
as the disk would have to grow while the image is being converted from physical
to a vDisk, making the OS conversion process take longer.

20. There are two types of Microsoft Windows licensing that PVS supports: KMS and
MAK. The type of license you should select is determined by the ISO that was
used to create the OS from WDS. Most volume-license ISOs from enterprise
customers have the letters VL in the ISO file name and may not require a
license key during installation; these are used for KMS activation. For ISOs
that are not VL-based, select MAK. If you're unsure, you can also open a
command prompt on this node and type SLMGR /DLV to display the type
of license that was used to install Windows.
| 51 |
21. Select Key Management Service (KMS) and click Next.

22. On this screen we will select the source volume we are going to convert from
physical to virtual, which essentially clones the entire Windows OS into a VHD
(Virtual Hard Disk) file stored in the Provisioning Services store that we set up
earlier in this module. There are a few items to review. First, the
Windows OS that was built with WDS created a standard Windows System
Reserved partition, which requires an additional 200 MB of disk space and is primarily
used for recovery and for Windows BitLocker data encryption. This use case applies
primarily to physical machines such as laptops. In the VDI/HDI world this
partition is not needed, as any recovery of a virtual disk operating system will be
done by leveraging vDisk backups rather than a system partition backup. As we
convert this physical node to a virtual disk, we will have to accept that this
partition will be there for this lab; however, when using WDS you can change the
unattended XML file so this partition is not created, which saves space and
avoids other issues that may arise.

| 52 |
12. From the Provisioning Services Imaging Wizard you can also resize the virtual disk
you are about to create by clicking the blue circle with the arrow. If you resize the
disk here, you will also have to extend the partition later on using the Windows DiskPart
command so that the new size fills the entire disk. DO NOT resize the disk here;
please accept all the defaults.
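For context, if you ever did resize the vDisk, extending the Windows partition afterwards with DiskPart would look roughly like the sketch below. The volume number is an assumption and must match the output of list volume on your node.

    # Hypothetical DiskPart script to extend the Windows volume after a vDisk resize.
    # "select volume 1" is an assumption; confirm the number with "list volume" first.
    @"
    list volume
    select volume 1
    extend
    "@ | Set-Content -Path "$env:TEMP\extend.txt" -Encoding ASCII
    diskpart /s "$env:TEMP\extend.txt"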

13. Review all the settings here and make sure your volumes and sizes are the same as
in this screenshot and click Next.

PLEASE READ THIS SECTION CAREFULLY. ENTERING THE WRONG INFORMATION
WILL REQUIRE YOU TO START THIS LAB OVER, WHICH WILL PUT YOU BEHIND
IN THE LABS.

14. In the Target Device Name field, enter the NetBIOS (computer) name this
machine will be called in Active Directory. Provisioning Services can create new target
devices in its own database and then add them to Active Directory automatically. In this
case my node's NetBIOS name is C2-C21n1, so I will enter a different name like C21N1
in the Target Device Name field, as there can't be two identical names in Active
Directory. This can be any other combination of characters, but
appending the cartridge name and node number helps in the long run to
distinguish which node is on which cartridge and connected in PVS. You can also verify
your NetBIOS name by opening a command prompt and typing hostname, as in the
example below.

| 53 |
15. For the MAC section, ensure that Local Area Connection is selected and NOT
Local Area Connection 2. This is critical to get correct, as each node has two
network adapters. One network adapter will be used to connect to the Provisioning
Services network to create and stream the virtual disk during the next steps, while the
other will be used for normal network traffic. In order to successfully convert this node to
a vDisk, the correct network adapter needs to be selected. To see which MAC address is
used, open a command prompt and type ipconfig /all.

| 54 |
16. Verify that you have the correct Target Device Name and that the MAC is from
Local Area Connection only. The default collection, Orbit, is where PVS will create these
nodes. Your screen should look similar to the one below. Click Next.

17. Click on the Optimize for Provisioning Services.

18. Accept all the defaults for the Provisioning Services Optimization Tool and click OK.

| 55 |
19. Review the Summary of Farm Changes screen, as this is the last chance to go
back and correct any mistakes. Click Finish.

20. The vDisk creation process will begin. This process may take up to 30 minutes,
depending on the size of the virtual disk being created and the write speed of the
storage where the vDisk is being created.

BREAK TIME

| 56 |
21. PLEASE READ THIS SECTION CAREFULLY. ENTERING THE WRONG
INFORMATION WILL REQUIRE YOU TO START THIS LAB OVER, WHICH
WILL PUT YOU BEHIND IN THE LABS.

22. Once the vDisk has been created a message will appear. Click NO and don’t
reboot the node as we will need to change the boot order in CM for this node

23. After clicking NO to the previous message, another message will pop up asking
to restart. Click NO again here also.

24. Close any windows that are open and click on the Start button and select
Windows Security.

| 57 |
25. From the Windows Security screen, click the red power button and select Shut down.

26. The node will start to shutdown

27. Once the node is shut down, open an RDP connection to the Student CM VM from
the HP Thin Client.
Note: We will now use this VM as a jump VM from which to make PuTTY SSH sessions
to the Moonshot Chassis Manager.

| 58 |
!!!!!PLEASE READ THIS SECTION CAREFULLY. ENTERING THE WRONG
INFORMATION MAY IMPACT YOUR FELLOW STUDENTS BY CONNECTING TO
THE WRONG CHASSIS!!!!!!

28. From the Student CM VM open Remote Desktop Connection manager and
expand the Chassis folder and right click on Chassis Manager and go to
properties.

| 59 |
29. In the Chassis Manager Properties menu click on the … to open the PuTTY SSH
settings.
30. In the PuTTY SSH settings highlight on the Chassis Manager saved session. In
the Host Name enter the IP address of the chassis manager that is assigned to
your student group. Please refer to the table below to enter the correct IP
address of the CM you will SSH into.

| 60 |
31. Enter the IP address 172.22.0.X where X is the Chassis you are assigned to
based on your student # in the table above.

32. Click Save then hit the X to close the PuTTY application.

| 61 |
33. Click the OK button to close the properties of the Remote Desktop Connection
manager application.

34. Click on the Open Session to start the PuTTY SSH session.

35. The PuTTY connection window should open and automatically sign into
the Chassis Manager as Administrator, as shown below.
Note: If the SSH connection doesn't sign in automatically, you can sign in manually using
the username Administrator and the password password.

| 62 |
36. From the PuTTY window we now need to change the node to boot from PXE so that
we can continue the PVS conversion of this source node onto the vDisk that we
created in the previous step. To change the boot order, type the
command Set node boot pxe cXn1, where X is the cartridge number assigned to
you and n1 is the node. This changes the node to boot from PXE rather than
HDD, which is the default. Verify your cartridge number and node, then hit Enter.
The output from PuTTY should show that the node boot order was changed from
HDD to PXE.

37. From the Student CM VM we need to create a second Remote Desktop
Connection Manager PuTTY window, in a tabbed view alongside our primary
PuTTY window, so we can watch the node from the VSP (Virtual Serial Port)
while it powers on. This helps verify that the node is communicating
with PVS and that it is obtaining the correct IP address information,
bootstrap information, and vDisk assignment.
38. From the Remote Desktop Connection manager console on the Student CM VM
right click on Chassis Manager and choose Duplicate Entry.

39. Change the name to display Chassis Manager- VSP then click OK.

| 63 |
40. The Remote Desktop Connection Manager should now have two entries in the
Chassis folder. Click on the Chassis Manager-VSP and click Open Session.

A second PuTTY session should now open and be automatically logged into the chassis

41. From the Chassis Manager - VSP PuTTY window, type the command Connect
node VSP CXN1, where X is the cartridge number assigned to you and N1 is the
node. This establishes a session to watch the node boot from its BIOS once we
power on the node.
| 64 |
42. From the Chassis Manager PuTTY window we need to power on the node. In
the PuTTY window enter Set Node Power on CXN1, where X is the cartridge
number assigned to you and N1 is the node. Hit Enter to execute the
command and power on the node.

43. Once we power on the node we can see the status of the node in the VSP by
switching over to the Chassis Manager- VSP PuTTY window.

| 65 |
44. As the node powers up we can see the DHCP scope options we configured earlier
in the lab take effect. The bootstrap and other PVS information were delivered via
DHCP, and the node's MAC address is now connected to the vDisk we created in
the previous steps.

45. Once the node has fully booted up, you should see the SAC interface appear.
From this point we can RDP into the node to continue the vDisk conversion
process.

46. From the HP Thin Client Remote Desktop Connection Manager Expand the
Remote Desktop Manager tree and navigate to the Nodes folder. From there
select the Win7 Master Node and click the open session button

| 66 |
47. The login process to the desktop will take an extra 15 seconds or so. Once the
login has finished the Provisioning Services Imaging 7.1 screen will appear.

48. At this point the OS contents from the source iSSD are being copied to the
VHD that was created earlier on the PVS server. This process may take up to 30
minutes, depending on the disk speed of the hardware where the vDisk is located.
During this time DON'T hit any keys or cancel this operation, or you will have to
start the lab over.

This is a good stopping point for a bio break while this conversion finishes.
49. Once the vDisk capture process finishes click Finish to continue to be connected
to the desktop.

| 67 |
50. After the Windows desktop appears navigate to the bottom right hand corner of
the taskbar and click on the arrow to enumerate the hidden icons. Click on the
PVS vDisk icon.

51. This new PVS client icon shows the physical node now has a network streamed
connection to the virtual disk and is in read/write mode. When a virtual disk is in
read/write mode all changes made within Windows such as installing applications
and so forth are persistent.

52. Now that we have successfully created a vDisk, we need to change the mode
from Read/Write to Read-only and change Boot From from local hard drive
to vDisk, so that we are booting 100% from the network.
53. As the last step in this lab we will need to shut down the node in order to make
changes to the vDisk mode, as these changes can't be made on the fly.

| 68 |
54. Close any windows that are open, click the Start button and select
Windows Security. From the Windows Security screen, click the red power button
and select Shut down.

The node will start to shut down.

END OF EXERCISE 3

| 69 |
Exercise 4 - Overview
Hands-on Training Exercise 4
Overview
The HP Moonshot Tools for PowerShell seamlessly integrate CS100 Chassis
Management tasks within Windows PowerShell. Specialized Moonshot functions are also included
within the package to integrate with Device Discovery in the Citrix PVS Console. This lab will walk
you through using the HP Moonshot Tools for PowerShell with the HP Converged System 100 for
HDI. The following exercises will focus on both basic and advanced usage of the HP Moonshot
Tools with the CS100.

Objective
After completing this lab, you will be able to:
 Understand pre-requisites and restrictions with HP Moonshot Tools for PowerShell
 Download and install HP Moonshot Tools for PowerShell onto Windows Server
 Execute PowerShell functions supported by the HP CS100 for HDI
 Use interactive and non-interactive modes for all supported functions
 Automate HP CS100 for HDI chassis management tasks in PowerShell

Prerequisites
To complete this lab, you will need:
 Moonshot Chassis, etc.
 Windows Client or Server with PowerShell v3.0 (build 6.2.9200.16384) or higher
 Putty.exe and Plink.exe installed to "C:\Program Files (x86)\PuTTY\"
(http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html)
 iLO minimum version: CM v1.10
 SSH server and usernames\passwords configured on Moonshot CM and Switches
 Windows PowerShell Execution Policy to allow RemoteSigned scripts

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 70 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 72 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 73 |
Exercise 4 — Leveraging
PowerShell for Moonshot and PVS
1. From your Student VM click on the Desktop Remote Manager icon. We will use this
application to provide a consolidated view of RDP connections.

Note: The Remote Desktop Manager tool opens all RDP connections in a tabbed view. If
you would like to use a full screen RDP session you can change that for each machine.
Right click on the machine in the list you would like to change and choose properties.
In the Connection Tab choose Display and select External and click OK.

2. Expand the Remote Desktop Manager tree and navigate to the Nodes folder. From there
select the Student CM VM and click the open session button. A RDP connection
should be made to the Student CM VM.

| 74 |
3. Set-HPMoonshotCreds is used to save chassis credentials into the credentials
database. These credentials are stored until the credential set is removed, the
entire database is erased, or the HP Moonshot Tools are uninstalled. In this exercise
you will leverage the HP Moonshot Tools, which have already been installed for you.
Note: Tab completion can be used for all HPMoonshot command names as
well as for the command-line parameters.

4. Open the PowerShell console from the Student CM VM.

5. Use interactive mode to store the following Chassis Manager credentials into the
Moonshot Credentials database:
Set-HPMoonshotCreds
Alias: cm
IP address: 172.22.0.X
Enter the IP address 172.22.0.X where X is the Chassis you are assigned to based on
your student # in the table below.
SSH Port: <Enter> for default 22
Username: <Enter> for default “Administrator”
Password: password

| 76 |
Next we will query the Moonshot chassis for a list of MAC addresses to be imported later
into the PVS collection.
6. In the PowerShell window, use interactive mode and type the command
Get-HPMoonshotMACs.
7. When prompted, enter the following parameter values:
Alias: cm
Node Range: CXN1-4, where X is the cartridge number and 1-4 are your
assigned nodes. Refer to the chart above for your cartridge #.
NIC Number: 1
Chassis Label: cx- <where x is your Chassis number (ex: c2)>
Site Name: Houston
Collection Name: Orbit
Note: The characters c1- in the Chassis Label represent the Moonshot chassis
number in a rack. The hostname here could be customized. The m700 device
hostname convention is defined by concatenating the chassis label and the node
location within the chassis. This hostname will be used in Active Directory to
communicate with each node.

| 77 |
8. Open File Explorer to the current directory.
Note: The .CSV file is saved in the current directory.

9. Open the .CSV file ".\HPMoonshotMACs_172.23.0.9_all_nic1.csv" in Notepad.exe to view
its contents.
Note: This .CSV file will be used to import all m700 devices into the "Orbit"
Device Collection in the "Houston" PVS Site later in the training course.

| 78 |
10. Confirm there are only 4 MAC addresses as well as the spelling of Farm and Collection.
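A quick way to make this check from the same PowerShell window is sketched below; the CSV's column names are not documented in this guide, so the sketch only prints the raw file and counts the data rows.

    # Display the exported CSV and confirm it contains exactly 4 device rows
    # (one per node on your assigned cartridge).
    $csv = '.\HPMoonshotMACs_172.23.0.9_all_nic1.csv'
    Get-Content $csv                       # eyeball the Farm/Site and Collection spelling
    $rows = @(Import-Csv $csv)
    "Device rows found: $($rows.Count)"    # expected: 4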

END OF EXERCISE 4

| 79 |
Exercise 5 - Overview
Hands-on Training Exercise 5
Overview
In this Exercise students will have access to a Windows 7 x64 image that has been converted into a
vDisk for Citrix Provisioning Services. Students will then modify the vDisk to be shared among other
nodes on their assigned cartridges.

Objective
After completing this lab, you will be able to:
 Understand how Provisioning Services (PVS) works
 Navigate around the PVS console
 Administer basic vDisk changes

Prerequisites
To complete this lab, you need:
 Windows administrator credentials
 Access to PVS Console
 Access to CM

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 80 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 82 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 83 |
Exercise 5 — vDisk Administration
1. From your Student VM click on the Desktop Remote Manager icon. We will use this
application to provide a consolidated view of RDP connections.

Note: The Remote Desktop Manager tool opens all RDP connections in a
tabbed view. If you would like to use a full screen RDP session you can change
that for each machine. Right click on the machine in the list you would like to
change and choose properties. In the Connection Tab choose Display and
select External and click OK.

2. Expand the Remote Desktop Manager tree and navigate to the Nodes folder. From
there select the pvs1.hp.local and click the open session button. A RDP connection
should be made to PVS.

| 84 |
3. From here navigate to the Windows 2012 Charms menu and click on the Start Menu
icon. If you're not familiar with getting to the Windows 2012 Start Menu, you can use
the Remote Desktop Manager Toolbar, which provides a convenient shortcut as opposed
to using your mouse to make the Charms menu appear. To do this, click the arrow at the
top of the Remote Desktop Manager Toolbar.

| 85 |
4. Once the toolbar is engaged, the RDP window will shrink and a toolbar will appear
on top. Click the Start Screen icon to access the Windows 2012 Start Menu.

5. From the Start Menu, click the Provisioning Services Console.

| 86 |
6. The console will then open up in the Windows Desktop shell. Expand the MMC
console window to full screen.
7. From the Provisioning Services console click through the tree to navigate to the
vDisk Pool.

| 87 |
8. From the PVS console, in the vDisk Pool we can see the vDisk that we created in the
previous labs. This vDisk is a simple flat-file VHD that can be viewed using Windows
Explorer. Let's take a look at the contents of this vDisk file.
9. Click the File Explorer icon on the taskbar.

10. In the File Explorer window, navigate to the path E:\Vdisk. In this directory we can
see the newly created vDisk, which is merely a clone of the iSSD drive but is now
available in VHD form, which makes the file portable.

| 88 |
11. After viewing the contents of the vDisk minimize the File Explorer window.
12. From the PVS Console navigate to Device Collections-> Orbit

13. Double-click the node C21N1 (or whatever your node is named). From here you can
see the Target Device Properties. The target device properties show information such
as the MAC address, what the device boots from, the NetBIOS name, and whether the
vDisk type is production or test.

| 89 |
14. Since we now have a vDisk created, we want this node to boot from PXE from
now on and not from its local hard disk. Change Boot From to vDisk, then click
on the vDisks tab.

| 90 |
15. The vDisks tab tells us which vDisk this particular device boots from. In our case it's
the same vDisk we created earlier in the lab. Click OK to save the changes and close
the window.
16. From the PVS console navigate to vDisk Pool. From this pane we can see the vDisk
and its specific information in each column. Information such as the Store, where it’s
saved, Connections, vDisk size, and the mode is displayed.

17. In order to make the vDisk shared, so that multiple nodes can boot from it in read-only
mode, we need to change the Mode from Private to Standard Image. To do this,
double-click the vDisk icon.

18. This will then open up the vDisk Properties window.

| 91 |
19. In the Access mode click the drop down menu and switch from private and choose
Standard Image (multi-device, read-only access)

20. Once we have switched from Private to Standard Image, the choices for where the
PVS target client will store its local temporary data, such as the page file, become
available under Cache type. Click the drop-down menu and choose the cache type
Cache on device hard drive. In the case of the CS100 we will leverage the iSSD of
each node to store temporary write-cache information, which is discarded after the
node reboots.

| 92 |
21. Verify that your Access Mode and Cache type look like the image below and click
OK.

22. Now that we have a master vDisk ready for multiple nodes to boot from, we
can make things easier by creating a template of this device in the Orbit collection
that we created earlier. Templates make assigning things like the vDisk and other
properties faster for future devices that are added. From Device Collections,
click on Orbit.

23. In the column on the right hand side right click on the node and choose Set Device
as Template.

| 93 |
24. A message window will pop up asking to set this device as template device for this
collection. Click Yes.

25. To confirm this device is now set as a template Right click on the Orbit collection
and choose Refresh.
Note: you can also use the F5 key to refresh the console as well.
26. The node in the column will now have a new icon that identifies it as a template.

Note: Earlier in the HP labs, a CSV file was created that contains information
such as the PVS farm name, site, chassis node MAC addresses, and device
collection. We will now use that CSV file for the next portion of this lab.
27. Expand Device Collections and highlight the Orbit collection. Right-click the
Orbit collection and a menu will appear. Navigate to Target Device > Import
Devices.
28. The import Target Device Window will appear. Click Next.

| 94 |
29. From the Import Target Device Wizard screen, click the browse button to browse to the
location of the CSV that you created in the previous labs.
Note: Since the CSV is located on the Student CM VM, we need to browse using
RDP-mapped drives to locate the CSV.

| 95 |
30. Click Next to continue.
31. Since we created a template in the previous steps we will apply that here so that all
our new nodes will inherit this new template. Select the Apply Template box and
click Next.

32. In the New Target Device Wizard screen you will see the output of what is in the CSV
file which is primarily information such as node MAC addresses, PVS Site name and
collection which match exactly what’s in the GUI of the PVS console.
33. Click Finish to import these into the Orbit Collection.

| 96 |
34. Once the Import Target Device Wizard has finished, a new column called Status will
appear, showing that the nodes are now imported. If a node has
already been added, as with our source node, the Status column will display Exists. Click
Done.

35. In the PVS console you can now see all the new nodes that are imported into this
collection. These nodes have a vDisk assigned and are ready to be used.

| 97 |
36. Once we have the devices created we need to create Active Directory accounts. To
do that, we need to select all the devices in the Orbit collection: using the
mouse, left-click the node at the top, then hold CTRL and SHIFT
on the keyboard and left-click the node at the bottom. With all the devices
highlighted, right-click in the blue (highlighted) area and a menu will appear.

37. Click on Create Machine Account.

38. In the Create Machine Account in Active Directory select the OU XenDesktop and
click Create Account.
39. Once all the accounts are created the status tab will say Success. Click Close

| 98 |
END OF EXERCISE 5

| 99 |
Exercise 6 - Overview
Hands-on Training Exercise 6
Overview
In this Exercise, students will have access to a Windows 7 x64 image that was created using
Provisioning Services. With a golden image already created, the final steps are to create a Machine
Catalog and Delivery Group using Citrix Studio and assign the desktops to users.

Objective
After completing this lab, you will be able to:
 Understand how Citrix Studio works
 Create and assign desktops to users
 Administer basic PVS changes

Prerequisites
To complete this lab, you need:
 Windows administrator credentials
 Access to Citrix Studio Console
 Access to CM

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 100 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

AD1             A VM that runs Windows Server 2012 Active Directory.
                NIC1=172.23.0.3, VLAN(x)

XenApp1         A VM that runs the Citrix XenApp delivery infrastructure for hosting applications.
                NIC1=172.23.0.5, VLAN(x)

XDC1            A VM that runs Windows Server 2012 and the Citrix Studio infrastructure.
                NIC1=172.23.0.6, VLAN(x)

WDS             A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services
                role, used to deliver a Windows client OS to bare-metal machines in the HP Moonshot platform.
                NIC1=172.23.0.7, VLAN(x)

PVS1            A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
                NIC1=172.23.0.8, NIC2=172.24.0.8, VLAN(x)

Student CM VM   A student VM used to launch the PuTTY SSH application and establish communication
                with the Moonshot Chassis Manager.
                NIC1=172.23.100.4, VLAN(x)

CM              The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides
                an HP iLO command-line console with access to the nodes and switching within the chassis.
                NIC1=172.22.0.9, VLAN(x)

| 102 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address            Password     Description
HP\Administrator     All Windows Servers   HPdemo=123   HP.Local Domain administrator
Administrator        172.23.0.9            password     HP Moonshot Chassis Management
Node\Administrator                         HP1nvent     Local administrator account for Windows 7

| 103 |
Exercise 6 — Creating Catalogs and
Groups in Citrix Studio
1. From your Student VM click on the Desktop Remote Manager icon. We will use this
application to provide a consolidated view of RDP connections.

Note: The Remote Desktop Manager tool opens all RDP connections in a
tabbed view. If you would like to use a full screen RDP session you can change
that for each machine. Right click on the machine in the list you would like to
change and choose properties. In the Connection Tab choose Display and
select External and click OK.

2. Expand the Remote Desktop Manager tree and navigate to the Nodes folder. From
there select the xdc1.hp.local and click the open session button. A RDP connection
should be made to XDC1.

| 104 |
3. From here navigate to the Windows 2012 Charms menu and click on the Start Menu
icon. If you're not familiar with getting to the Windows 2012 Start Menu, you can use
the Remote Desktop Manager Toolbar, which provides a convenient shortcut as opposed
to using your mouse to make the Charms menu appear. To do this, click the arrow at
the top of the Remote Desktop Manager Toolbar.

| 105 |
4. Once the toolbar is engaged, the RDP window will shrink and a toolbar will appear
on top. Click the Start Screen icon to access the Windows 2012 Start Menu.

5. From the Start menu, click on the Citrix Studio console.

6. The MMC console snap-in will then start to load.

| 106 |
Note: this lab uses an existing environment with some desktops and apps
already set up to mimic a preconfigured deployment.
Once the console opens, we can see a series of objects organized in a tree.
A few simple Catalogs and Delivery Groups have already been created for this lab.
Catalogs group machines by operating system type and by how they are provisioned,
for example virtual machines or physical machines streamed with PVS.
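If you prefer to inspect the same objects from PowerShell, the short sketch below lists the existing catalogs and delivery groups. It assumes the XenDesktop 7.x Broker PowerShell snap-in (Citrix.Broker.Admin.V2) is available on XDC1, where Studio runs.

# Load the XenDesktop 7.x Broker snap-in on the Delivery Controller (XDC1).
Add-PSSnapin Citrix.Broker.Admin.V2

# List the catalogs and delivery groups that already exist in this lab site.
Get-BrokerCatalog | Select-Object Name, AllocationType, ProvisioningType
Get-BrokerDesktopGroup | Select-Object Name, DeliveryType, Enabled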

7. From the Studio console we will create a new Catalog for the Moonshot nodes that
we imported into PVS. Click on Machine Catalogs on the left side of the tree, then
on the far right side click Create Machine Catalog.

8. The Create Machine Catalog Wizard will appear. Click Next to continue.

| 107 |
9. Select Windows Desktop OS and click Next.

10. In the Machine Management screen, select the desktop image technology, which in
our case is Provisioning Services (PVS). Click Next.

| 108 |
11. In the User Experience screen select I want users to connect to a random desktop
each time they log on. Click Next.

12. In the Device Collection window, enter the Provisioning Server address and click
Connect in order to browse the PVS farm and its device collections. Enter the IP
address 172.23.0.8 and click Connect.
Note: the PVS VM has two IP addresses, 172.24.0.8 and 172.23.0.8. Only
machines with routes to 172.24.0.x can talk to the PVS network. All nodes in
the chassis have two adapters that can talk to the PVS network. All other
infrastructure VMs, including xdc1.hp.local, only have access to the
172.23.0.x network.
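Before running the wizard you can optionally verify that XDC1 can actually reach the PVS SOAP service. The sketch below is a generic TCP check; the port number 54321 is an assumption based on the default PVS SOAP server port, so adjust it if your farm uses something else.

# Quick reachability check from XDC1 to the PVS server before using the wizard.
$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect('172.23.0.8', 54321)   # 54321 assumed to be the PVS SOAP port
    Write-Output 'PVS SOAP port is reachable from this machine.'
}
catch {
    Write-Output 'Cannot reach 172.23.0.8:54321 - check routes and firewall rules.'
}
finally {
    $client.Close()
}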

| 109 |
13. If communication is working, the Provisioning Services connection will enumerate
the farm as well as the collection named Orbit.

14. Click on the farm name Houston and select the collection named Orbit. Click Next to
continue.
15. The Summary screen provides a section to name the Catalog. Enter the name
Moonshot Desktops. You can also provide a description if needed. Click Finish.

| 110 |
16. Once the process finishes, we can see that the new Moonshot Desktops catalog is
created. The number of machines column reflects the number of machines that
Studio found in the Orbit collection.
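For reference, the wizard steps above have an approximate scripted equivalent in the Broker SDK. The sketch below is illustrative only: the parameter values mirror the wizard choices (Windows Desktop OS, PVS, random desktops), and the machine name shown is a hypothetical example of one PVS target device.

# Approximate scripted equivalent of steps 7-16, run on XDC1.
Add-PSSnapin Citrix.Broker.Admin.V2

$catalog = New-BrokerCatalog -Name 'Moonshot Desktops' `
    -AllocationType Random `
    -ProvisioningType PVS `
    -SessionSupport SingleSession `
    -PersistUserChanges Discard `
    -MachinesArePhysical $true `
    -PvsAddress '172.23.0.8' `
    -PvsDomain 'hp.local'

# Each PVS target device then becomes a broker machine in the catalog.
# 'HP\C5N1' is a hypothetical node account name; use your own node names.
New-BrokerMachine -MachineName 'HP\C5N1' -CatalogUid $catalog.Uid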

17. To match the catalog we now need to create a Delivery Group. To create a Delivery
Group, click on the Delivery Groups icon and then click Create Delivery Group.

18. The Create Delivery Group window will appear. Click Next.

19. Since a Machine Catalog with machines was already created, the Delivery Group
wizard can see those catalogs and the machines within them. At the bottom of the
Create Delivery Group wizard, change Choose number of machines to the full
amount shown in the Machines column and click Next.
Note: the correct amount should be 4, as there are 4 nodes per
cartridge.

| 111 |
20. XenDesktop 7 can deliver Desktops, hosted applications (XenApp), or both
Desktops and Applications. For the Delivery Type, select Desktops and click
Next.

21. In the Users window click Add Users to assign users to this Desktop Group.

| 112 |
22. For the user group, type Domain Users and click the Check Names button to
verify the group, then click OK.

23. The assigned users should now say Domain Users. Click Next.

24. In the StoreFront window select Automatically, using the StoreFront Servers
selected below and check the Receiver StoreFront site then click Next.

| 113 |
25. In the summary window enter a name for the Delivery Group. Delivery groups can
match the catalog names to make it easier to link them together. Click Finish

26. The Delivery group is created and will now appear in the Studio console under
Delivery Groups.
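As with the catalog, the Delivery Group can also be created from the Broker SDK. The sketch below mirrors the wizard choices (desktops only, Domain Users entitled); note that the Studio wizard additionally creates access policy rules (New-BrokerAccessPolicyRule), which are omitted here for brevity.

# Approximate scripted equivalent of steps 17-26, run on XDC1.
Add-PSSnapin Citrix.Broker.Admin.V2

$group = New-BrokerDesktopGroup -Name 'Moonshot Desktops' `
    -DesktopKind Shared `
    -DeliveryType DesktopsOnly

# Move the catalog machines into the group, then entitle Domain Users.
Get-BrokerMachine -CatalogName 'Moonshot Desktops' |
    Add-BrokerMachine -DesktopGroup $group

New-BrokerEntitlementPolicyRule -Name 'Moonshot Desktops' `
    -DesktopGroupUid $group.Uid `
    -IncludedUsers 'HP\Domain Users'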

| 114 |
27. The new group shows the number of machines added to the group, as well as those
that are not registered. Registration occurs when the machines are powered on: the
VDA installed in the master image communicates with the XenDesktop controller to
relay that it is available to accept connections.
Note: Since these are physical machines and not virtual machines, powering the
nodes on and off is a challenge. In a VDI world, when provisioning or power
management is performed, the commands are sent via PowerShell from Studio to the
management system for each hypervisor: for Hyper-V the commands go to SCVMM, for
XenServer to XenCenter, and for VMware to vCenter. Since there is no hypervisor
with HDI, there is no management layer to pass power-on commands to the nodes, so
power management can be controlled in one of two ways: using CM to control the
nodes, or from the Provisioning Services console. In this lab we will leverage
the Provisioning Services console; a scripted alternative is sketched below.
The latest CM firmware also enables Wake-on-LAN (WOL) so that we can
wake these physical nodes when they are shut down.
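If you prefer to script the boot instead of using the console, PVS ships an MCLI PowerShell snap-in that can send the same boot request from PVS1. The sketch below is an assumption-laden example: the snap-in name and parameter syntax follow the MCLI interface, and the site name may differ in your farm (the farm in this lab is named Houston).

# Scripted alternative to booting the Orbit collection from the PVS console.
# The MCLI snap-in ships with PVS but may need to be registered first.
Add-PSSnapin McliPSSnapIn

# Send a boot (WOL) request to every target device in the Orbit collection;
# adjust siteName to match your PVS site name.
Mcli-Run Boot -p siteName="Houston", collectionName="Orbit"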
28. Expand the Remote Desktop Manager tree and navigate to the Nodes folder. From
there, select the pvs1.hp.local Master Node entry and click the Open Session button.
An RDP connection should be made to PVS1.

| 115 |
29. From here, navigate to the Windows 2012 Charms menu and click on the Start Menu
icon. If you're not familiar with getting to the Windows 2012 Start Menu, you can use
the Remote Desktop Manager toolbar, which provides a convenient shortcut as opposed to
using your mouse to get the Charms menu to appear. This can be done by
clicking on the arrow at the top of the Remote Desktop Manager toolbar.

30. Once this toolbar is engaged, the RDP window will shrink and a toolbar will appear
on top. Click on the Start Screen icon to access the Windows 2012 Start Menu.

| 116 |
31. From the Start menu, click on the Provisioning Services console.

32. The console will then open in the Windows desktop shell. Expand the MMC
console window to full screen. Navigate to Device Collections > Orbit.

| 117 |
33. In order to power on all the nodes, select all the devices in the Orbit
collection: left-click the node at the top of the list, then hold CTRL and
SHIFT on the keyboard and left-click the node at the bottom. With all the
devices highlighted, right-click in the highlighted area and a menu will appear.
Choose Target Device > Boot.

34. In the Boot Devices window, click Boot Devices; the status of the devices will
then show success.

35. After a few seconds the WOL command is sent to each node. The power-on process
does not cause a boot storm: because the BIOS for each cartridge is shared across
all four nodes, the nodes power on in a staggered fashion. As each node powers on,
its status changes in the Orbit collection. We can see which nodes are connected
by right-clicking the Orbit collection and choosing Refresh.

| 118 |
36. The status of the nodes changes in the PVS console. Information such as the IP
address and the PVS server each node is connected to via the streaming service is
now displayed in the console. It will take a few minutes for all the nodes to show
as powered on in the PVS console. To check the status, simply refresh the Orbit
collection; the nodes will eventually all show a green circle with a check mark.

37. If the power status of the nodes doesn't display after a few minutes in the PVS
console, we can check the Chassis Manager console to confirm each node's status.
To do this, switch over to Remote Desktop Connection Manager and open the Student
CM VM RDP session. Enter the command show node power CXn1-4, where X is
the cartridge number that is assigned to you. Please refer to the chart below to
determine your cartridge number.

| 119 |
38. If the power status shows off, we need to ensure that WOL is enabled. To do this,
enter the command set node options wol enable CXn1-4, where X is the cartridge
number assigned to you in the chart above. This command enables Wake-on-LAN, which
wakes the bare-metal nodes with no additional software needed.

39. With WOL enabled, you can switch back to the PVS RDP session and repeat
steps 33-35 to power on the nodes again, or simply power on the nodes from
the CM console.
40. First, we should power down the nodes to ensure the WOL command is successful.
If the nodes are already on and we try to turn them on again, there is no message
that lets us know they are already on. From the CM console, enter the command set
node power off force CXn1-4.
41. From the CM console, enter the command set node power on CXn1-4, where X is
the cartridge number that is assigned to you in the table above.
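These CM commands can also be scripted from the Student CM VM rather than typed interactively in PuTTY. The sketch below assumes PuTTY's command-line client plink.exe is on the path and that the CM accepts non-interactive SSH commands; if it does not, run the same commands in an interactive PuTTY session as described above. C5 is a placeholder for your assigned cartridge number.

# CM address and credentials come from the lab credentials table.
& plink.exe -ssh -l Administrator -pw password 172.23.0.9 'set node options wol enable C5n1-4'
& plink.exe -ssh -l Administrator -pw password 172.23.0.9 'set node power off force C5n1-4'
Start-Sleep -Seconds 30    # give the nodes a moment to power down
& plink.exe -ssh -l Administrator -pw password 172.23.0.9 'show node power C5n1-4'
& plink.exe -ssh -l Administrator -pw password 172.23.0.9 'set node power on C5n1-4'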
42. At this point you may also return to the Chassis Manager VSP tab on the Student
CM VM to watch the nodes boot, or to check the IP address information of individual
nodes. To connect to a specific node, enter the command connect node vsp CXnY,
where X is the student (cartridge) number and Y is the node number. When the SAC
prompt appears, you may enter i and press Enter to see the IP address information.
Keep in mind that it may take a few moments for the nodes to receive DHCP addresses
on both interfaces.

| 120 |
43. Once all the nodes have been powered on we can check the Studio console for the
registration status with the XenDesktop controller. To do that switch back to the
xdc1.hp.local RDP session.

44. From the Studio UI, click on Delivery Groups. Click on the Moonshot Desktops
group and the console will display its details below. As each node's VDA service
starts, the nodes will begin to appear in a registered state, which enables them
to accept remote connections via XenDesktop HDX. The desktop group view also shows
information such as machines that are not registered, operating system version, etc.

45. It is important to let all nodes register with the XenDesktop controller so that
they are fully accessible for users to log in to. This process may take several
minutes, as each node uses DNS to locate the XenDesktop controller and register itself.

| 121 |
Other items that may contribute to slow registration are network time
synchronization, DNS records, and so on, so always keep those in mind.
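Rather than repeatedly refreshing the Studio console, registration can also be watched from PowerShell on XDC1. The sketch below assumes the Broker snap-in is loaded and that the Delivery Group was named to match the catalog.

# Watch registration state of the Moonshot nodes from XDC1.
Add-PSSnapin Citrix.Broker.Admin.V2

Get-BrokerMachine -DesktopGroupName 'Moonshot Desktops' |
    Select-Object MachineName, RegistrationState, IPAddress, OSType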

END OF EXERCISE 6

| 122 |
Exercise 7 - Overview
Hands-on Training Exercise 7
Overview
In this Exercise, students will have access to a Windows 7 x64 image that is provisioned with
Citrix Provisioning Services, published via Citrix Studio, and now running on the HP Moonshot
CS-100 platform without the use of a hypervisor. Students will be able to log in using domain
administrator credentials and access the published desktops and XenApp hosted apps via
StoreFront.

Objectives
After completing this lab, you will be able to:
 Use Citrix StoreFront for self-service of apps and desktops
 Understand how to login to XenDesktop sessions
 Understand how to leverage XenApp hosted applications

Prerequisites
To complete this lab, you need:
 Windows user credentials
 Access to StoreFront

Lab Environment Details


The Visio diagram below displays the HP hardware infrastructure as well as the virtual machines
used to host this lab.

| 123 |
The Student lab virtual machines are accessed remotely using Microsoft Remote Desktop Client
from the HP Thin Client. The Citrix Receiver running on your HP Thin Client is used to access the
physical desktops created during this lab.

Lab Guide Conventions

A VM that runs the Windows Active Directory 2012.


AD1
NIC1=172.23.0.3
VLAN(X)

| 124 |
A VM that runs the Citrix XenApp delivery infrastructure for hosting
applications.
XenApp1
NIC1=172.23.0.5
VLAN(x)

This VM runs Windows Server 2012 and the Citrix Studio infrastructure.
XDC1
NIC1=172.23.0.6
VLAN(x)

A VM that runs Windows Server 2012 and the Microsoft Windows Deployment Services infrastructure role to deliver the Windows client OS to bare-metal machines in the HP Moonshot platform.
WDS
NIC1=172.23.0.7
VLAN(x)

A VM that runs Windows Server 2012 and the Citrix Provisioning Services infrastructure.
PVS1
NIC1=172.23.0.8
NIC2=172.24.0.8
VLAN(x)

A Student VM that is used for launching the PuTTY SSH application to establish communication with the Moonshot Chassis Manager.
Student CM VM
NIC1=172.23.100.4
VLAN(x)

The HP Chassis Manager interface IP, accessed from the Student VM via PuTTY. CM provides an HP iLO command-line console that gives access to nodes and switching within the chassis.
CM
NIC1=172.22.0.9
VLAN(x)

| 125 |
Required Lab Credentials
The credentials required to connect to the environment and complete the lab exercises.

Username             IP Address             Password     Description
HP\Administrator     All Windows Servers    HPdemo=123   HP.Local domain administrator
Administrator        172.23.0.9             password     HP Moonshot Chassis Management
Node\Administrator   -                      HP1nvent     Local administrator account for Windows 7

| 126 |
Exercise 7 — Testing a Moonshot
XenDesktop
1. From your HP Thin Client open Internet Explorer and type in the
URL http://xdc1.hp.local/Citrix/StoreWeb

2. Enter the credentials HP\Administrator with a password of HPdemo=123 and
click Log On.
Note: Once the user logs into StoreFront, it will display two desktops. The first is
the Moonshot Desktops and the other is the pre-existing Windows 7 VDI
desktop. In this lab the Windows 7 VDI desktop is in maintenance mode and is
not accessible.
3. Click on the new Moonshot Desktops icon to start the connection to the desktop.

| 127 |
4. Citrix Receiver will start to launch. If you don’t see the Receiver application pop up,
look in the taskbar on your HP Thin Client as the Receiver program may be
minimized.

| 128 |
5. At this point you have successfully connected to a Moonshot XenDesktop session.
From here you can view items like the HDX toolbar, which exposes advanced features
such as Flash redirection, window sizing, file access, etc. To do that, click on the
HDX toolbar at the top of the window and then choose Preferences.

| 129 |
6. In the Preferences section you can see the various settings that are either available
for a user to change or set by the administrator as a policy that can't be changed.
Click OK once you are finished.

| 130 |
Note: In this master image we can launch hosted XenApp applications which are
ready to use. To use those apps inside the XenDesktop Moonshot session we need to
use Citrix Receiver again to request applications.
7. From your XenDesktop session open the Citrix Receiver from the taskbar.

8. Citrix Receiver will pop up asking for a Store site. Type in the following
URL: http://xdc1.hp.local/Citrix/Store
Note: since we are not using SSL certificates, Receiver displays a warning at the bottom.

9. Citrix Receiver will then start and show a connecting state.

| 131 |
10. Citrix Receiver will start. Enter the credentials HP\Administrator with a
password of HPdemo=123, select the remember password option, then click Log On.

Note: Upon the first user logon, StoreFront will appear empty. The Citrix FlexCast
model allows Receiver to leverage StoreFront to deliver both applications and
desktops to the same user, and the user's selections are retained for later sessions.

| 132 |
11. On the apps screen we can see that no apps have been added for this user yet. Click
the + icon to display the list of applications that can be added. Navigate to All
Applications and click Word 2013.

12. Word 2013 is now added to the list of apps the user sees. From here, wherever the
user goes and whatever device he or she logs in from, these apps will follow the
next time he or she logs into StoreFront.
Note: the application icons may not appear right away in this lab, but they will
show up.

| 133 |
13. Click on the Word icon to launch Word 2013 as a hosted XenApp session.

Note: You may be prompted to log in again if you did not select the save password
option in the previous step, so enter the same credentials as before.

14. The XenApp Word 2013 session will start.

Note: Receiver will start Word 2013 as a hosted application. A message may
pop up looking like the following. This security message allows for the

| 134 |
XenDesktop virtual hard drive to be used by the XenApp session as a mapped
drive in the hosted session. Click Permit Use and select the don’t ask me
again for this site box at the bottom. From here the Word 2013 application will
start.

Note: In this lab we are using Office 2013 volume license media, which
may pop up messages about not being activated since this lab has no KMS
connectivity. You can safely ignore that message and click Close to start using
Word.

END OF EXERCISE 7
| 135 |
Please complete this survey

We value your feedback! Please take a moment to let us know about your training
experience by completing the brief Learning Lab Survey

Revision   Change Description   Updated By                       Date

1.0        Original version     Tony Sanchez and Dennis Arnold   May 2014

About Citrix
Citrix (NASDAQ:CTXS) is a cloud company that enables mobile workstyles—empowering people to
work and collaborate from anywhere, securely accessing apps and data on any of the latest
devices, as easily as they would in their own office. Citrix solutions help IT and service providers
build clouds, leveraging virtualization and networking technologies to deliver high-performance,
elastic and cost-effective cloud services. With market-leading cloud solutions for mobility, desktop
virtualization, networking, cloud platforms, collaboration and data sharing, Citrix helps organizations
of all sizes achieve the speed and agility necessary to succeed in a mobile and dynamic world.
Citrix products are in use at more than 330,000 organizations and by over 100 million users
globally. Annual revenue in 2012 was $2.59 billion. Learn more at www.citrix.com.

| 136 |
