Professional Documents
Culture Documents
CONTOSO
Information in this document, including URL and other Internet Web site references, is subject to
change without notice.
Without limiting the rights under copyright, no part of this document may be reproduced, stored in or
introduced into a retrieval system, or transmitted in any form or by any means (electronic,
mechanical, photocopying, recording or otherwise), or for any purpose, without the express written
permission of ACME.
Document history
Revisions
Version Date Author(s)/Editor(s) Notes
1.0 30/10/2015 Joeri Kumbruck Initial version
Reviews
Version Date Reviewer(s) Notes
1.1 04/11/2015 Joeri Kumbruck Changed info regarding RAM memory configuration of Citrix Xenapp servers, after feedback from CONTOSO ICT team members.
Related documents
Document | Version | Date | Description
CONTOSO - new Citrix Xenapp Infrastructure detailed overview - v1.0.xlsx | 1.0 | 18/06/2015 | Excel sheet with an overview of infrastructure items such as VMs, service accounts and databases.
CONTOSO - Printer_inventory_list - v1.0.xlsx | 1.0 | 18/06/2015 | Sheet with all printer queues and printer drivers that need to be provided on the new Xenapp environment.
CONTOSO - Application_Inventory_list - v1.0.xlsx | 1.0 | 18/06/2015 | Sheet with an overview of all the applications that will be available in the new XenApp environment, distinguishing between applications packaged by ACME and applications packaged by the customer.
CONTOSO Netscaler-Implementation-Document - v1.0.docx | 1.0 | 01/09/2015 | Implementation document describing the technical setup of the Netscaler VPX appliances.
CONTOSO - Design Document Citrix XenApp Upgrade - v1.1.docx | 1.1 | 07/09/2015 | Design document used to properly build up the Citrix Xenapp environment.
Document history
Revisions
Reviews
Related documents
1 Introduction
1.1 Scope
2 Architectural Overview
2.1 Design Structure
2.2 Infrastructure server Virtual Machines Overview
2.2.1 New Virtual machines
2.2.2 Existing Virtual machines
2.3 XenApp Virtual Machines Overview
3 User Layer
3.1.1 Layer Overview
3.1.2 Key Design Decisions
3.1.3 Design
4 Access Layer
4.1.1 Layer Overview
4.1.2 Key Design Decisions
4.1.3 Design
5 Resource Layer
5.1 Layer Overview
5.2 Key Design Decisions
5.3 Design
5.3.1 XenApp version
5.3.2 Flexcast Delivery model
5.3.3 XenApp Image Design
5.3.4 XenApp Sizing
5.3.5 User Profile Management
5.3.6 User Home folder
5.3.7 Folder Redirection
5.3.8 Citrix Policies
5.3.9 User Environment Configuration
5.3.10 Server Deployment
5.3.11 Server Configuration
5.3.12 Printing Environment
5.3.13 Applications
6 Control Layer
6.1 Layer Overview
6.2 Citrix XenApp Controllers
6.2.1 Overview
1 Introduction
1.1 Scope
This document contains up-to-date information about the newly implemented Citrix XenApp environment at CONTOSO.
This document is based on the design document and includes all changes that were made during the actual implementation of the Citrix Xenapp environment. In other words, this document describes exactly what the Citrix Xenapp environment looks like on the date that operational management of the new Citrix Xenapp environment was transferred to CONTOSO (Friday 06/11, after the knowledge transfer session).
2 Architectural Overview
2.1 Design Structure
Designing a desktop virtualization solution is simply a matter of following a proven process
and aligning technical decisions with organizational and user requirements. Without the
standardized and proven process, architects tend to randomly jump from topic to topic,
which leads to confusion and mistakes. The difficulty with creating an application delivery
solution is in trying to focus on everything at once, which leads to an overwhelming amount
of data points to design around. However, when focusing on the common use cases, which typically account for the largest percentage of users, many of the decisions simply follow best practices based on years of real-world implementations. Once a foundation is created, the complex use cases can be integrated as needed.
At a high-level, the solution is based on a unified and standardized 5-layer model.
1. User Layer – Defines the unique user groups, endpoints and locations.
2. Access Layer – Defines how a user group gains access to their resources. Focuses on secure access policies and desktop/application stores.
3. Resource Layer – Defines the applications and data provided to each user group.
4. Control Layer – Defines the underlying infrastructure required to support the users accessing their resources.
5. Hardware Layer – Defines the physical implementation of the overall solution.
The following diagram shows an overview of how all these layers interact with each other:
3 User Layer
3.1.1 Layer Overview
The user layer represents the end-users. This layer describes the different groups of users
and their requirements. Users are often grouped based on their network connectivity to the
data center, endpoint devices, data storage needs and any special requirements or concerns
(e.g. security, performance, mobility, personalization).
3.1.3 Design
3.1.3.1 User Groups
From a high-level perspective, one user group can be defined: Zenito employees.
3.1.3.1.1 Locations
This user group can access the environment from internal as well as external locations.
The following picture shows a high-level overview of the different locations that will connect
to the new Citrix Xenapp environment:
The environment will be prepared to enable access for the above-mentioned client devices.
MAC DEVICES AS WELL AS MOBILE DEVICES (TABLETS AND SMARTPHONES) WILL BE ABLE TO CONNECT
TO THE NEW XENAPP ENVIRONMENT BUT THEY WILL NOT BE OFFICIALLY SUPPORTED BY THE ICT
CONTOSO TEAM.
Thin clients are small, lightweight devices that contain only robust components; at CONTOSO they are used solely to access a virtual desktop on the Citrix Xenapp platform. The thin clients are:
Windows 7 embedded Igel thin clients
Linux Igel thin clients
Citrix receiver version 4.3.0 (14.3.0) is deployed and installed on all company-owned fat clients.
Users will connect to 1 URL, https://access.CONTOSO.be, no matter if they are working from
an internal or external location.
Only Receiver for web will be used, thus end users will always connect to the Citrix access
webportal with their preferred web browser.
All Igel thin clients will be installed with the latest publicly available Firmware version:
Old Linux thin clients: Igel LX firmware version 4.14.100
New Linux thin clients: Igel LX firmware version 5.07.100
W7 thin clients: Igel W7 firmware version 3.10.120
Igel thin clients will be configured with an Igel configuration profile that contains at least the following configuration:
Screen resolution config (1280 x 1024 dual screen)
Keyboard layout (Dutch-Belgium)
Autolaunch of IE web browser to URL
https://storefront.CONTOSO.be/Citrix/Internalweb for Windows embedded Igel thin
clients
Appliance mode for Igel Linux thin clients. Appliance mode has been configured to
connect to Citrix storefront site https://storefront.CONTOSO.be/Citrix/Internalweb
Local printer configuration (HP4350) for certain Igel linux thin clients
Note: Mac fat clients are not officially supported within the CONTOSO environment; services will be delivered on a best-effort basis.
4 Access Layer
4.1.1 Layer Overview
The Access Layer is responsible for making the resources available to end users.
Users utilize the access layer to connect to their resources. Based on the location, connectivity and security requirements, the access layer defines how users should authenticate and connect to their virtual desktops or applications.
External Access
Decision | Value | Justification
External Access | Netscaler VPX 200 | Netscaler Gateway is the preferred method to enable secure remote access to the Citrix Xenapp environment.
Netscaler Version | 11.0 build 62.10 | Latest stable version available.
Netscaler edition | Enterprise edition | Customer choice. Enterprise edition gives the ability to enable the AAA feature, which is very useful in combination with reverse proxy functionality.
Netscaler network topology | Two-arm topology | Recommended topology by Citrix and ACME.
Number of Netscaler appliances | 2 (vnscon-1212 and vnscon-1213) | Redundancy on component level, as this is a crucial component.
Number of Storefront servers | 2 (vsfcon-1200 and vsfcon-1201) | Redundancy on component level, as this is a crucial component.
Store Name | DEFAULT_STORE | Same DEFAULT STORE can be used.
Storefront Receiver for Web URLs | https://storefront.CONTOSO.be/citrix/EXTERNALWEB | A dedicated Receiver for Web site will be used for external access.
Delivery Controllers | vddcon-1202 (https, 443); vddcon-1203 (https, 443) | 2 delivery controllers will be used for redundancy purposes; https will be used as this is the Citrix recommendation.
Remote Access | No VPN tunnel; 1 Netscaler Gateway appliance; STAs: https://vddcon-1202.CONTOSO.be and https://vddcon-1203.CONTOSO.be | A VPN tunnel is not required for ICA proxy functionality. At least 1 Netscaler Gateway appliance needs to be added if remote access is required; 2 STAs need to be provided for redundancy.
Authentication | Pass-through from Netscaler Gateway | This authentication method is required when working with
4.1.3 Design
4.1.3.1 Access Strategy
Desktops and Applications published by Citrix XenDesktop and XenApp need to be accessed
by users. Users can access the published resources using different methods.
Citrix StoreFront
Citrix StoreFront is the successor of Citrix Web Interface and has the same goal: to provide users with access to their applications. Citrix Storefront uses subscriptions: users can add their personal favorite applications (if configured) and are always presented with their personal list of applications on each client device running Citrix Receiver for Storefront.
Server | IP address | vCPU | RAM | Disk
Vsfcon-1200 | 192.168.1.200 | 1 | 4 GB | 60 GB
Vsfcon-1201 | 192.168.1.201 | 1 | 4 GB | 60 GB
4.1.3.2.3 Authentication
Authentication is performed on the StoreFront servers. Users will access the StoreFront web site by logging on with their user name and password.
Receiver auto detect and deployment will be disabled on this “Receiver for Web” website as
Citrix receiver software will be deployed and managed in a controlled way on corporate
owned fat clients and thin clients that connect from internal locations.
4.1.3.3.3 Authentication
For external users, authentication is performed on the Netscaler Gateway appliance. The NS
Gateway appliance validates user credentials using an LDAP connection to Active Directory.
To avoid unencrypted LDAP traffic from the DMZ into the trusted LAN, the LDAP traffic will
be encrypted using SSL to the domain controllers over port 636 (LDAPS). The Netscaler will
load balance all Secure-LDAP traffic between the domain controllers 192.168.1.1 and
192.168.1.3 to provide high availability and failover of the LDAP traffic. Only after successful
Receiver auto detect and deployment will be enabled on this “Receiver for Web” website to
make sure that correct Citrix receiver software will be deployed and installed on non-
corporate client devices that are connecting from remote locations.
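The high-availability idea behind the LDAPS setup described above (two domain controllers behind the Netscaler on port 636) can be sketched as follows. This is an illustrative model, not the Netscaler's actual algorithm; the reachability check is injectable so the logic can be demonstrated offline.

```python
# Sketch: pick the first reachable domain controller for LDAPS (port 636).
# DC addresses come from the text above; the real load balancing is done
# by the Netscaler appliance itself.

DOMAIN_CONTROLLERS = ["192.168.1.1", "192.168.1.3"]
LDAPS_PORT = 636

def pick_domain_controller(dcs, is_reachable):
    """Return the first DC that answers on the LDAPS port, or None."""
    for dc in dcs:
        if is_reachable(dc, LDAPS_PORT):
            return dc
    return None

# Example with a fake checker that pretends the first DC is down:
fake_check = lambda host, port: host != "192.168.1.1"
print(pick_domain_controller(DOMAIN_CONTROLLERS, fake_check))  # 192.168.1.3
```

In production a checker would open a TLS connection to port 636; here a lambda stands in so the failover behaviour is visible.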
4.1.3.3.5.1 Authorization
The Netscaler Gateway first performs authentication (via secure LDAP) and afterwards authorization. Authorization determines which resources a successfully logged-on user has access to. In this setup, authorization is used to define an AD group that controls the users who are allowed to access the Netscaler Gateway.
The NS Gateway is configured with a default authorization action of DENY.
An AAA group is added with the name SGU_XA_CAG-USERS, and this group has an authorization policy bound that ALLOWs authorization.
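The default-deny model described above can be sketched in a few lines; the group name SGU_XA_CAG-USERS is taken from the text, everything else is illustrative:

```python
# Sketch of the Netscaler Gateway authorization model: the default
# action is DENY, and membership of the AAA group SGU_XA_CAG-USERS
# flips the decision to ALLOW.

ALLOWED_GROUPS = {"SGU_XA_CAG-USERS"}

def is_authorized(user_groups):
    """Return True only if the user is in an explicitly allowed group."""
    # Default authorization action: DENY (no match -> False).
    return any(g in ALLOWED_GROUPS for g in user_groups)

print(is_authorized(["Domain Users", "SGU_XA_CAG-USERS"]))  # True
print(is_authorized(["Domain Users"]))                      # False
```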
5 Resource Layer
5.1 Layer Overview
The resource layer of a solution focuses on personalization, applications, and image design.
The resource layer is where users will interact with desktops and applications and is most
visible to the end users. The user requirements obtained during the Assess phase and
refined during the user layer design phase are used as the basis for the desktop design
recommendations.
SGL_GPO_CONTOSO_XA_<dtap>_USER_CITRIX_POLICIES
SGL_GPO_CONTOSO_XA_<dtap>_USER_DUTCH_UI
SGL_GPO_CONTOSO_XA_<dtap>_USER_FRENCH_UI
SGL_GPO_CONTOSO_<dtap>_XP_USER_VISUALLY_IMPAIRED_USERS_CONFIG
VBS logon/logoff scripts: Each DTAP environment will have its own user logon and logoff scripts. Logon and logoff scripts contain user environment configuration actions that need to be executed during log on or log off (mainly general user environment config and application config actions):
Logon (admins): \\CONTOSO\netlogon\Citrix\<DTAPenv>\adminlogon.vbs
Logon (users): \\CONTOSO\netlogon\Citrix\<DTAPenv>\userlogon.vbs
Logoff (admins): \\CONTOSO\netlogon\Citrix\<DTAPenv>\adminlogoff.vbs
Logoff (users): \\CONTOSO\netlogon\Citrix\<DTAPenv>\userlogoff.vbs
Start Menu shortcuts: Shortcuts of locally installed apps as well as App-V apps will be managed by the ACME Taskflow framework, the easiest, most flexible and consistent way to present application shortcuts within the desktop. AD security groups will be used to make sure that end users only see the required app shortcuts. End users are not allowed to add shortcuts to the Start menu.
Desktop shortcuts: Shortcuts of locally installed apps as well as App-V apps will be managed by the ACME Taskflow framework, the easiest and most flexible way to present application shortcuts within the desktop. AD security groups will be used to make sure that end users only see the required app shortcuts.
5.3 Design
5.3.1 XenApp version
For the new XenApp environment the latest XenApp 7.6 version platinum edition will be
used.
The XenApp components will be updated with the latest available public hotfixes.
5.3.2 Flexcast Delivery model
Within Flexcast, several delivery technologies exist and can be combined. The following delivery technologies are available. Each delivery technology has its own benefits if used for the correct groups of users.
[Figure: FlexCast delivery models and their characteristics: Hosted Shared (Terminal Services; best TCO), Hosted VDI (personal desktop OS instance; VMs or blades), Streamed VHD (shared; boots from network), Local VM (runs locally; secure; online or offline), On-demand apps (app management; mobile; synchronisation).]
The current XenApp environment is based on the Hosted Shared Desktop model and this is still the best Flexcast model for this environment, with the lowest TCO.
The user groups that have been defined can be mapped to the following Flexcast models:
Citrix Provisioning Services will be used to provision the new XenApp servers. Using PVS will
also guarantee all XenApp servers are exactly the same. The write cache will be located in
RAM with overflow to disk. This ensures optimal performance of the XenApp servers even
though they are streamed from the same base vDisk. The read actions are cached in RAM
by the XenApp server and the PVS server. The write actions are cached in RAM by the PVS
Cache To Ram mechanism on each XenApp server.
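The "cache in RAM with overflow to disk" behaviour described above can be modelled in a few lines. This is a simplified illustration of the concept, not PVS internals; the 8 GB RAM cache size comes from the sizing table later in this document, and the write sizes are invented for the example.

```python
# Simplified model of the PVS write cache: writes land in RAM until the
# configured cache size is full, after which they spill over to disk.
# Sizes are in MB; 8 GB matches the per-server PVS RAM cache in this design.

def distribute_writes(write_sizes_mb, ram_cache_mb=8 * 1024):
    ram_used = disk_used = 0
    for size in write_sizes_mb:
        free_ram = ram_cache_mb - ram_used
        to_ram = min(size, free_ram)   # fill RAM first
        ram_used += to_ram
        disk_used += size - to_ram     # the remainder overflows to disk
    return ram_used, disk_used

# 10 GB of writes against an 8 GB RAM cache -> 2 GB overflows to disk
print(distribute_writes([4096, 4096, 2048]))  # (8192, 2048)
```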
5.3.3.4 vDisks
A separate vDisk will be used for each silo.
Additionally a separate vDisk will be used for the different DTAP environments of the
CONTOSO silo. Separate vDisks for each environment ensure changes can be tested
completely independent from the production vDisk.
ACCEPTANCE XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_Vx.x
TEST XA_W2012R2_DESKTOPS_APPS_TEST_Vx.x
DEVELOPMENT XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_Vx.x
Each Xenapp server is sized for a disk capacity of around 60 GB. To include a safety margin
the vDisks will be created with a size of 80GB. The vDisks used will be Dynamic vDisks where
the size on disk is the same as the actual contents of the vDisk. The vDisk files will not be
pre-allocated.
The following diagram will show the distribution of the XenApp Servers across the different
physical hosts:
The production XenApp servers are located on 2 physical hosts. Each physical host contains
7 virtual production XenApp servers. Each virtual XenApp server has the following
specifications:
Specification | Virtual XenApp server
# vCPU | 4
RAM | 25 GB (17 GB RAM + 8 GB PVS RAM Cache)
Local Disk | 10 GB
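A quick capacity check against the figures above (2 physical hosts, 7 virtual production XenApp servers per host, 25 GB RAM and 4 vCPUs each) confirms the per-host resource footprint:

```python
# Capacity arithmetic for the production sizing described above.
# All figures come from the specification table and surrounding text.

HOSTS = 2
SERVERS_PER_HOST = 7
RAM_PER_SERVER_GB = 17 + 8   # 17 GB RAM + 8 GB PVS RAM cache = 25 GB

ram_per_host = SERVERS_PER_HOST * RAM_PER_SERVER_GB
total_servers = HOSTS * SERVERS_PER_HOST

print(ram_per_host)    # 175 GB of RAM needed per host for the XenApp VMs
print(total_servers)   # 14 production XenApp servers in total
```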
The DEVELOPMENT Xenapp server will be located on Xenserver host SXSCON-1019. The DEVELOPMENT Xenapp server will share system resources with other types of VMs on this Xenserver host.
The TEST Xenapp server will be located on Xenserver host SXSCON-1021. The TEST Xenapp server will share system resources with other types of VMs on this Xenserver host.
The ACCEPTANCE Xenapp server will be located on Xenserver host SXSCON-1022. The ACCEPTANCE Xenapp server will share system resources with other types of VMs on this Xenserver host.
redirection or more advanced 3rd party profile solutions such as AppSense Environment
Manager.
For the new XenApp environment at CONTOSO, a hybrid profile solution based on Citrix User
Profile Management is chosen as the user profile solution.
A template profile will be used to supply a properly configured user profile to end users who
connect to the Xenapp environment for the first time. The template profile will be located
on the central file server:
\\Vfscon-1208\userconfig$\template
The Citrix profiles will be stored on the new Windows server 2012 R2 File server:
\\vfscon-1208\USERCONFIG$\UPM\%USERNAME%\
(-> e:\USERCONFIG\UPM\%USERNAME% on File server itself)
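The profile store path above contains the Windows %USERNAME% variable. As a minimal sketch of how such a template expands per user (Windows does the real expansion; the user name "jdoe" is an invented example):

```python
# Expand the Windows-style %USERNAME% placeholder in the UPM store path.
# The UNC path is taken from the text above; "jdoe" is illustrative.

UPM_TEMPLATE = r"\\vfscon-1208\USERCONFIG$\UPM\%USERNAME%"

def expand(template, username):
    """Plain string substitution of the %USERNAME% variable."""
    return template.replace("%USERNAME%", username)

print(expand(UPM_TEMPLATE, "jdoe"))
# \\vfscon-1208\USERCONFIG$\UPM\jdoe
```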
Test environment
SGL_GPO_CONTOSO_XA_XT_COMPUTER_CITRIX_POLICIES
Acceptance environment
SGL_GPO_CONTOSO_XA_XA_COMPUTER_CITRIX_POLICIES
Production environment
SGL_GPO_CONTOSO_XA_XP_COMPUTER_CITRIX_POLICIES
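The three computer-level Citrix policy GPO names above follow one naming pattern per DTAP environment. A small sketch generating them from the pattern (environment codes XT/XA/XP are taken from the list above):

```python
# Generate the per-environment Citrix policy GPO names listed above
# from a single naming pattern.

ENVIRONMENTS = {"XT": "Test", "XA": "Acceptance", "XP": "Production"}

def gpo_name(env_code):
    return f"SGL_GPO_CONTOSO_XA_{env_code}_COMPUTER_CITRIX_POLICIES"

for code, label in ENVIRONMENTS.items():
    print(f"{label}: {gpo_name(code)}")
```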
The documents folder will be redirected to the existing location because it contains
documents. This is to ensure all documents are accessed from one place when users are still
testing the new XenApp environment. The documents folder will be assigned the H: drive
letter and named “Personal data”. Not changing the My Documents redirection means there
is no impact for users accessing their documents when they access both the current XenApp
environment and the new XenApp environment during a period of user testing. If required,
the documents shares can be moved to a new file server after the new XenApp
implementation.
The Pictures, Music, Videos and Downloads folders will also be redirected to the user home drive, as is already the case within the current Citrix Xenapp environment.
Folders that contain only hyperlinks, such as the Favorites and Links folders, are redirected to a new location on the new file server. A one-time import of the existing Favorites into the new location will be done when the user first logs on to the new environment.
The Appdata (roaming) folder will be redirected to the new file server.
The desktop folder will be redirected to the new file server and it is only allowed to save
shortcuts on the desktop. This will be forced by using the file screening feature on the new
File server.
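The "shortcuts only" rule enforced by file screening can be sketched as an extension allowlist check. The real enforcement is done by the file screening feature on the file server; the exact extension list is an assumption for illustration.

```python
import os

# Sketch of the file-screening rule: only shortcut files are allowed in
# the redirected Desktop folder. The allowed extension set is assumed.

ALLOWED_DESKTOP_EXTENSIONS = {".lnk", ".url"}

def desktop_file_allowed(filename):
    """Return True if the file is a shortcut and may live on the Desktop."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_DESKTOP_EXTENSIONS

print(desktop_file_allowed("Word 2010.lnk"))  # True
print(desktop_file_allowed("report.docx"))    # False
```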
The Start menu folder will still be located within the user profile and thus will not roam.
IMPORTANT!: THE USERS WILL RECEIVE A DESKTOP WITH ONLY THEIR EXISTING DESKTOP SHORTCUTS
(THUS E.G. NO OFFICE 2010 OR PDF DOCS WILL BE AVAILABLE ON THE USER’S DESKTOP) WHEN THEY
LOG ON TO THE NEW XENAPP ENVIRONMENT FOR THE FIRST TIME (DESKTOP SHORTCUTS WILL BE
MIGRATED FROM THE OLD CITRIX XENAPP ENVIRONMENT TO THE NEW CITRIX XENAPP ENVIRONMENT).
IT MUST BE COMMUNICATED TO END USERS THAT ONLY SHORTCUTS ARE ALLOWED ON THE DESKTOP.
The following Citrix policies will be created (similar to the current Citrix Policies):
Unfiltered Computer Configuration policy: this CTX policy contains baseline computer
Citrix policy settings that will be applied to all Citrix Xenapp servers.
Unfiltered User Configuration Policy: this CTX policy contains baseline user Citrix policy
settings that will be applied to all users.
OPTICAL_REMOVABLE_CLIENT_DRIVE_MAPPING_ALLOWED (filtered on AD security
group): this CTX policy will allow client CD-Rom and USB storage drive mappings for
users that are member of AD security group
“SGU_XA_CTX_OPTICAL_REMOVABLE_CLIENT_DRIVE_MAPPING_ALLOWED”.
ALL_CLIENT_DRIVE_MAPPING_ALLOWED (filtered on AD security group): this CTX
policy will allow all client drive mappings for users that are member of AD security group
“SGU_XA_CTX_ALL_CLIENT_DRIVE_MAPPING_ALLOWED”.
DEFAULT_CLIENT_PRINTER_MAPPING_ALLOWED (NOT DEFAULT) (filtered on AD
security group): this CTX policy will map the default client printer but will not make it
the default printer within the Citrix session for users that are member of AD security
group “SGU_XA_CTX_DEFAULT_CLIENT_PRINTER_MAPPING_ALLOWED (NOT
DEFAULT)”.
DEFAULT_CLIENT_PRINTER_MAPPING_ALLOWED (DEFAULT) filtered on AD security
group): this CTX policy will map the default client printer and will make it the default
printer within the Citrix session for users that are member of AD security group
“SGU_XA_CTX_DEFAULT_CLIENT_PRINTER_MAPPING_ALLOWED (DEFAULT)”.
DEFAULT_CLIENT_PRINTER_MAPPING_ALLOWED_VIA_NS (DEFAULT) (filtered on Client name): this CTX policy will map the default client printer and will make it the default printer within the Citrix session for users connecting with client name “E_*”. Users connecting from external locations (via Citrix Netscaler Gateway) will always get a client name matching E_*. This means that users who are connecting from external locations will always have their default client printer within their Citrix session.
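The client-name filter above ("E_*") is an ordinary shell-style wildcard. A sketch of that filter decision, using an invented client name for illustration:

```python
import fnmatch

# Sketch of the Citrix policy filter on client name: the policy applies
# only to sessions whose client name matches the "E_*" pattern that the
# Netscaler Gateway assigns to external connections.

def policy_applies(client_name, pattern="E_*"):
    return fnmatch.fnmatch(client_name, pattern)

print(policy_applies("E_LAPTOP01"))  # True  -> external, map default printer
print(policy_applies("WSCON-0042"))  # False -> internal client name
```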
Group policy loopback processing will be enabled in order to assign GPOs with user configuration to the Citrix Xenapp AD computer objects. Group policy loopback processing mode is set to “replace”; this ensures that GPOs applied on user object level are ignored for the Citrix Xenapp environment.
GPO inheritance is also blocked on the Citrix root OU; this prevents GPOs applied at a higher OU level from also applying to the Citrix environment.
In order to automate the Windows Server 2012 R2 build, the following tools will be used:
Clean VM installed by using Windows server 2012 R2 ISO: manual deployment of the
base operating system + operating system security patches / hotfixes.
Taskflow: Deployment of additional software, such as XenApp components, physically
installed applications …
5.3.11.3 DTAP
DTAP stands for development, testing, acceptance and production.
Development environment
A development environment is used to make modifications and adjustments. This includes development regarding installations and configurations. It is essentially a playground where developers and system engineers can test and modify as much as they like. Therefore this environment needs to be completely separated from the production environment, to make sure the production environment cannot be influenced by modifications made to the development environment.
The development environment doesn’t need to be exactly the same as the production environment; it is only used for development regarding installation and configuration of the OS, Citrix and applications.
Many customers choose to use the test environment when they want to make modifications and adjustments. Although it is ideal to have a separation between a development environment (a real sandbox) and a test environment (more controlled, used for technical validation of changes), the required system resources may not be available for 2 separate environments, and it is also 1 extra environment to manage and maintain. Therefore, many customers prefer to keep it simple and have 1 test environment instead of an additional development environment.
Test environment
A test environment is used to test the modifications and adjustments which were made in
the development environment. The test environment also needs to be separated completely
from the production environment to make sure the production environment cannot be
influenced by modifications made to the test environment. The test environment needs to be integrated in a separate test infrastructure (test Active Directory infra, test SQL infra, test file and print services, …). The Citrix Xenapp infrastructure will be shared with the development environment if a development environment is also present.
Acceptance environment
An acceptance environment is used to accept the implemented solution that has been
introduced in the test environment. An acceptance environment needs to be integrated into the production environment infrastructure, but the Citrix Xenapp infrastructure can be separated from, or the same as, the production Citrix Xenapp environment, depending on whether the customer decides to also have a pre-production environment.
If the customer decides to separate the acceptance Citrix Xenapp environment, then the acceptance environment normally does not serve end users during day-to-day operations, but only admits end users when functional validation of a change is required.
If the customer decides to integrate the acceptance Citrix Xenapp environment within the
production Citrix Xenapp environment, then the acceptance Citrix Xenapp environment will
actually operate as a pre-production Citrix Xenapp environment (see next paragraph for
more details regarding pre-production Citrix Xenapp environment).
An acceptance environment will be used by system engineers and key end users. The role of key end users is crucial here, as they know the applications best and can give valuable feedback regarding the installation and configuration of the applications. They will indicate whether or not applications are ready for production. Functional validation and optionally performance validation will be done during this stage.
Production environment
A change can be implemented within the production environment when proper technical,
functional and performance validation has been done by ICT system engineers and key end
users.
The production environment is used by all end users. Changes that need to be implemented in production first need to go through all phases mentioned previously, to make sure that these changes will not have a negative impact on the production environment.
Optionally, the production Citrix Xenapp environment can also contain a pre-production
Citrix Xenapp environment. A change that has been validated in acceptance phase will first
be implemented on the pre-production environment where also production end users are
connected to. Actually, there is no real difference between a pre-production Citrix Xenapp server and a production Citrix Xenapp server; the only difference is the way a change is implemented: first it is installed on the pre-production Citrix Xenapp servers, and if this works fine the change is then implemented on the other production Citrix Xenapp servers.
However, it is very important to keep the 4 environments in sync as much as possible, and at least have the TEST – ACCEPTANCE – PRODUCTION environments fully in sync. This enables proper change & release management of the Citrix Xenapp servers (and their apps).
5.3.11.4.1 MS Applocker
End users can only launch executables from allowed locations. This security measure will be
configured by using Applocker that can be configured via AD GPO:
SGL_GPO_CONTOSO_XA_<dtapenv>_COMPUTER_APPLOCKER
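The AppLocker idea described above is a location allowlist for executables. The real rules live in the GPO named above; the example paths below are assumptions for illustration, not the actual CONTOSO rule set.

```python
# Sketch of the AppLocker concept: executables may only start from an
# allowlist of locations. The paths here are assumed examples; the real
# rules are defined in the AppLocker GPO.

ALLOWED_LOCATIONS = [r"C:\Program Files", r"C:\Windows"]

def launch_allowed(exe_path):
    """Return True if the executable lives under an allowed location."""
    path = exe_path.lower()
    return any(path.startswith(loc.lower()) for loc in ALLOWED_LOCATIONS)

print(launch_allowed(r"C:\Program Files\App\app.exe"))   # True
print(launch_allowed(r"C:\Users\jdoe\Downloads\x.exe"))  # False
```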
Print queue: PRN-CON-Secure_PCL / Security group: SGL_PS_VPSCON-1009_PRN-CON-Secure_PCL
Print queue: PRN-ANT-1-1_PCL / Security group: SGL_PS_VPSCON-1009_PRN-ANT-1-1_PCL
Print queue: PRN-CIN-1-1_PCL / Security group: SGL_PS_VPSCON-1009_PRN-CIN-1-1_PCL
Print queue: PRN-GEN-1-1_PCL / Security group: SGL_PS_VPSCON-1009_PRN-GEN-1-1_PCL
Print queue: PRN-LIB-1-1_PCL / Security group: SGL_PS_VPSCON-1009_PRN-LIB-1-1_PCL
Print queue: PRN-LOU-1-1_PCL / Security group: SGL_PS_VPSCON-1009_PRN-LOU-1-1_PCL
5.3.12.2.2 End users connecting from external locations (via Netscaler Gateway)
The auto-creation of client printers is managed using Citrix policies.
Mapping of the default client printer is enabled for all users who are connecting via the
Netscaler Gateway, thus from an external location. The default client printer will also be the
default printer within the Citrix session.
The Citrix universal print driver will be used when the manufacturer driver is not available on
the Citrix Xenapp servers for auto-created client printers.
5.3.13 Applications
5.3.13.1 Application List
The following table lists the applications that will be installed on the new Citrix XenApp
environment. The table also lists whether the application will be delivered as a locally
installed application in the XenApp image or as an App-V application.
In general, ACME recommends installing applications locally on each Citrix XenApp server
where possible. Locally installed applications still perform better, and when an enterprise
application deployment solution is used, maintenance of applications (installation,
uninstallation and upgrades) remains easy. We recommend using application virtualization
when applications conflict, when multiple versions of the same application are required, or
when an application is updated frequently.
In general, ACME recommends using MS App-V when application virtualization is used solely
within a Citrix XenApp server environment, as MS App-V licenses are included with RDS
CALs.
For CONTOSO, ACME recommends installing as many applications as possible locally in the
vDisk. These applications will be installed during the initial deployment of a new XenApp
server (creating a new vDisk) or during the regular vDisk update schedule. The applications
will be installed in a controlled way by using ACME Taskflow.
ACME advises using MS App-V application virtualization only when there are application
conflicts or when multiple versions of the same application must be installed side-by-side
on the same Citrix XenApp server (e.g. MS Office 2010 and MS Office 2013).
When working with Citrix Provisioning Services, Citrix servers are streamed on the fly using
the same operating system image as a common base. Any changes to the disk are
intercepted by the PVS software and redirected to the write cache. When a Citrix XenApp
server is rebooted, these changes are purged and the server will boot from the same original
image.
When using App-V applications in combination with non-persistent images, it is important to
consider how App-V applications are deployed to these images. By nature, App-V
applications are also streamed to the local machine, by default ending up in the write cache.
This is not desirable, as the write cache can quickly fill up with virtual application data.
A number of options exist to provision App-V applications to XenApp images:
- Pre-publish and pre-cache App-V applications: applications are already streamed
and cached inside the base vDisk. Any server streamed from this vDisk will by
default contain all the applications, preventing them from being streamed again.
This method requires the vDisk to be updated regularly to cache any new or
updated App-V applications.
- Use App-V Shared Content Store mode: the App-V client will not stream package
contents to the local machine but instead accesses them directly from the App-V
streaming servers. This can have a slight impact on application performance, but it
can also be combined with pre-caching. This way, most applications can be
pre-cached in the image while new and updated applications can still be streamed
without ending up in the write cache.
In the new XenApp environment, the App-V client will be configured with Shared Content
Store enabled.
In order to speed up the displaying of shortcuts in the user’s start menu, the App-V client’s
publishing data will be saved within the user’s Citrix User Profile Management profile.
In the new XenApp environment, the primary goal is application performance. When
applications are installed locally inside the XenApp image, they are immediately available to
the user and performance is guaranteed. For this reason, applications will be installed
locally in the image. App-V will be used in the following circumstances:
- Applications that conflict with another locally installed application
- Multiple versions of an application that are required to run side-by-side
6 Control Layer
6.1 Layer Overview
The control layer includes all infrastructure related components supporting the overall
solution. This includes the Citrix controllers, image management through MCS or PVS, and
the creation and publication of hosted resources. Specific Control Layer components and
design decisions are based on the completed design of the above layers (user, access and
desktop).
6.2.3 VM Specifications
6.2.4 Configuration
6.2.4.1 XenApp Sites
A XenApp or XenDesktop Site is the boundary of a XenDesktop environment. A XenApp site
consists of one or more delivery controllers, and the site configuration is stored in an SQL
database.
Additional sites are usually created when two geographically spread datacenters are used,
keeping all site traffic, configuration and database local to each datacenter.
For the new XenApp environment a single XenApp site, CONTOSO_XA_76, will be used, as all
components will be located within the same datacenter.
VM-hosted apps (applications from desktop operating systems) and hosted physical
desktops. Only one user at a time can connect to each of these desktops.
Remote PC Access — User devices that are included on a whitelist, enabling users to access
resources on their office PCs remotely, from any device running Citrix Receiver. Remote PC
Access enables you to manage access to office PCs through your XenDesktop deployment.
XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT (Type: Desktops and Applications)
XA_W2012R2_DESKTOPS_APPS_TEST (Type: Desktops and Applications)
XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE (Type: Desktops and Applications)
XA_W2012R2_DESKTOPS_APPS_PRODUCTION (Type: Desktops and Applications)
6.2.4.4 Database
XenApp delivery controllers require SQL databases to store various configuration and logging
data:
Site Configuration: stores the XenApp configuration such as the delivery groups, machine
catalogs, …
Monitoring: stores user session data and metrics, such as logon speed, used by the Director
console and reporting.
Configuration Logging: stores all administrative changes to the environment for auditing
purposes.
By default, the Configuration Logging and Monitoring databases (the secondary databases)
are located on the same server as the Site Configuration database. Initially, all three
databases have the same name. Citrix recommends that you change the location of the
secondary databases after you create a Site. You can host the Configuration Logging and
Monitoring databases on the same server or on different servers. The backup strategy for
each database may differ.
We will rely on hypervisor high availability to make sure the SQL databases are always
online. However, network or other issues can still prevent a proper database connection. To
prevent an outage, Connection Leasing can be enabled on the delivery controllers. When
Connection Leasing is enabled, each delivery controller caches user connections to recently
used applications and desktops during normal operations. If the database becomes
unavailable, the controllers replay these cached operations when a user connects to a
recently used published application or desktop.
When there is a problem with the Site Configuration database and Connection Leasing is
active, users will still be able to connect to their resources, but the administrative consoles
will not be available. Some other features, such as workspace control, will also be
unavailable. More information can be found at:
http://support.citrix.com/proddocs/topic/xenapp-xendesktop-76/xad-connection-
leasing.html
A Citrix policy setting will be used to control the list of delivery controllers as it is a robust
solution and all configuration will be in a central place.
Also the StoreFront store will be configured to connect to the XML brokers on both
controllers.
The XenApp Site database will be hosted on a standalone MS SQL server, with the underlying
hypervisor HA as the HA solution. Citrix XenApp Connection Leasing will also be used to
guarantee continuity of the Citrix XenApp 7.6 environment when the XenApp SQL databases
are offline for a certain amount of time. For more info, see
https://www.citrix.com/blogs/2014/11/11/xendesktop-7-6-connection-leasing-design-
considerations/
Load balancing via Citrix Netscaler will be used to make the Citrix Director console highly
available.
General Configuration
Provisioning Services Version: 7.6 (latest PVS version available)
License Server: vlscon-1209 (dedicated Citrix license server)
License Type: XenApp Platinum (XenApp Platinum licenses entitle the use of PVS for the
deployment of XenApp servers)
Database server: vdbcon-1082.CONTOSO.be (customer SQL server)
Database name: CX_PVS_SITE_DB (customer choice)
PVS farm name: CONTOSO_PVS_76 (name according to customer naming conventions)
PVS farm administrator group: SGU_XA_GEN-PVS-ADMINISTRATORS (one AD security group
is sufficient for this environment)
PVS site name: Brussels (only 1 site at the Brussels DC)
Database Redundancy: the MS SQL server that hosts the PVS database will run on XenServer
with HA enabled; PVS database offline support will be enabled. (Offline support is disabled
by default; when enabled, it allows PVS within the farm to use a snapshot of the database in
the event that the connection to the database has been lost. High availability will in the first
instance be configured using this offline support.)
PVS Farm and Site layout
Number of PVS farms: 1 farm will be used (1 farm simplifies the setup and will contain the
TST, ACC and PRD vDisks)
Number of PVS sites: 1 site will be used: CONTOSO (all PVS servers and XenApp servers will
be located within the same datacenter; no need to define multiple sites)
_PRODUCTION_Vx.x
PVS Stores: 1 PVS store (name: STORE, location: E:\PVS_STORE) will be defined on each PVS
server and will contain all PVS vDisks (simplest config; no need to have more PVS stores)
vDisk Replication: script / robocopy (a robocopy-based script will be used as this is the
easiest, simplest and best way to replicate PVS vDisks between multiple PVS servers)
vDisk Write Cache Type: device RAM with overflow to disk (best performance)
vDisk Write Cache Size: 8 GB RAM (more than sufficient for this environment)
Target Device configuration
Target Device persistent disk: 10 GB (each XenApp server will have a local disk of 10 GB for
storing persistent data and as RAM cache overflow)
Target Device boot: network boot with DHCP options 66 and 67 (simple and robust, without
additional network config)
Streaming Network: production LAN (a single NIC is sufficient and reduces additional
complexity)
TFTP: the PVS servers will also run the TFTP service and deliver the boot files (this option
provides a TFTP infrastructure for all requesting targets and provides redundancy; TFTP
traffic will reside in the same VLAN as DHCP traffic)
DHCP: each PVS server will be configured as TFTP server and DHCP server; a DHCP
reservation will be created for each XenApp server; DHCP failover will be configured for
DHCP HA (simplest and easiest configuration; all the servers running DHCP will be
configured with options 066 and 067 to deliver the boot files to requesting targets)
PXE services: no (boot information is provided using DHCP options 066 and 067, so PXE
services are not needed on the PVS servers)
6.3.3 VM Specifications
6.3.4 Configuration
6.3.4.1 General Configuration
The latest Citrix Provisioning Services version available is 7.6.
Citrix PVS needs to be licensed by connecting to a Citrix license server. The PVS servers will
be configured to connect to the license server used by XenApp: vlscon-1209
The Provisioning Services database, CX_PVS_SITE_DB, will be hosted on the standalone MS
SQL 2014 server, vdbcon-1082. In case of database unavailability, the PVS servers will have
offline support enabled allowing the servers to continue to work with a local snapshot of the
database until connectivity is restored.
Sites are used within farms to group Provisioning Servers located in a specific region or
datacenter. A site contains device collections and one or more stores where vDisks are
located. Sites can be used to make sure a collection of devices uses a specific list of PVS
servers. High Availability is only available within a PVS site, not cross-site.
Each site should contain at least one PVS server; however, a single PVS server provides no
high availability for the streamed devices.
The new CONTOSO XenApp environment will use a single farm containing a single site. This
site contains 2 PVS servers: vpscon-1204 and vpscon-1205.
The two PVS servers are highly available, allowing XenApp servers to fail over to the other
PVS server in case of a PVS server failure.
The vDisk store on each PVS server will be a locally attached disk. The vDisks will be
replicated between the PVS servers by using a robocopy script.
Each PVS server will run as a virtual machine on the shared XenServer environment within
the same datacenter. Each PVS server will run on a separate XenServer host.
In the PVS site a Device Collection will be created for each vDisk. This results in the following
device collections:
- XA_W2012R2_DESKTOPS_APPS_MASTER: used for the MASTER Citrix Xenapp
server
- XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT: used for the DEVELOPMENT
Citrix Xenapp server
- XA_W2012R2_DESKTOPS_APPS_TEST: used for the TEST Citrix Xenapp server
- XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE: used for the ACCEPTANCE Citrix
Xenapp server
- XA_W2012R2_DESKTOPS_APPS_PRODUCTION: used for the PRODUCTION Citrix
Xenapp server
It has been configured that only members of the AD security group "SGU_XA_GEN-PVS-
ADMINISTRATORS" are administrators within the PVS console and are able to change the
PVS configuration.
additional translation of write actions across the network, this method does not give
the best performance.
- Write Cache in client RAM: writes are redirected to a portion of the RAM on the
device. This is very fast but consumes more RAM. If the assigned RAM cache is full,
the device becomes unstable.
- Write Cache in client RAM with overflow to disk: this is the best of both worlds,
where writes are first redirected to fast RAM. If the assigned RAM buffer is full,
writes are redirected to a locally attached hard disk.
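As a rough illustration of the "RAM with overflow to disk" behaviour, the small helper below splits a given amount of write-cache data between the RAM buffer and the overflow disk (our own sketch, not part of the design; the 8 GB default matches the buffer size chosen later in this document):

```python
def write_cache_split(total_writes_gb: float, ram_buffer_gb: float = 8.0):
    """Split write-cache data between RAM and the overflow disk.

    Writes land in the RAM buffer first; only the excess beyond the
    buffer spills over to the locally attached disk.
    """
    in_ram = min(total_writes_gb, ram_buffer_gb)
    overflow = max(0.0, total_writes_gb - ram_buffer_gb)
    return in_ram, overflow

# With the 8 GB buffer, 11 GB of write-cache data keeps 8 GB in RAM
# and spills 3 GB to the local disk.
```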
PVS vDisks can be created with a static or dynamic size. As the vDisks are VHD files, they can
be compared to virtual machine disks that can be pre-allocated or set to grow dynamically.
The performance impact of using dynamic disks is negligible. The available free space in the
PVS stores must be carefully monitored to prevent the stores from running out of free space.
To provide high availability, each PVS server in a PVS site must have access to the same PVS
stores. If a store is based on local storage, the path to the store must be identical on all PVS
servers.
For the new Provisioning Services environment, a PVS store will be used on each PVS server's
local storage. Each PVS VM will have an additional disk assigned for storing the vDisks.
The vDisk stores will be replicated by using a robocopy-based script scheduled for daily
syncing at 23h00.
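In production this replication is a scheduled robocopy script on the Windows PVS servers; as a rough, cross-platform sketch of the same "copy only missing or newer files" behaviour (paths and file names below are hypothetical):

```python
import shutil
from pathlib import Path

def mirror_vdisk_store(src: Path, dst: Path) -> list[str]:
    """Copy new or updated vDisk files (.vhd/.pvp) from src to dst.

    Mimics 'robocopy <src> <dst> *.vhd *.pvp /XO': only files that are
    missing on the destination, or older there than on the source,
    are copied. Returns the names of the copied files.
    """
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for pattern in ("*.vhd", "*.pvp"):
        for f in sorted(src.glob(pattern)):
            target = dst / f.name
            if not target.exists() or target.stat().st_mtime < f.stat().st_mtime:
                shutil.copy2(f, target)  # copy2 preserves timestamps
                copied.append(f.name)
    return copied
```

On a second run against an unchanged store, nothing is copied, which is what makes a daily 23h00 schedule cheap when no vDisk versions were added.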
Estimated vDisk store usage on a PVS server:
Based on the above estimations, it is recommended that the vDisk store on each PVS server
be at least 400 GB in size. 500 GB is foreseen to provide some margin.
We then need to assign this new vDisk version to the PVS target devices within each PVS
device collection.
We can then test the proper working of the new PVS vDisks within the TEST and
ACCEPTANCE environments. When all works fine within the TEST and ACCEPTANCE
environments, we can create a copy of the new PVS vDisk for the PRODUCTION
environment:
The copy of the new PVS vDisk will be renamed to
XA_W2012R2_DESKTOPS_APPS_PRODUCTION_Vx_x, and the class and type of this PVS vDisk
are changed to PRODUCTION.
We can then run "check for updates…" on each PVS server. At the next reboot of the
PRODUCTION Citrix XenApp server, the new PVS PRODUCTION vDisk will be assigned to it.
If any issues arise with the new vDisk, we can always perform a rollback to the previous
vDisk version.
By using Citrix Provisioning Services, we can perform upgrades and rollbacks in a very easy
and fast way. The upgrade and rollback process is performed by simply assigning and
unassigning vDisks to target devices (XenApp servers).
Each vDisk will have a vDisk version number. This number contains three parts:
- Major version number
- Minor version number
- Build number
Every vDisk modification should be reflected in the version number. After a major
modification (e.g. new XenApp version), the major version number should be increased by
one. After a minor modification (e.g. new application version), the minor version number
should be increased by one. When correcting a previous modification, only the build number
should be increased.
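The bump rules above can be sketched as a small helper (our own illustration; the vDisk names in this document embed only major and minor, e.g. V1_2, so the three-part dotted form and the resetting of lower components on a higher bump are assumptions):

```python
def bump_version(version: str, change: str) -> str:
    """Bump a 'major.minor.build' vDisk version string.

    Rules from the text above:
      - 'major' change (e.g. new XenApp version)      -> major + 1
      - 'minor' change (e.g. new application version) -> minor + 1
      - 'build' (correcting a previous modification)  -> build + 1
    Lower components reset to 0 on a higher bump (an assumption; the
    document does not state this explicitly).
    """
    major, minor, build = (int(p) for p in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "build":
        return f"{major}.{minor}.{build + 1}"
    raise ValueError(f"unknown change type: {change!r}")
```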
Every vDisk also has a Type and Class tag. Both should contain the same value. The Class tag
can also be set on target devices, and is used to attach a class of vDisks to a certain target
device. The class and type values that should be used are:
- DEVELOPMENT: for PVS vDisks that need to be assigned to DEVELOPMENT XenApp
servers
- TEST: for PVS vDisks that need to be assigned to TEST XenApp servers
- ACCEPTANCE: for PVS vDisks that need to be assigned to ACCEPTANCE XenApp servers
- PRODUCTION: for PVS vDisks that need to be assigned to PRODUCTION XenApp servers
When you have added a new PVS vDisk version, you can execute "check for updates…" on
each PVS server. This will assign the new PVS vDisk version to the corresponding PVS target
devices and will result in the Citrix XenApp servers booting from the new PVS vDisk version
at the next reboot/maintenance window.
An operational procedure is available that explains the PVS vDisk update process in more
detail; summarized, it contains the following steps:
1. Take a copy of the DEVELOPMENT vDisk files (pvp and vhd file) and increase the
version number. E.g. take a copy of DEVELOPMENT vDisk
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_1" and call this copy
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_2".
2. Import the PVS vDisk into the PVS environment via the PVS console.
3. Change the vDisk mode of vDisk
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_2" to private mode.
4. Shut down virtual machine VXDCON-1219 and assign DEVELOPMENT vDisk
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_2" to target device VXDCON-
1219 within the PVS console.
5. Start virtual machine VXDCON-1219 via the XenCenter console.
6. Introduce the necessary changes on server VXDCON-1219 (all changes are applied
via ACME TaskFlow by performing a maintenance reboot) and validate the changes.
7. Shut down virtual machine VXDCON-1219.
8. Change the vDisk mode of vDisk
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_2" to standard mode, set the
cache type to "Cache in device RAM with overflow on hard disk" and set the
maximum RAM size to 8192 MB.
9. Take a copy of DEVELOPMENT vDisk
"XA_W2012R2_DESKTOPS_APPS_DEVELOPMENT_V1_2" files (pvp and vhd file) and
rename the copy to "XA_W2012R2_DESKTOPS_APPS_TEST_V1_2".
10. Import the PVS vDisk into the PVS environment via the PVS console.
11. Change the class and type of vDisk
"XA_W2012R2_DESKTOPS_APPS_TEST_V1_2" from "DEVELOPMENT" to "TEST".
12. Shut down virtual machine VXTCON-1220 and assign TEST vDisk
"XA_W2012R2_DESKTOPS_APPS_TEST_V1_2" to target device VXTCON-1220 within the
PVS console.
13. Start virtual machine VXTCON-1220 via the XenCenter console and validate the
changes again. If all works well, continue to the next step.
14. Take a copy of TEST vDisk "XA_W2012R2_DESKTOPS_APPS_TEST_V1_2" files
(pvp and vhd file) and rename the copy to
"XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_V1_2".
15. Import the PVS vDisk into the PVS environment via the PVS console.
16. Change the class and type of vDisk
"XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_V1_2" from "TEST" to "ACCEPTANCE".
17. Shut down virtual machine VXACON-1221 and assign ACCEPTANCE vDisk
"XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_V1_2" to target device VXACON-1221
within the PVS console.
18. Start virtual machine VXACON-1221 via the XenCenter console and validate the
changes again. If all works well, continue to the next step.
19. Take a copy of ACCEPTANCE vDisk
"XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_V1_2" files (pvp and vhd file) and
rename the copy to "XA_W2012R2_DESKTOPS_APPS_PRODUCTION_V1_2".
20. Import the PVS vDisk into the PVS environment via the PVS console.
21. Change the class and type of vDisk
"XA_W2012R2_DESKTOPS_APPS_PRODUCTION_V1_2" from "ACCEPTANCE" to
"PRODUCTION".
22. Perform "Check for automatic Updates…" via the PVS console on PVS server
VPVCON-1204.
23. The new PVS vDisk "XA_W2012R2_DESKTOPS_APPS_PRODUCTION_V1_2" will
be automatically assigned at the next reboot of the PRODUCTION Citrix XenApp server.
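The copy-and-rename pattern that repeats in the steps above (DEVELOPMENT to TEST to ACCEPTANCE to PRODUCTION) can be sketched as follows. This is a simplified illustration of the file-level part only; the import, the class/type change and the target-device assignment still happen in the PVS console:

```python
import shutil
from pathlib import Path

# DTAP promotion order used by the vDisk naming in this environment.
ENV_ORDER = ["DEVELOPMENT", "TEST", "ACCEPTANCE", "PRODUCTION"]

def promote_vdisk(store: Path, name: str) -> str:
    """Copy a vDisk's .pvp/.vhd pair under the next environment's name.

    'name' is the file stem, e.g. XA_W2012R2_DESKTOPS_APPS_TEST_V1_2,
    which is copied to XA_W2012R2_DESKTOPS_APPS_ACCEPTANCE_V1_2.
    """
    for current, nxt in zip(ENV_ORDER, ENV_ORDER[1:]):
        if f"_{current}_" in name:
            new_name = name.replace(f"_{current}_", f"_{nxt}_")
            break
    else:
        raise ValueError(f"{name} has no promotable environment tag")
    for ext in (".pvp", ".vhd"):
        shutil.copy2(store / f"{name}{ext}", store / f"{new_name}{ext}")
    return new_name
```

A PRODUCTION vDisk raises an error, since there is no further stage to promote to.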
means that, by default, each XenApp VM would boot up with the same identifier as was in
use by the master VM, which might cause problems. It is sometimes required to perform
sysprep-like actions before sealing each vDisk to make sure each XenApp VM is unique.
For CONTOSO only the event logs and print spooler will be located on the persistent disk.
Each Xenapp server will have its own dedicated persistent disk with size of 10 GB.
Antivirus products installed on the PVS-streamed XenApp servers require specific exclusions
to be added to the antivirus engine configuration. This is to ensure optimal performance and
stability. For more information, refer to CTX124185 – Provisioning Servers Antivirus Best
Practices.
Management Servers
SQL server: vdbcon-1082 (customer choice)
SQL instance: default (customer choice)
SQL database: CX_TST_APP_V_MGT_DB (customer choice)
Administrators AD group: SGU_XA_GEN-APPV-ADMINISTRATORS (based on the existing
naming convention)
Management server port: 81 (preferred port for App-V management)
Management server high availability: Hypervisor HA
6.4.3 VM Specifications
Name: vavcon-1206; vCPU: 1; RAM: 4 GB
IP addresses: LAN 192.168.1.206, iSCSI 192.168.100.206
Disks: C:\ 60 GB (virtual disk), E:\ 100 GB (iSCSI disk)
6.4.4 Configuration
6.4.4.1 App-V Version
The latest available version of MS App-V is 5.0 SP3 Hotfix 2.
High availability
Only one App-V server will be deployed. This is a single point of failure, but we rely on the
underlying XenServer HA functionality, and the App-V environment will host only a very
small subset of applications, so the impact is limited when this service is temporarily
unavailable. When 99,99% availability is required for this App-V component, a second
App-V server can easily be deployed with network load balancing via the Citrix NetScaler.
The publishing server will be configured to stream packages using the SMB protocol from
the App-V content file share that will be created on the App-V server.
The following command line can be used for installing the App-V RDS client on the XenApp
servers:
Appv_client_setup.exe
  /CEIPOPTIN=0 (disable Customer Experience Improvement Program)
  /MUOPTIN=0 (disable Windows Update opt-in)
  /SHAREDCONTENTSTOREMODE=1 (enable Shared Content Store mode)
  /ENABLEPACKAGESCRIPTS=1 (enable scripts in App-V packages)
  /NORESTART
  /q
  /ACCEPTEULA
Shared Content Store mode will be enabled on the App-V client. This means that packages
are not necessarily cached locally on each XenApp server. Packages that are not in the local
cache will be read directly from the Publishing servers.
The App-V client's publishing server settings will be configured using an ADMX template with
GPO SGL_GPO_CONTOSO_XA_<dtap env>_COMPUTER_DEFAULT_SERVER_CONFIG.
Security groups OU: CONTOSO.be/_CONTOSO/Groups/Security/Resources
Service accounts OU: CONTOSO.be/_CONTOSO/Users/System
AD Group Policies
Naming convention: SGL_GPO_CONTOSO_XA_<server scope>-<gpo settings scope>-
<description> (customer naming convention)
AD Security Groups
Naming convention (customer naming convention):
- Application groups: SGU_XA_APP-<Application name>
- Desktop groups: SGU_XA_DES-<Desktop name>
- AD GPO groups: SGL_GPO_CONTOSO_XA_<Citrix component>_<description>
- Printer groups: SGL_PS_<print server name>_<print queue name>
- Language groups: SGU_XA_LAN-<scope>
- General groups: SGU_XA_GEN-<scope>
OU stem
AD Computer objects
Naming convention: V<server type>CON-<ip range segment><last 3 digits of server ip
address> (customer naming convention)
Computer objects OU: CONTOSO.be/_CONTOSO/Systems/XA Environment (customer
standards)
6.5.3 Configuration
6.5.3.1 AD Sites and Services
The current AD Sites and Services design is already configured according to best practices,
with a separate AD site for each datacenter based on the IP subnets of each datacenter.
<server scope>=
XM=Master Xenapp server
XT=Test Xenapp server
XA=Acceptance Xenapp server
XP= Production Xenapp server
DD= Delivery Controller server
SF= Storefront server
PV= Provisioning Services server
LS= Licensing server
FS= File server
AV= App-V management/publishing server
SQ= App-V sequencer
<gpo settings scope> =
USER= GPO contains only user configuration settings
COMPUTER= GPO contains only computer configuration settings
<description> = description of content of GPO
Multiple AD security groups will be used by various components in the new XenApp
environment. AD security groups will be used to grant users access to resources such as
applications.
An AD global security group will contain all user objects and/or other group objects that
have access to the resource and will also be used to grant access to the required resources.
This is the customer's choice.
For the new Citrix XenApp environment, nine different security group types will be used:
APPLICATION GROUPS: security groups that are related to application access on the
new Citrix Xenapp environment.
DELIVERY GROUPS: security groups that are related to the Desktop Delivery Groups of
the new Citrix Xenapp environment.
DESKTOP GROUPS: security groups that are related to published desktops of the new
Citrix Xenapp environment.
AD GROUP POLICIES GROUPS: security groups that will be assigned to its corresponding
AD GPO
CITRIX POLICIES GROUPS: security groups that are related to certain Citrix Policy
Objects.
PRINTER GROUPS: security groups that are related to network print queues that need to
be mapped within the Citrix Xenapp environment.
LOCAL ADMINISTRATOR GROUPS: security groups to give local administrator
permissions on each server.
LANGUAGE GROUPS: security groups to be able to assign the correct (Dutch or French)
language interface to end users.
GENERAL GROUPS: security groups that are related to certain Citrix-related components
of the new Citrix Xenapp environment.
SGU_XA_APP-<Application name>
Where:
SGU= Security Group Universal
XA= Xenapp environment
APP = Application
<Application name> = corresponding application name for the security group
E.g. The security group that contains all user objects that will have access to the Office 2010
applications will get the following naming: SGU_XA_APP-OFFICE2010
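The naming convention can be sketched as a small helper (our own illustration; the exact normalization rule, upper-casing and stripping non-alphanumerics, is an assumption inferred from the OFFICE2010 example):

```python
import re

def app_group_name(application: str) -> str:
    """Build an application security group name following the
    SGU_XA_APP-<Application name> convention, e.g.
    'Office 2010' -> 'SGU_XA_APP-OFFICE2010'."""
    # Upper-case the name and drop anything that is not A-Z or 0-9.
    return "SGU_XA_APP-" + re.sub(r"[^A-Z0-9]", "", application.upper())
```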
The following naming convention will be applied for Desktop Delivery Group groups:
Where:
The following naming convention will be applied for published desktops groups:
SGU_XA_DES-<Desktop name>
Where:
SGU= Security Group Universal
XA= Xenapp environment
DES = Desktop
<Desktop name> = corresponding Desktop name for the security group
SGU_XA_LAN-<language>
Where:
SGU= Security Group Universal
XA= XenApp environment
LAN = Language
<language> = corresponding language
Users not belonging to one of the above groups will have English as language UI.
SGL_GPO_CONTOSO_XA_<Citrix component>_<description>
Where:
SGL= Security Group Local
GPO=Group Policy Object
CONTOSO= CONTOSO
XA= Xenapp environment
<Citrix Component>=
ALL= All Citrix-related servers
XM=Master Xenapp server
XT=Test Xenapp server
XA=Acceptance Xenapp server
XP= Production Xenapp server
DD= Delivery Controller server
SF= Storefront server
PV= Provisioning Services server
LS= Licensing server
FS= File server
AV= App-V management/publishing server
SQ= App-V sequencer
<Description> = corresponding description of AD GPO
E.g. The security group that contains all AD objects that will have GPO
“SGL_GPO_CONTOSO_XA_XT_USER_DEFAULT_USERS_CONFIG” applied will be named:
SGL_GPO_CONTOSO_XA_XT_USER_DEFAULT_USERS_CONFIG
The following naming convention will be applied for Citrix policy groups:
SGU_XA_CTX_<Citrix Policy name>
Where:
SGU= Security Group Universal
XA= Xenapp environment
CTX = Citrix Policy
<Citrix Policy name> = corresponding Citrix policy name
E.g. The security group that contains all AD objects that will have Citrix Policy
“CLIENT_DRIVE_MAPPING_ALLOWED” applied will be named:
SGU_XA_CTX_CLIENT_DRIVE_MAPPING_ALLOWED
The following naming convention will be applied for printer groups:
SGL_PS_<print server name>_<print queue name>
Where:
SGL= Security Group Local
PS=PrintServer
<print server name> = print server name
<print queue name> = corresponding print queue name for the security group
E.g. The security group that contains all user objects that will have print queue "PRN-CON-
Secure_PCL" from print server VPSCON-1009 mapped within the new Citrix XenApp
environment will get the following naming: SGL_PS_VPSCON-1009_PRN-CON-Secure_PCL
SGL_SYS_<server name>_LocalAdministrators
Where:
SGL = Security Group Local
SYS = System
<server name> = server name
LocalAdministrators = will be added to local administrator group of the server
E.g. The security group that contains all user objects that will have admin privileges on the
Citrix file servers will get the following naming: SGL_SYS_vfscon-1208_LocalAdministrators
E.g. The security group that contains all user objects that will have admin privileges on the
production Citrix Xenapp servers will get the following naming: SGL_SYS_vxpcon-1221-
1230_LocalAdministrators
SGU_XA_GEN-<scope>
Where:
SGU= Security Group Universal
XA= Xenapp environment
GEN = General
<scope> =
XA-ADMINISTRATORS = administrator permissions on Citrix Xenapp environment
XA-SERVICEDESK = servicedesk permissions on Citrix Xenapp environment
XA-USERS= user permissions on Citrix Xenapp environment
PVS-ADMINISTRATORS= administrator permissions on Citrix PVS environment
APPV-ADMINISTRATORS= administrator permissions on AppV environment
Sysxa<description>
Where:
Sys = system
Xa = XenApp environment
<description> = description of the service account
V<server type>CON-<ip range segment><last 3 digits of server ip address>
Where:
V = Virtual
<server type>:
XM=Master Xenapp server
XT=Test Xenapp server
XA=Acceptance Xenapp server
XP= Production Xenapp server
DD= Delivery Controller server
SF= Storefront server
PV= Provisioning Services server
LS= Licensing server
FS= File server
AV= App-V management/publishing server
SQ= App-V sequencer
CON = Brussels datacenter
<iprange segment>= will be 1
<last 3 digits of server ip address> = will be in the range of 200 to 235
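The computer naming convention can be sketched as follows (our own illustration; note that the document itself writes the PVS server names both as vpscon and VPVCON, so the type codes here follow the legend above):

```python
def server_name(server_type: str, ip: str) -> str:
    """Build an AD computer object name following
    V<server type>CON-<ip range segment><last 3 digits of server ip>.

    The IP range segment is fixed to 1 in this environment, and the
    last octet is zero-padded to three digits (e.g. .71 -> 071).
    """
    last_octet = int(ip.split(".")[-1])
    return f"V{server_type}CON-1{last_octet:03d}"

# e.g. the licensing server at 192.168.1.209 -> VLSCON-1209
```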
Typical uses:
Automatically setting up thin and zero clients with the right profile when they first attach
to the network
Changing the settings of the device or its local protocols and software tools
Re-imaging devices mid-life when new firmware becomes available, diagnostics and
support
6.7.2 VM Specifications
Name IP Address # vCPU RAM Disk
vmgcon-1071 192.168.1.71 4 4 GB 50 GB
Item: Igel UMS profiles folder structure
Configuration: A folder per thin client firmware version
Justification: ACME best practices

Item: Igel UMS profiles
Configuration: Per thin client firmware version:
One baseline profile per thin client version
A separate profile per thin client version for the following task areas:
Firmware Update
GFX
Hardware – printer
Keyboard
Mouse
Sound
Justification: ACME best practices

Item: Igel UMS thin client folder structure
Configuration: A folder per thin client firmware version
A department folder per firmware version folder
Justification: ACME best practices
6.7.4 Configuration
6.7.4.1 General information
The existing Igel UMS server, vmgcon-1071, will be used to manage the Igel thin clients that
will connect to the new Citrix Xenapp environment.
Igel UMS version 4.09.110 is running on this server.
Igel UMS HA will not be set up as this is not required within this environment.
The following Igel thin clients are currently used within the environment:
UD3-420 LX
UD3-421 LX
UD3-430 LX
UD3-431 LX
UD3-W7 40C
The multimedia codec pack is not installed on the Igel LX thin clients.
The following subfolders will be created below each Igel firmware version folder:
<folder>_<section>_<description>
Where:
Folder = the folder to which the profile belongs
Section = the profile section of the configured settings
Description = description of the settings that are configured within the profile
Initially the following profiles will be configured for each Igel firmware version:
BASELINE: this profile contains all the default settings that will be applied to all thin
clients that have the same firmware version.
FIRMWARE_UPDATE: this profile contains settings that are required to perform a
firmware upgrade for thin clients that have the same firmware version.
GFX_RESOLUTION_1280x1024_1280x1024_DUAL-MONITOR: this profile contains
screen resolution settings for thin clients that have two monitors with 1280x1024
resolution.
This results in the following configuration within the IGEL UMS console:
6.8.3 VM Specifications
Name IP Address # vCPU RAM Disk
vfscon-1208 LAN: 192.168.1.208 2 12 GB C:\60 GB (virtual disk)
6.8.4 Configuration
The file servers will host the file shares:
Taskflow$ share hosting the Taskflow console, master share, status share and
installation sources.
Userconfig$ share hosting:
Citrix user profile for each user
Redirected user shell folders for each user:
Desktop
Favorites
Links
Appdata (Roaming)
PVS
6.9.3 VM Specifications
6.9.4 Configuration
6.9.4.1 Citrix License Server
A Citrix License Server is required to be compliant with Citrix licensing. The License Server is
used by the Xenserver hosts, XenApp Delivery controllers, XenApp servers and Citrix
Provisioning Servers.
When the Citrix license server is unavailable, XenApp servers enter a grace period.
Xenserver, XenApp and PVS have a grace period of 30 days.
The license server component will be installed on server vlscon-1209 using default port
27000.
The Citrix licenses will be installed on the new license server:
- 200 XenApp platinum edition concurrent user licenses
- 12 per socket Xenserver enterprise licenses (covers 6 x 2 socket CPU servers)
The license server that is used for Citrix licensing, vlscon-1209, will also be used as RDS
licensing server.
IMPORTANT: HYPERVISOR HA WILL BE USED TO ENSURE BASIC HA SOLUTION FOR THIS COMPONENT.
HOWEVER, TO ENSURE 99,99% AVAILABILITY ON RDS LICENSING COMPONENT, PLEASE INSTALL AND
CONFIGURE A SECOND RDS LICENSING SERVER WITH NO RDS CALS AND ADJUST GPO
“SGL_GPO_CONTOSO_XA_<DTAP ENV>_COMPUTER_DEFAULT_SERVER_CONFIG” TO
INCLUDE THIS EXTRA RDS LICENSING SERVER.
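For reference, the 99,99% availability target mentioned above corresponds to roughly 52.6 minutes of allowed downtime per year. The arithmetic can be checked with a short calculation (the function name is ours):

```python
# Allowed downtime for a given availability target over a period
# (default period: one non-leap year, expressed in minutes).
def allowed_downtime_minutes(availability: float,
                             period_minutes: float = 365 * 24 * 60) -> float:
    return (1 - availability) * period_minutes

# 99.99% availability over a year:
print(round(allowed_downtime_minutes(0.9999), 1))  # 52.6
```

This is why a single RDS licensing server behind hypervisor HA alone (with its restart delay) is unlikely to meet such a target, and a second licensing server is advised.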
7 Hardware Layer
7.1.1 Layer Overview
The hardware layer is responsible for the physical devices required to support the entire
solution including servers, and storage devices. Specific Hardware Layer components and
design decisions are based on the completed design of the above layers (User, Access,
Desktop and Control).
sufficient.
Decision: Workload separation?
Configuration: No separate Xenserver pool, but 2 dedicated Xenserver hosts within the pool for Xenapp workload
Justification: Important to have dedicated Xenserver hosts for the Xenapp workload to make sure that Xenapp servers have sufficient system resources (no CPU overcommitment).

Decision: Local Storage
Configuration: 124GB RAID1
Justification: Local storage only used for Xenserver installation.

Decision: Shared Storage
Configuration: 2 new SRs will be created to host the VMs of the new Citrix Xenapp environment:
ISCSI-XENLUN-R620POOL1-SAS
ISCSI-XENLUN-R620POOL2-SAS
Justification: Important to have sufficient and performant shared storage.

Decision: Network config
Configuration: 4 network configs:
Bond 0+1 SAN: storage
Bond 2+3 LAN: VM traffic
Network 5 Mgt: management
Network 4: DMZ
Justification: Xenserver best practices, best configuration in the customer's situation.
7.1.3 Design
7.1.3.1 Physical Hardware
The XenApp servers as well as the related infrastructure servers will be virtual machines
running on a hypervisor environment hosted on 6 physical servers.
4 physical servers will be used for the infrastructure server workload, while 2 physical servers will
be used for Xenapp workload. As mentioned before, all physical servers will run Xenserver
hypervisor and all Xenapp and infrastructure servers will run as virtual machines.
Each physical server has the following specs:
Item Specification
Vendor + Model Dell Poweredge R620
CPU 2 x Intel Xeon E5-2650v2 8-core 2.6 GHz
7.1.3.2 Hypervisor
7.1.3.2.1 Hypervisor vendor
In order to fully use the new hardware, a hypervisor will be used to allow multiple XenApp
servers to be used on the same physical box.
The hypervisor to be used is Citrix XenServer Enterprise edition (commercial version)
because:
- This hypervisor is optimized for Citrix Xenapp workload
- CONTOSO is already using XenServer for the current virtual server environment,
including the currently used Citrix Xenapp environment and already has knowledge
about the product.
This is the latest version available and should fully support the hardware.
This also results in the following vCPU and memory assignment per Citrix Xenserver host:
Xenserver host | # vCPU assigned | Current # total vCPU | RAM | Current free RAM | Workload
SXSCON-1018 | 32 | 57 | 262 GB | 140 GB | Infra
SXSCON-1019 | 32 | 61 | 262 GB | 125 GB | Infra
SXSCON-1020 | 32 | 28 | 196 GB | 2 GB | Xenapp
SXSCON-1021 | 32 | 43 | 196 GB | 62 GB | Infra
SXSCON-1022 | 32 | 64 | 196 GB | 72 GB | Infra
SXSCON-1023 | 32 | 28 | 196 GB | 2 GB | Xenapp
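The "no CPU overcommitment" goal for the Xenapp hosts can be sanity-checked by comparing the total number of assigned vCPUs against a host's logical CPU count. A brief illustrative sketch follows; the 32 logical CPUs follow from 2 sockets x 8 cores with hyper-threading, and the sample vCPU totals are ours:

```python
# Each Dell R620 host: 2 sockets x 8 cores x 2 threads (hyper-threading)
# = 32 logical CPUs.
LOGICAL_CPUS = 2 * 8 * 2

def overcommit_ratio(total_assigned_vcpus: int,
                     logical_cpus: int = LOGICAL_CPUS) -> float:
    """A ratio above 1.0 means the host's logical CPUs are overcommitted."""
    return total_assigned_vcpus / logical_cpus

# Illustrative values: a dedicated Xenapp host should stay at or below 1.0.
print(overcommit_ratio(28))  # 0.875 -> no overcommitment
print(overcommit_ratio(61))  # 1.90625 -> overcommitted
```

Keeping the two dedicated Xenapp hosts below a ratio of 1.0 is what the workload separation decision above is meant to guarantee.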
Currently there is no need for multiple VLANs, so the ports will be configured as access ports on the switch.
The management IPs of the XenServer hosts will be located in the same subnet as the VMs.
Shared storage
Shared storage will be used.
2 new storage repositories of 1000 GB each should be created to host the disks of VMs that are related to the new Citrix Xenapp environment:
ISCSI-XENLUN-R620POOL1-SAS
ISCSI-XENLUN-R620POOL2-SAS
8 Monitoring
By having an in-depth understanding of current and expected behavior of the Citrix environment and its components, administrators are better equipped to
discover an issue before it impacts the user community. Furthermore, the data tracked during normal operations can be used for trending and capacity
planning. This section defines how a Citrix environment should be monitored, as well as some common tools that can be used.
Counter: LogicalDisk/PhysicalDisk - % Free Space
Description: % Free Space is the percentage of total usable space on the selected logical disk drive that is free.
Warning threshold: <10% of physical disk, or 10% reported after 2 minutes
Critical threshold: <5% of physical disk, or 10% reported after 1 minute
Action: Identify which files or folders consume disk space and delete obsolete files if possible. In case no files can be deleted, consider increasing the size of the affected partition or adding additional disks.

Counter: LogicalDisk/PhysicalDisk - % Disk Time
Description: % Disk Time marks how busy the disk is.
Warning threshold: >70% consistently, or 90% over 15 minutes (_Total)
Critical threshold: >90% consistently, or 95% over 15 minutes (_Total)
Action: Identify the processes/services consuming disk time using Task Manager or Resource Monitor. If all processes/services work within normal parameters and the level of disk consumption is an expected behavior, it should be considered to move the affected partition to a more capable disk subsystem in the future. If a process/service can be identified which works outside normal parameters, the process should be killed. Please note that killing a process can cause unsaved data to be lost.

Counter: LogicalDisk/PhysicalDisk – Current Disk Queue Length
Description: Current disk queue length provides a primary measure of disk congestion. It is an indication of the number of transactions that are waiting to be processed.
Warning threshold: >=1 (per spindle) consistently, or 3 over 15 minutes (_Total)
Critical threshold: >=2 (per spindle) consistently, or 10 over 30 minutes (_Total)
Action: A long disk queue length typically indicates a disk performance bottleneck. This can be caused by either processes/services causing a high number of I/Os or a shortage of physical memory. Please follow the steps outlined for counter "LogicalDisk/PhysicalDisk - % Disk Time" and counter "Memory – Available Bytes".

Counter: PhysicalDisk – Avg. Disk Sec/Read, Avg. Disk Sec/Write, Avg. Disk Sec/Transfer
Description: The Average Disk Second counters show the average time in seconds of a read/write/transfer from or to a disk.
Warning threshold: >=15ms consistently
Critical threshold: >=20ms consistently
Action: High disk read or write latency indicates a disk performance bottleneck. Affected systems will become slow and unresponsive, and applications or services may fail. Please follow the steps outlined for counter "LogicalDisk/PhysicalDisk - % Disk Time".

Counter: Network Interface – Bytes Total/sec
Description: Bytes Total/sec shows the rate at which the network adapter is processing data bytes. This counter includes all application and file data, in addition to protocol information, such as packet headers.
Warning threshold: < 8 MB/s for a 100 Mbit/s adaptor, <80 MB/s for a 1000 Mbit/s adaptor, or 60% of NIC speed
Critical threshold: 70% of NIC speed, inbound and outbound traffic, for 1 min.
Action: Identify the processes/services consuming network bandwidth using Task Manager or Resource Monitor. If all processes/services work within normal parameters and the level of bandwidth consumption is an expected behavior, it should be considered to move the respective process/service to a dedicated NIC (or team of NICs). If a process/service can be identified which works outside normal parameters, the process should be killed. Please note that killing a process can cause unsaved data to be lost.
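The warning/critical threshold logic described above can be expressed programmatically, for example to feed a monitoring tool. The following is a minimal sketch; the counter thresholds are taken from the table, while the function itself is ours:

```python
def classify(counter_value: float, warning: float, critical: float) -> str:
    """Classify a performance counter sample against warning/critical limits.
    Assumes higher values are worse (e.g. % Disk Time); counters such as
    % Free Space, where lower is worse, would need the inverted comparison."""
    if counter_value >= critical:
        return "critical"
    if counter_value >= warning:
        return "warning"
    return "ok"

# % Disk Time: warning above 70% consistently, critical above 90% consistently.
print(classify(50, warning=70, critical=90))  # ok
print(classify(75, warning=70, critical=90))  # warning
print(classify(95, warning=70, critical=90))  # critical
```

In practice the "consistently" and "over N minutes" qualifiers mean the classification should be applied to an averaged window of samples rather than a single reading.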
Hardware Failure: Any event notification that relates to a hardware failure should be looked at immediately. Any device that has failed will have an
impact on the performance of the system. At a minimum, a hardware failure will remove the redundancy of the component.
Security Warnings: Customers should investigate security warnings or audit failure events regarding failed logons in the security log. This could be an
indication that someone is attempting to compromise the servers.
Disk Capacity: As the drives of a Windows system reach 90% of capacity, an event error message will be generated. To ensure continuous service,
customers should poll these event errors. As the system runs out of hard disk space, the system is put at severe risk. The server might not have enough
space left to service the requests of users for temporary file storage.
Application / Service errors: Any event notification that relates to application or services errors should be investigated.
Citrix errors: All Citrix software components will leverage the Windows Event Log for error logging. A list of the known Event Log warnings and errors
issued by Citrix components can be found at the following links:
Event Codes Generated by PVS
XenDesktop 7 - Event Log Messages
It is important to periodically check the Event Viewer for Citrix related warnings or errors. Warnings or errors that repeatedly appear in the logs should
be investigated immediately, because it may indicate a problem that could severely impact the Citrix environment if not properly resolved.
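Warnings or errors that repeatedly appear in the logs can be surfaced with a simple count before an administrator reviews them. A short illustrative sketch follows; the sample event IDs and the repeat threshold are made up for the example:

```python
from collections import Counter

def repeated_events(event_ids, min_count=3):
    """Return the event IDs that appear at least min_count times,
    as candidates for immediate investigation."""
    counts = Counter(event_ids)
    return sorted(eid for eid, n in counts.items() if n >= min_count)

# Hypothetical warning/error event IDs pulled from the Event Log:
log = [7024, 1001, 7024, 4625, 7024, 4625]
print(repeated_events(log))  # [7024]
```

The same counting approach scales naturally to the centralized log collection described below, where events from many servers are aggregated on one collector.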
In multi-server environments it becomes easier to administer the servers when logs can be collected and reviewed from a central location. Most enterprise
grade monitoring solutions provide this functionality. More sophisticated monitoring solutions enable an administrator to correlate event information with
other data points such as performance metrics or availability statistics. In case the selected monitoring solution does not provide this functionality the
Windows Server 2008 R2 or Windows Server 2012/2012 R2 Event Log subscription feature can be used. This feature allows administrators to receive events
from multiple servers and view them from a designated collector computer. For more information please refer to the Microsoft TechNet article – Manage
Subscriptions.
Logon Performance: Shows how long it takes for users to log on to their applications and desktops.
Load Evaluator Index: Provides various performance counter-based metrics, including CPU, Memory, and Disk Usage for Server OS machines.
Hosted Application Usage: Details all applications published in the site and can provide usage information about each individual application in detail
(concurrent instances, launches, usage duration, and so on).
For more information on Citrix Director Trends, please refer to the following:
Citrix Blogs – Citrix Director: Trends Explained
Citrix Support – CTX139382 Best Practices for Citrix Director
For CONTOSO, the Citrix Xenapp servers will be deployed in a fully automated way; this also
includes rebuilding. A combination of Citrix provisioning services (for Xenapp machine
deployment), ACME TaskFlow (for server configuration + application deployment) and
Xenserver template (for VM + OS installation) will be used to automate the installation,
configuration and deployment of Citrix Xenapp servers.
The rebuild procedure for the other components will be based on a written installation
procedure.
For CONTOSO, we have the following rebuild strategy in place:
Component: Citrix Xenapp
Rebuild strategy: Installation procedure that describes:
VM template deployment
Taskflow deployment
Deployment of Xenapp servers via Citrix PVS
Justification: ACME recommended rebuild procedure

Component: Citrix Delivery controller
Rebuild strategy: Installation procedure
Justification: ACME recommended rebuild procedure

Component: Citrix Provisioning Services
Rebuild strategy: Installation procedure
Justification: ACME recommended rebuild procedure

Component: Citrix Storefront
Rebuild strategy: Installation procedure
Justification: ACME recommended rebuild procedure
Local high availability solutions ensure availability in a single data center deployment. These
solutions guard against process, node, and media failures, as well as human errors. Local
high availability solutions can be further divided into two types: active-passive and active-
active:
Active-passive solutions deploy an active instance that handles requests and a passive instance that is on standby. When the active instance fails, it is shut down and the passive instance is brought online and resumes application services. At this point the active-passive roles are switched. This process can be done manually or it can be handled through vendor-specific clusterware. Active-passive solutions are generally referred to as cold failover clusters.
Active-active solutions deploy two or more active system instances at all times. All
instances handle requests concurrently.
Citrix Xenapp already has a built-in fail-over mechanism in place, so no extra technology is required to have fail-over on this component.
Citrix Provisioning services has built-in fail-over, achieved by deploying multiple PVS servers in the same PVS farm and enabling high availability.
Citrix delivery controller servers have built-in high availability when multiple servers are deployed in the same Xenapp/Xendesktop site.
Citrix Storefront does not have built-in fail-over, so a network load balancing solution, Citrix Netscaler, is used to enable redundancy and load balancing of the Citrix Storefront servers.
The Citrix Netscaler appliances have built-in fail-over functionality; however, a deployment of 2 Citrix Netscalers is required to have active-passive HA.
In general, the Citrix license server is not business critical and in most cases there is no need
to make them highly available. A good backup strategy and a well-documented disaster
recovery procedure for this component should be sufficient.
Other dependent infrastructure servers such as AD domain controllers, file servers, print
servers should also be made highly available by using their built-in fail-over capacities or by
using technologies such as clustering and network load balancing.
ACME recommends that there is no single point of failure within the Citrix Xenapp environment. Citrix delivery controllers, Citrix Xenapp, Citrix PVS and Citrix Netscaler have built-in fail-over capacity, which should be activated by deploying at least 2 servers/appliances. Other components, such as the Citrix Storefront servers, do not have built-in fail-over and should be made highly available by other means. Also make sure that all other dependent infrastructure servers (such as AD DC's, database servers, file servers, print servers, ...) are highly available.
Component: Igel UMS server
HA strategy: Hypervisor HA
Justification: Temporary unavailability of this service is not critical. Basic HA is provided by the underlying hypervisor HA functionality.

Component: Citrix Xenserver
HA strategy: Multiple Xenserver hosts in HA setup
Justification: Recommended HA solution for Citrix Xenserver
Whether you choose the active-passive or the active-active approach, always make sure that you have proper scalability and capacity planning in place, so you can guarantee proper functioning of the systems in case of a disaster.
CONTOSO only has 1 datacenter, so no site fail-over strategy can be put in place. However, CONTOSO has a procedure in place to rebuild the datacenter in case of disaster recovery.
CONTOSO already has a disaster recovery plan in place that is based on a backup & restore strategy for virtual machines and on reassigning iSCSI disks within the OS if this is required.
ACME built the disaster recovery strategy for the new Citrix Xenapp environment based on
the disaster recovery strategy that is already in place at CONTOSO for their existing server
infrastructure.
ACME recommends performing disaster recovery testing occasionally to make sure that systems continue to function correctly in case of a disaster.
ACME recommends having a decent backup and recovery strategy and procedure in place for at least the Citrix-related components. Make sure that at least the above-mentioned components are backed up. Also perform recovery testing occasionally to make sure that you are able to restore the required data.
Note: It is assumed that there is a fast automated rebuild process in place for the servers
supporting the Xenapp infrastructure (Delivery controller, StoreFront server, Provisioning
Server, etc.). If this assumption is not true then all infrastructure servers must also be
backed up. Virtual networks are not included in a full server backup. You will need to
reconfigure the virtual networking by recreating the virtual networks and then reattaching
the virtual network adapters in each virtual machine to the appropriate virtual network.
Make sure the virtual network configuration and all relevant settings are documented as part
of the backup process.
CONTOSO uses a script to frequently create snapshots of VMs running on the Citrix Xenserver environment and to store these snapshots as backups on a QNAP NAS storage device.
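A retention step typically accompanies such a snapshot script, deleting the oldest snapshots beyond a kept count so the NAS does not fill up. The following is a minimal sketch of the selection logic only; the snapshot names and retention count are illustrative, not taken from CONTOSO's actual script:

```python
def snapshots_to_delete(snapshots, keep=7):
    """Given (name, timestamp) pairs, return the snapshots that fall
    outside the newest `keep`, oldest first."""
    ordered = sorted(snapshots, key=lambda s: s[1])
    return ordered[:-keep] if len(ordered) > keep else []

# Ten hypothetical daily snapshots, keeping the newest seven:
snaps = [(f"vm-backup-{d:02d}", d) for d in range(1, 11)]
print(snapshots_to_delete(snaps, keep=7))
# [('vm-backup-01', 1), ('vm-backup-02', 2), ('vm-backup-03', 3)]
```

Selecting deletions by sorted timestamp rather than by name keeps the rotation correct even if snapshot naming changes.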
Symantec Backup Exec is used as the backup solution to back up databases and files from within the VMs.
For CONTOSO, we have the following backup strategy in place: