Windows Test Technologies

User Manual
WTT 2.0 RTM

Revised September 2004

Disclaimer
Information in this document, including URL and other Internet Web site
references, is subject to change without notice. Unless otherwise noted, the
example companies, organizations, products, domain names, e-mail addresses,
logos, people, places, and events depicted herein are fictitious, and no
association with any real company, organization, product, domain name, e-mail
address, logo, person, place, or event is intended or should be inferred.
Complying with all applicable copyright laws is the responsibility of the user.
Without limiting the rights under copyright, no part of this document may be
reproduced, stored in or introduced into a retrieval system, or transmitted in any
form or by any means (electronic, mechanical, photocopying, recording, or
otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other
intellectual property rights covering subject matter in this document. Except as
expressly provided in any written license agreement from Microsoft, the
furnishing of this document does not give you any license to these patents,
trademarks, copyrights, or other intellectual property.
© 2004 Microsoft Corporation. All rights reserved.
Microsoft, MSDN, MS-DOS, Visual C#, Visual C++, Win32, Windows, Windows NT,
and Windows Server are either registered trademarks or trademarks of Microsoft
Corporation in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the
trademarks of their respective owners.

Table of Contents
Chapter 1: Introduction...................................................................................
Windows Test Technologies Overview................................................................
WTT Features.....................................................................................
Windows Test Technologies Architecture............................................................
Enterprise Detail.................................................................................
Test Resources Detail............................................................................
Controllers...............................................................................................
Getting Started - Process Summary..................................................................
Chapter 2: WTT Setup.....................................................................................
WTT Setup Overview...................................................................................
Controller Setup.........................................................................................
System Requirements............................................................................
Hardware Requirements.........................................................................
User Account Requirements.....................................................................
Database Installation............................................................................
Microsoft .NET Framework Installation.......................................................
Installing WTT Controller.......................................................................
Client Setup..............................................................................................
Software Requirements..........................................................................
User Account Requirements.....................................................................
Installing WTT Client.............................................................................
MSXML Installation...............................................................................
WTT Studio Setup.......................................................................................
Software Requirements..........................................................................
User Account Requirements.....................................................................
.NET Framework Installation...................................................................
Installing WTT Studio............................................................................
Chapter 3: Asset Tracking.................................................................................
Asset Terminology.......................................................................................
Asset Pools...............................................................................................
Getting Started in Asset Management................................................................
Asset Tracking Best Practice Recommendations....................................................
Asset Pool Management Procedures..................................................................
Asset Tracking Procedures.............................................................................
Registering Assets................................................................................
Viewing and Editing Computer Details........................................................
Searching for Assets..............................................................................
Transferring an Asset.............................................................................
Asset Loans........................................................................................
Vendor Management.............................................................................

Standard Asset Reporting........................................................................
Chapter 4: Jobs.............................................................................................
Fundamental Jobs Concepts...........................................................................
Using the Job Explorer Tree View.....................................................................
Creating a Job Feature or Category Node...........................................................
Creating and Editing Jobs..............................................................................
Setting General Job Characteristics...........................................................
Setting Runtime Parameters....................................................................
Setting Job Constraints..........................................................................
Setting Job Mixes and Contexts................................................................
Setting an LMS....................................................................................
Setting Job Tasks.................................................................................
Setting Task Dependencies and Order.........................................................
Advanced Task Dependencies...................................................................
Setting Attributes for a Job.....................................................................
Setting Dimensions for a Config Job...........................................................
Using Job Explorer......................................................................................
Jobs Toolbar Options.............................................................................
How to Use the Job Explorer Tree View.......................................................
Job Explorer Feature and Category Short-Cut Commands.................................
Exporting and Importing Jobs from Job Explorer............................................
Job Explorer Short-Cut Commands............................................................
Exporting Jobs Details..................................................................................
Using the Job Explorer Query Function..............................................................
Using the Scheduler.....................................................................................
Schedule Toolbar Options.......................................................................
Creating a Schedule..............................................................................
Adding Constraints to a Schedule..............................................................
Setting Schedule Options........................................................................
Setting Mailing Options for a Schedule........................................................
Scheduler Fundamentals...............................................................................
Terminology.......................................................................................
Scheduler Prioritizing............................................................................
Twin Scheduler....................................................................................
Smart Scheduler.........................................................................................
Common User Scenarios and Best Practices.........................................................
Common Team Setup (Sharing).................................................................
Private Binary Installation......................................................................
Using Tests from Another Team.................................................................
SQL, IIS, and Operating System Installation..................................................
Smart Scheduler Considerations...............................................................
Chapter 5: Job Results....................................................................................

Monitoring Jobs..........................................................................................
Job Monitor Toolbar Options....................................................................
Machine Pool Short-Cut Commands..........................................................
Machine List View Short-Cut Commands.....................................................
Job Execution Status View Short-Cut Commands..........................................
Task Execution Status View Short-Cut Commands.........................................
Querying results in Job Monitor...............................................................
Quick Schedule of a job on computer(s)....................................................
Using the Result Explorer.............................................................................
Result Explorer Toolbar Options..............................................................
Result Explorer Short-Cut Commands........................................................
Task Results Short-Cut Commands............................................................
Viewing Job Results.............................................................................
Viewing Job Errors..............................................................................
Working with the Results Log.................................................................
Adding Manual Job Results to the Results Log..............................................
Changing the Column Display and Sort on the Job Results Form........................
Querying Results in Result Explorer..........................................................
Editing Results in Result Explorer............................................................
Using Result Collection Explorer....................................................................
Result Collection Toolbar Buttons............................................................
Result Collection Short-Cut Commands......................................................
Querying a Result Collection..................................................................
Using Result Rollup....................................................................................
Result Rollup Toolbar Options.................................................................
Result Rollup Short-Cut Commands..........................................................
Querying Results in Result Rollup.............................................................
Chapter 6: WTT Administration........................................................................
Managing Enterprises..................................................................................
User Administration...................................................................................
Dimensions..............................................................................................
Adding and Editing Dimensions...............................................................
Machine Configuration Query Dimensions...................................................
Create MCU Policy for an Asset Pool.........................................................
Verify MCU Policy...............................................................................
Global Parameters.....................................................................................
Global Mixes............................................................................................
Working With a Global Simple Mix............................................................
Setting Constraints for a Simple Mix Context...............................................
Setting Parameters for a Simple Mix Context...............................................
Setting Attributes for a Simple Mix Context................................................
Working with a Global Advanced Mix.........................................................
Setting Dimensions and Parameters for a Global Advanced Mix.........................

Chapter 7: WTT Autotriage..............................................................................
Terminology.............................................................................................
Working with WTT Autotriage tools.................................................................
Default Behavior of Autotriage Tools...............................................................
Chapter 8: Resolver......................................................................
Terminology.............................................................................................
Working with Resolver................................................................................
Chapter 9: Notification Service........................................................................
Notification Service Independent Setup............................................................
Service Setup Requirements...................................................................
Notification Service Best Practices..................................................................
Configuring a Standalone Notification Service....................................................
Chapter 10: Scenario Builder..........................................................................
Terminology.............................................................................................
Working with Scenario Builder.......................................................................
Designing a Scenario............................................................................
Executing Scenarios............................................................................
Appendix A: Glossary.....................................................................................
Appendix B: Accessibility Options.....................................................................
Global Keyboard Shortcut Combinations...........................................................
Keyboard Shortcut Combinations....................................................................
Accessibility Known Issues............................................................................
Appendix C: Best Practices..............................................................................
Asset Tracking Best Practice Recommendations...................................................
Jobs Best Practice Recommendations..............................................................
Test Structure and Design.....................................................................
Job Implementation - Logical Machine Set (LMS)...........................
Job Implementation - File Copying...........................
Job Implementation - Task Execution Phase and Dependency............................
Using Batch Files in Jobs.......................................................................
Naming Conventions (Tests, Features, and Categories) ...................................
Security Concerns...............................................................................
Keys, Parameters, and Environmental Variables...........................................
Appendix D: WTT Logger................................................................................
WTT Logger Functions................................................................................
Terminology.............................................................................................
Working with WTT Logger............................................................................
Code Example of a Typical Application Using WTT Logger................................
Coding WTT Logger.............................................................................

WTT Logger Integration with WTT Jobs............................................................
Appendix E: WTTCMD Command Tool.................................................................
WTTCMD Code sample................................................................................
WTTCMD Commands...................................................................................
Local Symbol Users.............................................................................
Local Logical Users.............................................................................
Appendix F: WTTOMCMD Command Tool.............................................................
Terminology.............................................................................................
WTTOMCMD Syntax....................................................................................
Database Parameters...........................................................................
Command Directive.............................................................................
Properties........................................................................................
Working with WTTOMCmd............................................................................
WTTOMCmd configuration file.......................................................................
JobImportOptions...............................................................................
AttributeOptions.......................................................................................
GlobalMixOptions......................................................................................
GlobalParamOptions...................................................................................
JobConflictOptions....................................................................................
JobOverwriteOptions..................................................................................
LibraryJobConflictOptions............................................................................
LibraryJobOverwriteOptions.........................................................................
FeatureImportOptions..........................................................................
RemapHierarchy.......................................................................................
AppendFeature.........................................................................................
RemapFeature..........................................................................................
CategoryImportOptions........................................................................
ImportCategory........................................................................................
ImportHierarchy.......................................................................................
JobImportReportOptions.......................................................................
ReportType..............................................................................................
ResultImportOptions............................................................................
ResultsOnly.............................................................................................
ResultConflictOptions.................................................................................
WTTOMCmd Best Practices...........................................................................
Appendix G: Unified Stress Testing....................................................................
Stress Scheduler.......................................................................................
Terminology......................................................................................
Stress Scheduler Best Practices...............................................................
Working with Stress Scheduler................................................................
Test Mix Management.................................................................................
Terminology......................................................................................

Test Mix Management Best Practices.........................................................
Working with Test Mix Management..........................................................
Appendix H: Machine Configuration Query Dimensions...........................................
Overview of Machine Configuration Query process...............................................
Common Usage Scenarios for MCU..................................................................
Machine Configuration Update Best Practices..............................................
Additional XPath samples......................................................................
Troubleshooting Updates.......................................................................
Appendix I: Sysparse.....................................................................................
Sysparse Coverage Detection........................................................................
Sysparse Data....................................................................................
Computer Attribute.............................................................................
Caveats...........................................................................................
Appendix J: Log Viewer..................................................................................
Terminology.............................................................................................
Working with Log Views...............................................................................
Log Views........................................................................................
Using Custom View Transforms................................................................
Appendix K: Extending WTT (The UI Framework)..................................................
Terminology.............................................................................................
Working with UI Framework..........................................................................
Appendix L: WTT Metric Collection....................................................................
Terminology.............................................................................................
Working With the Configuration UI..................................................................
General...........................................................................................
EventLog.........................................................................................
Perfmon (Performance Monitoring)..........................................................
Pool Tags.........................................................................................
Process...........................................................................................
Intelligent Pass/Fail Configuration...........................................................
Configuration UI FAQs..........................................................................
Working with Analyzer UI.............................................................................
Queries...........................................................................................
Display Section..................................................................................
Analyzer UI FAQs................................................................................
Jobs integration........................................................................................
Job Integration FAQs...........................................................................
Advanced Options.....................................................................................
Customization Procedures.....................................................................
Deploying Custom Client Engine Plug-ins....................................................

Appendix M: Managing LLU and LSU Functions......................................................
Local Logical Users (LLU).............................................................................
Usage of an LLU.................................................................................
Configuring an LLU..............................................................................
Local Symbol User (LSU)..............................................................................
Usage of LSU.....................................................................................
Configuring an LSU..............................................................................
Appendix N: Unified Reporting.........................................................................
Unified Reports Cube Administration...............................................................
Unified Reports Module Architecture........................................................
Unified Reporting Service Datastore.........................................................
Unified Reports OLAP Database...............................................................
Unified Reports Cube Admin Dialog..........................................................
Unified Reports Crawler Service..............................................................
Unified Reports Web Site......................................................................
Working with Unified Reports........................................................................
Appendix O: Unattended Installation.................................................................
Test Controller.........................................................................................
Test Client..............................................................................................
Test Studio..............................................................................................
MSI uninstall commands.......................................................................
Appendix P: Code Coverage.............................................................................
Controller Setup.......................................................................................
Client Setup............................................................................................
Job Considerations....................................................................................
Managing Code Coverage Data.......................................................................
Appendix Q: Windows Test Labs........................................................................
Windows Test Lab Information.......................................................................
1394 Lab..........................................................................................
ACPI/Power Lab.................................................................................
Application Compatibility Lab................................................................
Audio Lab........................................................................................
Base Scenarios Lab..............................................................................
Bluetooth Lab...................................................................................
Embedded Lab...................................................................................
Hardware Experience Lab.....................................................................
iSCSI Lab.........................................................................................
Kernel Lab.......................................................................................
Mobile Lab.......................................................................................
Networking Lab.................................................................................
OOBE Lab.........................................................................................

OPK/Setup/Fresh Install Lab..................................................................
Performance Lab................................................................................
Plug and Play/PCI Lab..........................................................................
RIS Remote Install Service Lab................................................................
Static Lab........................................................................................
Storage Lab......................................................................................
System Migration Lab (formerly Upgrades).................................................
Sustained Engineering (WinSE) Lab...........................................................
USB Lab...........................................................................................
VCT (Video) Lab.................................................................................
WDEG/NCD-AVQ Lab............................................................................

Chapter 1: Introduction
This chapter gives an overview of Windows Test Technologies (WTT) and some of
its important features. In addition, it lists the major steps to follow for an initial
end-to-end experience, from setting up WTT to reviewing the results of a test
run.

The following subjects are included in this chapter:


Windows Test Technologies Overview
WTT Architecture
Controllers
Naming Conventions (Folders, Tests, and Groups)
Getting Started - Process Summary

Windows Test Technologies Overview


Windows Test Technologies (WTT) is a distributed test automation framework for
the next generation of software. It is designed as a complete test case
management solution and automation framework for starting tests, tracking test
assets, gathering and analyzing test data, and logging the test results to backend
databases. In addition, WTT provides testers with a development platform that
can be used to create custom testing solutions.
WTT is installed as a stand-alone application, giving you the ability to centrally
store test cases and select when and where to run the tests.
Note: Please refer to the WTT Release Site for the most current version of this
document.

WTT Features
WTT Automation Datastore provides data storage for test cases. These test
cases can be grouped and scheduled when necessary to complete a test pass.
Utilizing the datastore, users are able to extend test cases with automation
information used when running the job (or test case).
WTT Controller, which continuously runs a set of services and applications to
support the execution and logging of jobs.
Note: A Controller can host both an Automation Datastore and a WTT
Controller on the same computer.
Asset Tracking, which helps track hardware for testing, as well as supporting
the sophisticated test automation and reporting technologies of WTT. Asset
Tracking provides users with dynamic asset pool management, allowing them
to efficiently allocate computers, devices, and other peripherals, as well as
configure complex test scenarios.
Jobs, which can be easily created, queried, sorted, grouped, and selected for
execution. Jobs are a tool for automating test cases and are stored in feature
nodes in a tree-view that allows for easy organization.
Runtime parameter values, which can be set by the user, allowing variable
values whenever a job is scheduled to run. Runtime parameters can also extend
the usefulness of test cases and be reused for extra flexibility and
consistency among teams.
WTT Sysparse, which gathers detailed information about client computers
used for testing, storing the information in the Asset Tracking database. WTT
users can use this customizable information to set up jobs using specified
dimensions which are used to find appropriate computers for test cases. This
information also allows users to efficiently organize test cases on large
numbers of diverse computers and devices.
Library jobs, which can be called up and referenced from within the context of
another job. Library jobs allow test cases to be shared and reused throughout
WTT.

Windows Test Technologies Architecture


The architecture of Windows Test Technologies (WTT) is most easily seen as
follows:

Figure 1.1 WTT Architecture Overview

Client Detail

Figure 1.2 Client Level - Client Side

Figure 1.3 Client Level - OM Detail

Enterprise Detail

Figure 1.4 Enterprise Level - Enterprise Servers

Figure 1.5 Enterprise Level - WTTOM Interface

Test Resources Detail

Figure 1.6 Test Resources - Automation Services Level

Figure 1.7 Test Resources Level - Job Detail

Controllers
Each WTT enterprise must contain at least one server known as the Controller.
The Controller is comprised of an Automation Datastore and one or more WTT
Controllers. The Automation Datastore contains information about WTT client
computers as well as current and past jobs. The WTT Controller hosts the Job
Delivery Agent and WTT Execution Agent, along with other services and
applications fundamental to the operation of WTT.
Additional separate controllers can be added to a WTT enterprise as needed
for project growth and load balancing. This distribution of controllers allows
WTT to provide support for labs where the computers do not have a direct
connection to the corporate network. It also allows individual teams to assess
whether they need to host their own WTT controller rather than use WTT-hosted
servers.
Controllers are an important entity to the end user in WTT. Although users can
access computers across controllers, basic logical groupings of individual client
computers (known as asset pools or machine pools) are specific to individual
controllers to allow for better asset control. This means that certain functions
that are based on asset pools cannot function across multiple controllers. The WTT
Job scheduler, for example, schedules on the asset pool level, and therefore
cannot be used to schedule jobs across more than one controller.

Figure 1.8 Controller hierarchy in a WTT enterprise

Getting Started - Process Summary


This process summary provides an overview of the process of setting up WTT,
running tests, and analyzing their results. It is not intended to serve as
directions for using WTT; users should see the specific sections for each
process for detailed procedures.

WTT Infrastructure Setup (usually performed by Enterprise Administrators)
1. Install Microsoft SQL Server 2000 Service Pack 3a (SP3a) or Microsoft
Database Engine (MSDE) 2000 on the system that will function as the
controller. SQL Server or MSDE should be configured for NT Authentication
or Mixed Mode authentication.
2. Install Microsoft .NET Framework version 1.1 on the controller.
3. Install the test controller on the server.

WTT End-User Deployment Setup


1. Install WTT Client on each computer on which jobs will be run.
2. Install WTT Studio on the computer where the user will develop, manage,
or view WTT assets and test execution.

Adding an Asset Pool and Preparing Assets


1. Open the Asset Management UI from WTT Studio.
2. Add an asset pool for running test jobs to the root directory and select a
Job Delivery Agent.
3. Move client computers and devices to your new asset pool.
4. Set the client computers' status to Ready.

Creating and Scheduling Jobs


1. Open the Job Explorer.
2. Create a job feature node to store your jobs.
3. Create a job.
4. Specify general characteristics, runtime parameters, job constraints, and
logical machine sets, if any.
5. Specify the tasks, set the task execution order, and specify task
dependencies, if any.
6. Schedule the job to run immediately or at another designated time.
Viewing Job Results

1. Query results.
2. Monitor jobs.
3. View results.
4. Edit results, if appropriate.
5. View the failure logs and job reports.

Chapter 2: WTT Setup


This chapter provides information about how to set up a Windows Test
Technologies (WTT) enterprise, consisting of a Controller for managing the
enterprise, WTT Client computers for running jobs, and the WTT Studio user
interface.

The following subjects are included in this chapter:


WTT Setup Overview
Controller Setup
Client Setup
WTT Studio Setup

WTT Setup Overview


Setting up a WTT enterprise consists of setting up a server to function as a
Controller, and then installing the WTT Client software on your test computers.
WTT Studio is then used as a user interface for managing and running tests on
these client computers. The test computers are automatically registered with the
Controller and information about each computer and its associated devices is
gathered and stored in the Asset Tracking database on the Controller for use
when scheduling jobs (test cases).
Using the WTT Installer, the WTT software is copied to the server and the WTT
Controller is configured during setup. Once the Controller is installed, WTT clients
and WTT Studio installations originate from a folder on the WTT Controller.
WTT Controller
Although a WTT enterprise may contain multiple controllers, it must contain at
least one server operating as the WTT Controller. The WTT Controller is comprised
of two parts: an Automation Datastore, where information about client computers
and test cases is stored, and a WTT Controller, which hosts the Job Delivery
Agent, and the WTT Execution Agent, as well as other WTT services and
applications. Additional controllers can be added to a WTT enterprise as needed
for project growth and load balancing.
WTT Client
The WTT Client software must be installed on each computer that is used for
testing with WTT. In order to install the WTT Client, the installation package is run
on the client computer from the installation file share on the WTT Controller.
During installation, the client software is configured to reference the WTT
Controller, and configuration information about the client computer is
automatically registered in the Automation Datastore for designing and running
jobs.
WTT Studio
WTT Studio provides test engineers with a user interface for managing assets,
creating and running jobs, and otherwise working with test results. As with the
WTT Client, WTT Studio is installed from the installation file share on the WTT
Controller and is configured to reference a unique WTT Controller.

Controller Setup
Before installing the WTT Controller software, the server must have either
Microsoft SQL Server 2000 or Microsoft Database Engine (MSDE) 2000
installed.
Note: In an enterprise environment, the setup of a WTT Controller will usually
be completed by enterprise administrators.

System Requirements
WTT Controller is supported on computers running the following software:
x86 version of Microsoft Windows XP SP1, Microsoft Windows Server
2003, or Microsoft Windows codename Longhorn.
Microsoft SQL Server 2000 or MSDE 2000 Service Pack 3a (SP3a).
Microsoft .NET Framework version 1.1. You should install this before you
begin the WTT server setup.

Hardware Requirements
WTT Controller setup is currently only supported on x86 architectures. Detailed
hardware requirements such as number and speed of processors, video and
network cards and hard disk capacities have not been determined as of this
release. It is recommended, however, that a server be selected that is well within
the hardware specifications identified for Windows Server 2003.

User Account Requirements


To install the WTT Controller, a user must be logged on using a domain account
that has been granted local administrative rights. The account used for the WTT
Service (known as the Controller User) must have at least User rights on the
controller server and cannot have access to Source Depot.
Note: It is best that the Controller User account is not an administrator on the
server in order to reduce the attack surface of the Controller. If, however, the
Controller is being set up without creating a database (using an existing database
on a different computer), the setup account needs to have administrator

privileges on the database computer and the Controller User must have at least
User rights on that computer.

Database Installation
Before installing the WTT Controller software package, either Microsoft SQL
Server 2000 or MSDE 2000 SP3a must be installed.

To install MSDE
1. From the computer where you plan to install the Controller, download and
run the web package file from the Internet:
http://download.microsoft.com/download/8/7/5/875e38ea-e582-4ee2-9485-b459cd9c0082/sql2kdesksp3.exe
2. Click Open to continue.
3. If you are asked "Do you want to install and run sql2kdesksp3.exe from
download.microsoft.com?", click Yes to continue.
4. Read and accept the Software License Agreement.
5. Accept the default installation path.
6. Click Finish to extract the MSDE setup, and wait for it to complete.
7. In Windows, click Start and Run, and then at the prompt, type the path
where you saved the MSDE setup followed by:
\msde\setup sapwd=<databasepassword> disablenetworkprotocols=0
where <databasepassword> is your database administrator password.
8. To start the MSDE installation, press Enter and wait for it to complete.
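
For example, if you extracted the MSDE setup to C:\sql2kdesksp3 (a
hypothetical path; substitute the path you actually used in step 5), the full
command line would be:

C:\sql2kdesksp3\msde\setup sapwd=MyStr0ngP@ss disablenetworkprotocols=0

where MyStr0ngP@ss stands in for your own database administrator password.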

To make sure that the SQL Server 2000 services have started, confirm
that a green arrow is visible on the SQL Server 2000 icon in
the Windows System Tray on the Controller's desktop.
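
As a quick alternative check, you can list the started services from a command
prompt and look for the SQL Server service (service display names can vary, so
treat this as an informal sanity check rather than an official setup step):

net start | find /i "SQL"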

Microsoft .NET Framework Installation


Before installing the WTT Controller software package, you must install the
Microsoft .NET Framework version 1.1. If the server is running the Windows
Server 2003 operating system, you can skip this step. Windows Server 2003
already includes the correct version of the .NET Framework.

To install the Microsoft .NET Framework


1. From the Controller, install the Microsoft .NET Framework 1.1 from:
http://www.microsoft.com/downloads/details.aspx?FamilyID=262d25e3-f589-4842-8157-034d1e7cf3a3&DisplayLang=en.
2. Accept all of the defaults during installation.

Installing WTT Controller


During the installation of the WTT Controller software package, you will be asked
to provide the following information.
WTT Controller installation directory. This is the directory into which the
core WTT Controller files are installed. Unless there is a reason to change it, you
can simply accept the default setting provided by the Installer.
Install File Share settings. The Installer creates a file share on the computer
and populates it with the WTT Studio and WTT Client installation packages.
During the Controller installation, you can choose the name of the file share
and its location on your computer if you desire. You can also accept the
default settings provided by the Installer.
Log File Share settings. The Installer creates a file share on the computer to
be used as a repository for test log files. Additionally, the Installer creates a
local user account by which test computers can connect to the log file share
and upload their log files. The Installer allows you to choose the name and
location of this file share, as well as the user account details for accessing the
file share. Unless there is a reason to change them, you can simply accept the
default settings provided by the Installer.
Database settings. The Installer creates and initializes a WTT database to be
used for creating and scheduling WTT jobs. In doing so, the Installer will
supply a default name for the database which you may accept if desired.
Note: Directory names used for WTT installation may not have spaces within
them. Any directory name with spaces will cause the installation to fail.

To install the WTT Controller


1. Remove any previous version of the WTT Controller software.
2. From the WTT installation file share, run the command setup.exe. This
can be done using Windows Explorer or from a Windows command
prompt. No command-line options are needed.
For the WTT installation file share to use, please see your WTT
Administrator or the WTT Release Site.
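
For example, if the installation file share were \\WTTREL01\WTTSetup (a
hypothetical share name; your release share will differ), you would run:

\\WTTREL01\WTTSetup\setup.exe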

3. When the Installer displays the Controller Setup welcome screen, click
Next.
4. Read and accept the End User License Agreement, and then click Next.
5. If you wish to change the setup destination folder, click Browse, select
the destination directory, and then click OK.
6. Click Next.
7. If you wish to change the default file share name and location, type the
file name in the Install Share Name box and type the path in the
Install Share Path box.
8. In the Password box, type the installation user password.
Note: This password must comply with domain password restrictions and
the account being used must have local administrative access to be valid.
9. Click Next.
10. If you wish to change the default log file share information, type your
own log file share settings. Otherwise, accept the defaults. Click Next.
11. Click Next, and then click Next again.
12. In the User Name, Password, and Domain boxes, type the user
credentials for the Controller User, and then click Next.
Note: Use of a Local account with User privileges but not local
administrator rights is recommended, as this will make the Controller more
secure. In any case, internal Windows policy requires that this account
does not have access to Source Depot.
13. Click Install.
14. When the installation is complete, click Finish.

Uninstalling the WTT Controller

All WTT Client operations must be stopped in order to remove the WTT Controller.
1. In Control Panel, click Add or Remove Programs.
2. Select Microsoft WTT RC Controller, and then click Remove.
3. Click Yes to confirm that you want to remove WTT Controller.

Note: The Installer will remove the WTT database, provided that this
database was created upon installation.

Client Setup
The WTT Client installation package installs the client software on test computers
that are used to execute WTT jobs. Because this software ties the client systems
to the Controller and supplies the Automation Datastore with vital information,
each test system used to execute jobs under WTT 2.0 must have this software.
With it, each client can be uniquely identified and analyzed by Sysparse for WTT
and included in an asset pool for testing. The WTT Client can be installed alone or
on a computer with WTT Studio installed. WTT Controller and the WTT Client,
however, cannot be installed on the same computer.
Note: All examples in this section assume that you accept the default settings
for all installation steps.

Software Requirements
WTT Client is supported on the following operating systems: Microsoft Windows
2000 Professional SP4, Windows XP, Windows Server 2003, or Microsoft Windows
codename Longhorn.
WTT Client is supported on the following architectures: x86, Itanium-based, or
AMD64.
Note: Windows 2000 SP4 clients require that MSXML support files be installed
before the WTT Runtime Client can be installed and run. For instructions on
configuring your Windows 2000 SP4 clients, see MSXML Installation.

User Account Requirements


To install the WTT Client, you must be logged into the computer as a user with
administrator permissions.

Installing WTT Client


The WTT Client is installed after the Controller is set up. The WTT Client
installation software is located on the Controller server.
During the installation of the WTT Client software package, you may need to
specify the WTT Client installation directory. This is the directory into which the
core WTT Client files will be installed. Unless there is a specific reason, you can
accept the default setting provided by the Installer.
WTT Triage Tool option - The WTT Client package includes the WTT Triage tool,
which you can optionally choose to install. Installing the WTT Triage tool also
installs the Microsoft Debugger package, which is used by the WTT Triage tool. If
you select the WTT Triage tool, you are asked to select settings for the kernel
debugger. If you have a kernel debugger attached to the system, do not clear the
Kernel Debugger Attached option, and fill in the debugger settings as

appropriate. If you do not have a kernel debugger attached, clear the Kernel
Debugger Attached option. Discussion of the kernel debugger settings is
beyond the scope of this document.
Note: Directory names used for WTT installation may not have spaces within
them. Any directory name with spaces will cause the installation to fail.

To install a WTT Client

1. Remove any previous versions of WTT Client and WTT Studio that are on
the client computer.
2. From the Start menu, click Run.
3. In the Run dialog box, type
\\<server>\wttinstall\client\setup.exe
where <server> is the name of a previously installed WTT Controller.
4. Click OK.
5. Click Next.
6. Read and accept the End User License Agreement, and then click Next
to continue.
7. Click Next, and then click Next again.
8. If ICF is enabled on the target computer, setup will display a dialog
advising you that a port must be opened in the firewall to allow WTT
Client to function. To continue installation, select Yes, and then click
Next.
9. Clear the Kernel Debugger Attached check box, and then click Next.
10. Click Install.
11. Click Finish to exit the Installer.
12. [Optional step] To install Autotriage only, run:
WTTCmd.exe /addsymboluser /user:<username> /domain:<domain> /password:<password>
where:
<username> is the user name.
<domain> is the domain to which the user belongs.
<password> is the password for the user name.
For example:
WTTCmd.EXE /addsymboluser /user:abc /domain:test /password:abc123
Note: The password will be echoed back as you type and stored in plain
text on the client computer; therefore, a test account should be used to
avoid compromising your corpnet credentials.
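
For reference, with a controller named WTTCTRL01 (a hypothetical name) and
the default install share, the setup path in step 3 would look like:

\\WTTCTRL01\wttinstall\client\setup.exe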

To install the Kernel Debugger for the client computer (optional)


The Kernel Debugger can be installed for the client after WTT Client
installation if the Kernel Debugger option was selected.
1. Click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, run the following command:
\\<controller>\<install share>\Debugger\WTTKDSetup.cmd
Note: For more information on this script, run:
WTTKDSetup.cmd /?
from the install share. The script requires network access, so you need
to run the script under an account with network access.
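
For example, with a controller named WTTCTRL01 and the default install share
name (both hypothetical values), the command would be:

\\WTTCTRL01\wttinstall\Debugger\WTTKDSetup.cmd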

To enable the ICF port on the client computer


If clients fail to connect to the WTT database or if jobs get stuck in the
scheduler, ensure that, if Internet Connection Firewall (ICF) is enabled, a
port in the firewall for WTT is enabled on the client.
1. In Control Panel, open Network and Internet Connections.
2. Click Network Connections.
3. Right-click the connection used to talk to the WTT Controller
(usually a network card), and then click Properties.
4. On the Advanced tab, click Settings.
Note: If the WTT Client is already listed, simply verify that the settings
are correct (see below), and verify that the item is enabled (selected).
5. Click Add and provide the following information:
Description of service: WTT Client
Name or IP address: <name of computer you are configuring>
External Port: 1778 (leave as TCP)
Internal Port: 1778
6. Click OK.
7. Verify that the WTT Client entry is selected, and then click OK.
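
On clients running Windows XP SP2, where ICF is surfaced as Windows Firewall,
the same exception can also be added from a command prompt. A hypothetical
equivalent, assuming the netsh firewall context is available on the client:

netsh firewall add portopening TCP 1778 "WTT Client" ENABLE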

Note: WTT Client setup makes a best-effort attempt to ensure that ICF
will not block its operation. If ICF is configured as Off, WTT setup will
succeed. If ICF is configured as On (recommended), setup will add the
port (TCP 1778) to the list of enabled ports. If ICF is configured as
On with no exceptions allowed, WTT installation will fail.
If setup fails to detect the configuration of ICF, it will assume that ICF is
not configured and display a pop-up (only in attended setup) notifying the
user to that effect.

Uninstalling the WTT Client


1. In Control Panel, click Add or Remove Programs.
2. Select Microsoft WTT RC Client, and then click Remove.
3. Click Yes to confirm that you want to remove WTT Client.

MSXML Installation
Before installing WTT Client on a computer running the Microsoft Windows 2000
SP4 operating system, it is necessary to install MSXML.
Note: This step is necessary only if running Microsoft Windows 2000.

To install MSXML

1. From the computer where you plan to install the WTT Client, install
Microsoft Windows Installer 2.0 from:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=4B6140F9-2D36-4977-8FA1-6F8A0F5DCA8F.
2. Install the Microsoft XML (MSXML) Parser 3.0 Service Pack 4 (SP4) from:
http://www.microsoft.com/downloads/details.aspx?FamilyId=C0F86022-2D4C-4162-8FB8-66BFC12F32B0&displaylang=en.
3. Install MSXML 4.0 SP2 (Microsoft XML Core Services) from:
http://www.microsoft.com/downloads/details.aspx?FamilyID=3144b72b-b4f2-46da-b4b6-c5d7485f2b42&DisplayLang=en.
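
To see which MSXML parser versions are already present on a client before
installing, you can list the MSXML DLLs from a command prompt (an informal
check, not part of the official setup steps):

dir %windir%\system32\msxml*.dll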

WTT Studio Setup


The WTT Studio software package installs the user interface used by test
engineers for creating and scheduling WTT jobs and analyzing the test results. It
is not necessary to install WTT Studio on all client machines.
Note: All examples in this section assume that you accept the default settings
for all installation steps.

Software Requirements
To install the WTT Studio, the target computer must be running the following
software.
x86 versions of Windows XP, Windows Server 2003, or Microsoft Windows
codename Longhorn.
Microsoft .NET Framework version 1.1.
Note: WTT Studio setup is currently only supported on x86 architectures.

User Account Requirements


To install the WTT Studio, you must be logged onto the computer using a domain
user account that has local administrative access. Additionally, the computer on
which WTT Studio is to be installed must already have been joined to a domain
trusted by the domain on which the WTT Controller resides.

.NET Framework Installation


Before you install the WTT Studio software package, you must install Microsoft
.NET Framework version 1.1. Other versions of the .NET Framework are not
compatible with WTT 2.0.

Installing WTT Studio


WTT Studio is the WTT user interface used to manage assets, run jobs, and view
results.
During the installation of the WTT Studio software package, you need the
following information:
WTT Studio installation directory. This is the directory into which the core
WTT Studio files are installed. Unless there is a specific reason for changing
them, you can accept the default settings provided by the Installer.
Note: Directory names used for WTT installation may not have spaces within
them. Any directory name with spaces will cause the installation to fail.

To install WTT Studio

1. Remove any previous version of WTT on the computer to be used.

2. From the Start menu, click Run.


3. In the Run dialog box, type
\\<server>\wttinstall\Studio\setup.exe

Where <server> is a previously installed WTT Controller.


4. Click OK.
5. Click Next.
6. Read and accept the End User License Agreement, and
then click Next.
7. Click Next, and then click Install.
8. Click Finish to exit the Installer.
9. If necessary, the following applications will install automatically
after the WTT Studio Installer has exited:
Crystal Reports Runtime
Microsoft Office XP Web Components
Microsoft Office XP Primary Interop Assemblies (PIA)
No user input is necessary.

Chapter 3: Asset Tracking


Within Windows Test Technologies (WTT), the computers and devices used for
testing are considered assets. The WTT Asset Tracking capabilities form a shared
resource for tracking hardware used both by software test engineers and by the
OEM vendors who provide computers to the test labs. It is also used to support
test automation and reporting. WTT Asset Tracking supports dynamic asset pool
management to enable more effective computer allocations and configuration for
lab managers and testers.

The following subjects are included in this chapter:


Asset terminology
Asset Pools
Asset Tracking Security
Asset Tracking Best Practice Recommendations
Getting Started in Asset Management
Asset Pool Management Procedures
Asset Tracking Procedures

Asset Terminology

Asset
An asset is a computer or a device (whether component or peripheral) that is suitable for use in test cases.

Asset pool
An asset pool is a virtual collection of computers and/or devices, created by a user, to help manage the testing process. An asset owner may create one or more pools to manage those assets. An asset pool is also sometimes referred to as a machine pool.

Associated Device
A device that is sent by a vendor along with a computer. Associated devices are required to stay with the computer and are returned to the asset's permanent owner when the computer is either retired or returned to the vendor. Common examples include laptop AC adaptors or network cards.

Attached Device
A device that is added to a computer on a temporary basis. Common examples include printers and scanners.

Automation Datastore
The database used by WTT to store client configuration and test case information.

Child Asset Pool
An asset pool contained within another pool. Several child asset pools can be created as part of a parent asset pool in the asset pool hierarchy.

Controller
A server that is configured to host the WTT Job Delivery Agent and WTT Execution Agent, along with other services and applications integral to the operation of WTT.

Current Owner
A user who currently has a given asset in his or her possession. This need not be the same person as the Permanent Owner.

Default Pool
The asset pool in which computers are initially placed when registered with a WTT Controller. From the Default Pool, they may be moved to specific asset pools for better asset or test management.

Permanent Owner
The user who is permanently responsible for a specific asset.

Registration
The process of entering the configuration information of a computer or tracked device into the Asset Tracking portion of WTT. This may be done manually or using Sysparse.

Sysparse
A tool within WTT that takes a snapshot of a computer's configuration. It runs on a client computer to inventory the computer's hardware components and provide information for WTT to use to determine which computers to schedule for testing.

Temporary Owner
A user who functions as the Current Owner of an asset while borrowing it from the Current Owner for a limited period of time for testing.

Target Owner
The prospective owner of an asset (when permanent ownership is being transferred from one user to another).

Tracked Device
A device that has an asset tag and/or serial number, and a defined Device Label. It is almost always an attached device, such as a printer or a scanner, which is inventoried separately, rather than an associated device, which is not.

Transfer
The method for changing ownership of an asset from one permanent owner to another.

Vendor
An original equipment manufacturer (OEM). Within WTT, a Vendor is often involved in the supply of testing equipment to a test lab.

Asset Pools
An asset is a computer or a device (component or peripheral) suitable for test
cases. These are the client computers in the WTT test environment.
Computers and certain devices can be grouped into logical units by the WTT user.
These groups are called asset pools. Asset pools are displayed in a tree-view and
can be organized hierarchically. This allows for structured organization of large
numbers of computers as well as more flexible targeting for deployment of Jobs.
Properties of an asset pool include its name, its Job Delivery Agent computer, and
permissions. Users always have permission to browse computers, but permission
to schedule or execute jobs on an asset pool is limited and can be
controlled by the asset pool owner.
Asset pools can be scheduled as a unit. When this option is selected, the
scheduler treats all computers in the pool as a single unit so that if any computer
or device from the pool is reserved by the scheduler, then all computers and
devices in the pool are reserved. This is particularly useful when the computers or
devices have some physical connection that ties them together. An example of
this is MSCS Clustering set up with shared storage. In this case it would be
undesirable for a job deployment to split across multiple clusters when the
intention is to execute on all nodes in the same cluster.

Figure 3.1 Asset Pool Hierarchy Examples

Getting Started in Asset Management


My Assets provides WTT users with a centralized location for managing all assets
that they control. From My Assets, it is possible to do local and global searches,
as well as initiate and respond to loan and transfer requests. At all times, the
assets controlled by the logged-in user are at the user's fingertips.
This feature may be accessed within WTT by clicking My Assets on the Assets
menu.
On My Assets, three tabs on the left-hand pane provide different functionality:

My Search
The My Search tab displays all asset pools supported by the selected controller.
From My Search, you can register new assets, create asset pools, move assets
from one pool to another, place dimensions on the asset pools, or modify security
permissions.

Global Search
The Global Search tab provides a location for you to search through the
database containing all computers and devices registered as assets in WTT.

My Actions
The My Actions tab allows you full control over all loan and transfer actions
relating to your assets, including current and pending asset loans, approvals,
returned loans, and ownership transfers.

Asset Tracking Best Practice Recommendations


A number of practices have been observed to help in managing assets more
effectively in a testing organization. These include the following:
When registering devices, be as specific as possible when typing the device
label so that it can be found easily in a search.
When registering computers or devices, double-check all asset tag numbers to
be certain the correct tag number is entered. The WTT Asset Tracking Database
will not permit duplicate asset tag numbers to be used. To change tag numbers
or other information after initially registering a computer or device, see
Viewing and Editing Computer Details.

Asset Pool Management Procedures


An integral part of WTT testing is the use of asset pools to group related
computers and devices into formations convenient for testing. Care in the
organization of these asset pools can make administration of your assets
significantly easier.
Managing asset pools is important from the beginning: when the WTT Client is
initially run, all computers and devices are automatically added to the Default
Pool. They must then be moved to another asset pool before they can be used for
a job. Assets may be organized into asset pools in any manner convenient for
testing, although typically, all assets in an asset pool are related.
As well as providing a convenient holding group for individual computers, asset
pools may also be used for grouping computers together for testing purposes
using the Schedule as a unit option. With this option selected, running a test
on one computer within the pool means that all the computers in the pool
will run the test. Testers should be aware that this may impact job scheduling,
however. For more information, see Scheduler Prioritizing.
Note: In order to change an asset pool, including moving assets or adding a
child asset pool to it, it is necessary to have Write permission for that pool.

To add an asset pool


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click where you wish to add this asset pool,
and then click Add Asset Pool.
To add an asset pool to the root, right-click the root symbol [$].
To add a child asset pool to another asset pool, right-click the asset
pool that will be the parent.

4. In the Name box, type a name for the new asset pool.
Note: The asset pool name must be unique within the selected
controller.
5. Select a Controller from the Job Delivery Agent drop-down list.
Note: If you have multiple options available and no job delivery agent is
selected, jobs scheduled for this asset pool will not be delivered.
6. If you wish the scheduler to treat all assets in the pool as a single unit,
select the Schedule as unit option.
Note: If this option is selected, all assets within the pool will be grouped
as a single testing unit for test selection.
7. Click OK.

To move a computer or device between Asset Pools


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the pool from which the asset should be
moved.
Note: Once a computer has been initially registered with WTT, it is
automatically placed in the Default Pool, and it is from there that it will
initially need to be moved.
4. On the Computer List tab, right-click the computer, and then click Move
Asset.
5. In the Asset Pool box, click the specific asset pool to which you want to
move the computer, and then click OK.
Note: Users must have Write permissions for the asset pool to which the
asset is being moved. Pools on which the logged-on user has appropriate
permissions appear in black in the Asset Pool box. All others appear in
red.
6. Click Move Asset.
Note: Computers or devices may also be moved from one asset pool to
another by using drag-and-drop.

To delete an asset pool


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the selected asset pool, and then click
Delete.
4. Click Yes.

Note: Deleting a parent asset pool will also delete all child asset pools.
All computers and devices within those asset pools will then be moved to
the Default Pool.
Users cannot delete the $, Default Pool, or System asset pools.

To rename an asset pool


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the selected asset pool, and then click
Rename.
Note: The asset pool name can also be made editable by single-clicking
the pool name.
4. Type a new name for the asset pool, and then click elsewhere on the
screen to save the new name.
Note: All asset pool names must be unique within the selected controller.

To edit asset pool properties


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the selected asset pool, and then click
Properties.
4. Make any changes necessary, and then click OK.

To add a user to an asset pool


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the selected asset pool, and then click
Properties.
4. On the Security tab, click Add.
5. Click the user or user group desired, and then click OK.
6. Select the appropriate permissions check boxes, and then click OK.
Note: Users and user groups will not be available within the user list
until the WTT administrator adds those users. Additionally, all new users
by default have all permissions denied for asset pools.

To remove a user from an asset pool


1. On the Asset menu, click My Assets.

2. Select your controller from the Controller drop-down list.


3. On the My Search tab, right-click the selected asset pool, and then click
Properties.
4. On the Security tab, click the selected user, and then click Remove.
5. Click OK.

Asset Tracking Procedures


Assets are the computers and devices used for testing. As part of the WTT suite
of technologies, Asset Tracking is a shared resource allowing software test
engineers and OEM vendors who provide test lab equipment to track hardware. It
can also be used to help support test automation and reporting. Asset Tracking
supports dynamic asset pool management, enabling easy and convenient
computer allocations and configuration for lab managers and testers.

The following subjects are included in this section:


Registering assets
Viewing computer details
Searching for assets
Transferring an asset
Asset loans
Vendor management
Standard asset reporting
Removing a computer from WTT

Registering Assets

To register a computer using the wizard


Note: Once added, a test computer cannot be removed from WTT.
1. On the Asset menu, point to Register Asset, and then click Computer.
2. The Computer Registration Wizard opens. Click Next.
3. Select your controller from the Controller drop-down list.
4. Select one of the following options:
Run Sysparse Now
Sysparse will automatically populate the configuration information from your computer.
Create Sysparse Floppy
A floppy disk with Sysparse configuration information will be created.
Register computer without running Sysparse
Configuration information for your computer will need to be added manually.
5. Click Next. If you chose Run Sysparse Now, then click Run Sysparse.
6. Enter configuration information for the computer being registered.
Note: All required fields are printed in bold.
7. Click Next. A summary of the entered information is displayed. If this
information is correct, click Finish.

To register a computer using the classic (manual) method


Note: Once added, a test computer cannot be removed from WTT.
1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the target asset pool, and then click
Register Computer.
4. Enter configuration information for the computer you are registering.
5. Click Save, and then click No to exit.

To register (add) a device


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, right-click the asset pool in which to place the new
device, and then click Register Device.
4. Enter configuration information for the device you are registering.
5. Click Save, and then click No to exit.
Note: Devices may also be added by going to the Asset menu, pointing
to Register Asset, and then clicking Device. However, if this is done, the
device must then be moved to the appropriate asset pool before it can be
used.

To integrate a manually added computer with a Sysparse XML file


1. On the Asset menu, point to Asset Management, point to Sysparse,
and then click Sysparse Integration.
2. Select your controller from the Controller item list.
3. Right-click the desired computer, and then click Add Sysparse File.
4. In the Sysparse File box, enter the path for the Sysparse file to be
added, or click Browse to browse for the file.
5. Click Save to add the Sysparse file.
6. Click OK.

To merge files of a manually entered computer with its Sysparse run entry

1. On the Asset menu, point to Asset Management, point to Sysparse,
and then click Sysparse Merge Manager.
2. Select your controller from the Controller item list.
3. Click the desired computer in the Manually Added Computers box, and
then click its counterpart in Sysparse Run Computers box.
4. Click Save, and then click OK if necessary.
When a computer's manually entered configuration file is merged with a
Sysparse-created configuration file, all information is merged into the
manually entered file and the Sysparse-created entry is deleted from the
database.

To bulk add computers


Using the Bulk Add feature, several computers can be registered with WTT in
the same operation. To do this, the computers must be similar: the series and
model name must be the same for all of the computers that you are bulk
adding. When you bulk add computers, you can enter the device asset tags,
vendor information, and location details. In addition, before saving the
computer details, you can edit the details of individual computers, including
the asset tag, serial number, computer name, ownership, and location.
Note: Only popular vendors are automatically displayed when adding new
computers. If the computer vendor desired is in the database but is not a
popular vendor, please click More Vendors to see a complete list. If a
desired vendor is not yet in the database, please refer to the instructions on
Vendor Management to include it before attempting to add the computers.
1. On the Asset menu, click Bulk Add Computers.
2. Select your controller from the Controller drop-down list.
3. Type the beginning asset tag number in Start AssetTag, and, in the
Number of Computers box, enter the number you wish to add. For more
information on asset tag format, see the Asset Tag entry in the Glossary.
4. Enter configuration information for the computers you are registering.
Note: Although all computers added using Bulk Add must share the
same basic configuration details, details of individual computers may be
edited prior to saving the bulk add.
5. Click Bulk Add.
6. Type individual computer details or edit information in the Preview Bulk
Add Computers dialog box.

7. Click Save, and then click Yes.


8. Do one of the following:
If you wish to add additional computers, click Yes.
If you are done using Bulk Add, click No.

To register a device using the wizard


1. On the Asset menu, point to Register Asset, and then click Device.
Note: You may also right-click on the desired asset pool and click
Register Device.
2. Select your controller from the Controller drop-down list.
3. Enter configuration information for the device you are registering.
4. Click Save, and then click No to exit.

To associate a device with a computer


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the target asset pool, and then click the
Refresh button to search for assets within that pool.

4. On the Computer List tab, right-click the computer with which you wish to
associate the device, and then click Associate Device.
Note: The computer to which the device is to be associated must
currently have a computer status of Ready.
5. Click the device to be associated.
Note: The device must have already been registered in order to be
associated with an already registered computer.
6. Click Associate Device.
7. Click OK.
Note: Because associated devices are intended to be permanently
connected to a computer, WTT allows devices to be disassociated from the
computer only if the device is externally connected to that computer. If a
device is intended for only a temporary connection to a computer, it should
be "attached," rather than "associated."

To attach a device to a computer


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.

3. On the My Search tab, click the target asset pool, and then click the
Refresh button to search for assets within that pool.

4. On the Computer List tab, right-click the computer to which you wish to
attach the device, and click Attach Device.
Note: The computer to which the device is to be attached must
currently have a computer status of Ready.
5. Click the device to be attached.
Note: The device to be attached must have already been registered in
order to be attached to an already registered computer.
6. Click Attach Device.
7. Click OK.

To remove a device from a computer


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the target asset pool, and then click the
Refresh button to search for assets within that pool.

4. On the Computer List tab, right-click the computer to which the device is
attached, and then click Edit Computer Details.
5. Click View Connected Devices.
6. Expand the tree view of the target computer and navigate to the
connected device in question.
7. Right-click the selected device and click Properties.
8. Click Advanced.
9. Click Unattach Device or Unassociate Device if available.
Note: If these options are not available, the device is permanently
connected to the computer and may not be removed.
10. Click Unattach Device or Unassociate Device.
11. Click OK.

Viewing and Editing Computer Details

To view computer details


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.

3. On the My Search tab, click the asset pool containing the desired
computer, and then click the Refresh button to view assets within that pool.
4. On the Computer List tab, right-click the desired computer, and then click
View Computer Details.
5. Click OK to close the dialog box or click View Connected Devices to see
device details.
6. To see individual device details, expand the computer folder in the
Connected Devices dialog box.
7. Right-click the specific device you wish to view, and then click Properties.
8. Click the Close button on Device Properties, and then click the Close
button on Connected Devices.
9. Click OK to close the dialog box.

To edit computer details


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the asset pool containing the desired
computer, and then click the Refresh button to view assets within that pool.
4. On the Computer List tab, right-click the desired computer, and then click
Edit Computer Details.
5. Edit appropriate computer details as necessary.
6. Click Save, and then click OK to save your changes.

Searching for Assets


WTT allows you to search for assets within the database using either Quick
Search (which searches only for assets belonging to the logged-in user) or Query
Builder (which searches for assets globally unless specifically limited).

To search for assets using Quick Search


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the asset pool you wish to search.
Note: If no asset pool is selected, the Default Pool is searched.
Recursive searching is not supported: only the selected asset pool is
searched; child asset pools are not.
4. Select the type of asset you wish to search for from the drop-down list.
5. On the Quick Search tab, select your search parameter from the
Parameter drop-down list.
6. Type the string to search for in the Search String box, and then click the
Refresh button.

Quick Search will only search for assets that belong to the logged-in user.

To search for assets using Query Builder


1. On the Asset menu, click My Assets.
2. Select your controller from the Controller drop-down list.
3. On the My Search tab, click the asset pool you wish to search.
Note: If no asset pool is selected, the Default Pool is searched.
4. Select the type of asset you wish to search for from the drop-down list.
5. On the Query Builder tab, click the empty cell beneath Field Name and
select a parameter to search for from the drop-down list.
6. Click the empty cell beneath Operator and select a query operator from
the drop-down list.
7. Click the empty cell beneath Value and type a search string or value for
which to search.
Note: By default the Path row displays the selected asset pool. To use
a different value, click the cell and type another value for the search
criteria.
8. Click the empty cell beneath Logic and select the appropriate logical
operator from the drop-down list.
Note: By default, recursive searching is not supported and only the
selected asset pool will be searched (child asset pools are not). However,
if the logical operator in the Path row is changed to OR, then the query
will include results that either match the selected asset pool or other
parameters chosen, rather than just results that match both parameters.
9. Click the Refresh button to start the search.

Note: Query Builder will not limit searches to the assets of the logged-in
user unless the DSUserAlias parameter has been specified in the search
criteria.
Search results will be displayed on the Computer List tab or the
Device List tab depending upon the type of asset being sought.

Transferring an Asset
Because of the nature of enterprise testing in a large company, it is often
necessary to transfer ownership of one or more assets from one tester or test
group to another. The WTT Asset Transfer Wizard streamlines this process,
allowing you to transfer ownership of an asset to someone else or to request
ownership of an asset for yourself.

To transfer one or more of your assets to someone else


1. On the Asset menu, point to Asset Management, and then click Asset
Transfer Wizard.
2. Click Next.
3. Select your controller from the Controller drop-down list.
4. Click Transfer Ownership, and then click Next.
5. Search for the asset to transfer using Query Builder, Quick Search, or
Advanced Search.
6. Click Next.
7. Click the asset to transfer.
8. In the Target Owner box, type the alias for the planned owner.
9. In the Reason text box, type the reason for the transfer.
Note: An e-mail message will be sent to the target owner advising of
the transfer request.

To request an asset transfer


1. On the Asset menu, point to Asset Management, and then click Asset
Transfer Wizard.
2. Click Next.
3. Select your controller from the Controller drop-down list.
4. Click Request for current ownership of an asset, and then click Next.
5. Search for the asset to transfer using Query Builder, Quick Search, or
Advanced Search.
6. Click Next.
7. Click the asset to transfer.
8. In the Reason text box, type the reason for the request.
9. Click Next.
10. If a warning dialog box appears, click OK.
11. Click Finish.
Note: An e-mail message will be sent to the current asset owner
advising of the transfer request and the current owner will either
approve or deny the request.

Asset Loans
Exchanging assets is a common and often necessary practice while conducting
product testing. To facilitate this, WTT allows for easy loan of assets from one
current owner to another (the permanent owner remains the same, however).
Notification of loan approval or rejection is automatically sent by e-mail.
The following basic rules are important to keep in mind when requesting an asset
loan:
A borrower may place a loan request for an asset only if the borrower is
not the current owner of the asset.
Multiple users may place loan requests for a single asset at the same
time. The current owner will choose from among the requests.
A single user may place only one request for a given asset at a time.

You can perform the following tasks within Asset Loans.

To request an asset loan


1. On the Asset menu, point to Asset Management, and then click Asset
Loan Wizard.
2. Click Next.
3. On the Quick Search tab, select Computer or Device from the drop-down list.
Note: A more detailed search is available using the Advanced Search
tab.
4. From the Parameter drop-down list, select the type of search parameter,
and then enter a search string.
5. Click the Refresh button to search for a matching asset.

6. Select an asset to request, and then click Next.


7. Enter your building and office information in the Requestor's Location
Details group box.
8. In the Loan Period Details group box, use the drop-down calendars to
select the requested loan period in the Loan Start Date and Loan End
Date boxes.
9. Enter a brief reason for the loan request in the Reason text box.
10. Click Next, and then click Finish.
An e-mail notification is sent to the current owner of the asset with details of
the asset request. If the current owner approves the loan, an e-mail message
will be sent to you with details of the approved loan.

To return a loaned asset


1. On the Asset menu, click My Assets.
2. On the My Actions tab, expand the Asset Loan folder, and then double-click Approved Loans.
3. Click the loaned asset to be returned and then click Return Loan.
The permanent owner of the asset will be alerted to the return, and upon
acceptance, the loan entry will be removed from the Asset Tracking History
table.

To approve or deny an asset loan


1. On the Asset menu, click My Assets.
2. On the My Actions tab, expand the Asset Loan folder, and then double-click Pending Loan Requests.
3. Double-click a specific loan request to modify loan details including loan
end-date and other information.
4. Click the desired loan in order to approve or deny the request, and then
click Approve Loan or Deny Loan.
Note: If a loan is denied, it is necessary to enter a comment under
Enter Reason.
If the same asset is requested by multiple users, an approval for one user
results in a rejection for all others.

To accept or reject an approved loan request


1. On the Asset menu, click My Assets.
2. On the My Actions tab, expand the Asset Loan folder, and then double-click Approved Loans.
3. Click the desired loan, and then click Accept Loan or Reject Loan.
If an approved loan is rejected, the reasons for rejection may be
specified in the Comments box.

To accept an asset return


1. On the Asset menu, click My Assets.
2. On the My Actions tab, expand the Asset Loan folder, and then double-click Returned Loans.
3. Click the returned asset.
4. Click Accept Returned Loan.

The temporary owner of the asset will be alerted to the acceptance of the
return, and the loan entry will be removed from the Asset Tracking History
table.

To revoke a rejected asset loan request


1. On the Asset menu, click My Assets.
2. On the My Actions tab, expand the Asset Loan folder, and then double-click Rejected Loans.
3. Click the rejected asset.
4. Click Revoke Rejected Loan.
The loan entry will be removed from the Asset Tracking History table.

To send loan overdue mail


WTT Asset Tracking automatically sends loan overdue mail to a user who has
not returned a loan by the due date.

Vendor Management
In order to allow for testing on the widest range of OEM products, it is frequently
necessary to add to or modify Vendors or Vendor products. This is done through
the WTT Vendor Management tools.
The following tasks can be performed in Vendor Management.

To add a vendor
1. On the Asset menu, click Vendor Management.
2. Right-click the Vendor list, and then click Add Vendor.
3. In the Vendor Name box, type the name of the new vendor.
4. In the Description box, type a brief description of the vendor's service or
material.
5. Select the Flag as Popular check box to make the newly added vendor
appear as a popular vendor.
Note: New vendors may only be added by auditors.
6. Click Save, and then click No.

To add a vendor division


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of active vendor divisions will be displayed on the Vendor
Divisions tab.

3. On the Vendor Divisions tab, right-click the Vendor Division list, and
then click Add Vendor Division.
4. In the Vendor Division 1 group box, type the new division name in the
Vendor Division Name box.
5. Type a brief description of the division function in the Description box.
6. Click Save, and then click No.
Up to four separate Vendor Divisions may be added at once.

To add a computer model to a vendor entry


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of the vendor's computer models will be displayed on the
Computer Model/Series tab.
3. On the Computer Model/Series tab, right-click the Model list, and then
click Add Computer Model.
4. In the Model Name box, type the name of the model to be added.
5. In the Description box, type a brief description of the new model.
6. Click Save, and then click No.

To add a series to a computer model


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of the vendor's computer models will be displayed on the
Computer Model/Series tab.
3. On the Computer Model/Series tab, click the specific model to add the
series to.
4. Right-click the Series list, and then click Add Computer Series.
5. In the Series Name box, type the name of the series to be added.
6. In the Code Name box, type the series code name if it has one.
7. Select a vendor from the ODM Vendor drop-down list if appropriate.
8. In the Description box, type a brief description of the new series.
9. Click Save, and then click No.

To edit vendor information


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, right-click a vendor name, and then click Edit
Vendor.

3. Make any edits necessary to the vendor information.


Note: If the Retire Vendor check box is selected, the vendor will no
longer appear in the Vendor List.
4. Click Save, and then click OK.

To edit vendor division information


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of active vendor divisions will be displayed on the Vendor
Divisions tab.
3. On the Vendor Divisions tab, right-click the division to be edited, and then
click Edit Vendor Division.
4. Make any edits necessary to the division information.
5. Click Save, and then click OK.

To edit computer model information in a vendor entry


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of the vendor's computer models will be displayed on the
Computer Model/Series tab.
3. On the Computer Model/Series tab, right-click the specific model, and
then click Edit Computer Model.
4. Make any edits necessary to the model information.
Note: If the Retire Model check box is selected, the model will no
longer appear in the Computer Model list.
5. Click Save, and then click OK.

To edit computer series information in a vendor entry


1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, click a vendor name.
A complete list of the vendor's computer models will be displayed on the
Computer Model/Series tab.
3. On the Computer Model/Series tab, click the specific model of which the
series is part.
4. Right-click the specific series, and then click Edit Computer Series.
5. Make any edits necessary to the series information.

Note: If the Retire Series check box is selected, the series will no
longer appear in the Series List.
6. Click Save, and then click OK.

To search for vendors


1. On the Asset menu, click Vendor Management.
2. In the Search Vendor group box, specify your search criteria.
3. Click Search.

To audit vendors
1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, right-click a vendor name, and then click Audit
Vendors.
3. Select a vendor check box in the Duplicate Vendors box.
4. If the selected vendor is a valid entry, click Correct Entry and the vendor
will be added to the Existing Vendors box.
5. If the selected vendor is not valid, click Delete Entry and the entry will be
deleted from the list.
6. If the selected vendor is a duplicate of an existing vendor entry, then click
the correct vendor from the Existing Vendor list, select the duplicate
entry check box in the Duplicate Vendors box, and then click Duplicate
Entry. The duplicate entry will be removed from the Duplicate Vendors
box.
7. Click No, and then click Cancel.

To audit computer models


1. On the Asset menu, click Vendor Management.
2. On the Computer Model/Series tab, right-click the Models List, and
then click Audit Computer Models.
3. In the Vendor drop-down list, click the computer model vendor.
4. Select a model check box in the Duplicate Models box.
5. If the selected model is a valid entry, click Correct Entry and the model
will be added to the Existing Models box.
6. If the selected model is not valid, click Delete Entry and the model entry
will be deleted from the list.
7. If the selected model is a duplicate of an existing model entry, then click
the correct model from the Existing Model list, select the duplicate entry

check box in the Duplicate Models box, and then click Duplicate Entry.
The duplicate entry will be removed from the Duplicate Models box.
The Duplicate Model entry will be replaced with the selected Existing
Model entry.
8. Click No, and then click Cancel.

To audit computer series


1. On the Asset menu, click Vendor Management.
2. On the Computer Model/Series tab, right-click the Series List, and
then click Audit Computer Series.
3. In the Vendor drop-down list, click the vendor of the computer model to
which the series belongs.
4. In the Model drop-down list, click the computer model to which the series
belongs.
5. Select a series check box in the Duplicate Series box.
6. If the selected series is a valid entry, click Correct Entry and the series
will be added to the Existing Series box.
7. If the selected series is not valid, click Delete Entry and the series entry
will be deleted from the list.
8. If the selected series is a duplicate of an existing series entry, then click
the correct series from the Existing Series list, select the duplicate entry
check box in the Duplicate Series box, and then click Duplicate Entry.
The duplicate entry will be removed from the Duplicate Series box.
The Duplicate Series entry will be replaced with the selected Existing
Series entry.
9. Click No, and then click Cancel.

Standard Asset Reporting


This section of Asset Tracking provides a standard asset report for
computers and a standard asset report for devices.

To build a report about computer details


1. On the Asset menu, point to Reports, and then click Computers Across
WTT.
2. Click Next.
3. Select Computer from the asset type drop-down list.
4. Run a search on desired criteria using one of the primary search types:
Query Builder, Quick Search, or Advanced Search.

A list of computers meeting the search criteria is displayed.


5. To customize your report, select the Allow selection of fields and
sorting order check box.
6. Add or remove fields on the report as follows:
To add individual fields to the report, click the field in the Available
Fields box and then click the right-arrow button.
To add all available fields to the report, click the double-right-arrow
button.
To remove specific fields from the report, click the field in the Selected
Fields box and then click the left-arrow button.
To remove all available fields from the report, click the double-left-arrow button.
7. To change the order of the displayed fields, click the specific field in the
Selected Fields box and then click the Up or Down arrow buttons until
the fields are in the desired order.
8. Set the fields to be sorted as follows:
Click the field to be sorted in the Available Fields box, and then click
the right-arrow button. Repeat for each field to be sorted.
For each field to be sorted (the fields in the Selected Fields box), click
Ascending or Descending to set the sort type.
Click each field and the Up or Down arrow to adjust the sort order for
the fields.
If more than one field is added to the Selected Fields box to sort, the
topmost field will be sorted first, followed by the other fields in the
order that they are listed.
Remove any unwanted sort fields by clicking on that field in the
Selected Fields box, and then click the left-arrow button.
9. Click Export to Excel, type a report name in the File name box, and
then click Save.
10. Click OK, and then click Finish.

To build a report about device details


1. On the Asset menu, point to Reports, and then click Devices Across
WTT.
2. Click Next, and then select Device from the asset type drop-down list.
3. Run a search on desired criteria using one of the primary search types:
Query Builder, Quick Search, or Advanced Search.
A list of devices meeting the search criteria is displayed.

4. To customize your report, select the Allow selection of fields and
sorting order check box.
5. Add or remove fields on the report as follows:
To add individual fields to the report, click the field in the Available
Fields box and then click the right-arrow button.
To add all available fields to the report, click the double-right-arrow
button.
To remove specific fields from the report, click the field in the Selected
Fields box and then click the left-arrow button.
To remove all available fields from the report, click the double-left-arrow button.
6. To change the order of the displayed fields, click the specific field in the
Selected Fields box and then click the Up or Down arrow buttons until
the fields are in the desired order.
7. Set the fields to be sorted as follows:
Click the field to be sorted in the Available Fields box, and then click
the right-arrow button. Repeat for each field to be sorted.
For each field to be sorted (the fields in the Selected Fields box), click
Ascending or Descending to set the sort type.
Click each field and the Up or Down arrow to adjust the sort order for
the fields.
If more than one field is added to the Selected Fields box to sort, the
topmost field will be sorted first, followed by the other fields in the
order that they are listed.
8. Remove any unwanted sort fields by clicking on that field in the Selected
Fields box, and then click the left-arrow button.
9. Click Export to Excel, type a report name in the File name box, and
then click Save.
10. Click OK, and then click Finish.

Chapter 4: Jobs
This chapter provides information about working with jobs. Within Windows Test
Technologies (WTT), jobs form the primary action being performed, and consist of
the individual tasks and attributes forming a test sequence. Jobs may be a single
test or a group of tests, and can be limited to a single computer on one controller,
or can be distributed across multiple computers and controllers. A thorough
understanding of jobs is therefore essential to utilize WTT effectively as a test
framework.

The following subjects are included in this chapter:


Fundamental Jobs Concepts
Jobs Best Practice Recommendations
Using the Job Explorer Tree View
Creating a Job Feature or Category Node
Creating and Editing Jobs
Using Job Explorer
Using the Scheduler
Using Job Monitor
Using the Result Explorer
Using Result Collection Explorer
Test case management

Fundamental Jobs Concepts


The end-user environment of WTT test automation includes a number of
fundamental concepts:

Jobs
A job is a means of automating test cases and is a collection of tasks and
attributes forming a testing sequence. It can be a single test or a group of tests,
and can include tasks such as copying test files, setting test shares, running the
test, and result reporting. Essentially, a job is the amalgamation of the following
information needed to complete tests:
Runtime parameters.
Logical Machine Sets, which are sets of computer requirements and
constraints that you define.

Tasks, or steps involved in running the test.
Task dependencies, which determine the logical flow between the
tasks in the job.

Job Feature nodes


Features are job groupings organized within the hierarchical tree view that users
see in Job Explorer. These allow testers to sort tests in a meaningful manner and
execute jobs in a single group if appropriate. A job feature node can also have a
hiearchy of job sub-features created by the tester for better test control. Feature
nodes can also have security permissions applied to each node individually.
To see the individual jobs within a job feature node, click Job Explorer on the
Explorers menu, and then click a job feature node within the hierarchical tree
view. The results pane displays the jobs saved within that node. Job ID and Name
are displayed by default, and the results pane can be configured to display
additional job information as well.

Job Roles
Job roles provide information to WTT about how the job will be used. Different
roles place different restrictions on jobs.
Automated Job
An automated job is one where the individual steps required to execute
the test case are automated so as to require little hands-on action by the
test engineer during actual execution.
Library Job
A Library job is one that can be referenced from within the tasks of
another job. A library job is like a normal job; however, a library job may
only be used by another job and therefore must not have a defined LMS.
Additionally, it cannot be scheduled directly. (Deprecated term: Sub job.)
A library job allows reusability of test cases throughout WTT. However,
it can only use resources that were previously given to its parent job; thus,
a library job only has access to the parameters and LMSs that its
parent job has. A library job also cannot contain references to additional
library jobs; rather, it is limited to one embedded layer.
Note: Library jobs are not designed to be executed by themselves, but
are rather hosted within another job.
Manual Job
A manual job is one where the individual steps required to execute the
test case are handled by the test engineer in a hands-on fashion.
Config Job
A Config job is one that performs a specific setup activity, such as a smart
installation or smart cleanup, in order to provide the asset configuration
needed for the execution of another job.

Tasks
A task is the smallest executable set of operations that a test engineer normally
defines for a job. Tasks specify what the test will do, as well as the action to take
if the job fails. All tasks have a number of characteristics in common:
Each task is assigned to run on one or more logical machine sets. If an
LMS contains more than one computer, then the task is normally
duplicated and run on all computers mapped to the LMS.
Copy File and EXE tasks are associated with their own run context,
including domain, user name, and password, as well as the running
directory.
Each task can be selected to include its result as part of the job's result
statistic.
Note: Copy File and Copy Results tasks are exceptions to this rule
and their results are not included here.
Task Run Phase
Tasks may be set to run within a job during one (or more) of several distinct
phases of the execution:
Setup - Specific tasks for initial job setup are executed.
Regular - Normal tasks such as copy file or copy result are executed.
Cleanup - Tasks for performing job cleanup operations are executed.
The run phase of tasks is set when adding tasks to a new or existing
job.
Task Types
The specific type of task determines what type of actions are performed if the
task fails. A task can be one of the following types:
Executable Task - A command line to any executable.
Copy File Task - A method to mass-copy files from a remote location to
the computer under test.
Copy Results Task - A method to mass-copy results from a local test
computer to a central logging location. Results are copied to a dynamically
generated destination directory or subfolder.
Run Job - A task that runs a library job. The library job can be located
anywhere within the Feature tree; however, the library job that runs within
this task will only be able to access resources (such as global parameters)
that the calling job can access.

Manual Prompt - A task for providing instructions or asking questions of
the end user. The instructions given to the user are specified along with
any output that is required.
Note: Only executable tasks can specify a reboot as part of their normal
operation. These tasks set a reboot flag in the internal configuration file that
notifies the execution agent (EA) of a pending reboot. If the reboot occurs
within a predetermined timeout period, the task is marked as Pass by the EA.
If reboot occurs outside the timeout period, the task is marked as Fail.
Task Failure Actions
If a task fails, the following failure actions are available:
Fail the job and stop the current job run.
Ignore the failure and continue with the job as normal.
Fail the job, stop the current job-run and freeze the computer.

Task Dependencies
Task dependencies define the relationship (execution order) between tasks across
individual computers or LMSs within a job. They are the basis for creating
complex client-server test scenarios where one application might depend on a
number of actions on different computers before it can begin to execute.
Task Dependencies Types
Types of Task Dependencies include:
Parallel - All tasks within the job are executed simultaneously.
Sequential - Tasks within the job are executed serially, in the task list
order set prior to execution.
Custom - Tasks within the job are executed according to behavior that the
test engineer sets, including:
o A task is not executed until all of the tasks on which it depends have
been executed on all target computers.
o A task is not executed until the task on which it depends has been
executed on the same computer.
o A task is not executed until the task on which it depends has been
executed on the previous computer.

Parameters
Runtime parameters behave like environment variables, but are not restricted to
a specific computer and are not allowed within constraints. Parameters function
as placeholders in the job definition. They allow each task to accept user data
that can be defined either locally or globally as a job is created.
Note: Nesting parameters is not permitted within WTT.
Types of parameters
There are two types of parameters:

Local A parameter created and used for a specific job. Local parameters
are created as a job is being created.
Global A parameter created for a controller that can be used for any job
on that controller. Global parameters are created by clicking Parameter
on the Admin menu in WTT Studio.
Default parameters
WTT provides a number of default parameters, based on the characteristics of the
specific test computer or job being used. These include the following set of
default parameters:
Computer Config Parameters - Computer properties such as operating
system and language are available as parameters. For example, WTT\OS
and WTT\Language.
Run-time Parameters - Run-time properties of individual specific jobs are
populated as parameters, including:
WTTJobName - The name (TCM name) of the job that this task is part of.
WTTJobGuid - The result GUID that this task is part of.
WTTLMSName - The LMS that this task is assigned to.
WTTTargetMachineName - The physical computer that this task is running
on. You can also get this from the [machinename] environment variable.
WTTRunGuid - GUID of the run that this task is part of.
WTTRunWorkingDir - Default working directory of the tasks.
<LMSName> - Mapped to a comma-separated list of the physical
computers that this LMS is mapped to.
WTTFullName - Trace name, or fully qualified name of the job including the
feature path.
WTTControllerName - Name of the Controller/PD computer.
WTTDbMachineName - Name of the SQL Identity Server computer.
WTTDbName - Name of the logical datastore.
WTTMachinePoolName - Name of the machine pool being scheduled.
WTTTaskGuid - GUID for the task being run.
WTTCopyLogsDest - Location where test logs are copied using Copy Results
tasks.
Dereferencing parameters
Parameters can be dereferenced in a task command line by using [ ] brackets.
For example, if Path is a parameter defined in a job, then dereferencing it
within a task command line requires it to be written as [Path].
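As a hypothetical illustration (the executable name and its command-line switches are invented for this example; the parameter names are among the defaults listed above), an Executable Task command line might combine a local parameter with run-time parameters as follows:

   mytest.exe /share:[Path] /machine:[WTTTargetMachineName] /logdir:[WTTRunWorkingDir]

At run time, each bracketed parameter is replaced with its value before the command line is executed.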

Dimensions
Dimensions are customized pieces of information about a client computer that the
client automatically reports to the WTT database whenever it restarts. For
example, one dimension might be the processor manufacturer of the test
computer (ARM, Intel, or another vendor). WTT creates an initial set of default
system dimensions, but additional dimensions can easily be created by WTT
administrators.
Once a new dimension is created, its values can be populated using a variety of
user interfaces, including any WTT UI or the command-line utility. Some teams
create their own mechanisms to automatically populate these dimensions at
computer startup and they are presented similarly to WTT default dimensions.

Constraints
A constraint is a set of conditions under which a job can be executed. These
conditions describe a class of computers, which allows lab managers to target
one set of tests against multiple computer classes without the need to
reschedule the tests.

Figure 4.1 Constraints example
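For instance, a single context might combine constraints such as the following (the dimension names are WTT defaults discussed in this chapter; the value strings are illustrative and depend on how dimensions are populated on your controller):

   WTT\Proc = x86
   WTT\Language = German

A job carrying this context would be scheduled only on computers whose reported dimensions satisfy both conditions.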

Contexts
A context is a set of constraints (logical descriptions of a computer class) that can
be applied to individual jobs and schedules. When a test or a set of tests is
designed, contexts can be applied in order to specify the test conditions under
which the job will be executed.

Common Context
In WTT, the user can specify a set of constraints for a job, which is referred to as a
common context. If a job has an LMS, then some constraints can be defined for
the LMS as well. These are also referred to as common context, as they are common
to both the LMS and the job as a whole.
With common contexts, two types of LMSs are affected:

Primary LMSs - LMSs against which the results will be reported. These
LMSs must use the contexts of the job unchanged.
Other LMSs - LMSs not involved in reporting can use other contexts.
While these logical machine sets can inherit the common contexts of the
job, they can also define additional contexts. These new contexts are
appended to the inherited common contexts from the job and must not
conflict with them. If the LMS does not inherit from the job's common
contexts, then the new contexts are the only contexts defined for that
LMS. These contexts can be made up of different and/or conflicting
contexts to those of the job itself.
Because every job has a common context, contexts can also be added to the
schedule at run time. These contexts are referred to as common contexts of the schedule.
Schedule common contexts apply to a schedule and its associated mix, and also
to all of the jobs in the schedule. The schedule common contexts are compared
with the job common contexts; and if they conflict, then that job cannot be run
and must be removed.

Mixes
A mix is a set of one or more contexts, just as a context is a set of one or more
constraints that are applied to jobs and schedules. When you design a test or a
set of tests, you can apply several sets of contexts by applying a mix that
contains these contexts. Scheduling a job creates one instance of the job for each
valid context within a mix.
For example, a job might be designed to run on a mix of test computers including:
x86-based, Microsoft Windows XP Professional in the German
language.
x86-based, Microsoft Windows Server 2003 in the English language.
Itanium-based, Windows Server 2003 in the English language.
In developing a mix to fit a specific job or set of jobs, the constraints and
contexts used can be global (applying to all tests, computers, or schedules) or
customized (applying to the specific job or schedule at hand). Default
constraints and contexts can be used or custom sets may be created.
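Expressed in constraint form, the three contexts of the example mix above might look like the following (the dimension names are the WTT defaults mentioned earlier; the value strings and the AND shorthand for combining constraints within a context are illustrative):

   Context 1: WTT\Proc = x86 AND WTT\OS = Windows XP Professional AND WTT\Language = German
   Context 2: WTT\Proc = x86 AND WTT\OS = Windows Server 2003 AND WTT\Language = English
   Context 3: WTT\Proc = ia64 AND WTT\OS = Windows Server 2003 AND WTT\Language = English

Because scheduling a job creates one instance of the job for each valid context in the mix, a job scheduled with this mix would yield three instances.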
Global mixes
A global mix can be of two types. A simple mix is a straightforward collection of
contexts with each context containing a set of constraints, all of which are applied
evenly. An Advanced mix, however, is a complex mix with a pre-defined set of
rules that apply to the contexts.
Custom mixes
A custom mix is a user-defined collection of contexts based on specified
combinations of dimensions and constraints and is applied to an individual job or
schedule at hand.

Applying mixes to a job


In WTT Jobs, a mix defines the dimensions that can be combined and used in
conjunction with a set of rules to specify the valid combinations. The inherited
contexts from the mix are generally applied with the job common contexts, and
they filter those contexts that conflict with the common contexts of the job.
The job context gives you the ability to specify which combinations of values
make sense for a statically defined job. To provide the flexibility at run time to
specify which combinations make sense for the actual run, WTT supports
specifying contexts at schedule time. However, the contexts specified might
conflict with those chosen for the job and you must resolve these conflicts. In the
application of a mix to both a job and a schedule, conflicting contexts within those
mixes are automatically detected and can be removed.
The job context is determined by mapping the mix contexts associated with the
job, filtering them according to the common contexts defined within the job, and
then removing the contexts that conflict with the job's common contexts and LMS
common contexts.
The job context solves the problem of duplication: only one job needs to be
defined, and the job contexts determine which computers the job can run on.
The flexibility of the mix contexts and application of constraints of a given context
ensures that the right combinations of values are applied.

Applying mixes to a schedule


A mix can also be applied to a schedule. The schedule mix has the same
form as the job mix. In other words, it generates contexts by applying common
contexts to the associated mix context, either global or custom. The results are
then applied to all of the common contexts of the jobs that are selected to run. If
these contexts conflict, then a context must be removed to eliminate the conflict.
If there is no conflict, then the schedule contexts are added to the job contexts
and thus act as a multiplier.
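
The multiplier effect is easy to model. The sketch below (Python, illustrative
only; the conflict rule restates this section, but the names and data shapes
are assumptions, not the WTT API) combines schedule contexts with job contexts
and drops the conflicting pairs:

    # Illustrative sketch only -- assumed data shapes, not the WTT API.
    def conflicts(a, b):
        # Two contexts conflict if they assign different values
        # to the same dimension.
        return any(dim in b and b[dim] != val for dim, val in a.items())

    def combine(job_contexts, schedule_contexts):
        combined = []
        for jc in job_contexts:
            for sc in schedule_contexts:
                if not conflicts(jc, sc):
                    combined.append({**jc, **sc})  # acts as a multiplier
        return combined

    job_contexts = [{"WTT\\OS": "XP"}, {"WTT\\OS": "Srv2003"}]
    schedule_contexts = [{"WTT\\UILanguage": "English"},
                         {"WTT\\UILanguage": "German"}]
    print(len(combine(job_contexts, schedule_contexts)))  # 2 x 2 = 4

With two job contexts and two non-conflicting schedule contexts, four
instances result; had a schedule context conflicted with a job context, that
combination would have been removed instead.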

Logical Machine Set (LMS)


An LMS contains all of the hardware information about its component computers
that is needed for the job in question. This information is represented by sets of
constraints and values that the LMS uses to compare to the available test
computers to determine if a given computer has the required properties needed
to perform the job. These are a series of constraint-value pairs (sometimes
known as key-value pairs) that use logical operators in the form:
<constraint> <operator> <value>
For example, WTT\Proc = x86.
You can specify the primary LMS by selecting the LMS from the Primary LMS Box.
The Primary LMS must be set to inherit constraints from the job and may not
have its own constraints. Any additional LMSs can have one or more of their own

constraints, but must not be set to inherit constraints from the job. For additional
information, see Appendix C: Best Practices.

Using the Job Explorer Tree View


The Job Explorer tree view functions as a centralized location for administering
jobs in WTT. Using Job Explorer, you can create or modify jobs, add nodes,
search for jobs saved in the WTT database, as well as import and export job
information.

To use the Job Explorer view


1. On the Explorers menu, click Job Explorer and then select your
controller from the Datastore drop-down list.
2. Select either the Feature or Category tab.
3. Select a node on the tab to see all jobs within that node.
4. To work with a specific job or group of jobs, right-click it in the Job list and
select a command from the short-cut menu.
5. To add a new job, right-click the desired Feature node and then select
New Job.

Several important aspects about the Job Explorer trees should be noted:
By default, the Query Builder is hidden. To display the Query Builder,
click the Show Query Builder button.
The Job Explorer tree and categories views can be hidden by clicking
the Hide Hierarchy button.
When the Query Builder is hidden, clicking a feature or category node
will automatically retrieve the jobs associated with that node.
For additional information on the menu items available on the job list
view, see the Job menu.

Creating a Job Feature or Category Node


Within WTT, jobs are organized in individual Feature nodes in Job Explorer. In
order to create a job, it is first necessary to create a Feature node in which to
store the job.

To create a job Feature node


1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore list.
3. On the Feature or the Category tab, right-click the Root ($) node, and
then click Add Node.
Note: Sub-nodes may be added to previously created nodes by right-clicking the specific parent node instead.
4. In the name box, type a name for the new node.
Note: It is recommended that the names chosen for new nodes be
easily identifiable. Other testers on the same controller can view your
nodes and search them for applicable jobs to copy, so easy identification
of the original node may help prevent future confusion.

Figure 4.2 Job Explorer with Tree View

Creating and Editing Jobs


Creating and editing jobs is the core activity around which WTT as a testing
framework is oriented. Jobs are created, edited and saved within Job Explorer

(accessible by clicking Job Explorer on the Explorers menu). Jobs are also
scheduled to be run there and may be monitored from Job Explorer or Job
Monitor (accessible by clicking Job Monitor on the Explorers menu).
Creation of a job can be very simple, but the options available to test engineers in
WTT can also make it quite complex. This section is designed to help users edit or
fine-tune jobs in order to allow for the complexity needed to make tests
accurately fit given situations.

The following subjects are included in this section:


Setting General Job Characteristics
Setting Runtime Parameters
Setting Job Constraints
Setting Job Mixes and Contexts
Setting an LMS
Setting Job Tasks
Setting Task Dependencies and Order
Advanced Task Dependencies
Setting Attributes for a Job
Setting Dimensions for a Config Job

Setting General Job Characteristics


The tabs displayed on the New Job form and the Edit Job form vary according to
which Role the job has and which type of task is used.

To define the general characteristics of a job


1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore list.
2. On the Feature tab, right-click the node you wish to contain the new job,
and then click New Job.
The Feature Path field shows the node in which the job will be
created. To select another feature, type the new feature name and
path within the box or use the Browse button.
To create a new node for this job, right-click either the root ($) or an
existing node, select Add Node, and then click the new node that
appears and rename it.
3. In the Job Name field, type a name for the job.
4. From the Assigned To drop-down list, select the user for this job.
5. From the Role drop-down list, select one of the following:
Automated: A job/test case with tasks that run automatically.
Library: A job that can be used as part of another, forming a subtask
that runs automatically.
Manual: A job with a set of tasks that must run manually.
Config: A job containing configuration tasks, such as setup or cleanup.
6. From the Priority drop-down list, select an execution priority for this job.
Note: Setting an execution priority does not affect the runtime
scheduling of the job. Instead, it is intended to provide a reference to
the user.
7. From the IsActive drop-down list, select True if you want to be able to
schedule this job, or False to prevent the scheduling of this job at this
time.
Note: The default setting for IsActive is True.
8. In the Total Variations box, type the number of expected test variations
for this job.
9. In the Expected Run Time boxes, enter the length of time you expect
the job to run.
10. In the Description text box, type a description of the job.

11. Set any parameters, constraints, and so on, that you wish to use for this
job on the tabs below.
12. Click the Save button.

Setting Runtime Parameters


The parameters used within a WTT job can be either custom (local) or global.
Although you can access global parameters while creating a job, the only option
available is whether to allow the user to view the parameters at schedule time
(IsScheduleDisplay). Global parameters can be added or edited by clicking
Parameters on the Admin menu.
Runtime Parameters
The custom parameters that can be defined when creating a new job are known
as runtime parameters. They behave like environment variables but, unlike
environment variables, are not restricted to a specific computer. Runtime
parameter values that are added to the new job are defined at schedule time. This differs
from the static values included within the job definition. At schedule time, the
WTT Execution Agent replaces all runtime parameters with the user-defined
values and sends these values to the jobs.
Parameter Types
Two types of custom parameters are available:
String - a string of descriptive or other text.
FileData - an identifier denoting that the parameter contains file data
or information.
Parameters may also have a default value, or may be marked as "required," in
which case an input must be specified at schedule time or the job will not start.
Parameter Location
Parameters may be referenced in the following areas of jobs:
Execution Task
   Command-line or command-line option
   Username
   Domain
Copy File Task
   Filename
   Server name
   Destination
   Password
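
The substitution itself is simple to picture. The following Python sketch is
illustrative only: the "$(Name)" placeholder syntax and the function name are
assumptions made for this example, not WTT's actual notation.

    # Illustrative only: the "$(Name)" placeholder syntax is an assumption
    # for this sketch, not necessarily WTT's actual notation.
    def resolve(command_line, runtime_params):
        # Each runtime parameter is replaced with its user-supplied
        # value before the task runs.
        for name, value in runtime_params.items():
            command_line = command_line.replace("$(%s)" % name, value)
        return command_line

    print(resolve("nettest.exe /server $(ServerName) /log $(LogDir)",
                  {"ServerName": "TESTSRV01", "LogDir": "C:\\results"}))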

To add a custom runtime parameter for a task


1. On the New Job form, click the Parameters tab.
2. On the Local tab, type a name for the new parameter in the first
empty cell in the Name column.
3. In the Type column, click the desired parameter type from the
drop-down list.
4. In the Description column, type a message that will be displayed
next to the parameter at scheduling time.
Note: Use of a description is optional.
5. If you wish to view the parameters at schedule time, select the
ScheduleDisplay check box.
6. In the Value column, type a value for the parameter.
7. Continue adding the job, or click the Save button if it is complete.

To edit custom runtime parameters for a job


1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore drop-down list.
3. On the Feature tab, click the node containing the desired job and
then click the Refresh button.
4. Right-click the desired job and then click Edit.
5. On the Parameters tab, select the parameter you wish to change, and
then click the appropriate attribute to edit it.
6. Make changes as appropriate.
7. Click the Save button.

To delete a custom runtime parameter


1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore drop-down list.
3. On the Feature tab, click the node containing the desired job and then
click the Refresh button.

4. Right-click the desired job and then click Edit.


5. On the Parameters tab, right-click the selected parameter, and then click
Delete.
6. Click the Save button.

Setting Job Constraints


Job constraints are available only for the Automated, Manual, and Config job
roles.
For more information about constraints, see Constraints.

To add constraints to a job


1. Create a new job and then click the Constraints tab.
2. Click the empty cell in the Dimension column and select a dimension
from the drop-down list.
3. In the Operator column, click an appropriate operator from the drop-down list.
Note: The Operator list is based upon the Dimension chosen and may
change depending on the choice.
4. Type a value to compare to the Dimension in the Value column.
Note: Depending on the Dimension and Operator chosen, users may
be offered a drop-down list or combo box from which to select a specific
value. If this occurs, click the desired value from the selection available.
5. Add additional constraints if desired by clicking the Dimension column
and repeating the process of adding a constraint.
6. When all desired constraints have been added, continue creating the
new job, and then click the Save button.

To edit constraints to a job


1. Right-click the job you wish to edit, click Edit, and then open the
Constraints tab.
2. From the list of constraints, select the constraint to edit.
3. Edit the constraint as necessary:
If the Dimension requires editing, click the Dimension column and
select a dimension from the drop-down list.
If the Operator needs editing, click an appropriate operator from the
drop-down list in the Operator column.
Note: The Operator list is based upon the Dimension chosen and
may change depending on the changes you make.
Type a value or select from the drop-down list if available in the Value
column.
4. Click the Save button.

To delete a constraint from a job


1. Right-click the job you wish to edit, click Edit, and then open the
Constraints tab.
2. From the list of constraints, click the arrow button to the left of the
constraint to be deleted and press the Delete key.
3. Click the Save button.

Setting Job Mixes and Contexts


By applying a mix to a test or series of tests, it becomes a relatively simple
matter to apply several sets of constraints or contexts to the job merely by
applying a mix that contains these contexts. Scheduling a job creates one
instance for each valid context in a mix. Mixes therefore can be used to generate
multiple test cases for a job. Mixes can be either global or custom, but both types
cannot be used for the same job.
Global mixes can apply to any job on the controller and are created by clicking
Mixes on the Admin menu. Custom mixes apply only to the current job and are
created directly within the job.

To add a global mix to a job


1. Create a new job and then click the Constraints tab.
2. Click Add Mix.
3. Click Global Mix, and then click OK.
4. Click the desired mix from the list of available global mixes or use
Advanced Query Builder to find the appropriate mix.
5. Click View to view the contexts contained within the mix.
6. Select a context and then click View to view the constraints within
that context.
7. Click Close to exit the constraints and context screen.
8. Click OK to add the mix.

To add a local mix to a job


1. Create a new job and then click the Constraints tab.
2. Click Add Mix.
3. Click Local, and then click the desired type of local mix from the
Type drop-down list.
4. Click OK.
5. In the Name box, type a name for the local mix.
6. In the Description box, type a brief description of the new mix.
7. Click Add to add a context to the mix.
8. In the Context Name box, type a name for the context.
9. Using the Constraints, Parameters, and Attributes tabs below,
add components to the context.
10. When all components of the context have been added, click OK.
11. Add additional contexts to the mix as desired.
12. When all contexts have been added, click OK.
13. Continue creating the job, and then click the Save button.

To edit a local mix in a job


Edit functionality is available on the Constraints tab only when a job has a
local mix, because global mixes cannot be edited through an individual job.
Because they can be used by any job on the controller, global mixes may only
be edited by clicking Mixes on the Admin menu.
1. Right-click the job you wish to edit, and then click Edit.
2. On the Constraints tab, click Edit.
3. Make changes to the mix as necessary:
To add another context to the mix, click Add and provide the
necessary context data.
To edit an existing context, click the desired context and then click
Edit. Make any changes necessary and then click OK.

To delete an existing context, click the target context, click Delete,


and then click Yes.
4. Click the Save button.

To remove a mix from a job


1. Right-click the job you wish to edit, and then click Edit.
2. On the Constraints tab, without selecting any constraints, click Remove.
3. Click OK.
4. Click the Save button.

To view and remove constraint conflicts


1. Right-click the job you wish to edit, and then click Edit.
2. On the Constraints tab, click Conflicts. A list of the conflicts occurring
between the job's constraints and any constraint defined in the mix applied
to this job is displayed.
3. Edit or remove the mix's or job's conflicting constraint or context to resolve
the conflict.
4. Click OK.
5. Click the Save button.

Setting an LMS
A Logical Machine Set (LMS) is a logical grouping of one or more computers for
reporting purposes. An LMS specifies the quantity of computers and describes the
computer type that is required for the execution of the job. These requirements
can be either hardware or software oriented, or both.
There can be multiple LMSs per job.
LMS Definition
An LMS contains all of the hardware information about its component computers
that is needed for the job in question. This information is represented by sets of
constraints and values that the LMS uses to compare to the available test
computers to determine if a given computer has the required properties needed
to perform the job. These are a series of constraint-value pairs (sometimes
known as key-value pairs) that use logical operators in the form:

<constraint> <operator> <value>

For example, WTT\Proc = x86.

Operators can be one of the following SQL comparison operators: =, <, >, <=,
>=, <>, LIKE, NOT LIKE, IN. The value in each pair can be one of the following:
A system defined dimension.
A constant value.
A parameter (This requires pre-defining parameters within the job).
Variable LMS Size
Every LMS contains a minimum and maximum count, representing the minimum
and maximum number of computers that the WTT scheduler tries to find in order
to meet the computer specification for this job at schedule time. For example, if
an LMS is defined with a minimum of one and maximum of ten, the related job
requires at least one computer to execute. However, if the scheduler is able to
locate additional matching computers, it can allocate the job to up to ten
computers that match the computer constraints.
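
The following Python sketch (illustrative only; the data shapes and operator
subset are assumptions, not the WTT scheduler) models how the constraint-value
pairs and the minimum/maximum counts interact when computers are selected:

    # Illustrative sketch only -- assumed data shapes, not the WTT scheduler.
    import operator

    OPS = {"=": operator.eq, "<>": operator.ne,
           ">=": operator.ge, "<=": operator.le}

    def matches(machine, constraints):
        # A computer qualifies only if every constraint-value pair holds.
        return all(OPS[op](machine[dim], value)
                   for dim, op, value in constraints)

    def allocate(machines, constraints, minimum, maximum):
        found = [m for m in machines if matches(m, constraints)][:maximum]
        # The job can run only if at least the minimum count is available.
        return found if len(found) >= minimum else None

    machines = [{"WTT\\Processor": "x86",  "WTT\\ProcCount": 2},
                {"WTT\\Processor": "x86",  "WTT\\ProcCount": 1},
                {"WTT\\Processor": "ia64", "WTT\\ProcCount": 4}]
    lms = [("WTT\\Processor", "=", "x86"), ("WTT\\ProcCount", ">=", 1)]
    print(allocate(machines, lms, minimum=1, maximum=10))  # both x86 computers

Here the minimum of one is satisfied, so the job runs; because two matching
computers were found within the maximum of ten, both are allocated.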

To add a new LMS for a job


The primary (first) LMS must be set to inherit constraints from the job and
may not have its own constraints. Any additional LMSs can have one or more
of their own constraints but must not be set to inherit constraints from the
job, and their constraints must not conflict with the primary LMS.
Note: The LMS functionality is available only if the Automated job role is
selected.
1. Create a new job and then click Automated from the Role drop-down list.
2. On the LMS tab, click Multi Computer Job.
Note: With the default setting of Single Computer Job, an LMS is not
used, since an LMS implies the potential use of more than one computer.
3. Click Add.
4. In the LMS Name box, type a name for the LMS.
5. In the Minimum and Maximum boxes, type the minimum and maximum
number of computers to make available for executing this job.
6. If this will be the primary LMS for the job, select the Make this LMS
primary check box. If this is not the primary LMS, clear the Make this
LMS primary check box and then clear the Inherit constraints from
Job check box.
Note: The primary LMS is automatically set to inherit constraints from
the job and may not have its own constraints. Additional Logical Machine
Sets may not inherit constraints from the job but may have their own.
7. Click the empty cell in the Dimension column and select a dimension
from the drop-down list.
8. In the Operator column, click an appropriate operator from the
drop-down list.
Note: The Operator list is based upon the Dimension chosen and may
change depending on the choice.
9. Type a value to compare to the Dimension in the Value column.
Note: Depending on the Dimension and Operator chosen, users may
be offered a drop-down list or combo box from which to select a specific
value. If this occurs, click the desired value from the selection available.
10. Add additional constraints if desired by clicking the next empty cell in the
Dimension column and repeating the process.
11. Click OK.
12. Click the Save button.

To select an LMS for a job


1. Right-click the job you wish to edit, and then click Edit.
2. On the LMS tab, select the LMS that will be the primary LMS for this
job, and then click Edit.
3. Select the Make this LMS primary check box.
4. If other LMSs are used on this job, select each in turn, click Edit, and
then clear first the Make this LMS primary check box and then the
Inherit constraints from Job check box.
5. Click OK.
6. Click the Save button.

To edit an LMS
1. Right-click the job you wish to edit, and then click Edit.
2. On the LMS tab, select the LMS to be edited, and then click Edit.
3. Make any changes necessary to the LMS, and then click OK.
4. Click the Save button.

To delete an LMS
1. Right-click the job you wish to edit, and then click Edit.
2. On the LMS tab, select the target LMS, and then click Remove.

3. Click Yes.
4. Click the Save button.

Setting Job Tasks


By defining the steps that are to be executed within a job (as well as the action to
take if the job fails), tasks provide the fundamental building blocks for running
jobs. In setting job tasks, the appropriate run phase of the job is selected and
then the sequence of the tasks and any dependencies between them is specified.
Task Run Phases:
Setup tasks provide the environment required on the test computers to
run the selected tests. For example, setup tasks might copy files down
from a network share to the client computer in order to install a required
application or include needed data.
Regular tasks are the tests that will actually run on the client computer.
For example, tasks that copy results or log files created by the tests are
usually executed in this phase. If a job only has simple copy and execute
tasks, these tasks are normally set to run in the Regular phase.
Cleanup tasks restore the original environment to the client computer.
For example, these tasks might delete the files generated by the tests or
restore data to the original form. These tasks may run even if the regular
tasks fail. For example, if FailAndStop is the selected Failure Action for a
regular task, and that task fails, cleanup tasks will still always execute in
order to restore the original environment.
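
The cleanup guarantee resembles a try/finally pattern. The sketch below
(Python, illustrative only; the function names are assumptions, not the
execution agent's implementation) shows why cleanup tasks still run after a
FailAndStop failure:

    # Illustrative sketch only -- assumed names, not the WTT execution agent.
    def run_phases(setup, regular, cleanup):
        try:
            for task in setup:
                task()      # prepare the environment
            for task in regular:
                task()      # a FailAndStop failure raises here
        finally:
            for task in cleanup:
                task()      # cleanup runs even if a regular task failed

    run_phases(setup=[lambda: print("install app")],
               regular=[lambda: print("run test")],
               cleanup=[lambda: print("delete generated files")])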

Setting job tasks is a three-step process:


1. Add the basic task framework to the job. These options are the same for
all tasks.
2. Add the general task details for the specific type of task. These options are
different for each type of task.
3. Add the execution details to the task. These options are the same for all
tasks.

The Basic Task Framework (all Tasks)

To add tasks to a job


1. Create a new job and then click the Tasks tab.
2. In the Task Details group box, select the tab for the execution phase
of the task to be added.
3. On the chosen tab, select the method of execution desired: sequentially,
in parallel, or customized.
Note: Only one method of execution may be used per job assignment.

Figure 4.3 Task Details Group Box

4. Click Add.
5. Select the type of task to create, and then click OK.
6. On the General tab, type a name for this task.
7. Click an action to be performed if the task fails from the Failure
Action drop-down list.
8. Click an LMS to be used from the LMS drop-down list if this is a
multi-machine job.
9. Select the Disable check box to prevent this task from being scheduled
unless it is enabled.
10. Complete the task details based on the specific type of task (see below).

General Task Details (type-specific)

To add general task details for an Execute job


This is the basic task used to run tests and other executables.
1. In the Command text box, type the command to be executed by this
task.
Note: If directory names are used within the command to be executed,
they may not contain spaces. Any directory names with spaces will cause
the task to fail.
2. If you want to create a new command shell for the execution of this task,
select the Create new command shell for this task check box.
3. If the CommandLine specified will restart the computer after the task is
complete, select the This task causes the machine it runs on to
reboot check box.
4. If you do not want to use the default jobs directory for this task, click
the Use the following directory option, and then type the working
directory in the accompanying text box.
5. To have the results of this task contribute to the job counts, select the
Rollup Results to Job check box.
6. Click the Execution Options tab.

To add general task details for a Copy File job


This task is useful for copying tests from one client computer to another.
1. Click Add.
2. In the Source box, type the source from where you want the files copied.
3. If you do not want to use the default jobs working directory as the
destination, select the Custom option, and type the destination directory
and path in the accompanying text box.
4. Select the Exact Destination option if desired. If selected, the value in
the Destination box will be taken as a filename instead of a directory.
Note: Exact Destination can be used for copying single files, but
cannot copy multiple files or copy recursively.
5. If the files are to be copied with the file structure maintained, select the
Recursive check box.
6. Click the Execution Options tab.

To add general task details for a Copy Results job


The Copy Results task is useful for organizing the job results files for later
review.
1. Click Add.
2. In the Source box, type the source from where you want the files copied.
3. If you do not want to use the default jobs working directory as the
destination, select the Sub Folder check box and type a destination
directory name in the accompanying text box. This will be created as a
subdirectory under the default destination directory.
4. Select the Exact Destination option if desired. If selected, the value in
the Destination box will be taken as a filename instead of a directory.
Note: Exact Destination can be used for copying single files, but
cannot copy multiple files or copy recursively.
5. If you wish to identify the results files for this job with the specific
computer being used, select the Prefix destination file name with
machine name check box.
6. If the files are to be copied with the file structure maintained, select the
Recursive check box.
7. Click the Execution Options tab.

To add general task details for a Run Jobs job


This is the basic task used to run other jobs within this job.
1. Enter the job number of the library job in the ID box, or click the
Browse button to browse the available library jobs.
2. If you entered a job number, click Resolve ID to verify the job number.
Note: If you browsed for the specific job number, this is unnecessary.
3. In the empty cell in the Library Job Param Name column, click the
library job name from the drop-down list.
4. If the Type column is empty, select a type from the drop-down list.
5. Click the Value column, and then click the Browse button, select value
options that are appropriate to your specific task, and then click OK.
6. To have the results of this task contribute to the job counts, select the
Rollup Results to Job check box.
7. Click the Execution Options tab.

To add general task details for a Manual Prompt job


Manual Prompt tasks enable the job author to present the end user (or test
engineer) with instructions for some manual intervention on the computer
under test.
1. In the Instructions to User box, type any needed instructions as you
would like them to be seen by the end user.
2. Click the desired option depending on whether you wish to supply the
user with Pass and Fail buttons or a Continue button.
3. If you wish to preview the button and instructions, click Preview, and
then click Pass or Continue.
4. To have the results of this task contribute to the job counts, select the
Rollup Results to Job check box.
5. Click the Execution Options tab.

Adding Execution Options (all tasks)

To add execution options for a task


Note: All of the following options are presented on the Execution Options
tab, although the availability of each will depend on the type of task chosen.
1. On the Execution Options tab, select the specific options for the task:
Set Timeout Options - select this option to control the task timeout:
o Cancel task after: Set the maximum amount of time the task
is allowed to run.
o Select an option for when to declare that the task failed:
Declare failure if the task is still running, and time out the task.
Declare failure if the task stops before the timeout.
Set Task Failure Conditions - task failures can be declared under
the following conditions:
o Declare task failure based on exit status.
o Declare task failure based on results from a log. You can
specify logs of two types:
WTT Log - the default log.
Custom Log - you can specify a log file name and type of log file.
Set User Context Details - you can use the following context options:
o Run in System Context - runs the task as a process with the
computer's system as the user.
o Run as the User - runs the task as whoever is logged on to the
first active session.
o Run with User Credentials - you must specify the user
credentials below:
Run in User session - the user who is logged in.
Run in Session zero - the console with direct control of the
system. Run in Session Zero has the following sub-options:
Run only if the user is currently logged onto Session 0 -
the user must be logged on to the console with direct
control of the system.
Run with these user credentials without logging off
the existing user in session - will start the tests even if
someone else is logged on in Session 0. (The tests will be
displayed on their console.)
Note: If you run an Execute task on a computer where
these credentials have no permissions, it will fail to log on.
However, if you run a CopyFiles task with the same
setting, computer, and credentials, it will succeed, because
it uses a net use command to the new location rather
than CreateProcess.
Specify the user credentials:
o Run as a Specific User: Enter the domain, username, and
password for the user.
o Run as a Local Logical User: Enter the local name.

Figure 4.4 Execution Options dialog box

2. Click OK to close the task.
3. Click the Save button.

Editing Job Tasks

To edit the tasks in a job


1. Right-click the job you wish to edit, and then click Edit.
2. On the Tasks tab, select the task to be edited, and then click Edit.
3. Make any necessary changes, and then click OK.
4. Click the Save button.

To delete a task from a job


Note: A task cannot be deleted if it has a dependency or if any other task is
dependent on it.
1. Right-click the job you wish to edit, and then click Edit.
2. On the Tasks tab, select the task to be deleted, and then click Remove.
3. Click Yes.
4. Click the Save button.

Setting Task Dependencies and Order


When creating a job with more than one or two tasks (as most jobs have),
establishing the relationship between the tasks quickly becomes important,
especially in complex test scenarios. In such cases, it is important to
establish not only the order in which tasks are executed, but also the
dependencies involved, since one application may depend on actions on other
computers before it can begin execution.

To add customized dependencies


1. While creating a new job, click the Tasks tab.
2. On the Regular tab, click Execute these tasks according to custom
dependencies.
3. Add all tasks for this job. For more information on adding tasks, see Setting
Job Tasks

Note: At least two tasks must be present in order to set dependencies.


4. Click Dependencies, and then click Add.
5. From the Task (T1) drop-down list click the first dependent task.
6. Click the task upon which the first task is dependent from the Depends
On drop-down list.
7. Select the dependency type.
Note: This option is not available if only a single computer is being
utilized.
8. Click OK.
9. Add additional dependencies if desired, and then click OK.

To edit customized dependencies


1. In Job Explorer, right-click the job to be edited, click Edit, and then click
the Tasks tab.
2. Click Dependencies.

3. Click the dependency desired.


4. Click Edit, and then edit the details.
5. Click OK.

To delete customized dependencies


1. In Job Explorer, right-click the job to be edited, click Edit, and then click
the Tasks tab.
2. Click Dependencies.
3. Click the dependency to delete.
4. Click Remove, and then click OK.
5. Click OK.
6. Click the Save button.

Advanced Task Dependencies


The issue of task dependency becomes more complex when more variables, such
as multiple computers, are added.
Consider this scenario: Job1 has two tasks, T1 and T2. Both tasks are assigned
to the logical machine set LMS-1 (minimum count = 3, maximum count = 5).
When Job1 is scheduled on a machine pool with sufficient computers, the
scheduler will pick between 3 and 5 computers for this job.
In this case, if LMS-1 consisted of four computers, M2, M6, M4, and M3, that
were available and met the constraints specified in the job, WTT would map the
machine list as follows:
M2 = Instance 1
M6 = Instance 2
M4 = Instance 3
M3 = Instance 4
With this mapping, the WTT execution agent (EA) will run the following:
Task T1(instance 1) on M2
Task T1(instance 2) on M6
Task T1(instance 3) on M4
Task T1(instance 4) on M3

Task T2(instance 1) on M2
Task T2(instance 2) on M6

Task T2(instance 3) on M4
Task T2(instance 4) on M3

Note: Regardless of what dependencies or execution order you specify, the
instances of each task will run on the computers as shown above. There will
not be new task instances or missing task instances based on the dependency
type.
If the execution order is specified as Sequential, it means task T2 depends on
T1. In this case, all instances of task T1 will run on the computers first. Once
all instances of task T1 are complete, all machines will run task T2.
If the execution order is specified as Parallel, it means that all machines will run
their respective instances of task T1 and T2 independently. In this case, on any
given computer, task T1 will be run, followed by task T2 on that computer, but
independent of the progress of T1 on the other computers.
If the execution order is Custom, then individual dependencies between tasks
can be specified.
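
The difference between the orderings can be simulated in a few lines. This
Python sketch is illustrative only (the print format and function names are
assumptions, not the execution agent's behavior):

    # Illustrative simulation only -- assumed names, not the WTT EA.
    machines = ["M2", "M6", "M4", "M3"]
    tasks = ["T1", "T2"]

    def sequential(machines, tasks):
        # Every instance of T1 completes on all machines before any T2 starts.
        for task in tasks:
            for m in machines:
                print("run %s on %s" % (task, m))

    def parallel(machines, tasks):
        # Each machine runs its own T1 then T2, independent of the others.
        for m in machines:
            for task in tasks:
                print("run %s on %s" % (task, m))

    sequential(machines, tasks)
    parallel(machines, tasks)

Both orderings run exactly the same eight task instances on the same
computers; only the interleaving differs, which matches the note above.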

Setting Attributes for a Job


Attributes allow the end user to organize the same data in multiple hierarchical
ways; the folder hierarchy that is displayed by default in Job Explorer is simply
another attribute. In many cases, as well as providing the hierarchy, attributes
provide a means to tie together objects of different types. As an example,
attributes could be used to categorize all objects involved in a failure so that
they are cross-referenced in reporting.

To add attributes to a job


1. While creating a new job, click the Attributes tab.
2. From the list of attributes, select those attributes that you wish to
associate with this job.
3. Continue to create the job and then click the Save button.

To bulk edit job attributes


Bulk editing provides a way to perform a particular operation on a set of jobs
at one time. However, while in Bulk Edit mode only the General,
Description, Attribute, and Code Coverage tabs are enabled.
Note: Changes made in Bulk Edit cannot be undone.
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. Select the Feature node in which the jobs to be edited are located.
3. Select the jobs on which to perform the bulk edit operation, right-click
the selection, and then click Edit.
4. Click Edit All Items at Once, and then click OK.
5. Make any changes necessary on the General page if needed.
6. On the Description tab, make any changes to the job description if
desired.
7. On the Attributes tab, select the attributes to be applied by the bulk
edit. In the Bulk Edit Operation group box, select the mode of applying
these attributes to the jobs.
8. On the Code Coverage tab, select the files to be associated with the
jobs through bulk edit. Select the mode of associating these files through
bulk edit.
9. Click the Save button.
10. Click OK.

Setting Dimensions for a Config Job


You can set dimensions only for the jobs that use a Config job role. Available
dimensions are shown below in Table 4.1: Config Job Dimensions.
Setting a dimension provides the scheduler with information about the effect a
particular job has on the dimension being used. The scheduler then uses this
information to make decisions accordingly. For instance, if a particular
dimension value is required for a test job, the scheduler can locate a different
job that produces the required value, and then run that job automatically so
that the original job can be run. Additionally, if multiple jobs require that
specific dimension, the scheduler can set the jobs to run on the same computer,
avoiding repeated, unnecessary runs of the setup job.
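
The scheduler's reasoning can be sketched as follows (Python, illustrative
only; the config job name, dimension value, and function are assumptions
invented for this example, not the WTT scheduler):

    # Illustrative sketch only -- assumed names, not the WTT scheduler.
    # Map (dimension, value) pairs to the config job that Sets them.
    config_jobs = {("WTT\\SP", "SP1"): "InstallSP1"}

    def prepare(machine, required):
        for dim, value in required.items():
            if machine.get(dim) != value:
                setup = config_jobs.get((dim, value))
                if setup is None:
                    raise RuntimeError("no config job sets %s=%s" % (dim, value))
                print("running config job %s first" % setup)
                machine[dim] = value  # the Set operation takes effect

    machine = {"WTT\\OS": "XP"}
    prepare(machine, {"WTT\\SP": "SP1"})  # runs InstallSP1, then the test job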
Two important points should be noted when using a Set operation:
If the Set operation relies on a specified value, then the job will only be
run when the specified value exactly matches the value required by the
calling job. This enables the user to distinguish between setup jobs that
Set the same dimension yet run differently depending on the exact
value.
If the Set operation uses a specific parameter name, then the parameter is
given the dimension value that is requested by the job initiating the setup
job. This allows the job itself to dereference the requested value in
command lines and similar locations.

The Clear all dimensions and rescan option is available to clear all the
dimensions and rescan them. This is required for fresh-install jobs.

Config Job Dimensions        Values

WTT\OSBuildNumber            Operating System build number (4 digits)
WTT\MachineName              Name of computer
WTT\OS                       Operating System (such as XP, Win2k, etc.)
WTT\OSSKU                    Operating System SKU (such as Professional, Personal)
WTT\ProductType              Product Type (such as Server, Workstation)
WTT\SystemLocale             System Locale for the system
WTT\Processor                Processor type (such as x86, Itanium64, etc.)
WTT\FullMachineName          Fully qualified computer name with domain
WTT\ProcCount                Number of processors
WTT\VBL                      VBL from where the Operating System is installed
WTT\RAM                      Size of RAM on computer
WTT\SP                       Operating System Service Pack number (such as SP1, etc.)
WTT\SPBuildNumber            Service Pack build number (4 digits)
WTT\Build                    Build type (chk, fre)
WTT\UILanguage               Current language on the system
WTT\Domain                   Fully qualified domain name
WTT\UI-DPI                   UI display settings
WTT\CLR                      Common Language Runtime (CLR) version installed on computer (such as v1.1.4332)
WTT\MachineRole              Role of computer (such as Domain Controller, Domain Member, etc.)
WTT\DomainNetbios            Short domain name (such as TEST)
WTT\WindowsCoverageBuild     Whether this is a code coverage build (True or False)
WTT\VirtualServer            Whether this is a virtual server (True or False)

Table 4.1 Config Job Dimensions

To add a Dimension to a job (setup phase)


1. While creating a new config job, click the Dimension Change tab.
2. In the Operation box, select Set from the drop-down list.
3. In the empty cell in the Dimension column, select a dimension from the
drop-down list.
4. In the Value Type column, select the value type.
5. If the type is Value, type a value in the next column.
6. Select the Delete all Dimensions and Rescan check box if required.
7. Continue creating the job and when completed, click the Save button.

To add a Dimension to a job (cleanup phase)


1. While creating a new Config job, click the Dimension Change tab.
2. In the Operation box, select Delete from the drop-down list.
3. Select the dimensions to be cleared.
4. Continue creating the job and when completed, click the Save button.

To delete dimensions from a job


1. Right-click the desired job, and then click Edit.
2. On the Dimension Change tab, right-click the dimension to be removed.
3. Click Delete.
4. Click Yes.
5. Click the Save button.

Using Job Explorer


Job Explorer is used to query and view existing jobs and to create new jobs and
group them into Categories or Feature nodes. A Feature node allows grouping of
jobs by product feature during testing, whereas a Category allows grouping of
logically similar jobs. You can use categories to narrow down the search by
providing various filter criteria in the filter control. The jobs that are found that
match the provided filter criteria are displayed in a list. You can right-click a job
displayed in the list view to open a shortcut menu where you can perform various
operations on it.

The following subjects are included in this section:


Jobs Toolbar Options
How to Use the Job Explorer Tree View
Job Explorer Feature and Category Commands
Exporting and Importing Data from a Feature Node
Short-Cut Menu from a Job

Jobs Toolbar Options


The following toolbar options are available in Job Explorer:
Open a saved file

Opens a saved file in Job Explorer.

Save current component to a file

Saves a component or Job Explorer data. This can be used to save filter settings
as well as selected features or categories. It also saves display information from
the Job Explorer, such as column widths and columns selected for display. It
does not, however, save the contents of a list view.

Print the current component data

Prints the component or explorer data.

Refresh

Retrieves the results of any query that the user has built from the datastore.

Show/Hide Hierarchy

Displays the left pane of the Job Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.

Show/Hide Query

Displays the query group box in the right pane of the Job Explorer, allowing
users to run simple or advanced queries. The default setting for the Query button
is Hide.

Datastore

Displays associated controllers that host the Jobs Definition and Jobs Runtime
services. This is the first drop-down list on the Job Explorer toolbar.

How to Use the Job Explorer Tree View


The Job Explorer tree view functions as a centralized location for administering
jobs in WTT. Using Job Explorer, you can create or modify jobs, add nodes,
search for jobs saved in the WTT database, as well as import and export job
information.

To use the Job Explorer tree view


1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore drop-down list.
3. In the Feature list in the left-hand pane, click the Feature node you
want. You may also click the root ($) node and add a new Feature node.
When you click a Feature node, all jobs present within it are displayed in
the right pane.
Note: You must have permissions in order to access a Feature node.
Any user who creates a node, and therefore owns it, has these
permissions by default.
4. To work with a job, right-click the job in the Jobs list and select a
command from the shortcut menu.
5. To add a new job, right-click the parent Feature node, and then click
New Job, or click the New Job button.

Job Explorer Feature and Category Short-Cut Commands


When a user right-clicks a node in Job Explorer that they have permissions to
access, the following commands are available on the shortcut menu.
New Job

Create a new job in this node. This opens the Job form where the details of the
new job may be entered. To create a job, a user must have Write permissions for
the target node.
Add Node

Create a new node under the selected parent or root ($) node. Users must have
Security Write and Node Write permissions to do this. After creating the new
node, rename it from the default name by editing its label.
Rename

Rename the selected node. Users must have Security Write permission for the
node. The name may be modified by editing the node's label. Each node name in
a given path must be unique.
Delete

Delete the selected node, all child nodes and all jobs inside of all affected nodes.
Users must have Security Write permission for the node. Users are warned before
the node is deleted, and are not allowed to delete the node if any job in the node
or any of its child nodes is currently scheduled to run.
Export

Export details of all the jobs in the selected node to a specified destination. All job
details, such as constraints, contexts, tasks, and LMSs are exported, although by
default, results are not exported. Exporting a node across a datastore requires
Security Write and Node Write permissions. Exporting to a disk requires Write
permission on the local computer hard drive.
Import

Import details of all jobs in the selected node from a specified source. Importing
requires Security Write and Node Write permissions.
Cut, Copy, Paste, Drag and Drop

Move selected nodes to other locations, within Job Explorer or across the
datastores. For all operations, all jobs present inside the original node are
transferred recursively. A Cut operation requires Security Write permission. A
Copy operation requires no permission, but Paste and Drag and Drop
operations require Security Write and Node Write permissions.
View Results

Display Results for the scheduled jobs in the selected feature. This opens the
Result Explorer screen.
Properties

Allow the current user to add Feature-level Write security permissions for other
users. Access to the Properties dialog requires Security Write permission on the
node.

Exporting and Importing Jobs from Job Explorer


With the Export command, the user can export a job, including all associated
parameters, dimensions, attributes, mixes, library jobs, as well as feature and
category settings. Using the Import command, the user can import jobs from the
specified source.

To export Jobs
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.

2. Click a Feature node and then select a job or jobs to be exported.
3. On the File menu, click Export Job.
Note: The Export command is also available by right-clicking the
selected job and then clicking Export.
4. Use the default location for the export destination or click Browse to
find another location, and then click Start.
5. Click OK.

To import Jobs
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. On the File menu, click Import Jobs.
3. In the Import Source Directory group box, type the directory path for
the jobs to be imported in the text box or click Browse to select the
directory.
4. In the Feature Handling group box, select the feature location into which
you wish to import the jobs:
If you wish the jobs to retain their original feature hierarchy mapping,
select Import hierarchy and append to and use Browse to select
the feature location.
Note: In some cases, users may not have permission to import a
job to the feature hierarchy specified by the job being imported. If
the On Error, remap to check box is selected, users may specify
where in the feature hierarchy these jobs should be imported if a
permissions error occurs. Otherwise, the job will not be imported if a
permissions error occurs.

If you do not wish to retain the imported jobs' original feature hierarchy
(effectively flattening that hierarchy), select Ignore hierarchy and
remap to and use Browse to select the desired feature node.
5. In the Category Handling group box, select the category location to
which you wish to import the jobs:
If you wish the jobs to retain their original category hierarchy mapping,
select Import hierarchy and append to and use Browse to select
the category node to which you wish to append the imported jobs.
If you wish to drop all category mappings for the imported jobs, select
Ignore hierarchy.
o If you wish to remap all imported jobs to a specific category
(without the previous category mappings), select the Remap all to
check box and use Browse to select the category to which you
wish to append the imported jobs.
6. In the Job Collision Handling group box, select the options appropriate
for importing these jobs. These options include:
On collision, create copy (generate GUID): This option will
create a copy of the Job to be imported with a new GUID so as to
not overwrite the existing Job with the same GUID.
Overwrite: This option will overwrite (if permission allows) the
existing job with the same GUID. This option is also available if the
Job Name, Job Owner, or Feature Hierarchy are the same.
Note: If you use one or more of these options, you must also specify
Copy (generate GUID) or Do Not Import for the case in which the
GUID matches but the options you selected do not.
Prompt during import: Will prompt you during the import about what
to do if a job GUID collision occurs. At that time you will be able to
select Overwrite, Copy (generate a GUID), or Do not Import.
Do Not Import: Will fail any job to be imported whose GUID matches
that of an existing job.
7. In the Library Job Collision Handling group box, select the options
appropriate for importing these jobs. These options include:
Use Job Options for Library Job Collisions: This option will
treat imported library jobs in the same fashion as other imported
jobs.
On collision, create copy (generate GUID): This option will
create a copy of the library job to be imported with a new GUID so
as to not overwrite the existing library job with the same GUID.

Overwrite: This option will overwrite (if permission allows) the


existing library job with the same GUID. This option is also
available if the Job Name, Job Owner, or Feature Hierarchy are
the same.
Note: If you use one or more of these options, you must also specify
Copy (generate GUID) or Do Not Import for the case in which the
GUID matches but the options you selected do not.
Prompt during import: Will prompt you during the import what to
do if a library job GUID collision occurs. At that time you will be
able to select to Overwrite, Copy (generate a GUID), or Do not
Import.
Do Not Import: Will fail any library job to be imported whose GUID
matches that of an existing library job.
8. In the Global Mix Handling group box, select the appropriate options for
importing these jobs, since jobs will attempt to import their associated
global mixes. These options include:
Use existing: Will not import the global mix and use the already
existing global mix with the Imported Job.
Overwrite: This option will overwrite (if permission allows) the
existing global mix, with the global mix being imported.
Prompt: This option prompts the user at import time to select the
Use existing or Overwrite option in the case of a global mix
collision.
9. In the Global Parameter Collision Handling group box, select the
appropriate options for importing these jobs, since jobs will attempt to
import their associated global parameters. These options include:
Use existing: Will not import the global parameter and use the
already existing global parameters with the imported job.
Overwrite: This option will overwrite the existing global
parameter, with the global parameter being imported.
Prompt: Will prompt the user at import time to select the Use
existing or Overwrite option if a global parameter collision is
present.
10. In the Attribute Addition group box, select the options appropriate for
importing these jobs when the imported attributes do not match those you
are currently using. These options include:
Add any missing attributes on import: This option will attempt
to create any missing attributes associated with the imported job (if
permissions allow).

Ignore missing attributes, use existing: This option will attempt
to map the imported attributes to existing attributes; all mismatching
attributes will be ignored.
Drop all attribute mappings: This option will ignore all imported
attribute mappings, and the imported job will have no associated
attributes.
11. Click Start.

To add permissions for a user of a Feature node


Security Write permissions are required to grant feature-level Write
permissions for other users.
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. Right-click a Feature node, and then click Properties.
3. On the Security tab, all authorized users of the Feature node are
listed in the User/Group list.
4. To see the permissions for a listed user, click the user's name.
5. To add a new user, click Add.
6. Select the user from the user list, and then click OK.
7. Select the permissions to grant to the new user.
8. Click Apply, and then click OK.

To change or remove permissions for a user of a Feature node


1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. Right-click a Feature node, and then click Properties.
3. On the Security tab, click the target user's name.
4. To change permissions, select or clear the new permission check box.
5. To remove the user, click Remove.
6. Click Apply, and then click OK.

Job Explorer Short-Cut Commands


When you right-click a job in Job Explorer, a shortcut menu is displayed
containing the following commands:
Schedule

Schedules the selected jobs. See Using the Scheduler for more information.
Insert Results

Inserts results for the selected jobs into the results log. Manual results can be
logged from Job Explorer using this command.
Insert Results as List

Bulk inserts multiple job results, as long as the jobs contain similar
configurations.
View Results

Displays the results associated with the one or more selected jobs. For detailed
procedures, see Viewing Job Results.
Report

Presents the Job Report for one or more selected jobs in a printable format. The
report contains job details such as the job's common constraints, mixes, and
context information, along with its constraints, task details, and LMS details.
Edit

Allows you to edit the details of the selected job. The command also applies
when multiple jobs are selected; in bulk edit mode only the General and
Attribute details of the jobs can be edited. A job can also be edited by
double-clicking it in the Job List view. This opens the job in read-only mode;
to edit the job, click the Edit Job button. For detailed procedures, see
Creating and Editing Jobs.
Categories

Adds or removes the selected job from the selected category. This command is
available for the job only if you select a Category in the tree view pane first.
Delete

Deletes single or multiple jobs. You are asked for confirmation before deletion.
Deletion of a job is not permitted if the selected job is scheduled.
Export

Exports details of the selected jobs in the selected feature to the entered
destination. This exports the job details associated with the job, such as
constraints, contexts, tasks, and LMSs; by default it does not export the
results corresponding to the selected jobs.
Add / Remove Columns

Adds or removes the selected column in the Job Explorer list view display.
Sort Columns

Moves the selected column in the Job Explorer list view display.
Column Chooser

Allows the user to select field names for columns in the Job Explorer list view display.

Exporting Jobs Details


You can select one or more jobs from the Job Explorer query results list for the
export operation.

To export details of selected jobs


1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.

2. Click a Feature node and then select a job or jobs to be exported.
3. Right-click the selected job and then click Export.
4. Use the default location for the export destination or click Browse to
find another location, and then click Start.
5. Click OK.

Using the Job Explorer Query Function


Users can search for jobs with common attributes within a controller using the
Query Builder within Job Explorer. Either Categories or Features may be
searched by selecting the node to be searched, constructing a query in Query
Builder to search for common job attributes, and then clicking the Refresh
button.

To Query a Job in Job Explorer


1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. If the Query Builder panel is not displayed, click the Show Query
Builder button.
3. Select the node on the Feature tab or Category tab under which you
wish to search for a job.
Note: Selecting a node will only permit a search of that node. Any
child nodes will require a separate query.
4. To perform a simple query, click the Job Simple Query tab and define
the search parameters desired:
Job Contains - string or substring present in the name of the job.
Task Contains - string or substring present in the name of the task.
Param Contains - string or substring present in the name of the
parameter.
Assigned To - jobs assigned to the particular user selected from the
drop-down list.
Priority - priority associated with the job.
IsActive - limit the query to active or inactive jobs, or search all jobs.
Attributes - jobs containing the attribute selected from the
drop-down list.
Choose Dimension - jobs with the dimension selected from the
drop-down list.

To perform an Advanced Query


1. To perform an advanced query, click the Advanced Query tab and define
the search parameters:
Note: The top row in the query clause indicates the datastore being
searched. This defaults to the datastore currently being utilized but
may be modified if desired.
Click beneath the first row in the And/Or column, and then choose an
And/Or operator from the drop-down list.
In the Field Name column, select a parameter to search for from the
drop-down list.
In the Operator column, select a query operator from the drop-down list.
In the Value column, type a search string or value for which to search.
2. Add additional query clauses if desired.
3. After defining a query, click the Refresh button.
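For readers who find it helpful to picture the query grid programmatically, the
following minimal sketch models the clause list and its left-to-right evaluation.
The field names, operators, and matching logic are illustrative assumptions for
this example only, not the actual WTT Query Builder implementation.

# Minimal sketch of an advanced query's clause grid. The field names,
# operators, and evaluation order here are illustrative assumptions,
# not the actual WTT Query Builder implementation.

clauses = [
    # (and_or, field, operator, value); the first clause has no And/Or
    (None,  "Name",       "Contains", "Stress"),
    ("And", "Priority",   "=",        "1"),
    ("Or",  "AssignedTo", "=",        "DOMAIN\\wilma"),
]

def matches(job, clauses):
    """Evaluate the clause list left to right, as the grid displays it."""
    result = None
    for and_or, field, op, value in clauses:
        if op == "Contains":
            hit = value in str(job.get(field, ""))
        else:  # simple "=" comparison
            hit = str(job.get(field, "")) == value
        if result is None:
            result = hit
        elif and_or == "And":
            result = result and hit
        else:
            result = result or hit
    return result

job = {"Name": "Disk Stress", "Priority": "1", "AssignedTo": "DOMAIN\\fred"}
print(matches(job, clauses))  # True: the name and priority clauses match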

To Create a Complex Query involving a Sub-Object


1. Click the Advanced Query tab and define the search parameters:
In the And/Or column, choose And from the drop-down list.
In the Field Name column, select a parameter to search for which has a
+ sign appended to it from the drop-down list.
In the Operator column, the allowed operators are Has or Has Not.
Do not select a value in the Value column.

Figure 4.5 Advanced Query parameters (Complex Query)

2. A new query row will be formed automatically. Add another query clause
and define search parameters:
Click beneath the first row in the And/Or column, and then choose an
And/Or operator from the drop-down list.
In the Field Name column, select a parameter to search for from the
drop-down list.
In the Operator column, select a query operator from the drop-down list.
In the Value column, type a search string or value for which to search.
3. Add additional query clauses if desired.
4. Click the Refresh button.

Note: Incorrect values will be ignored during query formation. For
example, if the Field Name selected is GUID, and the value provided is
not a valid GUID, the clause will be ignored.

Figure 4.6 Completed Complex Query

To Group/Ungroup clauses within Advanced Query Builder


Clauses may be grouped together when they are at the same level of hierarchy in
the Field Name column, for example, when two clauses are both listed under a
Field Name of Task List (+).

1. Within Advanced Query Builder, select the clauses to be grouped
together (or alternatively, the clauses to ungroup).
2. Right-click the clauses and select Group Clauses or
Ungroup Clauses depending on your need.
3. The grouping is represented by lines joining the beginning and
end clauses.

Figure 4.7 Grouped Query Clauses
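Conceptually, a group behaves like a parenthesized sub-expression that is
evaluated before being combined with its neighbors. The following sketch
illustrates this with nested lists; the representation and field names are
illustrative assumptions, not WTT internals.

# Illustrative sketch only: a group of clauses behaves like a parenthesized
# sub-expression. A group is represented here as a nested list of
# (and_or, child) pairs that is evaluated before joining its neighbors.

def evaluate(node, job):
    if isinstance(node, list):              # a group: evaluate members together
        result = None
        for and_or, child in node:
            hit = evaluate(child, job)
            if result is None:
                result = hit
            elif and_or == "And":
                result = result and hit
            else:
                result = result or hit
        return result
    field, op, value = node                 # a single clause
    if op == "Contains":
        return value in str(job.get(field, ""))
    return str(job.get(field, "")) == value

# (Name Contains "DFS") And ((Task Contains "setup") Or (Task Contains "cleanup"))
query = [
    (None,  ("Name", "Contains", "DFS")),
    ("And", [(None, ("Task", "Contains", "setup")),
             ("Or", ("Task", "Contains", "cleanup"))]),
]
print(evaluate(query, {"Name": "DFS replication", "Task": "run cleanup script"}))  # True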

Using the Scheduler


WTT Scheduler is used to run a single job or a group of jobs. WTT Scheduler is
accessed through Job Explorer, by right-clicking a job (or group of jobs) in the
Job Explorer (Test Cases) query list view and then clicking Schedule on the
shortcut menu.
During scheduling, various constraints and options can be applied to the selected
jobs. Based upon these options, WTT Scheduler creates a Result, which is a
scheduled instance of a job. Each Result generated by the Scheduler is associated
with specific information, including the computers on which the job will run,
parameters being used, and the Result Collection associated with the Result.
Note: A controller must be selected prior to scheduling jobs as the WTT
Scheduler cannot schedule across multiple controllers.

The following subjects are included in this section:


Schedule Toolbar Options
Creating a Schedule
Adding Constraints to a Schedule
Setting Schedule Options
Setting Mailing Options for a Schedule

Schedule Toolbar Options


The following toolbar buttons are available on the Schedule Jobs form.
Open a saved file

Opens an existing file saved for scheduling. You can update the information again
and overwrite it.
Save current component to a file

Saves current schedule data, including all settings.


Print the current component data

Prints current schedule data.


Create schedule

Creates the new schedule.

Creating a Schedule
The process to create a schedule consists of the following procedures:

To create a job schedule


1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore drop-down list.

3. On the Feature tab, click the desired node and then click the Refresh
button.
4. Right-click the job to schedule, and then click Schedule.
Note: More than one job may be scheduled at a time by holding down
the CTRL key, and then clicking each job to be scheduled.
5. On the Machines tab, select from the drop-down list a machine (asset)
pool from which to schedule this job.
Note: You must have Write permissions for the selected machine pool in
order to schedule jobs to it.
Machine pools that have the property Schedule as a unit are prefixed
with *. Schedule as a unit behavior means that if a computer from a
machine pool with that property is selected, then all computers in that
pool are reserved, but the deployment will only be on the selected
computers.
6. To schedule the job only on specific computers within this machine pool,
select the Restrict Machine Selection to Specific Machines in
Machine Pool check box, and then in the list view select the check boxes
for the computers to be considered for scheduling, clearing those not to
be scheduled.
7. On the Schedule Options tab, review the options for results location,
timing, and schedule behavior.
8. When all desired options have been set, click the Create Schedule
button.

Adding Constraints to a Schedule


Constraints can be added within a Schedule in addition to any constraints
attached to the job when it was created. In addition, you can add common
contexts and global or custom mixes in the same manner.
Conflicts can occur between a job's constraints and those of the schedule if they
are contradictory. These conflicts can be viewed by clicking the Conflicts button
available on the Schedule Constraints tab when scheduling a job.

To add constraints to a schedule


1. When creating a job schedule, select the Schedule Constraints tab.

2. In the empty cell in the Dimension column, select a dimension for this
constraint from the drop-down list.
3. In the Operator column, select an appropriate operator from the drop-down list.
4. In the Value column, type a value for the dimension.
Note: The dimension value may be a single value or alternatively, a list
of values depending on the selected dimension and operator.
5. Continue creating the job schedule.

Setting Schedule Options


On the Schedule Options tab you can specify Run settings and Schedule behavior.
The Result Collection to be associated with the results can also be selected.

To add options to a schedule


1. When creating a job schedule, select the Schedule Options tab.
2. Type a name in the Result Collection box or select a file using the
Browse button.
3. From the Start Trying to Schedule this Job drop-down list, select when
the Scheduler should start trying to schedule the selected job. If you
choose No Sooner Than, enter a time and date in the Date box as well.
4. Enter a time limit for the scheduler in the Give up trying to schedule
after boxes.
5. Enter a time-out limit for the job in the Time out boxes.
6. Select the Schedule Behavior Settings that fit the needs of your test
case:
Normal - The normal Scheduler is set to schedule the job on the
timeframe you specified above.
Smart Scheduler - The Smart Scheduler will automatically schedule
your job, and will modify your test computer to make it fit the job
constraints you have set if necessary.
Manual - The normal Scheduler is set to schedule the job, but does not
create the Run records.
Multiplier Count - Selecting this option schedules the job to run more
than once with the same constraints applied for all runs. For each
result created in the schedule, copies of the Result equal to the
multiplier count are generated and then scheduled. This count is
primarily designed for stress testing.
Note: The number of runs is scheduled using the box to the right,
which carries a default setting of one instance.
Private run - Allows a job to be run, but the results will not be
logged.

Setting Mailing Options for a Schedule


With Schedule Mailing Options, users can specify who will be notified about the
schedule and what information to include in the message.

To set mailing options for a schedule


1. When creating a job schedule, select the Mailing Options tab.
2. In the Email and CC boxes, type the e-mail addresses of the users to
whom you wish to send the messages.
Note: The Email setting defaults to the e-mail address of the job
creator.
3. Select which information about the job/test case to send by e-mail.
4. When you finish scheduling the job, click the Create Schedule button.

Scheduler Fundamentals
The Scheduler is responsible for finding physical computers in a machine pool
that satisfy all the requirements for all the constraints in a given Run, while
considering the user's permissions on the machine pool and computers.

Terminology
Heartbeat

A message sent from the test client to the controller that validates the
condition of the computer.
Job Delivery Agent

The component responsible for the interaction between the test computers
and the controller.
Run

A group of job instances that can be scheduled as a unit.

Scheduler Prioritizing
Scheduler is a backend component which resides in the WTT 2.0 database and is
invoked every 10 seconds, attempting to allocate computers for runs that need to
be executed.

The Scheduler will find the runs that need to be scheduled based on the schedule
start time and try to schedule them in the machine pool in which they need to be
run. The machine pool in which the jobs need to be run may have computers
(called free pool computers) and sub pools attached to it. The Scheduler
considers the following while scheduling a run:
Whether the user has "execute" permission on the computers to be used.
Whether the computers are within a machine pool marked as Schedule as a Unit.
Consider the following scenario: Machine Pool MP1 contains computers M1, M2,
M3 and sub pool MP2. MP2 in turn contains computers M4, M5, M6, M7 and sub
pool MP3. MP3 contains computers M8, M9, M10, M11, M12 and MP3 is marked as
Schedule as a Unit. This can be seen in the following diagram:

Figure 4.8 Machine (Asset) Pool tree view


The Scheduler will give highest priority to free pool computers within the machine
pools. Computers in a machine pool that are marked as Schedule as a Unit are
given the lowest priority.
In this scenario, Scheduler would attempt to schedule the run in the following
order:
1. In the computers of MP1 and MP2 (M1, M2, M3, M4, M5, M6, M7)
2. In the computers of MP3 (M8, M9, M10, M11, M12)
The Scheduler will schedule a run across machine pools/sub pools if the machine
pool does not have the Schedule as a Unit flag set. If the Schedule as a Unit
flag is set, then the computers in that machine pool would be reserved exclusively
for the run. In this case, the Scheduler will either schedule a run among
computers M1, M2, M3, M4, M5, M6, M7 or computers M8, M9, M10, M11, M12,
but not across both sets.
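The priority ordering in this scenario can be sketched as follows; the pool class
and traversal are illustrative assumptions, since the actual Scheduler runs
inside the WTT database rather than in client code.

# Sketch of the priority ordering described above, using the MP1/MP2/MP3
# scenario. The MachinePool class is an assumption for the example.

class MachinePool:
    def __init__(self, name, computers, schedule_as_unit=False, sub_pools=()):
        self.name = name
        self.computers = computers
        self.schedule_as_unit = schedule_as_unit
        self.sub_pools = list(sub_pools)

def candidate_sets(pool):
    """Yield computer sets in the order the Scheduler would try them:
    free pool computers first, Schedule as a Unit pools last."""
    free, unit_pools, stack = [], [], [pool]
    while stack:
        p = stack.pop()
        if p.schedule_as_unit:
            unit_pools.append(p)       # reserved exclusively, lowest priority
            continue
        free.extend(p.computers)       # free pool computers, highest priority
        stack.extend(p.sub_pools)
    yield free
    for p in unit_pools:
        yield list(p.computers)

mp3 = MachinePool("MP3", ["M8", "M9", "M10", "M11", "M12"], schedule_as_unit=True)
mp2 = MachinePool("MP2", ["M4", "M5", "M6", "M7"], sub_pools=[mp3])
mp1 = MachinePool("MP1", ["M1", "M2", "M3"], sub_pools=[mp2])

for computers in candidate_sets(mp1):
    print(computers)   # M1-M7 first, then the Schedule as a Unit set M8-M12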

Additional Scheduler Considerations

Scheduler will schedule runs only on computers that have a public key.
Scheduler will schedule runs only on computers that have a heartbeat
registered in the last 30 minutes.
Scheduler will schedule runs only on computers that are in a Ready state
and not executing any other run.
Scheduler will schedule runs on computers having IsExplicit dimensions
only if that dimension is asked for by the run.
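As an illustration, these eligibility rules can be combined into a single filter,
as sketched below; the record layout and field names are assumptions for the
example.

# Sketch of the eligibility rules listed above, using an assumed computer
# record layout. Times are expressed as seconds since the last heartbeat.

HEARTBEAT_WINDOW = 30 * 60   # a heartbeat registered in the last 30 minutes

def is_eligible(computer, run):
    if not computer["has_public_key"]:
        return False
    if computer["seconds_since_heartbeat"] > HEARTBEAT_WINDOW:
        return False
    if computer["status"] != "Ready" or computer["executing_run"]:
        return False
    # IsExplicit dimensions must be asked for by the run itself.
    explicit = {d for d in computer["dimensions"] if d in computer["is_explicit"]}
    return explicit <= set(run["requested_dimensions"])

computer = {"has_public_key": True, "seconds_since_heartbeat": 120,
            "status": "Ready", "executing_run": False,
            "dimensions": {"OSBuild": "4123", "PrivateBinaryID": "42"},
            "is_explicit": {"PrivateBinaryID"}}
print(is_eligible(computer, {"requested_dimensions": ["PrivateBinaryID"]}))  # True
print(is_eligible(computer, {"requested_dimensions": []}))                   # False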

Twin Scheduler
Twin Scheduler is a backend component that is invoked by the Job Delivery
Agent. Like the Scheduler, the Twin Scheduler is responsible for finding physical
computers in a machine pool that satisfy all the requirements for all the
constraints in a given run.
Once a run is executed and the Job Delivery Agent returns with the run data,
instead of freeing the computers used by the run, the Job Delivery Agent will
invoke the Twin Scheduler. The Twin Scheduler will check if any other run having
the same constraint(s) is waiting to be scheduled. If it finds any run matching the
above-mentioned criteria, it will schedule the run on the same set of computers.
Twin Scheduler Considerations
Twin scheduling can be done only if the computers still satisfy the run
constraints.
Twin scheduling can be done only if all the computers are still associated
to the same machine pool(s).
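The twin-scheduling decision can be pictured as follows: when a run finishes,
look for a waiting run with identical constraints before freeing the computers.
The record shapes in this sketch are assumptions for the example.

# Illustrative sketch of the twin-scheduling check described above.

def find_twin(finished_run, waiting_runs, computers):
    for run in waiting_runs:
        # A twin must have the same constraints as the finished run...
        if run["constraints"] != finished_run["constraints"]:
            continue
        # ...and the computers must still satisfy those constraints and
        # still belong to the same machine pool(s).
        if all(c["pool"] == c["original_pool"] and
               run["constraints"] <= c["dimensions"] for c in computers):
            return run
    return None   # no twin found: free the computers as usual

computers = [{"pool": "MP2", "original_pool": "MP2",
              "dimensions": {("OSBuild", "4123"), ("IIS", "installed")}}]
finished = {"constraints": {("OSBuild", "4123")}}
waiting = [{"constraints": {("OSBuild", "4123")}}]
print(find_twin(finished, waiting, computers))   # the waiting run is a twin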

Smart Scheduler
In WTT Jobs, testers create jobs to perform a wide variety of common tasks.
Some jobs are created to prepare computers to run tests, some are created to
clean up computers after running, some are created to run the tests themselves,
and some are created to do a combination of the three. These different roles can
be classified as setup, cleanup, and regular jobs. This distinction is used to help
in the organization and the scheduling of jobs.
This approach, however, requires that the correct jobs run in the right order so
that the tests may run successfully on a properly prepared computer. Because the
setup, regular, and cleanup jobs are not associated with each other, the tester
must use their expert knowledge in order to ensure this. This process is time
consuming as well as prone to errors. It also makes sharing tests across teams
more difficult.
Additionally, the standard Scheduler cannot make connections between jobs of
different roles, and it therefore cannot make optimizations that cut down on the
number of setup and cleanup jobs that get run. For example, suppose two jobs
share the same setup and cleanup jobs but run different tests. With an optimized
process, these tests could be run one after the other with just one setup and
cleanup instead of repeating these steps unnecessarily.
Smart Scheduler addresses such scenarios. Intelligence is built into Smart
Scheduler to understand the effect that running a particular job (known as a
config job) has on the dimensions of a computer. Smart Scheduler understands
this information and makes decisions accordingly. If a particular dimension value
is required, Smart Scheduler can try to find a computer that already has that
dimension value or locate a job that produces that dimension value. In the latter
case Smart Scheduler will automatically execute that job on the computer so
that the originally chosen job can be executed. In addition, if multiple jobs that
require that dimension are queued, then the Smart Scheduler can optimize the
process by ensuring that the jobs run on the same computer, thus avoiding
running the same setup job again and again.
The Smart Scheduler uses the config jobs to prepare the computers as per the
requirements of the run. The config jobs may be either Set Operations or Delete
Operations:
Set Operation - If a value is specified, then this job will only be run when
the value specified matches that required by the calling job. This can be
used to distinguish setup jobs that have the same value set but need to
run different varieties according to the current value. If a parameter name
is given, then the parameter is given the dimension value requested by the
job that initiated the setup job. This allows the job itself to de-reference
the requested value in command lines.
Delete Operation - If a dimension is specified, then this job will be run
when the dimension needs to be deleted from the computer. This setting is
used when, for example, a proprietary application is installed and tests
are run using it; additional jobs should not be scheduled on that
computer until the application has been removed.
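The core Smart Scheduler idea, finding a computer that already has a required
dimension value or else a config job that can produce it, can be sketched as
follows; all names and record shapes here are illustrative assumptions.

# Sketch of dimension resolution as described above. The record shapes
# and job names are assumptions for the example.

def resolve(dimension, value, computers, config_jobs):
    for computer in computers:
        if computer["dimensions"].get(dimension) == value:
            return ("use", computer["name"])          # already prepared
    for job in config_jobs:
        if (job["op"] == "set" and job["dimension"] == dimension
                and job.get("value") in (None, value)):  # value-less jobs match any
            return ("run", job["name"])               # run the config job first
    return ("fail", None)

computers = [{"name": "M4", "dimensions": {"WilmaTeamSetupDone": "False"}}]
config_jobs = [{"name": "TeamSetup", "op": "set",
                "dimension": "WilmaTeamSetupDone", "value": "True"}]
print(resolve("WilmaTeamSetupDone", "True", computers, config_jobs))
# ('run', 'TeamSetup'): no computer has the value, so run the setup job first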

Common User Scenarios and Best Practices


For best use of the schedule, several standard practices are useful to keep in
mind:
If the smart scheduling feature is not required, clear the Use Smart
Scheduler to Dynamically Add Jobs to satisfy Constraints option in
the Schedule Edit UI. This will ensure faster scheduling of jobs.
Mark the machine pool as Schedule as a Unit only when required.
Avoiding this flag when it is not needed ensures maximum and efficient
utilization of the computers in the machine pool.

Additionally, the following common user scenarios may assist testers in utilizing
the Scheduler to their best advantage within the WTT framework.
For additional tips and suggestions for use, see Appendix C: Best Practices.

Common Team Setup (Sharing)


Wilma's team has built up some automation utilities that all their test cases use.
These utilities include executables, batch scripts and DLLs. All the tests written by
people in the same team need to have these files copied to the computer and a
simple setup script executed that updates the path to include the install directory.
This process can take a while, so it is desirable that a computer that has had the
files installed not repeat the process.
Using the Smart Scheduler makes this simple: Wilma creates a setup job that
does the install and creates a WilmaTeamSetupDone user dimension which is set
to true by the setup job. To keep things in order she also creates a cleanup job
that deletes the files and undoes the action of the script. She sets this job to
delete the WilmaTeamSetupDone dimension.
To ensure that her tests execute in the right environment she simply adds
WilmaTeamSetupDone=True to the constraints for the tests that need this setup.
Whenever anyone in the team runs these tests the Smart Scheduler will ensure
that if a computer is available with the setup job already complete, that computer
is selected first for the tests.

Private Binary Installation


Barney is a developer who has just built a fix to a bug assigned to him. His fix is
to a system binary and he needs to test the fix before checking it in. The tests to
verify the fix come from the test organization and he would like to be able to
easily run these with his private binary installed.
Smart Scheduler, working with the Privates infrastructure, makes this easy:
Barney uploads his binary using the Privates infrastructure and selects the tests
to run. The Privates infrastructure adds a special constraint (for example,
PrivateBinaryID) that indicates the private binary that needs to be installed. This
particular dimension is also set up to indicate that it must be explicitly asked for
by jobs (IsExplicit) so that we prevent accidental scheduling of tests on
computers with installed privates. The PrivateBinaryID dimension is added to the
schedule so all jobs in the schedule will be given this constraint. Smart Scheduler
will then attempt to find computers that have the private installed and, since
there are none, will look for the job that can generate the required dimension
value. It will then find and run the private binary install.

This allows Barney, without knowing anything about the test details, to run the
tests that exercise his binary in the right way and validate his fix without having
to wait for a build release cycle.

Using Tests from Another Team


Fred works on server clusters testing DFS. Recent changes in the clustering
product are likely to impact DFS. To be certain that he covers DFS in his testing,
Fred decides he should run some of the DFS team's tests. He navigates to the
DFS test tree and adds a test from there to his schedule, which already includes
his cluster tests. Unbeknownst to Fred, the DFS test needs some setup jobs to run
first.
Smart Scheduler looks at the jobs Fred ran and recognizes that the DFS job
needs the additional setup. By querying the runtime controller, the scheduler
finds the setup job for the DFS test and automatically adds it to the run. It also
finds the cleanup job and adds it to run after the DFS test. All of this happens
without Fred needing to worry about it; his testing is always performed in the
right environment.
Without Smart Scheduler, Fred would likely have to consult the DFS team about
how to run their test.

SQL, IIS, and Operating System Installation


Dino works in SQL. He tests the latest IDW builds of the operating system with
SQL and Microsoft Internet Information Services (IIS) installed. Most of his jobs
have constraints that involve SQL, IIS and the operating system build number as
parameters. When he runs his tests he fills in the parameters according to the
current milestone and IDW build. He selects a lab and Smart Scheduler examines
the computers. If the current setup does not match, Smart Scheduler will find the
jobs that are required to install the operating system, install and configure IIS,
and then install SQL. Clearly, it is important that these jobs run in the right order
or the operating system installation could wipe out the SQL dimension, or the SQL
install (which in this case requires IIS) could run before IIS and fail.
Such scenarios are handled by Smart Scheduler. The SQL job has constraints on
IIS and the operating system, and the IIS job has constraints on the operating
system. The operating system job has no constraints and is marked as a fresh
install. From this information, Smart Scheduler determines a run order and adds
the jobs to the schedule automatically.
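Determining the run order in this scenario amounts to ordering the config jobs
by their constraint dependencies, as the following sketch illustrates. The
dependency graph and job names are assumptions for the example; Smart
Scheduler's internal algorithm is not exposed.

# Sketch of deriving a run order from constraint dependencies, as in the
# SQL/IIS/operating system scenario above.

def run_order(jobs):
    """jobs maps a job name to the set of job names it depends on."""
    ordered, placed = [], set()
    while len(ordered) < len(jobs):
        for name, deps in jobs.items():
            if name not in placed and deps <= placed:
                ordered.append(name)
                placed.add(name)
                break
        else:
            raise ValueError("circular constraints")
    return ordered

jobs = {
    "InstallOS":  set(),                        # fresh install, no constraints
    "InstallIIS": {"InstallOS"},                # IIS constrained on the OS
    "InstallSQL": {"InstallOS", "InstallIIS"},  # SQL constrained on OS and IIS
}
print(run_order(jobs))   # ['InstallOS', 'InstallIIS', 'InstallSQL']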

Smart Scheduler Considerations


Smart scheduling is done only if the smart scheduling option is enabled for
the given machine pool.
Smart scheduling is done only if the smart scheduling is enabled for the
schedule.
While smart scheduling, preference is given to computer(s) which have
been idle for the longest period of time.
While smart scheduling, preference is given to computer(s) having
IsExplicit dimensions.

Chapter 5: Job Results


After a job is scheduled, its progress can be tracked in the Job Monitor, the Result
Explorer, or the Result Collection Explorer. Each provides a view of the job's
progress from a different viewpoint. Job Monitor organizes the jobs by the
machine pools to which they belong; Result Explorer does the same, except that
it organizes the jobs by the Feature node or Category to which they belong.
Result Collection Explorer provides an overall view of the individual jobs and their
results.
In each, users may right-click selected results to perform operations on them.

The following subjects are included in this chapter:


Monitoring Jobs
Using the Result Explorer
Using Result Collection Explorer
Test case management

Monitoring Jobs
Job Monitor is used to track the status of a job or task on the machine pool on
which it was scheduled. It can also show the current status of the machines
within the selected machine pool, so the user can monitor the computers
themselves. The Job Monitor lists all results of the jobs executed on the
machines in the selected machine pool.

The following subjects are included in this section:


Job Monitor Toolbar Options
Machine Pool Short-Cut Commands
Machine List View Short-Cut Commands
Job Execution Status View Short-Cut Commands
Task Execution Status View Short-Cut Commands
Querying results in Job Monitor
Quick Schedule of a job on computer(s)

Job Monitor Toolbar Options


The following toolbar options are available in Job Monitor.

Open a saved file

Opens a saved file in the Job Monitor.


Save current component to a file

Saves the Job Monitor component or explorer data. This can be used to save
filter settings and selected features or categories. It also saves display
information from the Job Monitor such as column widths, and columns selected
for display. It does not, however, save the contents of a list view.
Print the current component data

Prints the component / explorer data


Refresh

Retrieves the results of any query from the datastore that the user has built.
Show/Hide Hierarchy

Displays the left pane of the Job Monitor, showing the Asset Pool Hierarchy.
The default setting for the Hierarchy button is On.
Show/Hide Query Builder

Displays Query Builder in the right pane of the Job Monitor, allowing users to
run simple or advanced queries. The default setting for the Show Query Builder
button is Off.
Show/Hide Task List

Displays the Task Execution Status list in the right pane. The default setting for
the Show/Hide Task List button is On.
Show/Hide Machine List

Displays the machine list in the right pane. The default setting for the
Show/Hide Machine List button is On.

Datastore

Displays the associated controllers that host the Jobs Definition and Jobs Runtime
services. This is the first drop-down list on the Job Explorer toolbox.

Machine Pool Short-Cut Commands


Schedule

Displays the Run Job user interface, allowing users to select a valid single
job to be scheduled on the selected computers, with schedule time parameters.
Add Machine Pool

A new machine pool may be added under any other machine pool, providing the
user has Write permissions to the parent machine pool.
Manage LLU

Users may create, update, or delete Local Logical Users (LLU) and Local Symbol
Users (LSU) on the set of computers in the selected machine pool.
Delete

Allows users to delete the selected machine pool and all child machine pools. All
the machines are then moved to the Default Pool. This requires Write permissions
on the parent machine pool.
Rename

Allows users to rename the selected machine pool, providing they have Write
permissions on the pool.
Properties

Displays the General and Security (permissions) properties for the selected
machine pool.

Machine List View Short-Cut Commands


Manage LLU

Users may create, update, or delete Local Logical Users (LLU) and Local Symbol
Users (LSU) on the selected computers in the computer list view.
Move

Allows users to move the selected computer from the current machine pool to the
selected machine pool.
Change Status

Allows users to change the status of selected computer(s).


Schedule

Displays the Run Job user interface, allowing users to select a valid single
job to be scheduled on the selected computers, with schedule time parameters.
Latest HW Configuration Log

View the latest hardware configuration log.

Add / Remove Columns

Adds or removes the selected column in the Computer List View display.
Sort Columns

Moves the selected column in the Computer List View display.


Column Chooser

Allows users to select the field names for each column in the Machine List View
display.

Job Execution Status View Short-Cut Commands


Job Report

Presents a report about the selected job in a printable format. The menu can be
used for multiple job selection. This report contains job details including common
constraints, mix, context information with associated constraints, tasks details,
and LMS details.
Result Report

Displays the Result report in a printable format. This menu is applicable for more
than one selection.
Cancel

Marks a particular result for cancellation. A particular result for a component can
be cancelled only if the execution of the job in that component, such as Job
Scheduler or EA, is stopped.
Add To Result Collection

Adds a single result or multiple results to a particular result collection. A result
collection can be queried by applying different filters. Results are added to the
Result collection and returned as a result of the query. A result can be added to
multiple result collections.
Remove Result From Collection

Removes particular result(s) from the corresponding collection. Confirmation is
made before removing the result.
Trigger Execution

Triggers an automated job for execution if it is scheduled as manual. Confirmation
is made before changing the pipeline of the scheduled result.
Edit Result

Allows editing of a single result or bulk entries.


Delete

Allows deletion of a single result or bulk entries.


View Error

Displays errors corresponding to the selected job result.

Add / Remove Columns

Adds or removes the selected column in the Job Execution Status list view
display.
Sort Columns

Moves the selected column in the Job Execution Status view display.
Column Chooser

Allows users to select field names for columns in the Job Execution Status list view
display.

Task Execution Status View Short-Cut Commands


View Error

Displays errors corresponding to the selected job result.


Test Log

Displays test logs.


Infrastructure Log

Displays infrastructure logs.


HW Configuration Log

Displays hardware (Sysparse-generated) configuration logs.


Add / Remove Columns

Adds or removes the selected column in the Task Execution Status view display.
Sort Columns

Moves the selected column in the Task Execution Status view display.
Column Chooser

Allows users to select field names for columns in the Task Execution Status view
display.

Querying results in Job Monitor


Results may be queried using Query Builder only, using the machine list only, or
using a combination of both.

To query for results in Job Monitor


1. On the Explorers menu, click Job Monitor.
2. Select your controller from the Datastore drop-down list.
3. Select the machine pool to be investigated.
4. Click Show Query Builder to run a query on the specific job or jobs
desired.

5. Click the Refresh

button.

6. Click an instance of the job in the Job Execution Status box to display
the list of tasks within the job and their associated status.

Quick Schedule of a job on computer(s)


A job can be scheduled on one or more machines from Job Monitor.

Quick Schedule from the Job Monitor Machine Pool tree


1. On the Explorers menu, click Job Monitor.
2. Right-click a machine pool, and then click Schedule.
3. All computers in the selected pool (including child pools) are
displayed with their current status.
4. Enter the job ID to be scheduled, or click Browse to select
the job from those available.
5. If you have entered the job ID, click Resolve ID to verify the
job ID.
6. Enter schedule time parameters, if required.
7. Enter a name for the result collection or Browse to the result
collection to be used.
8. Click Start.
9. Click Done.

Schedule from Machine list view


1. On the Explorers menu, click Job Monitor.
2. Right-click a computer in the Machine list view, and then click Schedule.
3. Enter the job ID to be scheduled, or click Browse to select the job from
those available.
4. If you have entered the job ID, click Resolve ID to verify the job ID.
5. Enter schedule time parameters, if required.
6. Enter a name for the result collection or Browse to the result collection to
be used.
7. Click Start.
8. Click Done.

Using the Result Explorer


Result Explorer is used to view and work with the results for the existing jobs. A
Result is a unit of work created by the scheduler when a particular job is

scheduled. For more information about Results, see Jobs Best Practice
Recommendations.

The following subjects are included in this section:


Result Explorer Toolbar Options
Result Explorer Short-Cut Commands
Viewing Job Results
Adding Manual Job Results to the Results Log
Changing the Column Display and Sort on the Job Results Form
Querying Results in Result Explorer
Editing Results in Result Explorer

Result Explorer Toolbar Options


The following toolbar buttons are available in Result Explorer.
Open a saved file

Opens a saved file in the Result Explorer.


Save current component to a file

Saves Result Explorer component or explorer data. This can be used to save
filter settings as well as selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data

Prints the component / explorer data


Refresh

Retrieves the results of any query from the datastore that the user has built.
Show/Hide Hierarchy

Displays the left pane of the Result Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.
Show/Hide Query

Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Show/Hide
Query button is Hide.

Bottomlist

Displays a Task box at the bottom of the Result Explorer. The default setting
for the Bottomlist button is Off.

Datastore

Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.

Result Explorer Short-Cut Commands


Job Report

Presents a report about the selected job in a printable format. The menu can be
used for multiple job selection. This report contains job details including common
constraints, mix, context information with associated constraints, tasks details,
and LMS details.
Result Report

Displays the Result report in a printable format. This menu is applicable for more
than one selection.
Add To Result Collection

Adds a single result or multiple results to a particular result collection. A result
collection can be queried by applying different filters. Results are added to the
Result collection and returned as a result of the query. A result can be added to
multiple result collections.
Remove Result From Collection

Removes particular result(s) from the corresponding collection. Confirmation is
made before removing the result.
Trigger Execution

Triggers an automated job for execution if it is scheduled as manual. Confirmation
is made before changing the pipeline of the scheduled result.
Edit Result

Allows editing of a single result or a bulk entry.


Delete

Allows deletion of a single result or a bulk entry.


View Error

Displays errors corresponding to the selected job result.


Test Log

Displays test logs.

Infrastructure Log

Displays infrastructure logs.


HW Configuration Log

Displays hardware (Sysparse-generated) configuration logs.


Add / Remove Columns

Adds or removes the selected column in the Result Explorer list view display.
Sort Columns

Moves the selected column in the Result Explorer list view display.
Column Chooser

Allows users to select field names for columns in the Result Explorer list view
display.

Task Results Short-Cut Commands


View Error

Displays errors that correspond to the selected job result.


Test Log

Displays test logs.


Infrastructure Log

Displays infrastructure logs.


HW Configuration Log

Displays hardware (Sysparse-generated) configuration logs.


Add / Remove Columns

Adds or removes the selected column in the Result Explorer list view display.
Sort Columns

Moves the selected column in the Result Explorer list view display.
Column Chooser

Allows users to select field names for columns in the Result Explorer list view
display.

Viewing Job Results


When a job is scheduled, a unit of work called a Result is created. The Result
Explorer can be used to view and work with the recorded Results.

To view job results


1. On the Explorers menu, click Result Explorer and select your controller
from the Datastore drop-down list.

2. Select the Feature node containing the job and then click the Refresh
button in the toolbar to run a query for the jobs with results. You can
enter other criteria in the Simple Query group box for the search before
running the query.
The following information returned by the Results query is displayed on
the Results Query form.
Computer configuration.
Result status - User can assign that result to a particular user.
Various counts, such as Pass and Fail, depending on the success or
failure of the tasks associated with the selected jobs.
Change information, if a particular user has modified this result on a
particular date.
Resolution information, such as that the user has resolved a
particular issue, the type of resolution, and the resolution date.
Result creation information - This field is auto-populated with the
currently logged on user name.
Log - Here the user can specify a location for the log files.
General Information regarding this result.

Viewing Job Errors


Occasionally errors may occur when scheduling or running a job. These may be
observed using the View Errors command.

To view errors associated with a particular run of a job


1. On the Explorers menu, click Result Explorer.
2. Click the Feature or Category to which the job belongs.
Note: The same result can be achieved by opening Job Monitor and
clicking the machine pool to which the job belongs.
3. Right-click the job run in question, and then click View Errors.
4. If errors are associated with the job run, the error log will be displayed.
Each entry has a detailed description associated with it to explain the error.
5. Click OK.

Working with the Results Log


Some jobs produce result logs that are parsed by WTT, after which the results are
reported to the corresponding test cases.

To access the result logs


1. On the Explorers menu, click Result Explorer, and then select your
controller from the Datastore drop-down list.
2. Select the Feature node or Category containing the job in question.
3. Right-click the desired job run and then click Test Log.
4. This opens a folder containing all of the job's system logs as well
as any other logs created by the tasks.

To edit the result log of a particular jobs run


1. On the Explorers menu, click Result Explorer, and then select your
controller from the Datastore drop-down list.
2. Select the Feature node or Category containing the job in question.
3. Right-click the desired job run and then click Test Log.
4. This opens a folder containing all of the job's system logs as well
as any other logs created by the tasks.
5. Right-click the desired log, and then click Edit.

Adding Manual Job Results to the Results Log


After running manual jobs, the results can then be inserted directly into the result
log for more efficient record keeping.

To insert individual job results into the Results log


1. On the Explorers menu, click Job Explorer and then select your
controller from the Datastore drop-down list.
2. Click the Feature node containing the desired job. If Query Builder is
displayed, then click the Refresh button to run a job query. You can
also enter additional search criteria under Simple Query before running
the query.
3. Right-click the desired job on the Job Explorer list view, and then click
Insert Results to open the New Result dialog box.

Figure 5.1 New Result dialog box

4. Enter the new result information to be included in the dialog box, including
result statistics, log location, job description and other information.
5. Click the Save button.

To bulk insert job results into the Results log


1. On the Explorers menu, click Job Explorer and then select your
controller from the Datastore drop-down list.
2. Click the Feature node containing the desired job. If Query Builder is
displayed, then click the Refresh button to run a job query. You can
also enter additional search criteria under Simple Query before running
the query.
3. Right-click the desired job on the Job Explorer list view, and then click
Insert Result as List.
Note: Job results may be bulk inserted to multiple jobs at the same
time by selecting multiple jobs at once using the CTRL key, right-clicking
the selections, and then clicking Insert Result as List. When this is
done, results may be different, but basic configuration information must
be the same across the jobs.

Figure 5.2 Bulk Insert Result dialog box

4. Select the test computer from the Machine drop-down list. If a specific
computer is selected, configuration information will be automatically
populated.
If the test computer is not on the list, type the name of the computer
in the Machine box and press TAB. Add configuration information on
the test computer by selecting a dimension from the drop-down list in
the Dimension column and then typing a dimension value in the
adjacent Value column. Add all configuration information needed for
the test computer.
Note: The dimensions entered here will be saved under the name
entered in the Machine box and will be available for later use.
5. Select the test information for each job, including result statistics,
Assigned To, Bug DB, and Description.
6. Enter a Job Description applicable to all selected jobs if desired.
7. Click the Save button.

Changing the Column Display and Sort on the Job Results Form
The results pane is customizable to the specific needs of the user, including
adding or removing columns or sorting them in a prescribed manner. This is done
using the commands available on the Results short-cut menu.

To add or remove the columns displayed in the Results list view


1. On the Explorers menu, click Result Explorer.
2. Select your controller from the Datastore drop-down list.
3. Click the Feature node containing the desired job. If Query Builder is
displayed, then click the Refresh button to run a job query. You can
also enter additional search criteria under Simple Query before running
the query.
4. In the Results List, right-click any job, and then click Add Remove
Columns.
5. Add or remove columns as follows:
To add a column, click a desired field in the Available Fields box, and
then click Add. The desired field will now appear in the Results List for
each test.
To remove a current column, click a specific field in the Current Fields
box, and then click Remove. The selected field will no longer appear in
the Results List for each test.
6. Click OK.
7. To adjust the width of the new column(s), drag a column edge to the
desired location.
Note: Column width can also be adjusted within the Add Remove
Columns dialog box by typing a new width in the Column Width box
prior to closing the dialog box. However, this new width will be applied to
all columns uniformly.

To sort the job list in the Result list view in a particular order
1. On the Explorers menu, click Result Explorer.
2. Select your controller from the Datastore drop-down list.
3. On the Feature tab, click the desired node, and then click the Refresh
button.
4. In the Results List, right-click any job, and then click Sort Columns.
5. Click the field to be sorted in the Available Fields box, and then click
Add. Repeat for each field to be sorted.

6. For each field to be sorted (the fields in the Current Fields box), click
Ascending or Descending to determine the sort type.
7. Click each field and the Up or Down arrow to adjust the sort order for the
fields.
8. Remove any unwanted sort columns by clicking that field in the
Current Fields box, and then clicking Remove.

Note: If more than one field is added to the Current Fields box to sort,
the topmost field will be sorted first, followed by the other fields in the order
that they are listed.

Querying Results in Result Explorer

To query a result in Result Explorer


1. On the Explorers menu, click Result Explorer and select your controller
datastore from the Datastore drop-down list.
2. Search for results either within the Feature nodes, or by Category, as
appropriate.
To search within a feature node, click the desired node on the Feature
tab.
You can also search for results on the Feature Root ($) as well, which
will search all jobs throughout the hierarchy on the selected controller.
To search by category, click the Category desired on the Category
tab.
3. Click the Hide Query button.
4. Choose either a Simple Query or an Advanced Query to run.

To run a simple query


1. On the Simple Query tab, type the name of the result collection you wish
to search in the Result Collection box, or browse for the collection using
the Browse button.
2. Choose the search parameters to use in your search:
Some of the parameters available include:
ID - The unique identifier associated with the result.
Name Contains - String or substring that is contained within the job
name of the result collection sought.
Attribute - The attribute identifier associated with the result, selected
from the drop-down list.
Choose Dim - From the dimensions available on this controller.
Status - Status of the job, selected from the drop-down list.
HResult - The unique error ID associated with the result.
Failed In - The specific component associated with a failure, selected
from the drop-down list.
Time - The timespan during which the results were created or updated
in the selected feature or category.
3. Click the Refresh button to run the query.

To run an Advanced Query


1. On the Advanced Query tab, change any of the base parameters
necessary.
Note: The default query is for the full datastore on the controller, which
would return the results that are already displayed. To narrow down the
search, it is necessary to enter additional query parameters.
2. Click beneath the open cell below And/Or, and then in the newly opened
cell, click either And or Or from the drop-down list.
3. In the open cell below Field Name, click a parameter to search by, from
the drop-down list.
4. In the open cell below Operator, click an operator from the drop-down
list.
5. In the open cell below Value, type a value of the parameter for which to
search.
6. Click the Refresh button to run the query.

Editing Results in Result Explorer


Editing log or job results of tests can sometimes be necessary in WTT. Using the
edit function in Result Explorer, users are able to make changes, add details, or
make other edits to job results with minimal effort. Single or bulk results may be
modified.

To edit results
1. On the Explorers menu, click Result Explorer.
2. Select your controller from the Datastore drop-down list.
3. In the results pane, right-click a job, and then click Edit.
4. Edit the data in the available fields as needed.

5. Click the Save button.

Using Result Collection Explorer


While a result is the unit of work created when a job is scheduled, a result
collection is a set of those scheduled job results. As such, it provides users with a
centralized point of view for tracking test-run status by associating aggregate
result values such as PassedJobs, FailedJobs, NotRunJobs, NotApplicableJobs, and
Total Jobs. These counts track summary information for all results included in a
collection.
Note: If only a single datastore is present in the Datastore drop-down list, all
result collections created in the past seven days will be displayed by default in
the results pane.

The following subjects are included in this section:


Result Collection Toolbar Buttons
Result Collection Short-Cut Commands
Querying a Result Collection

Result Collection Toolbar Buttons


The following toolbar buttons are available in Result Collection:
Open a saved file

Opens a saved file in the Result Explorer.


Save current component to a file

Saves component or explorer data from the Result Explorer. This can be used
to save filter settings and selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data

Prints the component / explorer data


Refresh

Retrieves from the datastore the results of any query that the user has built.
Hide Query

Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Hide Query
button is Off.

Datastore

Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.

Result Collection Short-Cut Commands


Create New Collection

Creates a new collection for organizing results. This collection can be named by
the user or a name may be automatically generated. The collection is empty by
default.
Delete

Deletes a selected collection, although the results within the collection are not
deleted.
View Results

Displays the results that are present in the selected result collection. This
command invokes Result Explorer from within Result Collection and displays
standard result information.
View Rollup Counts

Displays the rollup count for scheduled jobs (including passed, failed, attempted,
and so on) whose results are displayed in the selected collection. This also
provides a query builder that can be used to filter the jobs and then view the
results based upon the chosen criteria.
Add / Remove Columns

Adds or removes the selected column in the Result Collection list view display.
Sort Columns

Moves the selected column in the Result Collection list view display.
Column Chooser

Allows users to select field names for columns in the Result Collection list view
display.

Result Collection Column Options


Completed Jobs

The controller has started the job and has received notification from the client
computer that the job has been finished. This does not imply that the job has
passed or failed; it indicates only that the job is completed.
Investigate Jobs

The controller has started the job and has received notification from the client
computer that the job has been finished. This job has subsequently been marked
for investigation by a user.
Cancelled Jobs

The controller has received notification from the client computer that this job has
been cancelled by a user.
Resolved Jobs

The controller has received notification from the client that this job has been
resolved, possibly after registering a failure.
In Progress Jobs

The controller has started the job but has not yet received notification from the
client computer that the job has been finished.
Actual Run Time

The actual time taken to run the job, in seconds.


Create Time

The time at which the job was created on the controller.


Estimated Run Time

The estimated time for the job run, as provided when creating the job.
GUID

A unique global ID number for the job, provided by the controller when the job is
created.
ID

A unique number within the WTT Result Collection, usually given in the order of
job creation.
Run Time Left

The estimated time remaining for a job that is in progress.


Signed Off

Whether the job has been signed off by the tester or not. This is indicated by a 1
if the job has been signed off, and a 0 if it has not.
Status

Whether a job is complete or not complete.

Querying a Result Collection


When queried, Result Collection displays those results present in the selected
Datastore. Although only a small set of result columns are displayed, additional
details may be viewed by adding columns using the short-cut menu.

To query a result in Result Collection


1. On the Explorers menu, click Result Collection and select your
controller from the Datastore drop-down list.
2. Click the Show Query Builder button.
3. In the Query Builder, define the appropriate search parameters:

Name Contains - Result collections with names that contain the
specified string.
Scheduled By - Result collections having results scheduled by the
specified user, selected from the drop-down list.
Result Collection Status - Result collections with the specified
status, selected from the drop-down list.
Created During - Result collections created during the specified
period, selected from the drop-down list. The period can be customized
by selecting Choose Date from the list and then specifying the dates.

4. Click the Refresh button.

Using Result Rollup


Result rollup provides testers with a summary view of rolled up counts for test
jobs. These can be viewed by the individual job, by feature or category, or by
attribute, depending upon the tab chosen in the Rollup tree view. Users may also
create a simple query using Query Builder to locate a specific job.
Result Rollup may be accessed by selecting Result Rollup on the Explorers
menu.
Note: If Query Builder is not displayed (this is the default setting), the rollup
results are displayed when the user selects a specific node. However, if Query
Builder is displayed, it is necessary to click the Refresh button to retrieve
the counts.

The following subjects are included in this section:

Result Rollup Toolbar Options
Result Rollup Short-Cut Commands
Querying Results in Result Rollup

Result Rollup Toolbar Options


The following toolbar options are available in Result Rollup:
Open a saved file

Opens a saved file in the Result Explorer.


Save current component to a file

Saves component or explorer data from the Result Explorer. This can be used
to save filter settings and selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data

Prints the component / explorer data


Refresh

Retrieves from the datastore the results of any query that the user has built.
Show/Hide Hierarchy

Displays the left pane of the Result Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.
Show/Hide Query

Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Show/Hide
Query button is Off.

Datastore

Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.

Result Rollup Short-Cut Commands


Add / Remove Columns

Adds or removes the selected column in the Result Collection list view display.

Sort Columns

Moves the selected column in the Result Collection list view display.
Column Chooser

Allows users to select field names for columns in the Result Collection list view
display.

Result Rollup Column Options


Min Build

The minimum OS build configuration required to perform the job if one has been
specified.
Max Build

The maximum OS build configuration required to perform the job if one has been
specified.
Total

The total number of scheduled job variations of the job. This will be 0 if the job
has been cancelled.
Attempt%

The percentage of the total number of scheduled job variations that have been
attempted. This percentage should include the jobs passed, jobs failed, and jobs
in progress, but not those cancelled or not run.
Pass%

The percentage of the total number of scheduled job variations in which all stated
tasks have passed.
Fail%

The percentage of the total number of scheduled job variations in which one or
more stated tasks have failed.
Not Run%

The percentage of the total number of scheduled job variations that have not yet
been started by the controller.
Bugs

Whether the job has been associated with a bug (or bugs) or not. This is
indicated by a 1 if a bug is associated, and a 0 if it is not.
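The percentage columns follow directly from the underlying counts, as the
following sketch illustrates under the rules stated above; the status names used
are assumptions for the example.

# Sketch of the rollup arithmetic: Attempt% counts passed, failed, and
# in-progress jobs, but not those cancelled or not run.
from collections import Counter

def rollup(statuses):
    counts = Counter(statuses)
    total = len(statuses)
    if total == 0:
        return {"Total": 0}
    attempted = counts["passed"] + counts["failed"] + counts["in_progress"]
    return {
        "Total":    total,
        "Attempt%": 100.0 * attempted / total,
        "Pass%":    100.0 * counts["passed"] / total,
        "Fail%":    100.0 * counts["failed"] / total,
        "Not Run%": 100.0 * counts["not_run"] / total,
    }

statuses = ["passed", "passed", "failed", "in_progress", "not_run"]
print(rollup(statuses))
# Total 5, Attempt% 80.0, Pass% 40.0, Fail% 20.0, Not Run% 20.0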

Querying Results in Result Rollup

To query a result in Result Rollup


1. On the Explorers menu, click Result Rollup and select your controller
datastore from the Datastore drop-down list.
2. Search for results either within the Feature nodes, by Category, or by
Attribute, as appropriate.
To search within a feature node, click the desired node on the Feature
tab.
You can also search for results on the Feature Root ($) as well, which
will search all jobs throughout the hierarchy on the selected controller.
To search by category, click the Category desired on the Category
tab.
To search by attribute, click the specific parent or child attribute on the
Attribute tab.
3. Click the Show Query Builder button if Query Builder is not displayed.
4. Run a Simple Query.

To run a Simple Query


1. On the Simple Query tab, type the name of the result collection you wish
to search in the Result Collection box, or browse for the collection using
the Browse button.
2. Choose the search parameters to use in your search:
Some of the parameters available include:
Collection Name - String or substring that is contained within the
collection name sought.
Job Name Contains - String or substring that is contained within the
name of the job being sought.
Choose User - Based on the user who either created the job or owns it.
Min - Minimum build number associated with the selected build type.
Max - Maximum build number associated with the selected build type.
Choose Dimensions - Based on the selected dimension and the
dimension value of the constraint. This constraint may be associated
with either the job context or the schedule context.
Choose Time - The time span during which results were created or
updated in the selected feature or category.
3. Click the Refresh button to run the query.

Chapter 6: WTT Administration


A Windows Test Technologies (WTT) administrator is responsible for managing a
WTT Controller. This includes maintaining the list of users who are authorized to
use various features on the controller as well as maintaining the computer
configurations used as constraints for testing. Additionally, global parameters and
global mixes are added and maintained by the controller administrator as well as
Test Leads.

The following subjects are included in this chapter:


Managing Enterprises
User Administration
Dimensions
Global Parameters
Global Mixes

Managing Enterprises
Managing enterprises is available to users with administrator privileges. A list of
currently registered controllers and datastores is displayed and administrators
may add, edit, or delete controllers or datastores from an enterprise.

To add a controller to an enterprise


1. On the File menu, click Manage Enterprise.
2. Select your enterprise from the Selected Enterprise drop-down list.
3. Click Register.
4. Type a name for the new controller in the Server Name box.
5. Type a name for the new datastore in the Database Name box.
6. Click OK.

To remove a controller from an enterprise


1. On the File menu, click Manage Enterprise.
2. Select your enterprise from the Selected Enterprise drop-down list.
3. Select the controller to remove, and then click UnRegister.
4. Click Yes.
5. Click OK.

User Administration
Before individual testers can work with WTT test computers or schedule jobs on
a WTT Controller, a WTT Administrator must grant them access to that
controller. This is done in SQL Server Enterprise Manager by granting the
WTT_DATASTORE_USERS role to the user.
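For reference, the equivalent grant can also be scripted in SQL Query Analyzer. The following is a minimal T-SQL sketch only; WTTDatastore is a placeholder for the name of your controller's datastore database:

-- 'WTTDatastore' is a placeholder; substitute your datastore database name.
USE WTTDatastore
GO
-- Grant the Windows account access to SQL Server and to the database.
EXEC sp_grantlogin 'DOMAIN\username'
EXEC sp_grantdbaccess 'DOMAIN\username'
-- Add the database user to the WTT datastore users role.
EXEC sp_addrolemember 'WTT_DATASTORE_USERS', 'DOMAIN\username'
GO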

If a job or computer must be assigned to a user who has not installed WTT
Studio, that user's name will not yet appear in the list of WTT users and must
be added manually. WTT provides a dialog box that lists all users and their
domains for the selected controller.
Administrators can access the dialog by clicking Users on the Admin menu.
Note: If a user's regular domain credentials are used to run tasks in WTT, they
may be exposed over the network. Therefore, users should create a special
username, called a local logical user (LLU), on each client computer in order to
run tasks. An LLU is created from a command line using the WTTCMD Command-Line tool. See Appendix D: WTTCMD Command Tool.
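A hypothetical sketch of such an invocation follows; the /addllu switch and its parameter names are assumptions patterned after the /addsymboluser invocation shown in Chapter 7, not confirmed syntax. See Appendix D: WTTCMD Command Tool for the authoritative command line:

Rem Hypothetical switch and parameter names; see Appendix D for actual syntax.
WTTCmd.exe /addllu /user:<username> /domain:<domain> /password:<password>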

To add a user for a controller


1. On the Admin menu, click Users.
2. Select your controller from the Show Users from the Controller drop-down list.
3. Click Add.
4. In the Add New User(s) dialog box, type the domain and alias of the
user to be added.
Note: Users should be added in the format DOMAIN\username. Multiple
users may be added at the same time, separated by a semi-colon.
5. Click OK, and then click Close.

To remove a user from a controller


1. On the Admin menu, click Users.
2. Select your controller from the Show Users from the Controller drop-down list.
3. Click the user name to be deleted.
4. Click Delete, and then click Yes.

5. Click Close.

Dimensions
A dimension is a customized key-value pair that a test computer automatically
reports to the WTT database through Sysparse whenever the computer reboots.
Custom computer configuration queries can be created as dimensions, as can
strings or lists where appropriate.
For example, for a video driver dimension key, NVIDIA is a possible value, just as
4123 might be a possible value for an operating system build number key.

The following subjects are included in this section:


Adding and editing dimensions
Computer Configuration Query dimensions

Adding and Editing Dimensions

To add a string dimension


1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the Controller drop-down list.
3. Click Add.
4. Select String, and then click OK.

Figure 6.1 Select Dimension Type dialog box


5. In the Name box, type the text string for the new dimension.
6. Click OK.

To add a list dimension


1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the Controller drop-down list.
3. Click Add.
4. Select List, and then click OK.
5. In the Name box, type a name for the new list dimension.
6. In the Value field, click List and select a value.
7. In the List of values box, type the values that the user can choose for
this dimension.
8. Click OK.

To edit a dimension
1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the Controller drop-down list.

3. Click the selected dimension, and then click Edit.


4. Make any changes necessary in the dimension fields.
5. Click OK.
Note: Dimensions installed when WTT Studio was installed may not be
edited.

To delete a dimension
1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the Controller drop-down list.
3. Click the selected dimension, and then click Delete.
4. Click Yes.
Note: Dimensions installed when WTT Studio was installed may not be
deleted.

Machine Configuration Query Dimensions


MCU is one piece of the end-to-end scenario of creating custom dimensions and
having values for those dimensions filled in for each machine in a particular
machine pool. These dimension values can then be used at a later date to
constrain jobs and create reports, based on the values associated with the job
result. They can also be used to auto-generate test matrices to provide a full
range of possible test cases.

Getting Started
MCU works by applying an MCU Policy to a particular machine pool. When each
machine in that pool reports its machine configuration data to the WTT
Controller specified for the asset pool, the controller stores the values for the
policy in the WTT database. These values can then be used to constrain jobs,
create reports, and so on.
Because the MCU Policy is applied to a specific machine pool, it is important to
know how to create pools, how to associate a pool with a controller, how to
install the WTT Client software on your test machines, and how to add those
machines to the pool. This topic assumes knowledge of that process and instead
focuses on creating the MachineConfigQuery dimension and the subsequent MCU
Policy for a particular pool.
Once a client is part of a specific machine pool, you can start creating
MachineConfigQuery dimensions and associating them with the pool to get MCU
working. Alternatively, you can create the dimensions and associate them with
the pool before moving machines into it.

When the WTT Client software is installed, it will gather machine configuration
information when its service starts. The WTT Client service calls Sysparse which
gathers the machine configuration information and saves it into an XML file. The
client service then sends the XML file to the controller, which parses it and stores
the information into the WTT database.
When a MachineConfigQuery dimension is created and associated with a pool to
form an MCU Policy, the controller service also calls MCU to enforce that policy
for each machine in the pool.
To create a MachineConfigQuery dimension, you'll need to be familiar with two
things:
1. Sysparse XML format: Where in the Sysparse output are the values you
want associated with your custom dimension? A sample Sysparse output
file can be found in the WTT Software Development Kit (SDK).
2. XPath query syntax: MCU uses XPath queries to retrieve the values, so you
will need to be familiar with its syntax and use. See:
http://msdn.microsoft.com/library/en-us/xmlsdk/htm/xpath_ref_overview_0pph.asp
The Dimension Editor UI provides some sample queries and a pointer to a sample
XML file to help you create the right query for your data.
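As an illustration only, suppose the Sysparse output contained a fragment like the following. The element and attribute names here are hypothetical; consult the sample Sysparse file in the SDK for the real schema:

<MachineConfig>
  <Video>
    <Driver Vendor="NVIDIA" />
  </Video>
</MachineConfig>

An XPath query to pull the vendor into a custom dimension would then be MachineConfig/Video/Driver/@Vendor. Because XPath is case sensitive, the query must match the element and attribute names exactly as they appear in the XML.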

To add a machine configuration query dimension


1. Determine where the values are in the SysParse XML output that you want
associated with your custom dimension.
2. On the Admin menu, click Dimensions.
3. Select your controller from Show Dimensions from the controller dropdown list.
4. Click Add.
5. Click Machine Configuration Query, and then click OK.
6. On the Dimension tab, type the name of the new dimension in the Name
box.
An easily decipherable naming convention is recommended, such as
<Component>\<MyDimension>. For example: Networking\TestAddress.
7. In the Query box, type your query to retrieve the values you want for this
dimension.
Note: Use XPath query syntax to construct your query. It is important
to remember that XPath queries are case sensitive.
WTT provides sample queries to help you create the correct query for
your data. These are available immediately beneath the Query box.

For additional references, see http://msdn.microsoft.com/library/en-us/xmlsdk/htm/xpath_ref_overview_0pph.asp

Figure 6.2 Add Dimension dialog box


8. In the Test Path box, type an appropriate test path or click Browse to
browse to one.
Note: WTT provides a sample path to assist you in typing a correct
path: click Sample Path, immediately beneath the Browse button, to
access this sample. If you can't reach the sample path, other locations
to look for Sysparse XML files include the following:
a. On a client machine, in %windir%\wttbin
b. On the controller/router, in \\<controller>\systemlogs\<date>\sysparse
9. Click Test Query to display the query results, and then click OK.

Create MCU Policy for an Asset Pool


The following procedure steps you through associating the MachineConfigQuery
dimension you created with your new asset pool to create the MCU Policy for
this pool.

To create an MCU Policy for your Asset Pool:


1. On the Asset menu, click My Assets.
2. Select your controller from the Datastore drop-down list.
3. On the My Search tab, right-click the desired machine pool, and then click
Properties.
4. On the MCU Policy tab, confirm that the desired Computer
Configuration Query dimension is in the Current Policy box.
If the desired dimension is not in the Current Policy box, click the
dimension name in the Dimension Choices list, click the right-arrow
button to move the dimension to the Current Policy box, and then
click Apply.
5. Click OK.

Figure 6.3 MCU Policy dialog box

Your MachineConfigQuery dimension is now mapped to your machine pool, so
MCU knows how to update the machine configuration values for each machine in
your pool.
Note: While this dimension is now mapped to the machine pool, this does not
mean that the computers in this machine pool have the information stored
yet. In order to enforce the policy for the first time, see Verify MCU Policy.

Verify MCU Policy


There are two ways to verify the MCU Policy is working for a particular machine
pool:
Restart the WTTSvc on a client in the pool.
Update all the computers in the pool at once using the WTT Studio UI.

If the configuration for a computer has changed, the best option is to restart
WTTSvc on the client computer. If the configuration hasn't changed, the
machine config values can simply be updated, based on the existing computer
configuration that has already been collected, using the WTT Studio UI.

Updating by restarting WTTSVC on the client computer


1. On the Start menu of the client test computer that you wish to have
updated values for, click Run.
2. In the Run dialog box, type cmd.
3. At the command prompt, type:
net stop wttsvc
net start wttsvc

When WTTSvc starts up again, it automatically starts Sysparse_com.exe, which
gathers the machine configuration information and uploads it to the controller
that has been specified for your machine pool.
With the Sysparse XML output, the controller calls into MCU, enforcing the MCU
Policy for your test computer's data. (This can take up to 10 minutes to
complete.)

Updating by using the WTT Studio UI


1. On the Asset menu, click My Assets.
2. Select your controller from the Datastore drop-down list.
3. On the My Search tab, right-click the desired machine pool, and then click
Update Machine Config Values.
4. Click the Start button on the Update MCU Policy dialog.

Figure 6.4 Update MCU Policy dialog box

Verify update
After either method is used to update the values, use the following procedure
to verify that the MachineConfigQuery dimension query and MCU Policy are
being enforced correctly:
1. On the Asset menu, click My Assets.
2. Select your controller from the Datastore drop-down list.
3. On the My Search tab, click the desired machine pool and then click the
Refresh button.

4. Right-click the computer that WTTSvc was restarted on, and then click
View Computer Details.
5. On the Computer Attributes tab, confirm that the MachineConfigQuery
dimension is listed along with the specific result from the query against the
computer's Sysparse XML data.

Figure 6.5 Computer Attributes List

For additional information, see Appendix G: Machine Config Query Dimensions.

Global Parameters
Parameters function similarly to environment variables but are not restricted to a
specific environment as are variables. Global parameters are stored at the
controller level (in the automation database) and can be applied to any job on the
given controller. All global parameters will be displayed when the user clicks
Parameters on the Admin menu.
Teams can create global parameters in their automation database at any time,
and no special permissions are required to set up a global parameter on a
team's automation database.
There are also global parameters that are replicated to all automation
databases from the Master database (usually only for Jobs or Library Jobs that
are also replicated from the Master). These begin with the designation
Windows\ and thus can easily be identified as coming from the Master
database. Teams should not create global parameters whose names begin with
Windows\ in their automation database, nor should they edit any such
parameters there.
Tip: Global parameters can also be a useful way to remain organized, by keeping
key parameter definitions clustered in one location.

To add a global parameter


1. On the Admin menu, click Parameters.
2. Select your controller from the Show Parameters from the Controller
drop-down list.
3. Click Add.
4. In the Name box, type a name for your parameter.
For example, if the parameter is designed to insert the current build
number in the job, the name might be "Build Number."
5. In the Value box, type the value to be substituted for the parameter
name.
In our build number example, this might be the actual build number,
for example: "4.00.29.2345."
6. In the Description box, type the description to be used for the parameter.
In our build number example, this might be "build."
7. Select the IsScheduleDisplay check box to display the parameter when
scheduling if desired.
8. Click OK, and then click Close.

To edit a global parameter


1. On the Admin menu, click Parameters.
2. Select your controller from the Show Parameters from the Controller
drop-down list.
3. Select the parameter to edit.
4. Click Edit to open the Edit Parameter screen.
5. Make changes to the parameter properties as needed.
6. Click OK, and then click Close.

To delete a global parameter from a controller


1. On the Admin menu, click Parameters.
2. Select your controller from the Show Parameters from the Controller
drop-down list.
3. Select the parameter to remove.
4. Click Delete, and then click Yes.
5. Click Close.

Global Mixes
A mix is a set of one or more contexts, just as a context is a set of one or more
constraints that are applied to jobs and schedules. When you design a test or a
set of tests, you can apply several sets of contexts by applying a mix that
contains these contexts. Scheduling a job creates one instance of the job for each
valid context within a mix.
For example, a job might be designed to run on a mix of test computers including:
x86-based, Microsoft Windows XP Professional in the German language.
x86-based, Microsoft Windows Server 2003 in the English language.
Itanium-based, Windows Server 2003 in the English language.
The constraints and contexts that you use can be global, applying to all tests,
computers, or schedules, or they can be local, applying to the job or schedule at
hand. You can either use the default constraints and contexts or create your own
custom sets.
For information on local mixes, see Setting Job Mixes and Contexts.
Global mixes
A global mix can be a Simple mix or an Advanced mix; a simple mix is a collection
of contexts with each context containing a set of constraints. An Advanced mix is
a complex mix with a set of rules that apply to the contexts.
When creating a global mix, it is necessary to use contexts to cover all necessary
combinations of criteria.
For instance, a user has a context defined as:
processor within {x86, IA64, AMD64}
with a constraint that specifies:
language within {US (English), Ger (German)}.
Using a Simple mix, only one occurrence of the job will actually be distributed at
scheduling time (even assuming enough computers to cover the full range of
architectures and languages). To run all six combinations using a Simple mix, it
would be necessary to define a simple mix context for each of the following
combinations (six combinations total):
Arch = x86, lang = US
Arch = x86, lang = Ger
Arch = IA64, lang = US
Arch = IA64, lang = Ger
Arch = amd64, lang = US
Arch = amd64, lang = Ger
Alternatively, an advanced mix could be used to generate them dynamically.

Working With a Global Simple Mix

To Create a Global Simple Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. On the File menu, click New Global Mix.
3. Select Simple from the Mix Type drop-down list, and then click OK.

Figure 6.6 Selecting the Mix Type

4. In the Name box, type a name for the simple mix.
5. In the Description box, type a brief description of the mix.
6. On the Constraints tab, click Add, and add the desired context.
7. Click OK.
8. Click the Save button.

To add a Context to a Simple Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click Add, and add the desired context.
5. Click OK.
6. Click the Save button.

To edit a Context in a Simple Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click the context to be edited, and then click Edit.
5. Make any necessary changes, and then click OK.
6. Click the Save button.

To delete a Context from a Simple Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click the context to be removed, and then click Delete.
5. Click OK.
6. Click the Save button.

Note: Mix contexts can be deleted only if they are not being used or referred
to by any job or schedule.

Setting Constraints for a Simple Mix Context

To add a constraint to a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click the context to be edited, and then click Edit.
5. Click the Constraints box below the existing constraints.
6. In the empty cell in the Dimension column, select a dimension for the
new constraint from the drop-down list.
7. In the Operator column, select a corresponding operator from the
drop-down list.
Note: The choice of operators will vary according to the choice of
dimension made by the user.
8. Select or enter an appropriate value for the constraint in the Value column.
9. Click OK.
10. Click the Save button.

To edit a constraint of a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click the context to be edited, and then click Edit.
5. Click the Constraints box below the existing constraints.
6. In the empty cell in the Dimension column, select a dimension for the
new constraint from the drop-down list.
7. In the Operator column, select a corresponding operator from the
drop-down list.
Note: The choice of operators will vary according to the choice of
dimension made by the user.
8. Select or enter an appropriate value for the constraint in the Value column.
9. Click OK.
10. Click the Save button.

To delete a constraint from a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.

2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. On the Constraints tab, click the context to be edited, and then click Edit.
5. Right-click the constraint to be removed and then click Delete Clause.
6. Click OK.
7. Click the Save button.

Setting Parameters for a Simple Mix Context

To add a Parameter to a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Parameter tab.
5. On the Local tab, type a name for the new parameter in the first
empty cell in the Name column.
6. In the Type column, click the desired parameter type from the
drop-down list.
7. In the Description column, type a description to be displayed next to
the parameter at scheduling time.
Note: Use of a description is optional.
8. If you wish to view the parameters at schedule time, select the
ScheduleDisplay check box.
9. In the Value column, type a value for the parameter.
10. Click OK.
11. Click the Save button.

To edit a Parameter of a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.

2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Parameter tab.
5. On the Local tab, select the parameter to edit, and then click the
specific fields to change.
6. Make any necessary changes.
7. Click OK.
8. Click the Save button.

To delete a Parameter from a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Parameter tab.
5. On the Local tab, right-click the parameter to remove, and then click Delete.
6. Click OK.
7. Click the Save button.

Setting Attributes for a Simple Mix Context


Attributes allow the end user to organize the same data in multiple hierarchical
ways. This means that the folder hierarchy that is displayed by default in Job
Explorer is simply another attribute. In many cases, as well as providing the
hierarchy, Attributes provide a means to tie together objects of different types.
As an example, Attributes could be used to categorize all objects involved in a
failure so that these were cross-referenced in reporting.

To add attributes to a Simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Attributes tab.
5. From the Attributes list, select those attributes you wish to associate
with this context.
6. Click OK.
7. Click the Save button.

To edit attributes of a Simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Attributes tab.
5. From the Attributes list, select those attributes you wish to associate
with this context.
6. Click OK.
7. Click the Save button.

To delete attributes from a simple Mix Context


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click the Attributes tab.
5. From the Attributes list, clear those attributes you wish to disassociate
from this context.
6. Click OK.
7. Click the Save button.

Working with a Global Advanced Mix

To create a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. On the File menu, click New Global Mix.
3. Select Advanced from the Mix Type drop-down list, and then click OK.
4. In the Name box, type a name for the advanced Mix.
5. In the Description box, type a brief description of the mix.
6. Add contexts to the advanced mix.
7. Click OK.
8. Click the Save button.

To edit a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the desired mix and then click Edit.
4. Make any changes necessary to the mix and associated contexts.
5. Click the Save button.

To delete a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the mix to be removed, and then click Delete.
4. Click Yes.
5. Click the Save button.

Setting Dimensions and Parameters for a Global Advanced Mix

To add dimensions and parameters to a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the selected mix and then click Edit.
4. Click Add/Remove Dimensions.
5. Select the dimensions you wish to associate with the mix.

Note: It is recommended that at least two dimensions be selected
for an Advanced Mix.
6. If you do not wish to create a constraint associated with the chosen
dimensions, clear the Create Constraint check box.
Note: The Create Constraint check box is selected by default.
7. If you wish a dimension to be treated as a parameter, select the
Create Parameter check box.
Note: To enable the Create Parameter check box, it may be
necessary to click the dimension header.
8. If you wish to delete a dimension value under a dimension,
right-click the appropriate cell and select Delete Value.
9. To set the operator for a dimension value, select the cell that
contains the dimension value and then select the operator from the
Operator drop-down list at the bottom of the form.
Note: By default, Equals is selected for a dimension value.
10. To change the properties of a dimension or parameter, use the Mix
Properties group box. To exclude a dimension or parameter from the
list used for generating contexts, set the IncludeInGeneration
property to False.
11. In the Generation Settings group box, select the combination type to
be used to generate the contexts for the advanced mix. These types
include:
All Combinations This method generates every possible
combination of values for all dimensions in this mix.
Random This method generates random combinations of dimension
values. The number of contexts produced equals the number of
values in the largest dimension. You can adjust the probability that a
value is picked by adjusting its weight within the Properties box.
Flat This method generates combinations of dimension values
similar to the way they are laid out in the dimension list. The number
of contexts produced equals the number of values in the largest
dimension. For dimensions that have fewer values than others, the
values are repeated to fill in the missing cells.
N-Wise This method takes a sampling of all the dimensions in this
mix. The number of contexts produced depends on the combinatory
order. The probability that a value is picked can be changed by
adjusting its weight in the Properties box.
Note: The combinatory order controls how large a sample to take
from the dimensions in the mix. For example, a combinatory order
of 2 will generate contexts using every unique pair of dimension
values, and an order of 3 will produce contexts with every unique
triple of dimension values. The higher the combinatory order, the
more contexts will be produced. Setting the combinatory order
equal to the number of dimensions in the mix is equivalent to
selecting All Combinations.
12. Click Define Rules and create rules for the mix as needed.

Figure 6.7 Parameter Dependency Groups (Advanced Global Mix)


To create rules on parameters, a dependency group should be created.
Dependency groups allow the user to specify relationships or rules between
values to exclude any inconsequential combinations from generation. The
relationships are primarily of two types: Inclusion Dependencies and Exclusion
Dependencies. There can be many dependency groups in an Advanced Mix, but
dependency rules between values are self-contained within a group. Rules
cannot be specified across dependency groups.
Parameter dependency groups can be created by clicking Add a dependency
group or by right-clicking in this section and then clicking Add Dependency
Group. Dimensions that are added as parameters on the Dimensions and
Parameters tab are displayed in the Action parameters section. To view the
values of a parameter, select an Action parameter from the list.
Parameter values may be added for inclusion or exclusion by clicking the Add
as Inclusion or Add as Exclusion buttons on the form. Parameter values added
as inclusions are displayed in green, and those added as exclusions are
displayed in red. The rules are always defined from left to right, that is, in the
same order in which the parameters are shown in the Action parameters
section. Rules should be defined on a minimum of two Action parameters for
generating contexts.
Inclusion Rules: To create an inclusion rule between two values, the
corresponding values from both parameters should be added as inclusions.
For example, to have a context generated with Longhorn and
IA32_on_Win64 and no other combination on Longhorn, select Longhorn
from WTT\OS and click Add as Inclusion, and then select IA32_on_Win64
from WTT\Processor and click Add as Inclusion. A context will be generated
only for the combination of the Longhorn and IA32_on_Win64 parameter
values, and not for the other combinations on the Longhorn parameter value.
Exclusion Rules: To create an exclusion rule between two values, the value
from the first action parameter should be added as an exclusion and the value
from the second parameter should be added as an inclusion.
For example, to exclude contexts with Windows Server 2003 and
IA32_on_Win64, select Windows Server 2003 from WTT\OS and click Add
as Exclusion, and then select IA32_on_Win64 from WTT\Processor and click
Add as Inclusion. A context with the combination Windows Server 2003 and
IA32_on_Win64 will not be generated.
If the desired combinations cannot be achieved with a single dependency rule,
multiple dependency rules can be defined.

Figure 6.8 Advanced mixes generated contexts


13. To preview the contexts generated for the mix, click Generate Contexts.
14. Contexts in the Preview pane may be selected for association with the mix.
Note: By default, all contexts in the Preview pane are selected for
association with the mix.
15. Click the Save button.

To edit dimensions and parameters in a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the desired mix and then click Edit.
4. Make any changes desired to dimension or parameter values.
5. Click Generate Contexts.
6. Newly generated contexts are displayed on the New Contexts tab;
existing contexts that are associated with the mix are displayed on the
Existing Contexts tab. To update the mix, select the new contexts to
associate with the mix.
7. Click the Save button.

To delete dimensions and parameters from a Global Advanced Mix


1. On the Admin menu, click Mix, and then select your controller from the
Datastore drop-down list.
2. Using Query Builder, search for the desired mix, and then click the
Refresh button.
3. Right-click the desired mix and then click Edit.
4. Right-click the dimension or parameter to be removed and select
Delete Dimension.
5. Click Generate Contexts.
6. Select the contexts that you wish to associate with the mix and clear
those contexts that you wish to remove.
7. Click the Save button.

Chapter 7: WTT Autotriage


One of the most important tools in Windows Test Technologies (WTT), Autotriage
reduces manual intervention during testing and therefore helps testers save time
in the testing process. Autotriage supports the automatic assessment and triage
of most types of system and application crashes encountered during testing,
including user mode breaks as well as kernel mode breaks.
WTT Autotriage captures as much crash information as possible when a test
computer has a system or application failure, creating a data dump that testers
can use for offline debugging. In addition, it provides additional functions,
including:
Easy comparison and grouping of crashes generated with the same stack
trace or bucket ID.
Automatic e-mail messaging when a crash occurs, providing users with
appropriate crash information.
Holding or releasing computers for further investigation if a similar crash
occurs in the future.
Autotriage is automatically installed with WTT on each client and server computer
where WTT test jobs are run. It is also attached to the WTT kernel debugger if it
has been installed on the client computer.

The following subjects are included in this chapter:


Terminology
Working with WTT Autotriage tools

Terminology
Kernel Mode
Kernel-mode code has permission to access any part of the system and is not
restricted as is user mode code. It can gain access to any part of any other
process running in either user mode or kernel mode.
Performance-sensitive operating system components run in kernel mode. In this
way they can interact with the hardware and with each other without requiring
the overhead of context switching. All kernel-mode components are fully
protected from applications running in user mode. They can be grouped as
follows: Executive, Kernel, HAL, and the Window and Graphics Subsystem.

The possibility of data corruption or system damage is much greater with kernel
mode process errors. If a process erroneously accesses a portion of memory that
is in use by another application or by the system, the lack of restrictions on
kernel mode processes forces Windows to stop the entire system. This is known
as a blue screen or bug check.
Malfunctioning hardware devices or device drivers with bugs that reside in kernel
mode are often the culprits in bug checks. A bad SCSI adapter, a malfunctioning
drive controller, or defective memory chips can corrupt memory contents and
alter program pointers so they attempt to access an incorrect address in memory.
Local symbol user
A domain account used by Autotriage to connect to symbol shares. This account
should have network access and administrator permissions on the client
computer. This account is used only by the Autotriage tool and can be created
from a command-line using the WTTCMD Command-Line tool, or from a separate
LSU interface. See Appendix D: WTTCMD Command Tool and Appendix L:
Managing LLU and LSU Functions.
User mode
Applications and subsystems run on the computer in user mode. Processes that
run in user mode do so within their own virtual address spaces. They are
restricted from gaining direct access to many parts of the system, including
system hardware, memory not allocated for their use, and other portions of the
system that might compromise system integrity. Because processes that run in
user mode are effectively isolated from the system and other user mode
processes, they cannot interfere with these resources.
User mode processes can be grouped as follows: System Processes, Server
Processes, Environment Subsystems and User Applications.
Stack Trace
A stack represents the context of the thread at any given time. The context is
defined as the point at which the thread has reached in the code and, to some
extent, how it got there. Programs are separated into functions and threads call
these functions, starting at main. A function is free to call another function, which
is free to call another, and so on. When a thread calls a function, it needs to
remember where it was before the call so that it can get back there. For this
reason the thread stores information on the stack. Information is pushed onto the
stack each time a function call is made. As the name suggests, the information
stacks up and the top of the stack contains the last piece of information pushed
onto the stack. The address to return to is given by RetAddr in a real stack trace.
When the called function returns, this information is popped off the stack so that
the stack is the same as it was before the call and the thread is returned to
executing the instruction immediately after the call.
Using the stack to trace back from a function or procedure to the original function
or procedure that generated this call gives us the Stack Trace.

Dump files
Dump files are the files created when a process crashes or the system crashes.
These dump files contain information about the stacks, memory, registers and
other system data that is useful for diagnosing the failure.
Dump files can be categorized based on the amount of data they contain. In the
case of user mode dumps, there are mini dumps and full dumps. Mini dumps are
created with minimum information such as stack information and thread
information. On the other hand, full dumps include the entire memory space of a
process, the program's executable image itself, the handle table, and other
information useful to the debugger.
Debuggee
The debuggee is a computer that is being debugged by another computer (the
debugger). The debuggee needs to be connected to the debugger through a
cable. The debuggee is also sometimes referred to as the target computer.
Debugger
The debugger is a computer used for debugging another computer (called the
debuggee). The debugger is also sometimes referred to as the host computer.

Working with WTT Autotriage tools


A special user, called a local symbol user, must be created to run Autotriage on
a debuggee computer. The credentials should have network access and
administrator permissions on the debuggee (client) computer.
Warning: The password is echoed back as you type and is stored in plain text
on the client computer, so be certain to use a test or lab account only. Do not
use your CORPNET credentials.

To set up symbol share access on a debuggee computer


Use the WTTCmd Command Tool command /addsymboluser from a command
prompt to add a local symbol user to the client computer for Autotriage or
Setup.
1. On the taskbar, click Start, and then click Run.

2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:

WTTCmd.exe /addsymboluser /user:<username> /domain:<domain> /password:<password>

Where:
<username> is the user name to be added.
<domain> is the domain name of the user name.
<password> is the password for the user name.
For example:
WTTCmd.exe /addsymboluser /user:abc /domain:test /password:abc123

To set up Kernel Debugger for the WTT Client (debuggee)


WTT provides a setup script for setting a kernel debugger for the WTT Client
computer (debuggee). The script, WTTKDSetup.cmd, is available on the server
where WTT Controller is installed. This script needs to be run for each
computer that is connected to the debugger. They can be connected via either
COM port or 1394.
1. From the debugger computer, on the taskbar, click Start, and then click
Run.
2. In the Run Dialog box, type cmd and then click OK.
3. At the command prompt, type:
WTTKDSetup.cmd <debuggeeName> <WTTServer> <COM/1394>
<PortNumber> </b BaudRate> </y SymPath> </d DebuggerPath>

Where:
<DebuggeeName> is the name of the debuggee computer
<WTTServer> is the name of the WTT Server
<COM/1394> is a choice of Com or 1394 debugging
<PortNumber> is the port number in the case of COM port debugging and
the channel number in the case of 1394 debugging
</b BaudRate> is the optional baud rate. Applicable for COM port
debugging.
</y SymPath> is the optional path for the symbol lookup.
</d DebuggerPath> is the optional debugger path from which the
debugger package is installed.

For Example:
WTTKDSetup.cmd client-test server-test COM 1
WTTKDSetup.cmd client-test server-test /y c:\symbols

For additional information on using WTTKDSetup.cmd, at the command
prompt, type WTTKDSetup.cmd /?

Default Behavior of Autotriage Tools


When WTT Client is installed on the client computer using the default options, the
debugger-less mode is enabled for kernel crashes.
The debugger-less mode enables the machine to reboot and create kernel
dumps on system failures.
The types of failures caught by debugger-less mode include all system
crashes (often referred to as bug checks, fatal system errors, or stop
errors).
If you select the Kernel Debugger option during the WTT Client install, see Client
Setup.

Chapter 8: Resolver
Resolver is the failure tracking and management tool for both Windows Test
Technologies (WTT) and Unified Stress Testing (UST). It facilitates failure
tracking and helps manage failures and their resolutions from the initial triage
stage through final resolution.

The following subjects are included in this chapter:


Terminology
Working with Resolver
Resolver Best Practices

Terminology
Kernel Mode Crash

A crash detected in a component that runs in privileged mode, such as the
kernel or a device driver.
User Mode Crash

An unhandled user mode exception that is detected by WTT. This will initiate
the WTTTriage.exe tool.
Hold Bit
Indicates whether the computer on which the crash happened should be held
for manual triage (Hold 1) or released (Release 0). Note that if the hold bit is
set to release, AutoTriage will not hold any computers on which the same crash
happens in the future.
Task Failure

A test failure within a task of a given job, indicated by a non-zero return code
from the task in the job's logged results.
Reporting Category

A flag used for reporting purposes, for filtering or sorting in reports. There are
different values depending on the failure type:
For crashes:
Ignore: ignore this failure in the reports.
Pass: count this failure as Pass in the reports.
Fail: count this failure as Fail in the reports.
Test: count this failure as a test failure in the reports.
For task failures:
Product: count this failure as a product failure in the reports.
Script: count this failure as a script failure in the reports.
Infrastructure: count this failure as an infrastructure failure in the reports.

Working with Resolver

To Query for Failures in Resolver

1. From the Process menu, click Resolver.


2. In the And/Or column, select And from the drop-down list.
3. In the Field Name column, select Stage Type from the drop-down list.
4. In the Operator column, select Equals from the drop-down list.
5. In the Value column, select WTTOMFailure from the drop-down list.
6. Click the Refresh button.

Note: A more specific query may be defined by using Query Builder to
add additional columns and values. However, the first line of the query
must always have the Stage Type equaling WTTOMFailure.

Figure 8.1 Failure Query using Resolver

To filter failures by run type (stress/BVT/other)


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.
6. In the Value column, select WTTOMFailure from the drop-down list.
7. In the next row of the And/Or column, select And from the drop-down list.
8. In the Field Name column, select Activated Reason from the drop-down list.
9. In the Value column, type the value of the run type (for instance:
Stress, BVT, Other).
10. Click the Refresh button.

Figure 8.2 Failures filtered by run type

To filter failures by stress type


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.
6. In the Value column, select WTTOMFailure from the drop-down list.
7. Click beneath the new row in the query to add another row.
8. In the And/Or column, select And from the drop-down list.
9. In the Field Name column, select Activated Reason from the drop-down list.
10. In the Value column, type Stress.

Figure 8.3 Failures filtered by stress type

11. Click beneath the new row in the query to add another row.
12. In the And/Or column, select And from the drop-down list.
13. In the Field Name column, select XML from the drop-down list.
14. In the Value column, type the stress type name (such as DirectX).
15. Click the Refresh button.

To Filter Failures by BucketID


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.

6. In the Value column, select WTTOMFailure from the drop-down list.


7. Click beneath the new row in the query to add another row.
8. In the And/Or column, select And from the drop-down list.
9. In the Field Name column, select StageDynamicFailureList(+)
from the drop-down list.
10. In the Operator column, select Has from the drop-down list.
11. In the Field Name column, select StageFailureBucket(+) from the
drop-down list.
12. In the Operator column, select Has from the drop-down list.
13. In the Field Name column, select BucketID from the drop-down list.
14. In the Operator column, select Contains from the drop-down list.
15. Type the value to filter by in the Value column.
16. Click the Refresh button.

Figure 8.4 Failures filtered by bucket ID

To Filter Failures by MachineName


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.
6. In the Value column, select WTTOMFailure from the drop-down list.
7. Click beneath the new row in the query to add another row.
8. In the And/Or column, select And from the drop-down list.
9. In the Field Name column, select StageDynamicFailureList(+)
from the drop-down list.
10. In the Operator column, select Has from the drop-down list.
11. In the Field Name column, select
StageFailureMachineConfigValueList(+) from the drop-down list.
12. In the Operator column, select Has from the drop-down list.
13. In the Field Name column, select MachineConfigVal from the drop-down list.
14. In the Operator column, select Contains from the drop-down list.
15. Type the value to filter by in the Value column.
16. Click the Refresh button.

Figure 8.5 Failures filtered by machine name

To Filter Failures by Remote


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.
6. In the Value column, select WTTOMFailure from the drop-down list.
7. Click beneath the new row in the query to add another row.
8. In the And/Or column, select And from the drop-down list.
9. In the Field Name column, select StageDynamicFailureList(+)
from the drop-down list.
10. In the Operator column, select Has from the drop-down list.
11. In the Field Name column, select Remote from the drop-down list.
12. In the Operator column, select Contains from the drop-down list.
13. Type the value to filter by in the Value column.

14. Click the Refresh button.

Figure 8.6 Failures filtered by remote

To Filter Failures by StackTrace


1. From the Process menu, click Resolver.
2. Click beneath the existing row in the query builder.
3. In the And/Or column, select And from the drop-down list.
4. In the Field Name column, select Stage Type from the drop-down
list.
5. In the Operator column, select Equals from the drop-down list.
6. In the Value column, select WTTOMFailure from the drop-down list.
7. Click beneath the new row in the query to add another row.
8. In the And/Or column, select And from the drop-down list.
9. In the Field Name column, select StageDynamicFailureList(+)
from the drop-down list.
10. In the Operator column, select Has from the drop-down list.

Figure 8.7 Failures filtered by stacktrace

11. In the Field Name column, select StageFailureBucket(+) from the


drop-down list.
12. In the Operator column, select Has from the drop-down list.
13. In the Field Name column, select StackTrace from the drop-down
list.
14. In the Operator column, select Contains from the drop-down list.
15. Type the value to filter by in the Value column.
16. Click the Refresh button.

To view a failure in read-only mode


1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Double-click the desired failure in the failure list.
3. Use the Next and Previous buttons to view additional failures within
the Resolver window.
4. On the Crash Info tab, click Connect to view the failure on the
debugger to which the failed computer is connected.
5. Click the Exit button to close the window.

Figure 8.8 Viewing Failures in Read-Only mode

To reassign a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click the entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. Select the machine owner, job owner or triage team to whom you wish
to reassign the failure from the Assigned To drop-down list. If the
individual or group is not listed, type in the desired alias.

4. Click the Save button.

Figure 8.9 Failure Edit dialog box

To Resolve a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click the entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. Select Resolved from the Status drop-down list.
4. Type the alias of the user resolving the failure in the Resolved By box.

Note: This can be your alias or the alias of the person on behalf of
whom it is being resolved.
5. Select a resolution from the Resolution drop-down list.
If Unsolved is selected, the failure will automatically be assigned to
the UST Triage Team.
If Known Bug or File New Bug is selected, the user will need to
associate one or more bugs with the failure.
6. Select a reporting category from the Reporting Category drop-down list.
7. Select a desired hold bit from the Hold Bit drop-down list, either Hold 1
or Release 0, to indicate whether to hold the computer for manual
triage or release it.
Note: If the failure does not have the correct symbols (the Bucket ID is
WRONG_SYMBOLS), this drop-down list is disabled. In this case, it
is necessary to follow the steps in To handle crashes that use the
WRONG_SYMBOLS bucket ID in order to set the hold bit.
8. Additional optional changes that can be made include changing the failure
priority, changing the creation reason, or assigning the failure to other
users.

9. Click the Save button.

To Close/Reactivate a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click the entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. On the Status drop-down list, select Closed to close the failure, or
Active to reactivate it.
4. Click the Save button.

To Edit the Failure's Crash Info


1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click the entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. On the Crash Info tab, make any changes necessary.
Note: If the Bucket ID is WRONG_SYMBOLS, it is necessary to
follow the steps in To handle crashes that use the WRONG_SYMBOLS
bucket ID in order to be able to change this item.
4. Click the Save button.

To associate a bug with the failure


1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click an entry, and then click Edit.
Alternatively, you may double-click the selected failure to invoke
read-only mode, and then click the Edit button on the toolbar.
3. On the Link To Bugs tab, select a bug database from the Bug
Database drop-down list.
4. Type the ID of the bug in the Bug ID box and click Add. This step
may be repeated to associate more than one bug with this failure.
5. Click the Save button.

To view the Job/Result information associated with the failure


1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Click the linked ID number in the Job ID or Result ID boxes,
depending on whether you wish to view the job or result information.

Figure 8.10 Job/Results Details dialog box (Resolver)

To handle crashes that use the WRONG_SYMBOLS bucket ID


When a failure is created, WTT Autotriage attempts to fix the associated
symbols and get a correct stack trace and bucket ID. If it cannot do this it
assigns the failure to the WRONG_SYMBOLS bucket and ignores the stack,
setting the hold bit to Hold 1.
1. On the debugger computer, fix the symbols and then get the correct
stack trace. Run !analyze on the debugger to get a new bucket ID.
2. In WTT, from the Process menu, click Resolver.
3. Click the Refresh button.

4. Right-click the target failure and click Edit.


5. On the Crash Info tab, paste the new stack trace into the Stack Trace
box.
6. Paste the new bucket ID into the Bucket ID box.

7. In the Hold Bit drop-down list, select Release 0 or Hold 1, depending
on your requirements.
8. Click the Save button.

To enable e-mail notifications from Resolver for a failure


1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click an entry, and then click Edit.
3. On the Notifications tab, click Add.
4. In the General box, type the notification that you wish to receive in
the event of a failure.
5. In the Mailer box, type the aliases of any individuals you wish to be
notified in the event of a failure, separated by a semicolon. Do not put
a semicolon at the end of the list.
Note: The machine owner or the failure assignee will receive an
email by default when the failure is updated or is idle for 4 hours if
the failure occurred during a stress run.
6. Click the Save button.

To enable detailed e-mails from Resolver


This option provides users with enterprise information in the e-mail
notifications generated by Resolver. The extra information may help direct
users to the correct enterprise in large testing organizations.
Note: This process modifies the underlying foundation of WTT and may
only be used with private WTT Controllers.
1. Using SQL Server, back up the SQL stored procedure
ProcessNotificationMail_SP by saving it into a new file.
2. Use the SQL Enterprise Manager or SQL Query Analyzer to edit
the ProcessNotificationMail_SP stored procedure.
Note: This stored procedure is installed when the Failure service is
installed and should be in the same datastore as the Failure service.
3. Set the variable @IdentityServer to the name of the identity server in
your enterprise.
For example:
SET @IdentityServer='ent1id.ntdev.corp.microsoft.com'

4. Set the variable @IdentityDatabase to the name of the identity
database in your enterprise.
For example:
SET @IdentityDatabase='wttidentity'
5. Save the updated stored procedure.
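The following is a minimal sketch of one way to carry out the backup and edit,
assuming SQL Query Analyzer is connected to the Failure service datastore;
sp_helptext prints the procedure source so that it can be saved to a file, and the
two SET values repeat the examples above.

-- Back up the current procedure source by saving the sp_helptext output.
EXEC sp_helptext 'ProcessNotificationMail_SP'
-- After editing, the two variables might read:
SET @IdentityServer = 'ent1id.ntdev.corp.microsoft.com'
SET @IdentityDatabase = 'wttidentity'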

Resolver Best Practices


Several practices may be helpful when working with Resolver:
Double-clicking a failure stage in Resolver displays the selected
failure in read-only mode.
Only the Next/Previous buttons can be used when viewing failures in
read-only mode.
Review the Stage and Process User Guides prior to using Resolver for
failure analysis.
After triaging a failure, set the hold bit to Release 0 to keep Autotriage
from holding a computer when the same crash occurs.

Chapter 9: Notification Service


The Notification Service wakes up every few minutes, polls the notification
queued items, and executes them based on the information from the Notification
Queued Item data. If a notification runs successfully, it is deleted from the
Queued Item table and copied to the Archive Queued Item List. If notification
execution fails, the Notification Service keeps retrying until the maximum retry
time is reached. If it still does not succeed, it sends a failure notice to the
notification owner.
The Notification Service is based on thread pooling. Each thread is responsible for
executing a certain number of items. The maximum number of items per thread
and the total number of threads per service are configurable, as is the wake-up
interval.

Notification Service Across an Enterprise


One Notification service is sufficient to operate all datastores in an enterprise. If
the enterprise has multiple datastores, the notification service opens one thread
for each datastore.

The following subjects are included in this chapter:


Notification Service Independent Setup
Notification Service Best Practices
Configuring a Standalone Notification Service

Notification Service Independent Setup


The Notification Service is installed along with the regular Windows Test
Technologies (WTT) setup. However, it can also be installed independently.

Service Setup Requirements


Controller setup (when the Database option is selected): The Notification
Service works per enterprise. If the enterprise information is not present, it
uses the default controller defined in the NotificationConfig.xml file.
Required Files: WTTNotification.Exe, NotificationConfig.xml, InstallScript.vbs.
These required files can be copied from the Controller Installation share location:
\\wttbuildsrv\build\beta2\<ChooseBuild>\<ChooseArch>\bin.
User Account:
The account that the service runs under requires the following access:
The account should have the Log on as a service permission on the computer
where the service is running. This can be set from Control Panel by
clicking Administrative Tools, clicking Local Security Policy, clicking
Local Policies, clicking User Rights Assignment, and then selecting
Log on as a service.
The account should have sufficient access to the Runtime datastore to
which the Notification Service is connecting.
The account should have access to create a log file and to write to the
Event Log on the computer where the service is running.
Note: The account should not have SourceDepot access.

To set up Notification Service independently


1. Gather the following files from the Controller Installation share Bin folder:
WTTNotification.Exe
NotificationConfig.XML
InstallScript.vbs
2. From a command line, run InstallScript.vbs. The script takes two
parameters:
Action (Install or UnInstall)
The service physical path (for example,
C:\Windows\WttBin\WttNotification.exe)
InstallScript.VBS /a [action] /s [servicepath]

For example: InstallScript.vbs /a install /s
C:\Windows\WTTBin\WttNotification.exe
3. When prompted, enter the user credentials, and then click OK. For more
information, see Service Setup Requirements.

4. In Windows Task Manager, verify that the Notification Service is
started.

To uninstall an Independent Notification Service

If the Notification Service is installed along with the WTT setup, then the
Notification Service is uninstalled while uninstalling WTT. However, if the service is
installed independently, then you must run installscript.vbs with the Uninstall
parameter to uninstall it.
For Example: InstallScript.vbs /a Uninstall /s
C:\Windows\WTTBin\WttNotification.Exe

Notification Service Best Practices

Look at the Notification.log file for any error log generated by the
Notification Service in the event of a failure.

Make sure that the account on which the service is running has
sufficient permissions to access the controller datastore to which the
service is configured.

Configuring a Standalone Notification Service


The Notification Service can work independently, without requiring the enterprise
setup. In this case, the service relies on the configurable NotificationConfig.xml
file; you can configure Notification Service options in this XML file.

To edit the Notification Configuration


1. If the Notification Service is already running, stop the service using the
following command from a command line:
net stop WTTNotification
2. Open the %WINDIR%\WTTBin\NotificationConfig.xml file.
3. Change the values of the configurable items that you are interested in, and
then save the XML file (a sample configuration follows this procedure):
IdentityServerName: The Identity server name. The fully qualified
domain name or IP address may be required.
IdentityDBName: The identity datastore for the entire enterprise.
JobsRunTimeController: The run-time database. This database
contains the service that obtains the queued items. This entry should
be in the Identity database's DSLink table.
LogPath: The log file path associated with the Notification Service log.
FromE-mail: The e-mail address of the administrator to whom e-mail
is sent in case of any service failure.
SMTPServer: The SMTP server name or the IP address of the SMTP
server.
NoOfThreads: The total number of threads that the Notification Service
will open.
MaxRecordsPerThread: The maximum number of items
(notifications) to be executed for each thread.
SleepTime: The sleep time for the Notification Service, in milliseconds.
4. Start the service using the following command from a command line:

net start WTTNotification
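The following is an illustrative sample of what an edited NotificationConfig.xml
might look like. The element names are taken from the configurable items listed
above, and the values (controller name, log path, e-mail address, and thread
counts) are assumptions for illustration only; check the file shipped with your
build for the exact schema.

<!-- Sample NotificationConfig.xml (illustrative only) -->
<NotificationConfig>
  <IdentityServerName>ent1id.ntdev.corp.microsoft.com</IdentityServerName>
  <IdentityDBName>wttidentity</IdentityDBName>
  <JobsRunTimeController>wttjobsruntime</JobsRunTimeController>
  <LogPath>C:\WTT\Logs\Notification.log</LogPath>
  <FromEmail>wttadmin@example.com</FromEmail>
  <SMTPServer>smtphost.example.com</SMTPServer>
  <NoOfThreads>4</NoOfThreads>
  <MaxRecordsPerThread>50</MaxRecordsPerThread>
  <SleepTime>60000</SleepTime>
</NotificationConfig>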

Chapter 10: Scenario Builder


With a test harness such as WTT, test developers are able to write test-specific
code in terms of reusable code that can be used for multiple tests, leaving
WTT to handle the infrastructure code. In this way, testing becomes both easier for
the test engineer and more flexible, because the infrastructure provides the means
of combining the items created, generating parameter combinations, running jobs
in a multi-threaded or distributed fashion, and measuring test performance, as well
as detecting any leaks and providing logging and synchronization functions.
This combination of test code, logic, and test data is called a scenario, and it
provides a framework for building advanced test plans.

The following subjects are included in this chapter:


Terminology
Working with Scenario Builder

Terminology
Item / Test Item

An item or test item is a reusable segment of test code, which can be combined
with other items to build a scenario. WTT does not impose any restrictions on how
an item is defined or written.
Object Item

An object item statement functions as a broker for test code written in either
managed or unmanaged code. It can be used to call existing code without any
rewriting. This statement allows the creation of object instances and the calling
of methods on those instances.
Scenario

An end-to-end set of steps encompassing test code, combining logic and test
data, and used to complete a specified task. An example of a scenario might be to
create a file.
Scenario Definition Language

An XML file that defines test scenarios (the .SDF file binds test items into a
scenario). This can be visualized as an expression or grammar for a given
scenario.
Statement

A statement informs WTT which items need to be instantiated in a scenario. There
are two types of statements: Control Statements, which provide information
about how an item needs to be instantiated (for example, in parallel or remotely),
and Item Statements, which are the actual test items (for example, Create File).
Variation

A variation is an instance of a scenario. For example, if creating a file is a
scenario, then a variation of this might be creating a valid file with a valid file
name.

Working with Scenario Builder


Scenario Builder provides testers with an end-to-end solution for rapid test
development within WTT, enabling test developers to use an object-oriented
approach to writing test-specific code while leaving the WTT harness to handle
the infrastructure. Because of the complexity of Scenario Builder, however, only a
brief introduction is provided here. For additional information and discussion of
Scenario Builder, see the WTT release site.

Designing a Scenario

To create a Scenario
1. On the Tools menu, click Scenario Builder.
2. Click the New Scenario Builder Document button.
3. In the tree view, drag and drop statements to form the outline of the
desired scenario.
4. Click on each statement and rename that statement on the Sequence
tab.
5. On the Managed Validator tab, enter the appropriate assembly name
and class or browse to the file.
6. On the Parameters tab, add parameters for the statement as desired.
7. Create individual variations for each statement as needed.
8. Add additional statements and variations to complete the desired scenario.
9. You can modify the sequence by moving statements up or down.
10. Click the Save button, enter a file name in the Name box, and then click
Save.

Executing Scenarios
Two options are available to testers for executing scenarios within WTT:
Executing the scenario as a library job.
Launching the scenario as an executable.

Either may be used, with usage depending on the needs of the jobs being run.

To execute a Scenario Builder Library Job


1. On the Explorers menu, click Job Explorer.
2. Right-click the desired feature node and click New Job.
3. Add the desired information for this job. For additional information on
creating a job, see Creating and Editing Jobs.
4. On the Tasks tab, select the Regular tab, and then click Add.
5. Select Run Job, and then click OK.
6. Type a task name in the Name box.
7. Click the Browse button and navigate to the Scenario Builder library
job.
Note: The Scenario Builder library job is a default job that allows the
integration of individual scenarios into a library job framework. It
requires the parameter code.sdf.
8. Select the Scenario Builder library job and then click OK.
9. Click the first empty cell in the Library Job Param Name column and
from the drop-down list, click code.sdf.

Figure 10.1 Adding the Scenario Builder library job.

10. In the Value column, click the Browse [...] button and navigate to the
desired scenario (.sdf) file. Click OK to import it.
11. Click the Save button and schedule the job.

To launch a scenario as an executable


1. On the Explorers menu, click Job Explorer.
2. Right-click the desired feature node and click New Job.
3. Add the desired information for this job. For additional information on
creating a job, see Creating and Editing Jobs.
4. On the Parameters tab, select the local tab. In the first empty cell in the
Name column, type code.sdf.
5. In the Type column, select FileData from the drop-down list.

6. In the Value column, click the Browse [...] button and navigate to the
desired scenario (.sdf) file. Click OK to import it.
7. On the Tasks tab, click the Add button.
8. Select Execute and click OK.

Figure 10.2 Using Scenario Builder as an executable.

9. Type a task name in the Name box.


10. On the Execute tab, type wttsb.exe f [code.sdf] in the
CommandLine box, and then click OK.
11. Click the Save button and schedule the job.

Appendix A: Glossary
Asset

A computer or a device (component or peripheral) that is suitable for test
cases.
Asset Pool

A virtual collection of computers and devices created by a user to organize the
testing process. An asset owner may create one or more pools to manage
those assets.
Asset Tag

The inventory control number used to track corporate assets. Asset tags
usually start with either an E, L, or V followed by a five or six digit number,
although in some cases a simple six digit number is used. If an asset tag
starts with a V, it should be followed by either five or six digits. If an asset tag
starts with an E or an L, it should be followed by a six digit number. If the
asset tag has no preceding letter, then it should be a six digit number.
Associated Device

A device that is provided by the vendor along with the computer. Associated
devices are required to stay with the computer and are sent back to the
permanent owner when the computer is either retired or returned to the
vendor. An example of an associated device is the AC adaptor that comes with
a laptop.
Attached Device

A device that is added to a computer on a temporary basis. Examples include
printers and scanners.
Attributes

Custom properties defined by the type owner for specifying type-specific data.
Authority

Synonymous with the type owner.


Automation Controller

A set of services and applications that run constantly, allowing WTT to
function.
Automation DataStore

The database where Sysparse stores the WTT test case automation data. WTT
test cases stored in the Automation DataStore can be grouped and scheduled
as needed to complete a test pass.
Also: Controller. (Deprecated term: Controller database.)
Categories

Categories allow the user to sort data items into logical groups while keeping
the information for that data in only one location. In WTT all categories from
all teams are available to all users. Categories are grouped in a hierarchy so
that users can easily browse to the categories used by their team, while
ignoring the categories of others.
The concept of categories is equivalent to the term test suite as used by
many teams in the Windows division. A test suite is a category that is used to
select a group of tests to be executed. Test Cases can belong to more than
one test suite.
Child Asset Pool

An asset or machine pool that is a part of another "parent" asset pool. Several
child asset pools can be created as part of a parent asset pool in the
asset/machine pool hierarchy.
Cleanup Job

Cleanup tasks normally execute after setup tasks are completed or after a
failure action has allowed job flow-control to come to the cleanup tasks.
A cleanup task may also be a job that is scheduled within a stress job that is
executed after the regular stress tests.
Common Context

A collection of constraints common to the job to be run. This defines the
common context for the job as a whole.
Computer

Computers can run the Execution Agent (EA) and Job Delivery Agent. The
Automation Datastore stores the information about the computer and its
status. Changes to the computer's configuration are identified and updated in
the database by Sysparse each time the client computer starts.
Computers are typically organized into asset or machine pools.
Computer Config

The information that the computer reports via Sysparse to the Automation
DataStore.
Config Job Role

A job that performs a setup activity, such as a smart installation or smart
cleanup, when required by another job before it executes.
Constraint

A logical set of dimensions that describe a class of computers. A constraint
limits which computers the WTT Job Scheduler can use for a specific WTT job.
Constraints help users target a set of tests against multiple sets of computers
of the same class without the need to reschedule the tests for each computer.
Contexts

A set of constraints that can be applied to test jobs and schedules when
designing a test or a set of tests.
Controller

The WTT Controller hosts the Job Delivery Agent and WTT Execution Agent,
along with other services and applications that run constantly and perform the
functions fundamental to the operation of WTT. Additional WTT Controllers can
be tied to the Controller Automation Datastore.
Controller Admin

A user with full access to all the objects in a WTT Controller database.
Controller User

The Controller runs a few specific services in order to complete its jobs, which
run under an account that is entered during Controller setup. This account is
referred to as the Controller User or the service account.
Copy Results Task

Moves log files or other data files off the test computer and onto the log file
server. Typically, these files are separate from other WTT log files that are
output by the executable tasks.
Current Owner

The Current Owner is the user who currently has the asset in his or her
possession; this may or may not be the same person as the Permanent Owner.
Custom Dependencies

Specific dependencies that are custom set by the user.


Custom Mixes

A local collection of contexts created on a specific job based on specified
combinations of dimensions and constraints.
Default Pool

The pool where a newly registered computer is initially placed.


Dependency

A dependency defines which tasks must complete before another task can
begin. Advanced dependencies can be set between a subset of computers by
using a dependency index.
Device

A component or peripheral hardware part that cannot run the Execution Agent
(EA).
Dimension

An aspect of a computer's configuration paired with a value and used in
setting constraints. For example: video driver = NVIDIA.
Dimension key-value pairs are automatically reported to the Automation
Datastore whenever the computer starts.

Dimension Value

The specific model or other value of the dimension aspect or component. For
example, NVIDIA is a possible dimension value for the video driver dimension
and 4123 is a possible build number for an operating system build.
Execution Agent (EA)

The WTT Jobs tool that runs on client computers whenever they are started. It
runs Sysparse, and then takes the resulting XML file, parses it, and uploads
the data to various WTT and Asset Tracking databases.
Execution Phase

Defines where in the job execution the task is executed. (Deprecated
term: Category.)
The phase within the job run where the task executes. Examples include
Setup, Main, and Cleanup.
The execution phase provides a way to organize tasks into groups within a
job. Each execution phase can have its own ordering and dependency-based
execution.
Execution Order

Specifies the order in which tasks are to be run. (Deprecated term:
Dependency.)
Execution Task

A command line to any executable file. The execution task can be any
executable type such as EXE, VBS, and CMD.
Failure Action

The Failure Action per task allows the user to specify how the job should
continue if the current task fails.
Global Mixes

A global mix is a mix that can be applied to any job on a controller. It can be a
simple mix or a Model-based Test Development Environment (MDE)-based
mix. A simple mix is a collection of contexts, with each context containing a
set of constraints. An MDE-based mix is a complex mix with a set of rules
associated with the contexts.
Group

An arbitrary set of tests. WTT uses the term "Job" to encapsulate the
information required to automate a test case. A Job consists of several pieces
of information that prescribe how the test case is to be executed. Test Cases
are extended with this data, also referred to as a "Job Definition."
Job

The logical entity that defines how a test case is executed. A typical Job
contains Logical Machine Set, Task, Dependency and Parameter definitions.
Each of these elements determines how the test case runs.

Job Collection

A set of scheduled jobs used mainly for a conceptual view of scheduled jobs
and results. Job collections exist mainly to provide a centralized point of
view for tracking test-run status.
Job Constraints

A logical set of dimensions designed to describe a class of computers. Job
constraints are applied to a job to limit which computers the WTT Job
Scheduler can use for a specific WTT job. Constraints can also be applied to a
schedule item.
Job Delivery Agent

The agent that queries the database for scheduled jobs and delivers them to
the client computers. Sometimes also called the Push Daemon. A database
can have more than one Job Delivery Agent, depending on the load on the
database.
Job Name

The display name given to a test case or job. This is the string that is
displayed when browsing or exploring test cases.
Job Role

Provides information to WTT about how the job is to be used. Different roles
place different restrictions on jobs. Also referred to as the
JobExecutionTypeID.
Job Run

See Schedule.
Key

Variables that can be used by the test case executable. These variables can be
set at job creation time.
Library Job

A job that can be referenced as a part of another job. A library job can have
no logical machine sets (LMS) defined and it cannot be scheduled directly.
Local Logical User

A local account used to protect the domain credentials of a user in WTT. The
mapping between the LLU and the user credentials is stored on the client
computer with the password encrypted. The WTT Client accesses this table to
resolve the local name to the actual credentials (username, password, and
domain). By using a local logical user (LLU) instead of your own domain
credentials, you avoid having your credentials compromised.
Local Symbol User

This is a credential store used to access a symbol share. The store is accessible
to all users on the computer. It is primarily used by WTT Autotriage to access
the symbol share when any process breaks into the debugger.

Logical Machine Set (LMS)

A logical machine set contains the definition of the quantity and description
of the computer type that is required for the execution of a job. These
requirements can be both hardware- and software-oriented, and multiple
logical machine sets may be used per job.
Main Job

Regular tasks that are executed once all setup tasks have completed. Their
ordering and dependencies are respected. The Main Job is executed in the
main phase of the job. (Deprecated term: Regular)
Manual Job

A job where the steps required to run the test case are performed manually
by the user.
Mix

A set of one or more contexts (sets of constraints) applied to jobs and
schedules. The constraints are used by WTT to select computers for a test
case. When a job is designed, a user can apply several sets of constraints
(contexts) at one time by applying a mix that contains these contexts.
Scheduling a job creates one instance for each valid context in a mix.
Mix Algorithm

Module that generates a stress mix of tests for a stress job.


Object Model

The abstract layer that implements all the database operations.


Parallel

All given tasks run at the same time.


Parameters

Run-time parameters behave like environment variables, but are not
restricted to a specific computer. At schedule time, the WTT Execution Agent
(EA) replaces all parameters with the user-defined values, and propagates
those strings to the jobs. They allow each task to accept user data that is
defined when the test is executed. This differs from static values that are
defined within the job definition. Parameters provide the ability to submit
schedule-time preferences to the jobs selected in the schedule. For example:
a parameter with the name "Build Number" might be replaced with the actual
build number "4.00.29.2345" when the job is scheduled. When the job is run,
the command that is executed in the job receives the actual value, not the
parameter name.
Permanent Owner

The individual who is directly responsible for a given asset on a permanent
basis. This may or may not be the same individual as the Current Owner.
Registration

The process of entering a computer or device into the Asset Tracking portion
of WTT.
Reporting Category

A flag used for reporting purposes for filtering or sorting in reports. Different
values exist, depending on the failure type. For crashes, values are ignore,
pass, fail and test; for task failures, values are product, script, and
infrastructure.
Resolver

The failure tracking and management tool for both WTT and UST.
Result

The unit of work created when a job is scheduled.


Result Collections

While a result is the unit of work created when a job is scheduled, a result
collection is a set of those scheduled job results. As such, it provides users
with a centralized point of view for tracking test-run status by associating
aggregate result values such as PassedJobs, FailedJobs, NotRunJobs,
NotApplicableJobs, and Total Jobs. These counts track summary information
for all results included in a collection.
Role

The role designates the type of job, keyed on how the job is to be used.
Run Jobs Task

Runs a library job within the context of the current job. Because library jobs
themselves cannot contain Run Jobs tasks, WTT jobs are limited to a single
embedded layer.
Schedule

A Schedule uses the information defined within the Job or Jobs and prepares
the jobs for execution in the designated order. Sometimes referred to as a job
run.
Schedule Constraints

A constraint that is applied to the scheduling of a job and is therefore created
at schedule time. (Deprecated term: configuration.)
Secure Object

A Controller database object that has enforced permissions.


Setup Job

A setup job is normally executed as part of the setup phase of a schedule. A
setup job can also be a job that is scheduled within a stress job and executed
before the regular stress tests.
Sequential

All given tasks run in a specific order.

Stress Job

The job that contains all the setup jobs, stress tests, and cleanup jobs for
running stress.
Stress Type

The overall grouping of a set of stress tests within a set of groups.


Sysparse

The WTT tool that runs on client computers and takes a snapshot of the
computer's configuration. Sysparse inventories the computer's hardware
components and provides information for WTT to use to determine which
computers to schedule for testing. Sysparse outputs the results to an XML
file used by WTT when applying constraints and for other operations.
Target Owner

The intended owner when transferring ownership of an asset from yourself to
someone else.
Task

The set of operations that execute when a job runs or define what action to
take if the job fails. The user can assign a task to run on one or more logical
machine sets. There are four types of tasks: Execution, Run Jobs, Copy File,
and Copy Results.
Task Dependencies

Defines the execution order of tasks across individual computers or logical
machine sets (LMS) within a job. Task dependencies are the basis for creating
complex client-server types of test scenarios, where one application depends
on a number of actions on different computers before it can begin execution.
Temporary Owner

A user who has borrowed an asset from someone is a Temporary Owner.


Test Case

See Job.
Test Case Management (TCM)

A management solution (often including an automation framework) for
starting tests, tracking test assets, gathering and analyzing test data, and
logging the test results to back-end databases. WTT is an example of a
complete TCM.
Test Config

The reported result.


Tracked Device

A device that has an asset tag and/or serial number and a defined device
label. It can also be a standalone device such as a printer or scanner.
Type Owner

The user who has the permissions to configure the stress type.

Vendor

An original equipment manufacturer (OEM).


Wintrack

E-mail alias copied on all loan and transfer requests for tracking.
WTT Controller

See Controller.
WTT Execution Agent (EA)

See Execution Agent.

Appendix B: Accessibility Options
The following topics provide alternate options designed to enhance accessibility to
the functionality within the Windows Test Technologies (WTT) environment.
For additional information on accessibility issues at Microsoft, see the Microsoft
Accessibility website.

The following subjects are included in this appendix:


Global Keyboard Shortcut Combinations
Keyboard Shortcut Combinations
Accessibility Known Issues

Global Keyboard Shortcut Combinations


The following global keyboard shortcuts can be used for added accessibility to
major components of WTT Studio from anywhere within the WTT environment:
Alt + X + J    Job Explorer
Alt + X + R    Result Explorer
Alt + X + M    Job Monitor
Alt + X + C    Result Collection
Alt + X + E    Error Log Viewer
Alt + X + O    Result Rollup
Alt + S        Stress Menu

Keyboard Shortcut Combinations


The following keyboard shortcuts can be used for added accessibility within the
WTT environment.
Note: Although capital letters are shown in the following accessibility table,
these are not required for the shortcut key combinations.

Operation                                     Keyboard Shortcut (Ctrl)   Keyboard Shortcut (Alt)
File Menu                                                                Alt + F
Open                                          Ctrl + O                   Alt + F + O
Save                                          Ctrl + S                   Alt + F + S
Save As                                                                  Alt + F + A
Manage Enterprise                                                        Alt + F + M
Print                                         Ctrl + P                   Alt + F + P
Print Preview                                                            Alt + F + V
Exit                                                                     Alt + F + X
New Job                                       Ctrl + N                   Alt + F + N
Export Jobs                                                              Alt + F + E
Import Jobs                                                              Alt + F + I
Save Job                                      Ctrl + T                   Alt + F + S
Create Schedule                               Ctrl + Shift + S           Alt + F + C
Save Result                                   Ctrl + T                   Alt + F + S
New Global Mix                                Ctrl + N                   Alt + F + N
Save Mix                                      Ctrl + T                   Alt + F + S
New Log Type                                  Ctrl + N                   Alt + F + N
New Stage                                     Ctrl + N                   Alt + F + N
New Template                                  Ctrl + N                   Alt + F + T
New Result Collection                         Ctrl + N                   Alt + F + N
Edit Menu                                                                Alt + E
Undo                                          Ctrl + Z                   Alt + E + U
Redo                                          Ctrl + Y                   Alt + E + R
Cut                                           Ctrl + X                   Alt + E + T
Copy                                          Ctrl + C                   Alt + E + C
Paste                                         Ctrl + V                   Alt + E + P
Delete                                        Del                        Alt + E + D
Categories                                    Ctrl + Shift + C           Alt + E + I
View Menu                                                                Alt + V
Query                                         Ctrl + Q                   Alt + V + Q
Hierarchy                                     Ctrl + H                   Alt + V + H
Refresh                                       F5                         Alt + V + R
Task Results                                  Ctrl + T                   Alt + V + T
Machines                                      Ctrl + M                   Alt + V + M
Asset Menu                                                               Alt + S
Register Asset - Computer                                                Alt + S + R + C
Register Asset - Device                                                  Alt + S + R + D
Bulk Add Computers                                                       Alt + S + B
My Assets                                                                Alt + S + M
Vendor Management                                                        Alt + S + V
Asset Management - Asset Loan Wizard                                     Alt + S + S + L
Asset Management - Asset Transfer Wizard                                 Alt + S + S + T
Asset Management - Sysparse - Add Sysparse Files                         Alt + S + S + P + A
Asset Management - Sysparse - Sysparse Merge Manager                     Alt + S + S + P + M
Reports - Computers Across WTT                                           Alt + S + P + C
Reports - Devices Across WTT                                             Alt + S + P + D
Explorers Menu                                                           Alt + X
Job Explorer                                                             Alt + X + J
Result Explorer                                                          Alt + X + R
Job Monitor                                                              Alt + X + M
Result Collection                                                        Alt + X + C
Error Log View                                                           Alt + X + E
Result Rollup                                                            Alt + X + O
Admin Menu                                                               Alt + A
Mix                                                                      Alt + A + M
Log Type                                                                 Alt + A + L
Users                                                                    Alt + A + U
Parameters                                                               Alt + A + P
Dimensions                                                               Alt + A + D
Attributes                                                               Alt + A + A
Unified Reports Cube Admin                                               Alt + A + C
Process Menu                                                             Alt + P
Stage Explorer                                                           Alt + P + S
Process Explorer                              Ctrl + Shift + P           Alt + P + E
Management - Process Template Explorer                                   Alt + P + M + P
Tools                                                                    Alt + T
Scenario Builder                                                         Alt + T + S
Plugin Manager                                                           Alt + T + P
Metric - Configuration                                                   Alt + T + M + C
Metric - Analyser                                                        Alt + T + M + A
Window                                                                   Alt + W
Toolbar                                                                  Alt + W + T
StatusBar                                                                Alt + W + S
Cascade                                                                  Alt + W + C
Tile                                                                     Alt + W + T
Opened Window                                                            Alt + W + [window number]
Help                                                                     Alt + H
Contents                                                                 Alt + H + C
Plugins                                                                  Alt + H + P
About WTT Studio                                                         Alt + H + A
Metric Analyzer                                                          Alt + M
Dataset - Load                                                           Alt + M + L
Dataset - Save                                                           Alt + M + S
Scenario Builder                                                         Alt + B
Validate                                                                 Alt + B + V
Run                                                                      Alt + B + R
Table B.1 Accessibility Shortcut Hotkey Combinations

Accessibility Known Issues


The following issues are known issues related to accessibility within the WTT
environment. Although it is anticipated that they will be corrected in future
releases of WTT or other products, a specific timeframe is not available at the
present time.

Issue: The keyboard shortcut Alt + Hyphen does not open the shortcut
menu for the current child MDI window when it is maximized. Alt + Hyphen is
the standard accessibility key for navigating MDI child windows and does not
reliably function within WTT.

Source of problem: Believed to be a WinForms bug.

Current Status: Postponed until after the WTT 2.0 RTM release.
Workaround: Press Alt + Space to open the shortcut menu on the
main MDI window, and then press RIGHT ARROW to open the shortcut
menu on the child MDI window.

Appendix C: Best Practices


The following topics provide recommendations for the best practices to follow
when using Windows Test Technologies (WTT).

Asset Tracking Best Practice Recommendations


When registering devices, be as specific as possible when entering the device
label so that it can be found easily in a query.
When registering computers or devices, double check all asset tag numbers to
be sure you are entering the correct tag number.

Jobs Best Practice Recommendations


The following topics provide recommendations for the best practices to follow
when designing tests, implementing jobs and naming items.

Test Structure and Design


When designing tests to use with WTT, treat a job as a complete end-to-end
test run for a test. Although in very large scenarios it may be necessary to
group jobs together, the best method is usually to design each job to set up an
environment for the test first, then run the test, and finally to clean up the
environment. Such a job uses specialized tasks for the purposes of setting up,
running, and cleaning up the test. This is made easier because WTT Jobs
allows you to easily encapsulate all necessary information about a test run
from the hardware needs to the necessary steps involved, and from the
computer interactions to the parameters a particular test depends on. Treating
a job as a complete test run makes it easier to schedule later on.
Cleanup can be either a part of a job (a task) or a separate job that is
scheduled concurrently if the cleanup is designed for multiple jobs. WTT does
not do test clean-up and restoration automatically. It is the job owner's
responsibility to ensure that the job restores each test computer to its original
state after the test is completed.
Do not create a job that contains large numbers of tests, if possible. Instead,
create multiple jobs and schedule them simultaneously as a single job run.
Putting different tests into a single job is contrary to the idea of a job
representing an end-to-end test run. Separating jobs also makes the
individual jobs more easily reusable. Unless all of the tests that you are
putting into the job are meant to run together all of the time, you will have
trouble running them separately later on.

When a FileData parameter is used to create a file in the JobsRuntime folder, a
useful technique is to type a short script into the parameter, such as
script.vbs; you will then have a file called script.vbs in your jobs runtime
folder to execute. However, while this works for VBScript and JScript, it will
not work for CMD or BAT scripts. If those types are used, the file will not be in
the proper format for the script interpreter to parse, and it will break. This
functionality is expected to change in a coming release of WTT.
If a particular step in a job will be used in many other jobs/tests, consider
putting that step into a library job by itself. If, for example, all of your tests
require the creation of a series of shares on your test computer, then it might
be better to make that setup step a library job. The use of library jobs
provides a high degree of re-usability for tasks that are needed by individual
tests and still gives the flexibility to package multiple tests together at
schedule time. Any common steps that are defined as library jobs can be used
at schedule time for bulk testing setup.
It should be noted that parameter names cannot be edited after being
scheduled if they are still mapped to another parameter. For instance, if a
main job has a parameter that is mapped to a library job parameter, once that
job has been scheduled, the main job parameter name may not be changed
while it is still mapped. This keeps runjob tasks that depend on that main job
parameter name from breaking. To change the name, unmap the parameter
in the runjob task, either by deleting the task or by editing it and deleting the
mapping. When this is done, the main job parameter name may be changed.
When creating executable tasks, directory paths may not contain spaces
within them. Any paths that do contain spaces will cause that task to fail. This
applies whether the directory path is static or is a parameter value. For
example: if a parameter value = C:\Program Files, the task will fail with the
following error: no such dir: c:\Program.

Job Implementation - Logical Machine Set (LMS)


During test design, do not hard-code the computer name, type, full path name,
or share information unless it is absolutely necessary or it is already part of
the setup of the test itself. Information such as computer names is
dynamically selected based on the availability of resources at the time of
scheduling. Your test can access this information through variables and
parameters.
Always use a name for your logical machine set (LMS). Scheduler may have
problems processing an LMS that has an empty string as its name.
It is not always necessary to define an LMS. If a job is not computer-specific
and can run on any hardware, then it is better not to include an LMS
definition. This is more efficient because it allows Scheduler to simply locate
any computer that is available.

If your job has an LMS as part of its definition, then all tasks within the job
must have an LMS assigned to them. A task with no LMS in a job that has an
LMS definition is treated as an invalid task.
When using LMS default parameters to extract the computer names selected
by the LMS, differing results may be returned depending on the type of job
being used. For example: given a job with two LMS, with two computers
each:
PrimaryLMS = "Test1"
SecondaryLMS = "Test2"
Within a main job you can use the default parameters of the LMS names to
extract the machines selected by the LMS. In other words, running the
following command-line on either LMS will return both computer names:
ECHO PrimaryLMS:[PrimaryLMS]&&ECHO SecondaryLMS:[SecondaryLMS]
with the following output:
PrimaryLMS: Test1
SecondaryLMS: Test2

Job Implementation - File Copying


It is your responsibility to design the job so that it uses the copy file task to
copy the files and the test binaries that are required by the job. Tests should
not pull down files directly on their own. When offline execution becomes
available in future WTT releases, all files will be packaged based on the copy
file associated with the job. The only way to do this correctly and have your
test be available offline is to use the copy file task to get test binaries for a
job.
Do not use the Copy File task to transfer files from one remote location to
another remote location. The Copy File task is intended for transferring files
from a remote computer to the local computer. (You can only specify
permissions on the location you are copying from). Use multiple copy tasks if
you need different sets of files available on different computers in the LMS.
Always use the full UNC path name for a server listing. File lists should contain
only file names (and/or wildcards) for the sake of consistency.
Beware of the x86-based and I64-based architecture difference. Use the
[PROCESSOR_ARCHITECTURE] environment variable if necessary. The
x86-based and I64-based binaries are distinct and in many cases are stored
in separate locations. Since you might not know the exact specification of the
computer you will be getting, your file copying should also account for this
fact. This may not be applicable if you want to enforce a WOW64-type test
run.

Use a wildcard to copy a large number of files from one location. WTT
supports system wildcards for copying files just like the Copy.exe. For
example, using *.txt is much cleaner than typing or browsing entries file by
file.
When creating a Copy File task, you can specify system environment variables
in your paths using the following method:
\\server\share\[PROCESSOR_ARCHITECTURE]\test_binaries\*
It is important to note that any environment variable from the user profile will
not be expanded in the context of the user that is running the Copy File task,
but rather in the context of the local system.
o Common examples of user environment variables might be:
%USERNAME%, %TEMP%, etc.
o In an execute task, you would simply keep using the %% format and
ensure that CMD.EXE was running the task, which would handle
expansion under the credentials of the user running the task.
o For CopyFiles task, there is no execute commandline to take advantage
of this behavior. As a result, there is no built-in method to use the USER
variables for the user specified in the copyfiles task.
A workaround to this behavior is to make use of the WTTCMD
/SysInitKey functionality within an execute task in order to capture
the value of the desired user profile environment variable. This is
done as follows (a concrete example appears after these steps):
1. Add an execute task to the job, usually in the setup phase but in
any case prior to the point where the variable is used.
2. Ensure that this task executes under the same user context that the
Copy File task will execute under.
3. Use the following command line:
WTTCMD /sysinitkey /key:KEYNAME /value:%VARIABLE%
where
KEYNAME is the name used in the Copy File task
%VARIABLE% is the exact user environment variable that needs
to be captured, for example %TEMP%, %HOMEDRIVE%, or
%USERNAME%.
4. Select the Create new command shell for this task check
box.
5. Save the task, and then use [KEYNAME] in the subsequent Copy
File task.
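For example, to capture the user's %TEMP% value under the illustrative key
name UserTemp, the execute task's command line would be:

WTTCMD /sysinitkey /key:UserTemp /value:%TEMP%

The subsequent Copy File task can then reference [UserTemp] in its path.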

When a Copy File task is used to copy a test binary to the JobsWorkingDir and
a subsequent execute task attempts to run this binary directly, the task will
fail with an error finding the binary file.
This occurs because the working directory for WTTSVC is
\WTT\JobsWorkingDir\ whereas the actual job has a working directory of
\WTT\JobsWorkingDir\JobRuns\<folder name>, where <folder name> is the
Run GUID folder name.
Although WTT can call CreateProcess() with the working directory as specified
in the task, CreateProcess() does not use the working directory to find the
specific process being run in the task, so therefore it must either be in the
system path or be explicitly added to the command-line. If that task runs
notepad.exe, it will therefore succeed because notepad.exe is in the path. It
would also succeed if CMD.EXE is run (or the Create new command shell
for this task option is selected).
Therefore there are three workarounds available for this problem:

Select the Create new command shell for this task option. This
launches CMD, which then uses RunWorkingDir as the CMD working
directory. CMD will then run the remainder of the command-line tasks
from this directory.

Prefix the binary with the [WTTRunWorkingDir]\ parameter.


For example, instead of:
Test.EXE switch name
Use:
[WTTRunWorkingDir]\Test.EXE switch name

Specify a custom working folder and then use the entire path to the
binary.
For example:
C:\TESTBINS\Test.EXE switch name

Job Implementation - Task Execution Phase and Dependency


Always choose the appropriate task category for each task within the job.
There are implicit dependencies built into each task category. For example, all
setup tasks are run before any regular task is run. This can save you time in
defining the dependencies.
Always include some sort of user credentials (either a domain user
name/password or a local logical user, LLU) to access shares and copy files
whenever possible. A WTT execution agent creates processes running under a
local system account by default, so it is best to create a local logical user
(LLU) account for most operations. The User Name and Password tab under
Execution Options of the task details page can have parameters that allow the
username and password to be applied at schedule time. This is also useful for
running tests under different user credentials.
Note: Be aware that using domain credentials will transmit the password in
clear text. If security is a concern, use an LLU.
If the command line used will make a particular task reboot, make sure the
reboot check box is selected under Task Execution Conditions. WTT relies
on this check box to determine whether the reboot is a bug check or just an
action that the task must take.

Using Batch Files in Jobs


Rather than using batch files in WTT, it is recommended that you instead treat
the WTT job as you would a batch file. For example, to replace a batch file:
o Create a Copy task in the setup Task area of the job to copy down all
files needed.
o Create multiple Execute jobs to run the executables that were copied.
o Create a cleanup task to either delete the files on the local computer or
copy them up to the systemlog share.
If it is necessary to use batch files, the following practices are recommended:
o Avoid using Start, as this will spawn a child process that WTT will kill
when the parent process exits.
o If Start must be used, use Start /wait, as this will make the batch file
stay where it is until the process finishes.
When WTT starts a batch file, it assumes that when the batch file is finished,
the job task is done. As a result, WTT kills all processes started by the
batch file.
Using batch files makes troubleshooting more difficult:
o Batch files do not normally provide clues as to problem location.
o Large batch files with multiple tasks provide few indications of
problems. By thinking of each executable as a different task and
designing the test in this fashion, reporting where problems occur
during testing is much simpler than when using a large batch file.
o Handing off jobs to other testers is easier without batch files, as a list
of tasks is much easier for others to troubleshoot than batch files.
If batch files are used and job tasks need to be continued after the batch file
itself has finished, using the Create Breakaway Process is recommended. More
information on this process may be found under WTT at http://toolbox/.

Naming Conventions (Tests, Features, and Categories)


When defining job folder names, test names, and group names, only certain
valid characters can be used. The valid characters supported by WTT are:
a-z, A-Z, 0-9
!@#$^*()?-_=[]{}
space, semicolon (;), comma (,), period (.), single quote (')

Security Concerns
Because of the large number of computers on networks running with insecure
patch levels, insecure user accounts and other security issues, it is
recommended that users install and run Microsoft Baseline Security Analyzer
(MBSA), available from the Microsoft web site at
http://www.microsoft.com/mbsa in order to audit and correct important
security concerns. This is especially important for WTT Controllers, which have
full control over all systems attached to them, and can make use of any LLU on
a client.

Keys, Parameters, and Environmental Variables


FAQs
Why does WTT sometimes expand %WINDIR% and sometimes does
not?
WTT itself will never expand a %VARIABLE%. The confusion arises because
when an execute task is run using CMD (in other words, it is either run from
the command line or by selecting the Create new command shell for this
task option), the CMD shell expands the %VARIABLE% into its resulting
value. When an execute task is not sent to CMD (which is separate from
WTT), the %VARIABLE% is not expanded.

Why doesn't WTT expand %WTT\SomeDimension%?


Neither CMD nor WTT will expand a %PARAMETER% or %DIMENSION%.
CMD has no concept of WTT parameters and dimensions, and WTT only
expands [PARAMETER] or [DIMENSION] format. Use
[WTT\SomeDimension] instead.

Why doesn't WTT expand [USERNAME] or [USERDNSDOMAIN], but does
expand [WINDIR]?
WTT expands environment variables in the context of SYSTEM.
To see the environment variables that will work in [VARIABLE] format,
open SYSDM.CPL, click the Advanced tab, and then click the
Environment Variables button. You can then examine the System
Variables section.
In order to expand variables in a USER context, ensure that the execute task
logs on as a user and then pass the %VARIABLE% to a CMD session (either
by using CMD /C from the command line, or by selecting the Create new
command shell for this task option); an example follows.
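The following command line is a minimal illustration (the echoed variable is
arbitrary): when run in an execute task, CMD rather than WTT expands the
variable, under the task user's credentials:

CMD /C ECHO %USERNAME%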

Other Considerations
If there are keys, parameters, dimensions, or local system environment
variables that share an identical name, WTT will expand the name in brackets
to the first value defined, searching in the following order:
1. Key
2. Parameter
3. Dimension
4. System environment variable
For example:
All systems automatically have %WINDIR% defined as a system
environment variable that points to the location of the main Windows
folder, as in C:\WINDOWS.
If a job is created with a task to execute ECHO [WINDIR]&&PAUSE and
executed, WTT will look first for keys, then parameters, then dimensions,
and then environment variables, and will subsequently return the
environmental variable value C:\WINDOWS.
If a dimension named WINDIR is then added to the job with its value set
to dimension and the job executed again, WTT will again look for
keys, then parameters, and then dimensions, and finding one, will expand
[WINDIR] to return dimension rather than C:\WINDOWS.
If a parameter named WINDIR is added to the job with the value param
and the job executed, WTT will look first for a key, and then a parameter,
and finding one, will expand [WINDIR] to return param rather than
dimension or C:\WINDOWS.
And lastly, if a key named WINDIR is added to the job with the value key
and the job executed, WTT will look first for a key, and finding one, will
expand [WINDIR] to return key rather than param, dimension, or
C:\WINDOWS.

Appendix D: WTT Logger


Windows Test Technologies (WTT) Logger is an XML-oriented logging solution used
for WTT. It is based on a producer-consumer (publisher-subscriber) model, so it
simultaneously supports multiple outputs such as file, console, and debugger. In
addition, the model can be extended to accommodate further requests, such as
an encrypted log file.
Note: The WTT Logger functionality described here is of a basic informational
nature only. For more information and customization, see the WTT Logger
Software Development Kit (SDK).

The following subjects are included in this appendix:


WTT Logger Functions
WTT Logger Terminology
Working with WTT Logger
WTT Logger Integration with WTT Jobs

WTT Logger Functions


WTT Logger has native support for COM, C/C++, and Visual C#. The WTT Logging
engine is accessed through IWTTLog, a top level interface. The IWTTLog functions
are divided into the following types.
Tracing
Every message generated by the WTT Logger is represented by a tracing object
internally. A tracing object has several attributes, including Run Time Information
(RTI), context, trace level, and trace priority. A trace object is passed around by
different devices in the binary form and is converted to the final XML format when
it reaches a subscriber.
Topology configuration
WTT Logger allows devices to be connected in various ways. Topology
configuration is used to manage the internal organization of the devices. Users
can pass in a device configuration string from the top-level interface to describe
the topology for a test run.
Program flow control
Program flow control provides you with a consistent way of checking and handling
test errors in your programs. Using this, users can triage test failures
automatically.
Test case management

Test case management is used to manage all test cases in a test run. It includes
marking the start and the end of each individual test case. Upon starting a test,
the logger creates a context based on this test automatically. Similarly, end test
closes the test case context. It also includes setting test case information as well
as test computer-specific information so that auto-logging is possible. A rollup
XML is generated at the end of the test run to summarize the overall pass/fail
results.

Terminology
WTT Log Device

In WTT Logger terms, a device is an object that can process a trace in certain
way.
WTT Log Device String

The WTT Log Device String is a string that represents the configuration of
logging outputs.

Working with WTT Logger


It is relatively easy to get started with using WTT Logger. From a very high level,
the general usage pattern looks similar to the following one.

To get started using WTT Logger


1. Set up the outputs by calling one of the following:
In C, call WTTLogCreateLogDevice().
In COM, call IWTTLog::CreateLogDevice().
In Visual C++, call CWTTLogger::CreateLogDevice().
For Visual C++, you must also instantiate a CWTTLogger object prior to
calling any tracing API.
2. Use tracing by calling the various tracing functions to log your data or
messages.
These APIs include Trace() and UserDefinedTrace() or other helper
functions such as StartTest() or EndTest().
3. Clean up by calling one of the following:
In C, call WTTLogCloseLogDevice() and WTTLogUninit().
In COM, call IWTTLog::CloseLogDevice().
In Visual C++, call CWTTLogger::CloseLogDevice().

Code Example of a Typical Application Using WTT Logger


The following code example gives a brief look-and-feel of a typical user
application that uses WTT Logger.
#include "stdafx.h"
#include "wttlog.h"

int _tmain(int argc, _TCHAR* argv[])
{
    HRESULT hr = S_OK;
    LONG    hDevice = NULL;
    CWTTLogger Logger;

    //
    // Define the outputs to which logs will go.
    //
    Logger.CreateLogDevice(
        L"$LocalPub($LogFile:file=foo,writemode=overwrite)",
        &hDevice);

    //
    // Run a test case and add some logs.
    //
    Logger.StartTest(L"Test1", hDevice);
    //
    // Execute some test code.
    //
    Logger.Assert(FALSE, __WFILE__, __LINE__);
    Logger.EndTest(L"Test1", WTT_TESTCASE_RESULT_FAIL,
                   L"End of Test1", hDevice);

    //
    // Run another test case and add some logs.
    //
    Logger.StartTest(L"Test2", hDevice);
    //
    // Execute some test code.
    //
    Logger.Trace(WTT_LVL_WARN,
                 hDevice,
                 __WFILE__,
                 __LINE__,
                 L"Warning Message");
    Logger.EndTest(L"Test2", WTT_TESTCASE_RESULT_PASS,
                   L"End of Test2", hDevice);

    //
    // Clean up.
    //
    Logger.CloseLogDevice(NULL, hDevice);

    return 0;
}

Coding WTT Logger


The following interfaces are available for a user application to use WTT Logger:

To code for WTT Logger using C/C++


C/C++ programmers perform the following tasks to use WTT Logger.
1. Header Files: Include WTTLog.h in your code, and include
WTTLogDef.h for predefined WTT Logger constants.
2. Coding: Call CreateLogDevice() and CloseLogDevice() to start and end
the logging session. In between, you can call Trace() or other helper
functions for various tracing purposes.
Note: For C++ users, you must create an instance of the CWTTLogger
class before any WTT Logger API is called.
3. Stub Libs: Use WTTLog.lib and WTTLogCM.lib to statically link to the
WTT Logger target DLLs at compilation time.
4. Target DLLs: Use either WTTLog.dll or WTTLogCM.dll. The difference
is that WTTLogCM.dll is the COM version of the WTT Logger and has more
dependencies on the other DLLs in the system. Microsoft recommends
linking to WTTLog.dll for minimal dependency.

To code for WTT Logger using scripting


Scripting users perform the following tasks to use WTT Logger.
1. Register the COM version of the WTT Logger DLL, which is WTTLogCM.dll.
2. Use WTTLogger as an ActiveX object in the scripts, as sketched below.
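The following VBScript fragment is a minimal sketch of step 2. The ProgID passed
to CreateObject is an assumption for illustration; consult the WTT Logger SDK for
the ProgID actually registered by WTTLogCM.dll.

Dim Logger
' The ProgID below is an assumption; see the WTT Logger SDK for the real value.
Set Logger = CreateObject("WTTLog.WTTLogger")
' The object exposes the IWTTLog methods described earlier, such as
' CreateLogDevice, StartTest, Trace, EndTest, and CloseLogDevice.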

WTT Logger Integration with WTT Jobs


If a user application is run under the WTT test framework, the execution agent (EA)
can determine the overall result of the test run if WTT Logger is used. WTT Jobs
collects the PFRollup information from the file <testguid>.xml, which is generated
automatically by WTT Logger when the test run ends. This XML file captures
not only the PFRollup results, but also the test environment.
<testguid>.xml has the following format:
<TaskResult Total="2" Pass="1" Failed="1" Blocked="0"
Warned="0" Skipped="0" StartTime="10/26/2002 9:25:16"
EndTime="10/26/2002 9:25:17" />
Note: In the user application, you may call WTT_LVL_ROLLUP as many times as
you want. However, it will not cause WTT Jobs to perform rollup work.

Appendix E: WTTCMD Command Tool
WTTCMD is a command-line tool for test case developers that acts as a wrapper
for some WTT functions associated with a task. This tool is not intended to be
used by other users.
WTTCMD provides simple functions that test case developers can insert in a
process within a task that is run as part of a test case under Windows Test
Technologies (WTT). They include execution variables, task cancellation and other
commands. Generally these commands are aimed at a single computer for a
single test.
Note: Because WTTCMD is not installed on WTT Controllers, the commands
discussed in this appendix may only be used on client computers.

WTTCMD Code sample


Tasks can execute the WTTCMD process from inside their executable
code. For example, a task written in C/C++ that wants to delete a logical user on
the client machine might do something like the following (system() is the standard
C runtime call for launching a command):
if (THINGS_GO_WRONG)
{
    system("WTTCmd.EXE /deletelogicaluser /localName:Local");
}

WTTCMD Commands
The WTTCMD Command-Line tool provides commands for the following functions:

Evaluate a key/value pair using the system defined .ini files


Use /sysevalkey to find the value of a key on a client computer. This may be
useful for analyzing and troubleshooting a test job on a client computer.
The command syntax for this feature is:
WTTCmd.exe /sysevalkey /key:<key> /default:<default value>
This prints out the value of the evaluated key to the standard output in the
following format:
WTTCmdSysevalkey:EvaluatedKey = <evaluated value>
The task would then have to parse the output of the above command and get
the evaluated value.
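As a sketch of that parsing step, a task might capture the command's standard
output through a pipe and extract the value. The helper below is illustrative
only; the function name, buffer size, and error handling are assumptions, not
part of WTT:

#include <cstdio>
#include <cstring>
#include <string>

// Illustrative helper (not part of WTT): runs WTTCmd.exe /sysevalkey and
// extracts the evaluated value from its standard output, which has the form:
//   WTTCmdSysevalkey:EvaluatedKey = <evaluated value>
std::string EvalKey(const std::string& key, const std::string& defaultValue)
{
    std::string command = "WTTCmd.exe /sysevalkey /key:" + key +
                          " /default:" + defaultValue;
    std::string value = defaultValue;

    FILE* pipe = _popen(command.c_str(), "r");  // Windows CRT pipe
    if (pipe == NULL)
        return value;

    char line[1024];
    while (fgets(line, sizeof(line), pipe) != NULL)
    {
        const char* marker = strstr(line, "EvaluatedKey = ");
        if (marker != NULL)
        {
            value = marker + strlen("EvaluatedKey = ");
            // Strip any trailing newline characters.
            std::string::size_type end = value.find_last_not_of("\r\n");
            value.erase(end == std::string::npos ? 0 : end + 1);
            break;
        }
    }
    _pclose(pipe);
    return value;
}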

Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.

Evaluate keys in a string using the system defined .ini files


Use /sysexpandstr to find the values of several keys in a string on a client
computer. This may be useful for analyzing and troubleshooting a test job on a
client computer.
The command syntax for this feature is:
WTTCmd.exe /sysexpandstr /string:<string> /default:<default>
Where:
<string> is a string that needs to be expanded. The string can contain
multiple keys that need to be recursively evaluated.
For example: %debugger% %Options% -test:netsniff
<default> is the default value to be returned if any of the keys are not
defined.
If any of the keys (enclosed between % characters) is not resolved, it returns
the default expanded string. This prints out the expanded string to the
standard output in the following format:
WTTCmdSysExpandStr:ExpandedString = <evaluated value>
The task would then have to parse the output of the above command and get
the evaluated value.
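For example, using the string shown above (the quoting and the /default value
here are illustrative):
WTTCmd.exe /sysexpandstr /string:"%debugger% %Options% -test:netsniff" /default:none
The output can be parsed in the same way as the /sysevalkey output shown
earlier.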
Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.

Cancel the current WTT job


Use /canceljob to kill a job when a certain event occurs. For example, you
could have a special error logged to the controller database when this task is
run.
The command syntax for this feature is:
WTTCmd.exe /canceljob /wait:<time to wait> [/computer:<Name of
the computer where the run has to be cancelled>]
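For example, to cancel the current job after a wait (the wait value here is
illustrative):
WTTCmd.exe /canceljob /wait:60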
Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.

Inform the WTT service on a client computer to have EA reboot the system
Use /eareboot to tell the WTT service Execution Agent to reboot the computer
when executing a task.
The command syntax for this feature is:
WTTCmd.exe /eareboot [/restart] [/timeout:<timeout in seconds
before which to restart>]

Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.
Additionally, /taskwillreboot is another command-line switch which can be
used with WTTCmd.exe /eareboot. By supplying the /taskwillreboot switch,
the WTT service Execution Agent is told that the task which initiated the
WTTCmd.exe command will itself initiate a shutdown. If this switch is not
specified, EA will initiate the reboot process.
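For example, to have EA restart the computer after a 30-second timeout (the
timeout value here is illustrative):
WTTCmd.exe /eareboot /restart /timeout:30
If the task will initiate the shutdown itself:
WTTCmd.exe /eareboot /taskwillreboot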

Local Symbol Users


Special Note regarding WTTCMD access to LSU functionality: Through
WTT 2.0 Beta 2, WTTCMD.exe was the only way to manage an LSU. With WTT 2.0
RTM, either WTTCMD.exe or the UI may be used to manage an LSU. WTTCMD.exe,
however, must be used to query for an LSU on a local computer.
For more information on the LSU UI, see Appendix K: Managing LLU and LSU
Functionality.

Add a local symbol user on the client computer


Use the WTTCmd command /addsymboluser from a command prompt to add
a local symbol user to the client computer for Autotriage or Setup.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /addsymboluser /user:<username>
/domain:<domain> /password:<password>
Where:
<username> is the user name to be added.
<domain> is the domain name to be added.
<password> is the password for the user name to be added.

For example:
WTTCmd.exe /addsymboluser /user:abc /domain:test
/password:abc123
This will configure the local symbol user as the default symbol user.

Delete a local Symbol user


Use the WTTCmd command /deletesymboluser from a command prompt to
delete the default local symbol user from the client computer.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.EXE /deletesymboluser
This will delete only the default symbol user (which is represented as *)
created by the wttcmd /addsymboluser command line above.
Note: All Local Symbol Users configured in a computer may be deleted
with the command:
WTTCmd.EXE /cleansymboluser

Query local symbol user on the client computer


Use the WTTCmd command /querysymboluser from a command prompt to
query all local symbol users on the client computer for Autotriage or Setup.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /querysymboluser

This will display the list of symbol users configured on the machine.

Local Logical Users


Special Note regarding WTTCMD access to LLU functionality: Through WTT
2.0 Beta 2, WTTCMD.exe was the only way to manage an LLU. With WTT 2.0 RTM,
WTTCMD.exe can still be used to manage an LLU but it is recommended that
users use the Manage LLU UI to manage all LLU functions. WTTCMD.exe should
only be used locally to query for an LLU on a local computer. For more information
on the Manage LLU UI, see Appendix K: Managing LLU and LSU Functionality.

Add a local logical user on the client computer


A local logical user is added using WTTCMD from a command line after the
WTT Client is installed.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:
WTTCmd.EXE /addlogicaluser /localName:<localName>
/user:<username> /domain:<domain>
/password:<password>
Where:
<localName> is the local name referring to the user credential to be
added.
<username> is the user name to be added.
<domain> is the domain name to be added.
<password> is the password for the user name to be added.
For example:
WTTCmd.EXE /addlogicaluser /localName:Local /user:abc
/domain:test /password:abc123
4. Grant the LLU administrator rights to the client computer.

Delete a local logical user


Use the WTTCmd command /deletelogicaluser from a command prompt to
delete a local logical user from the client computer.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.EXE /deletelogicaluser /localName:<localName>
Where:
<localName> is the local name referring to the user credential to be
deleted.
For example:
WTTCmd.EXE /deletelogicaluser /localName:Local

Note: All Logical Users configured in the computer may be deleted using
the command:
WTTCmd.EXE /cleanlogicaluser

Query local logical user on the client computer


A Local logical user can be queried on the client computer using WTTCMD
from a command line after the WTT Client is installed.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /querylogicaluser

This will display the list of logical users configured on the machine.

Appendix F: WTTOMCMD Command Tool
WTTOMCMD is a command-line utility that provides test engineers with an
alternate data update mechanism to the UI. It serves to facilitate the porting of
data across Windows Test Technologies (WTT) controllers or to provide a means
to tie existing job creation/execution automations with the WTT framework.
Actions that can be performed with this tool include the import and export of test
cases and results, adding or updating results, updating assets, and scheduling
test cases.
The command-line interface of this tool provides test case developers with a
natural means to add an extra layer of automation to repetitive or high-volume
WTT user actions. WTTOMCmd is built with WTT and is available on the release
share among the x86 binaries. To use it, download it to the folder from which
WTT Studio is run, as it depends on other Object Model binaries. The
configuration file WTTOMCmd.exe.config must also be copied for the utility to
function correctly. Note that unlike the WTTCmd utility, this is not a client-side
tool; it sits in a single location only, from which it interacts with the WTT
backend.

The following subjects are included in this appendix:


Terminology
WTTOMCmd Syntax
Working with WTTOMCmd
WTTOMCmd configuration file
WTTOMCmd Best Practices

Terminology
MachineConfig
A description of the characteristics of the computer that a given result came
from. Every result record must have an associated MachineConfig.
Dimension
A property of the computer in the MachineConfig. Some commonly used
dimensions are pre-populated by WTT, such as WTT\OS for operating system.
However, user-defined dimensions may also be created, as with a dimension
called BuildFarmNumber to record the number of the build farm.
Value
The value of a particular dimension, within the finite set of values specified for the
dimension at the time it was created.

Saved schedule

A schedule that has been saved as a .wtq file. This schedule will consist of the
job(s) to be scheduled, any mixes or constraints, parameter values and schedule
operations.
Import/Export XML files

These files are the XML form of exported objects. When an object is exported,
other objects that it tightly depends upon are usually exported as well. Similarly,
when an object is imported, its dependent objects are imported. In the XML files,
each object type gets its own folder, and within these folders, every object forms
its own XML file.

WTTOMCMD Syntax
The standard command-line syntax for WTTOMCmd is of the following form:
wttomcmd.exe [/databaseparams] [/commanddirective] [/properties].

Database Parameters
Database parameters uniquely identify the database that the utility should
perform the desired operations on. They comprise the following three
mandatory arguments:
Identity Server Name
The name of the machine that functions as the Identity Server in the WTT
Enterprise. It should be provided in the following format:
/IdentityServer <identityservername>
Identity Database Name
The name of the database on the Identity Server. This is typically the name of
the database as seen in SQL Enterprise Manager when connected to the
Identity Server.
/IdentityDatabase <databasename>
Logical Datastore Name
The logical name given to the database on the Identity Server. This is usually
the name seen in the datastore drop-down list in WTT Studio.
/LogicalDatastore <logicaldbname>

Command Directive
The command directive specifies what action is to be performed. The following
commands are supported:
/importjob
Inserts or imports records into the database. Jobs and results can be imported
with this directive.
/exportjob
Used to export a job from the database to XML files. The utility uses the core
Import/Export functionality and the resulting XML files are configured as if
they had been exported from the UI.
/importresult
Used to import results from XML files into the database. The utility uses the
core Import/Export functionality and expects the XML files to be exactly as if
they were created by a standard export operation.
/exportresult
Used to export a result from the database to XML files. The utility uses the
core Import/Export functionality and the resulting XML files are configured as
if they had been exported from the UI.
/updateresult
Used to modify an existing result record or add a new one. If the specified
result record does not exist in the database, a new one is created with the
supplied information. Note: If the result record is specified using a ResultID
that does not exist, a new result will be created but it will not have the same
ID. This is because the ID is auto-generated by the database. As a result, the
supplied ResultID will effectively be dropped.
/createschedule
Used to schedule a previously saved XML (.wtq) file. The input XML should
have a schema similar to that of a schedule saved from the UI.
Note: This ability to schedule jobs on the command-line was previously
provided by the command-line utility WTTCmdSch, which has been retired and
will no longer be supported.
/updateasset
This is used to set either the status of an asset, or to move it to another asset
(machine) pool, or both.

Properties
Properties are the various arguments needed by the particular command to
successfully complete the database operation. They are provided in the following
format:
property:value <property:value ...>
For every supported command directive, at least one property is required.
However, depending on the operation, a certain minimum set of properties may
be mandatory. Specific properties required are listed under the supported
WTTOMCmd operations in Working with WTTOMCmd. In addition, the
configuration properties provided in the WTTOMCmd.exe.config file are
described more fully in WTTOMCmd configuration file.
Note: Unlike the command directive, there is no leading forward slash before the
property.

Path: The path to the Import/Export XML files. Because these files tend to
be in a collection of folders, the path supplied should point to the parent
folder of this collection.
JobID: The specific ID of the target job. Note that the combination
Path+job name cannot be substituted because it does not uniquely identify
a job.
JobGUID: The GUID of the target job.
ResultID: The ID of the result that should be exported or updated. If
updating a result, a new result will be created if no result is found in the
database with the specified ID. However, because the ID field is
auto-generated by the database, the new record's ID will not be the one
that was supplied.
ResultGUID: The GUID of the result that should be updated. If no result
is found in the database with the specified GUID, a new one will be
created. As in the case of the ResultID, because the GUID field is
auto-generated by the database, the new record's GUID will not be the
one that was supplied.
Pass: The Passed count for this result. If not provided, this property will
default to 0.
Fail: The Failed count for this result. If not provided, this property will
default to 0.
NotRun: The Not Run count for this result. If not provided, this property
will default to 0.
NotApplicable: The Not Applicable count for this result. If not provided,
this property will default to 0.
ResultStatus: The current status of the result. This property can have a
value of Cancelled, Completed, InProgress, Investigate or Resolved.
StartTime: The start time of the test run for this result. If not provided,
this property defaults to the current time. The time may be provided in
any format that can be converted by the DateTime.Parse method.
EndTime: The ending time of the test run for this result. If not provided,
this property defaults to the current time. The time may be provided in
any format that can be converted by the DateTime.Parse method.

AssignedTo: The user alias of the person this test case is assigned to. If
this property is not provided, the current logged on user will be used as a
default.
LogLocation: The location of the log files. If not provided, this property
will remain blank.
MachineID: The ID of the computer upon which the test case was run.
The MachineConfig of this computer will be attached to the result. The
specified computer must exist in the database for this property to be
used.
MachineName: The name of the machine that the test case was run on.
The MachineConfig of this machine will be attached to the result. The
specified machine must exist in the database.
Dimension: The name of an existing dimension. This is required to create
a new MachineConfig that describes the machine on which the test case
was run.
Value: The value for the dimension described above.
AssetID: The ID of the target computer record.
AssetName: The name of the target computer.
AssetStatus: The current status of the target computer. This property can
have a value of Ready, Running, Manual or Debug.
AssetPoolID: The ID of the asset (machine) pool to which the computer
should be moved.
AssetPoolPath: The name of the asset (machine) pool along with its
path, to which the computer should be moved.
ReportType: The items to include in the job import report. These can be
either Full or FailureOnly.
AttributeOptions: Options for Export/Import when the job being
imported has attributes that do not exist in the target database.
GlobalMixOptions: Options for Export/Import when the job being
imported has mixes that cause conflicts.
GlobalParamOptions: Options for Export/Import when the job being
imported has parameter conflicts.
JobConflictOptions: Options for Export/Import when the job being
imported already exists in the database.
JobOverwriteOptions: Options to determine under what conditions a job
should be overwritten if it already exists.
LibraryJobConflictOptions: Options for Export/Import when the library
job being imported already exists in the database.
LibraryJobOverwriteOptions: Options to determine under what
conditions a library job should be overwritten if it already exists.

RemapHierarchy: Indicates whether the feature hierarchy of the jobs
being imported should be recreated.
AppendFeature: The feature path into which the features/jobs should be
imported.
RemapFeature: The feature path into which the features/jobs should be
imported in case there is an error importing into AppendFeature.
ImportCategory: The category to which the category hierarchy being
imported should be appended or mapped.
ImportHierarchy: Indicates whether the category hierarchy should be
imported.
ResultsOnly: Indicates whether only result XMLs should be scanned and
imported. These XMLs should point to jobs already existing in the
database.
ResultConflictOptions: Options for Export/Import when the result being
imported already exists in the database.

Working with WTTOMCmd


WTTOMCmd provides a number of tasks that can be operated from the
command line. The properties supported by each command directive are
listed below, followed by syntax and additional directions.
Note: Command Directive properties are shown in the format: <property>,
with the property information discussed in Properties above.

To import jobs
Supported properties:
Path
AttributeOptions
GlobalMixOptions
GlobalParamOptions
JobConflictOptions
JobOverwriteOptions
LibraryJobConflictOptions
LibraryJobOverwriteOptions
RemapHierarchy
AppendFeature
RemapFeature

ImportCategory
ImportHierarchy
ReportType
All properties except Path may be provided in the configuration file, and are
hence optional on the command line. In case of a conflict, the value passed
on the command line overrides the value found in the configuration file.

The command syntax for this feature is:


wttomcmd [database params] /ImportJob path:<path>
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ImportJob Path:C:\API
test cases\Print

To export a job
Supported properties:
Path
JobID
JobGUID
Note: Either the JobID or JobGUID may be provided. If both are supplied,
JobGUID is used.

The command syntax for this feature is:


wttomcmd [database params] /ExportJob JobID:<JobID>
Path:<path>

For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ExportJob JobID:22
Path:C:\API test cases\Print

To import results
Supported properties:
Path
AttributeOptions
GlobalMixOptions
GlobalParamOptions
JobConflictOptions

JobOverwriteOptions
LibraryJobConflictOptions
LibraryJobOverwriteOptions
RemapHierarchy
AppendFeature
RemapFeature
ImportCategory
ImportHierarchy
ReportType
ResultsOnly
ResultConflictOptions
Note: All properties except Path may be provided in the configuration file, and
are hence optional on the command-line. In case of a conflict, the value
passed on the command-line overrides the value found in the configuration
file.

The command syntax for this feature is:


wttomcmd [database params] /ImportResult Path:<path>

For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ImportResult Path:C:\API
test cases\Print

To export a result
Supported properties:
Path
ResultID
ResultGUID
Note: Either the ResultID or ResultGUID may be provided. If both are
supplied, ResultGUID is used.

The command syntax for this feature is:


wttomcmd [database params] /ExportResult ResultID:<ResultID>
Path:<path>

For example:

wttomcmd /IdentityServer MyServer /IdentityDatabase


DB1205 /LogicalDatastore StressDB /ExportResult ResultID:436
Path:C:\API test cases\Print

To add or update a result


Supported properties:
ResultID
ResultGUID
JobID
JobGUID
Pass
Fail
NotRun
NotApplicable
ResultStatus
StartTime
EndTime
AssignedTo
LogLocation
MachineID
MachineName
Dimension
Value
Note: Not all of these properties are required.

Expected command-line variations:

The result record to be updated may be identified by providing either
the ResultID or ResultGUID. If the specified result does not exist in the
database, a new one will be created; in this case, the ID or GUID will not
be the one requested, as it is automatically generated by the database.

The job to which this result belongs is identified by providing either the
JobID or JobGUID. Every result must be associated with a job in the
database. If the specified job does not exist, the command will fail.

Provide the ResultStatus as either Cancelled, Completed, InProgress,
Investigate or Resolved.

Optionally, users may provide the Pass, Fail, NotRun and NotApplicable
counts. A default value of 0 will be entered for any of these counts that
are not provided while creating a new record. If an existing record is
being updated, any original values will not be affected.

Users may provide the StartTime and EndTime. If either of these is not
provided, the current time will be used as the default while creating a
new result. When updating an existing result, the old values will be
unchanged.

Users may provide the AssignedTo property. If not present, the current
logged-on user will be used as the default when adding a new result.

Users may provide the LogLocation. If not supplied, this property will
remain blank when creating a new result.

A MachineConfig is required for this result. This may be provided in
several ways. The most efficient way is to identify the computer that the
test case for this result ran on; WTTOMCmd will then pull up the
MachineConfig of that computer and attach it to the result. The computer
may be identified by either the MachineID or MachineName. A second
option is to provide a set of (at least one) dimension-value pairs. The
dimension(s) specified should already exist in the database, and the
value(s) given must be valid for the particular dimension. For every
dimension string provided, WTTOMCmd will query the database for a
computer that matches the query, and a new MachineConfig property is
created. Because it must match its query with an appropriate record
from the database, this approach will be slower than providing the
MachineID or MachineName.

The command syntax for this feature is:


wttomcmd [database params] /UpdateResult
[ResultID:<ResultID> | ResultGUID:<ResultGUID>]

[JobID:<JobID> | JobGUID:<JobGUID>]
ResultStatus:<ResultStatus> Pass:<Pass> Fail:<Fail>
NotRun:<NotRun> NotApplicable:<NotApplicable>
StartTime:<StartTime> EndTime:<EndTime>
AssignedTo:<AssignedTo> LogLocation:<LogLocation>
[MachineID:<MachineID> | MachineName:<MachineName> |
Dimension:<Dimension> Value:<Value>]

Examples of this command include:


Example 1:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult
ResultID:585 JobID:44 ResultStatus:Completed Pass:4
Fail:1 NotRun:2 NotApplicable:0 MachineName:TestDell9

Example 2:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult
ResultGUID:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4
JobGUID:936DA01F-9ABD-4d9d-80C7-02AF85C822A8
ResultStatus:InProgress MachineID:12

Example 3:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult ResultID:72
JobID:4 ResultStatus:Completed StartTime:1/9/2004 5:35:12
AM EndTime:1/9/2004 2:04:39 PM MachineName:Stress10

Example 4:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult
ResultID:188 JobID:51 ResultStatus:Investigate
AssignedTo:johndoe Dimension:WTT\OS Value:Longhorn
Dimension:WTT\Processor Value:X86 Dimension:WTT\Build
Value:chk

To schedule jobs
Supported properties:
Path

The command syntax for this feature is:


wttomcmd [database params] /CreateSchedule Path:<path>

For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /CreateSchedule
Path:C:\Schedules\Overnight Run.wtq

To update computer status


Supported properties:
AssetID
AssetName
AssetStatus
Note: Either AssetID or AssetName must be supplied, but it is not necessary
to supply both, as the second will be ignored if the first property is valid.

The command syntax for this feature is:


wttomcmd [database params] /UpdateAsset [AssetID:<AssetID> |
AssetName:<AssetName>] AssetStatus:<AssetStatus>

For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset AssetID:36
AssetStatus:Running

To move a computer to a different pool


Supported properties:
AssetID
AssetName
AssetPoolID
AssetPoolPath
Note: Either AssetID or AssetName must be supplied, but it is not necessary
to supply both, as the second will be ignored if the first property is valid.

The command syntax for this feature is:


wttomcmd [database params] /UpdateAsset [AssetID:<AssetID> |
AssetName:<AssetName>] [AssetPoolID:<AssetPoolID> |
AssetPoolPath:<AssetPoolPath>]

For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset
AssetName:TeamTest4 AssetPoolID:8
or:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset
AssetName:TeamTest4 AssetPoolPath:$\TeamPool\MyPool\PoolC

WTTOMCmd configuration file


The WTTOMCmd configuration file (WTTOMCmd.exe.config) provides an alternate
way to supply selected command-line parameters. Those parameters supported in
the configuration file are described below.
Note: Not all command-line parameters can be specified in the configuration file
instead of on the command line.

JobImportOptions
These parameters dictate the import behavior when a conflict occurs with a job
or a dependent being imported into the database.
AttributeOptions
Specifies what Export/Import should do when the job being imported has
attributes that do not exist in the target database. Supported values are:
Add: Add any missing attributes to the target database while importing.
UseExisting: Ignore all attributes that are missing in the target database.
Include only those present.
Drop: Drop all attribute mappings while importing.
GlobalMixOptions
Specifies what should be done if the job being imported has mixes that cause
conflicts. Supported values are:
UseExisting: Ignore mixes that are not already present in the database
and use only mix versions from the database.
Overwrite: If a mix being imported already exists, overwrite it.
GlobalParamOptions
Specifies what should be done if there is a conflict with the parameters in the job
being imported. Supported values are:

UseExisting: Ignore the parameters that do not already exist in the
target database and import only the ones that do.
Overwrite: Overwrite any parameter that already exists.
JobConflictOptions
Specifies what should be done when a job being imported already exists in the
database. Supported values are:
Copy: Create a copy of the job with a new GUID.
Overwrite: Overwrite the existing job.
Skip: Do not import the conflicting job. If this job is being imported as a
dependent of a result, the result will not be imported.
JobOverwriteOptions
Options to determine whether a job should be overwritten. Supported values are:
None: Always overwrite.
NameMatch: Overwrite if the job name matches.
OwnerMatch: Overwrite if the job owner matches.
FeatureMatch: Overwrite if the feature hierarchy matches.
SkipIfNoMatch: Skip the job if none of the overwrite options match.
LibraryJobConflictOptions
Dictates what should be done when a library job being imported already exists in
the database. Supported values are:
Copy: Create a copy; that is, import the library job with a new GUID.
Overwrite: Overwrite the existing library job.
Skip: Do not import the conflicting library job.
LibraryJobOverwriteOptions
These parameters indicate if a library job should be overwritten. Supported values
are:
None: Always overwrite.
NameMatch: Overwrite if the library job name matches.
OwnerMatch: Overwrite if the library job owner matches.
FeatureMatch: Overwrite if the feature hierarchy matches.
SkipIfNoMatch: Skip this library job if none of the overwrite options
match.

FeatureImportOptions
These parameter options indicate how Export/Import should match the feature
hierarchy being imported with what exists in the target database.
RemapHierarchy

Indicates if the feature hierarchy of the jobs being imported should be recreated.
Supported values are:
True: Import the feature hierarchy and append it to the feature path in
AppendFeature.
False: Ignore the exported feature hierarchy and import all jobs into the
feature path in AppendFeature.
AppendFeature
Indicates the feature path into which the features (or jobs) should be imported.
This option should be passed with a trailing backslash. It can be root ($\).
RemapFeature
Indicates the feature path which should be used instead of the one mentioned in
AppendFeature if there is an error. It can be root ($\) or blank. A likely scenario
where this will be used is when the user does not have permissions to import into
the feature path provided in AppendFeature.

CategoryImportOptions
These parameter options indicate how Export/Import should handle clashes
between the category hierarchy being imported and what exists in the target
database.
ImportCategory
Indicates the category to which the hierarchy should be appended (or remapped,
depending on the value of ImportHierarchy).
ImportHierarchy
Indicates if the category hierarchy should be imported. Supported values are:
True: Import the category hierarchy of the exported jobs and append it
to the category indicated in ImportCategory.
False: Do not import the category hierarchy and remap all jobs to the
category indicated in ImportCategory.

JobImportReportOptions
A success/failure report is automatically generated by the job import module and
may be customized using parameters in the configuration files.
Note: While results are being imported, jobs are imported as well, as dependent
objects. Currently, however, report functionality for results is not available, and
the generated report will describe the job import part only.
ReportType
Indicates the type of report to be generated. Supported values are:

FailureOnly Report includes only failed items.


Full Report includes all items both failed and passed.

ResultImportOptions
These parameters dictate the handling of conflicts or mismatches while importing
results.
ResultsOnly
Indicates whether the import XML files contain only the result related files, and
not the job related files, and therefore that every result points to a job already
existing in the database. Supported values are:
True: Import only the result XML files; they point to jobs that already exist
in the database. If a job is not found, the result object will not be imported.
False: The XML files for the jobs are also being provided, and they should
be imported along with the results.
ResultConflictOptions
Indicates what should be done if the result being imported already exists in the
database. Supported values are:
Copy: Import the result as a copy, using a new GUID.
Overwrite: Overwrite the existing result.
Skip: Do not import the current result and skip to the next result.

WTTOMCmd Best Practices


This utility contains references to the Object Model DLLs. Because of this,
it should be run from a location that has the WTT Studio UI installed.
However, it should not be run on client (test) computers.
When a new result is added by using /UpdateResult, the ID of the new
result record will be returned as the environment exit code by the utility
(for example, as %ERRORLEVEL% in a batch file).
The following points should be considered while constructing command
lines for WTTOMCmd:
o Command-line usage is not case sensitive, except for the values of
the Dimension and Value properties.
o All three database parameters must appear right after the
wttomcmd.exe invocation. They may, however, appear in any order.
o Properties come after the command directive and may appear in
any order, with one exception: when specifying the MachineConfig
for a result using dimension and value pairs, each Dimension used
must be immediately followed by its corresponding Value.
Dimension-value pairs may, however, be interspersed among other
properties.
o When the MachineConfig is specified by means of dimension-value
pairs, for every dimension provided there will be a query against the
database to get the DimensionID. It is therefore considerably more
efficient to use the MachineName or MachineID if they are
available.
o When using the Path property, do not supply a trailing backslash
('\'). The language runtime does not seem to handle it well; the
Object Model will therefore be unable to parse the path.
o When using the AssetPoolPath property to move a computer to a
different pool, do supply a trailing backslash ('\'). Otherwise the
AssetPool will not be found.
Some command-line parameters can also be supplied in the configuration
file. If a parameter is found in the file as well as on the command line,
the command-line version is used. Hence, the configuration file should be
filled with typical, commonly used values, which may be overridden on the
command line if required.

Appendix G: Unified Stress Testing

Unified Stress Testing (UST) is one of the basic plugins available for Windows Test
Technologies (WTT), and its functionality is available on the Stress menu. The
functionality available through UST includes Stress Scheduler, Test Mix
Management, and Resolver.

The following subjects are included in this appendix:


Stress Scheduler
Test Mix Management
Resolver (located in Chapter 8)

Stress Scheduler
Stress Scheduler is used to schedule stress runs on machines from machine
pools. It generates stress mixes based on machine capabilities and schedules
them on the controllers to which the machines are registered.

The following subjects are included in this section:


Terminology
Stress Scheduler Best Practices
Working with Stress Scheduler

Terminology
Stress Mix
A selection of stress tests to be executed on the selected computers.
RS
Recurrent Scheduler, a component of the stress service that is responsible for
scheduling stress recurrently, based on the options that the user has set in the
Stress Scheduler UI.
Regular stress jobs (tests)
The tests selected during mix generation.
Setup stress jobs (tests)
Jobs that are executed before regular jobs in order to do setup tasks. These
are part of the stress run unless manually unchecked.
Clean up stress jobs (tests)
Jobs that are executed after the regular jobs to do cleanup tasks.
Mandatory stress jobs (tests)
Tests that always run regardless of the generated mix, and that cannot
be unchecked even manually. The three types of tests above (regular, setup
and clean up) can be either mandatory or optional.

Stress Scheduler Best Practices


To simply schedule stress using default settings, choose the computers to
stress and then click Schedule. A mix will be generated for each computer
and scheduled on it using the default settings.
Default scheduling settings include the following: Different mix for each
computer, Run only once, Start stress now, and No end date/time set for
stress run.
To select/deselect all computers listed on the Assets tab, or all tests on
the Jobs tab, use the Select All or Deselect All buttons.
If a computer does not appear on the Asset tab, click My Assets on the
Assets menu to confirm that the computer status is set to Ready.

Working with Stress Scheduler

Choose computers to schedule stress on


1. On the Stress menu, click Schedule Stress, and then select your
controller from the DB Controller drop-down list.
2. Expand the machine pool tree and select the machine pool containing the
desired computers.
3. Select the computers upon which you wish to schedule stress.
4. To schedule stress on the selected computers using default settings, click
Schedule.

To manually generate and customize mixes for selected computers


1. Select the computers upon which to schedule stress
2. On the Jobs tab, select the stress type upon which to base the stress mix
from the Stress Type drop-down list.
3. Wait until the mix is generated and populated in the jobs list next to the
computers. Click the computer for which you want to manually generate a
mix.
4. Customize the mix by selecting or clearing the stress jobs in the job lists.

Note: The jobs listed with the icon next to them are mandatory and
cannot be unchecked.
By default, the job list shows only Regular jobs. This view can be
changed to show all jobs, setup jobs only, regular jobs only, or cleanup
jobs only.

Figure G.1 Stress Job List

5. To regenerate the last stress mix that ran on the computer, right-click the
specific computer, and then click Last Mix.
6. To regenerate a mix from the previous mixes that ran on the computer,
right-click the computer, click Previous Mix, and then click the desired mix
from the list.

To Schedule the same mix on all computers


1. Generate a stress mix manually for one of the computers.
2. On the Jobs tab, select the Same Mix for All Machines check box.
Note: The machine lists will be disabled when this option is selected.

To change scheduling options for a one-time stress run


1. Generate a stress mix for at least one computer.
2. On the Options tab, select the One Time Only option in the Scheduling
Type group box.
3. If you do not wish to debug the stress break, select the Private Run
check box.
4. In the Job Time group box, set the Start date/time for the stress run.
5. Select an option for ending the stress run.

Figure G.2 One-time Only Options (Stress Scheduler)

To change scheduling options for recurrent stress runs


1. Generate a stress mix for at least one computer.
2. On the Options tab, select either Recurrent Same Mix (for the same
mix every run) or Recurrent New Mix (for a different mix every run) in
the Scheduling Type group box.
3. If you do not wish to debug the stress break, select the Private Run
check box.
4. In the Scheduling Rule group box, select either Daily or Weekly for the
frequency of stress runs.
If a Daily run is selected, users can specify if they wish to extend the
stress run on Friday to Monday by selecting the Enable Long
Weekend Runs check box.
If a Weekly run is selected, select the days on which to run stress by
selecting the corresponding days check box.
5. In the Range of Occurrences group box, select the date on which to
start the recurrent stress schedule.
6. Select an option for ending the stress run.

Figure G.3 Recurrent Stress Run Options (Stress Scheduler)

Test Mix Management


Test Mix Management is a tool provided by UST that can be used to efficiently and
effectively manage Stress Tests, Test Groups and Stress Types. This tool offers
features including:
Efficient means for approving and controlling tests, as well as setup and
cleanup jobs for stress testing
Ease and convenience of creating groups of tests and managing their
properties.
Ease and convenience of creating a new stress type by selecting test
groups and setup/cleanup jobs, and then managing the behavior of the
new stress type.
Tracking the history of changes being made to stress objects.
Means to quickly run queries on these objects.
Means to impose security restrictions so that only users who are
designated to control a particular stress type may modify it.
Means to limit the ability to make any changes to objects to the
administrators of stress types while still offering other users read-only
information.

The following subjects are included in this section:


Terminology
Test Mix Management Best Practices
Working with Test Mix Management

Terminology
Test Group

A collection of Stress Tests that are related in terms of their behavior or the
component of the operating system that they are supposed to test.
Stress Type

A collection of test groups put together to perform the stress testing of
different operating systems or to emphasize the stressing of a particular
component of the operating system.

Stress Type Owner

A user who has been granted the permission to modify a stress type. Edit and
update operations on a stress type are limited to users who own that type.
Additionally, any user who is the owner of any stress type in the system can
also create or modify all stress groups and approve or disapprove jobs for
stress.
UST Administrators

A user who has unlimited permissions on all objects in the system, that is,
they have the ability to make any change to any object (tests, groups or
types).

Test Mix Management Best Practices


Use the Database button to frequently save the changes you've made to
the database.
The Refresh button can only be used for discarding changes since the last
time you saved your work to the database. The refresh is made from this
last save.

Working with Test Mix Management

How to Create a Stress Type


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, right-click the Types tree root, and then click New.
If this menu option is not enabled, the user is not registered as a UST
Administrator.
3. Enter a name for the new Stress Type node created.
Note: New Stress Types may only be created by administrators of the stress
database in the enterprise. All others must submit a request for a new stress
type to BGIT, who will then create the type.

Figure G.4 Test Management Window

How to Create a Stress Group


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, right-click the Group tree root, and then click New.
If this menu option is not enabled, the user is not registered as a UST
Administrator.
3. Enter a name for the new Stress Group node created.

How to Approve a Job submitted for Stress


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Jobs tree root.
3. Select the Submitted Jobs node to view a list of all jobs submitted for
stress.
4. Right-click a job whose Status is marked as Submitted or Resubmitted,
and then click Job Operations.

If this menu option is not enabled, the user is not registered as a UST
Administrator.
5. Click the selection for job approval as appropriate:
Approve as Stress Test.
Approve as Setup Job.
Approve as Cleanup Job.

Figure G.5 Approving Submitted Stress Jobs

How to reject a Job submitted for Stress


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Jobs tree root.
3. Select the Submitted Jobs node to view a list of all jobs submitted for
stress.
4. Right-click a job whose Status is not marked as Rejected, and then click
Job Operations.
If this menu option is not enabled, the user is not registered as a UST
Administrator.

5. Select Reject Job.

How to add Stress Groups to a Stress Type using Copy/Paste


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Groups tree root.
3. Select one or more groups, right-click the selection, and then click Copy.
4. Click the Types tree root and navigate to the Stress Type node to which
the group will be added.
5. Right-click the selected node, and then click Paste.

Figure G.6 Copying Stress Groups

How to add Stress Groups to a Stress Type using drag-and-drop


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Groups tree root.
3. Select one or more groups and drag the groups to the Stress Type to
which you wish to add these groups.
4. Drop the groups on the selected Stress Type.

How to add Stress Tests to a Stress Group using Copy/Paste


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Group tree root.
3. Select one or more tests, right-click the selection, and then click Copy.
4. Expand the Group tree root and navigate to the Stress Group to which
the tests will be added.
5. Right-click the Stress Group, and then click Paste.

How to add Stress Tests to a Stress Group using drag-and-drop


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Group tree root.
3. Select one or more tests and drag the tests to the Stress Group to which
you wish to add these tests.
4. Drop the tests on the selected Stress Group.

How to add Setup/Cleanup Jobs to a Stress Type using Copy/Paste


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Jobs tree root.
3. Expand Approved Jobs and then expand the Setup or Cleanup Jobs
node.
4. Select one or more jobs, right-click the selection, and then click Copy.
5. Expand the Type tree root and navigate to the Stress Type to which the
jobs will be added.
6. Right-click the Stress Type, and then click Paste.

How to add Setup/Cleanup Jobs to a Stress Type using drag-and-drop


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Jobs tree root.
3. Expand Approved Jobs, and then expand the Setup or Cleanup Jobs
node.
4. Select one or more tests and drag the tests to the Stress Type to which
you wish to add these tests.
5. Drop the tests on the selected Stress Type.

How to add an owner to a Stress Type


1. On the Stress menu, click Test Management, and then select your
controller from the DB Controller drop-down list.
2. In the left pane, expand the Types tree root, right-click the Owners
node, and then click New.
3. Enter the user name for the Owner to add for the new node.

How to Edit Properties of a Stress Object


1. Right-click the object name (Stress Test, Group or Type name), and then
click Properties.
2. Edit the properties as needed.
3. Click OK.

How to View History of Changes to a Stress Object


Right-click the object name (Stress Test, Group or Type name), and then
click History to view the history of changes for the object.

Appendix H: Machine Configuration Query Dimensions

Machine configuration query dimensions are part of a suite of UI, services and
libraries that take information from an individual computer and store it in the WTT
database in a manner customized to each team's needs. The suite comprises
additions to the Admin Dimensions UI, the Asset Pool UI, and the WTT
Controller service. Collectively, this suite is called Machine Configuration Update,
or MCU.

Overview of the Machine Configuration Query Process

The following is an overview of how data is collected on a machine, sent to the
WTT controller and stored in the database:
1. A WTT Client, running on a test computer, launches Sysparse, which
gathers configuration information and sends that information in the form
of an XML file to the computer's WTT Router. Sysparse gathers a set of
core configuration information as well as all name-value pairs stored in a
specific key in the registry.
2. WTT Router then forwards this information to the WTT Controller (which
can be the same machine), which handles the incoming XML file.
Specifically, the wttssvc.exe service parses the data and either updates the
existing computer record in the WTT Database, or creates a new computer
record. It then calls the WTTMachineConfigUpdate.dll UpdateConfigValues
method with the Machine ID returned from the update or create.
3. WTTMachineConfigUpdate.dll contains the following core MCU algorithm
implementation (a sketch of this loop appears after the list):
Create a new MachineConfig (preserving existing non-MCU data).
Call out to the MCU Plugin manager, which will call all installed plugins.
Plugins modify and return the updated MachineConfig to MCU.
For each MachineConfigQuery dimension in the MCU Policy for this
computer's Asset Pool, run the dimension's query against the
computer's Sysparse XML data, and do the following for each query
result value:
o Add the query result to the Dimension's DimensionValueList, if it does
not exist.
o Add the query result to the MachineConfig's MachineConfigValueList, if
it does not exist.
Commit the updated Machine and MachineConfig information to the WTT
Database.
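The following is a minimal C++ sketch of that per-dimension update loop. The
types and the RunQuery helper are illustrative stand-ins, not the actual WTT
Object Model (the real implementation lives in WTTMachineConfigUpdate.dll):

#include <map>
#include <set>
#include <string>
#include <vector>

// Illustrative stand-ins for WTT Object Model types (assumptions).
struct Dimension
{
    std::string name;                 // e.g. WTT\OS
    std::string query;                // the MachineConfigQuery XPath
    std::set<std::string> valueList;  // DimensionValueList
};

struct MachineConfig
{
    // MachineConfigValueList, keyed by dimension name.
    std::map<std::string, std::set<std::string> > values;
};

// Assumed helper: evaluates an XPath query against the computer's
// Sysparse XML and returns all matching values. Stubbed out here.
std::vector<std::string> RunQuery(const std::string& sysparseXml,
                                  const std::string& xpath)
{
    return std::vector<std::string>();
}

// Core MCU loop as described above: for each dimension in the policy,
// run its query and record each result value in both lists.
void UpdateConfigValues(const std::string& sysparseXml,
                        std::vector<Dimension>& policyDimensions,
                        MachineConfig& config)
{
    for (size_t i = 0; i < policyDimensions.size(); ++i)
    {
        Dimension& dim = policyDimensions[i];
        std::vector<std::string> results = RunQuery(sysparseXml, dim.query);
        for (size_t j = 0; j < results.size(); ++j)
        {
            dim.valueList.insert(results[j]);            // add if missing
            config.values[dim.name].insert(results[j]);  // add if missing
        }
    }
    // The caller then commits the updated Machine and MachineConfig
    // records to the WTT database.
}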

Common Usage Scenarios for MCU


The following are several common usage scenarios for MCU.

Scenario 1: Gathering data for job scheduling


Test teams need to be able to create and use dimensions to constrain Jobs. For
example, the networking test team only wants to run IPv6 tests on a machine
that is configured with IPv6. In this case, the dimension is IPv6Installed and the
constraint is IPv6Installed=True. With this, the test team can create a job
constraint such that the job scheduler will only push the job out to computers
which have IPv6 installed.

Scenario 2: Gathering data for job results reporting


Other test teams, particularly hardware device test teams, not only want to run
the same set of tests over a wide variety of hardware manufacturers but also
generate reports based on each distinct hardware manufacturer. For example, a
CD-ROM Read Test is the same regardless of manufacturer. The CD-ROM test
team wants to be able to run that test, but have the actual hardware
manufacturer recorded with each test result, so that a report can later be run,
not based on the last result of the CD-ROM Read Test, but based on each distinct
hardware manufacturer the test was run against. In this case, they want to be
able to use WTT to push a single Job out to 50 computers in their pool which have
50 different CD-ROMs, and with each computer's job result, always have the
CD-ROM Manufacturer dimension value automatically stored with the job result.
Once that job result is stored with that information, the test team can then run
the reports it needs, for example: What is the pass rate for all CD-ROM test jobs
where CD-ROM Manufacturer dimension value = AcmeCDROMs?

Scenario 3: Gathering data for automated test matrix generation


Auto-generation of test matrices based on dimensions and stored dimension
values is also possible. Matrix generation tools can leverage the dimension and
subsequent dimension values that are created by MCU to create complete test
matrices. For example, we have a dimension of NetworkDeviceType which has
possible values of Wired and Wireless. We have another dimension of Processor
which has possible values of x86, amd64 and ia64. Matrix generation tools can
take the Dimension and Dimension Values and then automatically generate all
possible test cases, in this case:
Test case 1: Wired, x86
Test case 2: Wired, amd64
Test case 3: Wired, ia64
Test case 4: Wireless, x86
Test case 5: Wireless, amd64
Test case 6: Wireless, ia64
These types of matrix generation tools require the Dimension and Dimension
Values as input. MCU now makes it possible to automatically discover and store
this information so tools like this can then do their work and auto-generate all the
possible test cases.
MCU is a set of tools that makes WTT a very powerful test automation
framework. It gives teams flexibility in the types and amount of information
stored in the WTT database, which can then be used to schedule jobs, run
reports and auto-generate test matrices using tools like MDE.

Machine Configuration Update Best Practices


Know XPath and the Sysparse output format. XPath queries are case
sensitive and complex. Unfortunately, the Sysparse XML output is
inconsistent, and if users are not careful this can cause problems.
When creating custom dimensions, name them
<Component>\MyDimension for consistency and ease of reference. For
example: Networking\IPv4Address
Re-use dimensions created by other teams if they get the appropriate
results.
When working with XPath queries, make sure the path to the query will
be consistent across all operating systems. Do not query on elements
in the XML that may or may not be filled in every time by Sysparse.
For example, in one case, a query is made to get a CD-ROM's deviceid,
which is present in the devnode tree. Two problems arise with the
original query:
a. The Class property isn't filled in consistently by Sysparse, because
certain operating systems do not provide that information to
Sysparse. What might look like a good query on Windows XP
therefore does not work on Microsoft Windows 2000
Professional. Creating a query that searches for something that is
guaranteed to be present across all platforms is important.

b. Many queries assume a certain depth in the devnode tree. In this
case, the CD-ROM devnode may or may not always be at a given
depth in the devnode tree, depending on what hardware configuration
is on the machine.
The following query works on Microsoft Windows XP, but does not
work on Windows 2000 Professional:
infograb/INFOGRAB_BLOCK_DEVNODE/devnode/devnode/devnode/devnode/devnode/devnode/devnode[class='CDROM']/deviceid
This query, however, works on both, and accounts for varying
depth:
infograb/INFOGRAB_BLOCK_DEVNODE//devnode/Devnodeex[devnodeguid='{4D36E965-E325-11CE-BFC1-08002BE10318}']/../deviceid
In this case, the second query is querying on the devnodeguid,
which is guaranteed to be there, whereas the class name value, or
other properties like devicedescription, may or may not be
consistent across operating systems. Additionally, the // after
infograb/INFOGRAB_BLOCK_DEVNODE is what accounts for the
varying depth. It causes the query engine to do a recursive search,
so data won't be missed if it is at a varying depth in the tree.
Note: When creating queries you should go as deep as you can
before using the // in order to speed up the search for the results.

Additional XPath samples


An XPath query that looks for something that does not start with a
particular value and returns the value of the description attribute of the
results:
infograb/INFOGRAB_BLOCK_DEVNODE/devnode//Devnodeex[not(starts-with(infprovider, 'Microsoft'))]/../devicedescription
In this case it returns all device descriptions whose infprovider is not
'Microsoft'.
The not() and starts-with() functions are very handy, along with contains(),
which uses the same syntax as starts-with().
An XPath query that returns "True" if it finds something and "False" if it
does not. In this case, it looks to see if there is more than one processor
in the system:
boolean(infograb/INFOGRAB_BLOCK_PROCESSOR[numofproc > 1])
An XPath query that returns the number of disk drives:
count(infograb/INFOGRAB_BLOCK_DISK/disk)
An XPath query that returns whether IPv6 is installed or not:
boolean(infograb/INFOGRAB_BLOCK_NETWORK/NDIS/NetworkAdapter/Ipv6)

For additional XPath information, see: http://msdn.microsoft.com/library/en-us/xmlsdk/htm/xpath_ref_overview_0pph.asp

Troubleshooting Updates
If the Update does not seem to happen
There are several reasons why this may happen:
You need to wait longer for Sysparse to complete collecting the data on
the test computer and send it to the controller. It should take no more
than 20 minutes, depending on the speed of the test machine.
The query may not have returned any results. If the query does not work
against the test computer's specific XML file, then you won't have any
results.
A permissions problem may exist. For example, if you have a lab router
that transfers the Sysparse XML to a separate controller that is servicing
several machine pools, make sure the controller machine has full
permission to update the machine records for your machine pool. As well,
it needs to have access to the share where the XML data is stored.
Beta 2 XPath queries may not have been converted to the new RTM format
(see below).

Mapping Beta 2 XPath Queries to RTM XPath Queries

The Sysparse XML schema changed dramatically between WTT 2.0 Beta 2 and
WTT 2.0 RTM. In general, the generic object[@name='X'] and
property[@name='Y'] nodes of the Beta 2 schema became plain X and Y
elements in the RTM schema. Here are some Beta 2-format queries to help you
map your own custom queries; where the RTM equivalent appears earlier in
this section, it is shown after the Beta 2 form:
Before:
infograb/object[@name='INFOGRAB_BLOCK_DEVNODE']//object[@name='devnode']/object[@name='Devnode...08002BE10318}']/../property[@name='deviceid']
After:
infograb/INFOGRAB_BLOCK_DEVNODE//devnode/Devnodeex[devnodeguid='{4D36E965-E325-11CE-BFC1-08002BE10318}']/../deviceid
Before:
infograb/object[@name='INFOGRAB_BLOCK_NETWORK']/object[@name='NDIS']/object[@name='NetworkAdap...with(@value,'VIRTUAL'))]/../property[@name='description']
Before:
boolean(infograb/object[@name='INFOGRAB_BLOCK_NETWORK']/object[@name='NDIS']/object[@name='Net...s'])
Before:
count(infograb/object[@name='INFOGRAB_BLOCK_DISK']/object[@name='disk'])
After:
count(infograb/INFOGRAB_BLOCK_DISK/disk)

For additional assistance creating MachineConfigQuery dimensions:

For questions on the sample XPath queries provided in WTT, post questions
to mailto:wtttalk.
For questions on understanding the Sysparse XML schema and contents,
post questions to mailto:wtttalk or mailto:syspse.
For help with more complex XPath queries or XPath in general, see the
online documentation on XPath, or post questions to the XML
Discussions DL (mailto:xml), or to the XML MSDN Community forum:
http://communities.microsoft.com/newsgroups/default.asp?icp=msdn&NewsGroup=microsoft.public.xml

Appendix I: Sysparse
Sysparse is a tool installed on a client computer that inventories the computer's
hardware components and provides that information to Windows Test
Technologies (WTT) for use in determining which computers to schedule for
testing. The WTT client calls Sysparse each time that client restarts. Additionally,
Sysparse maintains its in-memory tree while the client is running. Tests running
on the client can use the Sysparse APIs to query data from Sysparse in real-time
and ask Sysparse to refresh data in real-time. As well, the Sysparse APIs can be
used to load custom gatherers which collect additional information. For full
documentation on the Sysparse API set, see the WTT Software Development Kit
(SDK).
The Sysparse tool was developed on the Infograb technology and consists of the
Infograb engine and gatherers. This engine creates and manages data in an in-memory tree, and provides interfaces for tests to programmatically retrieve
computer data in order to make run-time decisions. Gatherers collect the data
when called by the engine and use the Sysparse APIs to return the data they
collect to the engine. The data collected by Sysparse is used by WTT to enable
scheduling against specific hardware constraints and to create a persistent,
unique ID for each computer in the WTT database.
Base Sysparse
Base Sysparse consists of the Infograb engine and the standard gatherers
that are compiled in the Sysparse executable.
Custom Gatherer
The Custom Gatherer (also referred to as a COM gatherer or a plug-in
gatherer) consists of a set of COM objects called through the Sysparse
engine's APIs. These COM objects collect data in addition to the data collected
by Base Sysparse.
Registry Gatherer
As an alternative to writing custom gatherers, Sysparse has a registry data
gathering feature for custom data. Users can add new key-value pairs to the
registry location below. Sysparse automatically collects all key-value pairs
from this registry location and makes them available for scheduling.
HKEY_LOCAL_MACHINE\Software\Microsoft\WTT\Sysparse
\ExtendedData
Note: This registry key may only be populated with data of type REG_SZ.
Keys with other types of data will be ignored by the gatherer. As well,
nesting of keys under this key is not allowed. Any keys nested under this
key will be ignored by the gatherer.

All keys have a default (nameless) value. In the cases where this value is
implemented, the resulting XML output will contain the entry value node, but
not the KeyName node.

The following sample data will be used to demonstrate how the Sysparse XML will
look.
[HKLM\Software\Microsoft\WTT\Sysparse\ExtendedData]
Test-client 1's Key=6481062
Test-client 2's Key=Test2

When this data is collected by the gatherer, Sysparse will produce the following
XML:

<INFOGRAB_BLOCK_WTT>
  <RegistryInfo>
    Software\Microsoft\WTT\Sysparse\ExtendedData
    <RegistryKey>
      <KeyName>Test-client 1's Key</KeyName>
      <KeyValue>6481062</KeyValue>
    </RegistryKey>
    <RegistryKey>
      <KeyName>Test-client 2's Key</KeyName>
      <KeyValue>Test2</KeyValue>
    </RegistryKey>
  </RegistryInfo>
</INFOGRAB_BLOCK_WTT>
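
As an illustration, such a value could be added on a test client from a
command prompt with the reg.exe utility (assuming reg.exe is available on
the client; the value name matches the sample above):
reg add "HKLM\Software\Microsoft\WTT\Sysparse\ExtendedData" /v "Test-client 1's Key" /t REG_SZ /d "6481062"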

Sysparse Coverage Detection

WTT uses Sysparse to collect system information. In addition to the system data
being collected, some of it is published as the computer configuration. These can
be viewed as computer attributes in the Asset UI.
Additionally, Sysparse has the ability to recognize whether a build was Code
Coverage enabled or not. The code coverage status of a build will also appear
as an attribute in the default configuration.

Sysparse Data
The data is collected by the API CoverageIsInstrumentedBuild
(_CoverageIsInstrumentedBuild@0) in WinCover.dll. Since this API/DLL is only
available on coverage builds, it is loaded dynamically.
If the DLL/API is not available, the node will not be present in the XML output.
The data will appear in the Sysparse output in this manner:

<INFOGRAB_BLOCK_RUNTIME>
  <CoverageIsInstrumentedBuild>1</CoverageIsInstrumentedBuild>
</INFOGRAB_BLOCK_RUNTIME>
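
For illustration only, a tool could perform the same check itself along the
following lines in C. The decorated name above implies a stdcall export
taking no arguments; the undecorated export name and the int return type
are assumptions here:

#include <windows.h>
#include <stdio.h>

/* _CoverageIsInstrumentedBuild@0: stdcall, no arguments;
   int return type is an assumption. */
typedef int (__stdcall *PFN_COVERAGECHECK)(void);

int main(void)
{
    int instrumented = 0;

    /* WinCover.dll exists only on coverage-enabled builds,
       so it must be loaded dynamically rather than linked against. */
    HMODULE hDll = LoadLibraryW(L"WinCover.dll");
    if (hDll != NULL)
    {
        PFN_COVERAGECHECK pfnCheck = (PFN_COVERAGECHECK)
            GetProcAddress(hDll, "CoverageIsInstrumentedBuild");
        if (pfnCheck != NULL)
        {
            instrumented = pfnCheck();
        }
        FreeLibrary(hDll);
    }

    /* Mirrors the WTT\WindowsCoverageBuild attribute described below. */
    printf("Coverage-instrumented build: %s\n",
           instrumented ? "True" : "False");
    return 0;
}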

Computer Attribute
When the Sysparse output is moved to the server, an attribute will be added
automatically to the Computer Configuration in this form:

WTT\WindowsCoverageBuild = {True or False}

Caveats
A false positive may occur under certain circumstances. If a coverage-enabled
build was installed, then WinCover.dll will be present on the hard drive. If the
computer is upgraded to a non-coverage-enabled build, Sysparse will erroneously
publish that the build is coverage-enabled.
Workaround: Clean install before and after using a coverage-enabled build.
For more information, see additional documentation regarding WinCover.dll at this
location:
http://codecoverage/v2/defaultredir.aspx?displayNavTree=false&path=/Help/displayhelp.aspx&params=navPageURL%3D/Help/CCSavingData.htm

Appendix J: Log Viewer

The Windows Test Technologies (WTT) log viewing feature provides users with a
set of configurable options for viewing WTT log data in a format for easy review.
This functionality also allows access to this log data from either Result Explorer or
Job Monitor within WTT, or directly through Windows Explorer.
Three component views are available within the log viewing feature of WTT,
providing views of:
Test Log
Machine Configuration
Infrastructure Log

The following subjects are included in this appendix:


Terminology
Working with Log Views

Terminology
Log View

One of a group of predefined formats in which a log file can be displayed for
viewing within WTT. Log views are defined by view transforms, and thus,
additional formats may be defined by WTT users.
HW Configuration Log View

The HW Configuration Log view formats and displays the log files produced by
Sysparse. These log files contain detailed information about the configuration
of the test machines, which is collected each time the computer boots and
transferred from the test client to the Controller.
Infrastructure Log View

The Infrastructure Log view formats and displays the files produced by server
to client communications via the WTT Service, including information on
scheduled jobs.
Test Log View

The Test Log view formats and displays the selected test log file. These files
are transferred from the test clients to the Controller after the selected tests
have run.
View Transform

A set of instructions installed on the WTT file store and applied to a specific
log file. These transform files use an .xslt file format and application of these
transforms converts the raw XML data of a log file into an organized HTML
page for easy viewing.

Working with Log Views


WTT logs may be viewed from Result Explorer and Job Monitor within WTT, or
through Windows Explorer. Although this last option may be used entirely
outside the context of the WTT environment, WTT also provides an option for
using Windows Explorer from within the WTT environment.

To view logs from Result Explorer


1. On the Explorers menu, click Result Explorer.
2. From the Datastore drop-down list, select your datastore.
3. Click the desired Feature or Category node.
4. Right-click the job result you wish to view, point to Test Log,
Infrastructure Log, or HW Configuration Log, and then click the
specific computer whose log you wish to view.
Note: If the view has already been filtered to a specific machine or a
specific task on a machine, it is only necessary to click on the Test Log,
Infrastructure Log, or HW Configuration Log as appropriate.
5. If a warning regarding loading very large files appears, click OK.
Note: To view logs based on individual tasks making up a specific job, click
the Show Task List button to display the Task Execution Status pane, and
then click on the desired job. Right-click the desired task to display the log.

To view logs from Job Monitor


By default, all jobs in the selected Feature or Category node are displayed
within Job Monitor. To view the jobs performed by a specific computer, select
the desired computer from the Machines list to filter the jobs by computer.

1. On the Explorers menu, click Job Monitor.


2. From the Datastore drop-down list, select your datastore.
3. Click the desired Feature or Category node.

4. Right-click the job result you wish to view, point to Test Log,
Infrastructure Log, or HW Configuration Log, and then click the
specific computer whose log you wish to view.
Note: If the view has already been filtered to a specific machine or a
specific task on a machine, it is only necessary to click on the Test Log,
Infrastructure Log, or HW Configuration Log as appropriate.
5. If a warning regarding loading very large files appears, click OK.
Note: To view logs based on individual tasks making up a specific job, click
the Show Task List button to display the Task Execution Status pane, and
then click on the desired job. Right-click on the task to display the desired
log.

To view logs using Windows Explorer


The log view may be launched through Windows Explorer outside the WTT
environment by opening the XML log file directly. However, most users are
likely to prefer launching Windows Explorer from within WTT, as follows:
Within either Job Monitor or Result Explorer, right-click the job result
you wish to view, point to Test Log, Infrastructure Log, or HW
Configuration Log, and then click Explorer.
Note: Unlike other log views, the Explorer option is not available in the
Task Execution Status pane.

Log Views
Each log view initially displays information in the default raw XML format. A
number of predefined view transforms are provided to allow users to see the data
in different formats depending on individual needs. These transforms are
available in the View drop-down list.

Test Log View


The View drop-down list within the Test Log view provides the following
predefined views:
Default: The default view displays the raw unfiltered and unformatted XML
log file.
Complete: This selection provides a complete report of all test results
from the selected test log.

Failure: This selection formats and displays information about each failed
test from the selected test log.
Not Run Tests: This selection formats and displays information about
each blocked or skipped test.
Summary: This selection formats and displays a summary of test results
and lists each test result.
Infrastructure Log View
The View drop-down list within the Infrastructure Log view provides the following
predefined views:
Default: The default view displays the raw unfiltered and unformatted
XML log file.
CompleteView: This selection provides a complete report of all results
from the selected infrastructure log.
ErrorandWarning: This selection provides a summary of all errors and
warnings reported by the test log.
Errors: This selection provides a summary of all errors reported by the
test log.
ThreadView: This selection provides a summary of test results organized
by Thread ID.

HW Configuration Log
The View drop-down list within the HW Configuration Log view provides the
following predefined views:
Default: The default view displays the raw unfiltered and unformatted
XML log file.
Machine: This selection formats and displays the Sysparse hardware
configuration data that uniquely identifies a specific computer on which
jobs were run.
Runtime: This selection formats and displays the Sysparse runtime
information for selected computers.
WTT: This selection formats and displays the registry information for the
selected computer.

Using Custom View Transforms


If you have created a custom XSLT view transform template, it can be added to
the View drop-down list for each log view using the Browse button.

To add a custom view transform


1. Open the desired log view.
2. Click the Browse button.

3. Within Windows Explorer, navigate to the WTT Log View Transform
folder on the Controller.
Note: This may already be the default folder.
4. Copy the desired XSLT file into this folder and close Windows Explorer.
5. Close and reopen the log viewer.

Appendix K: Extending WTT (The UI Framework)
The Windows Test Technologies (WTT) User Interface (WTTStudio) is highly
extensible and configurable. It can be roughly viewed as a collection of UI
modules (or plug-ins) hosted within a UI Framework. This UI Framework provides
support for plug-in discovery and plug-in management (Add/Remove,
Enable/Disable, and so on). Additionally, common UI components such as menus
and toolbars, as well as standard UI functionality such as saving and printing, can
also be leveraged in the attached plug-ins.

Terminology
Plug-in

The module to be hosted within the UI Framework. A plug-in typically consists
of a manifest, .NET assemblies, a help file, and resource files.
Group
A plug-in can have one or many groups, each of which is a functional unit
typically consisting of menu items, toolbar buttons, and windows.
Manifest

An XML file containing a definition of a given plug-in, including assembly
names, help file name, resource file names, and UI components.
Group Instance
UI components instantiated by user interaction. A group may or may not allow
instance creation.
Static Contents

The UI components defined by the plug-in manifest within a group that will be
instantiated when WTT Studio starts, including such things as menu items
used to create a group instance.
Dynamic Contents

UI components defined by the plug-in manifest within a group that will be
instantiated only through user interaction. Dynamic contents form a group
instance.
Single Instance Group
A group that allows the creation of only one group instance. Any attempt to
create another instance will simply cause the existing instance to be
re-activated.
Multiple Instance Group
A group that allows the creation of multiple instances.


Startup Group

When WTT Studio starts, it automatically creates an instance of each group
marked as a startup group. This flag can be set from within the Plugin Manager.
Plugin Manager

The user interface within WTT Studio, used to manage plug-in settings. It is
available through the Tools menu.

Working with the UI Framework

To write a Plug-in (Overview)


1. Create a plug-in manifest based on the Plugin.xsd schema included in the
WTT Software Development Kit (SDK).
This manifest should be named <Plug-in Name>.xml.
2. If your plug-in manifest contains user-defined types, create .NET
assemblies for them. These types should be defined in the assemblies.
Types referenced in dynamic contents should implement the Extensibility
interface.
3. Create a help file and resource files as declared in the manifest.
4. Place the plug-in manifest, assemblies, help file and resource files into a
single directory.

To deploy a Plug-in
1. If they are not already so located, place the plug-in manifest, assemblies,
help file and resource files into a single directory.
2. On the Tools menu, click Plugin Manager.

Figure K1 Plugin Manager

3. Click Add.
4. Click Browse and navigate to your manifest file.

Figure K.2 Open Manifest dialog box

5. Select your manifest file, and then click Open.


6. Click OK, and then click OK again to close the Plug-in Manager.

To remove a Plug-in:
1. On the Tools menu, click Plugin Manager.
2. Select the plug-in to be removed and then click Remove.
3. Click Yes to confirm the removal, and then click OK.
4. Click Yes if you want to restart WTT Studio immediately or No if you wish
to do it later.

To disable a Plug-in Group:


1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the group to be disabled and expand it
by clicking the + icon.

Figure K.3 Selecting a Plugin Group

3. Click View Groups.


4. Select the group to disable.

5. Click Disable.
6. Click OK.

To enable a Group
1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the target group and expand it by
clicking the + icon.
3. Click View Groups.
4. Select the group to enable.
5. Click Enable.
6. Click OK.

To set a Group as Startup Group


1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the target group and expand it by
clicking the + icon.
3. Click View Groups.
4. Select the group to set up as a startup group.
5. Click Set as Startup Page.
Note: This button will only be visible when the group is not currently a
startup group.
6. Click OK.

To set a Group as Non-startup Group


1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the target group and expand it by
clicking the + icon.
3. Click View Groups.
4. Select the startup group to clear.
5. Click Clear Startup Flag.
Note: This button will only be visible when the group is set up as a
startup group.
6. Click OK.

Appendix L: WTT Metric Collection
Windows Test Technologies (WTT) Metric provides an overall generic framework
for collecting, storing, analyzing, and archiving various system metrics that are
associated with test systems.

The following subjects are included in this appendix:


Terminology
Working with the Configuration UI
Working with the Analyzer UI
Jobs Integration
Advanced Options

Terminology
Metric

An abstract term referring to any one of several sets of information or
statistics, including system performance counters and system pool tag
information. A metric usually consists of a name and its associated value.
Client Engine

An unmanaged runtime engine on a client computer that collects and reports
requested metric data to the WTT database.
Configuration UI
The managed UI framework that allows for configuring the type or selection
of metrics to collect, as well as other general information.
Analyzer UI
The managed UI framework that allows for querying collected metric
information and displaying it in graphical format. It also allows for exporting
that data for offline analysis.
Database
The WTT automation database that stores the metric data collected by the
client engines.
Plug-in

An external software module that plugs into existing modules to add
functionality to or improve the overall framework.
IPF

Short for Intelligent Pass/Fail. This consists of two pieces: a Configuration UI
plug-in and a client engine plug-in.
IPF UI plug-in

A plug-in that loads into the Configuration UI and allows for setting thresholds
on metrics that are being collected. This allows for the defining of pass/fail
thresholds of jobs based on metrics.
IPF Client plug-in

A plug-in that loads into the client engine and monitors the actual metrics
collected at run-time. It tests whether these metrics are within the defined
thresholds and takes the defined actions if they fall outside those
thresholds.
Criteria

The definition of a threshold for a given metric or metrics, as specified in the
IPF UI plug-in.

Working With the Configuration UI


Metric Configuration UI allows for configuring a list of metrics to collect, as well as
other general information.
It consists of six main types of settings:
General
EventLog
Perfmon (Performance Monitoring)
Pool Tags
Process
Intelligent Pass/Fail Configuration

General
This section allows users to configure the following settings:

Output Log File Name


The name of the log file on the client computer that holds the collected
metric information, which is periodically sent to the database. Note that the
log file only contains the data most recently reported to the database. The
default is WTTMetricData.log.
Report Interval (minutes)
The time in minutes between refreshing the metric data and sending an
update to the database. The default is sixty (60) minutes.
Do not Report to DB
When this check box is selected, the client engine does not report the metric
data to the database, giving users the flexibility to monitor the data offline if
required. This check box is cleared by default.
Load Plugin
Allows for users to load their own custom configuration UI plug-ins to extend
existing functionality.

EventLog
This section allows users to configure the following event log-related settings:
Event Source Settings
Allows users to specify event source filters to apply on the event logs, so that
only events from a specific EventSource are collected. Use the Add/Remove
buttons to add or remove one or more EventSources from the displayed list.
The list of EventSource filters defaults to (default), meaning that events from
all EventSources are collected.
Event Category Settings
This group allows users to specify event category filters to apply on the event
logs so that only events matching a specific EventCategory are collected.
Check all the categories of events you want collected, separately for each
EventSource added above. The default is to collect no category of events for
any EventSource (and thus no event logs at all) unless explicitly set by the
user.

Note: The client engine will only report new events that have occurred since
the last time that the event logs were read.

Perfmon (Performance Monitoring)


Users can add or update system performance counters (including complete
counter paths) on the list of metrics to be collected by using the Add/Remove
buttons. The list of counters is empty by default, which means that no
performance data is collected.
The client engine uses the system PDH libraries to collect data regarding
performance counters. If counter details are being entered manually using the
edit controls, it is important to make sure that they conform to the expected
counter path syntax. This is not relevant if the Add/Remove buttons are used
instead.
Also note that the Configuration UI uses the system PDH libraries to display the
available performance counters for you to select from. The displayed counters
are those available on the local system at the time you select them, and they
may not necessarily be available on the actual client systems. Users need to
make sure that the selected performance counters will, in fact, exist on the client
machines while the client engine is collecting this information during run-time.

Pool Tags
Users may select the driver pool tag(s) and associated information about the
selected pool tag(s) to be collected from the client machines at run-time. This
information may be updated from a list of excluded tags, and the final list in the
Include section is what the client engine will collect. Currently, collection of the
following data for each pool tag is supported: Nonpaged Allocations, Nonpaged
Frees, Nonpaged Used, Paged Allocations, Paged Frees, and Paged Used. By
default, no pool tag information is collected.
The Pool Tag option must be enabled on clients for the client engine to be able
to collect this data. The engine will not enable the option by itself. If the client
does not have it enabled, the engine will fail and skip collecting this data on
that client. Additionally, the list of displayed pool tags is derived from the
pooltags.txt file (from the WS03 sources).
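For reference, on operating systems where pool tagging is not on by default,
it can typically be enabled with the Global Flags utility (gflags.exe) from the
Windows debugging tools, followed by a reboot:
gflags /r +ptg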

Process
Users may specify the name(s) of the processes to collect information on. The
list of processes can be added to or updated, along with the specific counter
used, by using the Add/Remove buttons. Currently, the engine supports
collecting the following data for each process: Process ID, HandleCount,
WorkingSet, VirtualBytes, PagedPoolBytes, and NonPagedPoolBytes.
The user can also enter a service name (such as wuauserv, dhcp, and so on),
and at run-time the engine will pick the process under which the service is
running and collect data about that process. This allows for tracking services
that run under a common service host (such as svchost.exe). In this case, the
information that is collected will also include other services that are running
in the same process.

Intelligent Pass/Fail Configuration


Users may specify thresholds on the various metrics selected in all above
sections. Users may also specify what actions, if required, to take in case the
metrics either do not meet or exceed the threshold.

To add a new threshold for a metric


1. On the Tools menu, point to Metric, and then click Configuration.
2. On the Intelligent Pass/Fail Configuration tab, select the metric or
node to which you wish to add a threshold in the Available Metrics box.
3. Click Add Metric.

4. Select an appropriate operator for the threshold value specified from the
Operator drop-down list. (These include <, >, ==, and so on to dictate
whether the metric should stay below, above or equal to the threshold
value.)
5. Type an appropriate threshold value for your metric in the Threshold box.
6. Select an operator from the And/Or drop-down list if you wish to combine
this threshold with other thresholds. If ignored, only one threshold is set.
7. Click OK.

Additional thresholds may be added by repeating the steps and all may be
rearranged using the Move Up or Move Down buttons if desired.
Note: You should select whether the test will pass or fail if the collected metrics
meet the specified threshold. Additionally, a command line to execute in case of
failure may be specified. Selecting Pass will make the engine log a Pass in the
appropriate WTT log for the job when the threshold is met and continue
monitoring the metrics. Selecting Fail will make the engine execute the failure
command line, if specified, before logging a Fail in the appropriate WTT log for
the job and exiting.

Configuration UI FAQs
How can users get a cumulative log that contains all the updates that are
being sent to the database?
This feature is not currently supported in WTT Metric as this is contrary to the
basic concept of storing collected metric data in one location.
Can users reduce the report interval below one minute?
This is not currently supported due to the expectation that this would grow
the WTT database excessively while not providing significant value.
Can users write custom Configuration UI tabs?
See Advanced Metric Usage information.
Can event logs be filtered using something other than Event Source and
Event Category?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can counters be added that are not part of standard system performance
counters?
See Advanced Metric Usage information.
Can pool tags be added that are not listed in the Pooltags section?

This feature is not currently supported in WTT Metric due to unknown usage
information.
Can additional metrics other than what is displayed be added to pool
tags?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can specific processes be added when there will be multiple instances of
that process running at a given instant?
The metric feature allows users to add service names instead of just process
names, which can be used to differentiate between service hosts that share
the same process name. WTT Metric does not currently support more than
this. If WTT Metric cannot distinguish between processes due to their
naming, the client engine will pick only the first instance of that process
name and report data on that process only.
Can additional metrics on processes be added above what is displayed?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can more than a simple command-line be run when metrics do not meet
the thresholds?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users dynamically add/update configuration/settings at run-time?
See Advanced Metric Usage information.
Can users programmatically add/update configuration settings?
See Advanced Metric Usage information.
Can tests do more than Pass or Fail, if metrics do not meet the specified
criteria?
See Advanced Metric Usage information.

Working with Analyzer UI


The Metric Analyzer UI allows for querying metrics stored in the database for
purposes such as charting, analysis or offline storage. It consists of the following
sections:
Queries
Display

Queries
Users may query the stored metrics in the selected datastore based on various
filters. There are two primary ways to query for metrics: based on job results, or
based on the computer name.

To query based on job result


1. On the Tools menu, point to Metric, and then click Analyzer.
2. Select the appropriate database in the Data Store drop-down list.
3. In the Query Chooser group box, select the Result option.
4. Click Browse.
5. Select the result containing collected metric data, and then click OK.
Note: By default, all the results for the selected datastore are displayed.
You can filter the displayed results using the query builder within the
Result Chooser.
6. Select one or more metrics from the Metrics list.
7. Click the Refresh button.

If the selected result has metric data collected, the Machines drop-down
list is populated with all computers that were part of that job result. The
Metrics list box on the right is populated with the metric name(s)
collected for the computer selected in the Machines drop-down list.
The Start time and End time are set by default to the start and end periods
for the complete job selected. The user can further filter the metrics by
changing these dates so as to view only a selection of that data.

Display Section
Users may also view the queried metrics in graphical form. Currently, these can
be displayed in three formats: line graph, bar graph, and spreadsheet, by
selecting the desired format in the Select View drop-down list. The default view
is a line graph.
Users may have more than one query and thus more than one set of metric data
open at the same time, allowing for comparison of metric data or further analysis
by the user. Each time a new query is defined, a new set of view formats is
created. The user can move between multiple views and queries using the Select
Query drop-down list.
Spreadsheet View Tips
As well as being able to view data in a spreadsheet format, this view enables
users to select a section of the spreadsheet and use the Copy button to copy that

data to the Clipboard, where it can then be pasted into any application that can
handle the OLE transition appropriately.
You can also use the Export to Excel button to export the complete data set
displayed to a new Microsoft Excel worksheet, if Excel has been installed.
All data from a query may also be stored in a comma-delimited file (*.csv) and
saved on a local hard disk by selecting Save when pointing to Data Set on the
Metric Analyzer menu.
Previously saved metric data can be loaded from a comma-delimited file (*.csv)
by selecting Load when pointing to Data Set on the Metric Analyzer menu.
Note: Line and Bar graph formats of the data will be automatically re-created
when you load previously saved metric data from a CSV file.
Line and Bar Graph Tips
Both Line graph and Bar graph views of queried data may be saved as JPEG
images by clicking Save when pointing to Graph on the Metric Analyzer menu.

Analyzer UI FAQs
Can users change the resolution of the graph image that is saved?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can the user edit the data in the spreadsheet view?
This feature is not currently supported in WTT Metric due to unknown usage
information.
However, the data can be exported to Microsoft Excel where you would be
able to edit the data as required.
Can users add more view formats?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users do more than just view the data?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users programmatically access the data through the UI?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users add more Filters to query metric data than what is currently
supported in the UI?
This feature is not currently supported in WTT Metric due to unknown usage
information.

Jobs Integration
To assist users in setting metric configurations to collect and analyze job data
immediately without the added complexity of invoking the proper command line
for the client engine, a common Library Job has been added to default
installations of the WTT infrastructure.
To use this feature, users need to include this library job as a sub job within
their actual jobs.

To include the library sub job within a test job


1. Complete the settings and configurations desired within the
Configuration UI. For more information, see Working with the
Configuration UI above.
2. On the Explorers menu, click Job Explorer and select your controller
from the Datastore list.
3. On the Feature tab, right-click the job to which you wish to add the
metrics library job, and then click Edit.
4. On the Tasks tab, select the Regular tab.
5. Click Add.
6. Select the Run Job option, and then click OK.
7. Complete the Add Task Details dialog, specifically using the following
details:
Failure Action: Fail and Stop is recommended so as to allow users
to track any failures appropriately.
Run Job: Select Browse, and then select Metric, and then click OK.
Library Job Parameters:
o Select config.Metric in the Library Job Param Name drop-down
list.
o Click the browse button [. . .] in the Value column.
o Click Import, and select the saved metric file to add.
o Click OK.
8. Select the Execute these tasks in Parallel option to ensure that the
metric sub job runs in parallel to all other tasks in the Job.
Note: If task dependencies already exist within the job (preventing
running tasks in parallel), the WTT Metrics client engine will not be able
to collect and report metrics properly.

Note: Only one metric configuration sub job can be saved per job. If
multiple configurations are needed to collect different metrics, it will be
necessary to create different jobs whose Config.Metric parameter will store
those different configurations.

Job Integration FAQs


Can a job parameter be created with a different name than
Config.Metric?
The parameter may have any name, but still needs the .Metric suffix since
this is what the WTT Studio UI framework uses to identify that this parameter
will contain metric configuration.
Can users have more than one instance of the Metric Client engine
reporting different metrics?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users add their own metrics and have the client engine report them
to the database/datastore?
See Advanced Metric Usage information.

Advanced Options
A number of customizations or advanced options can be added by component
teams to customize metrics collection and analysis to meet their specific needs.
Most classes, DLLs and methods that are needed for customization are available
within Source Depot, within the WTT Development tree. For additional
information, see the WTT Software Development Kit (SDK).

Customization Procedures

To write a custom Configuration UI plug-in


1. Derive your class from the ConfigurationBase abstract class and override
the following properties:
ConfigurationName: This property is used for retrieving the
identifying name of this component.
Configuration: This property is for storing and loading the XML
configuration to and from the configuration plug-in.
2. Call the ParentContainer.DataChanged() method from within your
class whenever you have updated data that you would like to store in the
configuration file.
3. Drop your assembly into the
<WTTStudioInstallDir>\Microsoft.WTT.Metric.Configuration\
directory and load it in the Configuration UI using the Load Plugin
button.

To write a custom Client Engine plug-in for data collection


1. Create an unmanaged DLL that exposes the following required methods:
GetDllVersion: Called to query the APIs supported by this DLL.
Initialize: Called on startup to allow for any initialization.
Terminate: Called on shutdown.
GetMetricClasses: Return a collection of supported metric classes.
SetTraceLevel: Set information, warning, and error traces.
2. Implement the IMetricCollection interface that has the following
methods:
GetName: Returns the name used to find the configuration for this
metric collector in the XML.
Configure: Called to allow the metric class to configure itself
using information stored in the XML.
CollectMetrics: Logs the metrics to be collected to the output log.
3. Configure is passed an instance deriving from IConfiguration that allows
for querying the configuration store. Typical usage is as follows:
FetchFirst: Get the first metric matching the query.
FetchNext: Get the next metric matching the specific query.
FetchClose: Clean up the handle.

CollectMetrics is passed an instance deriving from ILogWriter, which is
used to store the collected metrics. Typical usage is as follows:
BeginGroup (some group name)
AddMetric (repeat for all metrics)
EndGroup
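
For illustration only, a data-collection plug-in skeleton might look as
follows in C. The WTT SDK defines the real prototypes; every signature
below is an assumption:

#include <windows.h>

/* All signatures here are illustrative assumptions; consult the
   WTT SDK for the real prototypes and interface definitions. */

static DWORD g_dwTraceLevel = 0;

__declspec(dllexport) DWORD __stdcall GetDllVersion(void)
{
    return 1;       /* report the plug-in API version this DLL supports */
}

__declspec(dllexport) BOOL __stdcall Initialize(void)
{
    return TRUE;    /* one-time startup work would go here */
}

__declspec(dllexport) void __stdcall Terminate(void)
{
    /* shutdown/cleanup work would go here */
}

__declspec(dllexport) void __stdcall SetTraceLevel(DWORD dwLevel)
{
    g_dwTraceLevel = dwLevel;  /* information, warning, and error traces */
}

/* GetMetricClasses would return the IMetricCollection implementations this
   DLL supports. A CollectMetrics implementation would then follow the
   BeginGroup / AddMetric / EndGroup pattern described above, for example:

       pLogWriter->BeginGroup(L"MyComponentMetrics");
       pLogWriter->AddMetric(L"MyMetricName", L"MyMetricValue");
       pLogWriter->EndGroup();
*/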

To write a custom Client Engine plug-in for Intelligent Pass/Fail


1. Create an unmanaged DLL that exposes the following required methods:
GetDllVersion: Called to query the APIs supported by this DLL.
Initialize: Called on startup to allow for any initialization.
GetMetricPassFail: Return a collection of supported metric IPF
classes.
Terminate: Called on shutdown.
2. Implement the IMetricPassFail interface that has the following methods:
GetName: returns the name used to find the configuration for this
metric collector in the XML.

Configure: This is called to allow the metric class to configure itself


using information stored in the XML.
PassOrFail Read the current Metric values on Input and return the
pass/fail decision
3. Configure is passed an instance deriving from IConfiguration that allows
for querying the configuration store. Typical usage is as follows:
FetchFirst: Get the first metric IPF matching the query.
FetchNext: Get the next metric IPF matching the specific query.
FetchClose: Clean up the handle.

Deploying Custom Client Engine Plug-ins


The method used to deploy custom client engine plug-ins and integrate them
with jobs varies depending upon the type of plug-in used:
Client Engine plug-in for Data Collection
This plug-in should be installed on each and every client machine along
with the WTT client software. It can be placed in any directory on the
client machine that suits test needs as long as that directory is part of the
system PATH environment variable. However, for increased maintainability,
it is recommended that this plug-in be placed in the WTT Client installation
directory on the client computers.
Client Engine plug-in for Intelligent Pass/Fail
As with the plug-in for Data Collection, this plug-in should be installed on
each and every client machine along with the WTT client software. It can
be placed in any directory on the client machine that suits test needs as
long as that directory is part of the system PATH environment variable.
However, for increased maintainability, it is recommended that this plug-in
be placed in the WTT Client installation directory on the client computers.
Configuration UI plug-in
This is a plug-in to the WTT Studio Metric Configuration UI and as such
should be installed on machines along with the WTT Studio software. You
will have to place this plug-in in the directory:
<WTTStudioInstallDirectory>\Plugins\Microsoft.WTT.Metric.Configuration
where <WTTStudioInstallDirectory> is the directory in which WTT Studio
is installed.
In addition, you will have to load your plug-in specifically every time you
need to use it.

Jobs Integration with Custom plug-in(s)


Depending on what kind of custom plug-in(s) you have, integrating those plug-ins
with regular jobs may vary from Jobs Integration above.
If you have developed a custom Client Engine plug-in for Data Collection or a
custom Client Engine plug-in for Intelligent Pass/Fail that does not have
any corresponding configuration data stored using the configuration UI, then Jobs
integration is different.
Deploy the custom plug-ins as described above.
Save a configuration file called MyConfig.Metric using the Metric
Configuration UI. See Working with the Configuration UI for assistance.
Rename the saved metric configuration file to MyConfig.txt, removing
the .Metric extension.
Edit the configuration file using Notepad and append the following within
the <WTTMetricConfig> tags:
<Group Name="<name>" Type="Metric" UIDllName=""
ClientDllName="<ClientDllName>">
</Group>
where
<name> is the unique group name for the component.
<ClientDllName> is the client plug-in DLL name.
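For example, with a hypothetical client plug-in DLL named
MyMetricCollector.dll, the appended element might look like this:
<Group Name="MyComponentMetrics" Type="Metric" UIDllName=""
ClientDllName="MyMetricCollector.dll">
</Group>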

Create a job using Job Explorer to collect metrics, with the following local
parameter in addition to standard job information:
o Parameter name: MyMetricConfig.Txt
o Parameter type: FileData
o Parameter value: the imported metric configuration file
(MyConfig.txt)

Note: See Creating and Editing Jobs for assistance if needed.

Add a regular task to the main job to be integrated with. This task must
run in Parallel to all other tasks and be defined as follows:
1. Select Run Job as the task type and click OK.
2. Type a task name in the Name box.

3. On the Run Job tab, click the Browse button and navigate to the
available library jobs. Select the Metric library job and click OK.
4. From the Library Job Param Name drop-down list, select
config.metric.
5. From the Job Param Name drop-down list, select the parameter
created above (MyMetricConfig.txt), and then click OK.
Note: Only one Metric library sub job can be defined per job. If multiple
configurations are needed, different jobs will need to be created.

Appendix M: Managing LLU and LSU Functions

A Local Logical User (LLU) and a Local Symbol User (LSU) are Credential
Triplets (user name, domain, and password) that are used by Windows Test
Technologies (WTT) for various purposes. Although they are configured similarly,
the uses to which each is put vary considerably.
LLU: A Local Logical User is a Credential Triplet that functions as if it were the
test engineer's domain credentials with regard to WTT. This provides two main
functions: first, to provide secure test access for testers without risking exposure
of their domain credentials, and second, to provide a layer of abstraction between
a user and task definitions.
LSU: A Local Symbol User is a Credential Triplet that is used only by the WTT
Autotriage application. It provides access for Autotriage across computers and
operating systems within WTT in order to triage task failures.

Local Logical Users (LLU)


A Local Logical User (LLU) is a local credential triplet (user name, domain and
password) configured on a WTT client computer to ensure secure access. Any
user can configure an LLU on any WTT client, provided they have administrator
access to that client. This may be done remotely using the WTT Studio UI and can
also be performed on multiple computers at once. (Note: per Windows policy, any
user creating an account must themselves have administrator access to the
computer on which they are creating the account; the LLU account itself does not
require administrator access, except for use in tasks which themselves require it.)
Configuring an LLU creates an entry in a file (LLUTable.xml) on the client
computer that maps the LLU to an actual user credential triplet. It does not,
however, check for the validity of the credential given. Nor does it attempt to
create the user account given, if the account does not exist. It also does not
attempt to change the password of the user if the password is updated through
WTT Studio.

Usage of an LLU
There are two primary reasons for using an LLU:
1. To substitute for sensitive account information in jobs or tasks. If a test
engineer specifies sensitive accounts in a task definition, that account is
vulnerable to password disclosure. To avoid this, a user can configure a

LLU with the sensitive account information and configure the task to use
the LLU. The actual account information remains on the client computer
and remains safe while the LLU is used.
Note: It should be remembered, however, that an LLU is available to any
administrator on the WTT client computer upon which it is located. An LLU
should therefore only be configured on client computers that are
themselves secure.
2. To abstract a user name from job task definitions. For example, if a job is
created which will be used by multiple testers in different teams, using an
LLU rather than actual accounts is useful because the actual account may
not be valid in different setups. If an LLU is configured for the task, the
new setup only has to make sure that the appropriate LLU is present on
the WTT client computers.

Additionally, an LLU gives the WTT test team an easy way to maintain test
accounts. If a test account is used within task definitions, every time that test
account password changes, each task where the account is used will require
updating. If an LLU was used, however, all that is necessary is to update the
account password on all the client test computers using a simple bulk operation in
the WTT Studio UI.

Several tips that may assist users with managing LLUs include:
To find out what LLU is configured on a specific computer, users can look at
the file c:\WTT\JobsWorkingDir\Security\LLUTable.xml.
The created LLU will stay with the system until WTT is uninstalled. It will
stay across operating system installations. This means that a tester will not
need to re-create an LLU after an operating system upgrade or after
booting to a different operating system installation.
LLU Operation from the UI is not allowed on computers in the Default Pool
because the default pool is not secure.

Configuring an LLU
Two things that are important to consider when configuring an LLU:
For any LLU request to succeed, the target computers must have the WTTSVC
service running.
The LLU user interface works directly with the client computers selected.
Therefore, if there is not a direct connection between the WTT Studio interface
and the client computer, the LLU request will fail.

To create an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click an asset pool or select a set of computers, point to Manage
Logical Users, and then click New.
3. Select Local Logical User, and then click Next.
4. If the LLU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LLU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative
rights on the client computers in the Domain\User Name box, and then
type the account password in the Password box. Click Next.
Note: This step should be skipped if the LLU is being configured from
the WTT Controller.

Figure M.1 Create LLU (credentials) dialog box

6. Type an account name for the LLU in the Local Name box.
7. Type the domain and user name associated with the LLU in the
Domain\User Name box.
8. Type the account password in the Password box, and then retype it in the
Confirm Password box.
9. Click Start.
10. When the LLU is configured, click Close.

To update an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LLU is located, point to Manage
Logical Users, and then click Edit.
3. Select Local Logical User, and then click Next.
4. If the LLU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LLU is being configured from a client machine, then select Running
from another machine and type a domain and user name with
administrative rights on the client computers in the Domain\User Name
box and type the account password in the Password box. Click Next.
Note: This step should be skipped if the LLU is being configured from
the WTT Controller.
6. Type the LLU name to be updated in the Local Name box.
7. Type the domain and user name associated with the LLU in the
Domain\User Name box. It is not necessary that this be an
administrative account.

Figure M.2 Edit LLU dialog box

8. Type the old account password in the Old Password box.


9. Type the new account password in the New Password box, and then
retype it in the Confirm Password box.
10. Click Start.

11. When the LLU is updated, click Close.

To delete an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LLU is located, point to Manage
Logical Users, and then click Delete.
3. Select Local Logical User, and then click Next.

Figure M.3 Delete LLU dialog box

4. If you wish to delete a single LLU, select Delete one and type the LLU name
in the Local Name box. If you wish to delete all local logical users on the
client computer, select Delete All.
5. Click Start.
6. When the LLU is deleted, click Finish.

Special Note regarding WTTCMD access to LLU functionality: Through WTT


2.0 Beta 2, WTTCMD.exe was the only way to manage LLU. With WTT 2.0 RTM,
WTTCMD.exe can still be used to manage LLU but it is recommended that users
use this UI to manage all LLU functions. WTTCMD.exe should only be used locally
to query for an LLU on a local computer.

To query for an LLU on a local computer


Note: Because WTTCMD is not installed on WTT Controllers, this may only be
performed on client computers.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /QueryLogicalUser

Local Symbol User (LSU)


A Local Symbol User (LSU) can be thought of as a credential store (user name,
domain, and password) for symbol share access that works across operating
system installations. A user can configure an LSU on any WTT client, provided
they have administrator access to that client. This may be done remotely using
the WTT Studio UI and can also be performed on multiple computers at once.
Configuring an LSU creates an entry in a file (LSUTable.xml) on the client
computer. It does not, however, check for the validity of the credential given. Nor
does it attempt to create the user account given, if the account does not exist. It
also does not attempt to change the password of the user if the password is
updated through WTT Studio.
The LSU is used by the WTT Autotriage component to access a symbol share so
that it can resolve symbols to triage a failure. The LSU is accessible to any user
on the client machine.
Warning: Do not put sensitive account information into an LSU.

Usage of LSU
The LSU is currently used only by the WTT Autotriage component to resolve
symbol shares.
An LSU is identified by the network share it gives access to. For example, when
WTT Autotriage uses the LSU:
<\\MySymbolShare, UserA, DomainB, Password>
Autotriage will know that it needs to access \\MySymbolShare for the
symbols, and it will therefore issue a net use with this credential.
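Conceptually, this is equivalent to running the following command with the
sample triplet above:
net use \\MySymbolShare Password /user:DomainB\UserA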
The LSU name * represents the account to be used for any share for
which no credential is given.

When used, the LSU stays across computer restarts as well as across operating
system installations. As well, the password for the account can be updated

centrally from the WTT Studio UI, rather than having to be changed on each client
computer.

Configuring an LSU
Two things that are important to consider when configuring an LSU:
For any LSU request to succeed, the target computers must have the
WTTSVC service running.
The LSU user interface works directly with the client computers selected.
Therefore, if there is not a direct connection between the WTT Studio interface
and the client computer, the LSU request will fail.

To create an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click New.
3. Select Local Symbol User, and then click Next.
4. If the LSU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LSU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative
rights on the client computers in the Domain\User Name box, and then
type the account password in the Password box. Click Next.
Note: This step should be skipped if the LSU is being configured from
the WTT Controller.

Figure M.4 Local Symbol User dialog box

6. Type the network share on which the LSU will be located in the Network
Share box.
7. Type the domain and user name associated with the LSU in the
Domain\User Name box.
8. Type the new account password in the New Password box, and then
retype it in the Confirm Password box.
9. Click Start.
10. When the LSU is configured, click Finish.

To update an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click Edit.
3. Select Local Symbol User, and then click Next.
4. If the LSU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LSU is being configured from a client machine, then select Running
from another machine and type a domain and user name with
administrative rights on the client computers in the Domain\User Name
box and type the account password in the Password box. Click Next.
Note: This step should be skipped if the LSU is being configured from
the WTT Controller.

Figure M.5 Edit LSU dialog box

6. Type the network share on which the LSU is located in the Network
Share box.
7. Type the domain and user name associated with the LSU in the
Domain\User Name box.
8. Type the old account password in the Old Password box.
9. Type the new account password in the New Password box, and then
retype it in the Confirm Password box.
10. Click Start.
11. When the LSU is updated, click Finish.

To delete an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click Delete.

Figure M.6 Delete LSU dialog box

3. If you wish to delete a single LSU, select Delete one and type the network
share on which the LSU to be deleted is located in the Network Share
box. If you wish to delete all local symbol users on the client computer,
select Delete All.
4. Click Start.
5. When the LSU is deleted, click Finish.

Special Note regarding WTTCMD access to LSU functionality:


Through WTT 2.0 Beta 2, WTTCMD.exe was the only way to manage an LSU.
With WTT 2.0 RTM, either WTTCMD.exe or this UI may be used to manage
an LSU. WTTCMD.exe, however, must be used to query for an LSU on a local
computer.

To query for an LSU on a local computer


Note: Because WTTCMD is not installed on WTT Controllers, this may only be
performed on client computers.
1. On the taskbar, click Start, and then click Run.

2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:
WTTCmd.exe /QuerySymbolUser

Appendix N: Unified Reporting


The Unified Reports (UR) Cube Admin dialog allows users to use the OLAP cube
directly for specific reporting needs. By selecting a Unified Reporting Cube Service
datastore and then adding JobsDefinition or JobsRuntime datastores to it, a user
is provided with common dimensions among available datastores and allowed to
choose reporting dimensions. The Unified Reports Cube Admin UI is very simple
and allows users to navigate using either a keyboard or mouse.

The following subjects are included in this appendix:


Unified Reports Cube Administration
Working with Unified Reports

Unified Reports Cube Administration


The Unified Reports module allows users to create multidimensional reports based
around a central database where all jobs and results are collected from WTT data
stores. Each Unified Reporting datastore contains a list of JobsRuntime
datastores, which store results, and JobsDefinition datastores, which contain the
jobs. Using this, a user can create a central reporting site, where reports against
all jobs and results in an enterprise can be run.
The Unified Reporting service datastore is set up using WTT Enterprise Setup,
which also creates an OLAP database; report dimensions can then be selected
from all available JobsRuntime and JobsDefinition datastores. If the Unified
Reports Crawler Service is running, it immediately starts collecting all jobs,
results, and selected dimensions into the unified reporting service datastore.
The OLAP cube is then updated based on the latency time set by the user, and all
reports become available through the unified reports web site.

Unified Reports Module Architecture


The Unified Reports module is composed of the following components:
Unified Reporting Service datastore
Unified Reports OLAP database
Unified Reports Cube Admin Dialog
Unified Reports Crawler Service
Unified Reports Web Site

Figure N.1 Unified Reports Architecture

Unified Reporting Service Datastore


A datastore to which the Unified Reporting Service has been added, and on which
it is active, functions as a data warehouse for the OLAP database. It provides
users with a schema in which to store jobs, job contexts, the feature hierarchy,
the category hierarchy, and the attribute hierarchy, as well as results. It also
stores all the constraints for these jobs, job contexts, and results.
This schema is optimized for the OLAP database engine so that queries complete
faster; in other words, so that joins and reads are faster. The process is
seamless to end users, who access the OLAP database directly through the
Unified Reports Web Site and are therefore unaware of what happens behind the
scenes.
The Unified Reporting Service datastore also stores report dimensions, as well
as the locations of the JobsRuntime and JobsDefinition datastores, so that they
can be accessed when needed from the Cube Admin UI.

Unified Reports OLAP Database


Online analytical processing (OLAP) is a technology designed to provide superior
performance for ad hoc business intelligence queries. An OLAP database is
designed to operate efficiently with data organized in accordance with the
common dimensional model used in data warehouses.
A data warehouse provides a multidimensional view of data in an intuitive model
designed to match the types of queries posed by analysts and decision makers.
OLAP organizes data warehouse information into multidimensional cubes based
on this dimensional model and then preprocesses these cubes in order to provide
maximum performance for queries that summarize data in various ways.
For example, a query requesting the percentage of passes and attempts for a
range of components for specific test criteria broken down by processor type and
language used could be answered using this method within a few seconds or less
regardless of how many hundreds of millions of rows of data are stored in the
data warehouse database.

Unified Reports Cube Admin Dialog


This dialog allows you to manage the Unified Reporting Service datastore and its
OLAP database. It allows you to perform the following functions:
Create an OLAP cube for the Unified Reporting Service datastore.
Add or remove JobsRuntime and JobsDefinition datastores from which the
Crawler Service will retrieve jobs and results.
Select or clear report dimensions using check boxes.

Unified Reports Crawler Service


The Unified Reports Crawler Service is a Windows Service that scans all Unified
Reporting Service datastores within an enterprise and notes all jobs, job contexts,
results and report dimensions from the associated JobsRuntime and
JobsDefinition datastores. Every time a JobsRuntime or JobsDefinition datastore is
crawled, the service queries only those records which have changed since the
previous crawl. The service performs this operation continuously until paused or
stopped.

Unified Reports Web Site


As can be seen in Figure N.1 above, the Unified Reports web site is a presentation
layer on top of the OLAP cube and the Unified Reporting Service datastore. It
allows users to connect to their Unified Reporting OLAP cube and create reports
directly. This is also where end users such as testers, test leads, managers,
and others consume these reports. A selection of default reports is provided,
but users are also able to create or customize reports to fit their specific
needs, showing roll-up values against any or all dimensions or providing
drill-down through the dimension hierarchy to the job and test result level.
The web site keeps track of all OLAP cubes within the enterprise and therefore
allows reports to be based upon any cube present throughout the enterprise.

Working with Unified Reports

To generate a report using Unified Reports Cube Admin


1. On the Admin menu, click Unified Reports Cube Admin.
2. The Unified Reports Cube Admin dialog scans the WTT enterprise and
generates a list of all Unified Reporting Service datastores.
3. Double-click the target reporting cube (Unified Reporting Service
datastore) in the Unified Report Cubes group box.

Figure N.2 Unified Reports Cube Administration dialog box


4. Select a JobsDefinition or JobsRuntime datastore checkbox.
5. Click Select.
6. Select or clear report dimension checkboxes within the Dimension group
box.

Figure N.3 Selecting Report Dimensions within Unified Reports Cube Admin

7. Click Apply.
8. Click OK to generate reports.

Appendix O: Unattended
Installation
Testers and test lab managers often find it useful to perform unattended
installations of WTT across several (or many) computers. Although a standard
"quiet" installation may be performed using the /q command-line option, many
more options are available, depending on whether the user needs to install a
test controller, test client, or test studio.

Test Controller

To perform an unattended installation of a test controller


A test controller is installed from either a network share or from an installation
CD.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:

<source>\Setup\<type>\Setup.exe /qb DBSAPASSWORD="<sa>"
NOTIFICATIONUSER="<userid>" NOTIFICATIONPASSWORD="<password>"
NOTIFICATIONDOMAIN="<domain>" INSTPASSWORD="<installpassword>"
Where:
<source> is the source from which the controller is being installed.
<type> is the server architecture type, such as x86.
<sa> is the database sa password.
<userid> is the WTT Notification user ID being used for the
installation.
<password> is the WTT Notification user password.
<domain> is the WTT Notification user account domain.
<installpassword> is the install user password.
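
For example, a hypothetical invocation (the share name \\testsrv\wttsetup, the
x86 architecture folder, and all account values below are placeholders, not
defaults):

\\testsrv\wttsetup\Setup\x86\Setup.exe /qb DBSAPASSWORD="saP@ss"
NOTIFICATIONUSER="wttnotify" NOTIFICATIONPASSWORD="n0tifyP@ss"
NOTIFICATIONDOMAIN="CORP" INSTPASSWORD="inst@llP@ss"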

To perform an unattended uninstall of a test controller


1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:

<source>\Setup\<type>\Setup.exe /uninstall

Where:
<source> is the source from which the controller is being installed.
<type> is the server architecture type.

Optional Properties for unattended install/uninstall of a test controller:


WTTINSTFS: the WTT install file share name.
WTTINSTDIR: the WTT install directory mapped to the file share name.
WTTLOGFS: the WTT log file share name.
WTTLOGDIR: the WTT log directory mapped to the file share name.
PDUSERNAME: the push daemon user name or log file share user.
PDPASSWORD: the push daemon password.
PDDOMAIN: the push daemon domain.
INSTUSERNAME: the install share user name.
INSTMACHINE: the install computer name where the install user will be
created.
INSTPASSWORD: the install user password.
CREATEDB: the create database check box ("" for unchecked, "1" for
checked).
DBNAME: the WTT database name.
IDENTITYSERVER: the name of the identity server if CREATEDB is
unchecked. The server name\instance name can be specified in this field.
IDENTITYDATABASE: the name of the database on the identity server.
AUTOMATIONSERVER: the name of the server on which the DBNAME database
already exists. The server name\instance name can be specified in this
field.
USERROLE: the user role, set in the XML file that is passed to
WTTEnterpriseSetup.exe. By default this is set to DatastoreAdmins.
DRIVESELECTION: the drive that will be used for the WTT directory, in
the form "C:\".
BUILD: the installation root path (default = C:\WTTBin\2000\).
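
As a sketch combining some of these optional properties, assuming the same
hypothetical share as above, a controller install that also creates the database
and places the WTT directory on drive D might look like the following (verify
the property combination against your environment):

\\testsrv\wttsetup\Setup\x86\Setup.exe /qb DBSAPASSWORD="saP@ss"
NOTIFICATIONUSER="wttnotify" NOTIFICATIONPASSWORD="n0tifyP@ss"
NOTIFICATIONDOMAIN="CORP" INSTPASSWORD="inst@llP@ss"
CREATEDB="1" DBNAME="WTTJobs" DRIVESELECTION="D:\"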

Test Client

To perform an unattended installation of a test client


A test client is installed directly from the test controller.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.

3. At the command prompt, type the appropriate command:


Client with no debugger attached
\\<server>\wttinstall\client\Setup.exe /qb KERNELDEBUGATTACHED="No"
Client with a serial port kernel debugger attached
\\<server>\wttinstall\client\Setup.exe /qb SERIALDBG="Yes"
SERIALPORT="<port>" BAUDRATE="<rate>"
Client with a channel 1394 kernel debugger attached
\\<server>\wttinstall\client\Setup.exe /qb SERIALDBG="No"
CHANNEL="<channel>"
Client without the WTT Triage tools installed
\\<server>\wttinstall\client\Setup.exe /qb ADDLOCAL=WTTBin

Where:
<server> is the test controller from which the test client is being
installed.
<port> is the serial port, such as COM1, through which the kernel
debugger is connected.
<rate> is the baud rate at which the kernel
<channel> is the channel, such as 1394, through which the kernel
debugger is connected.
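
For example, a hypothetical installation of a client with a serial kernel
debugger on COM1 at 115200 baud, from a controller named TESTCTRL01 (all
values are placeholders):

\\TESTCTRL01\wttinstall\client\Setup.exe /qb SERIALDBG="Yes"
SERIALPORT="COM1" BAUDRATE="115200"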

To perform an unattended uninstall of a test client


1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:
\\<Server>\wttinstall\client\Setup.exe /uninstall
Where:
<server> is the test controller from which the test client was
installed.

Optional Properties for the unattended install/uninstall of a test client:


BUILD: the installation root path (default = C:\WTTBin\2000\).
DEBUGGERNAME: the name of the debugger (default = "DEFAULT").
ICFAGREE: indicates that ICF is enabled on the system. This property is
only used if ICF is enabled, in which case the value is "Yes." This
property is case sensitive.
DRIVESELECTION: the drive that will be used for the WTT directory, in
the form "C:\".

Test Studio

To perform an unattended installation of a test studio


A test studio is installed directly from the test controller.
1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:
\\<server>\wttinstall\Studio\Setup.exe /qb
Where:
<server> is the test controller from which the test studio is being installed.

To perform an unattended uninstall of a test studio


1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the following command:
\\<Server>\wttinstall\Studio\Setup.exe /uninstall
Where:
<server> is the test controller from which the test studio was installed.

Optional Properties for the unattended install/uninstall of a test studio:


BUILD: the installation root path (default = C:\WTTBin\2000\).
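
For example, a hypothetical studio install that overrides the installation root
path (the controller name and path are placeholders):

\\TESTCTRL01\wttinstall\Studio\Setup.exe /qb BUILD="D:\WTTBin\2000\"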

MSI uninstall commands


These uninstall commands for the MSI files are also located in the .ini files.
Test Controller:
Msiexec /x {FA5C10A3-0B0F-4E62-AF80-B3FD2C680CAE}
Test Client:
Msiexec /x {93006234-786B-462F-804F-EDED40662705}

Test Studio:
Msiexec /x {DE3938A9-F58A-4BCB-BEC5-036385AF3D5B}
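
These commands display the standard Windows Installer interface; if a fully
silent uninstall is needed, the standard msiexec /qn switch can in principle be
appended (this is a general Windows Installer option, not a WTT-specific one,
so verify that it behaves as expected with these packages). For example:

Msiexec /x {93006234-786B-462F-804F-EDED40662705} /qn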

Appendix P: Code Coverage


Code Coverage integration with WTT allows users to collect code coverage data on
a per Job basis. This coverage data is collected for all jobs run on any
instrumented build of Windows (see special cases below). The data is
automatically collected for the user and moved to the code coverage database for
analysis. There is no need to do anything more than install a Code Coverage build
and run your normal test suite within WTT.

Controller Setup
To set up a WTT Controller to integrate with code coverage, use the
ccserversetup.cmd script from the Run dialog box.

1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type:
\\codecoverage\public\wtt\ccserversetup.cmd <WTTBin> <WTTInstall> <SystemLog>

Where:
<WTTBin> is the full path to the WTTBin directory on the WTT Controller.
<WTTInstall> is the full path to the WTTInstall directory on the WTT Controller.
<SystemLog> is the full path to the SystemLogs directory on the WTT Controller.
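
For example, a hypothetical invocation with default-style paths on the
controller (adjust all three paths to match your installation):

\\codecoverage\public\wtt\ccserversetup.cmd C:\WTTBin\2000 C:\WTTInstall C:\SystemLogs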
This script will fail unless all three command-line parameters are provided.
The script copies the code coverage files and client-side installation scripts,
and schedules an export Job on the current Controller to send the appropriate
coverage data to the Code Coverage database.

Note: With the scheduled task running on the WTT controller, code coverage
data collected from test machines is automatically sent to the code coverage
server every 2 hours. The frequency of this task can be changed by the WTT
administrator.

Client Setup
Setup on WTT client computers varies depending upon whether they use the
Longhorn operating system or not, as follows:
Longhorn Builds: Longhorn builds do not require any additional
consideration, as the necessary code coverage utilities are already in place
beginning with build number 4068. The only actions necessary are the
installation of the code coverage server script and then running a normal test
suite.

Non-Longhorn Builds: Non-Longhorn builds need to have a script run
after the code coverage server script is installed. This is not necessary if
Archon v1.98 is being used; in that case, the client side runs transparently
and requires no user intervention.
Manual Integration
If Code Coverage data is to be collected and Archon v1.98 is not
available or not used, or if the user creates their own Archon job
instead of using the OSDeploy library job, a post-install script must be
run as follows:

1. On the taskbar, click Start, and then click Run.
2. In the Run dialog box, type:
\\<WTTController>\wttinstall\codecoverage\ccclientsetup.cmd
Where:
<WTTController> is the name of the WTT Controller used by the client
computer.
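
For example, if the client computer uses a hypothetical controller named
TESTCTRL01:

\\TESTCTRL01\wttinstall\codecoverage\ccclientsetup.cmd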

Job Considerations
For WTT, the Code Coverage team recommends creating one test, matched to one
trace, which is mapped to one job. Traces should take between 30 seconds and
15 minutes to run. If your Job contains numerous traces that run for longer
than 30 minutes or so, you should investigate breaking the tests up so that
they conform to this recommendation. If that is not possible, the user can
create tasks inside the job(s) that call CCSave.EXE to save the individual test
cases. Note, however, that as the traces are saved, the upper-level Job-Trace
will be empty.

Recommendations for running jobs:


Run Time for a Given Job:
The run time for an individual Job should be in the neighborhood of 30
seconds to 15 minutes, with a maximum of no more than 30 minutes.
Running tasks in parallel:
During run time, tests cannot be scheduled to run in parallel. This is
because the tool cannot identify coverage data on a per-process basis. For
example, if the user wants to run test1 and test2 in parallel, the coverage
data saved with test1 will actually be a mix of results for both test1 and
test2.
Running a Test:

Add the test computer installed with a code coverage Windows build
to a separate machine (asset) pool.

Schedule a test to run in the above machine (asset) pool.

During the test, coverage data is automatically saved with the full
name of the scheduled tests. Coverage data files are exported to
\\<WTTController>\SystemLogs\CCData after each test run, where
<WTTController> is the name of the Controller.

Scheduling a Job to deploy a code coverage OS


Details on scheduling an OSDeploy library Job via Archon from within WTT
Studio can be found at the Archon website.
Binary Mapping tab in Job Explorer
The binary mapping page allows a user to designate the binary or binaries
that the user's test is designed to exercise. It also allows the user to
predict the running time of the test and provide contact information,
similar to using the command-line version of CCDATA.EXE.

Managing Code Coverage Data


View Code Coverage Data
Users can view their code coverage data on the Code Coverage website.
Additional information including terminology, processes, and how to use
the data is available at the same location.
Register new tests
When you create a job and run it on a code coverage build, a trace name
is automatically created using the full job name and feature path. All WTT
jobs are treated as Code Coverage traces when they are run on a Code
Coverage build.

Update existing tests


Updating a test is done through Job Explorer. The trace data associated
with the test will be automatically updated on the code coverage servers.
Delete existing tests
Deleting a test using Job Explorer will automatically delete the trace from
the Code Coverage servers. This change will eventually propagate to the
Code Coverage databases themselves, removing all trace data associated
with that job in the database.

Appendix Q: Windows Test Labs


The test labs listed in this appendix are those that make the most use of, and
have the greatest need for, test systems from the OEMs.

Windows Test Lab Information


The information on the following test labs is based on information provided by
each lab and may not be complete. Each lab has provided its test focus to better
enable allocation of systems. Some have provided the types of systems that they
can make best use of. Each lab has a contact name or alias if you need more
information.

1394 Lab
Different computers can have many different chipset 1394 solutions, as well as
different implementations of the same chipset. The 1394 lab team needs to test
on a wide variety of systems from all OEMs, as well as new chipsets.
The team is responsible for testing the core 1394 bus for all operating systems
in development. Its work includes the following:
Extensively test interoperability of host controllers and devices on a
daily basis.
Test through multiple classes of devices on all known types of host
controllers.
Disabler/remover tests (just host controllers).
Async and Isoch loopback tests.
Storage and Digital Video device interoperability testing.
IP 1394 testing.
Power management tests (all systems, all ACPI sleep states).
WHQL/1394 test suites.
Contact Alias
ncrum

ACPI/Power Lab
As part of the Hardware Platforms Test Team, the ACPI/Power lab checks that
each BIOS implementation follows the ACPI 1.0B and ACPI 2.0 specifications. Our
primary focus is operating system testing, but BIOS verification is done at the
same time as a result.
Test focus
Thermal test

CPU adaptive throttling and processor power management (P-states, C-states)

CM Battery and smart battery
Power control panel UI
UPS.
Power manager and powerprof.dll.
Wake on LAN, USB, PS/2, and ring.
Powercfg.exe and sleep disable logging.
Verify that all APIC routing entries in the ACPI namespace are correct
(through multiprocessor ACPI testing).
Systems Needed
Any systems with different BIOS implementations with respect to
power/Plug and Play, and especially systems with ACPI 2.0 features.
Any systems implementing new processor power management
techniques.
Any systems with new form factors, such as Desknotes.
Contact Alias
ncrum

Application Compatibility Lab


Lab
AEOEM (Application Experience OEM)
Test Focus
This lab focuses on testing the applications that ship on and with OEM systems in
a Microsoft Windows codename Longhorn environment.
This includes migrating the original operating system of the system (Microsoft
Windows XP most likely) to Windows Longhorn as well as some limited application
testing on clean installs of Windows Longhorn.
How they test an application depends on the application's market saturation and
input from the OEMs themselves. Some applications may be tested for 15
minutes, while others may be tested for an hour or even two hours.
Systems Needed
For the OEMs that the Systems lab team tests, they are most interested in
Windows Longhorn-compatible systems with as many applications pre-installed as
they can get.
The team is also interested in testing applications on one of each OEM system
line.

However, the team cares more about the number of applications tested than the
number of systems tested. If the team receives one Sony system with
50 applications, it is comparable to two Sony systems with 25 applications each.
Contact Alias
ppagel

Audio Lab
The Audio Lab team tests devices such as video capture, TV tuners, DVD, DV
cameras, portable media devices, audio devices (ISA/PCI/USB/1394), and so on.
Test Focus
Plug and Play.
Power Management.
Low system resources/stress situations.
All of the above on various bus types (ISA/PCI/AGP/USB/1394).
Driver/API/Core testing across many operating system SKUs and
computer configurations.

Systems Needed
Multiprocessor computers in all flavors, AMD64, and laptops (with USB
and 1394 ports).
Contact Alias
brentm

Base Scenarios Lab


The Base Scenarios lab team is responsible for testing real-world corporate
scenarios in an effort to locate performance and reliability issues in Microsoft
Windows Server 2003. The team focuses on core components, such as memory,
drivers, I/O, and network drivers. Base Scenarios lab testing is unique in that the
team tests the system from end-to-end rather than focusing on a specific
component. We also chase down compatibility issues within our own products as
necessary.
The lab's original charter was to focus on the Microsoft Windows codename
Longhorn server before it was made a client-only release. So now the team's
roles are slightly reversed, although they are still responsible for testing
.NET service packs and so on. The team is now focusing more on the client side
of business rather than solely on servers as originally planned.
Systems Needed
Client systems (workstations) for Windows Longhorn testing.

Server systems for continued .NET testing and to support future Windows
Longhorn testing. One of the team's goals is to put as much stress on high-end
servers as possible in an effort to identify longevity issues that might
otherwise take several days, weeks or even months to manifest under normal
circumstances. The team uses client-server scenarios to test both ends of the
puzzle at the same time. For example, client computers request several streaming
media threads or web pages from a server that uses a SQL backend running on a
different server. The clients get stressed by the process of requesting and
validating the information, and the servers get stressed passing that information
out.
The team's scenarios are generated from real-world market and usage research.
They try to make tests match real-world scenarios as closely as possible to better
understand where product improvements are needed, while testing both client
and server scenarios in the Base Scenarios lab.
Contact Alias
ncrum

Bluetooth Lab
The Bluetooth lab tests radios from most radio manufacturers. One side of the
system is a PC running Windows with our Bluetooth stack. It talks to a variety of
devices using real-world scenarios. Real-world scenarios include cell phones,
handheld devices (CE/PPC), printers, access points, HID devices (mouse,
keyboard), and other Bluetooth-capable PCs. Most of the local device interfaces
are USB- or PC Card-based using the H4 (UART) interface. PCI-based devices are
in the works.
Our test matrix exercises the Bluetooth stack extensively, along with the
normal mix of Plug and Play testing, to ensure these devices behave well with
respect to Bluetooth as well as Plug and Play, USB, and power management.
Interoperability with other devices is something we are able to cover
significantly better than most labs because of the variety of devices we test
with.
The Bluetooth team develops all Bluetooth tests that WHQL (Windows Hardware
Quality Labs) uses for qualification.
Contact Alias
ncrum

Embedded Lab
The Windows Embedded lab tests Windows XP Embedded and Microsoft Windows
codename Longhorn Embedded on a wide variety of PCs, as well as embedded
devices such as cash registers, set-top boxes, and Windows-based terminals.
Test Focus

We're primarily concerned with ensuring that Windows XP Embedded and Windows
Longhorn Embedded work on every piece of hardware that is supported by Windows
XP Professional and Windows Longhorn Professional. We test every peripheral we
can get our hands on: everything from audio, video, and SCSI to multiprocessor
systems, and so on.
Systems Needed

Anything new.
Anything odd or out of the ordinary, as long as it is supported by
Windows Longhorn or Windows XP.

Contact Alias
ncrum

Hardware Experience Lab


Test Focus
The Hardware Experience Lab team checks to make sure that the following core
infrastructures work as intended in the design:
device/driver install infrastructure.
device/driver interface/notification infrastructure.
device/driver resiliency infrastructure (focused on installs and loading).
This team also checks to ensure that the various tools supplied (such as
SigVerif, Device Manager, and so on) are reliable enough to provide users with
accurate information. The lab also runs sanity checks on driver packages and
applications to check for compatibility issues.
Contact Alias
ncrum

iSCSI Lab
The iSCSI lab tests interoperability between the Microsoft iSCSI Initiator and
vendor iSCSI targets. It also verifies target compliance with the iSCSI Standard.
Test Focus
The primary tests run to test the iSCSI targets and iSCSI HBAs are:
iSCSI Boot Test
iSCSI Digest Test
iSCSI Chap Test
iSCSI Ping Test
iSCSI Redirection Test
iSCSI Target iSNS Test
Exchange Loadsim (5 Days)

SQL TPCC (5 Days)


Storage Diskload test
FAT File I/O
NTFS File I/O
Mapped File I/O
Storage Data Verification Test
Storage Device Stress
SysCache
ACPI Stress
Tape I/O
Tape Partition
Tape Backup/Restore
Media Changer Stress
Signability Test
RAID Data Integrity Test
SCSI Compliance Test
WMI Test
Device Path Exerciser
Systems Needed
Need to replace two older 4-processor Pentium II 400 systems.
Very interested in any small rack-mountable systems with onboard
GigE or at least one PCI slot.
Also need some smaller form factor systems to be used as clients for
longhaul stress.
Does not have a lot of use for desktop-type systems.
Contact Alias
ncrum

Kernel Lab
Test Focus
Multi-Proc kernel testing
Large memory testing including > 4GB PAE testing.
Win32 base APIs.
Process Management APIs.
Memory Management APIs.
Registry APIs.
Kernel stress.

WMI Event tracing test code.


Contact Alias
ncrum

Mobile Lab
Test Focus
Using a combination of the technology segments of the Mobile Computing Test
Team, this team tests full mobile system integration, including functionality
testing of all laptop ports (USB, IR, serial, parallel, and so on), device
bays, docking stations, CardBus controllers, video, and audio.
The team performs extensive Plug and Play testing of the CardBus controllers
using all device classes of PC Cards, using a combination of manual and
automated tests that include hot plug, stop/start, disable/enable,
install/remove, surprise removal, device functionality, and dynamic resource
rebalancing.
Power management is an integral part of our testing and is integrated into
every aspect of it.
This team consists of several labs, each specializing in a technology area that
is tested.
PC Card Lab: Builds test matrices around CardBus Controller Chipsets.
Laptop Integration Lab: Video and Audio Chipsets with a secondary
focus based on USB, IR, and 1394 chipsets.
This team is in the process of developing several additional test tools for Mobile
Plug and Play, CardBus Wake-on-LAN, and device functionality testing. Below are
some of the existing automated tests that the team currently uses:
Driver Verifier.
PMTE (Power Management Test Engine).
NTSTRESS
WHQL Test Kit for CardBus Controllers
Contact Alias
ncrum

Networking Lab
The Core Networking lab primarily tests network drivers, new versions of NDIS,
and parts of the HCT, such as ndistester. Systems run stress daily, and BVTs
and new operating systems are loaded multiple times per week. On portable
systems, we work hard to get new drivers into the build, primarily networking,
modem, video, and audio. We have instituted a new procedure in our lab: when
writing bugs, we list the asset number of the computer, so we always know what
hardware is hitting what problems at any given moment.
Contact Alias

stevesu

OOBE Lab
In the OOBE lab we simulate the role of the OEM, verifying that this application
("Welcome to the operating system, get registered with Microsoft and the OEM,
get connected to the Internet") can be branded with OEM logos, additional
registration information, and additional hardware tutorials, and that
communication hardware interaction is seamless.
Test Focus
OEM Customization includes:
OEM Branding
OEM Registration Process
OEM Hardware Tutorial
This team thoroughly tests input devices and communications devices. The
team's goal is to ensure that OOBE completes successfully on first boot and
that the consumer can connect to the Internet.
Basic Functionality
Install Windows 2000
Power Management
USB Keyboards/Mouse
OOBE Device Usage
Sound cards
Modems
DSL Connections
Cable Modems
LAN cards
Feature Testing
OPK/Sysprep Interaction
Imaging Software Interaction
Hibernate/Standby Interaction
Global User Info post OOBE
Domain Join Interaction
User accounts creation within OOBE
Globalization Testing
German OOBE testing
MUI plus OOBE testing
Language packs plus OOBE testing

Systems Needed
Different OEM boxes with different images showing how they customized OOBE in
the past and how they want to continue customizing OOBE.
Contact Alias
ncrum

OPK/Setup/Fresh Install Lab


The OPK Test Lab verifies that the OEM preinstall process is as bug-free and
intuitive as possible. This includes testing of sysprep, unattend.xml, setup
(including fresh install), and any imaging solutions the team provides. The OPK
lab attempts to simulate what an OEM would do to preinstall on the factory
floor with many different combinations of hardware, applications, and 3rd party
drivers.
Test Focus
F6 3rd Party Mass Storage Drivers.
Pre-install 3rd party device drivers.
Pre-configure installation settings.
Preinstall different applications.
Verify image application on different HAL types.
Verify different processor types installed correctly.
Verify no Setup/Install timing issues on fastest computers available.
Verify Setup works seamlessly on LBA and Raid Arrays (IDE and SCSI),
including secondary controllers.
Verify USB Boot and Installation on supported hardware.

Systems Wanted
This team is very interested in systems with some of the following configurations:
Some dual processors.
3rd party IDE and SCSI controllers.
Raid Arrays (onboard controller or pci).
Large disks (especially computers with multiple partitions or OEM
partitions, recovery partitions, and so on).
Systems supporting USB boot.
The more complex, the better (restore disks are a plus for getting drivers so we
can preinstall them, and so on).
Contact Alias
ncrum

Performance Lab
The Performance Lab team is responsible for all desktop, laptop, and client
performance on Windows 2000, Windows XP, or later. The team focuses
extensively on system responsiveness, browsing, games, applications, industry
benchmarks (sysmark, webmark, mobile mark, business winstone, content
creation winstone, and others), internal workloads, boot, power management,
hibernation, standby, and so on.
We are also responsible for providing architectural guidance to developers,
architects, and others designing code, drivers, or what have you for the
Windows platform.
As a part of our work, we build tools to assess performance and provide
executive reports detailing performance issues, concerns, and design changes we
need to make going forward. We have a large lab that is used to stress
computers and study performance.
We are a part of a larger group responsible for Server performance as well.
Contact Alias
ronth

Plug and Play/PCI Lab


The Plug and Play/PCI lab is part of the Hardware Platforms Test Team. This lab
tests hardware/chipset configurations, runs a BVT of Plug and Play tests daily on
new builds, runs the hardware compatibility test (HCT) that is applicable to PCI
and Plug and Play on new systems, and runs regular test passes around
milestones with PCI bridge boxes and a large number of add-on cards.
Test Focus
Docking: all scenarios warm, hot, cold with devices and user scenarios.
PCI driver and chipset workarounds.
Kernel Plug and Play: multi-level rebalance, resource arbitration,
device enumeration, and detection.
Server scenarios: NUMA, Hyperthreading, Machine Check Architecture,
64-bit.
Interrupt architecture.
HAL, Loader.
Hot plug scenarios for Dynamic Partitioning.
Systems Needed
Any multi-processor server computers.
64-bit computers.
Any computers with hot plug support of any kind.
Any computers with unique interrupt routing implementations.

Computers with multiple IO APICs.


This lab has a primary focus on the hardware. If the chipset is new, if it is new to
a specific vendor, or if it has interesting hotplug attributes, then we are interested
in seeing it.
Contact Alias
ncrum

RIS Remote Install Service Lab


RIS was introduced in Microsoft Windows 2000 to allow server-based
installation of an operating system onto client computers that do not currently
contain one. Improvements to RIS in the Windows Server 2003 family are
summarized in the following section.
Test Focus
With the release of Windows Server 2003, RIS now supports the following new
capabilities:
Deployment of Microsoft Windows 2000 Professional, Microsoft
Windows 2000 Server, Microsoft Windows 2000 Advanced Server,
Windows XP Professional, and the Windows Server 2003 family
operating systems.
Automation of the CIW using the Autoenter feature.
Enhanced cross-domain functionality.
Increased security by adding a masked double-prompt administrator
password.
Automatic DHCP authorization with Risetup.exe.
Auto-detection of the target system Hardware Abstraction Layer (HAL)
type to allow filtering of images from the CIW.
Support for the Recovery Console and support for Microsoft
Windows Preinstallation Environment.
Support for Microsoft Windows XP 64-Bit Edition Version 2003 and
the 64-bit versions of the Windows Server 2003 family.
Support for the Uniqueness Database in .sif files.
Support for Secure Domain Join.
Support for NTLM version 2 (NTLMv2)
Support for encrypted local administrator password entries.
Contact Alias
ncrum

Static Lab
Test Focus
Optical CD-ROM, CD-R/RW
Tape
Tape Changers
Smartcard readers
MPS (Multi-Port Serial) adapters
Serial
Parallel
Removable drives
Functionality
Install Windows.
Power Management.
Running automated tools in all areas.
ATAPI burn testing on CD-R/RW drives.
Backup utility testing performed on Tape and Changers.
RSM Testing performed on Changer libraries.
Automated loopback testing on serial and MPS adapters.
Removable drive testing performed with RM Disk test tool.
Contact Alias
ncrum

Storage Lab
The Storage Lab thoroughly tests the IDE controller's basic functionality by
running through various configurations and tests. Our goal is to ensure that
Windows 2000 can be installed on the controller under test with full operating
system functionality.
Test Focus
Basic Functionality
Install Windows .NET Framework.
Power Management.
DMA Testing.
Installing and configuring a secondary hard disk drive.
Install the secondary hard disk drive.
Disk Management
Using Disk Management to create and delete NTFS and FAT partitions.
RM Test

Basic Functionality
CD/PD
STape Test
HCTs
Manual Backup/Restore
Raid Testing/Multiple Adapters
Mixed Volumes
Striped Volumes
Spanned Volumes
Contact Alias
ncrum

System Migration Lab (formerly Upgrades)


System Migration (In-place Upgrade)
The System Migration Lab team verifies migration of Windows 2000, Windows XP,
and Microsoft Windows codename Longhorn systems to the same PC
running Windows Longhorn.
Test Focus
Basic Functionality
Supported upgrades for each previously released product and SKUs.
Unsupported upgrades, compatibility and blocking.
User interface of the Setup process.
System Settings
Desktop settings such as schemes and shortcuts.
Registry settings.
Control panel applet migration.
User accounts and settings.
Application Compatibility
Technology to migrate applications.
Technology to migrate user and configuration data.
Driver Compatibility
Down-level driver upgrade and arbitration.
Device/driver compatibility, compliance, and user notification.

System Migration: PC to PC Migration


In this lab we verify migration of Microsoft Windows 98, Microsoft Windows
Millennium Edition (Me), Microsoft Windows NT 4, Windows 2000, Windows XP,
and Windows Longhorn to another system running Windows Longhorn. Scenarios
tested include online (direct connect or network) and offline (intermediate
store).
Test Focus
Basic Functionality
Ability to perform migration for each previously released supported
product and SKUs.
Non-supported migration, compatibility and blocking.
User interface of the Migration Tools.
System Settings
Desktop settings such as schemes and shortcuts.
Registry settings.
Control panel applet migration.
User accounts and settings.
Application Compatibility
Technology to migrate applications.
Technology to migrate user and configuration data.
Driver Compatibility
Downlevel driver upgrade and arbitration.
Device/driver compatibility, compliance, and user notification.
Systems Wanted
Computers need to include currently shipping OEM systems with a preinstalled
operating system to be representative of the type of computer the user will be
migrating or upgrading from. Additionally, computers that meet at least the
recommended Windows Longhorn hardware specification are needed and will
continue to be needed through Longhorn release to represent typical target
systems that users will Migrate/Upgrade to. Both Corporate/Enterprise class and
consumer class computers will be used because this feature set is targeted at
both groups.
Contact Alias
ncrum

Sustained Engineering (WinSE) Lab


The WinSE team is responsible for developing, testing, and releasing HotFixes,
Security Releases, and Service Packs for all previously released operating
systems following the first subsequent Service Pack release (n+1). Currently,
we own Windows NT 4, Windows 2000, and Windows XP. To accomplish this, the
WinSE team has constructed a test team and lab environment closely mirroring
the core team's coverage and structure. Although some components remain with
the Windows core team for Service Pack testing ownership, most are transitioned
to WinSE upon release of SP1 for each operating system.
Our labs include Base Storage Drivers, Kernel and File Systems, Storage
Services, Networking, Application Compatibility, Print and Imaging, Setup and
Installer, Security, Windows Update, ACPI, RAS, TS, and so on: basically, all
component areas covered by the core team. As for our hardware requirements, we
can always use additional test systems, and we are often able to make use of
systems and devices that may not be as current as those required by the core
team.
Contact Alias
JohBro

USB Lab
Different computers can have slightly different implementations of the same USB
chipset and sometimes different implementations of the same BIOS. We need to
test on a wide variety of systems from all OEMs, as well as new chipsets.
Responsible for testing the core USB bus and the HID stack for all
Windows operating system releases.
Extensively test interoperability of devices on a daily basis. For example, a
USB NetMeeting scenario would involve multiple levels of hubs, a
keyboard, mouse, speakers, microphone, modem/NIC, storage device or
printer, and cameras.
Test through multiple depths of hubs on all known types of host
controllers.
Disabler/remover tests (all devices, including hosts).
Bulk/ISO loopback tests (hosts/hubs).
Power management tests (all systems, all ACPI sleep states).
WHQL/USB-IF test suites.
Contact Alias
ncrum

VCT (Video) Lab


This lab performs display driver testing for current operating systems and
display-related hardware. We own the testing of this area for any Microsoft
operating system in development.
We also own the testing of various other projects that relate to display driver
testing:

Focus Areas for Display Drivers
Power Management
D3D/DDRAW/OpenGL
GDI
Device Coverage
Stress/Stability
DCT/HCT Conformance
Multidisplay/Dualview
Laptop LID/Docking Scenarios
Setup/Plug and Play and Migration/SetupVars

Additional Team Ownership
Core Display Components (Videoprt, Watchdog)
DCT/HCT Display Testing
AGP Filters
Monitors
Display Applet
Display Code Coverage
Display Performance
WHQL Display Help Relations
Team Web Development

Systems Wanted
For testing purposes, this team requests any and all newer computers with a
wide variety of video adapters. Here is a small list of current adapters
supported in this team's testing of x86 drivers:
ATI Radeon family, Desktops and Laptops
Radeon 7000
Radeon 7500
Radeon 8500
Radeon 9000
Radeon 9100
Radeon 9500
Radeon 9700
Radeon 9800
Intel i830-i845, Almador family, Desktops and Laptops
Matrox Parhelia, desktop only
NVIDIA, NV5-NV35, Desktops and Laptops
Nvidia TNT2
Nvidia Vanta
Nvidia GeForce 256
Nvidia GeForce
Nvidia Quadro
Nvidia GeForce2
Nvidia Quadro2
Nvidia GeForce3

Nvidia GeForce4
Nvidia Quadro4
Nvidia GeForce FX
S3, Super Savage laptops
SIS Xabre family, desktop only
Trident CyberBlade Windows XP laptops
Additional Systems Wanted
Laptops, tablet PCs, and desktop systems with integrated video adapters.
OEM systems with the shipping operating systems that are from that
vendor to further the team's real world testing.
As a huge favor, the team also requests laptops with serial COM ports (for
debugging).
Contact Alias
VCTLeads

WDEG/NCD-AVQ Lab
Test Focus
In the WDEG/NCD-AVQ lab we test that all function instances for hardware and
software on a system are available properly according to the Function Discovery
Functional and Design Specification. We test the following:
Verify all function instances enumerate properly and in the correct
category.
Verify all function instances can be properly activated.
Verify function instances report properly in a terminal services session.
Verify function instances work correctly using either managed or
unmanaged code.
Verify on device/software installation/removal that proper notification of
changes in available function instances occurs.
Various end-to-end and integration scenarios.
Systems Wanted
Any system with PCMCIA, USB, 1394, Bluetooth, or other means to easily connect
and disconnect a device from the system.
Also interested in any laptops that have docking stations or other
hot-changeable docking units.

AVQ Lab

In the AVQ lab we test that the AVQ APIs are working properly. Those APIs
include CPU reserves, memory reserves, and disk I/O reserves.
Systems Wanted
Any OEM custom hardware.
Single processors, dual processors, quad processors, hyper threading processors.
x86, Itanium-based, AMD64.

HiPPOP Lab
In the HiPPOP lab we test the HiPPOP API (a DLL in Windows Longhorn), which is
a lightweight solution for remote access to COM objects over a given transport
(RDP or TCP). Mostly the team just runs a series of tests using Remote Desktop
Connection between two computers.
Systems wanted
Any computer is useful. Single proc, Dual proc, Quad proc, Itanium-based,
AMD64.
Contact Alias
ncrum
