Professional Documents
Culture Documents
User Manual
WTT 2.0 RTM
Disclaimer
Information in this document, including URL and other Internet Web site
references, is subject to change without notice. Unless otherwise noted, the
example companies, organizations, products, domain names, e-mail addresses,
logos, people, places, and events depicted herein are fictitious, and no
association with any real company, organization, product, domain name, e-mail
address, logo, person, place, or event is intended or should be inferred.
Complying with all applicable copyright laws is the responsibility of the user.
Without limiting the rights under copyright, no part of this document may be
reproduced, stored in or introduced into a retrieval system, or transmitted in any
form or by any means (electronic, mechanical, photocopying, recording, or
otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other
intellectual property rights covering subject matter in this document. Except as
expressly provided in any written license agreement from Microsoft, the
furnishing of this document does not give you any license to these patents,
trademarks, copyrights, or other intellectual property.
© 2004 Microsoft Corporation. All rights reserved.
Microsoft, MSDN, MS-DOS, Visual C#, Visual C++, Win32, Windows, Windows NT,
and Windows Server are either registered trademarks or trademarks of Microsoft
Corporation in the United States and/or other countries.
The names of actual companies and products mentioned herein may be the
trademarks of their respective owners.
Table of Contents
Chapter 1: Introduction...................................................................................
Windows Test Technologies Overview................................................................
WTT Features.....................................................................................
Windows Test Technologies Architecture............................................................
Enterprise Detail.................................................................................
Test Resources Detail............................................................................
Controllers...............................................................................................
Getting Started - Process Summary..................................................................
Chapter 2: WTT Setup.....................................................................................
WTT Setup Overview...................................................................................
Controller Setup.........................................................................................
System Requirements............................................................................
Hardware Requirements.........................................................................
User Account Requirements.....................................................................
Database Installation............................................................................
Microsoft .NET Framework Installation.......................................................
Installing WTT Controller.......................................................................
Client Setup..............................................................................................
Software Requirements..........................................................................
User Account Requirements.....................................................................
Installing WTT Client.............................................................................
MSXML Installation...............................................................................
WTT Studio Setup.......................................................................................
Software Requirements..........................................................................
User Account Requirements.....................................................................
.NET Framework Installation...................................................................
Installing WTT Studio............................................................................
Chapter 3: Asset Tracking.................................................................................
Asset Terminology.......................................................................................
Asset Pools...............................................................................................
Getting Started in Asset Management................................................................
Asset Tracking Best Practice Recommendations....................................................
Asset Pool Management Procedures..................................................................
Asset Tracking Procedures.............................................................................
Registering Assets................................................................................
Viewing and Editing Computer Details........................................................
Searching for Assets..............................................................................
Transferring an Asset.............................................................................
Asset Loans........................................................................................
Vendor Management.............................................................................
Monitoring Jobs..........................................................................................
Job Monitor Toolbar Options....................................................................
Machine Pool Short-Cut Commands..........................................................
Machine List View Short-Cut Commands.....................................................
Job Execution Status View Short-Cut Commands..........................................
Task Execution Status View Short-Cut Commands.........................................
Querying results in Job Monitor...............................................................
Quick Schedule of a job on computer(s)....................................................
Using the Result Explorer.............................................................................
Result Explorer Toolbar Options..............................................................
Result Explorer Short-Cut Commands........................................................
Task Results Short-Cut Commands............................................................
Viewing Job Results.............................................................................
Viewing Job Errors..............................................................................
Working with the Results Log.................................................................
Adding Manual Job Results to the Results Log..............................................
Changing the Column Display and Sort on the Job Results Form........................
Querying Results in Result Explorer..........................................................
Editing Results in Result Explorer............................................................
Using Result Collection Explorer....................................................................
Result Collection Toolbar Buttons............................................................
Result Collection Short-Cut Commands......................................................
Querying a Result Collection..................................................................
Using Result Rollup....................................................................................
Result Rollup Toolbar Options.................................................................
Result Rollup Short-Cut Commands..........................................................
Querying Results in Result Rollup.............................................................
Chapter 6: WTT Administration........................................................................
Managing Enterprises..................................................................................
User Administration...................................................................................
Dimensions..............................................................................................
Adding and Editing Dimensions...............................................................
Machine Configuration Query Dimensions...................................................
Create MCU Policy for an Asset Pool.........................................................
Verify MCU Policy...............................................................................
Global Parameters.....................................................................................
Global Mixes............................................................................................
Working With a Global Simple Mix............................................................
Setting Constraints for a Simple Mix Context...............................................
Setting Parameters for a Simple Mix Context...............................................
Setting Attributes for a Simple Mix Context................................................
Working with a Global Advanced Mix.........................................................
Setting Dimensions and Parameters for a Global Advanced Mix.........................
Chapter 1: Introduction
This chapter gives an overview of Windows Test Technologies (WTT) and some of
its important features. In addition, it lists the major steps to follow for an initial
end-to-end experience, from setting up WTT to reviewing the results of a test
run.
WTT Features
WTT Automation Datastore provides data storage for test cases. These test
cases can be grouped and scheduled when necessary to complete a test pass.
Utilizing the datastore, users are able to extend test cases with automation
information used when running the job (or test case).
WTT Controller, which continuously runs a set of services and applications to
support the execution and logging of jobs.
Note: A Controller can host both an Automation Datastore and a WTT
Controller on the same computer.
Asset Tracking, which helps track hardware for testing, as well as supporting
the sophisticated test automation and reporting technologies of WTT. Asset
Tracking provides users with dynamic asset pool management, allowing them
to efficiently allocate computers, devices, and other peripherals, as well as
configure complex test scenarios.
Jobs, which can be easily created, queried, sorted, grouped, and selected for
execution. Jobs are a tool for automating test cases and are stored in feature
nodes in a tree-view that allows for easy organization.
Runtime parameter values, which can be set by the user, allowing variable
values whenever a job is scheduled to run. Runtime parameters also extend the
usefulness of test cases and can be reused for extra flexibility and consistency
among teams.
WTT Sysparse, which gathers detailed information about client computers
used for testing, storing the information in the Asset Tracking database. WTT
users can use this customizable information to set up jobs using specified
dimensions which are used to find appropriate computers for test cases. This
information also allows users to efficiently organize test cases on large
numbers of diverse computers and devices.
Library jobs, which can be called up and referenced from within the context of
another job. Library jobs allow test cases to be shared and reused throughout
WTT.
Client Detail
Enterprise Detail
Controllers
Each WTT enterprise must contain at least one server known as the Controller.
The Controller is comprised of an Automation Datastore and one or more WTT
Controllers. The Automation Datastore contains information about WTT client
computers as well as current and past jobs. The WTT Controller hosts the Job
Delivery Agent and WTT Execution Agent, along with other services and
applications fundamental to the operation of WTT.
Additional separate controllers can be added to a WTT enterprise as needed
for project growth and load balancing. This distribution of controllers allows
WTT to provide support for labs where the computers do not have a direct
connection to the corporate network. It also allows individual teams to assess
whether they need to host their own WTT controller rather than use WTT-hosted
servers.
Controllers are an important entity to the end user in WTT. Although users can
access computers across controllers, basic logical groupings of individual client
computers (known as asset pools or machine pools) are specific to individual
controllers to allow for better asset control. This means that certain functions
that are based on asset pools cannot function across multiple controllers. WTT
Job scheduler, for example, schedules on the asset pool level, and therefore
cannot be used to schedule jobs across more than one controller.
2. Monitor jobs.
3. View results.
4. Edit results, if appropriate.
5. View the failure logs and job reports.
Controller Setup
Before installing the WTT Controller software, the server must have either
Microsoft SQL Server 2000 or Microsoft Database Engine (MSDE) 2000
installed.
Note: In an enterprise environment, the setup of a WTT Controller will usually
be completed by enterprise administrators.
System Requirements
WTT Controller is supported on computers running the following software:
x86 version of Microsoft Windows XP SP1, Microsoft Windows Server
2003, or Microsoft Windows codename Longhorn.
Microsoft SQL Server 2000 or MSDE 2000 Service Pack 3a (SP3a).
Microsoft .NET Framework version 1.1. You should install this before you
begin the WTT server setup.
Hardware Requirements
WTT Controller setup is currently only supported on x86 architectures. Detailed
hardware requirements such as number and speed of processors, video and
network cards and hard disk capacities have not been determined as of this
release. It is recommended, however, that a server be selected that is well within
the hardware specifications identified for Windows Server 2003.
privileges on the database computer and the Controller User must have at least
User rights on that computer.
Database Installation
Before installing the WTT Controller software package, either Microsoft SQL
Server 2000 or MSDE 2000 SP3a must be installed.
To install MSDE
1. From the computer where you plan to install the Controller, download and
run the downloadable Web package file from the Internet:
http://download.microsoft.com/download/8/7/5/875e38ea-e582-4ee2-9485-b459cd9c0082/sql2kdesksp3.exe
2. In Windows, click Start and Run, and then at the prompt, type the path
where you saved the MSDE setup followed by:
\msde\setup sapwd=<databasepassword>
disablenetworkprotocols=0
where
<databasepassword> is your database administrator password.
3. To start the MSDE installation, press Enter and wait for it to complete.
To make sure that the SQL Server 2000 services have started, confirm
that a green arrow is visible on the SQL Server 2000 icon in
the Windows System Tray on the Controller's desktop.
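For illustration only, if the downloaded package had been saved and extracted to
C:\sql2ksp3 (a hypothetical path) and the database administrator password were
Str0ngPwd1 (a placeholder), the text typed at the Run prompt would be:
C:\sql2ksp3\msde\setup sapwd=Str0ngPwd1 disablenetworkprotocols=0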
3.
4.
5.
6.
7.
8.
9. Click Next.
10. If you wish to change the default log file share information,
type your own log file share settings. Otherwise, accept the
defaults. Click Next.
11. Click Next, and then click Next again.
12. In the User Name, Password, and Domain boxes, type the
user credentials for the Controller User, and then click Next.
13. Click Install.
14. When the installation is complete, click Finish.
All WTT Client operations must be stopped in order to remove the WTT Controller.
1. In Control Panel, click Add or Remove Programs.
Note: The Installer will remove the WTT database, provided that this
database was created upon installation.
Client Setup
The WTT Client installation package installs the client software on test computers
that are used to execute WTT jobs. Because this software ties the client systems
to the Controller and supplies the Automation Datastore with vital information,
each test system used to execute jobs under WTT 2.0 must have this software.
With it, each client can be uniquely identified and analyzed by Sysparse for WTT
and included in an asset pool for testing. The WTT Client can be installed alone or
on a computer with WTT Studio installed. WTT Controller and the WTT Client,
however, cannot be installed on the same computer.
Note: All examples in this section assume that you accept the default settings
for all installation steps.
Software Requirements
WTT Client is supported on the following operating systems: Microsoft Windows
2000 Professional SP4, Windows XP, Windows Server 2003, or Microsoft Windows
codename Longhorn.
WTT Client is supported on the following architectures: x86, Itanium-based, or
AMD64.
Note: Windows 2000 SP4 clients require that MSXML support files be installed
before WTT Runtime Client can be installed and run. For instructions for
configuring your Windows 2000 SP4 clients, see MSXML Installation.
appropriate. If you do not have a kernel debugger attached, clear the Kernel
Debugger Attached option. Discussion of the kernel debugger settings is
beyond the scope of this document.
Note: Directory names used for WTT installation must not contain spaces.
Any directory name with spaces will cause the installation to fail.
1. Remove any previous versions of WTT Client and WTT Studio that are on
the client computer.
Where:
<server> is the name of a previously installed WTT Controller.
4. Click OK.
5. Click Next.
6. Read and accept the End User License Agreement, and
then click Next to continue.
7. Click Next, and then click Next again.
8. If ICF is enabled on the target computer, setup will display a
dialog advising you that a port must be opened in the firewall
to allow WTT Client to function. To continue installation:
Select Yes to continue.
Click Next.
2. Clear the Kernel Debugger Attached check box and then
click Next.
3. Click Install.
4. Click Finish to exit the Installer.
5. [Optional step] To install Autotriage only, run:
WTTCmd.exe /addsymboluser /user:<username>
/domain:<domain> /password:<password>
Where:
<username>
a. In the Run dialog box, type cmd, and then click OK.
2. At the command prompt, run the following command:
\\controller\<install share>\Debugger\WTTKDSetup.cmd
Note: For more information on this script run:
WTTKDSetup.cmd /?
from the install share. The script requires network access, so you need
to run the script under an account with network access.
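For example, if the Controller were named TESTCTRL01 and the install share
were named WTTInstall (both hypothetical names), the command would be:
\\TESTCTRL01\WTTInstall\Debugger\WTTKDSetup.cmd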
Note: If the WTT Client is already listed, simply verify that the settings
are correct (see below), and verify that the item is enabled (selected).
5. Click OK.
6.
7. Verify that the WTT Client entry is selected, and then click
OK.
Note: WTT Client setup makes a best-effort attempt to ensure that ICF will
not block its operation. If ICF is configured as off, WTT setup will
succeed. If ICF is configured as on (the recommended setting), setup adds the
port number (TCP 1778) to the list of enabled ports. If ICF is configured as
on with no exceptions allowed, WTT installation will fail.
If setup fails to detect the configuration of ICF, it will assume that ICF is
not configured and will display a pop-up (only in attended setup) notifying the
user to that effect.
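If the port must be opened by hand, and the client runs a Windows version that
includes the netsh firewall helper (Windows XP SP2 or Windows Server 2003 SP1
and later; this is an assumption, not part of the WTT documentation), a command
along the following lines can be used:
netsh firewall add portopening TCP 1778 WTTClient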
MSXML Installation
Before installing WTT Client on a computer running the Microsoft Windows 2000
SP4 operating system, it is necessary to install MSXML.
Note: This step is necessary only if running Microsoft Windows 2000.
To install MSXML
1. From the computer where you plan to install the WTT Client, install the
Microsoft Windows Installer 2.0 from:
http://www.microsoft.com/downloads/details.aspx?
displaylang=en&FamilyID=4B6140F9-2D36-4977-8FA1-6F8A0F5DCA8F.
2. Install the Microsoft XML (MSXML) Parser 3.0 Service Pack 4 (SP4) from:
http://www.microsoft.com/downloads/details.aspx?FamilyId=C0F86022-2D4C-4162-8FB8-66BFC12F32B0&displaylang=en.
3. Install the MSXML 4.2 SP2 (Microsoft XML Core Services) from:
http://www.microsoft.com/downloads/details.aspx?FamilyID=3144b72b-b4f2-46da-b4b6-c5d7485f2b42&DisplayLang=en.
Software Requirements
To install the WTT Studio, the target computer must be running the following
software.
x86 versions of Windows XP, Windows Server 2003, or Microsoft Windows
codename Longhorn.
Microsoft .NET Framework version 1.1.
Note: WTT Studio setup is currently only supported on x86 architectures.
Asset Terminology
Asset
The database used by WTT to store client configuration and test case
information.
Child Asset Pool
An asset pool contained within another pool. Several child asset pools can be
created as part of a parent asset pool in the asset pool hierarchy.
Controller
A server that is configured to host the WTT Job Delivery Agent and WTT
Execution Agent, along with other services and applications integral to the
operation of WTT.
Current Owner
A user who currently has a given asset in his or her possession. This need not
be the same person as the Permanent Owner.
Default Pool
The asset pool in which computers are initially placed when registered with a
WTT Controller. From the Default Pool, they may be moved to specific asset
pools for better asset or test management.
Permanent Owner
A user who functions as the Current Owner of an asset while borrowing it for a
limited time period for testing periods from its Current Owner.
Target Owner
A device that has an asset tag and/or serial number, and a defined Device
Label. It is almost always an attached device, such as a printer or a scanner,
and is inventoried separately, rather than an associated device, which is not.
Transfer
The method for changing ownership of an asset from one permanent owner to
another.
Vendor
Asset Pools
An asset is a computer or a device (component or peripheral) suitable for test
cases. These are the client computers in the WTT test environment.
Computers and certain devices can be grouped into logical units by the WTT user.
These groups are called asset pools. Asset pools are displayed in a tree-view and
can be organized hierarchically. This allows for structured organization of large
numbers of computers as well as more flexible targeting for deployment of Jobs.
Properties of an asset pool include its name, its Job Delivery Agent computer, and
permissions. Users always have permission to browse computers, but permissions
to schedule or execute jobs on an asset pool are limited and can be
controlled by the asset pool owner.
Asset pools can be scheduled as a unit. When this option is selected, the
scheduler treats all computers in the pool as a single unit so that if any computer
or device from the pool is reserved by the scheduler, then all computers and
devices in the pool are reserved. This is particularly useful when the computers or
devices have some physical connection that ties them together. An example of
this is MSCS Clustering set up with shared storage. In this case it would be
undesirable for a job deployment to split across multiple clusters when the
intention is to execute on all nodes in the same cluster.
My Search
The My Search tab displays all asset pools supported by the selected controller.
From My Search, you can register new assets, create asset pools, move assets
from one pool to another, place dimensions on the asset pools or modify security
permissions.
Global Search
The Global Search tab provides a location for you to search through the
database containing all computers and devices registered as assets in WTT.
My Actions
The My Actions tab allows you full control over all loan and transfer actions
relating to your assets, including current and pending asset loans, approvals,
returned loans, and ownership transfers.
4. In the Name box, type a name for the new asset pool.
Note: The asset pool name must be unique within the selected
controller.
5. Select a Controller from the Job Delivery Agent drop-down list.
Note: If you have multiple options available and no job delivery agent is
selected, jobs scheduled for this asset pool will not be delivered.
6. If you wish the scheduler to treat all assets in the pool as a single unit,
select the Schedule as unit option.
Note: If this option is selected, all assets within the pool will be grouped
as a single testing unit for test selection.
7. Click OK.
Note: Deleting a parent asset pool will also delete all child asset pools.
All computers and devices within those asset pools will then be moved to
the Default Pool.
Users cannot delete the $, Default Pool or System asset pools.
Registering Assets
4. On the Computer List tab, right-click the computer to which you wish to
attach the device, and then click Associate Device.
Note: The computer to which the device is to be associated must
currently have a computer status of Ready.
5. Click the device to be associated.
Note: The device to be attached must have already been registered in
order to be associated to an already registered computer.
6. Click Associate Device.
7. Click OK.
Note: Because associated devices are intended to be permanently
connected to a computer, WTT allows devices to be disassociated from the
computer only if the device is externally connected to that computer. If a
device is intended for only a temporary connection to a computer, it should
be "attached," rather than "associated."
3. On the My Search tab, click the target asset pool, and then click the
Refresh button.
4. On the Computer List tab, right-click the computer to which you wish to
attach the device, and click Attach Device.
Note: The computer to which the device is to be attached must
currently have a computer status of Ready.
5. Click the device to be attached.
Note: The device to be attached must have already been registered in
order to be attached to an already registered computer.
6. Click Attach Device.
7. Click OK.
4. On the Computer List tab, right-click the computer to which the device is
attached, and then click Edit Computer Details.
5. Click View Connected Devices.
6. Expand the tree view of the target computer and navigate to the
connected device in question.
7. Right-click the selected device and click Properties.
8. Click Advanced.
9. Click Unattach Device or Unassociate Device if available.
Note: If these options are not available, the device is permanently
connected to the computer and may not be removed.
10. Click Unattach Device or Unassociate Device.
11. Click OK.
3. On the My Search tab, click the asset pool containing the desired
computer, and then click the Refresh button to display the computers in
that pool.
4. On the Computer List tab, right-click the desired computer, and then click
View Computer Details.
5. Click OK to close the dialog box or click View Connected Devices to see
device details.
6. To see individual device details, expand the computer folder in the
Connected Devices dialog box.
7. Right-click the specific device you wish to view, and then click Properties.
8. Click the Close button on Device Properties, and then click the Close
button on Connected Devices.
9. Click OK to close the dialog box.
3. On the My Search tab, click the asset pool containing the desired
computer, and then click the Refresh button to display the computers in
that pool.
4. On the Computer List tab, right-click the desired computer, and then click
Edit Computer Details.
5. Edit appropriate computer details as necessary.
6. Click Save, and then click OK to save your changes.
button.
Quick Search will only search for assets that belong to the logged-in user.
Note: Query Builder will not limit searches to the assets of the logged-in
user unless the DSUserAlias parameter has been specified in the search
criteria.
Search results will be displayed on the Computer List tab or the
Device List tab depending upon the type of asset being sought.
Transferring an Asset
Because of the nature of enterprise testing in a large company, it is often
necessary to transfer ownership of one or more assets from one tester or test
group to another. With the WTT Asset Transfer Wizard, this is easily done, with
streamlined means allowing you to transfer ownership of an asset to
someone else or to request ownership of an asset for yourself.
Asset Loans
Exchanging assets is a common and often necessary practice while conducting
product testing. To facilitate this, WTT allows for easy loan of assets from one
current owner to another (the permanent owner remains the same, however).
Notification of loan approval or rejection is automatically sent by e-mail.
The following basic rules are important to keep in mind when requesting an asset
loan:
A borrower may place a loan request for an asset only if the borrower is
not the current owner of the asset.
Multiple users may place loan requests for a single asset at the same
time. The current owner will choose from among the requests.
A single user may place only one request for a given asset at a
time.
The temporary owner of the asset will be alerted to the acceptance of the
return, and the loan entry will be removed from the Asset Tracking History
table.
Vendor Management
In order to allow for testing on the widest range of OEM products, it is frequently
necessary to add to or modify Vendors or Vendor products. This is done through
the WTT Vendor Management tools.
The following tasks can be performed in Vendor Management.
To add a vendor
1. On the Asset menu, click Vendor Management.
2. Right-click the Vendor list, and then click Add Vendor.
3. In the Vendor Name box, type the name of the new vendor.
4. In the Description box, type a brief description of the vendor's service or
material.
5. Select the Flag as Popular check box to make the newly added vendor
appear as a popular vendor.
Note: New vendors may only be added by auditors.
6. Click Save, and then click No.
3. On the Vendor Divisions tab, right-click the Vendor Division list, and
then click Add Vendor Division.
4. In the Vendor Division 1 group box, type the new division name in the
Vendor Division Name box.
5. Type a brief description of the division function in the Description box.
6. Click Save, and then click No.
Four separate Vendor Divisions may be added at once.
Note: If the Retire Series check box is selected, the series will no
longer appear in the Series List.
6. Click Save, and then click OK.
To audit vendors
1. On the Asset menu, click Vendor Management.
2. On the Vendor List tab, right-click a vendor name, and then click Audit
Vendors.
3. Select a vendor check box in the Duplicate Vendors box.
4. If the selected vendor is a valid entry, click Correct Entry and the vendor
will be added to the Existing Vendors box.
5. If the selected vendor is not valid, click Delete Entry and the entry will be
deleted from the list.
6. If the selected vendor is a duplicate of an existing vendor entry, then click
the correct vendor from the Existing Vendor list, select the duplicate
entry check box in the Duplicate Vendors box, and then click Duplicate
Entry. The duplicate entry will be removed from the Duplicate Vendors
box.
7. Click No, and then click Cancel.
check box in the Duplicate Models box, and then click Duplicate Entry.
The duplicate entry will be removed from the Duplicate Models box.
The Duplicate Model entry will be replaced with the selected Existing
Model entry.
8. Click No, and then click Cancel.
Chapter 4: Jobs
This chapter provides information about working with jobs. Within Windows Test
Technologies (WTT), jobs form the primary action being performed, and consist of
the individual tasks and attributes forming a test sequence. Jobs may be a single
test or a group of tests, and can be limited to a single computer on one controller,
or can be distributed across multiple computers and controllers. A thorough
understanding of jobs is therefore essential to utilize WTT effectively as a test
framework.
Jobs
A job is a means of automating test cases and is a collection of tasks and
attributes forming a testing sequence. It can be a single test or a group of tests,
and can include tasks such as copying test files, setting test shares, running the
test, and result reporting. Essentially, a job is the amalgamation of the following
information needed to complete tests:
Runtime parameters.
Logical Machine Sets, which are sets of computer requirements and
constraints that you define.
Job Roles
Job roles provide information to WTT about how the job will be used. Different
roles place different restrictions on jobs.
Automated Job
An automated job is one where the individual steps required to execute
the test case are automated so as to require little hands-on action by the
test engineer during actual execution.
Library Job
A Library job is one that can be referenced from within the tasks of
another job. A library job is like a normal job; however a library job may
only be used by another job and therefore must not have a defined LMS.
Additionally, it cannot be scheduled directly. (Deprecated term: Sub job.)
A library job allows reusability of test cases throughout WTT. However,
it can only use resources that were given previously to its parent job; thus,
a library job can only have access to the parameters and LMSs that its
parent job has. A library job also cannot contain references to additional
library jobs; rather, it is limited to one embedded layer.
Note: Library jobs are not designed to be executed by themselves, but
are rather hosted within another job.
Manual Job
A manual job is one where the individual steps required to execute the
test case are handled by the test engineer in a hands-on fashion.
Config Job
A Config job is one that performs a specific setup activity, such as a smart
installation or smart cleanup, in order to provide the asset configuration
needed for the execution of another job.
Tasks
A task is the smallest executable set of operations that a test engineer normally
defines for a job. Tasks specify what the test will do, as well as the action to take
if the job fails. All tasks have a number of characteristics in common:
Each task is assigned to run on one or more logical machine sets. If an
LMS contains more than one computer, then the task is normally
duplicated and run on all computers mapped to the LMS.
Copy File and EXE tasks are associated with their own run context,
including domain, user name, and password, as well as the running
directory.
Each task can be selected to include its result as part of the job's result
statistics.
Note: Copy File and Copy Results tasks are exceptions to this rule
and their results are not included here.
Task Run Phase
Tasks may be set to run within a job during one (or more) of several distinct
phases of the execution:
Setup Specific tasks for initial job setup are executed.
The run phase of tasks is set when adding tasks to a new or existing
job.
Task Types
The specific type of task determines what type of actions are performed if the
task fails. A task can be one of the following types:
Executable Task A command line to any executable.
Copy File Task A method to mass-copy files from a remote location to
the computer under test.
Copy Results Task A method to mass-copy results from a local test
computer to a central logging location. Results are copied to a dynamically
generated destination directory or subfolder.
Run Job A task that runs a library job. The library job can be located
anywhere within the Feature tree; however, the library job that runs within
this task will only be able to access resources (such as global parameters)
that the calling run job can access.
Task Dependencies
Task dependencies define the relationship (execution order) between tasks across
individual computers or LMSs within a job. They are the basis for creating
complex client-server test scenarios where one application might depend on a
number of actions on different computers before it can begin to execute.
Task Dependencies Types
Types of Task Dependencies include:
Parallel All tasks within the job are executed simultaneously.
Sequential Tasks within the job are executed serially, in the task list
order set prior to execution.
Custom Tasks within the job are executed according to behavior that the
test engineer sets, including:
o A task is not executed until all of the tasks on which it depends have
been executed on all target computers.
o A task is not executed until the task on which it depends has been
executed on the same computer.
o A task is not executed until the task on which it depends has been
executed on the previous computer.
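As a hypothetical illustration, a client-server job might define a task named
StartServer on an LMS named Server and a task named RunClientTest on an LMS
named Client. With a custom dependency from RunClientTest to StartServer,
RunClientTest is not executed on any client computer until StartServer has been
executed on all computers mapped to the Server LMS.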
Parameters
Runtime parameters behave like environment variables, but are not restricted to
a specific computer and are not allowed within constraints. Parameters function
as placeholders in the job definition. They allow each task to accept user data
that can be defined either locally or globally as a job is created.
Note: Nesting parameters is not permitted within WTT.
Types of parameters
There are two types of parameters:
Local A parameter created and used for a specific job. Local parameters
are created as a job is being created.
Global A parameter created for a controller that can be used for any job
on that controller. Global parameters are created by clicking Parameter
on the Admin menu in WTT Studio.
Default parameters
WTT provides a number of default parameters, based on the characteristics of the
specific test computer or job being used. These include the following set of
default parameters:
Computer Config Parameters Computer properties such as operating
system and language are available as parameters. For example, WTT\OS
and WTT\Language.
Run-time Parameters Run-time properties of individual specific jobs are
populated as parameters, including:
WTTJobName The Name (TCM name) of the job that this task is part of.
WTTJobGuid The result GUID that this task is part of.
WTTLMSName The LMS that this task is assigned to.
WTTTargetMachineName The physical computer that this task is running
on. You can also get this from the [machinename] environment variable.
WTTRunGuid GUID of the Run that this task is part of.
WTTRunWorkingDir Default Working directory of the tasks.
<LMSName> This is mapped to a comma separated list of physical
computers that this LMS is mapped to.
WTTFullName Trace Name or fully qualified name of the job including the
feature path.
WTTControllerName Name of Controller/PD computer.
WTTDbMachineName Name of SQL Identity Server computer.
WTTDbName Name of logical datastore.
WTTMachinePoolName Name of machine pool being scheduled.
WTTTaskGuid GUID for the task being run.
WTTCopyLogsDest Location where test logs get copied using CopyResults
tasks.
Dereferencing parameters
Parameters can be dereferenced in a task command line by using [ ] brackets.
For example, if Path is a parameter defined in a job, then dereferencing it
within a task command line requires it to be written as [Path].
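For instance, assuming a local parameter named Path and the default run-time
parameter WTTRunWorkingDir (the file name below is only a placeholder), an
executable task command line might read:
xcopy [Path]\testcase.exe [WTTRunWorkingDir] /y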
Dimensions
Dimensions are customized pieces of information about a client computer that the
client automatically reports to the WTT database whenever it restarts. For
example, one dimension might be the processor manufacturer of the test
computer (ARM, Intel, or another vendor). WTT creates an initial set of default
dimensions.
Constraints
A constraint is a set of conditions under which a job can be executed. These
conditions are used to describe a class of computers which then allow lab
managers to target one set of tests against multiple sets of computer classes
without the need to reschedule the tests.
Contexts
A context is a set of constraints (logical descriptions of a computer class) that can
be applied to individual jobs and schedules. When a test or a set of tests is
designed, contexts can be applied in order to specify the test conditions under
which the job will be executed.
Common Context
In WTT, the user can specify a set of constraints for a job which is referred to as a
common context. If a job has an LMS then some constraints can be defined for
the LMS also. These are also referred to as common context, as they are common
to both the LMS and the job as a whole.
With common contexts, two types of LMSs are affected:
Primary LMSs - LMSs against which the results will be reported. These
LMSs must use the contexts of the job unchanged.
Other LMSs - LMSs not involved in reporting can use other contexts.
While these logical machine sets can inherit the common contexts of the
job, they can also define additional contexts. These new contexts are
appended to the inherited common contexts from the job and must not
conflict with them. If the LMS does not inherit from the job's common
contexts, then the new contexts are the only contexts defined for that
LMS. These contexts can be made up of different and/or conflicting
contexts to those of the job itself.
As every job has a common context, at run time, we can add contexts to the
schedule. These contexts are referred to as common contexts of the schedule.
Schedule common contexts apply to a schedule and its associated mix, and also
to all of the jobs in the schedule. The schedule common contexts are compared
with the job common contexts, and if they conflict, then that job cannot be run
and must be removed.
Mixes
A mix is a set of one or more contexts, just as a context is a set of one or more
constraints that are applied to jobs and schedules. When you design a test or a
set of tests, you can apply several sets of contexts by applying a mix that
contains these contexts. Scheduling a job creates one instance of the job for each
valid context within a mix.
For example, a job might be designed to run on a mix of test computers including:
x86-based, Microsoft Windows XP Professional in the German
language.
x86-based, Microsoft Windows Server 2003 in the English language.
Itanium-based, Windows Server 2003 in the English language.
In developing a mix to fit a specific job or set of jobs, the constraints and
contexts used can be global (applying to all tests, computers, or schedules) or
customized (applying to the specific job or schedule at hand). Default
constraints and contexts can be used or custom sets may be created.
Global mixes
A global mix can be of two types. A simple mix is a straightforward collection of
contexts with each context containing a set of constraints, all of which are applied
evenly. An Advanced mix, however, is a complex mix with a pre-defined set of
rules that apply to the contexts.
Custom mixes
A custom mix is a user-defined collection of contexts based on specified
combinations of dimensions and constraints and is applied to an individual job or
schedule at hand.
constraints, but must not be set to inherit constraints from the job. For additional
information, see Appendix C: Best Practices.
Several important aspects about the Job Explorer trees should be noted:
By default, the Query Builder is hidden. To display the Query Builder,
click the Show Query Builder button.
(accessible by clicking Job Explorer on the Explorers menu). Jobs are also
scheduled to be run there and may be monitored from Job Explorer or Job
Monitor (accessible by clicking Job Monitor on the Explorers menu).
Creation of a job can be very simple, but the options available to test engineers in
WTT can also make it quite complex. This section is designed to help users edit or
fine-tune jobs in order to allow for the complexity needed to make tests
accurately fit given situations.
11. Set any parameters, constraints, and so on, that you wish to use for this
job on the tabs below.
12. Click the Save button.
2. On the Local tab, type a name for the new parameter in the
first empty cell in the Name column.
3. In the Type column, click the desired parameter type from
the drop-down list.
4. In the Description column, type a message that will be
displayed next to the parameter at scheduling time.
button.
7.
button.
Setting an LMS
A Logical Machine Set (LMS) is a logical grouping of one or more computers for
reporting purposes. An LMS specifies the quantity of computers and describes the
computer type that is required for the execution of the job. These requirements
can be either hardware or software oriented, or both.
There can be multiple LMSs per job.
LMS Definition
An LMS contains all of the hardware information about its component computers
that is needed for the job in question. This information is represented by sets of
constraints and values that the LMS uses to compare to the available test
computers to determine if a given computer has the required properties needed
to perform the job. These are a series of constraint-value pairs (sometimes
known as key-value pairs) that use logical operators in the form:
<dimension> <operator> <value>
Operators can be one of the following SQL comparison operators: =, <, >, <=,
>=, <>, LIKE, NOT LIKE, IN. The value in each pair can be one of the following:
A system defined dimension.
A constant value.
A parameter (This requires pre-defining parameters within the job).
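As a sketch only (the dimension names are taken from the default dimensions
listed later in this manual; the values are illustrative), an LMS might carry
constraint-value pairs such as:
WTT\OS = Windows Server 2003
WTT\ProcCount >= 2
WTT\Processor LIKE %Intel%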
Variable LMS Size
Every LMS contains a minimum and maximum count, representing the minimum
and maximum number of computers that the WTT scheduler tries to find in order
to meet the computer specification for this job at schedule time. For example, if
an LMS is defined with a minimum of one and maximum of ten, the related job
requires at least one computer to execute. However, if the scheduler is able to
locate additional matching computers, it can allocate the job to up to ten
computers that match the computer constraints.
3. Click Add.
4. In the LMS Name box, type a name for the LMS.
5. In the Minimum and Maximum boxes, type the minimum
and maximum number of computers to make available for
executing this job.
6. If this will be the primary LMS for the job, select the Make this LMS
primary check box. If this is not the primary LMS, clear the Make this
LMS Primary check box and then clear the Inherit constraints from
Job check box.
Note: The primary LMS is automatically set to inherit constraints from
the job and may not have its own constraints. Additional Logical Machine
Sets may not inherit constraints from the job but may have their own.
7. Click the empty cell in the Dimension column and select a dimension
from the drop-down list.
8. In the Operator column, click an appropriate operator from the drop-down list.
Note: The Operator list is based upon the Dimension chosen and may
change depending on the choice.
9. Type a value to compare to the Dimension in the Value column.
Note: Depending on the Dimension and Operator chosen, users may
be offered a drop-down list or combo box from which to select a specific
value. If this occurs, click the desired value from the selection available.
10. Add additional constraints if desired by clicking the next empty cell in the
Dimension column and repeating the process.
11. Click OK.
12. Click the Save button.
2. On the LMS tab, select the LMS that will be the primary LMS
for this job, and then click Edit.
3. Select the Make this LMS primary check box.
4. If other LMSs are used on this job, select each in turn, click
Edit, and then clear first the Make this LMS Primary check
box and then the Inherit Constraints from Job check box.
5. Click OK.
6. Click the Save button.
To edit an LMS
1. Right-click the job you wish to edit, and then click Edit.
2. On the LMS tab, select the LMS that you wish to edit, and
then click Edit.
3. Make any changes necessary to the LMS, and then click OK.
4. Click the Save button.
To delete an LMS
1. Right-click the job you wish to edit, and then click Edit.
2. On the LMS tab, select the target LMS, and then click Remove.
3. Click Yes.
4. Click the Save button.
2. In the Task Details group box, select the tab for the
execution phase of the task to be added.
3. On the chosen tab, select the method of execution desired:
sequentially, in parallel, or customized.
4. Click Add.
5. Select the type of task to create, and then click OK.
6. On the General tab, type a name for this task.
7. In the Failure Action drop-down list, click an action to be
performed if the task fails.
8. In the LMS drop-down list, click an LMS to be used if this is
a multi-machine job.
9. Select the Disable check box to prevent this task from being
scheduled unless it is enabled.
10. Complete the task details based on the specific type of task
(see below).
3.
4.
5.
6.
2. In the Source box, type the source from where you want the
files copied.
3. If you do not want to use the default job's working directory as
the destination, select the Custom option, and type the
destination directory and path in the accompanying text box.
4. Select the Exact Destination option if desired. If selected,
the value in the Destination box will be taken as a filename
instead of a directory.
2. In the Source box, type the source from where you want the
files copied.
3. If you do not want to use the default job's working directory as
the destination, select the Sub Folder check box and type a
destination directory name in the accompanying text box. This
will be created as a subdirectory under the default destination
directory.
4. Select the Exact Destination option if desired. If selected,
the value in the Destination box will be taken as a filename
instead of a directory.
6. To have the results of this task contribute to the job counts, select the
Rollup Results to Job check box.
7.
4. To have the results of this task contribute to the job counts, select the
Rollup Results to Job check box.
5.
2. On the Tasks tab, select the task to be edited, and then click
Edit.
3. Make any necessary changes, and then click OK.
4. Click the Save button.
2. On the Tasks tab, select the task to be removed, and then click
Remove.
3. Click Yes.
4. Click the Save button.
Task T2(instance 1) on M2
Task T2(instance 2) on M6
Task T2(instance 3) on M4
Task T2(instance 4) on M3
The Clear all dimensions and rescan option is available to clear all the
dimensions and rescan them. This is required for fresh-install jobs.
Values
WTT\OSBuildNumber
WTT\MachineName - Name of computer
WTT\OS
WTT\OSSKU
WTT\ProductType
WTT\SystemLocale
WTT\Processor
WTT\FullMachineName
WTT\ProcCount - Number of processors
WTT\VBL
WTT\RAM
WTT\SP
WTT\SPBuildNumber
WTT\Build
WTT\UILanguage
WTT\Domain
WTT\UI-DPI - UI display settings
WTT\CLR
WTT\MachineRole
WTT\DomainNetbios
WTT\WindowsCoverageBuild
WTT\VirtualServer
Saves a component or Job Explorer data. This can be used to save filter settings
as well as selected features or categories. It also saves display information from
the Job Explorer such as column widths, and columns selected for display. It
does not, however, save the contents of a list view.
Print the current component data
Retrieves the results of any query that the user has built from the datastore.
Show/Hide Hierarchy
Displays the left-pane of the Result Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.
Show/Hide Query
Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Query button
is Hide.
Datastore
Displays associated controllers that host the Jobs Definition and Jobs Runtime
services. This is the first drop-down list on the Job Explorer toolbox.
Create a new job in this node. This opens the Job form where the details of the
new job may be entered. To create a job, a user must have Write permissions for
the target node.
Add Node
Create a new node under the selected parent or root ($) node. Users must have
Security Write and Node Write permissions to do this. After creating the new
node, it is renamed from the default name by editing the label.
Rename
Rename the selected node. Users must have Security Write permission for the
node. The name may be modified by editing the node's label. Each node in a
given path must be unique.
Delete
Delete the selected node, all child nodes and all jobs inside of all affected nodes.
Users must have Security Write permission for the node. Users are warned before
the node is deleted, and are not allowed to delete the node if any job in the node
or any of its child nodes is currently scheduled to run.
Export
Export details of all the jobs in the selected node to a specified destination. All job
details, such as constraints, contexts, tasks, and LMSs are exported, although by
default, results are not exported. Exporting a node across a datastore requires
Security Write and Node Write permissions. Exporting to a disk requires Write
permission on the local computer hard drive.
Import
Import details of all jobs in the selected node from a specified source. Importing
requires Security Write and Node Write permissions.
Cut, Copy, Paste, Drag and Drop
Move selected nodes to other locations, within Job Explorer or across the
datastores. For all operations, all jobs present inside the original node are
transferred recursively. A cut operation requires Security Write permission. For a
Copy operation no permission is required, but Paste and Drag and Drop
operations require Security Write and Node Write permissions.
View Results
Display Results for the scheduled jobs in the selected feature. This opens the
Result Explorer screen.
Properties
Allow the current user to add Feature-level Write security permissions for other
users. Access to the Properties dialog requires Security Write permission on the
node.
To export Jobs
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
Note: The export command is also available by right-clicking the
selected job and then clicking Export.
4. Use the default location for the export destination or click
Browse to find another location, and then click Start.
5. Click OK.
To import Jobs
1. On the Explorers menu, click Job Explorer and select your controller
from the Datastore drop-down list.
2. On the File menu, click Import Jobs.
3. In the Import Source Directory group box, type the directory path for
the jobs to be imported in the text box or click Browse to select the
directory.
4. In the Feature Handling group box, select the feature location into which
you wish to import the jobs:
If you wish the jobs to retain their original feature hierarchy mapping,
select Import hierarchy and append to and use Browse to select
the feature location.
Note: In some cases, users may not have permission to import a
job to the feature hierarchy specified by the job being imported. If
the On Error, remap to check box is selected, users may specify
where in the feature hierarchy these jobs should be imported if a
permissions error occurs. Otherwise, the job will not be imported if a
permissions error occurs.
If you do not wish to retain the imported jobs' original feature hierarchy
(effectively flattening that hierarchy), select Ignore hierarchy and
remap to and use Browse to select the desired feature node.
5. In the Category Handling group box, select the category location to
which you wish to import the jobs:
If you wish the jobs to retain their original category hierarchy mapping,
select Import hierarchy and append to and use Browse to select
the category node to which you wish to append the imported jobs.
If you wish to drop all category mappings for the imported jobs, select
Ignore hierarchy.
o If you wish to remap all imported jobs to a specific category
(without the previous category mappings), select the Remap all to
check box and use Browse to select the category to which you
wish to append the imported jobs.
6. In the Job Collision Handling group box, select the options appropriate
for importing these jobs. These options include:
On collision, create copy (generate GUID): This option will
create a copy of the Job to be imported with a new GUID so as to
not overwrite the existing Job with the same GUID.
Overwrite: This option will overwrite (if permission allows) the
existing job with the same GUID. This option is also available if the
Job Name, Job Owner, or Feature Hierarchy are the same.
Note: If you use one or more of these options, you must also
specify Copy (generate GUID) or Do Not Import for cases where
the GUID matches but the options you selected do not.
Prompt during import: Will prompt you during the import what to
do if a job GUID collision occurs. At that time you will be able to
select to Overwrite, Copy (generate a GUID), or Do not
Import.
Do Not Import: Will fail any job that is to be imported that has a
matching GUID of an existing job.
7. In the Library Job Collision Handling group box, select the options
appropriate for importing these jobs. These options include:
Use Job Options for Library Job Collisions: This option will
treat imported library jobs in the same fashion as other imported
jobs.
On collision, create copy (generate GUID): This option will
create a copy of the library job to be imported with a new GUID so
as to not overwrite the existing library job with the same GUID.
Schedule
Schedules the selected jobs. See Using the Scheduler for more information.
Insert Results
Inserts results for the selected jobs into the results log. Manual results can be
logged from Job Explorer using this command.
Insert Results as List
Bulk inserts multiple job results, as long as the jobs contain similar
configurations.
View Results
Displays the results associated with the one or more selected jobs. For detailed
procedures, see Viewing Job Results.
Report
Presents the Job Report for one or more selected jobs in a printable format. The
report contains job details such as the job's common constraints, mixes, context information along with its constraints, task details, and LMS details.
Edit
Allows you to edit the details of the selected job. This command can also be applied to multiple selected jobs; in bulk edit mode only the General and Attribute details of the job can be edited. A job can also be edited by double-clicking it in the Job List view, which opens the job in read-only mode; to edit the job, click the Edit Job button. For detailed procedures, see Creating and Editing Jobs.
Categories
Adds or removes the selected job from the selected category. This command is
available for the job only if you select a Category in the tree view pane first.
Delete
Deletes single or multiple jobs after asking for confirmation. A job cannot be deleted if it is scheduled.
Export
Exports details of the selected jobs in the selected feature to the entered
destination. This exports job details, such as constraints, contexts, tasks, and
LMSs, associated with the job and does not export the results corresponding to
the selected jobs by default.
Add / Remove Columns
Adds or removes the selected column in the Job Explorer list view display.
Sort Columns
Moves the selected column in the Job Explorer list view display.
Column Chooser
Allows user to select field names for columns in the Job Explorer list view display.
In the Value column, type a search string or value for which to search.
2. Add additional query clauses if desired.
3. Click the Refresh button.
2. In the Value column, type a search string or value for which to search.
3. Add additional query clauses if desired.
4. Click the Refresh button.
The Scheduler can also be opened from the shortcut menu that appears when you right-click a job in the Job Explorer (Test Cases) query list view.
During scheduling, various constraints and options can be applied to the selected
jobs. Based upon these options, WTT Scheduler creates a Result, which is a
scheduled instance of a job. Each Result generated by the Scheduler is associated
with specific information, including the computers on which the job will run,
parameters being used, and the Result Collection associated with the Result.
Note: A controller must be selected prior to scheduling jobs as the WTT
Scheduler cannot schedule across multiple controllers.
Opens an existing file saved for scheduling. You can update the information again
and overwrite it.
Save current component to a file
Creating a Schedule
The process to create a schedule consists of the following procedures:
1. On the Explorers menu, click Job Explorer.
2. Select your controller from the Datastore drop-down list.
3. On the Feature tab, click the desired node and then click the Refresh button.
4. Right-click the job to schedule, and then click Schedule.
Note: More than one job may be scheduled at a time by holding down the CTRL key, and then clicking each job to be scheduled.
5. On the Machines tab, from the drop-down list, select a machine (asset) pool from which to schedule this job.
Note: You must have Write permissions for a selected machine pool in order to schedule jobs to it.
Machine pools that have the property Schedule as a unit are prefixed
with *. Schedule as a unit behavior means that if a computer from a
machine pool with that property is selected, then all computers in that
pool are reserved, but the deployment will only be on the selected
computers.
6. To schedule the job only on specific computers within this machine pool,
select the Restrict Machine Selection to Specific Machines in
Machine Pool check box, and then select the check boxes for the specific
computers to use.
Select the Restrict Machine Selection to Specific Machines in
Machine Pool option.
In the list view, select the computers to be considered for scheduling and clear those that should not be scheduled.
7. On the Schedule Options tab, review the options for results location,
timing, and schedule behavior.
8. When all desired options have been set, click the Create Schedule button.
2. In the empty cell in the Dimension column, select a dimension for this
constraint from the drop-down list.
3. In the Operator column, select an appropriate operator from the drop-down list.
4. In the Value column, type a value for the dimension.
Note: The dimension value may be a single value or alternatively, a list
of values depending on the selected dimension and operator.
5. Continue creating the job schedule.
Private run - Allows a job to be run, but the results will not be logged.
Scheduler Fundamentals
The Scheduler is responsible for finding physical computers in a machine pool that satisfy all the requirements for all the constraints in a given Run, while considering the user's permissions on the machine pool and computer.
Terminology
Heartbeat
A message sent from the test client to the controller that validates the
condition of the computer.
Job Delivery Agent
The component responsible for the interaction between the test computers
and the controller.
Run
Scheduler Prioritizing
The Scheduler is a back-end component that resides in the WTT 2.0 database and is invoked every 10 seconds, attempting to allocate computers for runs that need to be executed.
The Scheduler will find the runs that need to be scheduled based on the schedule
start time and try to schedule them in the machine pool in which they need to be
run. The machine pool in which the jobs need to be run may have computers
(called free pool computers) and sub pools attached to it. The Scheduler would
consider the following while scheduling a run:
Does the user have "execute" permission on the computers to be used?
Are the computers within a machine pool marked as Schedule as a Unit?
Consider the following scenario: Machine Pool MP1 contains computers M1, M2,
M3 and sub pool MP2. MP2 in turn contains computers M4, M5, M6, M7 and sub
pool MP3. MP3 contains computers M8, M9, M10, M11, and M12, and MP3 is marked as Schedule as a Unit. When selecting computers, the Scheduler also applies the following rules (restated in a brief sketch after this list):
Scheduler will schedule runs only on computers that have a public key.
Scheduler will schedule runs only on computers that have a heartbeat
registered in the last 30 minutes.
Scheduler will schedule runs only on computers that are in a Ready state
and not executing any other run.
Scheduler will schedule runs on computers having IsExplicit dimensions
only if that dimension is asked for by the run.
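As an illustration only, the following minimal sketch restates these computer-eligibility rules as a simple filter. The field names on the hypothetical Computer record (has_public_key, last_heartbeat, status, explicit_dimensions) are invented for this example and are not actual WTT schema names.

# Illustrative sketch of the computer-eligibility rules listed above.
# The Computer fields used here are hypothetical, not WTT schema names.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Computer:
    name: str
    has_public_key: bool
    last_heartbeat: datetime
    status: str                                   # for example "Ready" or "Running"
    explicit_dimensions: set = field(default_factory=set)   # dimensions marked IsExplicit

def is_eligible(computer, run_dimensions, now):
    # The computer must have a public key.
    if not computer.has_public_key:
        return False
    # A heartbeat must have been registered in the last 30 minutes.
    if now - computer.last_heartbeat > timedelta(minutes=30):
        return False
    # The computer must be in a Ready state and not executing any other run.
    if computer.status != "Ready":
        return False
    # Dimensions marked IsExplicit count only if the run asks for them.
    for dim in computer.explicit_dimensions:
        if dim not in run_dimensions:
            return False
    return True

comp = Computer("M1", True, datetime.now(), "Ready")
print(is_eligible(comp, {"OS": "XP"}, datetime.now()))    # True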
Twin Scheduler
Twin Scheduler is a backend component which gets invoked by the Job Delivery
Agent. Like the Scheduler, the Twin Scheduler is responsible for finding physical computers in a
machine pool that satisfy all the requirements for all the constraints in a given
run.
Once a run is executed and the Job Delivery Agent returns with the run data,
instead of freeing the computers used by the run, the Job Delivery Agent will
invoke the Twin Scheduler. The Twin Scheduler will check if any other run having
the same constraint(s) is waiting to be scheduled. If it finds any run matching the
above-mentioned criteria, it will schedule the run on the same set of computers.
Twin Scheduler Considerations
Twin scheduling can be done only if the computers still satisfy the run
constraints.
Twin scheduling can be done only if all the computers are still associated
to the same machine pool(s).
Smart Scheduler
In WTT Jobs, testers create jobs to perform a wide variety of common tasks. Some jobs are created to prepare computers to run tests, some are created to clean up computers after running, some are created to run the tests themselves, and some are created to do a combination of the three. These different roles can be classified as setup, cleanup, and regular jobs. This distinction is used to help in the organization and the scheduling of jobs.
This approach, however, requires that the correct jobs run in the right order so
that the tests may run successfully on a properly prepared computer. Because the
setup, regular, and cleanup jobs are not associated with each other, the tester
must use their expert knowledge in order to ensure this. This process is time
consuming as well as prone to errors. It also makes sharing tests across teams
more difficult.
Additionally, without such associations the Scheduler cannot make connections between jobs of different roles, and it therefore cannot make optimizations that cut down on the number of setup and cleanup jobs that get run. For example, if two jobs share the same setup and cleanup jobs but run different tests, an optimized process could run these tests one after the other with just one setup and one cleanup instead of repeating these steps unnecessarily.
Smart Scheduler addresses such scenarios. Intelligence is built into Smart Scheduler to understand the effect that running a particular job (known as a config job) has on the dimensions of a computer. Smart Scheduler uses this information to make decisions accordingly. If a particular dimension value is required, Smart Scheduler can try to find a computer that already has that dimension value, or it can locate a config job that produces that dimension value. In the latter case, Smart Scheduler automatically executes the config job on that computer so that the originally chosen job can be executed. In addition, if multiple jobs that require that dimension are queued, Smart Scheduler can optimize the process by ensuring that the jobs run on the same computer, thus avoiding running the same setup job again and again.
The Smart Scheduler uses the config jobs to prepare the computers as per the requirements of the run. The config jobs may be either Set Operations or Delete Operations (an illustrative sketch follows this list):
Set Operation - If a value is specified, then this job will only be run when
the value specified matches that required by the calling job. This can be
used to distinguish setup jobs that have the same value set but need to
run different varieties according to the current value. If a parameter name
is given then the parameter is given the dimension value requested by the
job that initiated the setup job. This allows the job itself to de-reference
the requested value in command lines.
Delete Operation - If a dimension is specified, then this job will be run when the dimension needs to be deleted from the computer. This setting is used when, for example, a proprietary application is installed and tests are run using it; additional jobs should not be scheduled on that computer until the application has been removed.
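The sketch below illustrates, under an invented data model of plain dictionaries, the kind of decision described above: prefer a computer that already has the required dimension value, otherwise find a Set-operation config job that can produce it. This is only an illustration, not the WTT implementation.

# Illustrative sketch of the Smart Scheduler decision described above.
# The dictionary-based data model here is invented purely for illustration.

def pick_computer(run_dims, computers, config_jobs):
    """Return (computer, setup_jobs) for a run needing run_dims, preferring a
    computer that already reports the required dimension values."""
    # First choice: a computer whose dimensions already satisfy the run.
    for comp in computers:
        if all(comp["dimensions"].get(d) == v for d, v in run_dims.items()):
            return comp, []
    # Otherwise: find Set-operation config jobs that can produce the missing
    # dimension values, and run them on one computer before the original job.
    for comp in computers:
        setups = []
        for dim, value in run_dims.items():
            if comp["dimensions"].get(dim) == value:
                continue
            job = next((j for j in config_jobs
                        if j["op"] == "Set" and j["dimension"] == dim
                        and j["value"] in (None, value)), None)
            if job is None:
                break            # no config job can produce this value here
            setups.append(job)
        else:
            return comp, setups  # run the setups, then the original job
    return None, []              # the run cannot be satisfied right now

# If several queued runs need the same dimension value, steering them to the
# same prepared computer avoids repeating the setup job.
computers = [{"name": "M1", "dimensions": {"OS": "XP"}},
             {"name": "M2", "dimensions": {"OS": "Srv2003"}}]
config_jobs = [{"op": "Set", "dimension": "SQL", "value": None}]
print(pick_computer({"OS": "XP", "SQL": "2000"}, computers, config_jobs))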
Additionally, the following common user scenarios may assist testers in utilizing
the Scheduler to their best advantage within the WTT framework.
For additional tips and suggestions for use, see Appendix C: Best Practices.
This allows Barney, without knowing anything about the test details, to run the
tests that exercise his binary in the right way and validate his fix without having
to wait for a build release cycle.
Monitoring Jobs
Job Monitor is used to track the status of a job or task on the machine pool on which it was scheduled. It can also show the current status of the machines within the selected machine pool, so the user can monitor the computers themselves. The Job Monitor lists all results of the jobs executed on the machines in the selected machine pool.
Saves the Job Monitor component or explorer data. This can be used to save
filter settings and selected features or categories. It also saves display
information from the Job Monitor such as column widths, and columns selected
for display. It does not, however, save the contents of a list view.
Print the current component data
Retrieves the results of any query from the datastore that the user has built.
Show/Hide Hierarchy
Displays the left pane of the Job Monitor, showing the Asset Pool Hierarchy.
The default setting for the Hierarchy button is On.
Show/Hide Query Builder
Displays Query Builder in the right pane of the Job Monitor, allowing users to
run simple or advanced queries. The default setting for the Show Query Builder
button is Off.
Show/Hide Task List
Displays the Task Execution Status list in the right pane. The default setting for
the Show/Hide Task List button is On.
Show/Hide Machine List
Displays the machine list in the right pane. The default setting for the
Show/Hide Machine List button is On.
Datastore
Displays the associated controllers that host the Jobs Definition and Jobs Runtime
services. This is the first drop-down list on the Job Explorer toolbox.
Run Job
Displays the Run Job user interface, allowing users to select a valid single job to be scheduled on the selected computers, with schedule time parameters.
Add Machine Pool
A new machine pool may be added under any other machine pool, providing the
user has Write permissions to the parent machine pool.
Manage LLU
Users may create, update, or delete Local Logical Users (LLU) and Local Symbol
Users (LSU) on the set of computers in the selected machine pool.
Delete
Allows users to delete the selected machine pool and all child machine pools. All
the machines are then moved to the Default Pool. This requires Write permissions
on the parent machine pool.
Rename
Allows users to rename the selected machine pool, providing they have Write
permissions on the pool.
Properties
Displays the General and Security (permissions) properties for the selected
machine pool.
Users may create, update, or delete Local Logical Users (LLU) and Local Symbol
Users (LSU) on the selected computers in the computer list view.
Move
Allows users to move the selected computer from the current machine pool to the
selected machine pool.
Change Status
Displays the Run Job on all user interfaces, allowing users to select a valid single
job to be scheduled on the selected computers, with schedule time parameters.
Latest HW Configuration Log
Adds or removes the selected column in the Computer List View display.
Sort Columns
Allows users to select the field names for each column in the Machine List View
display.
Report
Presents a report about the selected job in a printable format. The menu can be used for multiple job selection. This report contains job details including common constraints, mix, context information with associated constraints, task details, and LMS details.
Result Report
Displays the Result report in a printable format. This menu is applicable for more
than one selection.
Cancel
Marks a particular result for cancellation. A particular result for a component can
be cancelled only if the execution of the job in that component, such as Job
Scheduler or EA, is stopped.
Add To Result Collection
Add / Remove Columns
Adds or removes the selected column in the Job Execution Status list view
display.
Sort Columns
Moves the selected column in the Job Execution Status view display.
Column Chooser
Allows user to select field names for columns in the Job Execution Status list view
display.
Add / Remove Columns
Adds or removes the selected column in the Task Execution Status view display.
Sort Columns
Moves the selected column in the Task Execution Status view display.
Column Chooser
Allows user to select field names for columns in the Task Execution Status view
display
6. Click an instance of the job in the Job Execution Status box to display
the list of tasks within the job and their associated status.
scheduled. For more information about Results, see Jobs Best Practice
Recommendations.
Saves Result Explorer component or explorer data. This can be used to save
filter settings as well as selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data
Retrieves the results of any query from the datastore that the user has built.
Show/Hide Hierarchy
Displays the left-pane of the Result Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.
Show/Hide Query
Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Show/Hide
Query button is Hide.
Bottomlist
Displays a Task box at the bottom of the Result Explorer. The default setting
for the Bottomlist button is Off.
Datastore
Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.
Report
Presents a report about the selected job in a printable format. The menu can be used for multiple job selection. This report contains job details including common constraints, mix, context information with associated constraints, task details, and LMS details.
Result Report
Displays the Result report in a printable format. This menu is applicable for more
than one selection.
Add To Result Collection
Infrastructure Log
Add / Remove Columns
Adds or removes the selected column in the Result Explorer list view display.
Sort Columns
Moves the selected column in the Result Explorer list view display.
Column Chooser
Allows user to select field names for columns in the Result Explorer list view
display.
2. Select the Feature node containing the job and then click the Refresh button to retrieve jobs with results. You can enter other criteria in the Simple Query group box for the search before running the query.
The following information returned by the Results query is displayed on
the Results Query form.
Computer configuration.
Result status - User can assign that result to a particular user.
Various counts, such as Pass and Fail, depending on the success or
failure of the tasks associated with the selected jobs.
Change information, if a particular user has modified this result on a
particular date.
Resolution information, such as that the user has resolved a
particular issue, the type of resolution, and the resolution date.
Result creation information - This field is auto-populated with the
currently logged on user name.
Log - Here the user can specify a location for the log files.
General Information regarding this result.
3. Right-click the desired job run and then click Test Log.
4. This opens a folder containing all the job's system logs as well as any other logs created by the tasks.
also enter additional search criteria under Simple Query before running
the query.
3. Right-click the desired job on the Job Explorer list view, and then click
Insert Results to open the New Result dialog box.
4. Enter the new result information to be included in the dialog box, including
result statistics, log location, job description and other information.
5. Click the Save button.
also enter additional search criteria under Simple Query before running
the query.
3. Right-click the desired job on the Job Explorer list view, and then click
Insert Result as List.
Note: Job results may be bulk inserted to multiple jobs at the same
time by selecting multiple jobs at once using the CTRL key, right-clicking
the selections, and then clicking Insert Result as List. When this is
done, results may be different, but basic configuration information must
be the same across the jobs.
4. Select the test computer from the Machine drop-down list. If a specific
computer is selected, configuration information will be automatically
populated.
If the test computer is not on the list, type the name of the computer
in the Machine box and press TAB. Add configuration information on
the test computer by selecting a dimension from the drop-down list in
the Dimension column and then typing a dimension value in the
adjacent Value column. Add all configuration information needed for
the test computer.
Note: The dimensions entered here will be saved under the name
entered in the Machine box and will be available for later use.
5. Select the test information for each job, including result statistics,
Assigned To, Bug DB, and Description.
6. Enter a Job Description applicable to all selected jobs if desired.
7. Click the Save button.
Changing the Column Display and Sort on the Job Results Form
The results pane is customizable to the specific needs of the user, including
adding or removing columns or sorting them in a prescribed manner. This is done
using the commands available on the Results short-cut menu.
also enter additional search criteria under Simple Query before running
the query.
4. In the Results List, right-click any job, and then click Add Remove
Columns.
5. Add or remove columns as follows:
To add a column, click a desired field in the Available Fields box, and then click Add. The desired field will now appear in the Results List for each test.
To remove a current column, click a specific field in the Current Fields box, and then click Remove. The selected field will no longer appear in the Results List for each test.
6. Click OK.
7. To adjust the width of the new column(s), drag a column edge to the
desired location.
Note: Column width can also be adjusted within the Add Remove
Columns dialog box by typing a new width in the Column Width box
prior to closing the dialog box. However, this new width will be applied to
all columns uniformly.
To sort the job list in the Result list view in a particular order
1. On the Jobs menu, click Result Explorer.
2. Select your controller from the Datastore drop-down list.
3. On the Feature tab, click the desired node, and then click the Refresh button.
4. In the Results List, right-click any job, and then click Sort Columns.
5. Click the field to be sorted in the Available Fields box, and then click
Add. Repeat for each field to be sorted.
6. For each field to be sorted (the fields in the Current Fields box), click
Ascending or Descending to determine the sort type.
7. Click each field and the Up or Down arrow to adjust the sort order for the
fields.
8. Remove any unwanted sort columns by clicking that field in the Current Fields box, and then clicking Remove.
Note: If more than one field is added to the Current Fields box to sort,
the topmost field will be sorted first, followed by the other fields in the order
that they are listed.
To edit results
1. On the Explorer menu, click Result Explorer.
2. Select your controller from the Datastore drop-down list.
3. In the results pane, right-click a job, and then click Edit.
4. Edit the data in the available fields as needed.
Saves component or explorer data from the Result Explorer. This can be used
to save filter settings and selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data
Retrieves from the datastore the results of any query that the user has built.
Hide Query
Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Hide Query
button is Off.
Datastore
Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.
Creates a new collection for organizing results. This collection can be named by
the user or a name may be automatically generated. The collection is empty by
default.
Delete
Deletes a selected collection, although the results within the collection are not
deleted.
View Results
Displays the results that are present in the selected result collection. This
command invokes Result Explorer from within Result Collection and displays
standard result information.
View Rollup Counts
Displays the rollup count for scheduled jobs (including passed, failed, attempted, and so on) whose results are displayed in the selected collection. This also
provides a query builder that can be used to filter the jobs and then view the
result based upon the chosen criteria.
Add / Remove Columns
Adds or removes the selected column in the Result Collection list view display.
Sort Columns
Moves the selected column in the Result Collection list view display.
Column Chooser
Allows user to select field names for columns in the Result Collection list view
display.
Completed Jobs
The controller has started the job and has received notification from the client computer that the job has been finished. This does not indicate whether the job passed or failed, only that it is completed.
Investigate Jobs
The controller has started the job and has received notification from the client
computer that the job has been finished. This job has subsequently been marked
for investigation by a user.
Cancelled Jobs
The controller has received notification from the client computer that this job has
been cancelled by a user.
Resolved Jobs
The controller has received notification from the client that this job has been
resolved, possibly after registering a failure.
In Progress Jobs
The controller has started the job but has not yet received notification from the
client computer that the job has been finished.
Actual Run Time
The estimated time for the job run, as provided when creating the job.
GUID
A unique global ID number for the job, provided by the controller when the job is
created.
ID
A unique number within the WTT Result Collection, usually given in the order of
job creation.
Run Time Left
Whether the job has been signed off by the tester or not. This is indicated by a 1
if the job has been signed off, and a 0 if it has not.
Status
Saves component or explorer data from the Result Explorer. This can be used
to save filter settings and selected features or categories. It also saves display
information from the Result Explorer such as column widths, and columns
selected for display. It does not, however, save the contents of a list view.
Print the current component data
Retrieves from the datastore the results of any query that the user has built.
Show/Hide Hierarchy
Displays the left-pane of the Result Explorer, showing the Feature and
Category tabs. The default setting for the Hierarchy button is On.
Show/Hide Query
Displays the query group box in the right pane of the Result Explorer, allowing
users to run simple or advanced queries. The default setting for the Hide Query
button is Off.
Datastore
Displays the associated controllers which host the Jobs Definition and Jobs
Runtime services. This is the first drop-down list on the Job Explorer toolbox.
Add / Remove Columns
Adds or removes the selected column in the Result Collection list view display.
Sort Columns
Moves the selected column in the Result Collection list view display.
Column Chooser
Allows user to select field names for columns in the Result Collection list view
display.
Min Build
The minimum OS build configuration required to perform the job if one has been
specified.
Max Build
The maximum OS build configuration required to perform the job if one has been
specified.
Total
The total number of scheduled job variations of the job. This will be 0 if the job
has been cancelled.
Attempt%
The percentage of the total number of scheduled job variations that have been
attempted. This percentage should include the jobs passed, jobs failed, and jobs
in progress, but not those cancelled or not run.
Pass%
The percentage of the total number of scheduled job variations in which all stated tasks have passed.
Fail%
The percentage of the total number of scheduled job variations in which one or
more stated tasks have failed.
Not Run%
The percentage of the total number of scheduled job variations that have not yet been started by the controller. (A worked example of these percentage fields follows the Bugs entry below.)
Bugs
Whether the job has been associated with a bug (or bugs) or not. This is
indicated by a 1 if a bug is associated, and a 0 if it is not.
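A short worked example, using made-up counts for a single result collection, shows how the percentage fields above relate to one another.

# Worked example with made-up counts for one result collection.
total = 10          # total scheduled job variations
passed = 5
failed = 2
in_progress = 1
not_run = 2         # not yet started by the controller

attempt_pct = 100 * (passed + failed + in_progress) / total   # 80.0
pass_pct = 100 * passed / total                                # 50.0
fail_pct = 100 * failed / total                                # 20.0
not_run_pct = 100 * not_run / total                            # 20.0
print(attempt_pct, pass_pct, fail_pct, not_run_pct)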
Managing Enterprises
Managing enterprises is available to users with administrator privileges. A list of
currently registered controllers and datastores is displayed and administrators
may add, edit, or delete controllers or datastores from an enterprise.
5. Click OK.
User Administration
In order for individual testers to work with WTT test computers or schedule jobs on a WTT Controller, a WTT Administrator must grant them access to that controller. This is done in SQL Server Enterprise Manager by granting the WTT_DATASTORE_USERS role to the user.
If a job or machine needs to be assigned to a user who has not installed WTT Studio, that user's name will not yet be selectable in the list of WTT users and must be added manually. WTT provides administrators with an easy-to-use dialog box that lists all users and their domains for the selected controller.
Administrators can access the dialog by clicking on Users on the Admin menu.
Note: If a user's regular domain credentials are used to run tasks in WTT, they
will be compromised over the network. Therefore, users should create a special
username, called a local logical user (LLU), on each client computer in order to
run tasks. An LLU is created from a command line using the WTTCMD Command-Line tool. See Appendix D: WTTCMD Command Tool.
5. Click Close.
Dimensions
A dimension is a customized key-value pair that a test computer automatically
reports to the WTT database through Sysparse whenever the computer reboots.
Custom computer configuration queries can be created as dimensions, as well as strings or lists where appropriate.
For example, for a video driver dimension key, NVIDIA is a possible value, just as
4123 might be a possible value for an operating system build number key.
To edit a dimension
1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the controller drop-down list.
To delete a dimension
1. On the Admin menu, click Dimensions.
2. Select your controller from the Show Dimensions from the controller drop-down list.
3. Click the selected dimension, and then click Delete.
4. Click Yes.
Note: Dimensions installed when WTT Studio was installed may not be
deleted.
Getting Started
MCU works by applying an MCU Policy to a particular machine pool. When each machine in that machine pool reports its machine configuration data to the WTT Controller specified for the asset pool, the controller stores the values for the policy in the WTT database. These values can then be used to constrain jobs, create reports, and so on.
Since the MCU Policy is applied to a specific machine pool, it is important to know how to create pools, how to associate a pool with a controller, how to install the WTT Client software on your test machines, and how to add test machines to the pool. This topic assumes knowledge of that process and instead focuses on creating the MachineConfigQuery dimension and the subsequent MCU Policy for a particular pool.
Once a client is part of a specific machine pool, then you can start creating
MachineConfigQuery dimensions and associating them with the machine pool in
order to get MCU working. You could create the dimensions first and associate
them with the machine pool before moving machines to the pool as well.
When the WTT Client software is installed, it will gather machine configuration
information when its service starts. The WTT Client service calls Sysparse which
gathers the machine configuration information and saves it into an XML file. The
client service then sends the XML file to the controller, which parses it and stores
the information into the WTT database.
If you create a MachineConfigQuery type of dimension and associate it with a pool to create an MCU Policy for the pool, the controller service will also call MCU to enforce the MCU Policy for each machine in that pool.
In order to create a MachineConfigQuery dimension, you'll need to be familiar with two things:
1. Sysparse XML format: Where in the Sysparse output are the values you
want associated with your custom dimension? A sample Sysparse output
file can be found in the WTT Software Development Kit (SDK).
2. XPath query syntax: MCU uses XPath queries to retrieve the values, so you will need to be very familiar with its syntax and use. See:
http://msdn.microsoft.com/library/en-us/xmlsdk/htm/xpath_ref_overview_0pph.asp
The Dimension Editor UI provides some sample queries and a pointer to a sample
XML file to help you create the right query for your data.
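To make the idea concrete, here is a minimal sketch of evaluating an XPath-style query against Sysparse-like XML. The element and attribute names (Devices, VideoController, OperatingSystem, Name, BuildNumber) are invented for illustration; consult a real Sysparse output file in the WTT SDK for the actual schema, and note that MCU itself accepts full XPath queries.

# Minimal sketch: evaluating a query against hypothetical Sysparse-like XML.
# The element and attribute names below are invented for illustration only.
import xml.etree.ElementTree as ET

sysparse_xml = """
<Machine>
  <Devices>
    <VideoController Name="NVIDIA" DriverVersion="6.14.10.4123" />
  </Devices>
  <OperatingSystem BuildNumber="4123" />
</Machine>
"""

root = ET.fromstring(sysparse_xml)

# A MachineConfigQuery dimension pairs a dimension name with a query like these.
video = root.find(".//VideoController")
os_elem = root.find(".//OperatingSystem")

print("VideoDriver dimension value:", video.get("Name"))        # NVIDIA
print("OSBuild dimension value:", os_elem.get("BuildNumber"))   # 4123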
If the configuration for a computer has changed, then the best option is to restart the WTTSvc on the client computer. If the configuration hasn't changed, then the machine config values can simply be updated in the WTT Studio UI, based on the existing computer configuration that has already been collected.
Verify update
After either method is used to update the values, use the following procedure to verify that the MachineConfigQuery dimension query and any results management policy are being enforced correctly:
1. On the Asset menu, click My Assets.
2. Select your controller from the Datastore drop-down list.
3. On the My Search tab, click the desired machine pool and then click the Refresh button.
4. Right-click the computer that WTTSvc was restarted on, and then click
View Computer Details.
5. On the Computer Attributes tab, confirm that the MachineConfigQuery dimension is listed along with the specific result from the query against the computer's Sysparse XML data.
Global Parameters
Parameters function similarly to environment variables but are not restricted to a
specific environment as are variables. Global parameters are stored at the
controller level (in the automation database) and can be applied to any job on the
given controller. All global parameters will be displayed when the user clicks
Parameters on the Admin menu.
Teams can create global parameters in their automation database at any time, and no special permissions are required to set up a global parameter on a team's automation database.
There are also global parameters that are replicated to all automation databases from the Master database (although usually only for Jobs or Library Jobs that also get replicated from the Master). These begin with the designation Windows\ and thus can easily be identified as being from the Master database.
Teams should not create global parameters with the name Windows\ in their
automation database, nor should they edit any parameters with that name in
their automation database.
Tip: Global parameters can also be a useful way to remain organized, by keeping
key parameter definitions clustered in one location.
Global Mixes
A mix is a set of one or more contexts, just as a context is a set of one or more
constraints that are applied to jobs and schedules. When you design a test or a
set of tests, you can apply several sets of contexts by applying a mix that
contains these contexts. Scheduling a job creates one instance of the job for each
valid context within a mix.
For example, a job might be designed to run on a mix of test computers including:
x86-based, Microsoft Windows XP Professional in the German language.
x86-based, Microsoft Windows Server 2003 in the English language.
Itanium-based, Windows Server 2003 in the English language.
The constraints and contexts that you use can be global, applying to all tests,
computers, or schedules, or they can be local, applying to the job or schedule at
hand. You can either use the default constraints and contexts or create your own
custom sets.
For information on local mixes, see Setting Job Mixes and Contexts.
Global mixes
A global mix can be a Simple mix or an Advanced mix; a simple mix is a collection
of contexts with each context containing a set of constraints. An Advanced mix is
a complex mix with a set of rules that apply to the contexts.
When creating a global mix, it is necessary to use contexts to cover all necessary
combinations of criteria.
For instance, a user has a context defined as:
processor within {x86, IA64, AMD64}
with a constraint that specifies:
language within {US (English), Ger (German)}.
Using a Simple mix, only one occurrence of the job will actually be distributed at scheduling time (even assuming enough computers are available to cover the full range of architectures and languages). To run all six combinations using a Simple mix, it would be necessary to define a simple mix context for each of the following combinations (six combinations total):
Arch = x86, lang = US
Arch = x86, lang = Ger
Arch = IA64, lang = US
Arch = IA64, lang = Ger
Arch = amd64, lang = US
Arch = amd64, lang = Ger
Alternately, an advanced mix could be used to dynamically generate them.
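Purely as arithmetic (not WTT functionality), the snippet below enumerates the same six combinations that a Simple mix must list explicitly and an Advanced mix could generate dynamically.

# Enumerate the architecture/language combinations listed above.
from itertools import product

architectures = ["x86", "IA64", "AMD64"]
languages = ["US", "Ger"]

for arch, lang in product(architectures, languages):
    print(f"Arch = {arch}, lang = {lang}")   # six combinations in total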
Note: Mix contexts can be deleted only if they are not being used or referred
to by any job or schedule.
2. Using Query Builder, search for the desired mix, and then click the Refresh button.
8. Click OK.
10. Click the Save button.
2. Using Query Builder, search for the desired mix, and then click the Refresh button.
Terminology
Kernel Mode
Kernel-mode code has permission to access any part of the system and is not
restricted as is user mode code. It can gain access to any part of any other
process running in either user mode or kernel mode.
Performance-sensitive operating system components run in kernel mode. In this
way they can interact with the hardware and with each other without requiring
the overhead of context switching. All kernel-mode components are fully
protected from applications running in user mode. They can be grouped as
follows: Executive, Kernel, HAL and Window and Graphics Subsystem.
The possibility of data corruption or system damage is much greater with kernel
mode process errors. If a process erroneously accesses a portion of memory that
is in use by another application or by the system, the lack of restrictions on
kernel mode processes forces Windows to stop the entire system. This is known
as a blue screen or bug check.
Malfunctioning hardware devices or device drivers with bugs that reside in kernel
mode are often the culprits in bug checks. A bad SCSI adapter, a malfunctioning
drive controller, or defective memory chips can corrupt memory contents and
alter program pointers so they attempt to access an incorrect address in memory.
Local symbol user
A domain account used by Autotriage to connect to symbol shares. This account
should have network access and administrator permissions on the client
computer. This account is used only by the Autotriage tool and can be created
from a command-line using the WTTCMD Command-Line tool, or from a separate
LSU interface. See Appendix D: WTTCMD Command Tool and Appendix L:
Managing LLU and LSU Functions.
User mode
Applications and subsystems run on the computer in user mode. Processes that
run in user mode do so within their own virtual address spaces. They are
restricted from gaining direct access to many parts of the system, including
system hardware, memory not allocated for their use, and other portions of the
system that might compromise system integrity. Because processes that run in
user mode are effectively isolated from the system and other user mode
processes, they cannot interfere with these resources.
User mode processes can be grouped as follows: System Processes, Server
Processes, Environment Subsystems and User Applications.
Stack Trace
A stack represents the context of the thread at any given time. The context is
defined as the point at which the thread has reached in the code and, to some
extent, how it got there. Programs are separated into functions and threads call
these functions, starting at main. A function is free to call another function, which
is free to call another, and so on. When a thread calls a function, it needs to
remember where it was before the call so that it can get back there. For this
reason the thread stores information on the stack. Information is pushed onto the
stack each time a function call is made. As the name suggests, the information
stacks up and the top of the stack contains the last piece of information pushed
onto the stack. The address to return to is given by RetAddr in a real stack trace.
When the called function returns, this information is popped off the stack so that
the stack is the same as it was before the call and the thread is returned to
executing the instruction immediately after the call.
Using the stack to trace back from a function or procedure to the original function
or procedure that generated this call gives us the Stack Trace.
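As a simple illustration of this push-and-pop behavior (outside WTT and kernel debugging), the following snippet prints the call stack from a nested function call; the function names are arbitrary.

# Illustrates a stack trace: main calls outer, outer calls inner, and the
# trace printed from inner shows every frame on the stack, most recent call last.
import traceback

def inner():
    traceback.print_stack()

def outer():
    inner()

def main():
    outer()

main()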
Dump files
Dump files are the files created when a process crashes or the system crashes.
These dump files contain information about the stacks, memory, registers and
other system data that is useful for diagnosing the failure.
Dump files can be categorized based on the amount of data they contain. In the
case of user mode dumps, there are mini dumps and full dumps. Mini dumps are
created with minimum information such as stack information and thread
information. On the other hand, full dumps include the entire memory space of a
process, the program's executable image itself, the handle table, and other
information useful to the debugger.
Debuggee
The debuggee is a computer that is being debugged by another computer (the
debugger). The debuggee needs to be connected to the debugger through a
cable. The debuggee is also sometimes referred to as the target computer.
Debugger
The debugger is a computer used for debugging another computer (called the
debuggee). The debugger is also sometimes referred to as the host computer.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type the WTTKDSetup.cmd command, for example:
WTTKDSetup.cmd <DebuggeeName> <WTTServer> <COM/1394> <PortNumber> [/b BaudRate] [/y SymPath] [/d DebuggerPath]
Where:
<DebuggeeName> is the name of the debuggee computer
<WTTServer> is the name of the WTT Server name
<COM/1394> is a choice of Com or 1394 debugging
<PortNumber> is the port number in the case of COM port debugging and
the channel number in the case of 1394 debugging
/b BaudRate is the optional baud rate, applicable for COM port debugging.
/y SymPath is the optional path for the symbol lookup.
/d DebuggerPath is the optional debugger path from which the debugger package is installed.
For Example:
WTTKDSetup.cmd client-test server-test COM 1
WTTKDSetup.cmd client-test server-test /y c:\symbols
Chapter 8: Resolver
Resolver is the failure tracking and management tool for both Windows Test
Technologies (WTT) and Unified Stress Testing (UST). It can help users by
facilitating failure tracking and helping to manage failures and their resolutions
from initial triaging stage to final resolution.
Terminology
Kernel Mode Crash
An unhandled user mode exception that is detected by WTT. This will initiate
the WTTTriage.exe tool.
Hold Bit
Indicates whether the computer on which the crash happened should be held for manual triage (Hold = 1) or released (Release = 0). Note that if the hold bit is set for release, AutoTriage will not hold any computers on which the same crash happens in the future.
Task Failure
A test failure within a task of a given job. The task returns a non-zero code in the job's logged results to indicate test failure.
Reporting Category
A flag used for reporting purposes for filtering or sorting in reports. There are
different values depending on the failure type:
For crashes:
Ignore - ignore this failure in the reports.
Pass - count this failure as Pass in the reports.
Fail - count this failure as Fail in the reports.
Test - count this failure as a test failure in the reports.
11. Click beneath the new row in the query to add another row.
12. In the And/Or column, select And from the drop-down list.
13. In the Field Name column, select XML from the drop-down list.
14. In the Value column, type the stress type name (such as DirectX).
15. Click the Refresh button.
3. Use the Next and Previous buttons to view additional failures within
the Resolver window.
4. Click the Exit button to close the window.
After fetching the failure records using the steps in the previous section, double-click a failure in the failure list.
5. On the Crash Info tab, click Connect to view the failure on the
debugger to which the failed computer is connected.
To reassign a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click an entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. Select the machine owner, job owner or triage team to whom you wish
to reassign the failure from the Assigned To drop-down list. If the
individual or group is not listed, type in the desired alias.
4. Click the Save button.
To Resolve a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click an entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. Select Resolved from the Status drop-down list.
4. Type the alias of the user resolving the failure in the Resolved By box.
Note: This can be your alias or the alias of the person on behalf of
whom it is being resolved.
5.
Note: If the failure does not have the correct symbols (the Bucket ID is WRONG_SYMBOLS), this drop-down list is disabled. In this case, it is necessary to follow the steps in How to Handle Failures with WRONG_SYMBOLS BucketID in order to set the hold bit.
8.
Additional optional changes that can be made include changing the failure
priority, changing the creation reason, or assigning the failure to other
users.
Click the Save button.
To Close/Reactivate a failure
1. Use Query Builder to retrieve failure records from your selected
datastore.
2. Select a desired failure, right-click an entry, and then click Edit.
An alternative method of editing a failure is to double-click the
selected failure to invoke read-only mode, and then click the Edit
button on the toolbar.
3. On the Status drop-down list, select Closed to close the failure, or
Active to reactivate it.
4. Click the Save button.
If the Notification Service is installed along with the WTT setup, then the
Notification Service is uninstalled while uninstalling WTT. However, if the service is
installed independently, then you must run installscript.vbs with the Uninstall
parameter to uninstall it.
For Example: InstallScript.vbs /a Uninstall /s
C:\Windows\WTTBin\WttNotification.Exe
Look at the Notification.log file for any error log generated by the
Notification Service in the event of a failure.
Make sure that the account on which the service is running has
sufficient permissions to access the controller datastore to which the
service is configured.
Terminology
Item / Test Item
An item or test item is a reusable segment of test code, which can be combined
with other items to build a scenario. WTT does not impose any restrictions on how
an item is defined or written.
Object Item
An object item statement functions as a broker for test code written in either managed or unmanaged code. It can be used for calling existing code without any rewriting. This statement allows the creation of object instances and then the calling of methods on these instances.
Scenario
An end-to-end set of steps encompassing test code, combining logic and test
data, and used to complete a specified task. An example of a scenario might be to
create a file.
Scenario Definition Language
An XML file which defines test scenarios (the .SDF file binds test items into a scenario). This can be visualized as an expression or grammar for a given scenario.
Statement
Designing a Scenario
To create a Scenario
1. On the Tools menu, click Scenario Builder.
2. Click the New Scenario Builder Document button.
3. In the tree view, drag and drop statements to form the outline of the
desired scenario.
4. Click on each statement and rename that statement on the Sequence
tab.
5. On the Managed Validator tab, enter the appropriate assembly name
and class or browse to the file.
6. On the Parameters tab, add parameters for the statement as desired.
7. Create individual variations for each statement as needed.
8. Add additional statements and variations to complete the desired scenario.
9. You can modify the sequence by moving statements up or down.
10. Click the Save button, enter a file name in the Name box, and then click
Save.
Executing Scenarios
Two options are available to testers for executing scenarios within WTT:
Executing the scenario as a library job.
Launching the scenario as an executable.
Either may be used, with usage depending on the needs of the jobs being run.
10. In the Value column, click the Browse [...] button and navigate to the
desired scenario (.sdf) file. Click OK to import it.
11. Click the Save button and schedule the job.
6. In the Value column, click the Browse [...] button and navigate to the
desired scenario (.sdf) file. Click OK to import it.
7. On the Tasks tab, click the Add button.
8. Select Execute and click OK.
Appendix A: Glossary
Asset
The inventory control number used to track corporate assets. Asset tags
usually start with either an E, L, or V followed by a five or six digit number,
although in some cases a simple six digit number is used. If an asset tag
starts with a V, it should be followed by either five or six digits. If an asset tag
starts with an E or an L, it should be followed by a six digit number. If the
asset tag has no preceding letter, then it should be a six digit number.
Associated Device
A device that is provided by the vendor along with the computer. Associated
devices are required to stay with the computer and are sent back to the
permanent owner when the computer is either retired or returned to the
vendor. An example of an associated device is the AC adaptor that comes with
a laptop.
Attached Device
Custom properties defined by the type owner for defining specific type data.
Authority
The database where Sysparse stores the WTT test case automation data. WTT
test cases stored in the Automation DataStore can be grouped and scheduled
as needed to complete a test pass.
Also: Controller. (Deprecated term: Controller database.)
Categories
Categories allow the user to sort data items into logical groups while keeping
the information for that data in only one location. In WTT all categories from
all teams are available to all users. Categories are grouped in a hierarchy so
that users can easily browse to the categories used by their team, while
ignoring the categories of others.
The concept of categories is equivalent to the term test suite as used by
many teams in the Windows division. A test suite is a category that is used to
select a group of tests to be executed. Test Cases can belong to more than
one test suite.
Child Asset Pool
An asset or machine pool that is a part of another "parent" asset pool. Several
child asset pools can be created as part of a parent asset pool in the
asset/machine pool hierarchy.
Cleanup Job
Cleanup tasks normally execute after setup tasks are completed or after a
failure action has allowed job flow-control to come to the cleanup tasks.
A cleanup task may also be a job that is scheduled within a stress job that is
executed after the regular stress tests.
Common Context
Computers can run the Execution Agent (EA) and Job Delivery Agent. The
Automation Datastore stores the information about the computer and its
status. Changes to the computer's configuration are identified and updated in
the database by Sysparse each time that the client computer starts.
Computers are typically organized into asset or machine pools.
Computer Config
The information that the computer reports via Sysparse to the Automation
DataStore.
Config Job Role
A job that does a set up activity such as a smart installation or smart cleanup
when required by any other job before executing it.
Constraint
A set of constraints that can be applied to test jobs and schedules when
designing a test or a set of tests.
Controller
The WTT Controller hosts the Job Delivery Agent and WTT Execution Agent,
along with other services and applications that run constantly and perform the
functions fundamental to the operation of WTT. Additional WTT Controllers can
be tied to the Controller Automation Datastore.
Controller Admin
A user with full access to all the objects in a WTT Controller database.
Controller User
The Controller runs a few specific services in order to complete its jobs, which
run under an account that is entered during Controller setup. This account is
referred to as the Controller User or the service account.
Copy Results Task
Moves log files or other data files off the test computer and onto the log file
server. Typically, these files are separate from other WTT log files that are
output by the executable tasks.
Current Owner
The Current Owner is the user who currently has the asset in his or her
possession, who may be the same or different than the Permanent Owner.
Custom Dependencies
A dependency defines which tasks must complete before another task can
begin. Advance dependencies can be set between a subset of computers using
a dependency index.
Device
A component or peripheral hardware part that cannot run the Execution Agent
(EA).
Dimension
Dimension Value
The specific model or other value of the dimension aspect or component. For
example, NVIDIA is a possible dimension value for the video driver dimension
and 4123 is a possible build number for an operating system build.
Execution Agent (EA)
The WTT Jobs tool that runs on client computers whenever they are started. It
runs Sysparse, and then takes the resulting XML file, parses it, and uploads
the data to various WTT and Asset Tracking databases.
Execution Phase
Defines where in the job execution that the task is executed. (Deprecated
term: Category.)
The phase within the job run where the job executes. Examples include Setup,
Main, and Cleanup.
The execution phase provides a way to organize tasks into groups within a
job. Each execution phase can have its own ordering and dependency-based
execution.
Execution Order
A command line to any executable file. The execution task can be any
executable type such as EXE, VBS, and CMD.
Failure Action
The Failure Action per task allows the user to specify how the job should
continue if the current task fails.
Global Mixes
A global mix is a mix that can be applied to any job on a controller. It can be a
simple mix or an Model-based Test Development Environment (MDE)-based
mix. A simple mix is a collection of contexts with each context containing set
of constraints. An MDE-Based mix is a complex mix with a set of rules
associated to the contexts.
Group
An arbitrary set of tests. WTT uses the term "Job" to encapsulate the
information required to automate a test case. A Job consists of several pieces
of information that prescribe how the test case is to be executed. Test Cases
are extended with this data, also referred to as a "Job Definition."
Job
The logical entity that defines how a test case is executed. A typical Job
contains Logical Machine Set, Task, Dependency and Parameter definitions.
Each of these elements determines how the test case runs.
Job Collection
A set of scheduled jobs used mainly for a conceptual view of scheduled jobs
and results. Job collections are in place mainly to provide a centralize point of
view for tracking test run status.
Job Constraints
Job Delivery Agent
The agent that queries the database for scheduled jobs and delivers them to
the client computers. Sometimes also called the Push Daemon. A database
can have more than one Job Delivery Agent, depending on the load on the
database.
Job Name
The display name given to a test case or job. This is the string that is
displayed when browsing or exploring test cases.
Job Role
Provides information to WTT about how the job is to be used. Different roles
place different restrictions on jobs. Also referred to as the
JobExecutionTypeID.
Job Run
See Schedule.
Key
Variables that can be used by the test case executable. These variables can be
set at job creation time.
Library Job
A job that can be referenced as a part of another job. A library job can have
no logical machine sets (LMS) defined and it cannot be scheduled directly.
Local Logical User
Logical Machine Set (LMS)
A logical machine set containing the definition of the quantity and description
of the computer type that is required for the execution of a job. These
requirements can be both hardware and software orientated and multiple
logical machine sets may be used per job.
Main Job
Regular tasks that are executed once all setup tasks have completed. Their
ordering and dependencies are respected. The Main Job is executed in the
main phase of the job. (Deprecated term: Regular)
Manual Job
A job where the steps required to run the test case are performed manually
by the user.
Mix
The process of entering a computer or device into the Asset Tracking portion
of WTT.
Reporting Category
A flag used for reporting purposes for filtering or sorting in reports. Different
values exist, depending on the failure type. For crashes, values are ignore,
pass, fail and test; for task failures, values are product, script, and
infrastructure.
Resolver
The failure tracking and management tool for both WTT and UST.
Result
While a result is the unit of work created when a job is scheduled, a result
collection is a set of those scheduled job results. As such, it provides users
with a centralized point of view for tracking test-run status by associating
aggregate result values such as PassedJobs, FailedJobs, NotRunJobs,
NotApplicableJobs, and Total Jobs. These counts track summary information
for all results included in a collection.
Role
The role designates the type of job, keyed on how the job is to be used.
Run Jobs Task
Runs a library job within the context of the current job. Because library jobs
themselves cannot contain Run Jobs tasks, WTT jobs are limited to a single
embedded layer.
Schedule
A Schedule uses the information defined within the Job or Jobs and prepares
the jobs for execution in the designated order. Sometimes referred to as a job
run.
Schedule Constraints
Stress Job
The job that contains all the setup jobs, stress tests, and cleanup jobs for
running stress.
Sysparse
The WTT tool that runs on client computers and takes a snapshot of the
computer's configuration. Sysparse inventories the computer's hardware
components and provides information for WTT to use to determine which
computers to schedule for testing. Sysparse outputs the results to an XML
file used by WTT when applying constraints and for other operations.
Target Owner
The intended owner when transferring ownership of a system from yourself
to someone else.
Task
The set of operations that execute when a job runs or define what action to
take if the job fails. The user can assign a task to run on one or more logical
machine sets. There are four types of tasks: Execution, Run Jobs, Copy File,
and Copy Results.
Task Dependencies
See Job.
Test Case Management (TCM)
A device that has an asset tag and/or serial number and a defined device
label. It can also be a standalone device such as a printer or scanner.
Type Owner
The user who has the permissions to configure the stress type.
Vendor
E-mail alias copied on all loan and transfer requests for tracking.
WTT Controller
See Controller.
WTT Execution Agent (EA)
See Execution Agent (EA).
Appendix B: Accessibility Options
The following topics provide alternate options designed to enhance accessibility to
the functionality within the Windows Test Technologies (WTT) environment.
For additional information on accessibility issues at Microsoft, see the Microsoft
Accessibility website.
Operation: Keyboard Shortcut Using Ctrl / Keyboard Shortcut Using Alt

File Menu: Alt + F
Open: Ctrl + O / Alt + F + O
Save: Ctrl + S / Alt + F + S
Save As: Alt + F + A
Manage Enterprise: Alt + F + M
Print: Ctrl + P / Alt + F + P
Print Preview: Alt + F + V
Exit: Alt + F + X
New Job: Ctrl + N / Alt + F + N
Export Jobs: Alt + F + E
Import Jobs: Alt + F + I
Save Job: Ctrl + T / Alt + F + S
Create Schedule: Ctrl + Shift + S / Alt + F + C
Save Result: Ctrl + T / Alt + F + S
Ctrl + N / Alt + F + N
Save Mix: Ctrl + T / Alt + F + S
Ctrl + N / Alt + F + N
New Stage: Ctrl + N / Alt + F + N
New Template: Ctrl + N / Alt + F + T
Ctrl + N / Alt + F + N

Edit Menu: Alt + E
Undo: Ctrl + Z / Alt + E + U
Redo: Ctrl + Y / Alt + E + R
Cut: Ctrl + X / Alt + E + T
Copy: Ctrl + C / Alt + E + C
Paste: Ctrl + V / Alt + E + P
Delete: Del / Alt + E + D
Categories: Ctrl + Shift + C / Alt + E + I

View Menu: Alt + V
Query: Ctrl + Q / Alt + V + Q
Hierarchy: Ctrl + H / Alt + V + H
Refresh: F5 / Alt + V + R
Task Results: Ctrl + T / Alt + V + T
Machines: Ctrl + M / Alt + V + M

Asset Menu: Alt + S
Alt + S + R + C
Alt + S + R + D
Alt + S + B
My Assets: Alt + S + M
Vendor Management: Alt + S + V
Wizard: Alt + S + S + L
Asset Management - Asset Transfer Wizard: Alt + S + S + T
Asset Management - Sysparse: Alt + S + S + P + A / Alt + S + S + P + M
Alt + S + P + C
Alt + S + P + D

Explorers Menu: Alt + X
Job Explorer: Alt + X + J
Result Explorer: Alt + X + R
Job Monitor: Alt + X + M
Result Collection: Alt + X + C
Alt + X + E
Result Rollup: Alt + X + O

Admin Menu: Alt + A
Mix: Alt + A + M
Log Type: Alt + A + L
Users: Alt + A + U
Parameters: Alt + A + P
Dimensions: Alt + A + D
Attributes: Alt + A + A
Alt + A + C

Process Menu: Alt + P
Stage Explorer: Alt + P + S
Process Explorer: Ctrl + Shift + P / Alt + P + E
Management - Process Template Explorer: Alt + P + M + P

Tools: Alt + T
Scenario Builder: Alt + T + S
Plugin Manager: Alt + T + P
Metric - Configuration: Alt + T + M + C
Metric - Analyser: Alt + T + M + A

Window: Alt + W
Toolbar: Alt + W + T
StatusBar: Alt + W + S
Cascade: Alt + W + C
Tile: Alt + W + T
Opened Window: Alt + W + [window number]

Help: Alt + H
Contents: Alt + H + C
Plugins: Alt + H + P
Alt + H + A

Metric Analyzer: Alt + M
Dataset - Load: Alt + M + L
Dataset - Save: Alt + M + S

Scenario Builder: Alt + B
Validate: Alt + B + V
Run: Alt + B + R

Table B.1 Accessibility Shortcut Hotkey Combinations
Issue: The keyboard shortcut Alt + Hyphen does not activate the shortcut
menu for the current MDI child window when that window is maximized. Alt +
Hyphen is the standard accessibility key for navigating MDI child windows and
does not reliably function within WTT.
If your job has an LMS as part of its definition, then all tasks within the job
must have an LMS assigned to them. A task with no LMS in a job that has an
LMS definition is treated as an invalid task.
When using LMS default parameters to extract the computer names selected
by the LMS, differing results may be returned depending on the type of job
being used. For example: given a job with two LMS, with two computers
each:
PrimaryLMS = "Test1"
SecondaryLMS = "Test2"
Within a main job you can use the default parameters of the LMS names to
extract the machines selected by the LMS. In other words, running the
following command-line on either LMS will return both computer names:
ECHO PrimaryLMS:[PrimaryLMS]&&ECHO SecondaryLMS:[SecondaryLMS]
with the following output:
PrimaryLMS: Test1
SecondaryLMS: Test2
Use a wildcard to copy a large number of files from one location. WTT
supports system wildcards for copying files, just as Copy.exe does. For
example, using *.txt is much cleaner than typing or browsing entries file by
file.
When creating a Copy File task, you can specify system environment variables
in your paths using the following method:
\\server\share\[PROCESSOR_ARCHITECTURE]\test_binaries\*
It is important to note that any environment variable from the user profile will
not be expanded in the context of the user that is running the Copy File task,
but rather in the context of the local system.
o Common examples of user environment variables might be:
%USERNAME%, %TEMP%, etc.
o In an execute task, you would simply keep using the %% format and
ensure that CMD.EXE was running the task, which would handle
expansion under the credentials of the user running the task.
o For a Copy File task, there is no execute command line to take advantage
of this behavior. As a result, there is no built-in method to use the USER
variables for the user specified in the Copy File task.
A workaround to this behavior is to make use of the WTTCMD
/SysInitKey functionality within an execute task in order to capture
the value of the desired user profile environment variable. This is
done as follows:
1. Add an execute task to the job, usually in the setup phase but in
any case prior to the point where the variable is used.
2. Ensure that this task executes under the same user context that the
Copy File task will execute under.
3. Use the following command-line:
WTTCMD /sysinitkey /key:KEYNAME /value:%VARIABLE%
where KEYNAME is the name used in the Copy File task and
%VARIABLE% is the exact user environment variable that needs
to be captured, for example %TEMP%, %HOMEDRIVE%, or
%USERNAME%.
4. Select the Create new command shell for this task check box.
5. Save the task, and then use [KEYNAME] in the subsequent Copy
File task.
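For example, a setup-phase execute task might capture %TEMP% under a
hypothetical key named TEMPDIR (the key and paths here are illustrative,
not part of any standard job):
WTTCMD /sysinitkey /key:TEMPDIR /value:%TEMP%
The subsequent Copy File task could then use a destination such as
[TEMPDIR]\testdata\.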
When a Copy File task is used to copy a test binary to the JobsWorkingDir and
a subsequent execute task attempts to run this binary directly, the task will
fail with an error indicating that the binary file cannot be found.
This occurs because the working directory for WTTSVC is
\WTT\JobsWorkingDir\ whereas the actual job has a working directory of
\WTT\JobsWorkingDir\JobRuns\<folder name>, where <folder name> is the
Run GUID folder name.
Although WTT can call CreateProcess() with the working directory as specified
in the task, CreateProcess() does not use the working directory to find the
specific process being run in the task, so the binary must either be in the
system path or be explicitly added to the command-line. If the task runs
notepad.exe, it will therefore succeed because notepad.exe is in the path. It
would also succeed if CMD.EXE is run (or the Create new command shell
for this task option is selected).
There are several workarounds available for this problem:
Select the Create new command shell for this task option. This
launches CMD, which then uses RunWorkingDir as the CMD working
directory. CMD will then run the remainder of the command-line tasks
from this directory.
Specify a custom working folder and then use the entire path to the
binary.
For example:
C:\TESTBINS\Test.EXE switch name
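Alternatively, when the Create new command shell for this task option is
not selected, the same effect can be achieved by invoking CMD explicitly on
the command-line (the binary and switch names below are illustrative):
CMD.EXE /C Test.EXE switch name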
Use a Local Logical User (LLU) account for most operations. The User Name
and Password tab under Execution Options of the task details page can take
parameters that allow the user name and password to be applied at schedule
time. This is also useful for running tests under different user credentials.
Note: Be aware that using domain credentials will transmit the password in
clear text. If security is a concern, use an LLU.
If the command line used will make a particular task reboot, make sure the
reboot check box is selected under Task Execution Conditions. WTT relies
on this check box to determine whether the reboot is a bug check or just an
action that the task must take.
Security Concerns
Because of the large number of computers on networks running with insecure
patch levels, insecure user accounts, and other security issues, it is
recommended that users install and run the Microsoft Baseline Security Analyzer
(MBSA), available from the Microsoft web site at
http://www.microsoft.com/mbsa, in order to audit and correct important
security concerns. This is especially important for WTT Controllers, which have
full control over all systems attached to them and can make use of any LLU on
a client.
Other Considerations
If there are dimensions, parameters, or local system environment variables
that share an identical name, WTT will expand the name in brackets to the
first value defined when it searches in the following order:
1. Key
2. Parameter
3. Dimension
4. System environment variable.
For example:
All systems automatically have %WINDIR% defined as a system
environment variable that points to the location of the main Windows
folder, as in C:\WINDOWS.
If a job is created with a task to execute ECHO [WINDIR]&&PAUSE and
executed, WTT will look first for keys, then parameters, then dimensions,
and then environment variables, and will return the environment variable
value C:\WINDOWS.
If a dimension named WINDIR is then added to the job with its value set
to dimension and the job executed again, WTT will again look for keys,
then parameters, and then dimensions, and finding one, will expand
[WINDIR] to return dimension rather than C:\WINDOWS.
If a parameter named WINDIR is added to the job with the value param
and the job executed, WTT will look first for a key, and then a parameter,
and finding one, will expand [WINDIR] to return param rather than
dimension or C:\WINDOWS.
And lastly, if a key named WINDIR is added to the job with the value key
and the job executed, WTT will look first for a key, and finding one, will
expand [WINDIR] to return key rather than param, dimension, or
C:\WINDOWS.
Test case management is used to manage all test cases in a test run. It includes
marking the start and the end of each individual test case. Upon starting a test,
the logger creates a context based on this test automatically. Similarly, end test
closes the test case context. It also includes setting test case information as well
as test computer-specific information so that auto-logging is possible. A rollup
XML is generated at the end of the test run to summarize the overall pass/fail
results.
Terminology
WTT Log Device
In WTT Logger terms, a device is an object that can process a trace in a
certain way.
WTT Log Device String
The WTT Log Device String is a string that represents the configuration of
logging outputs.
// Usage sketch for the WTT Logger C++ API (assumes the WTT Logger header
// that declares CLogger and the log device handle type has been included).
hDevice = NULL;
CLogger Logger;

// Define the outputs to which logs will go.
Logger.CreateLogDevice(
    L"$LocalPub($LogFile:file=foo,writemode=overwrite)",
    &hDevice);

// Run a test case and add some logs.
Logger.StartTest(L"Test1", hDevice);

// Execute some test code.
Logger.Assert(FALSE, __WFILE__, __LINE__);
Logger.EndTest(L"Test1", WTT_TESTCASE_RESULT_FAIL, L"End of Test1", hDevice);

// Run another test case and add some logs.
Logger.StartTest(L"Test2", hDevice);

// Execute some test code.
Logger.Trace(WTT_LVL_WARN, hDevice, __WFILE__, __LINE__, L"Warning Message");
Logger.EndTest(L"Test2", WTT_TESTCASE_RESULT_PASS, L"End of Test2", hDevice);

// Clean up.
Logger.CloseLogDevice(NULL, hDevice);
WTTCMD Commands
The WTTCMD Command-Line tool provides commands for the following functions:
Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.
defined.
If any of the keys (enclosed between % characters) is not resolved, it returns
the default expanded string. This prints out the expanded string to the
standard output in the following format:
WTTCmdSysExpandStr:ExpandedString = <evaluated value>
The task would then have to parse the output of the above command and get
the evaluated value.
Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.
Note: This feature can only be used from inside a task. It will fail if invoked
directly from a command prompt.
Additionally, /taskwillreboot is another command-line switch that can be
used with WTTCmd.exe /eareboot. By supplying the /taskwillreboot switch,
the WTT service Execution Agent is told that the task that initiated the
WTTCmd.exe command will itself initiate a shutdown. If this switch is not
specified, the EA will initiate the reboot process.
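For example, a task that performs its own shutdown as part of its test
sequence might call:
WTTCmd.exe /eareboot /taskwillreboot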
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /addsymboluser /user:<username>
/domain:<domain> /password:<password>
Where:
<username> is the user name to be added.
<domain> is the domain name to be added.
<password> is the password for that user account.
For example:
WTTCmd.exe /addsymboluser /user:abc /domain:test
/password:abc123
This will configure the local symbol user as the default symbol user.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.EXE /deletesymboluser
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.exe /querysymboluser
This will display the list of symbol users configured on the machine.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.EXE /addlogicaluser /localName:<localName>
/user:<username> /domain:<domain>
/password:<password>
Where:
<localName> is the local name referring to the user credential to be
added.
<username> is the user name of the account.
<password> is the password for that account.
For example:
WTTCmd.EXE /addlogicaluser /localName:Local /user:abc
/domain:test /password:abc123
4. Grant the LLU administrator rights to the client computer.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type:
WTTCmd.EXE /deletelogicaluser /localName:<localName>
Where:
<localName> is the local name referring to the user credential to be
deleted.
For example:
WTTCmd.EXE /deletelogicaluser /localName:Local
Note: All Logical Users configured on the computer may be deleted using
the command:
WTTCmd.EXE /cleanlogicaluser
2. In the Run dialog box, type cmd, and then click OK.
3. This will display the list of logical users configured on the machine.
Appendix F: WTTOMCMD
Command Tool
WTTOMCMD is a command-line utility that provides test engineers with an
alternate data update mechanism to the UI. It serves to facilitate the porting of
data across Windows Test Technologies (WTT) controllers or to provide a means
to tie existing job creation/execution automations with the WTT framework.
Actions that can be performed with this tool include the import/export of test
cases, results, adding or updating results, updating assets, as well as the
scheduling of test cases.
The command-line interface of this tool provides test case developers with a
natural means to add an extra layer of automation to repetitive or high-volume
WTT user actions. WTTOMCmd is built with WTT and is available on the
release share among the x86 binaries. To use it, download it to the
folder from which WTT Studio is run, as it depends on other Object Model binaries.
The configuration file WTTOMCmd.exe.config must also be copied for the utility to
function correctly. Note that unlike the WTTCmd utility, this is not a client-side
tool; it sits in a single location only, from where it interacts with the
WTT backend.
Terminology
MachineConfig
The value of a particular dimension, within the finite set of values specified for the
dimension at the time it was created.
Saved schedule
A schedule that has been saved as a .wtq file. This schedule will consist of the
job(s) to be scheduled, any mixes or constraints, parameter values and schedule
operations.
Import/Export XML files
These files are the XML form of exported objects. When an object is exported,
other objects that it tightly depends upon are usually exported as well. Similarly,
when an object is imported, its dependent objects are imported. In the XML files,
each object type gets its own folder, and within these folders, every object forms
its own XML file.
WTTOMCMD Syntax
The standard command-line syntax for WTTOMCmd is of the following form:
wttomcmd.exe [/databaseparams] [/commanddirective] [/properties].
Database Parameters
Database parameters uniquely identify the database that the utility should
perform the desired operations on. They comprise the following three
mandatory arguments:
Identity Server Name
The name of the machine that functions as the Identity Server in the WTT
Enterprise. It should be provided in the following format:
/IdentityServer <identityservername>
Identity Database Name
The name of the database on the Identity Server. This is typically the name of
the database as seen in SQL Enterprise Manager when connected to the
Identity Server.
/IdentityDatabase <databasename>
Logical Datastore Name
The logical name given to the database on the Identity Server. This is usually
the name seen in the datastore drop-down list in WTT Studio.
/LogicalDatastore <logicaldbname>
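For example, the three database parameters used in the examples later in this
appendix combine as follows (the server and database names are illustrative):
/IdentityServer MyServer /IdentityDatabase DB1205 /LogicalDatastore StressDB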
Command Directive
The command directive specifies what action is to be performed. The following
commands are supported:
/importjob
Inserts or imports records into the database. Jobs and results can be imported
with this directive.
/exportjob
Used to export a job from the database to XML files. The utility uses the core
Import/Export functionality and the resulting XML files are configured as if
they had been exported from the UI.
/importresult
Used to import results from XML files into the database. The utility uses the
core Import/Export functionality and expects the XML files to be exactly as if
they were created by a standard export operation.
/exportresult
Used to export a result from the database to XML files. The utility uses the
core Import/Export functionality and the resulting XML files are configured as
if they had been exported from the UI.
/updateresult
Used to modify an existing result record or add a new one. If the specified
result record does not exist in the database, a new one is created with the
supplied information. Note: If the result record is specified using a ResultID
that does not exist, a new result will be created but it will not have the same
ID. This is because the ID is auto-generated by the database. As a result, the
supplied ResultID will effectively be dropped.
/createschedule
Used to schedule a previously saved XML (.wtq) file. The input XML should
have a schema similar to that of a schedule saved from the UI.
Note: This ability to schedule jobs on the command-line was previously
provided by the command-line utility WTTCmdSch, which has been retired and
will no longer be supported.
/updateasset
This is used to set either the status of an asset, or to move it to another asset
(machine) pool, or both.
Properties
Properties are the various arguments needed by the particular command to
successfully complete the database operation. They are provided in the following
format:
property:value <property:value ...>
For every supported command directive, at least one property is required.
However, depending on the operation, a certain minimum set of properties may
be mandatory. The specific properties required are listed under the supported
command directives below. The available properties are:
Path: The path to the Import/Export XML files. Because these files tend to
be in a collection of folders, the path supplied should point to the parent
folder of this collection.
JobID: The specific ID of the target job. Note that the combination
Path+job name cannot be substituted because it does not uniquely identify
a job.
JobGUID: The GUID of the target job.
ResultID: The ID of the result that should be exported or updated. If
updating a result, a new result will be created if no result is found in the
database with the specified ID. However, because the ID field is
auto-generated by the database, the new record's ID will not be the one that
was supplied.
ResultGUID: The GUID of the result that should be updated. If no result
is found in the database with the specified GUID, a new one will be
created. As in the case of the ResultID, because the GUID field is
auto-generated by the database, the new record's GUID will not be the one that
was supplied.
Pass: The Passed count for this result. If not provided, this property will
default to 0.
Fail: The Failed count for this result. If not provided, this property will
default to 0.
NotRun: The Not Run count for this result. If not provided, this property
will default to 0.
NotApplicable: The Not Applicable count for this result. If not provided,
this property will default to 0.
ResultStatus: The current status of the result. This property can have a
value of Cancelled, Completed, InProgress, Investigate or Resolved.
StartTime: The start time of the test run for this result. If not provided,
this property defaults to the current time. The time may be provided in
any format that can be converted by DateTime.Parse method.
EndTime: The ending time of the test run for this result. If not provided,
this property defaults to the current time. The time may be provided in
any format that can be converted by the DateTime.Parse method.
AssignedTo: The user alias of the person this test case is assigned to. If
this property is not provided, the current logged on user will be used as a
default.
LogLocation: The location of the log files. If not provided, this property
will remain blank.
MachineID: The ID of the computer upon which the test case was run.
The MachineConfig of this computer will be attached to the result. The
specified computer must exist in the database for this property to be
used.
MachineName: The name of the machine that the test case was run on.
The MachineConfig of this machine will be attached to the result. The
specified machine must exist in the database.
Dimension: The name of an existing dimension. This is required to create
a new MachineConfig that describes the machine on which the test case
was run.
Value: The value for the dimension described above.
AssetID: The ID of the target computer record.
AssetName: The name of the target computer.
AssetStatus: The current status of the target computer. This property can
have a value of Ready, Running, Manual or Debug.
AssetPoolID: The ID of the asset (machine) pool to which the computer
should be moved.
AssetPoolPath: The name of the asset (machine) pool along with its
path, to which the computer should be moved.
ReportType: The items to include in the job import report. These can be
either Full or FailureOnly.
AttributeOptions: Options for Export/Import when the job being
imported has attributes that do not exist in the target database.
GlobalMixOptions: Options for Export/Import when the job being
imported has mixes that cause conflicts.
GlobalParamOptions: Options for Export/Import when the job being
imported has parameter conflicts.
JobConflictOptions: Options for Export/Import when the job being
imported already exists in the database.
JobOverwriteOptions: Options to determine under what conditions a job
should be overwritten if it already exists.
LibraryJobConflictOptions: Options for Export/Import when the library
job being imported already exists in the database.
LibraryJobOverwriteOptions: Options to determine under what
conditions a library job should be overwritten if it already exists.
To import jobs
Supported properties:
Path
AttributeOptions
GlobalMixOptions
GlobalParamOptions
JobConflictOptions
JobOverwriteOptions
LibraryJobConflictOptions
LibraryJobOverwriteOptions
RemapHierarchy
AppendFeature
RemapFeature
ImportCategory
ImportHierarchy
ReportType
All properties except Path may be provided in the configuration file, and are
hence optional on the command-line. In case of a conflict, the value passed
on the command-line overrides the value found in the configuration file.
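For example, an import might look like the following (modeled on the other
examples in this appendix; the path is illustrative):
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ImportJob Path:C:\API
test cases\Print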
To export a job
Supported properties:
Path
JobID
JobGUID
Note: Either the JobID or JobGUID may be provided. If both are supplied,
JobGUID is used.
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ExportJob JobID:22
Path:C:\API test cases\Print
To import results
Supported properties:
Path
AttributeOptions
GlobalMixOptions
GlobalParamOptions
JobConflictOptions
JobOverwriteOptions
LibraryJobConflictOptions
LibraryJobOverwriteOptions
RemapHierarchy
AppendFeature
RemapFeature
ImportCategory
ImportHierarchy
ReportType
ResultsOnly
ResultConflictOptions
Note: All properties except Path may be provided in the configuration file, and
are hence optional on the command-line. In case of a conflict, the value
passed on the command-line overrides the value found in the configuration
file.
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ImportResult Path:C:\API
test cases\Print
To export a result
Supported properties:
Path
ResultID
ResultGUID
Note: Either the ResultID or ResultGUID may be provided. If both are
supplied, ResultGUID is used.
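For example, an export might look like the following (modeled on the other
examples in this appendix; the result ID and path are illustrative):
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /ExportResult ResultID:72
Path:C:\API test cases\Print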
To update a result
The supported properties are combined in the following form:
[JobID:<JobID> | JobGUID:<JobGUID>]
ResultStatus:<ResultStatus> Pass:<Pass> Fail:<Fail>
NotRun:<NotRun> NotApplicable:<NotApplicable>
StartTime:<StartTime> EndTime:<EndTime>
AssignedTo:<AssignedTo> LogLocation:<LogLocation>
[MachineID:<MachineID> | MachineName:<MachineName> |
Dimension:<Dimension> Value:<Value>]
Example 2:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult
ResultGUID:F9168C5E-CEB2-4faa-B6BF-329BF39FA1E4
JobGUID:936DA01F-9ABD-4d9d-80C7-02AF85C822A8
ResultStatus:InProgress MachineID:12
Example 3:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult ResultID:72
JobID:4 ResultStatus:Completed StartTime:1/9/2004 5:35:12
AM EndTime:1/9/2004 2:04:39 PM MachineName:Stress10
Example 4:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateResult
ResultID:188 JobID:51 ResultStatus:Investigate
AssignedTo:johndoe Dimension:WTT\OS Value:Longhorn
Dimension:WTT\Processor Value:X86 Dimension:WTT\Build
Value:chk
To schedule jobs
Supported properties:
Path
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /CreateSchedule
Path:C:\Schedules\Overnight Run.wtq
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset AssetID:36
AssetStatus:Running
For example:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset
AssetName:TeamTest4 AssetPoolID:8
or:
wttomcmd /IdentityServer MyServer /IdentityDatabase
DB1205 /LogicalDatastore StressDB /UpdateAsset
AssetName:TeamTest4 AssetPoolPath:$\TeamPool\MyPool\PoolC
JobImportOptions
These parameters dictate the import behavior when a conflict occurs with a job
or a dependent being imported into the database.
AttributeOptions
Specifies what Export/Import should do when the job being imported has
attributes that do not exist in the target database. Supported values are:
Add: Add any missing attributes to the target database while importing.
UseExisting: Ignore all attributes that are missing in the target
database. Include only those present.
Drop: Drop all attribute mappings while importing.
GlobalMixOptions
Specifies what should be done if the job being imported has mixes that cause
conflicts. Supported values are:
UseExisting: Ignore mixes that are not already present in the database
and use only mix versions from the database.
Overwrite: If a mix being imported already exists, overwrite it.
GlobalParamOptions
Specifies what should be done if there is a conflict with the parameters in the job
being imported. Supported values are:
FeatureImportOptions
These parameter options indicate how Export/Import should match the feature
hierarchy being imported with what exists in the target database.
RemapHierarchy
Indicates if the feature hierarchy of the jobs being imported should be recreated.
Supported values are:
True: Import the feature hierarchy and append it to the feature path in
AppendFeature.
False: Ignore the exported feature hierarchy and import all jobs into the
feature path in AppendFeature.
AppendFeature
Indicates the feature path into which the features (or jobs) should be imported.
This option should be passed with the trailing backslash. It can be root ($\).
RemapFeature
Indicates the feature path which should be used instead of the one mentioned in
AppendFeature if there is an error. It can be root ($\) or blank. A likely scenario
where this will be used is when the user does not have permissions to import into
the feature path provided in AppendFeature.
CategoryImportOptions
These parameter options indicate how Export/Import should handle clashes
between the category hierarchy being imported and what exists in the target
database.
ImportCategory
Indicates the category to which the hierarchy should be appended (or remapped,
depending on the value of ImportHierarchy).
ImportHierarchy
Indicates if the category hierarchy should be imported. Supported values are:
True: Import the category hierarchy of the exported jobs and append it
to the category indicated in ImportCategory.
False: Do not import the category hierarchy and remap all jobs to the
category indicated in ImportCategory.
JobImportReportOptions
A success/failure report is automatically generated by the job import module and
may be customized using parameters in the configuration files.
Note: While results are being imported, jobs and their dependent objects are
imported as well. Currently, however, report functionality for results is not
available, and the report generated will describe the job import part only.
ReportType
Indicates the type of report to be generated. Supported values are Full and FailureOnly.
ResultImportOptions
These parameters dictate the handling of conflicts or mismatches while importing
results.
ResultsOnly
Indicates whether the import XML files contain only the result related files, and
not the job related files, and therefore that every result points to a job already
existing in the database. Supported values are:
True: Import only the result XML files. They point to jobs that already exist
in the database. If the job is not found, the result object will not be imported.
False: The XML files for the jobs are also provided and should
be imported along with the results.
ResultConflictOptions
Indicates what should be done if the result being imported already exists in the
database. Supported values are:
Copy: Import the result as a copy, using a new GUID.
Overwrite: Overwrite the existing result.
Skip: Do not import the current result and skip to the next result.
Stress Scheduler
Stress Scheduler is used to schedule stress runs on computers from machine
pools. It generates stress mixes based on each computer's capabilities and
schedules them on the controllers to which the computers are registered.
Terminology
Setup stress jobs (tests)
Jobs that are executed before the regular jobs in order to do setup tasks. These
are part of the stress run unless manually unchecked.
Clean up stress jobs (tests)
Jobs that are executed after the regular jobs to do cleanup tasks.
Mandatory stress jobs (tests)
These are tests that always run regardless of the generated mix, and cannot
be unchecked even manually. All the above three types of tests can be either
mandatory or optional.
Note: The jobs listed with the icon next to them are mandatory and
cannot be unchecked.
By default, the job list shows only Regular jobs. This view can be
changed to show all jobs, setup jobs only, regular jobs only, or cleanup
jobs only.
5. To regenerate the last stress mix that ran on the computer, right-click the
specific computer, and then click Last Mix.
6. To regenerate a stress mix from the previous mixes that ran on the
computer, right-click the computer, point to Previous Mix, and then click
the desired mix from the list.
Terminology
Test Group
A collection of Stress Tests that are related in terms of their behavior or the
component of the operating system that they are supposed to test.
Stress Type Owner
A user who has been granted the permission to modify a stress type. Edit and
update operations on a stress type are limited to users who own that type.
Additionally, any user who is the owner of any stress type in the system can
also create or modify all stress groups and approve or disapprove jobs for
stress.
UST Administrator
A user who has unlimited permissions on all objects in the system; that is,
the ability to make any change to any object (tests, groups, or types).
If this menu option is not enabled, the user is not registered as a UST
Administrator.
5. Click the selection for job approval as appropriate:
Approve as Stress Test.
Approve as Setup Job.
Approve as Cleanup Job.
Appendix H: Machine
Configuration Query Dimensions
Machine configuration query dimensions are part of a suite of UI, services, and
libraries that take information from an individual computer and store it in the WTT
database in a manner customized to each team's needs. The suite comprises
additions to the Admin Dimensions UI, the Asset Pool UI, and the WTT
Controller service. Collectively, this suite is called Machine Configuration Update,
or MCU.
Troubleshooting Updates
If the Update does not seem to happen
There are several reasons why this may happen:
You need to wait longer for Sysparse to complete collecting the data on
the test computer and send it to the controller. It should take no more
than 20 minutes, depending on the speed of the test machine.
The query may not have returned any results. If the query does not work
against the test computer's specific XML file, then you won't have any
results.
A permissions problem may exist. For example, if you have a lab router
that transfers the Sysparse XML to a separate controller that is servicing
several machine pools, make sure the controller machine has full
permission to update the machine records for your machine pool. It also
needs access to the share where the XML data is stored.
Beta 2 XPath queries may not have been converted to the new RTM format
(see below).
Examples of machine configuration queries in the RTM format:
infograb/object[@name='INFOGRAB_BLOCK_DEVNODE']//object[@name='devnode']/object[@name='Devnode
08002BE10318}']/../property[@name='deviceid']
infograb/object[@name='INFOGRAB_BLOCK_NETWORK']/object[@name='NDIS']/object[@name='NetworkAdap
with(@value,'VIRTUAL'))]/../property[@name='description']
boolean(infograb/object[@name='INFOGRAB_BLOCK_NETWORK']/object[@name='NDIS']/object[@name='Net
s'])
count(infograb/object[@name='INFOGRAB_BLOCK_DISK']/object[@name='disk'])
Appendix I: Sysparse
Sysparse is a tool installed on a client computer that inventories the computer's
hardware components and provides that information to Windows Test
Technologies (WTT) for use in determining which computers to schedule for
testing. The WTT client calls Sysparse each time that client restarts. Additionally,
Sysparse maintains its in-memory tree while the client is running. Tests running
on the client can use the Sysparse APIs to query data from Sysparse in real-time
and ask Sysparse to refresh data in real-time. As well, the Sysparse APIs can be
used to load custom gatherers which collect additional information. For full
documentation on the Sysparse API set, see the WTT Software Development Kit
(SDK).
The Sysparse tool was developed on the Infograb technology and consists of the
Infograb engine and gatherers. This engine creates and manages data in an
in-memory tree, and provides interfaces for tests to programmatically retrieve
computer data in order to make run-time decisions. Gatherers collect the data
when called by the engine and use the Sysparse APIs to return the data they
collect to the engine. The data collected by Sysparse is used by WTT to enable
scheduling against specific hardware constraints and to create a persistent,
unique ID for each computer in the WTT database.
Base Sysparse
Base Sysparse consists of the Infograb engine and the standard gatherers
that are compiled in the Sysparse executable.
Custom Gatherer
The Custom Gatherer (also referred to as a COM gatherer or a plug-in
gatherer) consists of a set of COM objects called through the Sysparse
engine's APIs. These COM objects collect data in addition to the data collected
by Base Sysparse.
Registry Gatherer
As an alternative to writing custom gatherers, Sysparse has a registry data
gathering feature for custom data. Users can add new key-value pairs to the
registry location below. Sysparse automatically collects all key-value pairs
from this registry location and makes them available for scheduling.
HKEY_LOCAL_MACHINE\Software\Microsoft\WTT\Sysparse
\ExtendedData
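For example, a custom value could be added to this location with the reg.exe
tool (the value name and data below are illustrative):
reg add "HKLM\Software\Microsoft\WTT\Sysparse\ExtendedData" /v "TeamBuildNumber" /t REG_SZ /d "4123" /f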
Note: This registry key may only be populated with data of type REG_SZ.
Keys with other types of data will be ignored by the gatherer. As well,
nesting of keys under this key is not allowed. Any keys nested under this
key will be ignored by the gatherer.
All keys have a default (nameless) value. In the cases where this value is
implemented, the resulting XML output will contain the entry value node, but
not the KeyName node.
The following sample data will be used to demonstrate how the Sysparse XML will
look.
[HKLM\Software\Microsoft\WTT\Sysparse\ExtendedData]
Test-client 1's Key=6481062
Test-client 2's Key=Test2
When this data is collected by the gatherer, Sysparse will produce the following
XML:
<INFOGRAB_BLOCK_WTT>
  <RegistryInfo>
    Software\Microsoft\WTT\Sysparse\ExtendedData
    <RegistryKey>
      <KeyName>Test-client 1's Key</KeyName>
      <KeyValue>6481062</KeyValue>
    </RegistryKey>
    <RegistryKey>
      <KeyName>Test-client 2's Key</KeyName>
      <KeyValue>Test2</KeyValue>
    </RegistryKey>
  </RegistryInfo>
</INFOGRAB_BLOCK_WTT>
Sysparse Data
The data is collected by the API CoverageIsInstrumentedBuild
(_CoverageIsInstrumentedBuild@0) in WinCover.dll. Since this API/DLL is only
available on coverage builds, it is loaded dynamically.
If the DLL/API is not available, the node will not be present in the XML output.
The data will appear in the Sysparse output in this manner:
<INFOGRAB_BLOCK_RUNTIME>
  <CoverageIsInstrumentedBuild>1</CoverageIsInstrumentedBuild>
</INFOGRAB_BLOCK_RUNTIME>
Computer Attribute
When the Sysparse output is moved to the server, an attribute will be added
automatically to the Computer Configuration in this form:
Caveats
A false positive may occur under certain circumstances. If a coverage-enabled
build was installed, then WinCover.dll will be present on the hard drive. If the
computer is upgraded to a non-coverage-enabled build, Sysparse will erroneously
publish that the build is coverage-enabled.
Workaround: Clean install before and after using a coverage-enabled build.
For more information, see additional documentation regarding WinCover.dll at this
location:
http://codecoverage/v2/defaultredir.aspx?displayNavTree=false&path=/Help/displayhelp.aspx&params=navPageURL%3D/Help/CCSavingData.htm
Terminology
Log View
One of a group of predefined formats in which a log file can be displayed for
viewing within WTT. Log views are defined by view transforms, and thus,
additional formats may be defined by WTT users.
HW Configuration Log View
The HW Configuration Log view formats and displays the log files produced by
Sysparse. These log files contain detailed information about the configuration
of the test machines, which is collected each time the computer boots and
transferred from the test client to the Controller.
Infrastructure Log View
The Infrastructure Log view formats and displays the files produced by server
to client communications via the WTT Service, including information on
scheduled jobs.
Test Log View
The Test Log view formats and displays the selected test log file. These files
are transferred from the test clients to the Controller after the selected tests
have run.
View Transform
A set of instructions installed on the WTT file store and applied to a specific
log file. These transform files use the .xslt file format, and applying these
transforms converts the raw XML data of a log file into an organized HTML
page for easy viewing.
4. Right-click the job result you wish to view, point to Test Log,
Infrastructure Log, or HW Configuration Log, and then click the
specific computer whose log you wish to view.
Note: If the view has already been filtered to a specific machine or a
specific task on a machine, it is only necessary to click on the Test Log,
Infrastructure Log, or HW Configuration Log as appropriate.
5. If a warning regarding loading very large files appears, click OK.
Note: To view logs based on individual tasks making up a specific job, click
the Show Task List button to display the Task Execution Status pane, and
then click on the desired job. Right-click on the task to display the desired
log.
Log Views
Each log view initially displays information in the default raw XML format. A
number of predefined view transforms are provided to allow users to see the data
in different formats depending on individual needs. These transforms are
available in the View drop-down list.
Failure: This selection formats and displays information about each failed
test from the selected test log.
Not Run Tests: This selection formats and displays information about
each blocked or skipped test.
Summary: This selection formats and displays a summary of test results
and lists each test result.
Infrastructure Log View
The View drop-down list within the Infrastructure Log view provides the following
predefined views:
Default: The default view displays the raw unfiltered and unformatted
XML log file.
HW Configuration Log
The View drop-down list within the HW Configuration Log view provides the
following predefined views:
Default: The default view displays the raw unfiltered and unformatted
XML log file.
WTT: This selection formats and displays the registry information for the
selected computer.
Terminology
Plug-in
A plug-in can have one or many groups, each of which is a functional unit
that typically consists of menu items, toolbar buttons, and windows.
Manifest
The UI components defined by the plug-in manifest within a group that will be
instantiated when WTT Studio starts, including such things as menu items
used to create a group instance.
Dynamic Contents
A group that allows only one group instance to be created. Any attempt to
create another instance will simply cause the existing instance to be
re-activated.
Multiple Instance Group
When WTT Studio starts, it automatically creates an instance of each group
marked as a startup group. This flag can be set from within Plugin Manager.
Plugin Manager
The user interface within WTT Studio, used to manage plug-in settings. It is
available through the Tools menu.
To deploy a Plug-in
1. If they are not already so located, place the plug-in manifest, assemblies,
help file and resource files into a single directory.
2. On the Tools menu, click Plugin Manager.
3. Click Add.
4. Click Browse and navigate to your manifest file.
To remove a Plug-in:
1. On the Tools menu, click Plugin Manager.
2. Select the plug-in to be removed and then click Remove.
3. Click Yes to confirm the removal, and then click OK.
4. Click Yes if you want to restart WTT Studio immediately or No if you wish
to do it later.
To disable a Group
1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the target group and expand it by
clicking the + icon.
3. Click View Groups.
4. Select the group to disable.
5. Click Disable.
6. Click OK.
To enable a Group
1. On the Tools menu, click Plugin Manager.
2. Select the plug-in node containing the target group and expand it by
clicking the + icon.
3. Click View Groups.
4. Select the group to enable.
5. Click Enable.
6. Click OK.
Terminology
Metric
The WTT automation database that stores the metric data collected by the
client engines.
Plug-in
External software module that plugs into existing modules to add or improve
functionalities to the overall framework.
IPF
A plug-in that loads into the Configuration UI and allows for setting thresholds
on metrics that are being collected. This allows for the defining of pass/fail
thresholds of jobs based on metrics.
IPF Client plug-in
A plug-in that loads into the client engine and monitors the actual metrics
collected at run-time. It tests whether these metrics are within defined
thresholds and then takes appropriate actions as defined if the Metrics fall
outside of the defined thresholds.
Criteria
The definition of a threshold for a given metric or metrics, as specified in the
IPF UI plug-in.
General
This section allows users to configure the following settings:
The client engine will not report the metric data to the database, giving users
the flexibility to monitor the data offline if required. This check box is
cleared by default.
Load Plugin
Allows for users to load their own custom configuration UI plug-ins to extend
existing functionality.
EventLog
This section allows users to configure the following event log-related settings.
Event Source Settings
Allows users to specify event source filters to apply on the event logs, so that
only events from a specific EventSource are collected. Use the Add/Remove
buttons to add or remove one or more EventSources from the displayed list.
The list of EventSource filters defaults to (default), meaning events from
all EventSources are collected.
Event Category Settings
This group allows users to specify event category filters to apply on the event
logs so that only events matching a specific EventCategory are collected. Check
all the categories of events that you want collected, separately for each
EventSource added above. The default is not to collect any category of events
for any EventSource (and thus no event logs at all) unless explicitly set by the
user.
Note: The client engine will only report new events that have occurred since
the last time that the event logs were read.
make sure that the selected performance counters will, in fact, exist on the client
machines while the client engine is collecting this information during run-time.
Pool Tags
Users may select the driver pool tag(s) and associated information about the
selected pool tag(s) to be collected from the client machines at run-time. This
information may be updated from a list of excluded tags, and the final list in the
Include section is what the client engine will collect. Currently, collection of the
following data for each pool tag is supported: Nonpaged Allocations, Nonpaged
Frees, Nonpaged Used, Paged Allocations, Paged Frees, and Paged Used. By
default, no pool tag information is collected.
The Pool Tag option must be enabled on clients for the client engine to be able
to collect this data. The engine will not enable the option by itself. If a client
does not have it enabled, the engine will skip collecting this data on that client.
Additionally, the list of displayed pool tags is derived from the pooltags.txt file
(from the WS03 sources).
Process
Users may specify the process name(s) for the processes to collect information
on. The list of processes to collect information on can be added to or updated, along
with the specific counter used, by using the Add/Remove buttons. Currently, the
engine supports collecting the following data for each process: Process ID,
HandleCount, WorkingSet, VirtualBytes, PagedPoolBytes, and NonPagedPoolBytes.
The user can also enter the service name (such as wuauserv, dhcp, and so on)
and the engine will pick the correct process during run-time under which the
service is running and collect data about that process. This allows for tracking
services that run under a common service host (such as svchost.exe). In this
case, the information that is collected will also include other services that are
running in the same process.
4. Select an appropriate operator for the threshold value specified from the
Operator drop-down list. (These include <, >, ==, and so on to dictate
whether the metric should stay below, above or equal to the threshold
value.)
5. Type an appropriate threshold value for your metric in the Threshold box.
6. Select an operator from the And/Or drop-down list if you wish to combine
this threshold with other thresholds. If ignored, only one threshold is set.
7. Click OK.
Additional thresholds may be added by repeating the steps and all may be
rearranged using the Move Up or Move Down buttons if desired.
Note: You should select whether the test will pass or fail if the collected metrics
meet the specified threshold. Additionally, a command-line to execute in case of
failure may be dictated. By default, selecting Pass will make the engine log a
Pass in the appropriate WTT log for the job when the threshold is met and
continue monitoring the metrics. Selecting Fail will make the engine execute
the failure command-line, if specified, before logging a Fail in the appropriate WTT
log for the job and exiting.
Configuration UI FAQs
How can users get a cumulative log that contains all the updates that are
being sent to the database?
This feature is not currently supported in WTT Metric as this is contrary to the
basic concept of storing collected metric data in one location.
Can users reduce the report interval below one minute?
This is not currently supported due to the expectation that this would grow
the WTT database excessively while not providing significant value.
Can users write custom Configuration UI tabs?
See Advanced Metric Usage information.
Can event logs be filtered using something other than Event Source and
Event Category?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can counters be added that are not part of standard system performance
counters?
See Advanced Metric Usage information.
Can pool tags be added that are not listed in the Pooltags section?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can additional metrics other than what is displayed be added to pool
tags?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can specific processes be added when there will be multiple instances of
that process running at a given instant?
The metric feature allows users to add service names instead of just
process names, which can be used to differentiate between service hosts
that share the same process name. WTT Metric does not currently support
more than this. If WTT Metric cannot distinguish between processes due to
their naming, the client engine will pick only the first instance of that process
(process name) and report data on that process only.
Can additional metrics on processes be added above what is displayed?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can more than a simple command-line be run when metrics do not meet
the thresholds?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users dynamically add/update configuration/settings at run-time?
See Advanced Metric Usage information.
Can users programmatically add/update configuration settings?
See Advanced Metric Usage information.
Can tests do more than Pass or Fail, if metrics do not meet the specified
criteria?
See Advanced Metric Usage information.
Queries
Users may query the stored metrics in the selected datastore based on various
filters. There are two primary ways to query for metrics: based on job results, or
based on the computer name.
button.
If the selected result has metric data collected, the Machines
drop-down list is populated with all computers that were part of that job
result. The Metrics list box on the right is populated with the metric name(s)
collected for the computer selected in the Machines drop-down list.
The Start time and End time are set by default to the start and end periods
for the complete job selected. The user can further filter the metrics by
changing these dates so as to view only a selection of that data.
Display Section
Users may also view the queried metrics in graphical form. Currently, these can
be displayed in three formats (line graph, bar graph, and spreadsheet) by selecting
the desired format in the Select View drop-down list. The default view is a line
graph.
Users may have more than one query and thus more than one set of metric data
open at the same time, allowing for comparison of metric data or further analysis
by the user. Each time a new query is defined, a new set of view formats is
created. The user can move between multiple views and queries using the Select
Query drop-down list.
Spreadsheet View Tips
As well as being able to view data in a spreadsheet format, this view enables
users to select a section of the spreadsheet and use the Copy button to copy that
data to the Clipboard, where it can then be pasted into any application that can
handle the OLE data transfer appropriately.
You can also use the Export to Excel button to export the complete data set
displayed to a new Microsoft Excel worksheet, if Excel has been installed.
All data from a query may also be stored in a comma-delimited file (*.csv) and
saved on a local hard disk by pointing to Data Set on the Metric Analyzer menu
and selecting Save.
Previously saved metric data can be loaded from a comma-delimited file (*.csv)
by pointing to Data Set on the Metric Analyzer menu and selecting Load.
Note: Line and Bar graph formats of the data will be automatically re-created
when you load previously saved metric data from a CSV file.
Line and Bar Graph Tips
Both Line graph and Bar graph views of queried data may be saved as JPEG
images by clicking Save when pointing to Graph on the Metric Analyzer menu.
Analyzer UI FAQs
Can users change the resolution of the graph image that is saved?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can the user edit the data in the spreadsheet view?
This feature is not currently supported in WTT Metric due to unknown usage
information.
However, the data can be exported to Microsoft Excel where you would be
able to edit the data as required.
Can users add more view formats?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users do more than just view the data?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users programmatically access the data through the UI?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Can users add more Filters to query metric data than what is currently
supported in the UI?
This feature is not currently supported in WTT Metric due to unknown usage
information.
Jobs integration
To assist users in setting metric configurations to collect and analyze job data
immediately without the added complexity of invoking the proper command-line
for the client engine, a common Library Job has been added to default
installations of the WTT infrastructure.
To use this feature, users need to include this library job as a sub job within
their actual jobs.
Note: Only one metric configuration sub job can be saved per job. If
multiple configurations are needed to collect different metrics, it will be
necessary to create different jobs.
Advanced Options
Component teams can add a number of customizations or advanced options to
tailor metrics collection and analysis to their specific needs.
Most classes, DLLs and methods that are needed for customization are available
within Source Depot, within the WTT Development tree. For additional
information, see the WTT Software Development Kit (SDK).
Customization Procedures
Create a job using Job Explorer to collect metrics, with the following local
parameter in addition to standard job information:
o A local parameter that identifies the metric configuration file (for
example, MyMetricConfig.txt); this is the parameter referenced in step 5 below.
Add a regular task to the main job to be integrated with. This task must
run in parallel to all other tasks and be defined as follows:
1. Select Run Job as the task type and click OK.
2. Type a task name in the Name box.
3. On the Run Job tab, click the Browse button and navigate to the
available library jobs. Select the Metric library job and click OK.
4. From the Library Job Param Name drop-down list, select
config.metric.
5. From the Job Param Name drop-down list, select the parameter
created above (MyMetricConfig.txt), and then click OK.
Note: Only one Metric library subjob can be defined per job. If multiple
configurations are needed, different jobs will need to be created.
Usage of an LLU
There are two primary reasons for using an LLU:
1. To substitute for sensitive account information in jobs or tasks. If a test
engineer specifies sensitive accounts in a task definition, that account is
vulnerable to password disclosure. To avoid this, a user can configure an
LLU with the sensitive account information and configure the task to use
the LLU. The actual account information remains on the client computer
and remains safe while the LLU is used.
Note: It should be remembered, however, that an LLU is available to any
administrator on the WTT client computer on which it is located. An LLU
should therefore only be configured on client computers that are
themselves secure.
2. To abstract a user name from job task definitions. For example, if a job is
created which will be used by multiple testers in different teams, using an
LLU rather than actual accounts is useful because the actual account may
not be valid in different setups. If an LLU is configured for the task, the
new setup only has to make sure that the appropriate LLU is present on
the WTT client computers.
Additionally, an LLU gives the WTT test team an easy way to maintain test
accounts. If a test account is used within task definitions, every time that test
account password changes, each task where the account is used will require
updating. If an LLU was used, however, all that is necessary is to update the
account password on all the client test computers using a simple bulk operation in
the WTT Studio UI.
Several tips that may assist users with managing LLUs include:
To find out which LLU is configured on a specific computer, users can look at
the file c:\WTT\JobsWorkingDir\Security\LLUTable.xml.
The created LLU stays with the system until WTT is uninstalled and persists
across operating system installations. This means that a tester will not
need to re-create an LLU after an operating system upgrade or
after booting into a different operating system installation.
LLU operations from the UI are not allowed on computers in the Default Pool
because the default pool is not secure.
Configuring an LLU
Two things are important to consider when configuring an LLU:
For any LLU request to succeed, the target computers must have the WTTSVC
service running.
The LLU user interface works directly with the client computers selected.
Therefore, if there is not a direct connection between the WTT Studio interface
and the client computer, the LLU request will fail.
To create an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click an asset pool or select a set of computers, point to Manage
Logical Users, and then click New.
3. Select Local Logical User, and then click Next.
4. If the LLU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LLU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative
rights on the client computers in the Domain\User Name box, and then
type the account password in the Password box. Click Next.
Note: This step should be skipped if the LLU is being configured from
the WTT Controller.
6. Type an account name for the LLU in the Local Name box.
7. Type the domain and user name associated with the LLU in the
Domain\User Name box.
8. Type the account password in the Password box, and then retype it in the
Confirm Password box.
9. Click Start.
10. When the LLU is configured, click Close.
To update an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LLU is located, point to Manage
Logical Users, and then click Edit.
3. Select Local Logical User, and then click Next.
4. If the LLU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LLU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative rights on
the client computers in the Domain\User Name box, and then type the account
password in the Password box. Click Next.
Note: This step should be skipped if the LLU is being configured from
the WTT Controller.
6. Type the LLU name to be updated in the Local Name box.
7. Type the domain and user name associated with the LLU in the
Domain\User Name box. It is not necessary that this be an
administrative account.
To delete an LLU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LLU is located, point to Manage
Logical Users, and then click Delete.
3. Select Local Logical User, and then click Next.
4. If you wish to delete a single LLU, select Delete one and type the LLU name
in the Local Name box. If you wish to delete all local logical users on the
client computer, select Delete All.
5. Click Start.
6. When the LLU is deleted, click Finish.
To query the LLUs on a client computer
1. On the Start menu, click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type
WTTCmd.exe /QueryLogicalUser
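For example, to keep a record of the configured logical users, the command output can be redirected to a text file using standard command-prompt redirection (the file name here is arbitrary):
WTTCmd.exe /QueryLogicalUser > %TEMP%\LLUList.txt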
Usage of LSU
The LSU is currently used only by the WTT Autotriage component to resolve
symbol shares.
An LSU is identified by the network share it gives access to. For example, when
WTT Autotriage uses the LSU:
<\\MySymbolShare, UserA, DomainB, Password>
Autotriage determines that it needs to access \\MySymbolShare for the
symbols and therefore issues a net use connection with this credential.
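In effect, Autotriage makes a connection equivalent to the following command, using the stored credential (an illustrative sketch only; the connection is made internally by Autotriage, and the share, account, and password values come from the example LSU above):
net use \\MySymbolShare Password /user:DomainB\UserA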
The LSU name * represents the account to be used for any share for
which no specific credential is given.
When used, the LSU persists across computer restarts as well as across
operating system installations. In addition, the password for the account can
be updated centrally from the WTT Studio UI rather than having to be changed on
each client computer.
Configuring an LSU
Two things are important to consider when configuring an LSU:
For any LSU request to succeed, the target computers must have the
WTTSVC service running.
The LSU user interface works directly with the client computers selected.
Therefore, if there is not a direct connection between the WTT Studio interface
and the client computer, the LSU request will fail.
To create an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click New.
3. Select Local Symbol User, and then click Next.
4. If the LSU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LSU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative
rights on the client computers in the Domain\User Name box, and then
type the account password in the Password box. Click Next.
Note: This step should be skipped if the LSU is being configured from
the WTT Controller.
6. Type the network share on which the LSU will be located in the Network
Share box.
7. Type the domain and user name associated with the LSU in the
Domain\User Name box.
8. Type the new account password in the New Password box, and then
retype it in the Confirm Password box.
9. Click Start.
10. When the LSU is configured, click Finish.
To update an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click Edit.
3. Select Local Symbol User, and then click Next.
4. If the LSU is being configured from the WTT Controller, select Running
from a controller, and then click Next. Skip the next step.
5. If the LSU is being configured from a client machine, select Running from
another machine, type a domain and user name with administrative rights on
the client computers in the Domain\User Name box, and then type the account
password in the Password box. Click Next.
Note: This step should be skipped if the LSU is being configured from
the WTT Controller.
6. Type the network share on which the LSU is located in the Network
Share box.
7. Type the domain and user name associated with the LSU in the
Domain\User Name box.
8. Type the old account password in the Old Password box.
9. Type the new account password in the New Password box, and then
retype it in the Confirm Password box.
10. Click Start.
11. When the LSU is updated, click Finish.
To delete an LSU
1. On the Explorers menu, click Job Monitor and then select your controller
from the Datastore drop-down list.
2. Right-click the asset pool on which the LSU is located, point to Manage
Logical Users, and then click Delete.
3. If you wish to delete a single LSU, select Delete one and type the network
share on which the LSU to be deleted is located in the Network Share
box. If you wish to delete all local symbol users on the client computer,
select Delete All.
4. Click Start.
5. When the LSU is deleted, click Finish.
To query the LSUs on a client computer
1. On the Start menu, click Run.
2. In the Run dialog box, type cmd, and then click OK.
3. At the command prompt, type
WTTCmd.exe /QuerySymbolUser
Appendix O: Unattended
Installation
Testers and test lab managers often find it useful to perform unattended
installations of WTT across several (or many) computers. Although the standard
"quiet" installation may be performed using the /q command-line option, many more
options are available, depending on whether the user needs to install a test
controller, test client, or test studio.
Test Controller
<source>\Setup\<type>\Setup.exe
/qb
DBSAPASSWORD="<sa>" NOTIFICATIONUSER="<userid>"
NOTIFICATIONPASSWORD="<password>"
NOTIFICATIONDOMAIN="<domain>"
INSTPASSWORD="<installpassword>"
Where:
<source> is the source from which the controller is being installed.
<type> is the server architecture type, such as x86.
<sa> is the database sa password.
<userid> is the WTT Notification user ID being used for the
installation.
<password> is the WTT Notification user password.
<domain> is the WTT Notification user account domain.
<installpassword> is the install user password.
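As an illustration, the following batch file performs a quiet controller installation. This is a sketch only; the share path, architecture folder, and all credential values shown are placeholder examples and must be replaced with values appropriate to your environment.
@echo off
rem Quiet WTT controller installation -- all values below are examples only.
set SRC=\\MyInstallServer\WTTSetup
%SRC%\Setup\x86\Setup.exe /qb DBSAPASSWORD="SaPassword" NOTIFICATIONUSER="WTTNotify" NOTIFICATIONPASSWORD="NotifyPassword" NOTIFICATIONDOMAIN="MYDOMAIN" INSTPASSWORD="InstallPassword"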
<source>\Setup\<type>\Setup.exe
/uninstall
Where:
<source> is the source from which the controller was installed.
<type> is the server architecture type.
Test Client
/qb
ADDLOCAL=WTTBin
Where:
<server> is the test controller from which the test client is being
installed.
<port> is the serial port, such as COM1, through which the kernel
debugger is connected.
<rate> is the baud rate at which the kernel debugger is connected.
<channel> is the channel, such as 1394, through which the kernel
debugger is connected.
DRIVESELECTION is the drive that will be used for the WTT directory; use
the form "C:\".
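As a rough sketch, a quiet client installation using only the properties listed above might look like the following. The setup share path and drive letter are placeholder assumptions, and the kernel debugger properties are omitted here; confirm the full client command line for your controller before using it.
@echo off
rem Quiet WTT client installation -- path and values are examples only.
\\MyController\WTTInstall\Setup\x86\Setup.exe /qb ADDLOCAL=WTTBin DRIVESELECTION="C:\"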
Test Studio
To uninstall Test Studio, run:
Msiexec /x {DE3938A9-F58A-4BCB-BEC5-036385AF3D5B}
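For an unattended removal of Test Studio, the standard Windows Installer quiet switch can be appended (this is a generic msiexec option, not a WTT-specific flag):
Msiexec /x {DE3938A9-F58A-4BCB-BEC5-036385AF3D5B} /qn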
Controller Setup
To set up a WTT Controller to integrate with code coverage, run the
ccserversetup.cmd script from the Run dialog box:
\\codecoverage\public\wtt\ccserversetup.cmd <WTTBin> <WTTInstall> <SystemLog>
Where:
<WTTBin>
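A hypothetical invocation might look like the following; the three arguments are assumed here to be the controller's WTTBin, WTTInstall, and SystemLogs shares, so confirm the expected values with the Code Coverage team before running the script:
\\codecoverage\public\wtt\ccserversetup.cmd \\MyController\WTTBin \\MyController\WTTInstall \\MyController\SystemLogs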
Note: With the scheduled task running on the WTT controller, code coverage
data collected from test machines is automatically sent to the code coverage
server every 2 hours. The frequency of this task can be changed by the WTT
administrator.
Client Setup
Setup on WTT client computers varies depending upon whether they use the
Longhorn operating system or not, as follows:
Longhorn Builds: Longhorn builds do not require any additional
consideration, as the necessary code coverage utilities are already in place
beginning with build number 4068. The only actions necessary are the
installation of the code coverage server script and then running a normal test
suite.
\\<wttcontroller>\wttinstall\codecoverage\ccclientsetup.cmd
Where:
<WTTController> is the path to the WTT Controller used by the client
computer.
Job Considerations
For WTT, the Code Coverage team recommends creating one test, matched to one
trace, which is mapped to one job. Traces should take between 30 seconds and
15 minutes to run. If your Job contains numerous traces that run for longer than
30 minutes or so, you should investigate breaking the tests up so that they
conform to the recommendation. If this is not possible, the user can create tasks
inside the job(s) to call CCSave.EXE to save the individual test cases. However, it
should be noted that as the traces are saved, the upper level Job-Trace will be
empty.
Add the test computer installed with a code coverage Windows build
to a separate machine (asset) pool.
During the test, coverage data is automatically saved with the full
name of the scheduled tests. Coverage data files are exported to
\\<WTTController>\SystemLogs\CCData after each test run, where
<WTTController> is the path to the Controller.
1394 Lab
Different computers can have many different chipset 1394 solutions as well as
different implementations of the same chipset. The 1394 lab team needs to test
on a wide variety of systems from all OEMs as well as new chipsets.
The team is responsible for testing the core 1394 bus for all operating systems
in development.
Extensively test interoperability of Host controller and devices on a
daily basis.
Test through multiple classes of devices on all known types of host
controllers.
Disabler/remover tests (just host controllers).
Async and Isoch loopback tests.
Storage and Digital Video device interoperability testing.
IP 1394 testing.
Power management tests (all systems, all ACPI sleep states).
WHQL/1394 test suites.
Contact Alias
ncrum
ACPI/Power Lab
As part of the Hardware Platforms Test Team, the ACPI/Power lab checks that
each BIOS implementation follows the ACPI 1.0B and ACPI 2.0 specifications. Our
primary focus is operating system testing, but BIOS verification is done at the
same time as a result.
Test Focus
Thermal test
However, the team cares more about the number of applications tested than
about the number of systems tested. If the team receives one Sony system with
50 applications, it is comparable to two Sony systems with 25 applications each.
Contact Alias
ppagel
Audio Lab
The Audio Lab team tests devices such as video capture, TV tuners, DVD, DV
cameras, portable media devices, audio devices (ISA/PCI/USB/1394), and so on.
Test Focus
Plug and Play.
Power Management.
Low system resources/stress situations.
All of the above on various bus types ISA/PCI/AGP/USB/1394.
Driver/API/Core testing across many operating system SKUs and
computer configs.
Systems Needed
Multiprocessor computers in all flavors, AMD64, and laptops (with USB
and 1394 ports).
Contact Alias
brentm
Base Scenarios Lab
This team uses server systems for continued .NET testing and for future Windows
Longhorn testing. One of the team's goals is to place as much stress on
high-end servers as possible in an effort to identify longevity issues that
might otherwise take several days, weeks, or even months to manifest under
normal circumstances.
puzzle at the same time. For example, client computers request several streaming
media threads or web pages from a server that uses a SQL backend running on a
different server. The clients get stressed by the process of requesting and
validating the information, and the servers get stressed passing that information
out.
The team's scenarios are generated from real-world market and usage research.
They try to make tests match real-world scenarios as closely as possible to better
understand where product improvements are needed, while testing both client
and server scenarios in the Base Scenarios lab.
Contact Alias
ncrum
Bluetooth Lab
The Bluetooth lab tests radios from most radio manufacturers. One side of the
system is a PC running Windows with our Bluetooth stack. It talks to a variety
of devices using real-world scenarios. Real-world scenarios include cell phones,
hand held devices (CE/PPC), printers, access points, HID (mouse, keyboard), and
other Bluetooth-capable PCs. Most of the local device interfaces are USB- or PC
Card-based using the H4 (UART) interface. PCI-based devices are in the works.
Our test matrix exercises the Bluetooth stack extensively, along with the
normal mix of Plug and Play, USB, and power management testing to ensure these
devices behave well. Interoperability with other devices is something we are
able to cover significantly better than most labs because of the variety of
devices we test with.
The Bluetooth team develops all Bluetooth tests that WHQL (Windows Hardware
Quality Labs) uses for qualification.
Contact Alias
ncrum
Embedded Lab
The Windows Embedded lab tests Windows XP Embedded and Microsoft Windows
codename Longhorn Embedded on a wide variety of PCs as well as embedded
devices such as Cash Registers, Set Top Boxes and Windows Based Terminals.
Test Focus
Anything new.
Contact Alias
ncrum
iSCSI Lab
The iSCSI lab tests interoperability between the Microsoft iSCSI Initiator and
vendor iSCSI targets. It also verifies target compliance with the iSCSI Standard.
Test Focus
The primary tests run to test the iSCSI targets and iSCSI HBAs are:
iSCSI Boot Test
iSCSI Digest Test
iSCSI Chap Test
iSCSI Ping Test
iSCSI Redirection Test
iSCSI Target iSNS Test
Exchange Loadsim (5 Days)
Kernel Lab
Test Focus
Multi-Proc kernel testing
Large memory testing including > 4GB PAE testing.
Win32 base APIs.
Process Management APIs.
Memory Management APIs.
Registry APIs.
Kernel stress.
Mobile Lab
Test Focus
Using a combination of the technology segments of the Mobile Computing Test
Team, this team tests full mobile system integration, including functionality
testing of all laptop ports (USB, IR, serial, parallel, and so on), device
bays, docking stations, CardBus controllers, and video and audio.
The team performs extensive Plug and Play testing of the CardBus controllers using
all device classes of PC Cards using a combination of manual and automated tests
that include hot plug, stop/start, disable/enable, install/remove, surprise removal,
device functionality, and dynamic resource rebalancing.
Power management is an integral part of our work and is integrated into every
aspect of our testing.
This team consists of several labs, each specializing in a technology area.
PC Card Lab: Builds test matrices around CardBus Controller Chipsets.
Laptop Integration Lab: Video and Audio chipsets, with a secondary
focus on USB, IR, and 1394 chipsets.
This team is in the process of developing several additional test tools for Mobile
Plug and Play, CardBus Wake-on-LAN, and device functionality testing. Below are
some of the existing automated tests that the team currently uses:
Driver Verifier.
PMTE (Power Management Test Engine).
NTSTRESS
WHQL Test Kit for CardBus Controllers
Contact Alias
ncrum
Networking Lab
The Core Networking lab primarily tests network drivers, new versions of NDIS,
and parts of the HCT, such as ndistester. Systems run stress daily, and BVTs
and new operating systems are loaded multiple times per week. On portable
systems, we work hard to get new drivers into the build, primarily networking,
modem, video, and audio. We have instituted a new procedure in our lab: when
writing bugs, we list the asset number of the computer, so we always know what
hardware is hitting what problems at any given moment.
Contact Alias
stevesu
OOBE Lab
In the OOBE lab we simulate the role of the OEM, verifying that this application
("Welcome to the operating system, get registered with Microsoft and the OEM,
get connected to the Internet") can be branded with OEM logos, additional
registration information, and additional hardware tutorials, and that
communication hardware interaction is seamless.
Test Focus
OEM Customization includes:
OEM Branding
OEM Registration Process
OEM Hardware Tutorial
This team thoroughly tests input devices and communications devices. The
team's goal is to ensure that OOBE is completed successfully on first boot and
that the consumer can connect to the Internet.
Basic Functionality
Install Windows 2000
Power Management
USB Keyboards/Mouse
OOBE Device Usage
Sound cards
Modems
DSL Connections
Cable Modems
LAN cards
Feature Testing
OPK/Sysprep Interaction
Imaging Software Interaction
Hibernate/Standby Interaction
Global User Info post OOBE
Domain Join Interaction
User accounts creation within OOBE
Globalization Testing
German OOBE testing
MUI plus OOBE testing
Language packs plus OOBE testing
Systems Needed
Different OEM boxes with different images showing how they customized OOBE in
the past and how they want to continue customizing OOBE.
Contact Alias
ncrum
Systems Wanted
This team is very interested in systems with some of the following configurations:
Some dual processors.
Third-party IDE and SCSI controllers.
RAID arrays (onboard controller or PCI).
Large disks (especially computers with multiple partitions or OEM
partitions, recovery partitions, and so on).
Systems supporting USB boot.
The more complex, the better (restore disks are a plus for getting drivers so we
can preinstall them, and so on).
Contact Alias
ncrum
Performance Lab
The Performance Lab team is responsible for all desktop, laptop, and client
performance on Windows 2000, Windows XP, or later. The team focuses
extensively on system responsiveness, browsing, games, applications, industry
benchmarks (sysmark, webmark, mobile mark, business winstone, content
creation winstone, and others), internal workloads, boot, power management,
hibernation, standby, and so on.
We are also responsible for providing architectural guidance to developers,
architects, and others designing code, drivers, or what have you for the
Windows platform.
As a part of our work, we build tools to assess performance and provide
executive reports detailing performance issues, concerns, and design changes we
need to make going forward. We have a large lab that is used to stress
computers and study performance.
We are a part of a larger group responsible for Server performance as well.
Contact Alias
ronth
Static Lab
Test Focus
Optical CD-ROM, CD-R/RW
Tape
Tape Changers
Smartcard readers
MPS Multi Port Serial adapters
Serial
Parallel
Removable drives
Functionality
Install Windows.
Power Management.
Running automated tools in all areas.
ATAPI burn testing on CD-R/RW drives.
Backup utility testing performed on Tape and Changers.
RSM Testing performed on Changer libraries.
Automated loopback testing on serial and MPS.
Removable drive testing performed with RM Disk test tool.
Contact Alias
ncrum
Storage Lab
The Storage Lab thoroughly tests the IDE controller's basic functionality by
running through various configurations and tests. Our goal is to ensure that
Windows 2000 can be installed on the controller under test with full operating
system functionality.
Test Focus
Basic Functionality
Install Windows .NET Framework.
Power Management.
DMA Testing.
Installing and configuring a secondary hard disk drive.
Install the secondary hard disk drive.
Disk Management
Using Disk Management to create and delete NTFS and FAT partitions.
RM Test
Basic Functionality
CD/PD
STape Test
HCTs
Manual Backup/Restore
Raid Testing/Multiple Adapters
Mixed Volumes
Striped Volumes
Spanned Volumes
Contact Alias
ncrum
This team handles Service Pack testing ownership for the Windows core team;
most areas are transitioned to WinSE upon release of SP1 for each operating
system.
Our Labs include Base Storage Drivers, Kernel and File Systems, Storage Services,
Networking, Application Compatibility, Print and Imaging, Setup and Installer,
Security, Windows Update, ACPI, RAS, TS, and so on: Basically, all component
areas covered by the core team. As for our hardware requirements, we can always
use additional test systems and often are able to make use of systems and
devices that may not be as current as those required by the core team.
Contact Alias
JohBro
USB Lab
Different computers can have slightly different implementations of the same USB
chipset and sometimes different implementations of the same BIOS. We need to
test on a wide variety of systems from all OEMs, as well as new chipsets.
Responsible for testing the core USB bus and the HID stack for all
Windows operating system releases.
Extensively test interoperability of devices on a daily basis. For example, a
USB NetMeeting scenario would involve multiple levels of hubs, a
keyboard, mouse, speakers, microphone, modem/NIC, storage device or
printer, and cameras.
Test through multiple depths of hubs on all known types of host
controllers.
Disabler/remover tests (all devices, including hosts).
Bulk/ISO loopback tests (hosts/hubs).
Power management tests (all systems, all ACPI sleep states).
WHQL/USB-IF test suites.
Contact Alias
ncrum
Watchdog
D3D/DDRAW/OpenGL
GDI
AGP Filters
Device Coverage
Monitors
Stress/Stability
Display Applet
DCT/HCT Conformance
Multidisplay/Dualview
Display Performance
Migration/SetupVars
Systems Wanted
For testing purposes this team requests any and all newer computers with a wide
variety of video adapters. Here is a small list of current adapters supported
in this team's testing of x86 drivers:
ATI Radeon family, Desktops and Laptops
Radeon 7000
Radeon 7500
Radeon 8500
Radeon 9000
Radeon 9100
Radeon 9500
Radeon 9700
Radeon 9800
Intel i830-i845, Almador family, Desktops and Laptops
Matrox Parhelia, desktop only
NVIDIA, NV5-NV35, Desktops and Laptops
Nvidia TNT2
Nvidia Vanta
Nvidia GeForce 256
Nvidia GeForce
Nvidia Quadro
Nvidia GeForce2
Nvidia Quadro2
Nvidia GeForce3
Nvidia GeForce4
Nvidia Quadro4
Nvidia GeForce FX
S3, Super Savage laptops
SIS Xabre family, desktop only
Trident CyberBlade Windows XP laptops
Additional Systems Wanted
Laptops, tablet PCs, and desktop systems with integrated video adapters.
OEM systems with the shipping operating systems from that vendor, to further
the team's real-world testing.
As a huge favor, the team also requests laptops with serial COM ports (for
debugging).
Contact Alias
VCTLeads
WDEG/NCD-AVQ Lab
Test Focus
In the WDEG/NCD-AVQ lab we test that all function instances for hardware and
software on a system are available properly according to the Function Discovery
Functional and Design Specification. We test the following:
Verify all function instances enumerate properly and in the correct
category.
Verify all function instances can be properly activated.
Verify function instances report properly in a terminal services session.
Verify function instances work correctly using either managed or
unmanaged code.
Verify on device/software installation/removal that proper notification of
changes in available function instances occurs.
Various end-to-end and integration scenarios.
Systems Wanted
Any system with PCMCIA, USB, 1394, Bluetooth, or other means to easily connect
and disconnect a device from a system.
Also of interest are any laptops that have docking stations or other
hot-changeable docking units.
AVQ Lab
In the AVQ lab we test that the AVQ APIs are working properly. Those APIs
include CPU Reserves, Memory Reserves, and disk IO reserves.
Systems Wanted
Any OEM custom hardware.
Single processors, dual processors, quad processors, hyper threading processors.
x86, Itanium-based, AMD64.
HiPPOP Lab
In the HiPPOP lab we test the HiPPOP API (a DLL in Windows Longhorn), which is
a lightweight solution for remote access to COM objects over a given transport
(RDP or TCP). Mostly the team just runs a series of tests using Remote Desktop
Connection between two computers.
Systems Wanted
Any computer is useful. Single proc, Dual proc, Quad proc, Itanium-based,
AMD64.
Contact Alias
ncrum