
Solution Analysis and Management
Module 10: Conducting Evaluation
Validating Solution Results

• If your organization uses a predictive project lifecycle (waterfall approach), the solution is validated at the end of the lifecycle – immediately before release or at a specified time after release.
• If your organization uses an iterative project lifecycle (agile approach), validation occurs at the end of each sprint or release.
• Many organizations use a hybrid approach to project planning in practice. It is therefore common to specify that milestones be evaluated as part of a predictive project lifecycle.

Validating Solution Results

• Sprint or milestone: Does the solution meet the user requirements?
• Immediately prior to project release: Does the solution meet the business requirements?
• Post-project validation: Did the solution meet the user and business requirements?
Techniques for Evaluation

• Questionnaires and focus groups.


• Results from exploratory and user acceptance testing.
• Results from day-in-the-life testing.
• Results from integration testing.
• Expected vs. actual results for functionality.
• Expected vs. actual results for non-functional requirements.
• Outcome measurements and financial calculation of benefits.
Questionnaires and Focus Groups
• These are useful techniques for gathering quantitative and qualitative data.

Questionnaires
• Used when gathering information from a larger number of stakeholders, or where
the stakeholders are geographically dispersed.
• You must know the right questions to ask, as well as the scope of response options.
• They are useful for gathering general satisfaction feedback post-project.

Focus Groups
• These are used to collect qualitative information when the response set is not clear.
• They are time-consuming and expensive to run.
• They are useful for gathering information that can help to inform questionnaire
design.
Exploratory / UAT Results

• Exploratory testing is an unscripted form of testing conducted by a subject matter expert. It can help to discover whether the requirements and acceptance criteria capture all possible uses of the solution – for example, beta testing software to try to ‘break’ the code through intended or unintended usage.
• User acceptance testing (UAT) is a formal approach to testing conducted by
subject matter experts, or possibly end users (who have the knowledge and
ability to validate the solution).
• Both types of testing are used when validating solutions.
• UAT can take the form of a checklist (the checklist is made up of the acceptance criteria, which in turn are based on the user requirements – functional and, where appropriate, non-functional requirements); see the sketch after this list.
• The output of both types of testing is used to determine whether the product, service or solution is working as intended with regard to functionality, ease of use, and performance.
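
The bullets above describe a UAT checklist built from acceptance criteria. A minimal Python sketch of one possible structure follows; the criteria, results, and notes are hypothetical and not drawn from any specific project.

# A minimal sketch of a UAT checklist: each item is an acceptance criterion
# derived from a user requirement, recorded with a pass/fail result and notes.
# The criteria below are hypothetical.
uat_checklist = [
    {"criterion": "Adjuster can open a claim from the work queue", "passed": True, "notes": ""},
    {"criterion": "Claim status updates are visible within 5 seconds", "passed": False, "notes": "Took roughly 20 seconds under load"},
    {"criterion": "Customer letter is generated from the claim record", "passed": True, "notes": ""},
]

failed_items = [item for item in uat_checklist if not item["passed"]]
print(f"{len(failed_items)} of {len(uat_checklist)} acceptance criteria failed")
for item in failed_items:
    print(f"- {item['criterion']}: {item['notes']}")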
Day-in-the-Life Testing (DITL)

• Day-in-the-life, or DITL, testing is a semi-formal approach to evaluation.


• It uses a set of use case scenarios, user stories, or functional requirements to evaluate the solution against a set of actual data.
• The result of the test is a comparison between expected results and actual
results.
• DITL can also be used as a form of exploratory testing.
• Test results are used to determine whether the solution provides the functionality needed for typical usage by a role that interacts with the solution.
• Like exploratory testing, DITL can help uncover results from unintended (or
unpredicted) usage of the solution.
Integration Testing

• This type of testing validates or evaluates how a component of a solution works with other components of the solution – or with existing components in the business.
• For example, a call centre employee needs to use management software to guide a support call with a customer – but that software may also need to communicate data to other systems, such as business intelligence reporting.
• Systems testing is a form of verification that proves a solution can interoperate with other systems to exchange data.
• Integration testing puts the solution into an environment that is identical (or
nearly so) to the production environment.
• The outcome of integration testing is an evaluation of the solution in a larger context – how the solution will work with other components can be difficult to predict.
Expected vs. Actual Results – Functional Requirements

• Each acceptance criterion will need to be evaluated, often by comparing the expected result against the actual result.
• Acceptance criteria are defined during requirements elicitation and can be expressed in terms of the actual usage of the solution.
• There is a balance to strike: acceptance criteria should be broad enough to avoid making solution development inflexible, yet specific enough to have clear expected results.
Expected vs. Actual Results – Functional Requirements

Fields can also be placed in one row (so that each row is a functional acceptance criterion), with columns added for the expected vs. actual results, as in the sketch below:
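
A minimal Python sketch of this one-row-per-criterion layout follows; the criteria, expected results, and actual results shown are hypothetical, for illustration only.

# A minimal sketch: one row per functional acceptance criterion, with
# expected and actual result fields. The criteria and results are hypothetical.
acceptance_rows = [
    {
        "criterion": "User can submit a new claim form online",
        "expected": "Claim is created and a confirmation number is displayed",
        "actual": "Claim is created and a confirmation number is displayed",
    },
    {
        "criterion": "System routes claims over $10,000 to a senior adjuster",
        "expected": "Claim appears in the senior adjuster's queue",
        "actual": "Claim remains in the general queue",
    },
]

for row in acceptance_rows:
    status = "Pass" if row["expected"] == row["actual"] else "Fail"
    print(f"{status}: {row['criterion']}")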
Expected vs. Actual Results – Nonfunctional Requirements

• Similar to functional requirements, we need to capture the results for nonfunctional requirements.
Outcome Measurements and Financial Calculation of Benefits

• Some benefits are easily quantified, some must be inferred.

For example, an insurance company has defined a solution benefit as ‘decrease the amount of time to process a claim’. You would use your baseline data to establish the current processing time, against which the solution time will be compared to measure the success of the solution with regard to this benefit. The actual reduction in time could also be translated into a direct financial benefit in order to communicate its value in easily understood terms; this also allows the actual results of different benefits to be compared, since financial values are expressed in dollars and can be compared directly.
Outcome Measurements and Financial Calculation of Benefits

An adjuster spent 338 hours on average per quarter processing manual claims prior to the solution implementation. Post-implementation, it was estimated that the adjuster would spend 225 hours per quarter on average processing manual claims. In actual testing, it is estimated that the adjuster will spend 250 hours per quarter, due to the automated system having more exceptions than intended. Adjusters are paid, on average, $85,000 per year. Using this information, what are the estimated and actual annual cost savings for the company?
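
One possible worked calculation, as a Python sketch, is shown below. It assumes an adjuster works 2,080 hours per year (40 hours x 52 weeks), which is not stated in the example; a different assumption would change the dollar figures.

# A minimal sketch of the savings calculation. The 2,080 working hours per year
# (40 hours x 52 weeks) is an assumption and is not given in the example.
ANNUAL_SALARY = 85_000
WORKING_HOURS_PER_YEAR = 2_080  # assumed
hourly_rate = ANNUAL_SALARY / WORKING_HOURS_PER_YEAR  # about $40.87 per hour

baseline_hours_per_quarter = 338   # before the solution
estimated_hours_per_quarter = 225  # estimated post-implementation
actual_hours_per_quarter = 250     # observed in actual testing

estimated_savings = (baseline_hours_per_quarter - estimated_hours_per_quarter) * 4 * hourly_rate
actual_savings = (baseline_hours_per_quarter - actual_hours_per_quarter) * 4 * hourly_rate

print(f"Estimated annual savings per adjuster: ${estimated_savings:,.0f}")  # about $18,471
print(f"Actual annual savings per adjuster:    ${actual_savings:,.0f}")     # about $14,385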
Evaluating Acceptance Criteria
One of the following activities will occur as a result of ongoing evaluations:

Comparison of Expected vs Actual Results


• For functional or nonfunctional requirements, user stories, or use cases with defined
acceptance criteria.

Examine Tolerance Ranges and Exact Numbers


• For nonfunctional requirements, acceptance criteria may take the form of a tolerance
range – worst case (minimum), best case (desired), and target (most likely) values. The
solution is defective if it falls below the minimum standard.
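
As a minimal illustration of a tolerance-range check, the Python sketch below classifies a measured result against minimum, target, and desired values; the metric and numbers are hypothetical.

# A minimal sketch of a tolerance-range check for a nonfunctional requirement.
# The metric and values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class ToleranceRange:
    minimum: float  # worst case - results below this standard are defects
    target: float   # most likely / expected value
    desired: float  # best case

def evaluate(measured: float, tolerance: ToleranceRange) -> str:
    """Classify a measured result against its tolerance range."""
    if measured < tolerance.minimum:
        return "Defect - below minimum standard"
    if measured >= tolerance.desired:
        return "Meets best case"
    if measured >= tolerance.target:
        return "Meets target"
    return "Acceptable - within tolerance"

# Hypothetical metric: percentage of claims processed within 24 hours.
claims_within_24_hours = ToleranceRange(minimum=90.0, target=95.0, desired=99.0)
print(evaluate(93.5, claims_within_24_hours))  # Acceptable - within tolerance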

Log and Address Defects


• Where results are not acceptable, we need to analyze the cause of the discrepancy.
• The business analyst is not necessarily responsible for finding the cause, but should
ensure that discrepancies are being logged, assigned, reviewed, and resolved.
Go / No-Go Decisions

• A solution could be released in whole, in part, or not at all.


• The specific responsibility to make the decision should be documented in the
stakeholder analysis / stakeholder management plan.
• The business analyst will summarize the evaluation results and should be able to provide more detail where necessary. Relevant subject matter experts who were involved in requirements elicitation and testing should be present.
• Stakeholders will need to understand the nature of the problem, the impact of any
changes, and the impact of delaying a decision (it can often delay the entire
project).
• These meetings should be held in person wherever possible to allow for discussion between stakeholders and to create an expectation that a decision will be made.
• Once the decision is made, it must be documented and ‘signed off’ – the formality
of this depends on the nature of the project, the risk involved, and whether it
could affect third party agreements.
