Professional Documents
Culture Documents
Quality: Quality is the satisfaction of all client requirements of a project/product and the delivery of the
application on time without any defects.
Quality Assurance: Quality Assurance is the process of verifying all the standards and processes of a company
to ensure the right product is given to the client. QA is process oriented.
Quality Control: Quality Control is the process of verifying the actual product/application against the client
requirements to ensure it is defect free. QC is product oriented.
Software: Software is a set of programs, logic & related data that gives instructions to the system on
what to do & how to do it.
(Or)
A set of executable programs is called software. We have 2 types of Software:
1. Product
2. Project
1. Product: A product is something developed for a segment of customers. There is no end to the development
& testing activities of a product, because the product is released into the market version by version.
E.g.: Operating systems, MS Office, Photoshop, processors, etc.
2. Project: A project is something developed for specific customers only. There is an end to the development
& testing activities of a project, because once the client requirements are satisfied, the development
& testing activities stop.
E.g.: Manufacturing application, hospital management application, etc.
1. Development Team: The responsibility of the development team is to develop the application (project/
product) according to the client requirements.
2. Testing Team: The responsibility of the testing team is to test the developed application (project/product)
using different testing types & techniques according to the client requirements.
Testing: Testing is verification & validation performed to ensure a defect-free application/product is
delivered to the client.
(Or)
Software Testing: Testing is a process of executing a program with the intent of finding errors.
(Or)
Performing testing on a software application or product is called software testing.
2. Automation Testing: Performing testing on the application/product with the help of
third-party tools like QTP, Selenium, LoadRunner, etc. is called automation testing.
In both manual testing & automation testing we perform the same testing
on the application; only the way the testing is performed differs.
Client: A client is a person who provides the requirements to the company for developing their business
application.
Company: A company is an organization that develops the application according to the client
requirements.
End User: An end user is a person who uses the application in its final stage.
E.g.: Infosys has developed an online application for SBI Bank. Here SBI Bank is a
client to Infosys Company and End user will be customers of SBI Bank.
Software bidding
A proposal to develop the software is called software bidding.
1. Questionnaire: In this approach, the BA collects the requirements from the client by asking the
clients questions.
2. KT (Knowledge Transfer): In this approach, the client provides KT sessions to the BA to help them
understand the requirements.
3. Walkthrough: In this approach, the BA goes through the requirement documents provided
by the client and understands the requirements.
4. Inspection: In this approach, the BA collects the requirements by inspecting the client's
business location directly.
2. Analysis: In the analysis phase, the BA analyzes the requirements and converts the requirement
documents into an understandable format called the use case/BRS doc.
BRS doc: The BRS doc is divided into 2 docs:
1. SRS
2. FRS
SRS doc: The SRS doc contains details about the software & hardware requirements.
FRS doc: The FRS doc contains details about the functionality of the project.
Use case doc: The use case doc is in Word document format. One use case doc contains one flow of requirements.
3. Design: In the design phase, the system architect designs the architecture of the
application.
There are 2 types of Designs
1. HLD (High Level Design): It defines the overall architecture of the application, including all the
modules in the application.
2. LLD (Low Level Design): It defines the architecture of the individual modules, including all the
sub-modules & screens of the application.
* Most of the projects are using UML for designing the architecture of the application.
4. Coding: In the coding phase, the development team writes the code for the functionality
of the individual modules. After all individual modules are complete, the development team
integrates them into a single application.
5. Testing: In the testing phase, the testing team performs testing on the application
based on the client requirements. While testing the application, the testing team executes the test cases
using different types of testing & techniques.
Types of Testing
We have 2 types of testing:
1. Functional Testing
2. Non-Functional Testing
1. Functional Testing: Testing the application against business requirements. Functional testing is done
using the functional specifications provided by the client or by using the design specifications like use cases
provided by the design team.
Unit Testing
In unit testing, the development team performs testing on the individual modules of the
project through a set of white box testing techniques. It is also called module/component testing.
1. Top-Down Approach
If all main modules are developed but some of the sub-modules are not yet developed, the
programmers create temporary programs called stubs, which act as the sub-modules.
2. Bottom-Up Approach
If all sub-modules are developed but some of the main modules are not yet developed, the
programmers create temporary programs called drivers, which act as the main modules.
3. Sandwich Approach
If some of the main modules & some of the sub-modules are not yet developed, the
programmers create temporary programs called drivers & stubs, which act as the main
modules & sub-modules.
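The stub and driver idea above can be sketched in Python (a hypothetical invoice/tax example; the module
names are illustrative, not from any real project):

```python
# Hypothetical invoice/tax modules, for illustration only.

# Top-Down approach: the main module is developed, the tax sub-module
# is not, so a stub stands in for it with a canned response.
def tax_stub(amount):
    """Stub acting as the undeveloped tax sub-module."""
    return 0.0

def calculate_invoice(amount, tax_fn=tax_stub):
    """Real main module under test; the sub-module is injected."""
    return amount + tax_fn(amount)

# Bottom-Up approach: the tax sub-module is developed, the main module
# is not, so a driver calls the sub-module with test inputs.
def real_tax(amount):
    """Real sub-module under test."""
    return amount * 0.25

def tax_driver():
    """Driver acting as the undeveloped main module."""
    return real_tax(100)

print(calculate_invoice(100))  # main module tested via the stub
print(tax_driver())            # sub-module tested via the driver
```

Once the real sub-module is ready, the stub is thrown away and the real function is injected instead.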
Types of Testing
Sanity Testing
After receiving the initial build, test engineers validate whether the major functionalities of the application
are working or not. If the major functionalities of the application are working fine, we can perform
further testing of the application. If they are not working fine, we cannot
move on to further testing, so we reject the build.
Smoke Testing
Validating the major functionality of the application by the development team in the development
environment is called smoke testing.
Note: Definition-wise, sanity & smoke testing are different, but in practice they are treated the same.
Functional Testing
Validating the overall functionality of the application, including the major functionality, with respect to
the client's business requirements is called functional testing.
Recovery Testing
In this testing, we verify how much time the application takes to come back from an abnormal
state to a normal state.
1. While an application is running, suddenly restart the computer, and afterwards check the
application's data integrity.
2. While an application is receiving data from a network, unplug the connecting cable. After some time,
plug the cable back in and analyze the application's ability to continue receiving data from the point
at which the network connection disappeared.
3. Restart the system while a browser has a definite number of sessions. Afterwards, check that
the browser is able to recover all of them.
Compatibility Testing
In compatibility testing, we verify whether the application works in different browsers,
different types of OS, different types of system software, etc.
Forward compatibility --> the application is ready to run, but the operating system does not
support it.
Backward compatibility --> the operating system supports the application, but the application has
internal coding problems that prevent it from running on the operating system.
Compatibility Test Scenarios:
Test the website in different browsers (IE, Firefox, Chrome, Safari and Opera) and ensure the website
is displayed properly.
Test that the HTML version being used is compatible with the appropriate browser versions.
Test that images display correctly in different browsers.
Test that fonts are usable in different browsers.
Test that the JavaScript code is usable in different browsers.
Test the animated GIFs across different browsers.
Configuration Testing
Validating the application on systems with different configurations of RAM, processor,
HDD, etc. is called configuration testing.
Note: Compatibility testing is suggested for projects.
Configuration testing is suggested for products.
Certification testing:
Certification testing is also related to compatibility testing; however, here the product is certified as fit to
use. Certification is applicable to hardware, operating systems, or browsers, e.g. hardware/laptops are
certified for Windows 7, etc. (Or)
We certify whether the product is compatible or not with the appropriate
software/hardware devices.
Re-testing
Re-testing the application to check whether the bugs are fixed or not. Re-testing is done based on the failed test
cases of the previous build. (Or)
Re-execution of a test with multiple test data to validate a function. E.g.: To validate multiplication,
test engineers use different combinations of input in terms of min, max, -ve, +ve, zero, int, float, etc.
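The multiplication example above can be sketched as a small re-test script (the function and the test data
combinations are illustrative):

```python
def multiply(a, b):
    """Hypothetical function under test."""
    return a * b

# Re-testing with multiple test data: min, max, -ve, +ve, zero, int, float.
cases = [
    (0, 5, 0),           # zero
    (-3, 4, -12),        # negative int
    (2.5, 4, 10.0),      # float
    (-2, -2, 4),         # two negatives
    (1, 999999, 999999), # large value
]
for a, b, expected in cases:
    assert multiply(a, b) == expected
print("all re-test combinations passed")
```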
Regression Testing:
During this testing, the testing team performs regression testing on the modified build
(application) to verify whether the fixed defect is impacting other functionality of the application or not.
Regression testing is done based on the passed test cases of the previous build.
Any dependent modules may also cause side effects.
Performance Testing
Testing how much load the server can handle while executing the current application, in terms
of load, stress & volume testing.
During performance testing we use 3 techniques.
Load Testing: Validating the performance of the application with the client-expected number of users is called
load testing. The application is tested with various numbers of concurrent users to verify the response time of the
application as the number of users increases. (Or)
Running the SUT under the customer-expected configuration and customer-expected load (number of users) to
calculate the speed of processing is called load testing.
Stress Testing: Validating the performance of the application by increasing the number of users beyond the
client-expected maximum to identify the breakpoint of the application. By giving different sets of load continuously
for some time, we find out how stable the application is and where it crashes. (Or)
Running the SUT under the customer-expected configuration and beyond the customer-expected configuration to
check reliability is called stress testing.
Volume Testing: Testing of an application with various volumes of data to verify where the application is
breaking.
Hence, to overcome the above problems, we should use a performance testing tool. Below is a list of some
popular performance testing tools:
Apache JMeter
LoadRunner
Borland SilkPerformer
Rational Performance Tester
WAPT
NeoLoad
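As a rough illustration of what a load test measures, here is a minimal Python sketch that fires a number of
concurrent "users" at a stand-in request handler and records response times (the handler, the user count and
the response-time limit are assumptions, not a real tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    """Hypothetical stand-in for one user's request to the server."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server-side work
    return time.perf_counter() - start

# Load testing sketch: run the client-expected number of concurrent
# users and measure each response time.
users = 20
with ThreadPoolExecutor(max_workers=users) as pool:
    times = list(pool.map(handle_request, range(users)))

print(f"max response time: {max(times):.3f}s")
assert all(t < 1.0 for t in times)  # assumed response-time requirement
```

Real tools like JMeter or LoadRunner do the same thing at a much larger scale, with ramp-up schedules,
reporting and distributed load generators.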
Security Testing
Validating the security of the application in terms of the authentication & authorization.
Authentication: Verifying whether the system is accepting valid/Right user or not.
Authorization: Verifying whether the system is providing right information to right users or not.
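Authentication vs. authorization can be illustrated with a minimal sketch (the users, passwords and roles
are made up for the example):

```python
# Hypothetical users and roles, for illustration only.
USERS = {"alice": "s3cret", "bob": "hunter2"}   # username -> password
ROLES = {"alice": "admin", "bob": "requestor"}  # username -> role

def authenticate(username, password):
    """Authentication: is this a valid/right user?"""
    return USERS.get(username) == password

def authorize(username, resource):
    """Authorization: is this user allowed to see this resource?"""
    if resource == "admin_page":
        return ROLES.get(username) == "admin"
    return True  # other pages are open to all logged-in users

assert authenticate("alice", "s3cret")     # right user accepted
assert not authenticate("alice", "wrong")  # wrong password rejected
assert authorize("alice", "admin_page")    # admin may see admin page
assert not authorize("bob", "admin_page")  # requestor may not
```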
Test Scenarios for Security Testing:
Verify that web pages which contain important data like passwords, credit card numbers, secret
answers for security questions, etc. are submitted via HTTPS (SSL).
Verify that important information like passwords, credit card numbers, etc. is displayed in encrypted
format.
Verify password rules are implemented on all authentication pages like Registration, forgot password,
change password.
Verify if the password is changed the user should not be able to login with the old password.
Verify the error messages should not display any important information.
Verify if the user is logged out from the system or user session was expired, the user should not be
able to navigate the site.
Verify whether secured and non-secured web pages can be accessed directly without login.
Verify the “View Source code” option is disabled and should not be visible to the user.
Verify the user account gets locked out if the user is entering the wrong password several times.
Verify the cookies should not store passwords.
Verify if, any functionality is not working, the system should not display any application, server, or
database information. Instead, it should display the custom error page.
Verify the SQL injection attacks.
Verify the user roles and their rights. For Example The requestor should not be able to access the
admin page.
Verify the important operations are written in log files, and that information should be traceable.
Verify the session values are in an encrypted format in the address bar.
Verify the cookie information is stored in encrypted format.
Verify the application for Brute Force Attacks
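As one concrete illustration of the SQL injection scenario above, a parameterized query treats user input as
data rather than SQL, so the classic injection payload fails to bypass the login check (a sketch using an
in-memory SQLite database; the schema and credentials are hypothetical):

```python
import sqlite3

# Hypothetical users table in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pwd TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(name, pwd):
    # Parameterized query: the user input is bound as data, never
    # concatenated into the SQL string, so injection payloads fail.
    cur = conn.execute(
        "SELECT COUNT(*) FROM users WHERE name = ? AND pwd = ?",
        (name, pwd))
    return cur.fetchone()[0] == 1

assert login("alice", "s3cret")           # valid credentials pass
assert not login("alice", "' OR '1'='1")  # injection payload fails
```

A security test would try payloads like the one above against every input field and verify that none of them
alter the query's behavior.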
Database Testing
Validating the database of the application is called database testing. Whatever we perform in the front-end
application should be reflected in the back-end database, & whatever we perform in the back-end database
should be reflected in the front-end application.
To perform the Database testing, the tester should be aware of the below mentioned points:
The tester should understand the functional requirements, business logic, application flow and
database design thoroughly.
The tester should figure out the tables, triggers, stored procedures, views and cursors used for the
application.
The tester should understand the logic of the triggers, stored procedures, views and cursors created.
The tester should figure out the tables which get affected when insert, update and delete (DML)
operations are performed through the web or desktop applications.
With the help of the above mentioned points, the tester can easily write the test scenarios for Database
testing.
Test Scenarios for Database Testing:
Verify the database name: The database name should match with the specifications.
Verify the Tables, columns, column types and defaults: All things should match with the specifications.
Verify whether the column allows a null or not.
Verify the Primary and foreign key of each table.
Verify the Stored Procedure:
Test whether the Stored procedure is installed or not.
Verify the Stored procedure name
Verify the parameter names, types and number of parameters.
Test the parameters if they are required or not.
Test the stored procedure by deleting some parameters
Test that when the output is zero, zero records are affected.
Test the stored procedure by writing simple SQL queries.
Test whether the stored procedure returns the values
Test the stored procedure with sample input data.
Verify the behavior of each flag in the table.
Verify the data gets properly saved into the database after the each page submission.
Verify the data if the DML (Update, delete and insert) operations are performed.
Check the length of every field: The field length in the back end and front end must be same.
Verify the database names of QA, UAT and production. The names should be unique.
Verify the encrypted data in the database.
Verify the database size. Also test the response time of each query executed.
Verify the data displayed on the front end and make sure it is same in the back end.
Verify the data validity by inserting the invalid data in the database.
Verify the Triggers.
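A couple of the scenarios above (front-end to back-end consistency, and constraint checking) can be
sketched against an in-memory SQLite database (the orders table is a hypothetical example):

```python
import sqlite3

# Hypothetical orders table, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT NOT NULL)")

def app_create_order(item):
    """Stand-in for a front-end 'submit' action."""
    cur = db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    db.commit()
    return cur.lastrowid

# Scenario: data submitted via the front end must appear in the back end.
order_id = app_create_order("laptop")
row = db.execute("SELECT item FROM orders WHERE id = ?",
                 (order_id,)).fetchone()
assert row == ("laptop",)

# Scenario: a NOT NULL column must reject invalid (null) data.
try:
    db.execute("INSERT INTO orders (item) VALUES (NULL)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
assert rejected
```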
Test data
The data or values which we use to test the application are called test data. We use test data
in input domain testing, re-testing & regression testing.
Positive Testing
Performing testing on the application with +ve Test data is called Positive Testing.
Negative Testing
Performing testing on the application with –ve Test data is called Negative Testing.
E.g.: Performing testing on the login screen with a valid Uid & pwd is called +ve testing. Performing testing
on the same screen with an invalid uid & invalid pwd, an invalid uid & valid pwd, a valid uid & invalid pwd, etc.
is called –ve testing.
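The login example above can be written as a small positive/negative test script (the credentials and the
login function are made up for illustration):

```python
VALID_UID, VALID_PWD = "user1", "Pass@123"  # hypothetical credentials

def login(uid, pwd):
    """Stand-in for the login screen's credential check."""
    return uid == VALID_UID and pwd == VALID_PWD

# Positive testing: valid uid & valid pwd must succeed.
assert login("user1", "Pass@123")

# Negative testing: every invalid combination must fail.
negative_cases = [
    ("baduser", "badpass"),   # invalid uid & invalid pwd
    ("baduser", "Pass@123"),  # invalid uid & valid pwd
    ("user1", "badpass"),     # valid uid & invalid pwd
]
for uid, pwd in negative_cases:
    assert not login(uid, pwd)
```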
α-Testing
Performing the testing on the application directly by the client in the developer's environment is
called α-testing.
(Or)
We invite selected customers/clients to our location, ask them to take a look at the software and how
it works, etc., and get feedback from them; this is called alpha testing. Alpha testing happens only ONCE,
not many times.
The main purpose of alpha testing is to get feedback from the clients, customers, or users about how they
feel about the software. This feedback is analyzed by the product manager, delivery manager and team to
change certain things before the release.
β-Testing
Performing the testing on the application directly by client-like people (end users) is called β-testing.
(Or)
In beta testing, we distribute the software to selected users/customers/clients and ask for their feedback,
as alpha testing was already done before beta testing.
Installation Testing
During this test, the testing team validates whether the application build, along with its supported
software, installs at the customer's site on customer-like configured systems.
Acceptance Testing
After completion of system testing, project management concentrates on acceptance testing to
collect feedback from real customers & model customers. In acceptance testing, developers & testers
are also involved to convince the customers. There are two ways of acceptance testing: α-testing & β-
testing.
Release Testing
After completion of acceptance testing & the corresponding modifications, project management concentrates
on the software release. Here the project manager forms the release team with a few developers, a few testers,
a few h/w engineers and one delivery manager as head. This team goes to the customer's site and starts the s/w
installation. The release team observes the following factors during installation:
Complete installation
Overall installation
Input devices handling
Co-existence with OS
Co-existence with other s/w to share resources
After completing the above observations at the customer's site, the release team provides training to the
customer-site people and then comes back to the organization.
Adhoc Testing
Adhoc testing is an informal testing type with an aim to break the system.
It doesn't follow any test design techniques to create test cases. In fact, it does not create test cases
at all!
It is primarily performed when the testers' knowledge of the system under test is very high.
Testers randomly test the application without any test cases or any business requirement doc.
Adhoc testing can be achieved with the testing technique called error guessing.
Error guessing can be done by people having enough experience with the system to "guess" the
most likely source of errors.
Buddy Testing
Due to lack of time to complete the application, testers join the developers to carry out
development & testing in parallel from the beginning stages of development. (Or)
Two buddies mutually work on identifying defects in the same module. Mostly, one buddy is from the
development team and the other from the testing team. Buddy testing helps the testers develop better
test cases, and the development team can also make design changes early. This testing usually happens
after unit testing is complete.
Exploratory Testing
Due to lack of documentation, testers prepare scenarios and cases for their modules depending on
past experience, discussions with others, operating the SUT screens, etc.
Pair testing
Two testers are assigned modules; they share ideas and work on the same machine to find defects. One
person executes the tests and the other takes notes on the findings; the roles during testing can be tester
and scribe. (Or)
Due to lack of skills, a junior tester joins a senior tester to share knowledge during testing.
Buddy testing is a combination of unit and system testing together with developers and testers, but pair
testing is done only with testers of different knowledge levels (experienced and non-experienced, sharing
their ideas and views).
Monkey Testing
Randomly testing the product or application without test cases, with the goal of breaking the system.
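A minimal monkey-testing sketch: feed random junk input to a function and check that it never crashes
(the parse_age function is a hypothetical system under test):

```python
import random
import string

def parse_age(text):
    """Hypothetical function under test: must never crash on junk input."""
    try:
        value = int(text)
    except ValueError:
        return None
    return value if 0 <= value <= 150 else None

# Monkey testing: hammer the function with random input, trying to break it.
random.seed(42)  # fixed seed so the run is repeatable
for _ in range(1000):
    junk = "".join(random.choices(string.printable,
                                  k=random.randint(0, 20)))
    parse_age(junk)  # must not raise, whatever the input
print("survived 1000 random inputs")
```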
Domain Testing:
Domain testing is a software testing technique whose objective is to select and execute test cases covering
the critical functionality of the software. Domain testing does not intend to run all the existing test
cases.
End-to-end Testing:
End-to-end testing is performed by the testing team; its focus is to test end-to-end flows, e.g. right
from order creation till reporting, or from order creation till item return, etc. End-to-end testing is
usually focused on mimicking real-life scenarios and usage. End-to-end testing involves testing the
information flow across applications.
Agile Testing
Due to sudden changes in requirements, the testing team changes the corresponding scenarios and cases,
and then performs retesting and regression testing on the modified software.
Testing methodologies
There are 3 different types of testing methodologies.
1. Black box Testing
2. White box Testing
3. Grey box Testing
1. Black box Testing
Performing testing on the application without structural knowledge (or) coding knowledge
is called black box testing. Black box testing is done by the testing team, because the testing team does not
require coding knowledge; only application knowledge is required for testing the application.
Test Initiation
In the test initiation phase, the QA manager prepares the test strategy document. This
document is also known as the test methodology or test approach. The QA manager also forms the team for
testing the application.
Test Plan
After getting the test strategy document from the QA manager, the test lead designs the
test plan document along with the senior members of the project, based on the SRS, project plan & test
strategy documents.
Test Plan document
The test plan document is in Word document format. It is the route map document for testing the
application. It defines what to test, how to test, who will test & when to test.
E.g.: Prepare BVA for a local mobile number which starts with '9'
Max   = 9 9 9 9 9 9 9 9 9 9 => Pass
Min   = 9 0 0 0 0 0 0 0 0 0 => Pass
Max+1 = 1 0 0 0 0 0 0 0 0 0 0 => Fail
Min+1 = 9 0 0 0 0 0 0 0 0 1 => Pass
Max-1 = 9 9 9 9 9 9 9 9 9 8 => Pass
Min-1 = 8 9 9 9 9 9 9 9 9 9 => Fail
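The same BVA table can be checked in code (the 10-digits-starting-with-9 rule is taken from the example
above):

```python
def is_valid_mobile(number):
    """Rule from the example: 10 digits, starting with '9'."""
    return number.isdigit() and len(number) == 10 and number.startswith("9")

# Boundary value analysis around the 9000000000-9999999999 range:
assert is_valid_mobile("9999999999")       # Max   -> pass
assert is_valid_mobile("9000000000")       # Min   -> pass
assert not is_valid_mobile("10000000000")  # Max+1 -> fail (11 digits)
assert is_valid_mobile("9000000001")       # Min+1 -> pass
assert is_valid_mobile("9999999998")       # Max-1 -> pass
assert not is_valid_mobile("8999999999")   # Min-1 -> fail (starts with 8)
```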
2. ECP (Equivalence Class Partitioning): It defines whether the data we use for testing the application
is valid or invalid.
There are 2 types of equivalence classes:
1. Valid
2. Invalid
E.g.: To validate the User Name, which contains only alphabets ("Naresh"):
Valid:   a-z (small letters), A-Z (capital letters)
Invalid: Numbers (0-9), Special characters (*, @, $, (, &, ^, %)
Valid:   0-9
Invalid: a-z, A-Z, Space, Special characters
Valid:   0-9, A-Z
Invalid: a-z, Space, Special characters
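The first ECP table (a user name containing only alphabets) can be checked in code:

```python
def is_valid_username(name):
    """Rule from the example: the user name contains only alphabets."""
    return name.isalpha()

# Valid equivalence classes: small letters a-z, capital letters A-Z.
assert is_valid_username("Naresh")
assert is_valid_username("naresh")

# Invalid equivalence classes: numbers, special characters, spaces.
for invalid in ["naresh1", "naresh@", "naresh kumar", ""]:
    assert not is_valid_username(invalid)
```

One representative value per class is enough; any other member of the same class would behave the same way.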
3. Error Guessing: Error guessing is experience-based testing. Based on previous experience, the
tester guesses the errors in the application and designs the test cases.
Defects
In this phase, both the testing team & the development team are involved in reporting the defects, fixing
the defects, retesting the defects & closing the defects, etc.
Assigned: After the tester has posted the defect, the test lead validates whether the defect is correct or not.
If the defect is correct, the lead assigns the bug to the corresponding development lead, and the status
changes from new to assigned.
Open: After the test lead assigns the defect, the developer starts analyzing and working on the defect fix.
The developer then sets the status to 'open'.
Fixed: When developer makes necessary code changes and fixes the defect, then the developer gives the
status as ‘Fixed’.
Verified: After the bug is fixed by the developer, the tester tests the bug again. If the bug is no longer
present in the software, the tester approves that the bug is fixed and changes the status to 'verified'.
Reopen: If the bug still exists even after it was fixed by the developer, the tester changes the status to
'reopened'. The bug goes through the life cycle once again.
Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the
application, tester changes the status of the bug to “closed”. This state means that the bug is fixed, tested and
approved.
Duplicate: If the bug is repeated twice, or two bugs describe the same issue, then one bug's
status is changed to 'duplicate'.
Rejected: If the developer feels that the bug is not genuine, he rejects the bug. The state of the bug is
then changed to 'rejected'.
Deferred: A bug changed to the deferred state is expected to be fixed in the next releases. There are many
reasons for changing a bug to this state: the priority of the bug may be low, there may be a lack of time
for the release, or the bug may not have a major effect on the software.
Not a bug: The state is set to 'not a bug' if there is no change in the functionality of the application. For
example: if a customer asks for some change in the look and feel of the application, like a change of color
of some text, then it is not a bug but just a change in the looks of the application.
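The defect statuses above form a life cycle; a minimal sketch models it as a table of allowed transitions
(the exact transition table is an assumption distilled from the descriptions above; real bug trackers differ
in details):

```python
# Allowed status transitions, distilled from the descriptions above.
TRANSITIONS = {
    "new":      {"assigned", "rejected", "duplicate", "deferred", "not a bug"},
    "assigned": {"open"},
    "open":     {"fixed"},
    "fixed":    {"verified", "reopened"},
    "verified": {"closed"},
    "reopened": {"open"},
}

def move(status, new_status):
    """Apply one life-cycle transition, rejecting illegal jumps."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# Happy path: new -> assigned -> open -> fixed -> verified -> closed.
s = "new"
for step in ["assigned", "open", "fixed", "verified", "closed"]:
    s = move(s, step)
assert s == "closed"
```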
Regression Testing:
During this testing, the testing team performs regression testing on the modified
application to verify whether the fixed defect is impacting other functionality of the application.
Most projects use automation tools like QTP, Selenium, SilkTest, etc. to perform
regression testing on the application.
Test closure
Whenever all the test cases have been executed and all defects are closed, the QA manager
signs off the testing activity and the technical team deploys the application into the production
environment.
f) Debugging (Defect Seeding):
To improve the skills of the testers, developers can release software with known bugs. If the testers
find those bugs, the testing team is good; otherwise, the team needs some training.
2) Critical: some part of the functionality is broken, the tester cannot test some part of the functionality,
and there is no workaround.
3) Major: these defects are logical defects which do not block any functionality. The major type usually
contains functional and major UI defects.
4) Minor: it mostly contains UI defects and minor usability defects; defects which do no harm to the
application under test.
Priority: It is a term that indicates the importance of the defect and when it should get addressed or fixed.
1) High: the defect has high business value; the end user cannot work unless the defect gets fixed. In this
case, the priority should be high, meaning an immediate fix of the defect.
2) Medium: the end user can work using a workaround, but there is some functionality the end user cannot
use, and that functionality is not regularly used by the user.
In short: severity describes how the defect impacts the functionality of the product or software under
test, and priority indicates the importance of the defect and when it should get addressed or fixed.
Examples
A major functionality failure, like login not working, or crashes in the basic workflow of the software, is
the best example of high priority and high severity.
1) A spelling mistake in menu names, client names or any important name which is highlighted to the
end user.
A very common mistake people make while giving examples is citing a misspelled logo as low severity;
this is a wrong example. It comes under high priority and high severity: the logo and company name are the
identity of the company or organization, so how could it be low severity?
The tester should be judgmental while assigning the severity to a defect.
1) A crash in the application when end users perform some weird steps which are unusual or invalid.
This is all about severity and priority; let me know if anyone has questions on it.