Software:
A set of executable programs in a computer is called 'Software'.
A). Waterfall model:
[Fig: Waterfall model — Requirement Gathering (BA) → Design (SA/TA) → Coding (Programmers) → Testing (Programmers) → Release (Release team) → Maintenance (CCB), under the CEO.]
B). Prototype model:
When customer requirements are not clear, the organization can follow the prototype model to get clear requirements and then develop the original s/w.
[Fig: Prototype model stages — Requirement Gathering (BA) → Design (SA/TA) → Coding (Programmers) → Testing (Programmers) → Release (Release team) → Maintenance (CCB).]
C). Incremental model:
When customer requirements are huge, the company can develop that s/w installment by installment.
[Fig: Incremental model — installments L#1, L#2, L#3, each going through SRGADCTR&M (s/w Requirement Gathering, Analysis, Design, Coding, Testing, Release & Maintenance), plotted against time.]
D). Spiral model:
When customer requirements are enhancing regularly, companies can follow this spiral model.
[Fig: Spiral model — repeated loops through Analysis and Design, ending in Release and Maintenance.]
Note: In the above old SDLC models, testing comes only after the coding stage, and that testing is conducted by the same programmers. Due to this reason, many s/ws did not satisfy customer site people.
[Fig: V-model — Verification stages (BRS, SRS, design, Coding) on one arm, and the corresponding Validation stages up to Acceptance on the other.]
Note 1: In the above V-model, each development stage has a testing stage, and a separate testing team is recruited for the s/w testing stage. So, this model assures quality and decreases cost.
* works as expected,
* can be implemented with the same characteristics, and
* satisfies the needs of the stakeholders.
NOTE: The combination of static testing and dynamic testing is called testing.
NOTE: The combination of functional and non-functional testing of a system or any object is also called S/W testing, System testing or Project testing.
NOTE: Testing is the process of executing a program with the intention of finding errors.
Note:
1. The agile model is not costly like the V-model, because separate stakeholders need not be recruited; the customer takes care of that role. But unlike the V-model, the agile model takes more time, because approval is an extra step in every stage.
1. Document Testing:
In this stage, the business analyst can review the BRS. In general, the BRS document contains the customer requirements. So, this document is also called the CRS (Customer Requirement Specification) or User Requirements Specification.
From the above example, the "BRS" specifies 'what to develop' and the "SRS" specifies 'how to develop'. The corresponding SA can review the SRS for completeness and correctness.
When the SRS document is baselined, the TA can start design at high level and low level. The high-level design defines the overall architecture of the complete s/w. Due to this reason, the HLD is also called the 'Architectural design' or 'System design'.
[Fig: HLD example — a Root module connected to Login, Mailing, Chatting and Logout modules.]
The low-level design defines the internal architecture of a specific module. So, one s/w design consists of one HLD and multiple LLDs.
[Fig: LLD example (Login) — the user enters user id & pwd; the values are checked against the DB; on valid input the next window opens, on invalid input an error msg is shown.]
The same TA can review the HLD and LLDs for completeness and correctness.
Inspection means searching a document for a specific factor, for completeness and correctness in that document.
Peer review means the point-to-point comparison of two similar documents.
2. Unit testing:
When the design documents are baselined, programmers can start coding, using development technologies like Java, .NET, SAP, PHP, PeopleSoft, etc.
Eg:
[Fig: flow graph of a sample program with nested if / else conditions, showing the True (T) and False (F) branches.]
To apply this technique to a program in unit testing, the programmer can follow the below process.
Step 4: Run the program more than once to cover all paths of program execution.
Eg:
[Fig: flow graph of another sample program with if / else conditions and their True (T) and False (F) branches.]
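The path-coverage idea above can be sketched in Python; the function and its branch conditions are hypothetical, chosen only to give three distinct execution paths.

```python
def classify(a, b):
    """Toy function with nested if conditions, giving 3 distinct paths."""
    if a > 0:            # decision 1: T / F
        if b > 0:        # decision 2 (reached only when a > 0): T / F
            return "both positive"     # path T-T
        return "only a positive"       # path T-F
    return "a not positive"            # path F

# Step 4: run the program more than once to cover all paths.
results = [classify(1, 1), classify(1, -1), classify(-1, 5)]
```

Three runs are enough here because the program has exactly three independent paths.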
b) Control structure coverage:
Note: In the old days, testing meant debugging only. But now debugging is just one technique within the huge testing topic.
Eg: to swap two values a=10 and b=20:
Logic 1: c = a; a = b; b = c (uses a temporary variable 'c')
Logic 2: a = a + b; b = a - b; a = a - b (uses addition and subtraction)
In the above example, the first logic runs fast but needs extra memory for the 'c' variable.
The second logic does not take extra memory, but takes more time to execute the additions and subtractions.
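The two swap logics can be written out and checked as below; variable names follow the example above.

```python
# Logic 1: swap with a temporary variable 'c' (fast, but extra memory for c)
a, b = 10, 20
c = a          # temp value
a = b
b = c
swap1 = (a, b)

# Logic 2: swap with addition and subtraction (no extra variable, more operations)
a, b = 10, 20
a = a + b      # a = 30
b = a - b      # b = 10
a = a - b      # a = 20
swap2 = (a, b)
```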
When a program runs correctly and fast, the programmer can test that program for completeness. Due to this reason, the previous techniques are called 'Program Testing Techniques', because they test the program itself.
3. Integration Testing:
When the related programs' writing and their unit testing are completed, the corresponding programmers can start interconnecting those programs to build a s/w.
[Fig: Program1 interconnected with Program2.]
There are four integration approaches:
a). Top-down approach
b). Bottom-up approach
c). Hybrid approach
d). System approach
a. Top-down approach:
[Fig: Top-down — the Main module is integrated with Sub1; Sub2 is replaced by a Stub while under construction.]
In this approach, programmers can integrate the main module and some of the sub-modules, because the remaining sub-modules are under construction.
b. Bottom-up approach:
We can use this approach to integrate sub-modules without the involvement of the main module, while the main module is under construction.
[Fig: Bottom-up — a Driver calls Sub1, which is integrated with Sub2 for integration & integration testing; the main module is absent.]
C. Hybrid approach:
It is a combination of the top-down and bottom-up approaches.
[Fig: Hybrid — a Driver stands in for Main, Sub1 and Sub2 are integrated for integration & integration testing, and a Stub stands in for Sub3.]
The above approach is also called the 'Sandwich approach'.
D. System approach:
In this approach, programmers can start integration after completion of 100% of the coding. In this approach, programmers need not use drivers and stubs. This approach is also called the 'Big bang approach'.
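A stub (top-down) and a driver (bottom-up) can be sketched as below; the module names are hypothetical.

```python
# Top-down: Main is ready, one sub-module is under construction,
# so a stub returns a canned value in its place.
def report_stub(data):
    """Stub: stands in for the unfinished report sub-module."""
    return "stub-report"

def main_module(data, report=report_stub):
    return "processed:" + report(data)

# Bottom-up: the sub-module is ready, Main is under construction,
# so a driver is a temporary caller used to exercise it.
def sub_module(x):
    return x * 2

def driver():
    """Driver: calls sub_module directly, without Main."""
    return sub_module(21)
```

Once the real modules are finished, the stub and driver are thrown away.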
4. Software testing:
Nowadays, organizations recruit a separate testing team only for the s/w testing stage, because s/w testing is the 'bottleneck' stage in s/w development, and a separate testing team tests the s/w with fresh eyes.
a. Functional Testing
b. Non-Functional Testing
a. Functional Testing:
In general, the testing team's job can start with functional testing, to validate the corresponding s/w w.r.t. customer requirements. Due to this reason, functional testing is also called 'Requirements testing'. During this functional testing, the testing team can conduct the below sub-tests on the SUT (software under testing).
i).Behavioral testing
iv).Manipulation testing
i). Behavioral testing:
In general, functional testing can start with GUI testing. So, this testing is called 'Behavioral testing', 'Control flow testing' or 'GUI testing'.
iv). Manipulation testing:
During this test, the testing team can validate the correctness of the o/p or outcome in every screen of the SUT. This testing is also called 'Output (or) Outcome testing'.
Note: The SUT front-end screens are already integrated with the database by the developers. So, a s/w means a combination of screens and DB.
A screen of the SUT is called the 'Front end' and the DB of the SUT is called the 'Back end'.
vi). Data volume testing:
During this test, the testing team can test the SUT database capacity by inserting sample data. This testing is also called 'Data capacity testing' or 'Memory capacity testing'.
[Fig: the SUT's front end and DB connected to another s/w's front end and DB.]
Sometimes our SUT connects to other s/w to share that s/w's DB. This resource-sharing testing is also called 'Integration testing', 'End-to-end testing', 'Web-service testing' or 'SOA (Service Oriented Architecture) testing'.
Note: The above is applicable to a SUT which is able to connect to other s/w to share a DB.
2. Non-Functional Testing:
After completion of functional testing, the corresponding testing team can concentrate on non-functional testing, to validate a s/w with respect to customer expectations like usability, portability, performance, security, etc.
[Fig: S/W testing = functional testing (requirements) + non-functional testing (expectations/characteristics), together called system testing.]
A) Usability testing:
In general, NFT can start with usability testing. During this test, the testing team can observe the below factors on every screen of the s/w under testing (SUT).
3. Short navigations
B) Compatibility testing:
This testing is also called portability testing. During this test, the testing team can run the SUT on various customer-expected platforms, like different operating systems, different browsers and other system s/ws.
D) Performance testing:
Performance means speed in processing. To calculate the performance of a SUT, the testing team can conduct the below sub-tests.
[Fig: a load of concurrent users on the server running the SUT.]
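A minimal sketch of load testing with concurrent users, using threads; `handle_request` is only a stand-in for a real SUT request, and the 50 users / 10 ms values are arbitrary assumptions.

```python
import threading
import time

def handle_request(results, i):
    """Stand-in for one SUT request; a real load test would hit the server."""
    time.sleep(0.01)            # simulated processing time
    results[i] = "ok"

def load_test(concurrent_users=50):
    """Fire all requests at once and measure the total elapsed time."""
    results = {}
    threads = [threading.Thread(target=handle_request, args=(results, i))
               for i in range(concurrent_users)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results, time.time() - start

responses, elapsed = load_test(50)
```

Real load tests use dedicated tools, but the shape is the same: many concurrent users, one measured response time.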
E) Security testing:
This testing is also called penetration testing. During this test, the testing team can concentrate on the below sub-tests.
[Fig: a client and server exchanging encrypted data (cipher text) over the internet; the tester plays the role of a hacker trying to read it between encryption and decryption.]
F) Multilingual testing:
Some s/ws take inputs and provide outputs in Unicode instead of ASCII (American Standard Code for Information Interchange). While testing this type of s/w, the testing team can conduct multilingual testing in either of two ways.
[Fig: Localization — the tester checks a separate SUT build per language (English, Spanish, French, Arabic); Globalization/Internationalization — one s/w build handles the language conversion for all of them.]
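The ASCII vs Unicode difference can be checked directly in Python; the Arabic sample string is only an illustration.

```python
def is_ascii_only(text):
    """True if every character fits in ASCII's 128 code points."""
    try:
        text.encode("ascii")
        return True
    except UnicodeEncodeError:
        return False

english = "Hello"
arabic = "مرحبا"        # needs Unicode; cannot be encoded as ASCII

# A multilingual SUT must round-trip non-ASCII text unchanged, e.g. via UTF-8.
round_trip = arabic.encode("utf-8").decode("utf-8")
```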
g) Parallel testing:
When the SUT is a product, the testing team can compare the SUT with previous versions of the same s/w, or with competitive s/ws in the market, to identify weaknesses & strengths before release.
Note 1: Of the above 7 NFTs, the first 4 tests are mandatory. For security testing the team needs hacking knowledge; multilingual testing is needed only on a SUT developed in Unicode; parallel testing is applicable for products only.
Note 2: While conducting FT & NFT on the s/w, the testing team will itself be tested by test management, to confirm whether the testing team is testing the s/w with respect to company standards or not. This management of the testing team is called 'compliance testing' or 'standards testing'.
V) Acceptance testing:
After completion of s/w testing, company management can concentrate on acceptance testing, to collect feedback on that s/w. There are two techniques in this stage: alpha testing & beta testing.
Note: The above alpha & beta tests in acceptance testing are also called yellow-box testing techniques.
VI) Release testing:
During this stage, the release team can check the below factors at the customer site (port testing):
Complete installation.
Overall functionality.
I/P devices support. Ex: keyboard, mouse, joystick, etc.
O/P devices handling. Ex: monitor, printers, internet connection, scanner, etc.
Secondary storage devices support. Ex: ROM, external hard disk, pen drive.
OS support.
Co-existence with other s/w at the customer site.
VII) Testing during maintenance:
After the s/w release process completes, the release team can provide training to end users at the customer site, and then the release team people will come back to the company and wait for the next project or product.
While utilizing the s/w, customer site people get failures or needs for enhancement. To handle those failures and enhancements, company management can recruit some trained freshers, called the CCB (Change Control Board). This CCB team can do the below tasks:
SCR hierarchy:
A s/w change request (SCR) is either an enhancement or a missed bug.
Enhancement: perform & test the enhancement.
Missed bug: fix the bug & test the bug fixing (corrective maintenance).
During corrective maintenance, management can calculate the efficiency of the development and testing teams in the previous project/product, to improve skills for upcoming projects/products.
Test initiation → Test planning → Test design → Test execution → Test reporting → Test closure
(Ex: In my company, the s/w testing process starts with test initiation. In that, my project manager decides which types of tests suit our current project. After that we plan the testing using those tests, and then we design the test cases, which are executed after designing is completed. During this execution, we find defects, which are sent to the developers. The developers then correct those defects and provide the modified software to the testers.)
From the above mapping between SDLC & STLC, developers and testers can sign in / roll on to the current project/product after completion of SRS preparation and its review by the SA.
Note: When the acceptance stage is completed, developers and testers will sign off / roll off from the current project/product, but a few developers and testers go onsite for the s/w release.
i. Exhaustive Testing
ii. Optimal Testing
iii. Ad-hoc Testing
Ad-hoc testing is illegal, because this testing does not test the complete s/w.
Ex:
Roles              Responsibilities
1. Test Lead       .Preparation of test plans.
                   .Review of test cases.
                   .Involvement in defect tracking.
                   .Conducting daily/weekly/monthly meetings.
2. Sr. Tester (QA) .Preparing test cases.
                   .Involvement in defect reporting.
                   .Coordinating with Jr. Testers.
3. Jr. Tester      .Executing test cases on the SUT.
                   .Detecting defects.
                   .Defect report preparation.
Eg:
a. GUI testing
d. Manipulation testing
e. DB testing
g. Interconnection testing
8. Configuration Management:
To store all development & testing documents for the future, the PM can provide a folder structure on the server.
[Fig: example folder structure on the server.]
11). Training Need:
The PM can analyze the need for training the testers before starting testing.
3. Lack of resources.
4. Lack of documentation.
5. Delays in delivery.
7. Lack of communication.
Test environment: The required h/w & s/w for testers during current project/product testing.
.Time exceeded.
.All major bugs are fixed.
Staff and training needs: The list of selected testers, and a list of training topics if needed.
Risks and assumptions: A list of previously analyzed risks and the solutions to overcome them.
Note: From the above IEEE-829 test plan document format, a test plan consists of 4 components: "What to test?", "How to test?", "Who will test?" and "When to test?".
From the above case study, test engineers can prepare test scenarios, test cases, SQL commands and test data in the test design stage, and prepare the test log & defect report in the test execution stage.
During test design, the selected test engineers can follow the below process steps in the s/w test design stage.
From the above ways, every tester can try to understand all requirements in the SRS, but is responsible only for the selected modules' testing.
[Fig: test design flow starting from the BRS.]
Eg: Prepare test scenarios & test cases for login module functional testing in an insurance project.
1). User id field validation:
a. BVA (size):
Size   Boundary   Result
4      Min        Valid
5      Min+1      Valid
3      Min-1      Invalid
16     Max        Valid
15     Max-1      Valid
17     Max+1      Invalid
b) Equivalence Class Partitioning (ECP) - (type):
Valid: alphanumerics in lower case.
Invalid: alphabets in upper case, special characters, blank field.
2). Password field validation:
a. BVA (size):
Size   Boundary   Result
4      Min        Valid
5      Min+1      Valid
3      Min-1      Invalid
8      Max        Valid
7      Max-1      Valid
9      Max+1      Invalid
b) Equivalence Class Partitioning (ECP) - (type):
Valid: alphabets in lower case.
Invalid: alphabets in upper case, special characters, blank field, alphanumerics.
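The BVA/ECP cases above can be automated as below, assuming the rules implied by the tables: a user id of 4-16 lowercase alphanumerics and a password of 4-8 lowercase alphabets.

```python
import re

def valid_user_id(value):
    """BVA: 4-16 chars; ECP: lowercase alphanumerics only (assumed rule)."""
    return bool(re.fullmatch(r"[a-z0-9]{4,16}", value))

def valid_password(value):
    """BVA: 4-8 chars; ECP: lowercase alphabets only (assumed rule)."""
    return bool(re.fullmatch(r"[a-z]{4,8}", value))

# Boundary values straight from the tables above
assert valid_user_id("a" * 4) and valid_user_id("a" * 16)          # Min, Max
assert not valid_user_id("a" * 3) and not valid_user_id("a" * 17)  # Min-1, Max+1
assert valid_password("a" * 4) and valid_password("a" * 8)
assert not valid_password("ABCD") and not valid_password("")       # invalid ECP classes
```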
Test cases:
Decision table (DT)
ii). Period (5 to 20 years)
After filling the above fields, the agent can click the 'Create' button to get a policy number as output; that number is 7 digits. Sometimes the agent can click the 'Back' button to go back to the options window.
Validate the policy holder name:
Test case:
1). ECP (type):
Valid: alphabets.
Invalid: numerics, special characters, blank field.
7). Test scenario: (validate the duration/period of the policy)
Test cases:
1. BVA:
Boundary   Value   Result
Min        5       Valid
Min-1      4       Invalid
Min+1      6       Valid
Max        20      Valid
Max-1      19      Valid
Max+1      21      Invalid
2. ECP on type:
Valid: numerics.
Invalid: alphabets, special chars, blank field.
8). Test scenario: (validate the amount field)
Test cases:
ECP (type):
Valid: numerics.
Invalid: alphabets, special chars, blank field.
Test scenario: (validate the 7-digit policy number)
Test cases:
1. BVA (the policy number is always 7 digits, so Min = Max = 7):
Boundary   Value   Result
Min        7       Valid
Max        7       Valid
Max-1      6       Invalid
Max+1      8       Invalid
2. ECP (type):
Valid: numerics.
Invalid: alphabets, special chars, blank field, duplicate.
Test cases: DT
Ex: a regular expression for the policy holder name:
[0-9] - one digit
[\.] - one period
[A-Z]([a-z]*[\s]?){1,}[\.] - one capital letter, then lowercase letters with optional single spaces, ending in one period.
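The last pattern above can be exercised with Python's `re` module; the sample names are illustrative.

```python
import re

# One capital letter, then runs of lowercase letters with optional single
# spaces, ending in exactly one period.
pattern = r"[A-Z]([a-z]*[\s]?){1,}[\.]"

assert re.fullmatch(pattern, "John.")        # valid
assert re.fullmatch(pattern, "Anil kumar.")  # valid: spaces allowed inside
assert not re.fullmatch(pattern, "john.")    # invalid: starts lowercase
assert not re.fullmatch(pattern, "John")     # invalid: no trailing period
```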
1. BVA (Boundary Value Analysis) is used to define the boundaries of i/ps and o/ps.
2. ECP (Equivalence Class Partitioning) is used to define the types of i/ps and o/ps.
5. STF (State Transition Flow) is used to write scenarios and cases in order w.r.t. the functionalities in the SUT.
6. EG (Error Guessing) is used to guess the errors in the upcoming SUT (based on past experience).
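The BVA rule used throughout (min, min±1, max, max±1) can be generated for any size range:

```python
def bva_values(minimum, maximum):
    """Return the six standard BVA test values with expected validity."""
    return [
        (minimum,     "valid"),
        (minimum + 1, "valid"),
        (minimum - 1, "invalid"),
        (maximum,     "valid"),
        (maximum - 1, "valid"),
        (maximum + 1, "invalid"),
    ]

# Reproduces the user id size table above (min=4, max=16)
user_id_cases = bva_values(4, 16)
```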
[Fig: use-case-based test design — BRS → Use cases → Coding, UT & IT.]
ii. Use case description: While doing a money transfer, the user can specify the amount & destination A/C number.
iii. Actors: While doing a money transfer, the user can fill the below fields:
A 7-digit number whose 1st and 2nd digits are 0, for HDFC;
A 9-digit number which doesn't start with 0 or 1 and doesn't end with 0, for local;
A 9-digit number which doesn't start or end with 0 or 1, for ICICI;
An 8-digit number whose 1st and 2nd digits are in between 1 and 9 only, for ICICI.
C. Amount: 1500 to 100000
E. Events list:
Fill the fields in the money transfer page and click the 'Transfer' button → a Successful message for valid data, an Unsuccessful message for invalid data.
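Two of the account-number rules above are clear enough to sketch as code (the ICICI rules are partly garbled in the notes, so they are omitted); the function names are hypothetical.

```python
import re

def account_type(acc):
    """Classify an account number by the HDFC and 'local' rules above."""
    if re.fullmatch(r"00\d{5}", acc):
        return "HDFC"       # 7 digits, 1st and 2nd digits are 0
    if re.fullmatch(r"[2-9]\d{7}[1-9]", acc):
        return "local"      # 9 digits, no leading 0/1, no trailing 0
    return None

def valid_amount(amount):
    """Amount field: 1500 to 100000, per the use case."""
    return 1500 <= amount <= 100000
```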
[Fig: money transfer flow — the user logs in (checked against the DB), opens the Money Transfer screen, and enters the Account number and Amount; valid data gives a Successful message, invalid data an Unsuccessful message.]
From the above diagram, manual testers can try to understand the project requirements by talking with customer site people, by getting prototypes from the developers, and by operating the original SUT screens; then the manual tester will start writing test scenarios and test cases for the responsible modules' functional testing.
[Fig: Mini statement screen — To date (mm/dd/yy) field, with Statement, Print and Logout options.]
4. Non-functional specs-based test design:
After completing the test scenario and case writing for functional testing, manual testers can start writing scenarios and cases for non-functional testing, like usability testing, compatibility testing, h/w configuration testing and performance testing.
Here, the manual tester can prepare scenarios and cases for a specific testing topic, to be applied on the complete SUT. To prepare test scenarios & cases for functional testing at module level, the manual tester followed 3 methods. But to prepare test scenarios and cases for non-functional testing topics at s/w level, the manual tester can follow only one method:
[Fig: BRS → Design → Coding, UT & IT → S/W build → Test cases (non-functional testing).]
Test scenarios:
Note:
1. The above 20 usability test scenarios (or) cases are applicable to any s/w's usability testing. Due to this reason, the above usability scenarios (or) cases are called common checkpoints.
2. For more usability test scenarios/cases, we can refer to 's/w testing concepts and tools'.
2. Compatibility expectations:
From the customer site people's expectations, the OBP website should be able to run on client machines like Windows XP, Windows Vista, Windows 7 and Windows 8, and on server machines like Windows 2003 Server, Windows 2008 Server and Linux Red Hat server.
On either a client (or) server computer, users can open the OBP site using IE, Google Chrome, Mozilla Firefox, Safari, Opera, Netscape Navigator or HotJava.
Q. Prepare test scenarios and cases for compatibility testing on OBP.
Test cases:
OS        Windows 7   Yes
          Windows 8   Yes
          Other       Yes/No
Browser   IE          Yes
          Safari      Yes
          Other       Yes/No
3. H/W configuration expectations:
While running the mini statement functionality, the OBP s/w will connect to a printer, like Inkjet, Dot matrix or Laser.
Q. Prepare test scenarios and cases for h/w configuration testing on OBP.
Test scenario:
Network topology   Ring     Yes
                   Hub      Yes
                   Others   Yes/No
4. Performance expectations:
From customer expectations, the OBP website will be used by 1000 users at a time.
In general, developers can place the SUT in a 'Softbase' folder on the server computer.
[Fig: the server's Softbase folder shared between developers and testers.]
In general, testers can report defects to developers using MS Outlook / Lotus Notes in the below process.
In general, developers can release modified SUT builds with unique version numbers. These numbers help testers distinguish the old build from the new build.
[Fig: build 1.0 → DR (defect report) → build 2.0 → DR → ...]
SUT (1.0) → Smoke testing → Real testing → (DR) → DTT → (Accepted) → Bug fixing
Note:
2. Test execution levels like retesting, sanity testing and regression testing will come on the 2nd version to the last version.
4. Real testing will come on the 1st version to the last-but-one version.
The h/w team can provide the required h/w and s/w to testers for SUT testing.
D. Smoke testing:
In general, s/w test execution on every s/w build can start with smoke testing. In this level, the test lead and all testers gather in one cabin and do the below tasks.
If smoke testing failed on the SUT, then the test lead can reject that SUT and wait for a working SUT or stable SUT or testable SUT.
If smoke testing passed, then the test lead can think about further testing levels.
[Fig: the test environment — testers connected to the SUT on the server.]
Note: From the above diagram, smoke testing is a team-level job. Smoke testing is also called 'testability testing' (or) 'build verification testing'.
E. Real testing:
In general, testers conduct real testing on the responsible modules of the SUT individually.
[Fig: the test environment — testers T1, T2 and T3 each testing their own copy of the SUT on the server.]
Note: From the above diagram, every test engineer can follow the below process to apply test cases on the responsible modules of the SUT and detect defects.
A defect means a mismatch between a test case's expected values and the actual values of the SUT.
During real testing, the corresponding test engineer can follow the below process:
.Download/launch the stable SUT.
.Open the previously prepared test cases.
.Operate the SUT and compare the expected values in the test cases with the actual values in the SUT.
.If expected is not equal to actual, then the tester can stop testing, prepare a defect report in IEEE-829 format using MS Excel, and forward it to the DTT using MS Outlook.
F. Defect reporting:
When any test case's expected value is not equal to the actual value of the SUT during real testing, the corresponding manual tester can stop real testing and then prepare a defect report in IEEE-829 format, like shown below.
3. Build version id: The version number of the SUT in which this defect was detected during module testing.
5. Test scenario: The name of the failed scenario, during whose execution this defect was detected.
10. Test environment: The h/w & s/w used while getting the defect in the SUT.
Note:
Testers use MS Excel to prepare the defect report following the above IEEE-829 format, and testers can use MS Outlook to forward that defect report to the DTT.
[Fig: defect tracking flow — the DTT categorizes the defect report (DR) and assigns it: test case related? test data related? test environment related (h/w team)? otherwise, s/w coding related.]
Note: 1. From the above defect tracking process, one defect is classified into any one of 4 types: test case related, test data related, test environment related, or s/w coding related. 2. In general, the DTT can reject a defect due to lack of information (or) because it is a duplicate of another defect.
[Fig: for each accepted defect type, the fix is applied, the tester re-runs (retests) the SUT, and on failure a new defect report is raised and tracked again.]
K. Coding-related bug fixing:
[Fig: during real testing, test cases are run on the SUT; on failure, a defect report goes to the DTT (tracking). When the DTT accepts the DR as a coding-related defect, the test lead conducts 'root-cause analysis' along with the developer and TA, to identify the wrong coding areas of the SUT. Developers then modify the wrong coding areas to correct them, and the TA changes the HLD & LLD if needed.]
After completion of smoke testing, the related testers re-execute the previously failed tests; this is called 'Retesting'.
If retesting passed, then they go to sanity testing, executing the previously passed, most related tests to identify side effects of the modifications. And then the testers can go to regression testing, executing all related test cases w.r.t. the modifications.
Smoke testing is a common testing stage for any SUT build, executing fixed test cases.
The combination of smoke test, retest, sanity test & regression test is called 'confirmation testing'.
In the above specified story, every change in a bug's status is called its 'life cycle'.
[Fig: Bug life cycle — New/Reopened → Accepted (opened), Rejected or Deferred; Accepted → Fixed; correctly Fixed → Closed.]
In the above bug life cycle (BLC), deferred means that a bug was accepted for fixing, but the fixing was postponed to future releases. From the above BLC, one bug's final status is either closed or deferred.
New → Deferred
New → Rejected → Closed
New → Rejected → Reopened → ... → Closed
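The BLC transitions described above can be captured in a small table; the exact set of allowed moves is an assumption based on the cycle as described.

```python
# Allowed bug-status transitions, per the life cycle above.
TRANSITIONS = {
    "new":      {"accepted", "rejected", "deferred"},
    "reopened": {"accepted", "rejected", "deferred"},
    "accepted": {"fixed"},
    "fixed":    {"closed", "reopened"},  # reopened when not correctly fixed
    "rejected": {"closed", "reopened"},
    "deferred": set(),                   # postponed to a future release
    "closed":   set(),                   # final status
}

def can_move(current, nxt):
    """Check whether a status change is allowed by the life cycle."""
    return nxt in TRANSITIONS.get(current, set())
```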
m). Test cycle:
SUT (1.0) → Smoke testing → Real testing → (DR) → DTT → (Accepted) → Bug fixing
SUT (3.0) → further real testing → (DR) → DTT → (Accepted) → Developer bug fixing
From the above diagram, the time gap between two build releases is called a 'test cycle'. In one cycle, developers can fix one or more bugs.
In general, testers test the SUT on week days & report defects. On week ends, developers fix those bugs & release the modified SUT. So, one test cycle takes one week's time.
Coverage analysis:
Eg:
Module   % bugs
A        20%
B        20%
C        40%
D        20%
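The table above can be used to pick the high-bug-density modules for final regression; the 25% threshold is an assumption.

```python
bugs_per_module = {"A": 20, "B": 20, "C": 40, "D": 20}

def high_density_modules(bugs, threshold=25):
    """Modules whose bug percentage exceeds the threshold (assumed 25%)."""
    return [m for m, pct in bugs.items() if pct > threshold]

flagged = high_density_modules(bugs_per_module)
```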
After completion of the test closure review meeting, the corresponding testers & test lead can start re-executing test cases on the high-bug-density modules in the final SUT build. This testing is called 'final regression', 'post-mortem testing', 'confidence testing' or 're-acceptance testing'.
If any test case fails on the final SUT, then the testing team can contact the developers immediately to fix that golden bug / lucky bug as early as possible.
If there is not enough time to fix the golden bug, then they request the customer site people to accept a later patch.
vi). Acceptance:
Note:
While rolling off from the current project, the test lead can prepare the final test summary report in IEEE-829 format and submit it to the PM.