Introduction & Manual Testing - Presentation Transcript

1. Introduction & Manual Testing


o Software Development Life Cycle
o Software Life Cycle Models
o Waterfall Model
o Prototype Model
o Rapid Application Development (RAD) Model
o Spiral or Iterative Model
o Component Assembly Model
2.
o Testing Fundamentals
o Testing Objectives
o Testing Information Flow
o Test Case Design
o White Box Testing
o Basis Path Testing
o Flow Graph Notation
o Cyclomatic Complexity
o Deriving Test Cases
o Graph Matrices
3.
o Control Structure Testing
o Condition Testing
o Dataflow Testing
o Loop Testing
o Black Box Testing
o Equivalence Partitioning
o Boundary Value Analysis
o Comparison Testing
o Verification and Validation
o Different Kinds of tests to be considered
o SEI, CMM, ISO, IEEE, ANSI
4. SDLC Model, also called the Linear Sequential Model or Classic Life Cycle Model
o System/Information Engineering and Modeling
o Software Requirements Analysis
o System Analysis and Design
o Code Generation
o Testing
o Maintenance
5. Quality, Quality Assurance, and Quality Control. Quality is meeting the requirements
expected of the software, consistently and predictably.
o Quality Assurance
o Concentrates on the process of producing the products.
o Defect-prevention oriented.
o Usually done throughout the life cycle.
o This is usually a staff function.
o Examples: Reviews and Audits
o Quality Control
o Concentrates on specific products.
o Defect-detection and correction oriented.
o Usually done after the product is built.
o This is usually a line function.
o Examples: Software testing at various levels.
6. Testing, Verification, And Validation
o Testing is the phase that follows coding and precedes deployment.
o Verification is the process of evaluating a system or component to determine
whether the products of a given phase satisfy the conditions imposed at the start
of that phase.
o Validation is the process of evaluating a system or component during or at the
end of the development process to determine whether it satisfies specified
requirements.
7. Quality Assurance = Verification; Quality Control = Validation = Testing
8. Waterfall Model
o The Waterfall Model is characterized by three attributes:
o The project is divided into separate distinct phases.
o Each phase communicates to the next through pre-specified outputs.
o When an error is detected, it is traced back one phase at a time until it is
resolved in some earlier phase.
9. Waterfall phases, in order: Overall business requirements → Software requirements →
Planning → High-level design → Low-level design → Coding → Testing.
10. Prototyping Model
o A Prototyping model uses constant user interaction, early in the requirements
gathering stage, to produce a prototype.
o The prototype is used to derive the system requirements specification and can
be discarded after the SRS is built.
o An appropriate life cycle model is chosen for building the actual product after
the user accepts the SRS.
11. Rapid Application Development (RAD) Model
o RAD is a linear sequential software development process that emphasizes an
extremely short development cycle. It includes the following phases:
o Business Modeling.
o Data Modeling.
o Process Modeling.
o Application Generation.
o Testing and Turnover.
12. Spiral or Iterative Model
o Most life cycle models can be derived as special cases of this model. The Spiral
uses a risk management approach to software development. Some advantages of
this model are:
o Defers elaboration of low risk software elements.
o Incorporates prototyping as a risk reduction strategy.
o Gives an early focus to reusable software.
o Accommodates life-cycle evolution, growth, and requirement changes.
o Incorporates software quality objectives into the product.
o Focuses on early detection of errors and design flaws.
o Uses identical approaches for development and maintenance.
13. Component Assembly Model
o Object technologies provide the technical framework for a component-based
process model for software engineering.
o The object-oriented paradigm emphasizes the creation of classes that encapsulate
both data and the algorithms used to manipulate that data.
o If properly designed and implemented, object-oriented classes are reusable
across different applications and computer-based system architectures.
o Component Assembly Model leads to software reusability.
o The integration/assembly of already existing software components accelerates the
development process.
14. Testing Fundamentals
o Testing Objectives
o Testing is the process of executing a program with the intent of finding errors.
o A good test is one that has a high probability of finding an as yet undiscovered
error.
o A successful test is one that uncovers an as yet undiscovered error.
15. Test Information Flow [Flattened diagram: the software configuration and test
configuration feed Testing, which produces test results; comparing test results with
expected results during Evaluation yields errors; errors drive Debug, which produces
corrections, and supply error rate data to a reliability model, which produces
predicted reliability.]
16. Test Case Design
o Can be difficult at the initial stage.
o Can test if a component conforms to specification – Black Box testing.
o Can test if a component conforms to design – White Box Testing.
o Testing cannot prove correctness, as not all execution paths can be tested.
17. White Box Testing
o Testing control structures of a procedural design. Can derive test cases (see the sketch after this list) to ensure:
o All independent paths are exercised at least once.
o All logical decisions are exercised for both true and false paths.
o All loops are executed at their boundaries and within operational bounds.
o All internal data structures are exercised to ensure validity.
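
A minimal sketch of these criteria in code, assuming JUnit 4 (the Discount class is hypothetical, not from the slides): the method has one decision, so its cyclomatic complexity is 2 and basis path testing requires two cases, one per branch.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical unit under test: one if decision, so cyclomatic
// complexity is 2 and basis path testing needs two test cases.
class Discount {
    static double apply(double price, boolean isMember) {
        if (isMember) {              // logical decision
            return price * 0.9;      // true path
        }
        return price;                // false path
    }
}

public class DiscountTest {
    @Test
    public void memberPathIsExercised() {      // covers the true branch
        assertEquals(90.0, Discount.apply(100.0, true), 0.001);
    }

    @Test
    public void nonMemberPathIsExercised() {   // covers the false branch
        assertEquals(100.0, Discount.apply(100.0, false), 0.001);
    }
}

Together the two tests exercise every independent path and both outcomes of the logical decision, which is exactly what the criteria above ask for.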

Automated Testing vs Manual Testing - Presentation Transcript


1. Automated Testing vs Manual Testing By Bhavin Turakhia CEO, Directi (shared under
Creative Commons Attribution Share-alike License incorporated herein by reference)
( http://creativecommons.org/licenses/by-sa/3.0/ )
2. Manual Tests
o Coding Process with Manual Tests
 Write code
 Upload the code somewhere
 Build it
 Run the code manually (in many cases filling in forms etc. step by
step)
 Check log files, database, external services, values of variables,
output on the screen, etc.
 If it does not work, repeat the above process
3. Automated Tests
o Coding Process with Automated Unit Tests (sketched in code after this slide)
 Write one or more test cases
 Auto-compile and run to see the tests fail
 Write code to pass the tests
 Auto-compile and run
 If tests fail -> make appropriate modifications
 If tests pass -> repeat for next method
o Coding Process with Automated Functional Tests
 Finish writing code (with all unit tests passing)
 Write a Functional Test using any tool
 Auto-compile and run
 If tests fail -> make appropriate modifications
 If tests pass -> move ahead
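
A minimal sketch of the unit-test cycle above, assuming JUnit 4 (the Calculator class is a placeholder, not from the deck): the test is written first, fails until the code exists, and passes once just enough code has been written.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {
    // Step 1: write the test case first; it fails (or does not even
    // compile) until add() exists.
    @Test
    public void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// Step 2: write just enough code to make the test pass, auto-compile
// and run again, then repeat for the next method.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}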
4. Automated Tests vs Manual Tests
o Effort and Cost
 Let's assume 6 test cases
 Effort required to run all 6 manually => 10 min
 Effort required to write unit tests for all 6 cases => 10 min
 Effort required to run unit tests for all 6 cases => < 1 min
 Number of testing iterations => 5
 Total manual testing time => 50 min
 Total unit testing time => 10 min
All times in minutes:
Release | Manual Test | Auto Test | Manual Test Cumulative
   1    |     10      |    10     |           10
   2    |     10      |     0     |           20
   3    |     10      |     0     |           30
   4    |     10      |     0     |           40
   5    |     10      |     0     |           50
5. Automated Tests vs Manual Tests
o Effort and Cost
 Adding incremental Unit test cases is cheaper than adding incremental
Manual Test Cases
 E.g. registerDomain (sketched after this list):
 Case 1: Register a .com domain with all correct fields
 Case 2: Register a .com domain with an invalid
nameserver
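
A hedged sketch of those two cases: only the registerDomain name and the two cases come from the slide; the DomainService stub and its validation rule are invented so the example runs. Each incremental case is one short test method rather than another manual form-filling session.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Invented stub: validates the nameserver before "registering".
// The real registerDomain is far richer; this exists only so the
// two test cases below are runnable.
class DomainService {
    boolean registerDomain(String domain, String nameserver) {
        return domain.endsWith(".com")
                && nameserver.matches("[a-z0-9.-]+\\.[a-z]{2,}");
    }
}

public class RegisterDomainTest {
    // Case 1: register a .com domain with all correct fields.
    @Test
    public void registersComDomainWithValidNameserver() {
        assertTrue(new DomainService().registerDomain("example.com", "ns1.host.com"));
    }

    // Case 2: register a .com domain with an invalid nameserver.
    @Test
    public void rejectsComDomainWithInvalidNameserver() {
        assertFalse(new DomainService().registerDomain("example.com", "not a hostname"));
    }
}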
6. Automated Tests vs Manual Tests
o Manual Testing is boring
 No one wants to keep filling in the same forms
 There is nothing new to learn when one tests manually
 People tend to neglect running manual tests
 No one maintains a list of the tests required to be run if they are manual
tests
o Automated Tests on the other hand are code
 They are fun and challenging to write
 One has to carefully think of design for reusability and coverage
 They require analytical and reasoning skills
 They represent contribution that is usable in the future
7. Automated Tests vs Manual Tests
o Manual Testing is not reusable
 The effort required is the same each time
 One cannot reuse a Manual Test
o Automated Tests are completely reusable
 IMPORTANT: One needs to set up a Continuous Integration Server, a
common Code Repository, and an organization structure
 Once written, the Automated Tests form part of the codebase
 They can be reused without any additional effort for the lifetime of the
project
8. Automated Tests vs Manual Tests
o Manual Tests provide limited Visibility and have to be Repeated by all
Stakeholders
 Only the developer testing the code can see the results
 Tests have to be repeated by each stakeholder
 E.g. Developer, Tech Lead, GM, Management
o Automated Tests provide global visibility
 Developers, Tech Leads and Management can log in and see Test Results
 No additional effort is required by any of them to see that the software works
All times in minutes:
Release | Manual Testing by Dev | by Team Leads | by Mgmt | Total Manual Testing | Auto Test | Dev Manual Test Cumulative | Total Manual Test Cumulative
   1    |          10           |       5       |    3    |          18          |    10     |            10              |             18
   2    |          10           |       5       |    3    |          18          |     0     |            20              |             36
   3    |          10           |       5       |    3    |          18          |     0     |            30              |             54
   4    |          10           |       5       |    3    |          18          |     0     |            40              |             72
   5    |          10           |       5       |    3    |          18          |     0     |            50              |             90
9. Automated Tests vs Manual Tests
o Manual Testing ends up being an Integration Test
 In a typical manual test it is very difficult to test a single unit
 In most circumstances you end up checking the unit along with backend
services
 Introduces fragility: if something else breaks, the manual test breaks
o Automated Tests can have varying scopes
 One can test a unit (class / method), a module, a system etc
10. Automated Tests vs Manual Tests
o Manual Testing requires complex Manual Setup and Tear Down
 Can involve frequently running db queries
 Can involve making changes to backend servers
 Steps become more complex with multiple dependent test cases
o Automated Tests can have varying scopes and require less complex setup and
teardown
 Unit Tests have external dependencies mocked, so no setup / teardown is
required (see the sketch after this list)
 Setup and Tear down are automated in Functional Tests using
framework support
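
A minimal sketch of the mocked-dependencies point, assuming JUnit 4 plus Mockito (the deck names no specific mocking library; DnsClient and Registrar are hypothetical): the external service is replaced by a mock, so the test needs no DB queries or backend-server changes at all.

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical external dependency: in production this would call a
// real backend service; in a unit test it is mocked instead.
interface DnsClient {
    boolean nameserverExists(String host);
}

class Registrar {
    private final DnsClient dns;

    Registrar(DnsClient dns) {
        this.dns = dns;
    }

    boolean canRegister(String nameserver) {
        return dns.nameserverExists(nameserver);
    }
}

public class RegistrarTest {
    @Test
    public void acceptsNameserverTheDnsClientKnows() {
        DnsClient dns = mock(DnsClient.class);  // stand-in for the external service
        when(dns.nameserverExists("ns1.host.com")).thenReturn(true);

        assertTrue(new Registrar(dns).canRegister("ns1.host.com"));
    }
}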
11. Automated Tests vs Manual Tests
o Manual Testing has a high risk of missing out on something
 Each time a developer runs manual tests, it is likely they will miss an
important test case
 New developers may have no clue about the battery of tests to be run
o Automated Tests have zero risk of missing a pre-decided test
 Once a test becomes part of Continuous Integration, it will run
without someone having to remember to run it
12. Automated Tests vs Manual Tests
o Manual Tests do not drive design
 Manual tests are run post-facto and hence only drive bug-patching
o Automated Tests and TDD / Test-First development drive design
 Writing a Unit test first clarifies the requirement and influences design (see the sketch after this list)
 Writing Unit Tests with Mock Objects etc forces clean design and
segregation through abstraction / interfaces / polymorphism etc
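
A hedged illustration of test-first design (PriceParser is hypothetical): because the test is written before the class exists, questions such as what malformed input should do are answered in the test first, and the code is then shaped to satisfy that contract.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

// Written FIRST: PriceParser does not exist yet at this point, so
// the test forces API decisions (static method? return type? error
// behavior?) before any implementation code is written.
public class PriceParserTest {
    @Test
    public void parsesWholeDollarsToCents() {
        assertEquals(1000, PriceParser.toCents("$10"));
    }

    @Test
    public void malformedInputIsRejectedUpFront() {
        try {
            PriceParser.toCents("ten dollars");
            fail("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // the behavior for bad input was decided here, in the
            // test, rather than patched in after a bug report
        }
    }
}

// Written second, only to satisfy the contract pinned down above.
class PriceParser {
    static int toCents(String s) {
        if (!s.matches("\\$\\d+")) {
            throw new IllegalArgumentException("not a price: " + s);
        }
        return Integer.parseInt(s.substring(1)) * 100;
    }
}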
13. Automated Tests vs Manual Tests
o Manual Tests do not provide a safety-net
 Manual tests are run post-facto and hence only drive bug-patching
o Automated Tests provide a safety-net for refactoring / additions
 Even New developers who have never touched the code can be confident
about making changes
14. Automated Tests vs Manual Tests
o Manual Tests have no training value
o Automated Tests act as documentation (see the sketch after this list)
 Reading a set of Unit Tests clarifies the purpose of a codebase
 They provide a clear contract and define the requirement
 They provide visibility into different use cases and expected results
 A new developer can understand a piece of code much more by looking
at Unit Tests than by looking at the code
 Unit Tests define the expected behavior of the code
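
A small sketch of tests-as-documentation (the Cart class is an invented stub): the test names alone read as a specification, so a new developer can learn the contract without opening the implementation.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

// Skimming the test names documents the expected behavior of Cart
// the way a requirements list would.
public class CartTest {
    @Test
    public void newCartIsEmpty() {
        assertEquals(0, new Cart().itemCount());
    }

    @Test
    public void addingAnItemIncreasesTheCount() {
        Cart cart = new Cart();
        cart.add("book");
        assertEquals(1, cart.itemCount());
    }
}

// Invented stub so the example compiles; illustrative only.
class Cart {
    private final List<String> items = new ArrayList<>();

    void add(String item) {
        items.add(item);
    }

    int itemCount() {
        return items.size();
    }
}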
15. Automated Tests vs Manual Tests
o Manual Tests create crazy code clutter
 Most manual testing involves:
 System.outs to check the values of variables
 Useless log file entries in the app server, DB server, etc.
 These cause code / log / console clutter
 if-then(s), flag-based logging, event-based log entries, etc.
 All of this slows down the application
o Automated Tests reduce code clutter to zero
 Log file entries / System.outs are replaced by assertions in test code
(see the sketch after this list)
 Even if specific console / log entries are needed, they can reside in the
test and not in the code
 Keeps the live application / logs / console clutter-free and fast
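
A before/after sketch of this point (TaxCalculator and the 18% rate are hypothetical): the debugging System.out that manual testing would leave behind in production code becomes an assertion that lives only in the test.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical production class. The commented-out System.out is the
// kind of clutter the slide warns about; the value check moves into
// the test below instead.
class TaxCalculator {
    double tax(double amount) {
        double result = amount * 0.18;
        // System.out.println("DEBUG tax = " + result);  // manual-test clutter, no longer needed
        return result;
    }
}

public class TaxCalculatorTest {
    // The check is an assertion in test code, keeping the live
    // application, logs, and console clutter-free and fast.
    @Test
    public void appliesEighteenPercentTax() {
        assertEquals(18.0, new TaxCalculator().tax(100.0), 0.001);
    }
}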
16. Summary
o Manual Tests take more effort and cost more than Automated Tests to write and
run
o Manual Testing is boring
o Automated Tests are reusable
o Manual Tests provide limited Visibility and have to be Repeated by all
Stakeholders
o Automated Tests can have varying scopes and can test single units of code by
Mocking the dependencies
o Automated tests may require less complex setup and teardown
17. Summary
o Automated Testing ensures you don't miss out on running a test
o Automated Testing can actually enforce and drive clean design decisions
o Automated Tests provide a Safety Net for refactoring
o Automated Tests have Training value
o Automated Tests do not create clutter in code/console/logs
18. Why do people not write Automated Tests?
o Initial learning curve
 Understanding Unit Testing Frameworks and Functional Testing
Frameworks
 Understanding Continuous Integration and effective usage of it
 Understanding and learning Code Coverage Tools
 Figuring out how to organize the tests
 How to create Mock Objects?
 How to automate the running of the tests each time?
 Where to commit the tests?
o Am I really going to be working on this same module again?
o Will my tests be re-used? If not what is the point?
19. Why do people not write Automated Tests?
o Solution
Spend time during the first release to freeze / design / implement:
 A Code Repository structure that incorporates Unit Tests and
Functional Tests
 A CI Server integrated with the release
 Unit Testing Framework (any xUnit framework)
 Functional Testing Tools (Sahi / Watir / Selenium / QTP etc)
 Code Coverage Tools (Clover)
 Testing guidelines and principles
 Designate Responsibility
 Each developer MUST write Unit tests for multiple use cases per
unit
 Designate a specific Developer to write Functional Tests
 The developer who writes the tests is also responsible for
organizing them, committing them and linking them in CI
20. Why do people not write Automated Tests?
o Don’t give up
 If you come across a hurdle, pair
 Make sure you complete your testing responsibility
o Check Code Coverage
 Use code coverage tools while coding and post-coding to check which parts
of your code are covered by tests
21. What to Test
o Unit Tests
 Ideally do not cross class boundaries
 Definitely do not cross process boundaries
 Write a unit test with multiple cases
o Functional Tests
 UI Tests using specific tools (Watir / Selenium / QTP / White etc)
 Tests one layer below the UI (Using APIs)
22. Best Practices
o You must use a unit testing framework (there's one for every platform)
o You must have an auto-build process, a CI server, auto-testing upon commits etc
o Unit Tests are run locally during the day, and upon commit by the CI Server
o Over a period of time you may want to have your CI Server run tests selectively
o Tests must be committed along with code
23. Best Practices
o Organize the tests properly
o If you do not commit tests, they are not reusable and the reduced-effort
advantage is lost

  Re: What is the definition of regression testing? Answer #1

Execution of selected test cases on a modified build is known as regression
testing. The selected test cases are ones that have already been executed.

First we execute the test cases to test an application. If we find a bug, we
report it to the test lead; if it is a genuine bug, the test lead posts it to
the developer; the developer fixes the bug and sends the build back to the
tester. The tester then re-executes the test cases where the defect was found.

    Re: What is the definition of regression testing? Answer #2

It is a process in which one performs testing on an application again and
again. It is usually done in two ways:

1. Whenever a bug is found and sent back to the developer, and the next build
is released, testing is done on the released build of the application with the
related functionalities.
2. Whenever new features are incorporated, testing is done on the released
build with its related functionalities as well as the new features.
