# SDLC (Software Development Life Cycle)

1. Feasibility Analysis
   a. Technical feasibility
   b. Schedule feasibility
   c. Financial feasibility

2. Requirement Collection
   a. Functional: requirements related to the different functions of the system.
   b. Environmental: which environment will be used to develop the whole system, e.g. which database, operating system, technologies, and application server.
   c. UI (User Interface) and usability: requirements related to the UI are documented.

3. Designing
   The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is where their focus lies. This is where the details of how the system will work are produced. Architecture, including hardware and software, communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
   a. UI design: the user interface is designed.
   b. DB design: the database is designed.
   c. Application design:
      - HLD (High-Level Design): architectural design, such as use-case diagrams and whether the application will be two-tier or multi-tier.
      - LLD (Low-Level Design): classes, interfaces, and methods are described here.

4. Coding
   Coding is done on the basis of the design documents.

5. Testing
   Testing is the process of executing a program with the intent of finding errors. After coding, the developer tests his own code; that testing is called unit testing (unit testing is always performed by developers). After unit testing, the rest of the testing is performed by testers, step by step.

   a. Sanity Testing

      In sanity testing we check whether the application is ready for complete testing. If the application is not ready, we send it back to the development team; if it is ready, we go to the next testing phase. We perform the following tasks to check whether the application is ready for testing:
      - Do the installation and see that it happens properly.
      - Navigate through the application and see that it works properly.
      - Make sure the application has no issues such as hangs, crashes, broken areas (like links), runtime errors, etc.

      **Hot fix**: a quick fix of any bug by a developer at any stage of testing is called a hot fix.

   b. BVT (Build Verification Testing)
      We perform all possible testing on the application, i.e. thorough testing. In BVT the **BVT checklist** is also taken into consideration and all major functionalities are checked.
      **BVT checklist**: a list of all the major functionalities of the application, such as interest calculation, account updating, important validations, date updating, etc.

   c. Integration Testing
      - Content integration: links between different modules; the different modules are integrated with each other as required.
      - Database integration: we check whether the application is integrated with the database properly. For example, when we enter someone's salary into accounts, is it updated?
      - Application integration: all the application's modules are integrated with each other as a whole application, and with other resources or applications outside the application.

   d. System Testing
      In this testing phase we check the whole application completely in a real environment. We create the real environment in which the end user will work: for example a dial-up connection for the internet, perhaps a P-3 or P-2 machine or a Macintosh machine, and different browsers such as Netscape Navigator and Internet Explorer.

   e. Performance Testing
      In this phase we check the performance of the application: how fast it works. We usually check the following:
      - Response time: the time between submitting a request and getting the start of the response (the first response, not the complete page or request).
      - Execution time: the time between submitting a request and getting the full response.
      - Server resource utilization: we check whether server resources are utilized efficiently.
      - Network performance: we check how much time the network takes when we have 100 users, 1,000 users, or 10,000 users.

   f. Acceptance Testing
      In the end, after all the testing phases, the client accepts the application, and this testing is done at that time. It is not thorough testing: only the main functionalities are checked.
      - Alpha testing: in-house testing at the end is called alpha testing.
      - Beta testing: client-side testing at the end is called beta testing. The only difference between alpha and beta is that alpha is done in-house and beta is done at the client's premises.

6. Release and Maintenance
   At this stage the application is released and the maintenance period of the application starts.
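The distinction above between response time (first response) and execution time (full response) can be sketched with a small timing harness. Everything here is illustrative: `fake_request`, its chunked-reply model, and the delays are assumptions, not part of any real tool.

```python
import time

def measure(request_fn):
    """Time a request, returning (response_time, execution_time).

    response_time: elapsed time until the first chunk of the reply arrives.
    execution_time: elapsed time until the full reply has been received.
    request_fn is assumed to return an iterable of response chunks.
    """
    start = time.perf_counter()
    it = iter(request_fn())
    first = next(it)                       # first chunk, not the whole page
    response_time = time.perf_counter() - start
    body = [first] + list(it)              # drain the rest of the reply
    execution_time = time.perf_counter() - start
    return response_time, execution_time

def fake_request():
    """Hypothetical server reply: a fast header, then a slow body."""
    yield "header"
    time.sleep(0.05)                       # simulated server/network delay
    yield "body"

rt, et = measure(fake_request)
print(rt < et)   # prints True: execution time includes the full response
```

Real performance tools report these same two numbers per request; the point of the sketch is only that they are measured from the same start instant but at different finish instants.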

Different types of testing techniques and keywords in testing:

Test Case: this is usually the smallest unit of testing. A test case consists of information such as the requirements being tested, test steps, verification steps, prerequisites, outputs, the test environment, etc.

Example test case (age must be between 1 and 150 to log in):

| TC* ID | TC* Description    | Module Name/Env | Input | Expected         | Actual           | Status (Pass/Fail) | Comment (if any)                          |
|--------|--------------------|-----------------|-------|------------------|------------------|--------------------|-------------------------------------------|
| 1      | Min. age of client | Batch/win       | 1     | Login successful | Login successful | Pass               | Boundary value analysis and +ve approach  |
| 2      | Max. age of client | Batch/win       | 150   | Login successful | Login fail       | Fail               | Boundary value analysis and +ve approach  |
| 3      | More than max. age | Batch/win       | 151   | Login fail       | Login successful | Fail               | -ve approach                              |
| 4      | Less than min. age | Batch/win       | 0     | Login fail       | Login fail       | Pass               | -ve approach                              |
| 5      | When age is 75     | Batch/win       | 75    | Login successful | Login successful | Pass               | Equivalence partition technique           |

*TC = Test Case

Techniques for test case writing:
1. Boundary value analysis: for example, if age should be between 1 and 150, check the values 1 and 150 only; if these two are fine, the rest should be fine.
2. Equivalence partition technique: if we have to check age between 1 and 150, we check one representative value, such as 75.
3. -ve approach: if we have to check age between 1 and 150, we check values outside the range, such as 0, -1, and 151.
4. +ve approach: if we have to check age between 1 and 150, we check values inside the range, such as 1, 2, 149, and 150; any value between 1 and 150 is the +ve approach.

Test case classification:
i. User interface: to check the look and feel of the application.
ii. Usability: for example, tab order working, default cursor focus.
iii. Validations: for example, age should be 1-150 (-ve, +ve, boundary value, etc.).
iv. Functionality: for example, whether the save button and clear button work.

White box testing: preparing test cases by taking the code into consideration, for example if statements, for loops, etc.
Black box testing: preparing test cases without taking the code into consideration.
Test bed: the machine that has the test environment along with the required test data.
Localization: using different languages for testing, such as French, is called localization. We first perform testing in English, then in the other language, and compare the results.
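The four techniques above can be written as ordinary unit tests. As a sketch: `validate_age` is a hypothetical stand-in for the login age check in the table, and the 1-150 rule is taken from the examples above.

```python
import unittest

def validate_age(age):
    """Hypothetical function under test: login is allowed for ages 1-150."""
    return 1 <= age <= 150

class AgeValidationTest(unittest.TestCase):
    def test_boundary_value_analysis(self):
        # Boundary value analysis: check only the edges of the valid range.
        self.assertTrue(validate_age(1))
        self.assertTrue(validate_age(150))

    def test_negative_approach(self):
        # -ve approach: values just outside the range must be rejected.
        self.assertFalse(validate_age(0))
        self.assertFalse(validate_age(-1))
        self.assertFalse(validate_age(151))

    def test_equivalence_partition(self):
        # Equivalence partitioning: one value from the valid partition
        # stands in for the whole partition.
        self.assertTrue(validate_age(75))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AgeValidationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note how the test method names record which technique each case applies, much like the Comment column in the table.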

Test stub: a dummy called program. It should be able to accept the parameters passed from the calling program and should return a value. It is used with test drivers, and for API testing, component testing, and testing when the UI is not ready.
Test driver: a calling program; it accepts the return value.
Bottleneck point: a situation in performance testing. If we put more load on the application (i.e. more users) than the bottleneck point, the application will hang. For example, the bottleneck point for an application could be 10,000 users or 90,000 users. [Chart omitted: response plotted against number of users, with the bottleneck point at 41,000 users.]

Other testing types:
1. Ad-hoc testing: a tester who has little idea about the application tries to use it.
2. Exploratory testing: a tester who has no knowledge of the application tries to use it.
3. Compatibility testing: the application's compatibility is checked on different operating systems, browsers, etc.
4. Comparative testing: the application is compared with other applications of the same type.
5. Scalability testing: the extensibility and enhanceability of the application is checked.
6. Installation testing: we check that installation and uninstallation of the application happen perfectly.
7. Security testing: we check authentication and authorization of the application.
8. Recovery testing: it is of two types:
   a. Database recovery: we check that database recovery is 100%.
   b. Application recovery: we check that after a crash the application opens perfectly.
9. Happy-path testing: testing meant only to show that the system meets its functional requirements.
10. Endurance testing: checks for memory leaks or other problems that may occur with prolonged execution.
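The memory-leak check behind endurance testing can be sketched with Python's standard `tracemalloc` module. Everything else here is an assumption: `process_request` and its deliberately leaky cache are invented, and the iteration count and growth threshold are arbitrary.

```python
import tracemalloc

def process_request(cache, payload):
    """Hypothetical operation under test. The deliberate bug: it caches
    every payload forever, leaking memory over prolonged execution."""
    cache.append(payload * 1000)   # each call allocates ~1 KB that is never freed

def endurance_check(iterations=2000, allowed_growth=500_000):
    """Run the operation many times and compare traced memory between an
    early snapshot and the end; steady growth suggests a leak."""
    cache = []
    tracemalloc.start()
    early = 0
    for i in range(iterations):
        process_request(cache, "x")
        if i == 100:
            # Snapshot after warm-up, so one-time setup cost is excluded.
            early, _ = tracemalloc.get_traced_memory()
    late, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (late - early) > allowed_growth   # True means a suspected leak

print(endurance_check())   # the unbounded cache is flagged: prints True
```

A real endurance run would last hours or days rather than 2,000 iterations, but the principle is the same: healthy code plateaus after warm-up, leaky code keeps growing.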

Different documents used in development and testing:

SRS (System Requirements Study) document, or SRA (System Requirement Analysis) document: the objective of this phase is to identify and document the user requirements for the proposed system. The SRS document is prepared after the feasibility study. Its deliverable is a full requirement analysis containing functional specifications, architecture specifications, estimated effort, price, and a detailed schedule for the remaining project phases.

MPP (Master Project Plan) document: the MPP is made on the basis of the SRS document. The Master Project Plan, basically a blueprint to follow throughout the project, includes all of the detailed plans contributed by each functional area of the project: product management, program management, development, test, user education, and logistics. Developing this plan prompts the team to consider all of the resources that must be assembled in advance, and alerts the team to potential pitfalls to avoid. A good plan steers the migration process, helping keep staff on schedule and on task, and the project within budget. It includes:
- Functional specification: the preferred client configuration and the deployment process for this configuration.
- Project plan: the activities and deliverables necessary to deliver the design described in the functional specification.
- Master project schedule: dates for when the preferred client solution will be developed, tested, and deployed.
Go to the following URL to see a sample MPP.

DD (Design Document): the design document describes all data, architectural, interface, and component-level design for the software. Go to this URL for further details about the DD: http://www.rspa.com/docs/Designspec.html

TP (Test Plan) document: it describes the whole testing phase of our application and answers questions such as what, when, how, and why to test. The following are the contents of a test plan:
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary

Test Report: It contains results and status of all the test cases.

Bug Report: a report containing the details of a bug. Bug template:

| Bug ID | Severity | Test case ID | Bug Description | Detected by | Assigned to | Input | Expected | Actual | Status (Pass/Fail) |
|--------|----------|--------------|-----------------|-------------|-------------|-------|----------|--------|--------------------|
Bug Life Cycle:

A new/open bug is examined and, with a resolution comment, moves to one of the following states:
- Fixed/Resolved
- Won't Fix/Can't Fix
- Deferred (with the reason for the delay, e.g. low priority)
A bug can return to new/open from these states, for example when a deferred bug is picked up again.

Software life Cycle Models:
Waterfall Model
This is the most common and classic of the life cycle models, also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine whether the project is on the right path and whether to continue or discard it. Phases do not overlap in a waterfall model.

Waterfall Life Cycle Model

Advantages:
• Simple and easy to use.
• Easy to manage due to the rigidity of the model: each phase has specific deliverables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.

Disadvantages:
• Adjusting scope during the life cycle can kill a project.
• No working software is produced until late in the life cycle.
• High amounts of risk and uncertainty.
• Poor model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Poor model where requirements are at a moderate to high risk of changing.

V-Shaped Model
Just like the waterfall model, the V-Shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more so than the waterfall model though. The testing procedures are developed early in the life cycle before any coding is done, during each of the phases preceding implementation.

Requirements begin the life cycle model just like the waterfall model. Before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified during requirements gathering. The high-level design phase focuses on system architecture and design; an integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together. The low-level design phase is where the actual software components are designed, and unit tests are created in this phase as well. The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V, where the test plans developed earlier are now put to use. V-Shaped Life Cycle Model

Advantages:
• Simple and easy to use.
• Each phase has specific deliverables.
• Higher chance of success than the waterfall model due to the development of test plans early in the life cycle.
• Works well for small projects where requirements are easily understood.

Disadvantages:
• Very rigid, like the waterfall model.
• Little flexibility; adjusting scope is difficult and expensive.
• Software is developed during the implementation phase, so no early prototypes of the software are produced.
• The model doesn't provide a clear path for problems found during testing phases.

Incremental Model
The incremental model is an intuitive approach to the waterfall model. Multiple development cycles take place here, making the life cycle a “multi-waterfall” cycle. Cycles are divided up into smaller, more easily managed iterations. Each iteration passes through the requirements, design, implementation and testing phases. A working version of software is produced during the first iteration, so you have working software early on during the software life cycle. Subsequent iterations build on the initial software produced during the first iteration. Incremental Life Cycle Model

Advantages:
• Generates working software quickly and early in the software life cycle.
• More flexible: less costly to change scope and requirements.
• Easier to test and debug during a smaller iteration.
• Easier to manage risk because risky pieces are identified and handled during their iteration.
• Each iteration is an easily managed milestone.

Disadvantages:
• Each phase of an iteration is rigid, and the phases do not overlap each other.
• Problems may arise pertaining to the system architecture because not all requirements are gathered up front for the entire software life cycle.

Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: planning, risk analysis, engineering, and evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternate solutions. A prototype is produced at the end of the risk analysis phase. Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral. In the spiral model, the angular component represents progress, and the radius of the spiral represents cost. Spiral Life Cycle Model


Advantages:
• High amount of risk analysis.
• Good for large and mission-critical projects.
• Software is produced early in the software life cycle.

Disadvantages:
• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• The project's success is highly dependent on the risk analysis phase.
• Doesn't work well for smaller projects.

Prototyping Model
This is a cyclic version of the linear model. In this model, once the requirement analysis is done and the design for a prototype is made, the development process starts. Once the prototype is created, it is given to the customer for evaluation. The customer tests the package and gives feedback to the developer, who refines the product according to the customer's exact expectations. After a finite number of iterations, the final software package is given to the customer. In this methodology, the software evolves as a result of periodic shuttling of information between the customer and the developer. This is the most popular development model in the contemporary IT industry. Most successful software products have been developed using this model, as it is very difficult (even for a whiz kid!) to comprehend all the requirements of a customer in one shot. There are many variations of this model, skewed with respect to the project management styles of the companies. New versions of a software product evolve as a result of prototyping.

Rapid Application Development (RAD) Model
RAD is a linear sequential software development process that emphasizes an extremely short development cycle. The RAD model is a "high-speed" adaptation of the linear sequential model in which rapid development is achieved by using a component-based construction approach. Used primarily for information systems applications, the RAD approach encompasses the following phases:

Component Object Model
A software architecture developed by Microsoft to build component-based applications. COM objects are discrete components, each with a unique identity, which expose interfaces that allow applications and other components to access their features. COM objects are more versatile than Win32 DLLs because they are completely language-independent, have built-in interprocess communications capability, and easily fit into an object-oriented program design.
Component Assembly Model

Object technologies provide the technical framework for a component-based process model for software engineering. The object-oriented paradigm emphasizes the creation of classes that encapsulate both data and the algorithms used to manipulate the data. If properly designed and implemented, object-oriented classes are reusable across different applications and computer-based system architectures. The Component Assembly Model leads to software reusability. The integration/assembly of already existing software components accelerates the development process. Nowadays many component libraries are available on the Internet. If the right components are chosen, the integration aspect is made much simpler.

SEI - ‘Software Engineering Institute’ at Carnegie Mellon University; initiated by the US Defense Department to help improve software development processes.

CMM – ‘Capability Maturity Model’, developed by the SEI. It is a model of five levels of organizational maturity that determine effectiveness in delivering quality software. Organizations can receive a CMM rating by undergoing assessments by qualified auditors.
Level 1 – Characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes are in place; successes may not be repeatable.
Level 2 – Software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.
Level 3 – Standard software development and maintenance processes are integrated throughout the organization; a software engineering process group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.
Level 4 – Metrics are used to track productivity, processes, and products. Project performance is predictable and quality is consistently high.
Level 5 – The focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

ISO - ‘International Organization for Standardization’. The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software.

IEEE – ‘Institute of Electrical and Electronics Engineers’. Standards for testing and
quality assurance.

ANSI – ‘American National Standards Institute’, the primary industrial standards body in the U.S.; publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

Others: software development process assessment methods besides CMM and ISO 9000 include SPICE, Trillium, TickIT, and Bootstrap.
Scrum Meeting: a short daily meeting where the team shares status.
