Submitted To: Ma'am Qaisra
Submitted By: Emaan Khalid (01), Noor Fishan (02), Summaya Ijaz (04), Salma Tabasum (017), Iqra Khaliq (031), Ayesha Khaliq (041), Faiza Sadiq (042)
Class: BSCS (Reg) 4th semester
DEPARTMENT OF CS & IT
Q NO 1: What is meant by software requirement definition? Elaborate on its importance. Ans:
A software requirements specification (SRS) is a comprehensive description of the intended purpose and environment for software under development. The SRS fully describes what the software will do and how it will be expected to perform.

Importance: An SRS minimizes the time and effort required by developers to achieve desired goals and also minimizes the development cost. A good SRS defines how an application will interact with system hardware, other programs and human users in a wide variety of real-world situations. Parameters such as operating speed, response time, availability, portability, maintainability, footprint, security and speed of recovery from adverse events are evaluated. Methods of defining an SRS are described by the IEEE (Institute of Electrical and Electronics Engineers) specification 830-1998.
Q NO 2: Explain the various steps involved in Requirement Engineering? Ans:
Software requirements engineering for a software development project has a few typical phases:
1. Requirements elicitation and gathering is always a necessary step, as primary internal and external project stakeholders frequently do not know what they want, requirements can be deeply "hidden" within a client organization, and prior requirements may not be validated or verifiable, or may even be completely incorrect. This is the phase which will largely determine the success or failure of the project.
2. Requirements modeling is a way in which the written, prose requirements are presented in another format. Although doing this effectively can prove difficult for novices, techniques such as use case modeling, UML diagrams, user stories and user goals can help system designers, requirements engineers and business analysts represent the requirements in a more easily comprehensible and shareable form.
3. Requirements analysis is the process whereby the requirements are checked for consistency, correctness, completeness, sufficient detail, and writing style and format.
4. Requirements change management is a requisite activity for business analysts and software requirements engineers: requirements change all the time, and this process is to be expected and prepared for.
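As a concrete illustration of the analysis phase above, requirements can be checked mechanically for consistency, correctness and completeness. The sketch below is hypothetical: the `Requirement` record, its field names and the priority scheme are invented for illustration, not taken from any particular methodology.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    statement: str
    priority: str  # assumed scheme: "must", "should", or "could"

def analyze(requirements):
    """Return a list of problems found during requirements analysis."""
    problems = []
    seen_ids = set()
    for r in requirements:
        if not r.statement.strip():
            problems.append(f"{r.req_id}: empty statement (incomplete)")
        if r.req_id in seen_ids:
            problems.append(f"{r.req_id}: duplicate ID (inconsistent)")
        seen_ids.add(r.req_id)
        if r.priority not in {"must", "should", "could"}:
            problems.append(f"{r.req_id}: unknown priority (incorrect)")
    return problems

reqs = [
    Requirement("R1", "The system shall respond within 2 seconds.", "must"),
    Requirement("R2", "", "should"),                      # incomplete
    Requirement("R2", "Log all user actions.", "later"),  # duplicate + bad priority
]
print(analyze(reqs))
```

Real requirements tools perform far richer checks, but even this small pass catches the three defect classes the analysis step names.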
Q NO 3: Explain data modeling? Ans: Data Modeling
The figure illustrates the way data models are developed and used today. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the starting point for interface or database design. Data modeling in software engineering is the process of creating a data model for an information system by applying formal data modeling techniques. Data modeling is a process used to define and analyze the data requirements needed to support the business processes within the scope of the corresponding information systems in organizations. Therefore, the process of data modeling involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system.
The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details.
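The three levels above can be sketched as data structures. Everything in this example (the entity names, attributes and DDL) is invented for illustration; it is not a prescribed notation.

```python
# 1. Conceptual model: technology-independent entities and relationships,
#    suitable for discussion with business stakeholders.
conceptual = {
    "entities": ["Customer", "Order"],
    "relationships": [("Customer", "places", "Order")],
}

# 2. Logical model: attributes, keys and foreign keys, documenting structures
#    that can be implemented in databases, but still DBMS-neutral.
logical = {
    "Customer": {"attributes": ["customer_id", "name"], "key": "customer_id"},
    "Order": {
        "attributes": ["order_id", "customer_id", "total"],
        "key": "order_id",
        "foreign_keys": {"customer_id": "Customer"},
    },
}

# 3. Physical model: concrete tables with typing and storage details for one DBMS.
physical_ddl = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);
CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customer(customer_id),
    total       DECIMAL(10, 2)
);
"""
print(sorted(conceptual["entities"]))
```

Note how each step adds detail: the conceptual model names things, the logical model fixes keys and relationships, and only the physical model commits to column types and table layout.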
Q NO 4: Compare testing & debugging? Ans:
TESTING:
1. Testing is meant to find defects in the code, or, from a different angle, to prove to a suitable level (it can never be 100%) that the program does what it is supposed to do. It can be manual or automated, and it has many different kinds: unit, integration, system/acceptance, stress, load, soak, etc.
2. Testing is an attempt to create an issue, through various ways of using the code, that can then be debugged. It is almost always done in userspace, where you run the code as an end user would run it and try to make it break.
3. Testing is where you make sure the program or code works correctly and robustly under different conditions: you "test" your code by providing inputs (standard correct inputs, intentionally wrong inputs, boundary values, a changing environment such as OS or config file). Essentially, you try to discover bugs that are eventually "debugged".
4. Testing is easier than debugging; in testing we just find errors.
5. Testing is a privilege you enjoy before releasing to the client.
6. Testing is done by testers; it is a process of finding errors.
7. Testing is about verifying whether the application is working as per the client's requirements; if not, defects are logged.

DEBUGGING:
1. Debugging is the process of finding and removing a specific bug from the program. It is always a manual, one-off process, as all bugs are different.
2. Debugging is an attempt to fix known and unknown issues by methodically going over the code. When you are debugging you are usually not focused on the code as a whole, and you are almost always working in the backend, in the actual code.
3. In simple terms, a "bug" is said to have occurred when your program, on execution, does not behave the way it should, i.e. it does not produce the expected output or results. Any attempt to find the source of this bug, find ways to correct the behaviour, and make changes to the code or configuration to correct the problem can be termed debugging.
4. Bugs are visible errors. Debugging is the process started after test case design. It is a more difficult task than testing, because in debugging we need to find the source of the error and remove it, so debugging can sometimes be frustrating.
5. Bugs are a nightmare you endure after releasing to the client.
6. Debugging is done by developers; it is a process of finding and fixing errors.
7. Debugging is about finding out where exactly the code is failing and fixing it.

Q NO 5: Explain the various types of system testing? Ans:
There are three ways to classify testing: by SDLC stage, by technique, and by special tests.

1. By SDLC stage: When the designer tests a piece of code, i.e. a unit, to find errors, such testing is unit testing. When the designer divides the whole project into modules and finds the errors of each module separately, it is called module testing. When the designer builds components of the software, tests each component and integrates them into a whole, it is called integration testing. Acceptance testing is done when the user gives feedback on the delivered system.

2. By technique: When the tester checks only inputs and outputs and does not look at the internal coding, it is called black box testing; when the internal program code is tested, it is called white box testing. Equivalence partitioning divides a program's input into classes of data from which test cases can be derived. More errors occur at the boundaries of the input domain, and testing of this kind is boundary value testing. Putting random values into the program and checking whether the code behaves correctly is called ad hoc testing.

3. Special tests: volume testing, performance testing, stress testing, regression testing.
Q NO 6: Explain software configuration management with baselines and SCIs?
Ans: Configuration management is the process of managing change in hardware, software, firmware, documentation, measurements, etc. As change requires an initial state and a next state, the marking of significant states within a series of several changes becomes important. The identification of significant states within the revision history of a configuration item is the central purpose of baseline identification.

Typically, significant states are those that receive a formal approval status, either explicitly or implicitly (approval statuses may be marked individually, when such a marking has been defined, or signified merely by association with a certain baseline). This approval status is usually recognized publicly. Thus, a baseline may also mark an approved configuration item, e.g. a project plan that has been signed off for execution. In a similar manner, associating multiple configuration items with such a baseline indicates that those items are approved.

Generally, a baseline may be a single work product, or a set of work products, that can be used as a logical basis for comparison. A baseline may also be established (whose work products meet certain criteria) as the basis for subsequent select activities. Such activities may be attributed with formal approval.

Conversely, the configuration of a project often includes one or more baselines, the status of the configuration, and any metrics collected. The current configuration refers to the current status, current audit, current metrics, and latest revision of all configuration items. Similarly, but less frequently, a baseline may refer to all items associated with a specific project. This may include all revisions of all items, or only the latest revision of all items in the project, depending upon context, e.g. "the baseline of the project is proceeding as planned." A baseline may be specialized as a specific type of baseline.
Some examples include:
- Functional Baseline: initial specifications established; contract, etc.
- Allocated Baseline: state of work products once requirements are approved
- Developmental Baseline: state of work products amid development
- Product Baseline: contains the releasable contents of the project
- Others, based upon proprietary business practices
Q NO 7: Explain identification of objects in the software configuration? Ans: To control and manage software configuration items, each must be separately named and then organized using an object-oriented approach. Two types of objects can be identified: basic objects and aggregate objects. A basic object is a unit of text that has been created by a software engineer during analysis, design, code, or test. For example, a basic object might be a section of a requirements specification, a source listing for a component, or a suite of test cases that are used to exercise the code. An aggregate object is a collection of basic objects and other aggregate objects. A design specification is an aggregate object.
Each object has a set of distinct features that identify it uniquely: a name, a description, a list of resources, and a realization. The interrelationships between configuration objects can be represented with a module interconnection language (MIL). A MIL describes the interdependencies among configuration objects and enables any version of a system to be constructed automatically.
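A minimal sketch of basic and aggregate objects as described above. The class names and file names are illustrative only; real SCM tools record far more (realization, version history, etc.).

```python
class BasicObject:
    """A named unit of text created during analysis, design, code, or test."""
    def __init__(self, name, description):
        self.name = name
        self.description = description

    def resources(self):
        return [self.name]

class AggregateObject:
    """A collection of basic objects and/or other aggregate objects."""
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts

    def resources(self):
        # Listing an aggregate's resources recurses into its parts.
        found = []
        for part in self.parts:
            found.extend(part.resources())
        return found

# "Design specification" as an aggregate of basic objects:
data_model = BasicObject("data_model.md", "ER diagram for the system")
ui_spec = BasicObject("ui_spec.md", "Screen layouts")
design_spec = AggregateObject("design_specification", [data_model, ui_spec])
print(design_spec.resources())
```

Because aggregates can contain other aggregates, the same `resources()` call works at any level of nesting, which mirrors how a whole configuration can be enumerated from its top-level objects.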
Q NO 1: What is the difference between an SRS document and a design document? What contents should the SRS document and the design document contain? Ans:
SRS: The SRS is the implemented form of the BRS. The SRS is often referred to as the parent document of project management documents such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans. The basic issues an SRS addresses are:
- Functionality: what is the software supposed to do?
- External interfaces: how does the software interact with the user, other hardware, and other system software?
- Performance: what is the speed of the application, its recovery time, response time, and the availability of various software functions?
- Attributes: what is the portability, security, correctness, etc.?
- Design constraints: OS environments, implementation languages, database integrity and resource limits.
A software design document (SDD) is a written description of a software product that a software designer writes in order to give a software development team overall guidance on the architecture of the software project. An SDD usually accompanies an architecture diagram with pointers to detailed feature specifications of smaller pieces of the design. Practically, a design document is required to coordinate a large team under a single vision. A design document needs to be a stable reference, outlining all parts of the software and how they will work. The document is intended to give a fairly complete description, while maintaining a high-level view of the software. There are two kinds of design documents: the HLDD (high-level design document) and the LLDD (low-level design document).

The SDD contains the following:
1. The data design describes structures that reside within the software. Attributes and relationships between data objects dictate the choice of data structures.
2. The architecture design uses information flow characteristics and maps them into the program structure. The transformation mapping method is applied to exhibit distinct boundaries between incoming and outgoing data. The data flow diagrams allocate control input, processing and output along three separate modules.
3. The interface design describes internal and external program interfaces, as well as the design of the human interface. Internal and external interface designs are based on the information obtained from the analysis model.
4. The procedural design describes structured programming concepts using graphical, tabular and textual notations. These design media enable the designer to represent procedural detail that facilitates translation to code. This blueprint for implementation forms the basis for all subsequent software engineering work.
SRS document contents:
1. Introduction
- Purpose
- Definitions
- System overview
- References
2. Overall description
- Product perspective
- Product functions
- User characteristics
- Constraints, assumptions and dependencies
3. Specific requirements
- External interface requirements
- Functional requirements
- Performance requirements
- Design constraints
- Logical database requirements
- Software system attributes
- Other requirements
DDS document contents:
A. INTRODUCTION
- description
- module decomposition chart
B. OVERVIEW
C. COMPONENT DESIGN
- module prologue
- physical data structures
D. TEST PLAN
- test plan and procedures
- test cases
E. REQUIREMENTS TRACEABILITY MATRIX
F. ACCEPTANCE PLAN
- packaging and installation
- acceptance testing
- acceptance criteria
G. APPENDICES
- formulas and algorithms
Q NO 2: What are the properties of good requirements? Ans:
1. From a tester's point of view, a good requirement is one which can be tested easily by creating test cases or by writing test scripts.
2. Good requirements are those which are clear not only to the programmer but also to the tester, whether in written or graphical form, because good requirements help make the program bug-free.
Q NO 3: What is purpose of the software testing?
Ans: The purpose of software testing is to assess or evaluate the capabilities or attributes of a software program's ability to adequately meet the applicable standards and customer needs. Testing with a purpose: software testing is performed to verify that the completed software package functions according to the expectations defined by the requirements/specifications. The overall objective is not to find every software bug that exists, but to uncover situations that could negatively impact the customer, usability and/or maintainability. Testing applies at every level, from the module level to the application level. Depending upon the purpose for testing and the software requirements/specs, a combination of testing methodologies is applied. Among the most overlooked areas of testing are regression testing and fault-tolerant testing. Q NO 4: List & explain the different types of testing done during the testing phase? Ans:
Introduction: The development process involves various types of testing, and each test type addresses a specific testing requirement. The most common classifications of testing involved in the development process are: by SDLC stage, by technique, and by special test.
By SDLC stage:

Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

System testing – The entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications and covers all combined parts of a system.

Acceptance testing – Normally this type of testing is done to verify whether the system meets the customer-specified requirements. The user or customer does this testing to determine whether to accept the application.

By technique:

Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

White box testing – This testing is based on knowledge of the internal logic of an application's code. Also known as glass box testing. Internal software and code workings should be known for this type of testing. Tests are based on coverage of code statements, branches, paths and conditions.

Equivalence partitioning – In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes; one test value is picked from each class while testing. Consider the example of a function which takes a parameter "month". The valid range for the month is 1 to 12, representing January to December. This valid range is called a partition.
In this example there are two further partitions of invalid ranges: the first invalid partition is <= 0 and the second invalid partition is >= 13.

  ... -2 -1 0  |  1 .............. 12  |  13 14 15 ...
  invalid partition 1 | valid partition | invalid partition 2
Equivalence partitioning uses the fewest test cases to cover maximum requirements.

Boundary value testing – A software testing technique in which tests are designed to include representatives of boundary values. It is performed by the QA testing teams.

By special test:

Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, e.g. putting in values beyond storage capacity, complex database queries, or continuous input to the system or database.

Performance testing – A term often used interchangeably with 'stress' and 'load' testing; checks whether the system meets performance requirements. Different performance and load tools are used for this.

Security testing – Can the system be penetrated by any hacking approach? Tests how well the system protects against unauthorized internal or external access, and checks whether the system and database are safe from external attacks.

Regression testing – Any type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them. The intent of regression testing is to ensure that a change, such as a bug fix, did not introduce new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts. Common methods of regression testing include rerunning previously run tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be used to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change.
It is the selective retesting of a modified software system to ensure that any bugs have been fixed, that no previously working functions have failed as a result of the repairs, and that newly added features have not created problems with previous versions of the software.
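The equivalence-partitioning and boundary-value ideas above, applied to the earlier "month" example, can be sketched as follows. `is_valid_month` is a hypothetical function under test.

```python
def is_valid_month(month: int) -> bool:
    """Hypothetical unit under test: valid months are 1..12."""
    return 1 <= month <= 12

# Equivalence partitioning: one representative value per partition
# (invalid partition 1, valid partition, invalid partition 2).
partition_cases = {0: False, 6: True, 13: False}

# Boundary value analysis: values at and just beyond each boundary.
boundary_cases = {0: False, 1: True, 12: True, 13: False}

for value, expected in {**partition_cases, **boundary_cases}.items():
    assert is_valid_month(value) == expected, f"failed for month={value}"
print("all partition and boundary cases passed")
```

Note how few cases are needed: three representatives cover the partitions, and four values cover both boundaries, rather than testing all thirteen-plus possible inputs.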
Q NO 5: What is User Acceptance Testing? Explain the different testing in user acceptance testing. Why is it necessary? Ans:
User Acceptance Testing is often the final step before rolling out the application. Usually the end users who will be using the application test it before 'accepting' it. This type of testing gives the end users confidence that the application being delivered to them meets their requirements. This testing also helps nail down bugs related to the usability of the application.

User Acceptance Testing – Prerequisites: Before user acceptance testing can be done, the application must be fully developed. Various levels of testing (unit, integration and system) are completed before user acceptance testing; as these levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test? To ensure effective user acceptance testing, test cases are created. These test cases can be created using the various use cases identified during the requirements definition stage, and they ensure proper coverage of all the scenarios during testing. During this type of testing the specific focus is the exact real-world usage of the application: the testing is done in an environment that simulates the production environment, and the test cases are written using real-world scenarios for the application.

User Acceptance Testing – How to Test? User acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application will already have undergone unit, integration and system level testing. However, it is useful if the user acceptance testing is carried out in an environment that closely resembles the real-world or production environment. The steps taken for user acceptance testing typically involve one or more of the following:
1. User Acceptance Test (UAT) planning
2. Designing UA test cases
3. Selecting a team to execute the UAT test cases
4. Executing test cases
5. Documenting the defects found during UAT
6. Resolving the issues / bug fixing
7. Sign-off
User Acceptance Test (UAT) Planning: As always, the planning process is the most important of all the steps, as it affects the effectiveness of the testing process. The planning process outlines the user acceptance testing strategy and describes the key focus areas and the entry and exit criteria.

Designing UA Test Cases: The user acceptance test cases help the test execution team to test the application thoroughly, and they help ensure that UA testing provides sufficient coverage of all the scenarios. The use cases created during the requirements definition phase may be used as inputs for creating test cases, as may input from business analysts and subject matter experts. Each user acceptance test case describes in simple language the precise steps to be taken to test something. The business analysts and the project team review the user acceptance test cases.

Selecting a Team to Execute the UAT Test Cases: Selecting the team that will execute the UAT test cases is an important step. The UAT team is generally a good representation of the real-world end users, and thus comprises the actual end users who will be using the application.
Executing Test Cases: The testing team executes the test cases and may additionally perform random tests relevant to them.

Documenting the Defects found during UAT: The team logs their comments and any defects or issues found during testing.

Resolving the issues/Bug Fixing: The issues/defects found during testing are discussed with the project team, subject matter experts and business analysts. The issues are resolved by mutual consensus and to the satisfaction of the end users.

Sign Off: Upon successful completion of user acceptance testing and resolution of the issues, the team generally indicates acceptance of the application. This step is important in commercial software sales: once the users "accept" the software delivered, they indicate that it meets their requirements. The users are now confident of the software solution delivered, and the vendor can be paid for it.
Q NO 6: What is unit testing?
Ans: In computer programming, unit testing is a method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine if they are fit for use. Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming a unit could be an entire module but is more commonly an individual function or procedure. In object-oriented programming a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are created by programmers or occasionally by white box testers during the development process.
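A minimal unit test sketch using Python's built-in unittest module. `word_count` is a hypothetical unit under test, not from any real project; each test method exercises it in isolation with one scenario.

```python
import unittest

def word_count(text: str) -> int:
    """Unit under test: count whitespace-separated words."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_single_word(self):
        self.assertEqual(word_count("hello"), 1)

    def test_multiple_words(self):
        self.assertEqual(word_count("unit tests run in isolation"), 5)

if __name__ == "__main__":
    # exit=False so the test run reports results without killing the process.
    unittest.main(exit=False, argv=["word_count_tests"])
```

Because the unit has no external dependencies, these tests run fast and pinpoint failures to a single function, which is exactly the goal of unit-level testing.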
Q NO 7: What is smoke testing?
Ans: Smoke testing refers to physical tests made to closed systems of pipes to test for leaks. By metaphorical extension, the term is also used for the first test made after assembly or repairs to a system, to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the
pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the system is ready for more stressful testing. The term smoke testing is used in several fields, including electronics, software development, plumbing, woodwind repair, infectious disease control, and the entertainment industry.
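In the software sense, a smoke test can be as small as the sketch below: a quick "does it start and not crash outright" check run before deeper testing. `create_app` is a hypothetical stand-in for the real application under test.

```python
def create_app(config=None):
    """Hypothetical application factory standing in for the real system."""
    return {"status": "running", "config": config or {}}

def smoke_test():
    # 1. The application starts without raising an exception.
    app = create_app()
    # 2. Its most basic observable behaviour is present.
    assert app["status"] == "running"
    return True

if smoke_test():
    print("smoke test passed: safe to proceed to deeper testing")
```

A failing smoke test stops the pipeline early; only when it passes is the build handed on to the more stressful test stages described above.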
Q NO 8: What is the exact difference between integration & system testing? Ans:
Integration Testing: In every application we initially implement small sub-functionalities and integrate them to make one big functionality. The main purpose of testing here is to check whether the integration is done correctly, verifying the dependencies between the small sub-functionalities.

System Testing: Here the application is viewed in terms of functionality as a whole; we never consider the sub-functionalities separately. So we test the flow of the system, which means verifying the functionality of the system end to end.

Integration testing: This test begins after two or more programs or application components have been successfully unit tested. The development team conducts it to validate the technical quality or design of the application. It is the first level of testing which formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of on-line modules within a dialog or conversation).

System testing: During this event, the entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the database management system, operations environment, and any communications system.

Example: Take the example of aeroplane assembly. Each unit is manufactured and tested separately. Then all the parts are assembled and tested for functionality as the parts start communicating with each other; this is integration testing. When you test the aeroplane as one masterpiece for its intended functionality, it is called system testing.

Q NO 9: Difference between black & white box software testing?
1. In a white box framework, the developer often needs to know the detailed implementation of the framework, while a black box framework consists of components that hide their internal implementation.
2. A white box framework offers a smaller range of flexibility, while a black box framework offers a greater range of flexibility: developers can choose different components and classes in a black box framework. A white box framework has to show complete details, while a black box framework has the flexibility to select which data to show and which to hide.
3. A white box framework is easier to develop than a black box framework, because there is no need to analyse what data should be hidden and what should be shown: in a white box framework the complete data and all internal information are available, with no level of abstraction.
4. A white box framework always comes with source code, but a black box framework does not.
5. A white box framework requires a deep understanding of the framework implementation, but a black box framework does not require deep knowledge of the framework's development.
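Applied to testing itself rather than frameworks, the distinction can be illustrated on one hypothetical function: black-box cases come from the specification alone, while white-box cases are chosen to drive execution down every branch of the implementation.

```python
def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by its side lengths."""
    if a == b == c:
        return "equilateral"
    elif a == b or b == c or a == c:
        return "isosceles"
    else:
        return "scalene"

# Black-box cases: derived from the spec (inputs -> expected outputs),
# with no knowledge of the code's structure.
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(2, 2, 2) == "equilateral"

# White-box cases: one per branch of the if/elif/else, aiming for full
# branch coverage of the implementation.
assert classify_triangle(1, 1, 1) == "equilateral"   # first branch
assert classify_triangle(2, 2, 3) == "isosceles"     # second branch
assert classify_triangle(2, 3, 4) == "scalene"       # else branch
print("black-box and white-box cases all passed")
```

The black-box cases would survive a complete rewrite of the function; the white-box cases would need revisiting if the branch structure changed, which is the practical cost of structure-based testing.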
Q NO 10: Diff b/w alpha & beta software testing?
Typically software goes through 2 stages: 1. Alpha testing 2. Beta testing. Alpha Testing: Testing a software product which is not the final version. This software does not necessarily have to contain the full functionality required for an application; however, core functionality to accept input and generate output is required. Alpha testing is testing of an application when development is nearing completion; minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers. Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers. They use either debugger software or hardware-assisted debuggers; the goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to the software QA staff for additional testing in an environment that is similar to the intended use.
Beta Testing: Beta testing is the last stage of testing, where a product is sent outside the company or offered as a free trial download. Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not programmers, software engineers, or test engineers. Following alpha testing, "beta versions" of the software are released to a group of people, and
limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public, in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.
Q NO 11: What is the difference between client-server testing and web-based testing, and what do we need to test in such applications? Ans:
Client-Server Testing: A client-server system is a two-tier application with a front end and a back end; the front end issues client requests, and the back end hosts the application and database servers together. Only a limited number of users will access this type of application, so we have to concentrate only on functionality testing.
Web-Based Testing: A web-based system uses a 3-tier/n-tier architecture: client request --> web server --> application server --> database server. The number of users is unlimited, since we cannot restrict who will access the application. We therefore need to concentrate on functionality testing as well as non-functional testing such as security testing, performance testing, and compatibility testing.
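The request chain above can be sketched as plain functions, one per tier. This is a hypothetical illustration (the route, table names, and function names are invented), meant only to show how each layer talks exclusively to the next one.

```python
# Hypothetical three-tier flow mirroring:
#   client request -> web server -> application server -> database server
def database_server(query):
    # data tier: owns the stored data
    tables = {"users": ["alice", "bob"]}
    return tables.get(query, [])

def application_server(request):
    # business-logic tier: validates and translates the request
    if request != "list_users":
        return {"status": 400, "body": []}
    return {"status": 200, "body": database_server("users")}

def web_server(http_path):
    # presentation tier: maps URLs onto application calls
    route = {"/users": "list_users"}.get(http_path, "")
    return application_server(route)

print(web_server("/users"))   # {'status': 200, 'body': ['alice', 'bob']}
print(web_server("/oops"))    # {'status': 400, 'body': []}
```

Because users reach the system only through `web_server`, testing must cover not just the functional result but also the non-functional behavior of every tier in between, which is why web-based testing adds security, performance, and compatibility concerns.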
Q NO 12: How do you perform regression testing of software?
Ans: Regression testing means retesting the unchanged parts of the application. Test cases are re-executed in order to check that the previous functionality of the application still works and that new changes have not introduced any new bugs. It is a method of verification: verifying that bugs are fixed and that newly added features have not created problems in a previously working version of the software.
Why regression testing? Regression testing is initiated when a programmer fixes a bug or adds new code for new functionality. It is a quality measure to check that new code complies with old code and that unmodified code is not affected. Often the testing team is tasked with checking last-minute changes to the system; in such situations, testing only the affected application area is necessary to complete the testing process in time while still covering all major system aspects.
How much regression testing? This depends on the scope of the newly added feature. If the scope of the fix or feature is large, then the affected application area is also quite large, and testing should be thorough, including all the application's test cases. This can be decided effectively once the tester gets input from the developer about the scope, nature, and amount of change.
Q NO 13: How do you feel about cyclomatic complexity? Ans: Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to indivisible groups of commands, and a directed edge connects two nodes if the second command might be executed immediately after the first. Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within a program. One testing strategy, called basis path testing by McCabe, who first proposed it, is to test each linearly independent path through the program; in this case, the number of test cases will equal the cyclomatic complexity of the program.
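The metric described above can be computed directly from the control flow graph with McCabe's formula V(G) = E - N + 2P (E edges, N nodes, P connected components, P = 1 for a single function). The sketch below uses an invented example graph for a function with one if/else and one while loop:

```python
# Cyclomatic complexity from a control flow graph: V(G) = E - N + 2P,
# where E = number of edges, N = number of nodes, and P = number of
# connected components (1 when analyzing a single function).
def cyclomatic_complexity(edges, num_nodes, components=1):
    return len(edges) - num_nodes + 2 * components

# Example CFG for:  if (a) {...} else {...}; while (b) {...}
# Nodes: 1 = if-test, 2 = then, 3 = else, 4 = while-test, 5 = loop body, 6 = exit
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 4), (4, 6)]
v = cyclomatic_complexity(edges, num_nodes=6)
print(v)  # 3
```

Here E = 7 and N = 6, so V(G) = 7 - 6 + 2 = 3, matching the rule of thumb "number of decision points plus one" (one `if` plus one `while`); basis path testing would therefore call for three linearly independent paths.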
Q NO 14: What are the possible states of a software bug? Explain the bug life cycle.
Ans: Introduction: A bug can be defined as abnormal behavior of the software. No software exists without bugs, and their elimination depends on the efficiency of the testing done on the software. A bug is a specific concern about the quality of the Application Under Test (AUT).
Bug Life Cycle: In the software development process, a bug has a life cycle that it must go through before it is closed; a specific life cycle ensures that the process is standardized. The bug attains the following states in its life cycle:
1. New
2. Open
3. Assign
4. Test
5. Deferred
6. Rejected
7. Duplicate
8. Verified
9. Reopened
10. Closed
Description of the Various Stages:
1. New: When the bug is posted for the first time, its state is "NEW". This means the bug has not yet been approved.
2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes the state to "OPEN".
3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team, and the state changes to "ASSIGN".
4. Test: Once the developer fixes the bug, he assigns it to the testing team for the next round of testing. Before releasing the software with the bug fixed, he changes the state to "TEST", indicating that the bug has been fixed and released to the testing team.
5. Deferred: A bug in the deferred state is expected to be fixed in a later release. There are many reasons for moving a bug to this state: the bug's priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels the bug is not genuine, he rejects it, and the state changes to "REJECTED".
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, one bug's status is changed to "DUPLICATE".
8. Verified: Once the bug is fixed and the status changed to "TEST", the tester retests it. If the bug is no longer present in the software, he approves that it is fixed and changes the status to "VERIFIED".
9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to "REOPENED", and the bug traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels the bug no longer exists in the software, he changes its status to "CLOSED". This state means the bug is fixed, tested, and approved.
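The life cycle above is essentially a state machine, and a defect tracker can enforce it as one. The sketch below is an illustrative subset of the transitions described (the `TRANSITIONS` table is an assumption drawn from the stage descriptions, not any particular tool's rules):

```python
from enum import Enum, auto

class BugState(Enum):
    NEW = auto(); OPEN = auto(); ASSIGN = auto(); TEST = auto()
    DEFERRED = auto(); REJECTED = auto(); DUPLICATE = auto()
    VERIFIED = auto(); REOPENED = auto(); CLOSED = auto()

# Allowed moves, taken from the stage descriptions above (illustrative subset).
TRANSITIONS = {
    BugState.NEW:      {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE},
    BugState.OPEN:     {BugState.ASSIGN, BugState.DEFERRED},
    BugState.ASSIGN:   {BugState.TEST, BugState.DEFERRED},
    BugState.TEST:     {BugState.VERIFIED, BugState.REOPENED},
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGN},   # bug traverses the cycle again
    BugState.DEFERRED: {BugState.OPEN},     # picked up in a later release
}

def move(current, nxt):
    # Reject any transition the life cycle does not allow.
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

# Walk the "happy path": NEW -> OPEN -> ASSIGN -> TEST -> VERIFIED -> CLOSED
state = BugState.NEW
for step in (BugState.OPEN, BugState.ASSIGN, BugState.TEST,
             BugState.VERIFIED, BugState.CLOSED):
    state = move(state, step)
print(state.name)  # CLOSED
```

Encoding the transitions this way makes the "standardized process" concrete: a bug cannot jump straight from NEW to CLOSED without passing through the required intermediate states.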
While defect prevention is much more effective and efficient at reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines for Deciding the Severity of a Bug: Indicate the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects. A sample guideline for assigning priority levels during the product test phase:
1. Critical / Show Stopper: An item that prevents further testing of the product or function under test is classified as a critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a missing security permission required to access a function under test.
2. Major / High: A defect that does not function as expected or designed, or that causes other functionality to fail to meet requirements, is classified as a major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations or the wrong field being updated.
3. Average / Medium: Defects that do not conform to standards and conventions are classified as medium bugs. Easy workarounds exist to achieve the functionality objectives. Examples include matching visual and text links that lead to different end points.
4. Minor / Low: Cosmetic defects that do not affect the functionality of the system are classified as minor bugs.
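A triage guideline like the one above can be captured as a small decision helper. This is a hypothetical sketch of the four-level scheme; the function name and its boolean inputs are invented for illustration:

```python
from enum import IntEnum

class Severity(IntEnum):
    CRITICAL = 1   # blocks further testing, no workaround possible
    MAJOR = 2      # requirement not met, but a workaround exists
    MEDIUM = 3     # standards/convention violation, easy workaround
    MINOR = 4      # cosmetic only, functionality unaffected

def classify(blocks_testing, meets_requirements, cosmetic_only):
    # Hypothetical triage helper following the guideline above:
    # check the most severe condition first, fall through to the least.
    if blocks_testing:
        return Severity.CRITICAL
    if not meets_requirements:
        return Severity.MAJOR
    if cosmetic_only:
        return Severity.MINOR
    return Severity.MEDIUM

print(classify(True, False, False).name)   # CRITICAL
print(classify(False, False, False).name)  # MAJOR
print(classify(False, True, True).name)    # MINOR
```

Using an `IntEnum` keeps the levels orderable, so defect lists can be sorted so that show-stoppers (level 1) surface first for developers and management.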