SDLC Phases Review
[Figure: SDLC phases - Requirements, Design, Construction, Testing]
3 What is a Test Metric? A metric is a number that expresses a relationship between two variables. Software metrics are measures used to quantify status or results. This includes items that are directly measurable, such as lines of code, as well as items that are calculated from measurements, such as earned value. Metrics specific to testing include data regarding testing, defect tracking, and software performance.

4 What is a Metric? A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.

5 Process Metric A process metric is a metric used to measure characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.

6 Product Metric A product metric is a metric used to measure the characteristics of the documentation and code.
7 Software Quality Metric A software quality metric is a function whose inputs are software data and whose output is a single numerical value that can be interpreted as the degree to which the software possesses a given attribute that affects its quality. Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
Total number of tests
Number of tests executed to date
Number of tests executed successfully to date
8 Data concerning Software Defects Data concerning software defects include:
Total number of defects corrected in each activity
Total number of defects detected in each activity
Average duration between defect detection and defect correction
Average effort to correct a defect
Total number of defects remaining at delivery
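The counts and averages above fall out of a defect log directly. A minimal Python sketch; the defect records and their layout are illustrative, not taken from any particular tracking tool:

```python
from datetime import date

# Hypothetical defect records: (detected, corrected, effort_hours); None = not yet corrected.
defects = [
    (date(2024, 1, 10), date(2024, 1, 14), 3.0),
    (date(2024, 1, 12), date(2024, 1, 13), 1.5),
    (date(2024, 1, 20), None, None),  # remaining at delivery
]

corrected = [d for d in defects if d[1] is not None]
total_detected = len(defects)
total_corrected = len(corrected)
remaining_at_delivery = total_detected - total_corrected

# Average duration between detection and correction, and average correction effort.
avg_days_to_fix = sum((fix - found).days for found, fix, _ in corrected) / len(corrected)
avg_effort = sum(e for _, _, e in corrected) / len(corrected)

print(total_detected, total_corrected, remaining_at_delivery)  # 3 2 1
print(avg_days_to_fix, avg_effort)                             # 2.5 2.25
```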
9 Objective Vs Subjective Measures Measurement can be either objective or subjective. An objective measure is a measure that can be obtained by counting; objective data is hard data, such as defects, hours worked, and completed deliverables. Subjective data normally has to be calculated; it is a person's perception of a product or activity. For example, a subjective measure would involve such attributes of an information system as how easy it is to use and the skill level needed to operate the system. As a general rule, subjective measures are much more important than objective measures. For example, it is more important to know how effective a person is in performing a job (a subjective measure) than whether or not they got to work on time (an objective measure).

10 Objective Vs Subjective Measures Individuals tend to want objective measures because they believe they are more reliable than subjective measures. It is unfortunate but true that many bosses are more concerned that the workers are at work on time and do not leave early than with how productively those workers actually work. They believe meeting objective measures is more important than meeting subjective measures, such as how easy the systems they built are to use.

11 How do you know a Metric is good?
Reliability This refers to the consistency of measurement. If taken by two people, would the same results be obtained?
Validity This indicates the degree to which a measure actually measures what it was intended to measure.
Ease of Use and Simplicity These are functions of how easy it is to capture and use the measurement data.
Timeliness This refers to whether the data was reported in sufficient time to impact the decisions needed to manage effectively.
Calibration This indicates the adjustment of a metric so it becomes more valid, for example, changing a customer survey so it better reflects the true opinions of the customer.

12 Test Metric Categories In examining many reports prepared by testers, the following eight metric categories are commonly used:
Metrics unique to test
Complexity measurements
Project metrics
Size measurements
Defect metrics
Product measures
Satisfaction metrics
Productivity metrics
13 Metrics unique to Test This category includes metrics such as Defect Removal Efficiency, Defect Density, and Mean Time to Failure. The following are examples of metrics unique to test:
Defect removal efficiency – the percentage of total defects occurring in a phase or activity that are removed by the end of that activity.
Defect density – the number of defects in a particular product.
Mean time to failure – the average operational time it takes before a software system fails.
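These three metrics are simple ratios. A small sketch with assumed sample inputs; for removal efficiency it assumes defects originating in a phase are either removed in that phase or escape and are found later:

```python
def defect_removal_efficiency(found_in_phase, escaped_from_phase):
    """Percentage of the phase's defects removed by the end of the phase."""
    total = found_in_phase + escaped_from_phase
    return 100.0 * found_in_phase / total

def defect_density(defects, kloc):
    """Defects per thousand lines of code (KLOC is one common size unit)."""
    return defects / kloc

def mean_time_to_failure(hours_between_failures):
    """Average operational time before the system fails."""
    return sum(hours_between_failures) / len(hours_between_failures)

print(defect_removal_efficiency(45, 5))       # 90.0
print(defect_density(30, 12.5))               # 2.4 defects per KLOC
print(mean_time_to_failure([100, 150, 200]))  # 150.0 hours
```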
14 Complexity Measurements This category includes quantitative values accumulated by a predetermined method, which measure the complexity of a software product. The following are examples of complexity measures:
Size of module/unit – larger modules/units are considered more complex.
Logic complexity – the number of opportunities to branch/transfer within a single module.
Documentation complexity – the difficulty level in reading documentation, usually expressed as an academic grade level.

15 Project Metrics This category includes the status of the project, including milestones, budget and schedule variance, and project scope changes. The following are examples of project metrics:
Percent of budget utilized
Days behind or ahead of schedule
Percent of change of project scope
Percent of project completed (not a budget or schedule metric, but rather an assessment of the functionality/structure completed at a given point in time)

16 Size Measurements This category includes methods primarily developed for measuring the software size of software systems, such as lines of code and function points. These can also be used to measure software testing productivity. Sizing is important in normalizing data for comparison to other projects. The following are examples of size metrics:
KLOC – thousand lines of code, used primarily with statement-level languages
Function points – a defined unit of size for software
Pages or words of documentation

17 Defect Metrics This category includes values associated with numbers or types of defects, usually related to system size, such as "defects/1000 lines of code" or "defects/100 function points". The following are examples of defect metrics:
Defects related to the size of the software
Severity of defects, such as very important, important, and unimportant
Priority of defects – the importance of correcting defects
Age of defects – the number of days a defect has been uncovered but not corrected
Defects uncovered in testing
Cost to locate a defect
18 Product Measures This category includes measures of a product's attributes such as performance, usability, and reliability. The following are examples of product measures:
Defect density – the expected number of defects that will occur in a product during development
Software defects uncovered after the software is placed into an operational status
Acceptance criteria met – the number of user-defined acceptance criteria met at the time the software goes operational

19 Satisfaction Metrics This category includes customers' assessment of the effectiveness and efficiency of testing. The following are examples of satisfaction metrics:
Ease of use – the amount of effort required to use the software and/or software documentation
Customer complaints – some relationship between customer complaints and the size of the system or the number of transactions processed
Customer subjective assessment – a rating system that asks customers to rate their satisfaction on different project characteristics on a scale, for example a scale of 1-5
User participation in software development – an indication of the users' desire to produce high-quality software on time and within budget

20 Productivity Metrics This category includes the effectiveness of test execution. Examples of productivity metrics are:
Cost of testing in relation to overall project costs – assumes a commonly accepted ratio of the costs of development versus test
Under budget/Ahead of schedule
Amount of testing using automated tools

21 Requirement Document Review Effort This is a static test activity to ensure that the created SRS matches the raw requirements of the customer (sometimes called the CRS). It minimizes the deviation that might otherwise occur while documenting requirements during development. The review is specific to the project and can be measured based on the application complexity, the information provided by the customer, etc. The effort required to review the requirement specification document against the specified client requirements is the Requirement Specification Document Review Effort. It is calculated in Person Hours. A Person Hour, or man-hour, is the amount of work performed by an average worker in one hour.

22 Test Strategy Review Effort
Test Strategy is the result of balancing quality risks against project resources, including time.

23 What is a Good Strategy? Given the product nature, customer expectations, and project constraints, what approach will give an optimum yield of bugs and minimize the risk of product failure? A distinct test approach is arrived at by applying the fundamental notions and economics of testing.

24 Key Aspects in a Test Strategy to be considered:
When will testing be performed during the life cycle?
Degree of focus and effort for different modules or functions in the product
When to stop testing a module or product – optimum point?
Extent of regression testing to be carried out vis-à-vis the risk exposure
Efforts required to cooperate with project management and development teams to make the test strategy work

25 Key Project Features considered for Review:
Duration & budget allotted to the test effort or program
Quantum of critical/major functions
Quality risk exposure in the product
Delivery priority and timelines of product releases
Expected/stated level of reliability & performance
Software process maturity in the organization
Technology learning curve

26 Test Strategy Procedure
Entry Criteria: Approved Project Plan and SRS
Tasks:
Identify a team to prepare the test strategy document; the team should comprise the project members and members of the independent testing team
Perform risk assessment
Identify critical success factors
Evaluate the project resources and constraints
Form prioritized test objectives
Identify trade-offs
Discuss and decide on the types of testing, the amount of testing, and how regression testing has to be carried out
Prepare the test strategy document as per the defined format
Submit the test strategy document to the review team [Validation]
Release the test strategy upon review and approval
Exit Criteria: Approved Test Strategy Document

27 Test Strategy Efforts
Effort for Preparation of Test Strategy Document: The effort for preparing the Test Strategy is calculated in Person Hours, considering the above mandatory parameters.
Effort for Review of Test Strategy Document: The effort for reviewing the Test Strategy is calculated in Person Hours, considering the above mandatory parameters.

28 Test Plan Review Effort A test plan can be an overall test plan or a level test plan. The overall test plan talks in general about the entire testing process. A level test plan talks in detail about a level of testing, such as Unit, Integration, System, Acceptance, or Regression Testing. Preparation of test plans can be started as early as possible in the life cycle to keep the COQ (Cost of Quality) low. Reviews of test plans can likewise be started as early as possible in the life cycle, once the preparation of test cases based on the plan is over, to keep the COQ low.

29 Test Plan Procedure
Entry Criteria: Approved Project Plan and SRS
Tasks:
Understand the test strategy
Determine the levels of testing, test tools and techniques
Determine the inputs, resources, and dependencies for each level of testing
Develop and document a test plan that details the scope, test levels, test effort estimates, resources, schedule, tools, the test environment, deliverables, defect reporting, and progress reviews
Initiate technical review of the test plan [Validation]
Integrate the durations and timelines into the overall project plan and schedule
Keep the test plan current as the project progresses
Exit Criteria: Approved Test Plan

30 Level Test Plan Templates Sample Unit Test Plan:

31 Level Test Plan Templates Sample Integration Test Plan:

32 Effort for Preparation of Level Test Plans The effort for preparing Test Plans is calculated in Person Hours.

33 Effort for Reviewing Level Test Plans The effort for reviewing Test Plans is calculated in Person Hours.

34 Requirement Specification – Example Sample Requirement Specification for Lock and Key:

35 Requirement Traceability Matrix Requirements like the one given in the previous slide are tracked by a Requirements Traceability Matrix (RTM). An RTM traces all the requirements from their genesis through design, development, and testing. The percentage of requirements covered by planned testing with the designed test cases can be calculated as: (Total Number of Requirements Covered / Total Number of Requirements in the Requirement Document) * 100. This will not apply to ad hoc testing, where testing is done based on the tester's intuition and experience and not on planned cases. This matrix evolves through the life cycle of the project.

36 Sample Requirement Traceability Matrix

37 Requirement Traceability Matrix Once test case creation is completed, the RTM helps in identifying the relationship between the requirements and the test cases. The following combinations are possible:
One to One – For each requirement there is one test case. Ex: BR-01
One to Many – For each requirement there are many test cases. Ex: BR-08
One to None – A requirement can have no test cases; the test team can take a decision not to test a requirement due to non-implementation or the requirement being low priority. Ex: BR-03
Many to One – A set of requirements can be tested by one test case (not represented in the table)
Many to Many – Many requirements can be tested by many test cases (such test cases are common in integration and system testing; however, an RTM is not meant for this purpose)

38 Requirement Traceability Matrix The test conditions column lists the different ways of testing the requirement. The phase of testing column indicates when a requirement will be tested and at what phase of testing it needs to be considered. The test case IDs column can be used to complete the mapping between test cases and the requirement. Once the test cases are executed, the test results can be used to collect metrics such as:
Total number of test cases (or requirements) passed
Total number of test cases (or requirements) failed
Total number of defects in requirements

39 Requirement Traceability Matrix Sample Test Execution Data:
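From execution data like this, the RTM coverage and pass/fail numbers reduce to simple counting. A minimal sketch; the requirement and test case IDs are illustrative:

```python
# Hypothetical RTM slice: requirement ID -> IDs of test cases covering it,
# plus an execution result for each test case.
rtm = {
    "BR-01": ["TC-01"],           # one to one
    "BR-03": [],                  # one to none (decided not to test)
    "BR-08": ["TC-05", "TC-06"],  # one to many
}
results = {"TC-01": "pass", "TC-05": "pass", "TC-06": "fail"}

# (Total Requirements Covered / Total Requirements) * 100
covered = [req for req, tcs in rtm.items() if tcs]
coverage_pct = 100.0 * len(covered) / len(rtm)

passed = sum(1 for r in results.values() if r == "pass")
failed = sum(1 for r in results.values() if r == "fail")

print(round(coverage_pct, 1))  # 66.7
print(passed, failed)          # 2 1
```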
40 Test Estimate Preparation Effort The effort required to prepare the effort estimate for the proposed project also has to be taken into account. This planning covers identifying the key parameters for estimation, such as "who will be involved in estimation", "what are their roles and responsibilities", "time frame", "cost for estimation", and "factors that have to be considered for estimation". This is calculated in Person Hours. For Ex: Preparation of the estimate for project "X" requires 30 Person Hours.

41 Test Estimate Review Effort The effort required to review the estimates that have been prepared as planned comes under this category. This is calculated in Person Hours. If there is any deviation from what was planned, it has to be noted with reasons. This enables preparing the estimate more accurately, and with better focus, the next time similar work is taken up; it also reduces person-hour effort and saves time and cost. When a deviation is found, it has to be corrected, and the review process has to be conducted again to ensure that the deviation has been minimized or closed. This ensures that Test Estimate Preparation and Test Estimate Review are not one-time or static activities; they are repeated until a sufficient level of quality has been achieved in the work product.

42 Errors found during RS review These are deviations found during review of the RS document: the differences between the actual RS document and what is expected in the RS document from the customer. These are generally counted in numbers. For Ex: There are 50 deviations in the RS document from what is expected.

43 Errors found during Test Strategy review These are deviations found during review of the Test Strategy document: the differences between the actual TS document and what is expected in the TS document. These are generally counted in numbers. For Ex: There are 25 major deviations in the TS document from what is planned and expected.

44 Errors found during Project Plan review Here the Project Plan refers to the overall testing project plan; this plan is specific only to testing. It can be sub-categorized into different level test plans like Unit Test Plan, Integration Test Plan, System Test Plan, etc. These are the deviations found during review of the overall PP document, generally counted in numbers. For Ex: There are 25 major deviations in the PP document from what is planned and expected.

45 Counting Requirements Requirements are classified into 3 categories when taken into account for analysis. They are:
Requirements in Scope
Requirements not in Scope
Need Clarification

46 Requirements in Scope These are requirements that are clearly specified and well understood by all stakeholders, and the customer wants these requirements to be implemented and tested for quality. For Ex: The Add functionality has to be tested in the Calculator software.

47 Requirements not in Scope These are requirements that may or may not be applicable for the project, but the customer does not want them to be implemented or tested. For Ex: The customer wants to check the functionality of the Calculator program and not its performance. In this case, taking the above example of checking the Add functionality, the tester has to concentrate only on checking whether the "Add" business logic is correct, and not on how soon it arrives at results.

48 Need Clarification on Requirements These are requirements that are not clear to one or more stakeholders of the project. They need clarification from the customer to get a clear idea of the requirement and to plan how it can be implemented and tested. For Ex: "Response time between 2 screens should be within acceptable limits." As stated, this typical requirement is not testable: what is an acceptable limit for this customer? We need to get it clarified from the customer and set goals before taking this kind of requirement forward.

49 Test Case Design Effort This is the effort required to create the test suite, by either manual or automated means. This is considered one of the most difficult and challenging tasks in testing, and it requires a sufficient amount of intelligence from the testing personnel. This is calculated in Person Hours.
Test Case Creation Productivity = Test Cases written / Effort
For Manual Testing – Design of Test Cases: Manual TC creation productivity = Manual Test Cases written / Effort. The units of measure are numbers per hour, numbers per day, numbers per week, etc.
For Automation Testing – Design of Test Scripts: Automation script creation productivity = Scripts written / Effort. The units of measure are numbers per hour, numbers per day, numbers per week, etc.

50 Test Case Review Errors These are the errors found while reviewing the test cases. Deviations or differences found between the expected and actual results of the test cases are called test case review errors. They are measured in numbers.

51 Test Case Review Error Rate The rate of errors that arise in the test suite is called the Test Case Review Error Rate. This is calculated periodically to analyze the total number of errors compared with the test cases designed. TC Review Error Rate = No. of Errors / No. of Test Cases. On average, each test case produces "n" errors, where "n" can range from 1 to infinity.

52 Test Case Review Productivity TC Review Productivity = No. of Test Cases reviewed / Review Effort. This analyzes how productively the effort put into reviewing the test cases is used.

53 Defect Rejection Ratio
Sometimes defects are rejected for different reasons without being fixed and are brought to the "Closed" state. To calculate the percentage ratio of rejected defects: Defect Rejection Ratio = Number of defects rejected / Total number of defects

54 Test Execution Productivity
Manual Test Execution Productivity = Manual Test Cases executed / Hour. The unit of measure is numbers per hour.
Automation Test Execution Productivity = Automated Test Cases executed / Hour. The unit of measure is numbers per hour.

55 Defect Trend Analysis Defect trend reports show defect counts by status (New, Open, or Closed) as a function of time. The trend reports can be cumulative or noncumulative and help management identify defect rates by status, providing an indication of how well software quality is progressing through the project cycle. The figure on the next slide represents a typical defect trend report.

56 Defect Trend Analysis In the above figure, the number of new defects peaked in February. Lagging behind new defects by about a month, the number of open defects was highest in March. The defect-fixing efforts appear to be consistent throughout the project, closing all defects by June.

57 New Test Cases Efficiency New test cases might get added to the existing test case repository because of a new enhancement or a change in the requirement document. New Test Cases Efficiency = Cumulative Defects found by New Test Cases / Cumulative New Test Cases Executed

58 Test Coverage Ratio Test Coverage % (Functional) = (No. of functions covered / Total functions) * 100
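The rejection, new-test-case-efficiency, and coverage formulas above are straightforward ratios. A small sketch with assumed sample counts:

```python
def defect_rejection_ratio(rejected, total_defects):
    """Fraction of reported defects closed as rejected rather than fixed."""
    return rejected / total_defects

def new_test_case_efficiency(defects_found_by_new_tcs, new_tcs_executed):
    """Cumulative defects found by new test cases per new test case executed."""
    return defects_found_by_new_tcs / new_tcs_executed

def functional_coverage_pct(functions_covered, total_functions):
    """Test Coverage % (Functional) = covered / total * 100."""
    return 100.0 * functions_covered / total_functions

print(defect_rejection_ratio(5, 50))    # 0.1
print(new_test_case_efficiency(6, 30))  # 0.2
print(functional_coverage_pct(45, 60))  # 75.0
```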
59 Test Case Passing Rate Test Case Passing Rate = No. of Test Cases passed / No. of Test Cases executed

60 Defect Severity Distribution Defect severity talks about "how bad the defect is". Defect Severity Distribution = No. of Defects in the categories of Fatal, Major and Minor

61 Test Schedule Deviation Ratio Test Schedule Deviation Ratio = (Actual end date - Planned end date) / Planned end date * 100

62 Effort Deviation Ratio Effort Deviation Ratio = (Actual effort - Planned effort) / Planned effort * 100

63 Lab Resource Utilization Lab Resource Utilization = Total person hours the resource was utilized vs. total person hours the resource was available

64 Release Coverage Release Coverage = No. of Test Cases executed by Wipro out of the total No. of Test Cases planned for the release. Release Coverage = (No. of Test Cases executed by Wipro / Total No. of Test Cases planned for the release (to be executed)) * 100

65 Test Effectiveness
Test Effectiveness = A / (A + B), where A = the number of defects found by the test team and B = the number of defects found in the product after release. The higher this number, the higher the effectiveness of the testing in driving out defects, indicating that a higher ratio of defects, or more important defects, was detected before release.

66 Defect Tracking Defect Turnaround Time = Defect Closed Day – Defect Open Day. This is also called Defect Ageing.

67 Reliability Estimation Residual defects are one of the most important factors in deciding whether a piece of software is ready to be released. In theory, one can find all the defects and count them; in practice, however, it is impossible to find all the defects within a reasonable amount of time. These estimates are made to ensure that the application is reliable:
Estimated no. of residual defects at any point of time
Projection of the test execution effort for discovering the residual defects
Estimated no. of additional test cases to be executed to capture the residual defects
Tools and Techniques (StORM 101)

1 StORM Tools (Statistics Operations Research Matrix)

2 Agenda
1 Introduction to StORM Tools
2 Wipro CoDeC Tool
3 Wipro OA Tool
4 Wipro DFA Tool
5 Questions and Answers

3 Introduction to StORM Tools
Wipro CoDeC (Complexity Dependency Change impact) Tool
1. SCE (Effort Estimation) System Complexity Estimator
• Analyzes and estimates the effort distribution required across modules
2. DSM (Test Sequencing) Dependency Structure Matrix
• Determines the sequence of test execution
• Determines modules to be executed serially and in parallel
• Avoids dependency clash during test execution of modules
• Reduces cycle time and improves productivity
3. SCIM (Maintenance testing) System Change Impact Matrix
• Estimates the change impact on the system due to Change Requests (CRs)
• Estimates the relative effort distribution across different CRs
Wipro OA (Orthogonal Array) Tool
Orthogonal Array (Test suite optimization)
• Optimizes the test suite
• Eliminates redundant test cases
• Improves test coverage
• Reduces effort during test case development and test case execution
Wipro DFA (Defect Flow Analysis) Tool
1. Metrics Analysis (Test Reporting)
• Systematically analyzes various metrics in testing projects: test case productivity, pass rate, defect trends, defect priority analysis, etc.
• Standardizes reports by providing graphical and tabular representation of software test project management metrics
2. Reliability Analysis (Reliability Estimation)
• Estimates residual defects in the system
• Indicates whether to continue or stop testing

4 StORM Tools
5 Wipro CoDeC Tool Features & Benefits, Example Case Studies

6 Wipro CoDeC Tool Complexity Dependency and Change Impact Matrix
8-19 CoDeC-DSM Dependency Structure Matrix (example slides)

20 CoDeC-DSM Dependency Structure Impact Analysis Reports:

21-26 Challenges

27 Wipro OA Tool

28 The Combinatorial Testing Problem One of the challenges we face during testing is managing the large number of test cases we need to create and execute. Consider this problem on a small scale with the following example:
Web Browser (IE, Netscape, Mozilla)
Operating System (Windows, Macintosh, Linux)
Connection Type (LAN, ISDN)
In order to test all combinations we would need 3 x 3 x 2 = 18 test cases.

29 The Combinatorial Testing Problem In this example, generating and running eighteen test cases is not a big problem, but most of our applications are not this simple. The problems we face within our organizations involve much more complex combinations and ultimately vast numbers of test cases.

30 The Combinatorial Testing Problem Take, for instance, a car insurance quotation:
Three policy types (third party; third party, fire, and theft; fully comprehensive)
Three storage modes (garage, driveway, road)
Four no-claims-discount (NCD) types (0 years, 1 year, 2 years, and 3+ years)
Two license types (provisional, full)
Five age categories (17-21, 22-30, 31-40, 41-50, 50+)
Five engine sizes (<1001 cc, 1001 cc-1600 cc, 1601 cc-2000 cc, 2001 cc-2999 cc, 3000 cc+)

31 The Combinatorial Testing Problem To test all combinations we would need: 3 x 3 x 4 x 2 x 5 x 5 = 1,800 test cases. Let us estimate that each test case takes half a day to design, set up, run, and check. This would equate to 900 days of testing: that's almost four years!

32 The Combinatorial Testing Problem Therefore, we are faced with a dilemma. On one hand, exhaustive testing of all combinations is often impractical and in most instances impossible in the time allocated, and we could find ourselves executing tests for infeasible combinations. On the other hand, if we were to ignore certain combinations, there is a risk of missing important bugs. So how do we select the best combinations?

33 OATS as a Solution Orthogonal Array Testing (OATS) provides a means to select a test set that:
Guarantees testing the pair-wise combinations of all the selected variables.
Creates an efficient and concise test set with many fewer test cases than testing all combinations of all variables.
Creates a test set that has an even distribution of all pair-wise combinations.
Is simpler to generate and less error-prone than test sets created by hand.

34 What is OATS? Dr. Genichi Taguchi was one of the first proponents of orthogonal arrays in test design. The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions. It provides representative (uniformly distributed) coverage of all variable pair combinations.

35 Kinds of Errors Revealed Consider Example 1.
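The browser example from earlier makes the pair-wise reduction concrete. This sketch is not the orthogonal-array construction itself, just a simple greedy pair-wise picker, but it shows how far fewer than 18 rows can still cover every pair of values:

```python
import itertools

# The 3 x 3 x 2 example factors from the slides.
factors = {
    "browser": ["IE", "Netscape", "Mozilla"],
    "os": ["Windows", "Macintosh", "Linux"],
    "connection": ["LAN", "ISDN"],
}

all_rows = list(itertools.product(*factors.values()))  # all 18 combinations

def pairs_of(row):
    """Every value pair (across two different factors) that this row covers."""
    return {((i, row[i]), (j, row[j]))
            for i, j in itertools.combinations(range(len(row)), 2)}

uncovered = set().union(*(pairs_of(r) for r in all_rows))

# Greedy: repeatedly pick the candidate row covering the most uncovered pairs.
suite = []
while uncovered:
    best = max(all_rows, key=lambda r: len(pairs_of(r) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(all_rows))  # 18 exhaustive combinations
print(len(suite))     # a much smaller suite that still covers every pair
```

A true orthogonal array would additionally balance how often each pair appears; the greedy version only guarantees that each pair appears at least once.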
The following are the kinds of faults that can be revealed using OATS in combinatorial problems.

36 Single Mode Fault As usual, the simplest kind of problem to detect is one that is triggered by a single variable in a single state. This is called a single-mode fault.

37 Double Mode Fault Assume now that there is a defect that depends upon two conditions: even though two things work by themselves, they fail when paired (connected) together. This is known as a double-mode fault or, more generally, as a multi-mode fault.

38 Triple / Multi Mode Fault The most difficult kind of problem to find by black-box testing is one in which three or more things in combination don't work together. This is known as a triple-mode fault.

39 Wipro OA Tool
41 Abstraction of Factors and Levels
42 Exhaustive Vs OA Generated Test Cases
43 Abstraction of Factors and Levels
44 Exhaustive Vs OA Generated Test Cases
45 Exercise
49 Wipro DFA Tool
50 Wipro DFA Tool Defect Flow Analysis
51 Wipro DFA Tool – Metrics Analysis
53 Wipro DFA Tool – Reliability Analysis
55 Question and Answers
56 Thank You StORM Team

57 ReAl Resource Allocation Tool Ver. 1.0 StORM Team TeS Innovation Center

58 Agenda

59 Introduction to ReAl Tool http://tes-bu.wipro.com/storm

60 ReAl Tool – a snapshot
ReAl is an automatic project scheduling tool:
Module dependencies and constraints are considered
Inclusion of a leave calendar in schedule generation
Recommendations for an optimized skill solution
Output can be fed to Microsoft Project Plan for monitoring and tracking, anytime during the project life cycle

61 ReAl Tool Primary Screen

62 ReAl Tool Pre-Processing

63 Input Template / Creation Steps for Input Template Creation:
a. Launch the tool using the link
b. Click on the button "Input Template" on the GUI
c. Give the following information:
i. No. of modules in the project / system
ii. No. of resources planned for use in execution of the project (team strength)
d. Click on "Create input template"
e. Download the created input template

64 Input Template Creation

65 Input Template
2. Input Data Set
a. Module Dependency Matrix – the dependency between various modules can be defined for processing.
66 Input Template
b. Module Effort Matrix – the user has to input the effort required to complete each module, in Person Days.

67 Input Template
c. Maximum Resource Matrix – the maximum number of resources working in a module at a given point of time.

68 Input Template
d. Module Earliest Start Date – a facility to keep the execution of a module on hold till a particular date.

69 Input Template
e. Time unit – 1 for day-wise and 2 for half-day schedules.

70 Input Template
f. Resource Weight-age Matrix – the efficiency of a resource working in a module. The range is between 0 and 1, where 0 means 0% and 1 means 100%.

71 Input Template
g. Resource Leave and Organization Holidays – information on resource leave days during project execution and on project / organization holidays. A maximum of 10 leave days can be given for each resource; the tool will avoid allocating the resource to any module during the same. There is a facility to mark Saturdays & Sundays as holidays, and the tool will skip all of the above during schedule generation.
72 Input Template
h. Resource Availability Period: date range for which the resource is available.
73 Mandate of Work
The resource will be allocated to the module whenever it is active. Multiple modules can be mandated; other constraints are considered during allocation. Facility to specify the % of daily time a resource needs to spend in the module.
74 Executing Input
3. Processing the input sheet: click on the button ‘Execute ReAL’; browse and upload the input sheet; mention the type of output sheet (either Excel or HTML) in the input window.
4. Output types: the user can opt for either Excel or HTML output.
Solution type: Primary Solution; Skill Optimized Solution; Least Schedule.
75 Executing Input / GUI
76 ReAl Output / Module Vs Resource Assignments
5) Interpreting the output tables:
a. Module Assignment Vs Resources: the most important and primary output of the tool, describing the module-to-resource assignment.
77 ReAl Output
b. Module Vs Effort Consumption: daily module effort consumption; shaded non-zero cells in this table indicate a Gantt chart.
78 ReAl Output
b. Module Vs Effort Consumption – sorted by module start dates.
79 ReAl Output
b. Module Vs Effort Consumption – sorted by module end dates.
80 ReAl Output
c. Module Vs Execution Dates: indicates when the modules are taken up for execution; the same matrix is sorted with module start and finish dates.
81 ReAl Output
d. Resources Vs Skills Used: snapshot of the ‘Resource Weight-age Matrix’. Yellow indicates the resource skills used in the project execution; red indicates the skills for a module which are not utilized.
82 ReAl Output
e. Resources Vs More Desired Skills: green cells indicate additional desirable skills for a resource; may help in shrinking the overall schedule.
83 ReAl Output
f. Resources Vs Project Presence: duration of resources in the project; the same matrix is sorted with ‘resource reporting’ and ‘resource last day’.
84 ReAl Output
g. Project Scheduling Efficiency: matrix based on ‘Module Assignment Vs Resources’.
Ratio of the number of ‘allocated cells’ in the table to ‘all cells’, barring any resource leave and holidays. A less than 100% efficiency indicates that some resources are idle.
85 ReAl Output
h. Percentage Engagement: resource-wise snapshot of total available days, total engaged days and % engagement – Total project days; Applicable project days for a resource; Engaged project days.
86 ReAl Output
87 ReAl Output
j. Resource Vs Module (Days to Spend): duration of a resource in a particular module.
88 Recap
Web based tool; Excel based standard input template; Output: Excel based or HTML. Project schedule creation within minutes – at the beginning of the project or any time during the project. Optimal allocation of multi-skilled resources. Inclusion of resource leaves, organization holidays, resource release dates, module delivery dates, module dependency → schedule creation.
89
90 Thank You
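The scheduling-efficiency ratio and percentage engagement above reduce to cell counts over the assignment matrix. A sketch with a hypothetical allocation matrix (None marks leave/holiday cells, which the definition excludes):

```python
# Hypothetical allocation matrix: rows are resources, columns are project days.
# 1 = allocated to some module, 0 = idle, None = leave or organization holiday
# (excluded from the ratio, as the slide specifies).
def scheduling_efficiency(allocation):
    """Ratio of 'allocated cells' to 'all cells', barring leave and holidays."""
    allocated = sum(1 for row in allocation for c in row if c == 1)
    applicable = sum(1 for row in allocation for c in row if c is not None)
    return allocated / applicable if applicable else 0.0

allocation = [
    [1, 1, None, 1],   # resource A: on holiday on day 3
    [1, 0, None, 1],   # resource B: idle on day 2
]
efficiency = scheduling_efficiency(allocation)   # 5 of 6 applicable cells
```

An efficiency below 1.0, as here, flags the idle day for resource B; per-resource engagement is the same count restricted to one row.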
Notes Slide Show Outline
1 Test Estimation – 101
2 Agenda
TE – 101: Concepts – Basics of test estimation; Testing overview; Typical requests; Solution approach; Test effort composition; Scheduling; Assumptions / Dependencies / Risks; Decoding requests; Test estimation guidelines; Functional / requirements specifications / use cases classification; Test cases classification; Reusability factor; Productivity improvements; Onsite-offshore activities; Productivity; Estimation approach.
TE – 101: Hands-on – Building estimation sheets; Q&A / Clarifications; References.
3 Basics of test estimation
Basics of test estimation – 3 hrs. Test estimation techniques – 3 hrs.
4 Analysts view on testing
5 Outsourcing testing related activities
Application Development & Maintenance – Pre-production and post-production test activities.
Maintenance & Support – Bug fix testing and regression testing.
Centers of Excellence – Performance, Migration, Package Application, UAT, …
Centralized Testing Service – Across business lines / testing areas.
6 Typical requests
Generic requests: Functional / system testing; Regression testing; Test automation; Performance testing.
Specialized requests: Localization / globalization testing; Certification / compliance testing; Security testing.
7 Sample requests
“We have around 5000+ applications to be Vista certified and tested on Connect rollout sometime during 2008”
“Based on the Testing Coordination meeting yesterday, I have gone through Identity and Access management features from different vendors (Sun, IBM, Oracle, Microsoft etc.) and am wondering do we have any SMEs from Wipro to join the call on Friday”
“Attaching herewith the training manual of the Siebel application which needs testing. Client is expecting to start testing activities by end of Sep 07”
“Please find attached docs for DUNS# mod 10 elimination and the available requirements/design documents. The development changes will be made by individual application teams. Customer is looking at a centralized approach for QA; please send across the estimation – both for coordination effort and for developing test scripts as well”
“We are requested to define load testing as an internal service; they are looking for a proposal for setting up the service”
“As you are aware, ABC's team would like to engage with us. This will be a big piece, so let's put together a creative plan to address it”
8 Solution approach
Application Development & Maintenance: scope of testing engagement; types of tests to be performed; delivery model; execution approach. Prerequisites: system documentation; release plan.
Maintenance & Support: no. of requests to be supported; types of tests to be performed for releases supported; delivery model. Prerequisites: test artifacts; SME time for clarifications.
9 Solution approach…
Center of Excellence: no. of applications to be supported; engagement model; service levels. Prerequisites: application documentation / test artifacts; infrastructure.
Centralized Testing Service: no. of groups / applications to be supported; types of tests to be performed; engagement model. Prerequisites: application documentation / test artifacts; infrastructure.
10 Test effort composition
Application familiarization / Knowledge Acquisition / Knowledge Transition.
Test planning and management; onsite-offshore coordination; feasibility analysis [tools / automation / performance]; processes and templates.
Test design: understanding requirements / test cases [regression / automation]; common libraries / functions / framework [automation / performance]; test case writing / scripting.
Reviews / unit testing [automation / performance]; documentation [automation / performance]; test data creation; test execution.
11 Scheduling
Tasks aligned to project timelines; onsite-offshore activities; resource ramp-up / ramp-down; planning and management.
12 Assumptions / Dependencies / Risks
Assumptions – “The many conditions and rules underlying the calculations”. E.g. “All tools required for performance testing will be provided by Customer”.
Dependencies – “The logical relationships between tasks”. E.g. “For accurate performance test results, the environment on which performance tests will be executed should be the same as, or similar in configuration to, the production environment”.
Risks – “The chance of something happening that will have an impact upon objectives”. E.g. “Environment on which performance scripts are generated is different from the performance test environment”.
13 Decoding requests
Approach to servicing the requests: ask the right questions; make the right assumptions; understand dependencies;
identify risks and mitigation.
14 Decoding requests…
Exercise: “We have around 5000+ applications to be Vista certified and tested on Connect rollout sometime during 2008”
15 Functional / requirements specifications / use cases classification
Documentation clarity; business logic; dependency on other req / func / use cases and other systems; prerequisites; test data; testability; application / domain knowledge.
16 Test cases / scripts classification
Documentation clarity; prerequisites; test data requirements; no. of steps; language / tool used, for automation; environment setup requirements; application / domain knowledge;
traceability to one or more requirements.
17 Reusability
For role based tests, most of the test cases can be reused. GUI look & feel and input element [text box, select box] test cases can be generalized and reused across application(s). For browser compatibility, over 90% of test cases can be reused. For localization / globalization, most of the test cases can be reused. Common actions and logic can be separated and made into libraries for better reuse – automation. With structured test case documentation, many steps can be reused for generating other test cases.
18 Productivity improvements
Proper sequencing of execution / batching. Use of common functions / libraries in automating scripts. In the long run, due to system familiarization, productivity should improve – test case writing, execution, automation scripting.
19 Onsite – Offshore activities
20 Productivity
Knowledge Acquisition Phase: 3 – 8 weeks; onsite-offshore ratio 30% – 70%. Assessment – process / automation feasibility / tool evaluation: 2 – 8 weeks. Manual – test case writing; test execution. Automated – test scripting; manual test case enhancement.
21 Estimation Approach
Classify req / func / use cases / screens based on complexity. Use the multiplicity factor and convert to test cases. For automation, consider effort for documenting unclear test cases. Apply productivity norms and compute test effort. Assume a reusability factor and productivity improvements over time. Complete resource loading, with an appropriate no. of test leads / managers. Typical onsite-offshore resource mix is 30% – 70% respectively.
22 Building estimation sheets
Building estimation sheets – 5 hrs. References – 1 hr.
23 Estimate: Application Development
Web based application. Input – 45 use cases. Project schedule – 8 weeks. Tests – Functional / System / End-to-end. Execution – Onsite-Offshore.
24 Estimate: Application Maintenance
Web based application. Input – 20 CRs. Project schedule – 4 weeks. Tests – Bug fix and Regression. Execution – Onsite-Offshore.
25 Estimate: Automation
COTS application. Input – ~3000 manual test cases. Project schedule – 24 weeks. Test – Automation of manual test cases. Execution model – Onsite-Offshore.
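The estimation approach in slide 21 (classify by complexity, apply a multiplicity factor, then productivity norms and a reusability factor) can be sketched mechanically. The multiplicity and productivity figures below are illustrative assumptions, not the deck's calibrated norms:

```python
# Illustrative (hypothetical) norms -- a real project would calibrate these.
MULTIPLICITY = {"simple": 3, "medium": 5, "complex": 8}   # test cases per use case
WRITE_PER_HOUR = 4.0       # manual test cases written per hour
EXECUTE_PER_HOUR = 6.0     # manual test cases executed per hour

def estimate_effort(use_case_counts, reuse_factor=0.2):
    """Convert classified use cases to test cases, then to person-hours.
    reuse_factor is the assumed fraction of test cases reusable as-is."""
    test_cases = sum(MULTIPLICITY[c] * n for c, n in use_case_counts.items())
    fresh = test_cases * (1 - reuse_factor)      # only fresh cases are written
    writing = fresh / WRITE_PER_HOUR
    execution = test_cases / EXECUTE_PER_HOUR    # reused cases still get executed
    return test_cases, writing + execution

# E.g. the 45-use-case web application of slide 23, split by complexity:
cases, hours = estimate_effort({"simple": 20, "medium": 15, "complex": 10})
```

Resource loading then spreads the resulting hours over the project schedule with the 30% – 70% onsite-offshore split.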
26 Estimation for Manual Testing for a Web Module – Assumptions; Approach; Dependencies; Functions; Calculations
27 Estimation – Module and Assumptions
28 Estimation – Approach / Dependencies / Functions
29 Estimation – Activity with Effort
30 Factors Influencing Test Estimation – Manual Tests
31 Sample Effort Estimate – Manual Tests
32 Sample Effort Estimate – Activities and TC Execution
33 JPMC – Functional Test Automation
34 JPMC – Activity Wise Effort
35 JPMC – Requirements and Dependencies
36 JPMC – Performance Testing Estimate
37 JPMC – Load Profiles
38 JPMC – Key Scenarios
39 JPMC – Other Scenarios
40 JPMC – Consolidated Effort Estimate
41 JPMC – Requirements and Dependencies
42 JPMC – Total Effort – Functional Vs Performance
43 Resources Loading Estimation – Data and Assumptions
44 Resources Loading
45 Resources Loading – Manual
46 Resource Loading – Factors and Assumptions
47 Resources Loading – Activities vs. Effort
48 Resources Loading – ROI
49 Resource Load for Manual Testing
50 Resources Loading – Costing / Requirement
51 Cost Benefit – ROI
52 Test Automation – Data & Assumptions
53 Test Automation – Sample Effort vs. Assumptions
54 Test Automation – Manual Activities vs. Execution
55 Test Automation – Factors vs. Assumptions
56 Test Automation – Activities vs. Effort (Manual)
57 Test Automation – ROI Calculation
58 Performance Test Estimations – Assumptions
59 Performance Test – System Familiarization
60 Performance Test – Scripting and System Familiarization
61 References
Estimation techniques: Collective Thought; Delphi; Framework Method; Collective Thought – Advanced; Delphi – Advanced; Framework – Advanced; Hybrid; Test Estimation Tool; Use Case Estimation Techniques.
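The ROI worksheet behind the "Test Automation – ROI Calculation" slide is not reproduced in this outline; a common formulation (an assumption here, with illustrative figures) compares cumulative manual execution cost against the script-building investment plus automated runs:

```python
# Hedged sketch: figures are illustrative, not the deck's worksheet values.
def automation_roi(cycles, manual_hours_per_cycle, build_hours,
                   automated_hours_per_cycle, hourly_rate=1.0):
    """ROI = (manual cost avoided - automated cost) / automation investment."""
    manual_cost = cycles * manual_hours_per_cycle * hourly_rate
    automated_cost = (build_hours + cycles * automated_hours_per_cycle) * hourly_rate
    investment = build_hours * hourly_rate
    return (manual_cost - automated_cost) / investment

# 10 regression cycles at 80 h manual each; 300 h to build scripts, 8 h per automated run.
roi = automation_roi(10, 80, 300, 8)
```

With these figures the investment pays back well within the 10 cycles; with few cycles the ROI goes negative, which is why the slides pair ROI with assumptions about release frequency.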
62 Estimation Technique – Collective Thought
In this method, the project team usually does the estimation. It is usually done by some of the senior members in the team, who can meet and interact with each other. Usually estimators use their gut feel and experience from their past projects to arrive at estimates. This method is best used in the following circumstances: senior team members are available to do estimation at the time of the estimation process; team members are able to reach a consensus about the estimate.
63 Estimation Technique – Delphi Method
Like the collective thought method, a moderator would be active in this scenario too, but there are two important differences: experts outside the project team could also be asked to participate in the estimation process; and estimation is obtained from the experts in a confidential manner, so there would not be any interaction between the experts involved. This method can be used under the following circumstances: in most cases, the full team may not be available at the time of the estimation process; there are enough experts available within the organization who can judge the requirements in hand; the project manager is able to find different experts inside and outside the project team to give estimates.
64 Estimation Technique – Framework Method
The estimators use frameworks in order to reduce the variability in estimates between them. Frameworks capture data from past similar projects and thus reduce the dependency on the expert's experience. There are four main parameters which contribute to overall variance in estimation:
1. Variance when two different experts do the estimation at the same time.
2. Variance when the same expert does the estimation at different times.
3. Variance when the same expert does estimation for two different requirements.
4. Variance when experts interpret risk parameters differently and give different ratings for the same risk.
65 Estimation Estimates
There are 3 ways to state an estimate of the execution effort needed for a project: point estimate, interval estimate, upper limit estimate.
Point estimate: a single value estimate. In terms of risks, it is the estimate of effort the project may take most of the time.
Interval estimate: a double value estimate defining an interval qualified by a confidence level.
Upper limit estimate: again a single value estimate, but it specifies an upper limit (rather than an average value) and is qualified by a confidence level.
66 Collective Thought – Advanced
In this case, all the characteristics described for the collective thought method hold true, but the team would come up with 3 estimates (triple estimates), termed optimistic, most likely and pessimistic.
Optimistic estimate (OE) is the effort wherein everything goes fine for the project. In terms of risks, it can be treated as an estimate when most of the identified risks do not occur.
Most likely estimate (ME) is the effort the project may take most of the time, as judged by prior experience. In terms of risks, it can be treated as an estimate after accommodating the risks that occur frequently. A point to note is that the most likely estimate is not an average estimate.
Pessimistic estimate (PE) is the effort when everything goes against the project. In terms of risks, it can be treated as an estimate when most or all of the identified risks do occur.
This method is used when the team has enough insight into the project needs and is able to analyze the risk factors, so that they can arrive at the optimistic, most likely and pessimistic estimates.
67 Delphi – Advanced
In this method of estimation, the moderator gets the individual estimates from different experts; each expert is allowed to give 3 kinds of estimates – optimistic, most likely and pessimistic. Thereafter, the moderator processes these estimates and states his estimate in any of the 3 ways – point, interval or upper limit:
Point estimate: the moderator uses the mean, median or mode.
Interval estimate: the moderator finds the mean and standard deviation from the individual estimates.
Upper limit estimate: e.g. it could be stated with 75% confidence that the effort required for the project will be less than 219.6 person hours.
This method is used when the project manager needs estimates at different confidence levels.
68 Framework – Advanced
In the Framework-Advanced method of estimation, the optimistic estimate is arrived at with an input of the optimistic values for the estimator attributes in the framework; similarly for the most likely and pessimistic values. It is used under the following circumstances: senior team members are available to do estimation at the time of the estimation process; similar projects have been executed earlier within the organization, analysis has already been done on the past data, and a framework has been created.
69 Hybrid Method
In this method, participating experts in the Delphi method are asked to use the framework already available and provide 3 estimates – optimistic, most likely and pessimistic. This means that all the experts would give 3 estimates each. The following tables illustrate how this information will be processed to arrive at the overall mean and overall standard deviation of the estimates.
70 Use Case Estimation – Use Case Estimation Technique
71 Use Case Points with Technical Complexity
72 Use Case – Environmental Complexity
73 Use Case – Unadjusted Use Case Points
74 Use Case – Productivity Factor
75 Test Estimation Tool
76 A Tool Driven Approach for Software Test
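The Delphi – Advanced / Hybrid processing described above (per-expert triple estimates rolled up to an overall mean and standard deviation, then stated as a point, interval or upper-limit estimate) can be sketched as follows. The PERT weighting (O + 4M + P) / 6 and sd (P - O) / 6 are common conventions assumed here, not the deck's exact tables:

```python
from statistics import NormalDist

def three_point_stats(o, m, p):
    """Per-expert mean and sd from optimistic / most likely / pessimistic (PERT convention)."""
    return (o + 4 * m + p) / 6, (p - o) / 6

def combine(estimates):
    """Overall mean/sd across experts, treating their estimates as independent."""
    means, sds = zip(*(three_point_stats(*e) for e in estimates))
    n = len(estimates)
    mean = sum(means) / n
    sd = (sum(s * s for s in sds) ** 0.5) / n   # sd of the average of independent values
    return mean, sd

def upper_limit(mean, sd, confidence=0.75):
    """Upper limit estimate: effort stays below this value with the given confidence."""
    return mean + NormalDist().inv_cdf(confidence) * sd

# Three experts give (O, M, P) person-hour estimates (illustrative figures).
experts = [(120, 180, 300), (150, 200, 320), (130, 190, 280)]
mu, sigma = combine(experts)
ul_75 = upper_limit(mu, sigma, 0.75)
```

A point estimate is then `mu`, an interval estimate is `mu ± z * sigma`, and `ul_75` is the 75%-confidence upper limit in the style of the 219.6 person-hour example above.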
Effort Estimation
77 Agenda
Significance; Current Practices & Issues; Proposed Approach & Tools; Demo; Case Study; Further Evolutions; Questions & Answers.
78 Significance
79 Significance
Software testing plays a significant role in an IT project.
80 Significance
81 Current Practices & Issues
82 Current Practices & Issues
83 Proposed Approach & Tools Demo
Introduction; Scope & Features; Methodology Used; Architectural Overview; Process Flow; Benefits.
84 Introduction
85 Scope & Features
86 Methodology Used
87
88 Architectural Overview of Test Effort Estimator Tool
89
90 Function Points
Function Point Analysis for Test Estimation.
91 Function Point Analysis
Function points as a metric should be technology independent and support the need for estimating, project management, measuring quality, and gathering requirements. Function Point Analysis is extremely useful in estimating projects, communicating functional requirements, managing change of scope, and measuring productivity.
92 Function Point from User Perspective
Function points express the resulting
work-product in terms of functionality from the user's perspective; the tools and technologies used to deliver it are independent. The initial design criteria for function points was to provide a mechanism that both software developers and users could utilize to define functional requirements. One of the primary goals of Function Point Analysis is to evaluate a system's capabilities from a user's point of view. From a user's perspective, a system assists them in doing their job by providing five (5) basic functions. Two of these address the data requirements of an end user and are referred to as Data Functions; the remaining three address the user's need to access data and are referred to as Transactional Functions. Function Point Analysis, if implemented, derives a lot of potential benefits.
93 Components of Function Points
Data Functions: Internal Logical Files; External Interface Files. Transactional Functions: External Inputs; External Outputs; External Inquiries.
94 Data Functions – Internal/External
Internal Logical Files: the first data function allows users to utilize data they are responsible for maintaining. Logical groupings of data in a system, maintained by an end user, are referred to as Internal Logical Files (ILF).
External Interface Files: the second data function a system provides an end user is also related to logical groupings of data. Groupings of data from another system that are used only for reference purposes are defined as External Interface Files (EIF).
95 Transactional Functions – External Input / External Output
The transactional functions address the user's capability to access the data contained in ILFs and EIFs. This capability includes maintaining, inquiring and outputting of data.
External Input: the first transactional function allows a user to maintain Internal Logical Files (ILFs) through the ability to add, change and delete the data. An External Input gives the user the capability to maintain the data in ILFs through adding, changing and deleting its contents.
External Output: the next transactional function gives the user the ability to produce outputs.
The results displayed are derived using data that is maintained and data that is referenced. In function point terminology, the resulting display is called an External Output (EO).
96 Transactional Functions – External Inquiries
The final capability provided to users through a computerized system addresses the requirement to select and display specific data from files. To accomplish this, a user inputs selection information that is used to retrieve data that meets the specific criteria. In this situation there is no manipulation of the data; the resulting output is the direct retrieval of stored information. For example, if a pilot displays terrain clearance data that was previously set, the resulting output is the direct retrieval of stored information. These transactions are referred to as External Inquiries (EQ).
97 Functional Complexity – Adjustment Factors
Two adjustment factors need to be considered in Function Point Analysis while deriving the complexity. The first adjustment factor considers the Functional Complexity for each unique function. Functional complexity is determined based on the combination of data groupings and data elements of a particular function. Each of the five functional components (ILF, EIF, EI, EO and EQ) has its own unique complexity matrix. Unique complexity matrix – refer the example in the notes.
98 Unique Complexity Matrix
Using the examples given above and their appropriate complexity matrices, the function point count for these functions would be:
99 Unadjusted Function Point
The Unadjusted Function Point count is multiplied by the second adjustment factor, called the Value Adjustment Factor. This factor considers the system's technical and operational characteristics: 1. Data Communications 2. Distributed Data Processing 3. Performance 4. Heavily Used Configuration 5. Transaction Rate 6. Online Data Entry 7. End-user Efficiency 8. On-line Update 9. Complex Processing 10. Reusability 11. Installation Ease 12. Operational Ease 13. Multiple Sites 14. Facilitate Change
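The adjustment step just described can be written out directly. The formula VAF = 0.65 + 0.01 × ΣGSC is the standard IFPUG form (the slide's notes are not included in this outline), and the sample ratings are illustrative:

```python
def adjusted_function_points(ufp, gsc_ratings):
    """Adjust an Unadjusted FP count by the Value Adjustment Factor.
    gsc_ratings: the 14 general system characteristics above, each rated 0-5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # VAF ranges from 0.65 to 1.35
    return ufp * vaf

# Illustrative: 100 unadjusted FPs, every characteristic rated "average" (3).
afp = adjusted_function_points(100, [3] * 14)   # VAF = 0.65 + 0.42 = 1.07
```

Since all ratings of 3 give a VAF of 1.07, the adjustment can move a count by at most ±35%, which bounds its influence on any estimate derived from function points.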
Note – for each point, refer the added notes.
100 Approach to Counting Function Points
There are several approaches used to count function points. One approach can be accomplished with minimal documentation, which improves accuracy and efficiency. Examples of documentation are: design specifications; display designs; data requirements (internal and external); descriptions of user interfaces.
101 Benefits of Function Point Analysis
Function Point Analysis benefits include improved project estimating, understanding project and maintenance productivity, managing changing project requirements, and gathering user requirements. Implementation of function point analysis for estimating software projects involves several environmental factors, of which two are considered essential: size of deliverable and delivery rate. Note – refer the notes for the above points.
102 Outputs of FP Analysis
Productivity measurement is a natural output of Function Point Analysis. Since function points are technology independent, they can be used as a vehicle to compare productivity across dissimilar tools and platforms. In addition to delivery productivity, function points can be used to evaluate the support requirements for maintaining systems. Managing change of scope for an in-process project is another key benefit of Function Point Analysis. Communicating functional requirements was the original objective behind the development of function points. Function Point Analysis has proven to be an accurate technique for sizing, documenting and communicating a system's capabilities, including real-time and embedded code systems.
103 FP Counting Process
104 FP Components
105 FP Components / Internal Logical Files / External Interface Files
There are Five Standard FP Components:
ILFs; EIFs; EIs; EOs; EQs.
106 FP – External Inputs / Inquiries / Outputs
External Inputs (EIs): EIs are external data flowing inside across the application boundary. This data may come from a data input screen or another application.
107 FP Components with DET/RET/FTR
FP components have an associated complexity – High / Medium / Low – determined by the number of DETs / RETs / FTRs.
108 FP Counting Process
109 Work Breakdown Structure
110 Definition of a Work Breakdown Structure
A Work Breakdown Structure (WBS) is a tool that defines a project, groups the project's discrete work elements, and organizes and defines the total work scope of the project. A work breakdown structure element may be a product, data, a service, or any combination. The WBS also provides the necessary framework for detailed cost estimating, schedule development and control.
111 WBS as a Tree Structure
A Work Breakdown Structure (WBS) is a tree structure which shows a subdivision of effort required to achieve an objective, which may be a Program, Project or Contract. A WBS may show hardware, product, service, or process orientation. The starting point for a WBS is the end objective, successively subdivided into manageable components in terms of size, duration, and responsibility.
112 WBS as a Cost Factor
A work breakdown structure permits summing of subordinate costs for tasks, materials, etc. into their successively higher level “parent” tasks.
113 WBS Design Principles
The 100% rule is one of the most important principles guiding the development, decomposition and evaluation of the WBS. The rule applies at all levels within the hierarchy: the sum of the work at the “child” level must equal 100% of the work represented by the “parent”. The WBS should not include any work that falls outside the actual scope of the project; it cannot include more than 100% of the work…
114 WBS – Planned Outcomes, Not Planned Actions
The best way to adhere to the 100% Rule is to define WBS elements in terms of outcomes or results. The WBS is organized around the primary products of the project (or planned outcomes) instead of the work needed to produce the products (planned actions). For new product development projects, the most common technique to ensure an outcome-oriented WBS is to use a product breakdown structure. Work breakdown structures that subdivide work by project phases (e.g. Preliminary Design Phase, Critical Design Phase) must ensure that phases are clearly separated by deliverables; the WBS is also used in defining entry and exit criteria (e.g. an approved Preliminary Design Review document, or an approved Critical Design Review document). With an action-oriented, process-oriented WBS instead of planned outcomes, it becomes difficult to manage the phases of the project.
115 WBS as Mutually Exclusive Elements
It is important that there is no overlap in scope definition between two elements of a Work Breakdown Structure (WBS); duplication, redundancy and the resulting miscommunication are thereby eliminated. If the WBS element names are ambiguous, a WBS dictionary can help clarify the distinctions between WBS elements. The WBS Dictionary describes each component of the WBS with milestones, deliverables, activities, scope, dates, resources, costs and quality.
116 WBS as Level of Details
The level of detail in a WBS is basically decided by several heuristics or rules of thumb used when determining the appropriate duration of activities. The first is the “80 hour rule”: no single activity or group of activities producing a single deliverable should be more than 80 person hours long. The second rule of thumb is that no activity or series of activities should be longer than a single reporting period. The last heuristic is the “if it makes sense” rule: applying it, one can use common sense when setting the duration of a single activity or group of activities necessary to produce a deliverable defined by the WBS.
117 WBS as a Work Package
A work package at the activity level is a task that: can be realistically and confidently estimated; makes no sense practically to break down any further; can be completed in accordance with one of the heuristics defined above; produces a deliverable which is measurable; and forms a unique package of work which can be outsourced or contracted out.
118 WBS Coding Scheme
A coding scheme also helps WBS elements to be recognized in any written context. A terminal element is the lowest element (activity or deliverable) in a work breakdown structure; it is not further subdivided. It is recommended that WBS design be initiated with interactive software (i.e. a spreadsheet) that allows automatic rolling up of point values. The example is in the next slide and the details are in the notes.
119 An Example of WBS Construction
The WBS construction technique employing the 100% Rule: the figure shows a Work Breakdown Structure (WBS) construction technique that demonstrates the 100% Rule quantitatively.
120 WBS – Drawbacks
A WBS is not a project plan or a project schedule, and it is not a chronological listing. A WBS is not an organizational hierarchy. A WBS should be outcome-oriented and not prescriptive of methods. A WBS is not a logic model, nor is it a strategy map.
121 UCP Counting Procedure
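The 100% Rule described above lends itself to a mechanical check: roll each parent up and compare it with the sum of its children, at every level. A small sketch with an illustrative (hypothetical) product-oriented WBS:

```python
def check_100_percent(node):
    """True iff, at every level, the children's work sums to exactly the parent's work."""
    children = node.get("children", [])
    if not children:
        return True   # a terminal element is not further subdivided
    total = sum(c["work"] for c in children)
    return total == node["work"] and all(check_100_percent(c) for c in children)

# Hypothetical product breakdown; "work" is % of project scope (or points to roll up).
wbs = {
    "name": "Bicycle", "work": 100,
    "children": [
        {"name": "Frame", "work": 40},
        {"name": "Wheels", "work": 35,
         "children": [{"name": "Front wheel", "work": 17},
                      {"name": "Rear wheel", "work": 18}]},
        {"name": "Assembly", "work": 25},
    ],
}
assert check_100_percent(wbs)
```

This is exactly the "automatic rolling up of point values" the coding-scheme slide recommends doing in a spreadsheet.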
122 UCP Counting Process
123 Use Case Standard Template
124 UCP Counting Process
125 Benefits
126 Case Study
127 Case Study
128 Further Evolutions
129 Further Evolutions
130 Questions & Answers
131 Thank You
Test Metrics
----------------
1) SDLC phases – Requirement, Design, Construction and Testing.
2) S/w metrics are measures used to quantify status or results.
3) Test metrics include data regarding testing, defect tracking and software performance.
4) A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
5) A process metric is a metric used to measure characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system.
6) A product metric measures characteristics of the documentation and code.
7) Objective measures are obtained by counting; objective data is hard data, such as defects, hours worked, and completed deliverables.
8) Subjective data normally has to be calculated. It is a person's perception of a product or activity, e.g. how easy it is to use and the skill level needed to execute the system.
9) Subjective measures are more important than objective measures.
10) A good metric should have the following characteristics: reliability, validity, timeliness, ease of use, simplicity, and calibration.
11) Metrics used by testers:
a. Metrics unique to test: includes defect removal efficiency, defect density and mean time to failure. Examples:
Defect removal efficiency – the percentage of total defects occurring in a phase or activity removed by the end of that activity.
Defect density – the number of defects in a particular product.
Mean time to failure – the average operational time it takes before a software system fails.
b. Complexity measurements: quantitative values, accumulated by a predetermined method, which measure the complexity of a software product. These can also be used to measure software testing productivity. Examples:
Size of module/unit (larger modules/units are considered more complex).
Logic complexity – the number of opportunities to branch/transfer within a single module.
Documentation complexity – the difficulty level in reading documentation, usually expressed as an academic grade level.
c. Project metrics: includes status of the project including milestones, budget and schedule variance, and project scope.
d. Size measurements: includes software size of software systems. Sizing is important in normalizing data for comparison to other projects. Examples of size metrics:
KLOC – thousand lines of code, used primarily with statement-level languages.
Function points – a defined unit of size for software.
Pages or words of documentation.
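The metrics unique to test in 11a reduce to simple ratios; a worked sketch with illustrative numbers:

```python
def defect_removal_efficiency(removed_in_phase, escaped):
    """Percentage of the defects present in a phase that were removed by its end."""
    return 100.0 * removed_in_phase / (removed_in_phase + escaped)

def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

dre = defect_removal_efficiency(90, 10)   # 90 caught in test, 10 escaped -> 90.0 %
density = defect_density(45, 30.0)        # 45 defects in 30 KLOC -> 1.5 defects/KLOC
```

Note how the size measurements in 11d supply the denominator: density only becomes comparable across projects once both are normalized by KLOC or function points.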
e. Product measures: measures of a product's attributes, such as performance, reliability and usability.
f. Productivity metrics: effectiveness of test execution, e.g., the cost of testing.
g. Defect metrics: values associated with numbers or types of defects, usually related to severity, such as "defects/1000 lines of code" or "defects/100 function points". Example: defect density.
h. Satisfaction metrics: assessment of customers on the effectiveness and efficiency of testing. Examples: ease of use, acceptance criteria met, customer complaints, uncorrected defects, unresolved CRs, subjective assessment, user participation (producing software within budget and on time), and the amount of testing done using automation tools.
12) Requirement document review effort: the effort required to review the requirement specification document against the specified client requirements.
13) A person hour (man-hour) is the amount of work performed by an average worker in one hour; effort is calculated in person hours.
14) A test strategy is the result of balancing quality risks and project resources, including time, customer expectations and project constraints.
15) Good strategy: given the product's nature, the approach that will give an optimum yield of bugs and minimize the risk of product failure.
17) A test plan can be an overall test plan or a level test plan.
18) The percentage of requirements covered by planned/ad-hoc testing with the designed test cases can be calculated as: (Total Number of Requirements Covered / Total Number of Requirements in the Requirement Document) * 100.
19) An RTM traces all the requirements from their genesis through design, development and testing.
20) Requirements can be classified into 3 categories: 1) in scope, 2) not in scope, 3) need clarification.
21) Requirement in scope: clearly specified and well understood by all stakeholders; the customer wants these requirements implemented and tested for quality.
22) Requirement not in scope: may or may not be applicable to the project, but the customer does not want those requirements implemented or tested. These are sometimes called CRs.
23) Need clarification: not clear to one or more stakeholders of the project. Clarification is needed from the customer to get a clear idea of the requirement and to plan how it can be implemented and tested.
24) Test Case Creation Productivity = Test Cases written / Effort.
For manual testing (design of test cases): Manual TC Creation Productivity = Manual Test Cases written / Effort.
For automation testing (design of test scripts): Automation Script Creation Productivity = Scripts written / Effort.
The units of measure are: 1. numbers per hour, 2. numbers per day, 3. numbers per week, etc.
25) Deviations or differences found between the expected and actual results of the test cases are called test case review errors.
26) The rate of errors that arise in the test suite is the Test Case Review Error Rate: TC Review Error Rate = No. of Errors / No. of Test Cases.
27) TC Review Productivity = No. of Test Cases reviewed / Review Effort.
28) Defect Rejection Ratio = Number of defects rejected / Total number of defects.
29) Manual Test Execution Productivity = Manual Test Cases executed / Hour.
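The productivity and review formulas above are plain ratios; a small Python sketch makes the units explicit (names and sample numbers are illustrative, not from the source):

```python
def tc_creation_productivity(test_cases_written, effort_hours):
    """Test Case Creation Productivity = Test Cases written / Effort (per hour)."""
    return test_cases_written / effort_hours

def tc_review_error_rate(errors, test_cases):
    """TC Review Error Rate = No. of Errors / No. of Test Cases."""
    return errors / test_cases

def defect_rejection_ratio(rejected, total_defects):
    """Defect Rejection Ratio = defects rejected / total defects."""
    return rejected / total_defects

def requirements_coverage_pct(covered, total):
    """(Requirements covered / Total requirements) * 100."""
    return 100.0 * covered / total

print(tc_creation_productivity(40, 8))     # 5.0 test cases per hour
print(tc_review_error_rate(6, 120))        # 0.05
print(defect_rejection_ratio(5, 50))       # 0.1
print(requirements_coverage_pct(90, 100))  # 90.0
```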
30) Automation Test Execution Productivity = Automated Test Cases executed / Hour. (The unit of measure is numbers per hour.)
31) Defect trend reports show defect counts by status (New, Open, or Closed) as a function of time.
32) New Test Case Efficiency = Cumulative Defects found by New Test Cases / Cumulative New Test Cases Executed.
33) Test Coverage % (Functional) = (No. of functions covered / Total functions) * 100.
34) Test Case Passing Rate = No. of Test Cases passed / No. of Test Cases executed.
35) Test Schedule Deviation Ratio = (Actual end date – Planned end date) / Planned end date * 100.
36) Effort Deviation Ratio = (Actual effort – Planned effort) / Planned effort * 100.
37) Defect Turnaround Time = Defect Closed Day – Defect Open Day. This is also called Defect Ageing.

StORM Tools
1) StORM – Statistics Operations Research Matrix tools.
2) The StORM tools comprise:
o Wipro CoDeC Tool – Complexity, Dependency, Change impact (Test Planning)
o Wipro OA Tool – Orthogonal Array (Test Designing)
o Wipro DFA Tool – Defect Flow Analysis (Test Reporting)
3) StORM tools are used in all phases of the STLC (Software Testing Life Cycle). The CoDeC tool is mainly used during the planning phase, the OA tool during the test design phase, and the DFA tool during the test reporting phase.
4) Tools used for each process:
CoDeC Tool:
1. CoDeC-SCE, used for effort estimation (System Complexity Estimator): analyzes and estimates the effort distribution required across modules.
2. CoDeC-DSM, used for test sequencing (Dependency Structure Matrix): determines the sequence of test execution, determines which modules are executed serially and which in parallel, and avoids dependency clashes during test execution of modules.
3. CoDeC-SCIM, used for maintenance testing (System Change Impact Matrix): estimates the change impact on the system due to Change Requests (CRs) and the relative effort distribution across different CRs.
OA Tool: Orthogonal Array (test suite optimization):
o Optimizes the test suite and eliminates redundant test cases.
o Reduces effort during test case development and test case execution.
o Reduces cycle time, improves productivity and improves test coverage.
DFA Tool:
1. Metrics Analysis (Test Reporting): systematically analyzes various metrics in testing projects and standardizes reports by providing graphical and tabular representations of defect trends, defect priority analysis, test case productivity, pass rate, etc.
2. Reliability Analysis (Reliability Estimation): estimates residual defects in the system and indicates whether to continue or stop testing.
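The deviation and ageing metrics above come straight from plan and tracking data; a sketch with illustrative names and dates (the schedule ratio is expressed over elapsed days, which keeps the source formula well defined):

```python
from datetime import date

def effort_deviation_ratio(actual_effort, planned_effort):
    """(Actual effort - Planned effort) / Planned effort * 100."""
    return 100.0 * (actual_effort - planned_effort) / planned_effort

def schedule_deviation_ratio(actual_days, planned_days):
    """(Actual duration - Planned duration) / Planned duration * 100."""
    return 100.0 * (actual_days - planned_days) / planned_days

def defect_turnaround_days(opened, closed):
    """Defect ageing: days between a defect being opened and closed."""
    return (closed - opened).days

print(effort_deviation_ratio(110, 100))  # 10.0 (percent over plan)
print(schedule_deviation_ratio(44, 40))  # 10.0
print(defect_turnaround_days(date(2024, 3, 1), date(2024, 3, 8)))  # 7
```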
5) Wipro CoDeC Tool – module dependency analysis. The CoDeC tool captures information related to the number of modules, the strength of interdependency among the modules, and the complexity of each individual module, in the form of a matrix. The tool helps in decision making across the different phases of a testing lifecycle.
6) DSM – the Dependency Structure Matrix is used mainly for test sequencing. It determines the sequence of execution and tells which modules have to be kept under a single team. In limited-resource path analysis, it determines the modules that have to be tested in parallel without any dependency clash. DSM lists the modules that are cyclically dependent, assigns each such group a block number (reference number), and lists them in the cyclic block output. The execution time of a cyclic block is taken as the sum of the execution times of the individual modules of the block.
7) SCE – the System Complexity Estimator focuses on the distribution of effort across all modules while testing. Cohesion acts as an input to calculate the effort required for each module; cohesion is the number of factors influencing complexity in a module. Complexity can be identified using different external techniques: KLOC, Function Points (FP) and Fine Function Points are examples. The Module Impact Index table helps a tester calculate testing effort, and the Factor Impact Index table helps a developer calculate development effort.
8) SCIM – System Change Impact Matrix analysis distributes testing effort across modules after change requests are received during maintenance.
9) Useful reports that DSM will generate:
o Module dependency list.
o Level analysis: the Leveling table outputs the modules according to their levels for early sequencing, which makes it easy to identify which level a given module is in. The Partitioning table maps the relationship of modules to the corresponding levels. Tagging records which level a particular module is tagged to.
o Cyclic blocks: outputs of modules that have interdependency.
o Float Analysis table: a collective study of how early we have to start testing a module, how late we can start testing a module, and the same for finishing it. How much we can relax between testing of all modules is defined in "as late as possible" sequencing.
o Value Thread output: gives the end-to-end business process in the system.
o Schedule Info: gives the execution time assigned to each module of the flow graph in the input file.
10) The Orthogonal Array Testing Strategy (OATS) is a systematic, statistical way of testing pair-wise interactions during the test design phase.
11) Exhaustive testing is often impractical and in most instances impossible to achieve in the time allocated. If we were to test all combinations, we could find ourselves executing tests for infeasible combinations and running short of time and budget. OATS ensures that we stay within quality, budget and time, and it also helps with proper risk management.
12) Dr. Genichi Taguchi was one of the first proponents of orthogonal arrays in test design.
13) An error occurring because of one single input is called a single mode defect. An error occurring because of a two-input combination is called a double mode defect.
14) An error occurring because of a combination of more than two inputs is called a triple/multi mode defect.
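The pair-wise idea behind OATS can be illustrated with a small greedy all-pairs selector. This is a conceptual sketch only, not the Wipro OA tool; the factor and level names are invented:

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy all-pairs selection: repeatedly add the full combination that
    covers the most still-uncovered (factor, level, factor, level) pairs."""
    names = list(factors)
    uncovered = {(f1, v1, f2, v2)
                 for f1, f2 in combinations(names, 2)
                 for v1 in factors[f1] for v2 in factors[f2]}
    candidates = [dict(zip(names, combo)) for combo in product(*factors.values())]
    suite = []
    while uncovered:
        def gain(case):
            return sum(1 for f1, v1, f2, v2 in uncovered
                       if case[f1] == v1 and case[f2] == v2)
        best = max(candidates, key=gain)
        if gain(best) == 0:
            break  # no candidate covers a remaining pair (cannot happen here)
        suite.append(best)
        uncovered = {(f1, v1, f2, v2) for f1, v1, f2, v2 in uncovered
                     if not (best[f1] == v1 and best[f2] == v2)}
    return suite

# 3 factors with 3 x 3 x 2 levels: exhaustive testing needs 18 combinations
factors = {"browser": ["IE", "Firefox", "Chrome"],
           "os": ["Windows", "Linux", "Mac"],
           "locale": ["en", "de"]}
suite = pairwise_suite(factors)
print(len(list(product(*factors.values()))))  # 18 exhaustive combinations
print(len(suite))  # far fewer cases, yet every double mode combination is covered
```

Single mode coverage follows automatically, since every level of every factor appears in at least one selected case.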
15) OA application process:
• Analyze the existing system
• Identify the factors and levels of the inputs
• Apply OA and get the optimized combinations
• Check whether the existing test suite is sufficient
• Add the non-existing test cases
• Add test cases based on your past experience
• Finally we get an optimized suite that covers all single and double mode combinations 100%
Once all the factors and levels are abstracted from the application, they act as input to the Wipro OA tool to generate the optimized number of test cases.
16) The DFA tool has 2 features: Metrics Analysis and Reliability Analysis.
17) Metrics Analysis mainly addresses test reporting: which reports are to be generated. Every single graphical representation requires a table to be generated independently, and team members usually procrastinate, undermining the effects this can create on project management.
18) Reliability Analysis mainly addresses reliability estimation and release: whether to release the product or continue testing, and releasing the product to the field with an estimated confidence level with respect to the residual defects. It is also used in maintenance testing.
19) ReAL Tool: Resource Allocation Tool, an automatic project scheduling tool. Module dependencies and constraints are considered, and leave calendars are included in schedule generation.
20) Features of the ReAL tool:
• Web-based tool
• Excel-based standard input template
• Output: Excel-based or HTML
• Project schedule creation within minutes, at the beginning of the project or at any time during the project
• Optimal allocation of multi-skilled resources
• Inclusion of resource leaves and organization holidays
• Resource release dates, module delivery dates and module dependencies feed into schedule creation
• Recommendations for an optimized skill solution
• Output can be fed into a Microsoft Project Plan for monitoring and tracking, at any time during the project life cycle

Effort Estimation
The technique or methodology used for estimating should be a) reliable and b) consistent.
Effort Estimation Process

Collective Thought Method: used under the following circumstances:
• Senior team members are available to do estimation at the time of the estimation process.
• They can meet and interact with each other.
• Team members are able to reach a consensus about the estimate.

Collective Thought Advanced: can be used under the following circumstances:
• Senior team members are available to do estimation at the time of the estimation process.
• The team has enough insight into the project needs and is able to analyze the risk factors so that they can arrive at optimistic, most likely and pessimistic estimates.
• The project/business manager needs estimates for different confidence levels.

Delphi Method:
• Experts outside the project team can also be asked to participate in the estimation process.
• The project manager is able to find different experts inside and outside the project team to give estimates.
• Estimation is obtained from the experts in a confidential manner; there is no interaction between the experts involved in the estimation process.
• There are enough experts available within the organization who can judge the requirements in hand.

Framework Method: different estimators have varied experiences, which may be reflected in their estimates. This brings wide variance into the estimates; frameworks help to reduce this variance. Used when similar projects have been executed earlier in the organization and somebody has analyzed the past data and already come up with a framework. Even though the use of a framework is optional, it should be used if one is available.

3 types of inputs are fed into frameworks:
• Requirement attributes
• Framework constants
• Estimator attributes

Estimation Statements: there are 3 ways to state an estimate of the execution effort needed for a project:
• Point estimate – a single value estimate. Usually the point estimate is the average estimate arrived at through the collective thought, Delphi or framework method.
• Interval estimate – a double value estimate defining an interval, qualified by a confidence level.
• Upper limit estimate – again a single value estimate, but it specifies an upper limit (rather than an average value) and is qualified by a confidence level.
The confidence level is the probability that the actual effort will be within the specified interval of the estimate. Confidence levels are used in interval and upper limit estimates.

4 main criteria determine the business criticality of a project:
• Existing client – yes/no
• Term of association with the client – long/medium/short/zero
• Nature of competition – strong/medium/weak
• Nature of the project – Fixed Price Project (FPP) / Time & Material (T&M)
The average estimate and standard deviation are calculated using the following equations:
Average estimate AE = (OE + 4*ME + PE) / 6
Standard deviation SD = (PE – OE) / 6

Delphi Advanced: in this method of estimation, each participating expert gives 3 estimates – optimistic, most likely and pessimistic. The moderator gets the individual estimates from the different experts, processes them as above, and states his estimate using any of the 3 estimation statements (point, interval or upper limit). Use of a framework is optional and depends on its availability. This method is used under the following circumstances:
• Senior team members are available to do estimation at the time of the estimation process.
• The team has enough insight into the project needs and is able to analyze the risk factors so that they can arrive at optimistic, most likely and pessimistic estimates.
• The project manager needs estimates at different confidence levels.
• Similar projects were executed earlier within the organization, analysis has already been done on the past data, and a framework has been created.

Framework Advanced: in this method, a framework is used by an expert to arrive at an estimate, and the expert is allowed to give 3 kinds of estimates – point, interval or upper limit. As discussed earlier, there is a category of inputs called 'estimator attributes' in a framework that adds variability to the estimates.

Hybrid Method: in this method, the participating experts in the Delphi method are asked to use the framework already available and provide 3 estimates – optimistic, most likely and pessimistic. This means that all the experts give 3 estimates each.

Summary of estimation methods and their characteristics:
• Collective Thought – meeting
• Collective Thought Advanced – meeting, triple estimate, confidence level
• Delphi – independent experts
• Delphi Advanced – independent experts, confidence level
• Framework – framework
• Framework Advanced – framework, triple estimate, confidence level
• Hybrid – independent experts, framework, triple estimate, confidence level

Test Estimation
Test effort composition:
(*) Application familiarization / knowledge acquisition / knowledge transition.
(*) Parts of the test effort composition: test design, test plan, test execution and test data creation.
(*) The Productivity Factor (PF) is the ratio of the number of man hours per use case point, based on past projects.
(*) Estimation should also consider factors that can have a strong influence on the Application Under Test (AUT): technical complexity and environmental complexity.
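The AE/SD processing of an expert's triple estimate is the standard three-point (PERT-style) calculation; a sketch with hypothetical person-hour figures:

```python
def three_point_estimate(oe, me, pe):
    """Process one expert's optimistic / most likely / pessimistic triple:
    AE = (OE + 4*ME + PE) / 6, SD = (PE - OE) / 6."""
    ae = (oe + 4 * me + pe) / 6
    sd = (pe - oe) / 6
    return ae, sd

# Hypothetical triple: 80 / 100 / 140 person hours
ae, sd = three_point_estimate(80, 100, 140)
print(ae)       # 103.33... person hours (point estimate)
print(sd)       # 10.0
print(ae + sd)  # an upper-limit style figure, roughly one SD above the average
```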
(*) The technique or methodology used for estimating should be: reliable and consistent.

Test Productivity Calculator (TPC):
• Calculates the Test Productivity constant in units of UCPs per person hour or FPs per person hour.
• Helps to baseline the Test Productivity Constant over a set of completed projects of a given domain.

Test Effort Estimator (TEE):
• Computes test effort estimates for a new project of a given domain.
• The output is the net effort estimate (combined effort for all defined testing activities in the project).

FP counting and UCP counting are proven to be more consistent and reliable methods for measuring the "size" of a given software system/application. Both are independent of:
• The underlying technology of the application
• Programming language
• Development methodology
• Hardware platform / competency

FP count:
• Characterizes/measures the system from a functional perspective.
• Calculated based on the EIs, EOs, EQs, ILFs and EIFs in the system.
• Extracted from the application/system's technical design (software architecture / HLD / LLD).

UCP count:
• Characterizes/measures the system from a requirements perspective.
• Calculated based on the use cases and actors in the system.
• Extracted from the application/system's Business Requirements Specifications / System Requirements Specifications / Functional Requirements Specifications.

Components of function points:
§ Data functions: Internal Logical Files, External Interface Files.
§ Transactional functions: External Inputs, External Outputs, External Inquiries.

Data functions – internal/external:
§ Internal Logical Files: the first data function allows users to utilize data they are responsible for maintaining. Logical groupings of data in a system, maintained by an end user, are referred to as Internal Logical Files (ILFs).
§ External Interface Files: the second data function a system provides an end user is also related to logical groupings of data. Groupings of data from another system that are used only for reference purposes are defined as External Interface Files (EIFs).
Transactional functions – External Input / External Output
The transactional functions address the user's capability to access the data contained in ILFs and EIFs. This capability includes maintaining, inquiring on and outputting data.
External Input: the first transactional function allows a user to maintain Internal Logical Files (ILFs) through the ability to add, change and delete data. An External Input (EI) gives the user the capability to maintain the data in ILFs by adding, changing and deleting its contents.
External Output: the next transactional function gives the user the ability to produce outputs. The results displayed are derived using data that is maintained and data that is referenced. In function point terminology the resulting display is called an External Output (EO).
The Unadjusted Function Point count is multiplied by a second adjustment factor called the Value Adjustment Factor. There are five standard FP components: ILFs, EIFs, EIs, EOs and EQs.
(*) There are several approaches used to count function points. One approach can be accomplished with minimal documentation, which improves accuracy and efficiency. Examples of documentation are:
o Design specifications
o Display designs
o Data requirements (internal and external)
o Descriptions of user interfaces
(*) Productivity measurement is a natural output of Function Point Analysis.
o Function points can be used to evaluate the support requirements for maintaining systems.
o Managing change of scope for an in-process project is another key benefit of Function Point Analysis.
o Communicating functional requirements was the original objective behind the development of function points.
o Function Point Analysis has proven to be an accurate technique for sizing, documenting and communicating a system's capabilities, including real-time and embedded code systems.
Internal Logical Files (ILFs): ILFs represent the data stored and maintained within the boundary of the application; they are the data functions maintained by the application.
External Interface Files (EIFs): EIFs represent data that the application uses/references but does not maintain. The data resides entirely outside the application boundary and is maintained by another application's external inputs.
External Inputs (EIs): EIs are external data flowing in across the application boundary. This data may come from a data input screen or from another application.
External Inquiries (EQs): EQs are data information that crosses the boundary from inside to outside the application. EQs result in data retrieval from one or more internal logical files and external interface files.
External Outputs (EOs): EOs are derived data flowing out across the application boundary. This data can be reports or output files sent to other applications.

Work Breakdown Structure (WBS) is a tool that defines a project, groups the project's discrete work elements, and organizes and defines the total work scope of the project. A WBS element may be a product, data, a service, or any combination of these. The WBS also provides the necessary framework for detailed cost estimating, schedule development and control.
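The UFP-times-VAF calculation described above can be sketched with the standard IFPUG-style weights (low / average / high per component); the component counts below are hypothetical:

```python
# Standard IFPUG weights by component and complexity (low, average, high)
FP_WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps component -> (n_low, n_avg, n_high) occurrences."""
    return sum(n * w for comp, ns in counts.items()
               for n, w in zip(ns, FP_WEIGHTS[comp]))

def adjusted_fp(ufp, total_degree_of_influence):
    """VAF = 0.65 + 0.01 * TDI, where TDI sums the 14 general system
    characteristics, each rated 0-5."""
    return ufp * (0.65 + 0.01 * total_degree_of_influence)

# Hypothetical application
counts = {"EI": (3, 2, 0), "EO": (2, 1, 0), "EQ": (1, 0, 0),
          "ILF": (1, 1, 0), "EIF": (1, 0, 0)}
ufp = unadjusted_fp(counts)
print(ufp)                    # 55 unadjusted function points
print(adjusted_fp(ufp, 35))   # ~55.0 (TDI of 35 gives VAF = 1.00)
```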
L1S Testing Concepts – sample questions
1. StORM is a property tool of: a) Wipro Infotech b) Wipro Technologies c) Wipro BPO d) All of the above
2. Which of these is not part of the StORM tools? a) CoDeC b) Orthogonal c) Defect Flow Analysis d) Test phase tool
3. StORM tools are used in which phase of the software testing life cycle? a) Planning b) Design c) Reporting d) All of the above
4. Cyclical dependency is between: a) Processes b) Phases c) Modules d) None of the above
5. Cyclic blocks are outputs of modules that have: a) Interdependency b) Dependency c) Paths d) Networks
6. The Leveling table outputs the modules according to their levels of: a) Phases b) Dependency c) Early sequencing d) None of the above
7. The Partitioning table maps the relationship of modules with the corresponding: a) Tables b) Rows c) Columns d) Levels
8. OATS ensures that we are within: a) Quality b) Budget c) Time d) All of the above
9. The Float Analysis table gives a collective study on: a) Early start and early finish b) Late start and late finish c) Resource path analysis d) Both a and b
10. In unlimited resource path analysis, cyclic block execution time is taken as the maximum of the execution times of the individual modules in a: a) Segment b) Column c) Sequence d) Block
11. Cohesion complexity can be identified using different external techniques: a) FFP b) FP c) KLOC d) All of the above
12. The Module Impact Index table is helpful for a tester to calculate: a) Test effort b) Factor Impact Index c) KLOC impact d) CRQ impact
13. The Factor Impact Index table is helpful for a: a) Tester b) Developer c) Quality Analyst d) None of the above
14. Which tool in StORM ensures that single and double mode combinations are covered 100% and triple/multi mode combinations are covered less than 100%? a) StORM b) Orthogonal c) CoDeC d) DSM
15. What kind of tool can be used to test all combinations, which would need 3 x 3 x 2 = 18 test cases? a) OATS b) CoDeC c) DSM d) All of the above
16. An aggressive time-to-market schedule requires testers to: a) Reduce cycle time b) Find maximum defects c) Achieve maximum coverage d) All of the above
17. Once all the factors and levels are abstracted from the application, this acts as an input to the Wipro OA tool to generate the optimized number of: a) Defects b) Requirements c) Designs d) Test cases
18. Application familiarization / knowledge acquisition / knowledge transition is a part of: a) Test Plan b) Test Strategy c) Test Estimation d) None of the above
19. The Reusability Factor in a test environment is an activity of: a) Test Strategy b) Test Plan c) Test Execution
20. Which factor is more profitable from the analysis point of view for an offshore model cost? a) Time b) Estimation c) Verification d) Validation
21. Which is not a part of the test effort composition? a) Test documentation b) Test Plan c) Test data creation d) Test execution
22. Application maintenance is often defined as the modification of a software product after: a) A change request is identified b) Build implementation c) Delivery, to correct faults d) Test projects
23. Test case execution belongs to which type of costs? a) Fixed costs b) Variable costs c) Overhead costs d) Both a and b
24. Time taken in minutes for the number of units in the application is an estimation based on: a) Activity b) Effort c) Defects d) Both a and b
25. What is the ROI in test estimation? a) Ratio of saving over the cost of the investment b) Ratio of return over the cost of the investment c) Ratio of increase over the cost of the investment d) Ratio of return on investments
26. What classification of costs does ROI affect in the cost of the test project? a) Overhead b) Variable cost c) Fixed cost d) ROI
27. The collective thought estimation technique is done by which team? a) Project team b) Testing team c) Development team d) All of the above
28. How many ways are there to state an estimate of the execution effort needed for a project? a) Four b) Five c) Three d) Two
29. Under what method of estimate do optimistic, most likely and pessimistic estimates fall? a) Point method b) Classical method c) Interval method d) Upper limit method
30. Which is not an estimation estimate? a) Delphi b) COCOMO c) Cost d) Framework
31. Unadjusted Use Case Points are computed based on two computations: a) UUCW b) UAW c) Both a and b d) TCF
32. Individual use cases are categorized as: a) Simple b) Medium c) Complex d) All of the above
33. The Productivity Factor (PF) is a ratio of the number of man hours per use case point based on: a) Future projects b) Present projects c) Past projects d) None of the above
34. The technique or methodology used for estimating should be: a) Reliable b) Consistent c) Complex d) Both a and b
35. Estimation should also consider factors that can have a strong influence on the: a) Application Under Test b) System Under Test c) Operating system d) Both a and b
36. Metrics specific to testing include: a) Data b) Defects c) Performance d) All of the above
37. The total number of defects remaining at delivery is related to: a) Data b) Subjective measure c) Objective measure d) None of the above
38. Consistency of measurement in a metric is called: a) Validity b) Calibration c) Timeliness d) None of the above
39. Which is not a category of metric? a) Defect b) Size c) Orthogonal array d) Complexity
40. Which metric is not unique to test? a) Defect removal b) Defect density c) Mean time d) Design
41. Which is an example of a complexity measure? a) Defects b) Test cases c) Requirements d) Size and logic
42. What category of metrics includes the assessment by customers of the effectiveness and efficiency of testing? a) Complaints b) Ease of use c) Satisfaction d) None of the above
43. What kind of approach will give an optimum yield of bugs and minimize the risk of product failure? a) Test Strategy b) Requirements c) Satisfaction d) Good Strategy
44. Preparation of test plans can be started as early as possible in the life cycle to make COQ: a) More b) Less c) Relative d) Expensive
45. Which traceability matrix identifies one-to-one, one-to-many and many-to-many relationships? a) Design traceability matrix b) Test requirement matrix c) Requirement traceability matrix d) All of the above
46. Which matrix evolves through the life cycle of the project? a) Planning b) Phases c) Metrics traceability d) Requirement Traceability Matrix
47. A set of requirements can have no test cases, a relationship of: a) None to One b) None to Many c) None to None d) One to None
48. In the test plan procedure, initiating a technical review of the test plan is called: a) Subjective validation b) Objective validation c) V and V model d) Validation
49. The effort for reviewing test plans is calculated in: a) Man hours b) Person hours c) Metric hours d) None of the above
50. Optimization is the process by which bottlenecks are identified and removed by tuning the test cases according to the classification of the: a) Requirements b) Design c) Execution d) Application
Answer Key
Q.No: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50
Correct Answer: D D D C A C D D D D D A B B A D D C A D A C D A D B A C C D C D D C D D D A D C D D C D B C D D B