Test Estimation Using FP-UCP

Published by Pritam Surve on Jun 02, 2012
Test Estimation

Function-dependent (Df)
Df = ((Ue + Uy + I + C) / 16) * U

Df = weighting factor for the function-dependent factors
Ue = user-importance
Uy = usage-intensity
I = interfacing
C = complexity
U = uniformity

Dynamic test point formula
TPf = FPf * Df * Qd

TPf = number of test points assigned to the function
FPf = number of function points assigned to the function
Df = weighting factor for the function-dependent factors
Qd = weighting factor for the dynamic quality characteristics, calculated per characteristic as rating/4 * weighting factor and then added together (explicitly measurable quality characteristics)

Total number of test points
TP = ∑TPf + (FP * Qi) / 500

TP = total number of test points assigned to the system as a whole
∑TPf = sum of the test points assigned to the individual functions (dynamic test points)
FP = total number of function points assigned to the system as a whole (minimum value 500)
Qi = weighting factor for the indirectly measurable quality characteristics

Primary test hours formula
PT = TP * P * E

PT = total number of primary test hours
TP = total number of test points assigned to the system as a whole
P = productivity factor
E = environmental factor

Total number of test hours
Total test hours = PT * PC

PT = primary test hours
PC = planning and control factor


Function-dependent factors (Df)

User importance: the importance of the function relative to the other functions. A useful rule of thumb is that about 25 per cent of functions should be placed in the "high" category, 50 per cent in the "normal" category and 25 per cent in the "low" category.
Rating:
3 Low: the importance of the function relative to the other functions is low.
6 Normal: the importance of the function relative to the other functions is normal.
12 High: the importance of the function relative to the other functions is high.

Usage intensity: the usage intensity has been defined as the frequency with which a certain function is processed by the users and the size of the user group that uses the function. As with user-importance, the usage-intensity is determined at a user-function level.
Rating:
2 Low: the function is only used a few times per day or per week.
4 Normal: the function is used a great many times per day.
12 High: the function is used continuously throughout the day.

Interfacing: interfacing is an expression of the extent to which a modification in a given function affects other parts of the system. The degree of interfacing is determined by ascertaining first the logical data sets (LDSs) which the function in question can modify, then the other functions which access these LDSs. An interface rating is assigned to the function by reference to a table in which the numbers of LDSs affected by the function are ranged vertically and the numbers of other functions accessing the LDSs are ranged horizontally. When working out the number of "other functions" affected, a given function may be counted several times over if it accesses several LDSs, all of which are maintained by the function for which the interfacing calculation is being made. If a function does not modify any LDSs, it is given a low interface rating. A CRUD table is very useful for determining the degree of interfacing. Explanation: L = low interfacing, A = average interfacing, H = high interfacing.
Rating:
2 The degree of interfacing associated with the function is low.
4 The degree of interfacing associated with the function is normal.
8 The degree of interfacing associated with the function is high.
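As an illustration, the table lookup described above can be sketched in Python. The helper names and the banding function are ours, not part of TPA; the L/A/H cells follow the interfacing table given elsewhere in this document (row and column bands 1, 2-5, >5):

```python
def interface_rating(n_lds, n_functions):
    """Look up the L/A/H interfacing band and the corresponding 2/4/8 rating."""
    if n_lds == 0:
        return "L", 2  # a function that modifies no LDSs gets a low rating
    def band(n):
        # Map a count onto the table's bands: 1, 2-5, >5.
        return 0 if n <= 1 else (1 if n <= 5 else 2)
    # Rows: LDSs the function modifies; columns: other functions accessing them.
    table = [["L", "L", "A"],
             ["L", "A", "H"],
             ["A", "H", "H"]]
    letter = table[band(n_lds)][band(n_functions)]
    return letter, {"L": 2, "A": 4, "H": 8}[letter]

print(interface_rating(2, 6))  # ('H', 8)
```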

Complexity:
Rating:
3 The function contains no more than five conditions.
6 The function contains between six and eleven conditions.
12 The function contains more than eleven conditions.

Uniformity: In function point analysis, the term "unique" is applied to the following:
• A function which uses a combination of data sets which is not used by any other input function
• A function which, although it does not use a unique combination of data sets, does use a unique processing technique (e.g. a unique method of updating a data set)
An information system may therefore contain functions that possess a degree of uniformity for test purposes, even though they are regarded as unique in the context of a function point analysis. Under the following circumstances, only 60% of the test points assigned to the function under analysis count towards the system total:
• In the case of a second occurrence of a virtually unique function: in such cases, the test specifications can be largely reused.
• In the case of a clone function: the test specifications can be reused for clone functions.
• In the case of a dummy function (provided that reusable test specifications have already been drawn up for the dummy).
A uniformity factor of 0.6 is assigned in cases of the kinds described above; otherwise a uniformity factor of 1 is assigned.
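With all four ratings and the uniformity factor in hand, Df is a one-line computation. A minimal sketch (the function name is illustrative, not part of TPA):

```python
def df_factor(ue, uy, i, c, uniformity=1.0):
    """Function-dependent weighting factor: Df = ((Ue + Uy + I + C) / 16) * U."""
    return ((ue + uy + i + c) / 16) * uniformity

# Example: normal importance (6), normal usage intensity (4), normal
# interfacing (4), low complexity (3), no uniformity discount (U = 1).
print(df_factor(6, 4, 4, 3))  # 17/16 = 1.0625
```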

Dynamic quality characteristics (Qd): In TPA, four dynamic, explicitly measurable quality characteristics are recognized:
• Suitability
• Security
• Usability (regarding usability no distinction has (yet) been made in sub-characteristics, since there are no usability testing techniques available that have this level of accuracy)
• Efficiency (for the same reason as mentioned at usability, efficiency is not split up into time-behavior and resource-utilization)
The importance of the requirements relating to each quality characteristic is rated; if necessary, this is done separately for each subsystem.
Rating:
0 Quality requirements are not important and are therefore disregarded for test purposes.
3 Quality requirements are relatively unimportant but do need to be taken into consideration for test purposes.
4 Quality requirements are of normal importance. (This rating is generally appropriate where the information system relates to a support process.)
5 Quality requirements are very important. (This rating is generally appropriate where the information system relates to a primary process.)
6 Quality requirements are extremely important.

Dynamic, explicitly measurable quality characteristics (possible ratings 0, 3, 4, 5 or 6 each):
Functionality (weighting 0.75)
Security (weighting 0.05)
Usability (weighting 0.10)
Efficiency (weighting 0.10)

Dynamic test point formula: the number of direct test points is the sum of the test points assigned to the individual functions. The number of test points assigned to each function can be calculated by entering the data so far obtained into the dynamic test point formula TPf = FPf * Df * Qd.
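A sketch of the Qd calculation and the dynamic test point formula. The source's summary line for Qd is garbled; the usual TPA reading, rating/4 times the weighting factor summed over the characteristics, is assumed here (it yields Qd = 1 at all-nominal ratings). Names are illustrative:

```python
# Weightings from the table above; each characteristic is rated 0, 3, 4, 5 or 6.
WEIGHTS = {"functionality": 0.75, "security": 0.05,
           "usability": 0.10, "efficiency": 0.10}

def qd_factor(ratings):
    """Qd: per characteristic rating/4 * weighting factor, then added together."""
    return sum((ratings[k] / 4) * w for k, w in WEIGHTS.items())

def tpf(fpf, df, qd):
    """Dynamic test point formula: TPf = FPf * Df * Qd."""
    return fpf * df * qd

# Nominal ratings (4) for every characteristic give Qd = 1.0.
nominal = {k: 4 for k in WEIGHTS}
print(round(qd_factor(nominal), 6))  # 1.0
```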

Static test points (Qi): The indirect test point count depends mainly on the function point count for the system as a whole. It is also influenced by the requirements regarding the static quality characteristics to be tested (the Qi factor). One has to determine whether the statically measurable quality characteristics are relevant for test purposes. In principle, all ISO 9126 quality characteristics [3] can be tested using a checklist; a static test can be carried out using such a checklist. Security, for example, can be measured dynamically, e.g. using a semantic test, and/or statically, by evaluating the security measures with the support of a checklist. [Checklist residue: suitability (1. processing, 2. screen checks), security, usability, efficiency.]

Method of calculation (Qi): If a quality characteristic is tested by means of a checklist (static test), the factor Qi gets the value sixteen. For each subsequent quality characteristic to be included in the static test, another sixteen is added to the Qi factor rating.
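Combining the dynamic and static counts gives the total test point formula. A minimal sketch (function names illustrative):

```python
def qi_factor(num_static_characteristics):
    """Qi: sixteen per quality characteristic included in the static test."""
    return 16 * num_static_characteristics

def total_test_points(tpf_list, fp, num_static_characteristics=0):
    """TP = sum(TPf) + (FP * Qi) / 500, with FP taken as at least 500."""
    fp = max(fp, 500)  # minimum value 500 per the TP formula
    return sum(tpf_list) + fp * qi_factor(num_static_characteristics) / 500

# Two functions worth 100 and 50 dynamic test points, FP = 500,
# one characteristic tested statically (Qi = 16).
print(total_test_points([100, 50], 500, 1))  # 166.0
```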

Primary test hours: The formula presented in subsection 5.4 gives the total number of test points assigned to the system as a whole. This total number of test points is a measure of the volume of the primary test activities. The primary number of test points is multiplied by the productivity factor and the environmental factor to obtain the primary test hour count. The primary test hour count is the number of hours required for carrying out the test activities involved in the test life cycle phases Preparation, Specification, Execution and Completion.

Productivity factor: The productivity factor indicates the number of test hours required per test point. The higher the productivity factor, the greater the number of test hours required. The productivity factor is a measure of the experience, knowledge and skill of the test team. It can vary from one organization to the next or from one organizational unit to the next. In practice the productivity factor has been shown to have a value between 0.7 and 2.0. Productivity factors can be calculated by analyzing completed test projects; thus, historical data on such projects is necessary for productivity factor determination.

Environmental factor: The number of test hours required for each test point is influenced not only by the productivity factor, but also by the environmental factor. A number of environmental variables are defined for calculation of the environmental factor. The various environmental variables and the associated ratings are described below. Again, intermediate ratings are not allowed: one of the ratings given must be selected. If insufficient information is available to enable rating of a given variable, the nominal rating (printed bold) should be assigned.

Test tools: The test tools variable reflects the extent to which testing is automated, or the extent to which automatic tools are used for testing. For the purpose of calculating this variable, the term "test tools" covers tools that are used for the primary test activities. The availability of test tools means that some of these activities can be performed automatically and therefore more quickly.
Rating:
1 Testing involves the use of a query language such as SQL; a record and playback tool is also being used.
2 Testing involves the use of a query language such as SQL, but no record and playback tool is being used.
4 No test tools are available.

Development testing: The development testing variable reflects the quality of earlier testing. If the estimate under preparation is for an acceptance test, the earlier testing will have been system testing; if the estimate is for a system test, the earlier testing will have been white-box testing. The better the development testing, the less likely one is to encounter time-consuming problems during the test currently under consideration. The quality of such development testing influences the amount of functionality that may require less thorough testing with less coverage, and the duration of the test activities.
Rating:
2 A development testing plan is available and the test team is familiar with the actual test cases and test results.
4 A development testing plan is available.
8 No development testing plan is available.

Test basis: The test basis variable reflects the quality of the (system) documentation upon which the test under consideration is to be based. The quality of the test basis influences the amount of time required for the Preparation and Specification phases.
Rating:
3 During the system development, documentation standards and a template are being used; in addition, inspections are organized.
6 During the system development, documentation standards and a template are being used.
12 The system documentation was not developed using specific standards and a template.

Development environment: The development environment variable reflects the nature of the environment within which the information system was realized. In this context, the degree to which the development environment will have prevented errors and inappropriate working methods is of particular importance. If errors of a given type cannot be made, it is of course not necessary to test for them.
Rating:
2 The system was developed using a 4GL programming language with an integrated DBMS containing numerous constraints.
4 The system was developed using a 4GL programming language, possibly in combination with a 3GL programming language.
8 The system was developed using only a 3GL programming language such as COBOL, PASCAL or RPG.

Test environment: The test environment variable reflects the extent to which the test infrastructure in which the testing is to take place has previously been tried out. In a well-tried test infrastructure, fewer problems and delays are likely during the Execution phase.
Rating:
1 The environment has been used for testing several times in the past.
2 The test is to be conducted in a newly equipped environment similar to other well-used environments within the organization.
4 The test is to be conducted in a newly equipped environment which may be considered experimental within the organization.

Testware: The testware variable reflects the extent to which the tests can be conducted using existing testware. The availability of usable testware mainly influences the time required for the Specification phase.
Rating:
1 A usable general initial data set (tables, etc.) and specified test cases are available for the test.
2 A usable general initial data set (tables, etc.) is available for the test.
4 No usable testware is available.

Method of calculation (E): The environmental factor (E) is calculated by adding together the ratings for the various environmental variables (test tools, development testing, test basis, development environment, test environment and testware), then dividing the sum by twenty-one (the sum of the nominal ratings). Normally, one environmental factor is worked out for the system as a whole, but separate factors can be calculated for the individual subsystems if appropriate.
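The environmental factor and the primary test hour count then follow directly. A minimal sketch (names illustrative; the six ratings in the example are chosen to sum to the nominal 21):

```python
def environmental_factor(ratings):
    """E = (sum of the six environmental variable ratings) / 21,
    21 being the sum of the nominal ratings."""
    return sum(ratings) / 21

def primary_test_hours(tp, p, e):
    """PT = TP * P * E."""
    return tp * p * e

# Illustrative ratings summing to 21, so E = 1.0; productivity factor 1.0
# (P lies between 0.7 and 2.0 in practice).
e = environmental_factor([4, 4, 6, 4, 1, 2])
print(primary_test_hours(166, 1.0, e))  # 166.0
```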

Total number of test hours: Since every test process involves tasks which may be placed under the heading "planning and control", allowance needs to be made for such activities. The standard (nominal) allowance is 10 per cent. However, where appropriate, the allowance may be increased or decreased, in line with the following two factors:
• Team size
• Management tools

Team size: The team size factor reflects the number of people making up the team (including the test manager and, where appropriate, the test controller).
Rating:
3 The team consists of no more than four people.
6 The team consists of between five and ten people.
12 The team consists of more than ten people.

Planning and control tools: The planning and control tools variable reflects the extent to which automated resources are to be used for planning and control.
Rating:
2 Both an automated time registration system and an automated defect tracking system (including CM) are available.
4 Either an automated time registration system or an automated defect tracking system (including CM) is available.
8 No automated (management) systems are available.

The number of primary test hours and the planning and control allowance together give the total number of test hours.

Method of calculation: The planning and control percentage is obtained by adding together the ratings for the two influential factors (team size and planning and control tools). The allowance in hours is calculated by multiplying the primary test hour count by this percentage. Addition of the planning and control allowance to the number of primary test hours gives the total number of test hours.

Breakdown between phases: The result of a TPA is an estimate for the complete test process, excluding formulation of the test plan. If a structured testing approach [5], [6] is used, the test process is divided into five life cycle phases; many clients will want to see estimates for the individual phases, as well as for the complete test process. The estimate for the Planning and Control phase will normally be the same as the planning and control allowance, i.e. the primary test hour count multiplied by the planning and control percentage. The primary test hours are then divided between the Preparation, Specification, Execution and Completion phases. The breakdown between the phases can of course vary from one organization to another, or even from one organizational unit to another. Suitable phase percentages can be calculated by analyzing completed test projects; thus, historical data on such projects is necessary for breaking down the total estimate. Experience with the TPA technique suggests that the following percentages are generally appropriate:
• Preparation: 10 percent
• Specification: 40 percent
• Execution: 45 percent
• Completion: 5 percent
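The two calculations above can be sketched as follows. The percentage is taken to equal the sum of the two ratings, which is consistent with the nominal 10 per cent (nominal team size 6 plus nominal tools 4); function names are illustrative:

```python
def total_test_hours(pt, team_size_rating, tools_rating):
    """Planning and control percentage = sum of the two ratings;
    total hours = primary test hours plus the allowance."""
    pc_percentage = team_size_rating + tools_rating
    return pt * (100 + pc_percentage) / 100

def phase_breakdown(pt):
    """Suggested split of the primary test hours across the four phases."""
    return {"Preparation": 0.10 * pt, "Specification": 0.40 * pt,
            "Execution": 0.45 * pt, "Completion": 0.05 * pt}

# Small team (3) with both management tools (2): a 5% allowance.
print(round(total_test_hours(166.0, 3, 2), 1))  # 174.3
```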

TPA at an early stage: A test project estimate is often needed at an early stage. Until detailed functional specifications are obtained, it is not possible to determine factors such as complexity, interfacing and the like. Nevertheless, a rough function point analysis can often be performed on the basis of very general specifications. If a rough function point count is available, a rough TPA can be performed as well. For a rough TPA, a single function is defined whose size is determined by the total (gross) function point count. All function-dependent factors (user-importance, usage-intensity, complexity, interfacing and uniformity) are usually assigned a normal value, so that Df has a value of one. The environmental factor will often have to be based on assumptions; however, any such assumptions should be clearly documented and stated on the test estimate when it is presented to the client. A TPA can then be carried out as described in section 5.
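A rough TPA reduces to a short chain of multiplications. The sketch below assumes a hypothetical gross count of 600 function points, Df = 1 as described above, and nominal values (1.0) for Qd, P and E; all figures are illustrative:

```python
# Rough TPA: one function whose size is the total (gross) function point count.
fp_gross = 600
tpf_rough = fp_gross * 1.0 * 1.0       # TPf = FPf * Df * Qd, with Df = Qd = 1
tp = tpf_rough + (fp_gross * 0) / 500  # no static test, so Qi = 0
pt = tp * 1.0 * 1.0                    # PT = TP * P * E, P and E assumed nominal
total = pt * (100 + 6 + 4) / 100       # nominal 10% planning and control allowance
print(total)  # 660.0
```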

Interface rating table (numbers of LDSs affected by the function vertically, numbers of other functions accessing the LDSs horizontally):

LDSs \ functions   1     2-5    >5
1                  L     L      A
2-5                L     A      H
>5                 A     H      H

[Table residue: usability test technique selection per usability rating (0-6). The recoverable entries combine techniques such as error guessing, DFT, EVT, SYN/SEM sample checks, user profiles, use cases or PCT (Ue: high), SUMI, and, at the highest rating, a usability laboratory test on the overall system. The thoroughness of the RLT is variable and will thus be determined by the rating and the amount of hours that becomes available as a consequence.]

References
[1] Albrecht, A.J. (1984). AD/M productivity measurement and estimate validation, IBM Guideline.
[2] IFPUG (International Function Point User Group) (1994). Function Point Counting Practices, release 4.0, IFPUG, January 1994.
[3] ISO/IEC FCD 9126-1 (1998). Information technology - Software product quality - Part 1: Quality Model, International Organization for Standardization.
[4] ISO/IEC PDTR 9126-2 (1997). Information technology - Software product quality - Part 2: External metrics, International Organization for Standardization.
[5] Pol, M., R. Teunissen and E. van Veenendaal (1995). Testing according to TMap (in Dutch), Tutein Noltenius, 's-Hertogenbosch, The Netherlands.
[6] Pol, M. and E. van Veenendaal (1995). Structured Testing, an introduction to TMap, Kluwer Bedrijfsinformatie, Deventer, The Netherlands.
[7] NEFPUG (Dutch Function Point User Group) (1991). Definitions and counting guidelines for the application of function point analysis (in Dutch), NEFPUG, Amsterdam, May 1991.
[8] Schimmel, H. (ed.) (1989). Interprogram Function Point Analysis (IFPA) (in Dutch), Samson Publishing, Alphen aan den Rijn, The Netherlands.
[9] Veenendaal, E.P.M. van (1999). Test Point Analysis: a method for estimating the testing effort (in Dutch), in: Computable.

Formulas

Function-dependent (Df)
Df = ((Ue + Uy + I + C) / 16) * U
Ue = 3, 6 or 12
Uy = 2, 4 or 12
I = 2, 4 or 8
C = 3, 6 or 12
U = 0.6 or 1

Dynamic test point formula (TPf)
TPf = FPf * Df * Qd
FPf = number of function points assigned to the function
Qd = 0, 3, 4, 5 or 6 per characteristic

Total number of test points (TP)
TP = ∑TPf + (FP * Qi) / 500
∑TPf = sum of the test points assigned to the individual functions (dynamic test points)
FP = total number of function points assigned to the system as a whole (minimum value 500)
Qi = sixteen if a quality characteristic is tested by means of a checklist (static test)

Primary test hours formula (PT)
PT = TP * P * E
P = 0.7 to 2.0 (in between these two)
E = (sum of the ratings of the environmental variables) / 21

Total number of test hours
Total test hours = PT * PC
PT = TP * P * E
PC = (100 + sum of the ratings of the team size and planning and control tools variables) / 100

