 
Performance diagnosis and generation of rule bases to provide feedback on design changes for performance improvement at design level
B. BHARATHI¹, G. KULANTHAIVEL²
¹Research Scholar, Sathyabama University, Chennai-119.
²Assistant Professor, NITTTR, Chennai.

Abstract: Performance analysis plays an important role in the software development process. The results of performance predictions have often been a collection of performance indices that are complex to interpret. Proper interpretation of the results and generation of suitable feedback are essential for a good performance analysis process. The aim is to drive decisions based on the results generated by a performance diagnosis and to generate rule bases that improve performance at design level rather than waiting until the testing phase. The method also identifies the necessary changes to be performed on the design, based on user requirements, to improve the results. Simple and easily applicable rules are generated to analyze the performance of the system and to impart changes at design level. The identified rules are applied to make both configuration and design changes. These changes can be carried out to apply feedback and modify the UML diagrams used for design.
Keywords: software performance, performance indices, feedback, design level, rule bases
I. INTRODUCTION

Software Performance Engineering is an important aspect of improving the software development process. A number of methods have been identified for performance analysis and validation. The process of performance validation includes steps such as a) converting a software model to a performance model, b) evaluating the performance model, c) analyzing the results of the performance model, d) interpreting the results, and e) providing feedback to the original software model.
Figure 1. Steps of performance validation
 
There is a large gap between the process of performance validation carried out in real-world software industries and the research results. This paper is an effort to bridge this gap and to bring in a methodology that can be applied for performance prediction and analysis of software. The main challenge in applying a performance validation methodology is the lack of automation. Moreover, the results are a large number of performance indices, which makes their interpretation and usage difficult. There have been a number of research efforts towards automating the initial steps of performance validation [1, 2]. This work aims at automating the interpretation and feedback part of the validation process, and it can be further extended towards a tool for the whole performance validation activity.

The main objective of this paper is to devise a simple methodology that can be used for the performance analysis process and to generate simple rules that provide feedback at the design level. The rules are generated keeping in mind the relationships between the various performance attributes; these relationships are first identified by calculating Spearman's rank correlation coefficient. The results are taken as the basis for feedback and design improvements.
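As a minimal sketch of this step (the attribute names and measurements below are hypothetical, not taken from the case study), Spearman's rank correlation between two performance attributes can be computed as follows:

```python
# Sketch: identifying the relationship between two performance
# attributes with Spearman's rank correlation (hypothetical data).
from scipy.stats import spearmanr

# Hypothetical measurements of two attributes across six scenarios.
response_time = [0.25, 0.31, 0.18, 0.42, 0.29, 0.37]
load          = [50,   62,   41,   80,   71,   58]

rho, p_value = spearmanr(response_time, load)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3f}")

# A |rho| close to 1 marks the attributes as strongly interrelated, so a
# design change affecting one is expected to affect the other as well.
```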
 
II. METHODOLOGY

The work starts by taking up the software design given in terms of UML diagrams profiled with SPT notations (the UML profile for Schedulability, Performance and Time specifications). The UML diagrams of the software model are converted to a layered queuing network, which is the performance model selected here. The process of performance diagnosis starts after solving the performance model. The type of performance model is chosen for ease of use: the most accepted models are queuing networks and their extension, layered queuing networks, which are widely used because of their easy applicability and the direct mapping of elements. The performance model is solved using traditional solution methods. These earlier steps are not detailed here, as they are not the concern of this paper.

A. PROCESS OF PERFORMANCE DIAGNOSIS

The output of the performance evaluation stage is a collection of performance indices. The actually required values are collected from the users or customers. The results identified from the evaluation of the performance model are then compared with the requirements provided by the customers. The design change and feedback process [3] starts only when the calculated values do not meet the customer requirements. Spearman's coefficient is calculated to find the relationship between the two sets of values, and the interdependencies of the performance attributes are identified. A simple Ishikawa (cause-and-effect) diagram methodology is used to identify the requirement changes needed to improve the design; it is chosen for its ease of use and understandability. The rules are generated based on the relationships derived from the diagrams, and a simple ranking method is used to identify changes. (A sketch of this diagnosis trigger follows Table 1.)

B. FACTORS AFFECTING PERFORMANCE ATTRIBUTES

The following table presents some performance attributes and the factors affecting their changes. This table is taken as the basis for rule generation.
TABLE 1. Factors affecting performance parameters

Parameter        Description
Arrival rate     The average rate at which processes arrive.
Maintainability  Ability of a computer program to be retained in its
                 original form, and to be restored to that form in case
                 of a failure.
Load             A measure of the amount of work that a system performs,
                 based on the queue status and processor capacity of the
                 system.
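A minimal sketch of the diagnosis trigger described in section A, assuming hypothetical attribute names, values, and threshold semantics (feedback starts only when a calculated index misses the customer requirement):

```python
# Sketch: flagging performance attributes whose calculated indices do
# not meet customer requirements (all names and values hypothetical).

# attribute -> (calculated value, customer requirement, higher_is_better)
indices = {
    "response_time": (0.25, 0.30, False),  # seconds: lower is better
    "throughput":    (18,   23,   True),   # requests/s: higher is better
    "utilization":   (86,   90,   True),   # percent
}

def needs_feedback(calculated, required, higher_is_better):
    """An attribute enters the diagnosis process only when its
    calculated value fails to meet the customer requirement."""
    if higher_is_better:
        return calculated < required
    return calculated > required

flagged = [name for name, (cal, req, hib) in indices.items()
           if needs_feedback(cal, req, hib)]
print("Attributes requiring design feedback:", flagged)
```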
C. CALCULATION OF RANKS FOR IDENTIFYING INTERDEPENDENCIES

In the calculation of ranking, the attribute rank is derived from the requirements of the user. The calculated values are obtained from the formulas of the parameters, and the customer requirements are given by the user. The applicability (or feasibility) ranking is derived from the applicability of the attribute. Tables 3 and 4 form the basis for the rank calculations.

All the results below were arrived at by applying the methodology to a simple inventory management system used by government agencies. The management system involves simple components such as supply, audit, order, monitor and report details. Each component is considered a server in the performance model, and the requests to each component are identified separately. The constraint applied in the method is the assumption that resource availability is always 100 percent for all projects; this may vary in actual situations.
 
Figure 2. The inventory management system
TABLE 2. Sample ranking of attributes

S.No  Attribute name         Attrib. rating  Cal. value  Cust. req.  App./feasi.  Rank
1     Response time          1               0.25        0.3         1            12
2     Resource utilization   4               86          90          5            7
3     Throughput             2               18          23          4            19
4     Service time           2               0.12        0.1         1            9
5     Maintainability        4               62          80          5            3
6     Reusability            5               60          70          2            8
7     Modularity             3               67          75          2            9
8     Load                   5               50          60          5            4
9     Processor utilization  3               95          95          3            5
10    Schedulability         5               70          75          5            1
11    Multiplicity           5               67          74          5            6
12    Resource availability  4               100         100         4            2
TABLE 3. Ranking between applicability and rating*

Attrib (x)  Rank X  Applicability (y)  Rank Y  Total (x + y)  Rank X+Y
1           12      1                  11.5    2              12
4           6       5                  3       9              4.5
2           10.5    4                  6.5     6              8.5
2           10.5    1                  11.5    3              11
4           6       5                  3       9              4.5
5           2.5     2                  9.5     7              7
3           8.5     2                  9.5     5              10
5           2.5     5                  3       10             2
3           8.5     3                  8       6              8.5
5           2.5     5                  3       10             2
5           2.5     5                  3       10             2
4           6       4                  6.5     8              6

* The attribute listing order is the same as in Table 2.
Table 3 identifies the performance attributes that are interrelated. Attributes with the same rank value are considered interrelated, so a change in one attribute will, positively or negatively, induce changes in all the related attributes. The effectiveness of such an application varies with the scenario. For example, resource utilization and maintainability are interrelated (rank 4.5).
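The tied (average) ranks of Table 3 and the grouping of interrelated attributes can be reproduced as in the following sketch, which takes the attribute ratings and applicability values from Table 2 and assumes descending ranking with ties averaged:

```python
# Sketch: average (tied) ranking of attribute rating and applicability,
# then grouping attributes whose combined rank coincides (interrelated).
from collections import defaultdict
from scipy.stats import rankdata

names  = ["Response time", "Resource utilization", "Throughput",
          "Service time", "Maintainability", "Reusability", "Modularity",
          "Load", "Processor utilization", "Schedulability",
          "Multiplicity", "Resource availability"]
attrib = [1, 4, 2, 2, 4, 5, 3, 5, 3, 5, 5, 4]   # attribute rating (x)
applic = [1, 5, 4, 1, 5, 2, 2, 5, 3, 5, 5, 4]   # applicability   (y)

# Rank descending (higher value -> better, i.e. smaller, rank);
# tied values receive the mean of the ranks they occupy.
rank_x = rankdata([-v for v in attrib])          # e.g. four 5s -> 2.5 each
rank_y = rankdata([-v for v in applic])
total  = [x + y for x, y in zip(attrib, applic)] # Table 3: total = x + y
rank_t = rankdata([-t for t in total])

for row in zip(names, rank_x, rank_y, total, rank_t):
    print("%-22s rankX=%4.1f rankY=%4.1f total=%2d rank=%4.1f" % row)

# Attributes sharing the same combined rank are treated as interrelated.
groups = defaultdict(list)
for name, r in zip(names, rank_t):
    groups[r].append(name)
for r in sorted(groups):
    if len(groups[r]) > 1:
        print(f"rank {r}: {', '.join(groups[r])}")
```

Run on the Table 2 data, this grouping recovers the example from the text: resource utilization and maintainability share rank 4.5.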
TABLE 4. Ranking for calculated & customer values

S.No  Calculated value (X)  Customer req. (Y)  |Y - X|
1     35                    30                 5
2     86                    95                 9
3     26                    35                 9
4     20                    15                 5
5     73                    80                 7
6     60                    70                 10
7     67                    75                 8
8     50                    60                 10
9     95                    95                 0
10    70                    75                 5
11    60                    70                 10
12    100                   100                0
The differences between the values of X and Y are calculated and ranked. The ranks generated help identify the most critical change that has to be included in the design for improvement.
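A corresponding sketch for the difference ranking of Table 4 (the values are taken from the table; treating the largest gap as the most critical change is an assumption consistent with the text):

```python
# Sketch: ranking attributes by |customer requirement - calculated value|
# using the Table 4 data; the largest gaps mark the most critical changes.
calculated = [35, 86, 26, 20, 73, 60, 67, 50, 95, 70, 60, 100]
required   = [30, 95, 35, 15, 80, 70, 75, 60, 95, 75, 70, 100]

gaps  = [abs(y - x) for x, y in zip(calculated, required)]
order = sorted(range(len(gaps)), key=lambda i: gaps[i], reverse=True)
for i in order[:3]:
    print(f"attribute #{i + 1}: |Y - X| = {gaps[i]}")
```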
TABLE 5. Spearman rank correlation coefficient (attribute rating & applicability)

Spearman's rho                          Attribute  Applicability
Attribute      Correlation coefficient  1.000      0.675*
               Sig. (2-tailed)          -          0.016
               N                        12         12
Applicability  Correlation coefficient  0.675*     1.000
               Sig. (2-tailed)          0.016      -
               N                        12         12

* Correlation is significant at the 0.05 level (2-tailed).
The Spearman coefficient table shows that the two variables are highly correlated, and it also confirms the validity of the values generated in the tables above. Spearman's rank correlation coefficient allows one to easily identify the
