Requirements Change in Software Development Projects: Analysis and Solutions

Abstract: Once a software development project is under way, "requirements change" gives both the project manager and the developers a headache. Consultants' training slides and technical books and tutorials on software project management all treat "requirements change" as a topic in its own right. This article explores why requirements changes occur in software development projects, how to control them, and how to respond when they happen.

1. The Trouble with Requirements Change

As the manager of a software project in mid-development, you may run into this problem: one phone call from the customer overturns the requirements that you, the customer, and your development team had settled and confirmed after repeated discussion. You must then start a new round of requirements talks with the customer and the team, talks that can seem endless, and you may even have to redesign the existing architecture. Faced with this situation, as project manager you may say, "We cannot refuse the customer, but we cannot satisfy the new requirement immediately either, so we have to postpone it." Some react more extremely: the customer is never satisfied, the customer's requirements are technically impossible to achieve, and so on. As the customer keeps raising new requirements, you begin to doubt the value of requirements confirmation itself. You communicated with the customer repeatedly at the beginning, and no objection was raised to what was clearly answered; yet as the project evolved and the customer's understanding of the system deepened, the customer ended up reversing the original requirements. Only then do you realize that the requirements were merely gathered, never truly confirmed. And because of the changes, the project has slipped many times, while the customer still says the result is not what they wanted.
Perhaps you are still complaining that the customer's requirements keep changing like the weather. Either way, both your complaints and the customer's changes leave the project team's developers exhausted and confused. Before the project started, you and your team members may have had this thought: in software development, shouldn't we simply eliminate requirements change and refuse to discuss any change at all? To be clear, that kind of thinking is wrong: requirements changes in software development cannot be completely eliminated, and both project managers and developers had better abandon the idea before the project begins. It is not the changes themselves that can be eliminated, but the idea of "eliminating requirements change" that should be removed. Effort spent trying to stamp out requirements change during development is usually thankless. In the development process, requirements change is inevitable.

Under normal circumstances, even though the project manager spends great effort trying to avoid tedious requirements changes, changes will come in the end. That does not mean this work should not be done. Both the project manager and the developers should face change with the right attitude and methods, as they would software testing: do what you can before a change occurs to reduce the chance of change, and when a change does occur, cut its risk to a minimum.

2. Causes of Requirements Change

In a software development project, requirements changes may come from the service provider, the customer, or a supplier, and of course they may also come from within the project team. Examined carefully, the causes come down to the following.

(1) The scope was not refined from the start. Detailed requirements work is done by the analysts, who generally start from the user's brief descriptive summary, refine it, extract use cases, and write descriptions (the normal flow and the exception flows). When the refinement reaches a certain depth and system design begins, a change of scope means many of the use case descriptions may have to change. For example, data originally entered manually must now be computed from another information system, or something originally described as an attribute must now be described as an entity.

(2) No requirements baseline was established. The baseline marks which changes are allowed. As the project progresses, the baseline itself changes: what is allowed to change is based on the contract and on cost. For example, once the overall architecture of the software has been designed, changes that enlarge the requirements scope are no longer allowed, because the overall architecture determines the whole project's schedule and initial cost budget. As the project advances, the baseline is set ever higher (and the changes allowed become ever fewer).
(3) No software architecture built to absorb change. A component-based architecture provides rapid adaptation to changing requirements: the data layer encapsulates data access logic, the business layer encapsulates business logic, and the presentation layer presents results to the user. Adaptation, however, must follow the principle of loose coupling; otherwise links remain between the layers. Interfaces should be designed so that the parameters likely to change are kept to a minimum. If the business logic is well encapsulated, then rearranging the interface layer or trimming some of the requested information is very easy; and if the interfaces are defined sensibly, even changes in the business process can be adapted to quickly. Within the limits the baseline permits, this reduces the cost impact of requirements changes and improves customer satisfaction.

3. Controlling Requirements Change

As already mentioned, before a software project starts we should get rid of the idea that "requirements change must not be allowed to happen." When a change occurs during the project, do not just complain, and do not blindly rush to satisfy the customer's "new need"; instead, manage and control the change.

(1) Manage customer requirements by level. In software development, "the customer is always right" and "the customer is God" are not entirely true. Once the project contract is signed, any new or changed requirement affects not only the project itself but also the customer's return on investment, so sometimes the project manager must make the call on the customer's behalf. Classifying the project's requirements makes control and management of changes practical.

Level 1 requirements (or changes) are critical: if such a requirement is not met, the project cannot be delivered at all and all previous work is negated. Requirements at this level must be satisfied, or all the effort of the project members counts for nothing; rate them "Urgent". Remedial, firefighting debug work is usually of this type.

Level 2 requirements (or changes) are critical to follow-on work: they do not affect the content already delivered, but if they are unmet, the project cannot submit new content or continue; they are "Necessary". Key components on which new modules are based fall into this category.

Level 3 requirements are important follow-on requirements: if they are unmet, the value of the overall project decreases. Meeting them reflects the value of the project and also proves the developers' technical worth; rate them "Needed". Development of major, valuable new modules generally falls into this category.

These three levels should all be implemented, but their timing can be arranged by priority.
Level 4 requirements are improvements: failing to meet them does not affect the use of existing features, but meeting them would make the product better; rate them "Better". Interface and usability requirements generally sit at this level.

Level 5 requirements are optional: often no more than an idea or a possibility, usually just a personal preference of the customer; rate them "Maybe". For level 4 requirements, if time and resources allow, by all means do them. For level 5 requirements, as the name says, whether to do them is a "maybe".

(2) Manage changes across the entire project life cycle. The life cycle of software projects of all sizes and types can be divided into three phases: project initiation, project implementation, and project closeout. Do not assume that requirements management and change control happen only in the implementation phase; they run through the whole project life cycle from beginning to end. Managing requirements change from this overall perspective requires an integrated approach to change control.
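The five-level classification described above can be sketched in code. This is a minimal illustration, not part of the original article; the class names, example change titles, and scheduling rule are all hypothetical.

```python
# A sketch of the five-level classification of requirements changes:
# levels 1-3 must be implemented (ordered by priority), level 4 only
# if resources allow, level 5 is a "maybe". All names are illustrative.
from enum import Enum

class ChangeLevel(Enum):
    URGENT = 1     # project cannot be delivered without it
    NECESSARY = 2  # blocks follow-on work
    NEEDED = 3     # overall project value drops without it
    BETTER = 4     # improvement; existing features unaffected
    MAYBE = 5      # optional; often a personal preference

def schedule(changes):
    """Split changes into must-do (sorted by priority), optional, and maybe."""
    must = sorted((c for c in changes if c[1].value <= 3),
                  key=lambda c: c[1].value)
    optional = [c for c in changes if c[1] is ChangeLevel.BETTER]
    maybe = [c for c in changes if c[1] is ChangeLevel.MAYBE]
    return must, optional, maybe

changes = [("new report module", ChangeLevel.NEEDED),
           ("fix crash on save", ChangeLevel.URGENT),
           ("prettier login screen", ChangeLevel.BETTER)]
must, optional, maybe = schedule(changes)
print([name for name, _ in must])  # ['fix crash on save', 'new report module']
```

The point of the sketch is only that the grade, not the order of arrival, decides what gets scheduled first.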

(a) Prevention in the project initiation phase. As emphasized earlier, requirements changes are inevitable in any software project; no one escapes them, and all the project manager and developers can do is respond actively, starting from the project's requirements analysis phase. The better the requirements analysis is done, and the more detailed and explicit the scope defined in the baseline documents, the better the project manager's chances when the user requests a change. If the requirements work is done poorly and the scope in the reference documents is ambiguous, the customer finds great "room for new demands," and the project team often has to pay a great deal of unnecessary cost. If the requirements analysis is done well, the documents are clear, and the customer has signed off, then later changes requested by the customer that exceed the scope of the contract justify additional fees. At that point the project manager must stand firm: this is not deliberately squeezing money out of the customer, but customers must not be allowed to develop the habit of frequent changes, or there will be no end of trouble.

(b) Change control in the project implementation phase. The difference between successful and failed software projects is whether the whole process is controllable. The project manager should establish this idea: requirements change is inevitable, controllable, and useful. Change control in the implementation phase means analyzing change requests, assessing the potential risks of each change, and revising the baseline documents. To control requirements creep, note the following points. Requirements must be tied to investment: if the cost of changes requested by the customer side is borne by the developer, requirements changes become inevitable. Therefore, at the beginning of the project, both the funding party and the development party must be clear on one point: when requirements change, the investment in software development changes too.
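The "requirements tied to investment" principle can be made concrete with a toy model. This sketch is illustrative only; the class, method names, and figures are hypothetical and not from the article.

```python
# A sketch of the principle that every accepted requirements change
# adjusts the project's cost and schedule baseline. Figures are illustrative.

class ProjectBudget:
    def __init__(self, cost, weeks):
        self.cost = cost    # contracted development cost
        self.weeks = weeks  # contracted schedule

    def accept_change(self, extra_cost, extra_weeks):
        """Requirements change => development investment changes."""
        self.cost += extra_cost
        self.weeks += extra_weeks

p = ProjectBudget(cost=100_000, weeks=20)
p.accept_change(extra_cost=8_000, extra_weeks=2)  # e.g. a new report module
print(p.cost, p.weeks)  # 108000 22
```

The design point is simply that no change is "free": accepting one mutates both the cost and the schedule baseline, which both parties can then see.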
Changes must be acknowledged by the funding party, so that the customer associates requirements changes with cost and treats them prudently. Small changes must also go through the formal requirements management process, or they will add up. In practice, people are often reluctant to put small changes through the formal process, feeling that it reduces development efficiency and wastes time; but it is precisely this attitude that gradually lets requirements slip out of control and leads to project failure. Note also that precisely defining requirements will not stop change. A more detailed requirements definition does not mean less requirements creep; they are two different dimensions, and defining requirements ever more finely has no effect on creep. Requirements change is eternal: a requirement does not become immune to change just because it has been pinned down. Finally, pay attention to communication skills.

In actual development, users and developers alike recognize the points above; but because changes may come from the customer side or from the development side, the project manager, as the manager of requirements, must use a variety of communication skills to help all parties in the project get what they want.

(c) Summing up in the project closeout phase. Capability often comes not from successful experience but from the lessons of failure. Many project managers pay no attention to collecting and accumulating lessons learned; even after being badly battered in a project, they only complain that the luck, the environment, or the teamwork was bad, rarely analyzing and summarizing systematically, or not knowing how, so the same problems recur again and again. In fact, the project summary should be treated as an important part of continuous improvement for the current project and for future ones, and also as the identification and validation of the project contract, the design content, and the targets. The summary should include an analysis of the risks identified in advance and the unforeseen changes that occurred, together with the countermeasures taken, and a statistical summary of the changes and problems that arose during the project.

(3) Principles of requirements change management. Although requirements changes vary endlessly in content and type, the principles for managing them remain the same. Requirements change management should follow these principles:

(a) Establish a requirements baseline. The baseline is the basis against which changes are made. During development, once the requirements have been identified and reviewed (with users taking part in the review), the first requirements baseline can be established. After each change is made and reviewed, a new baseline should be established.
(b) Establish a simple, effective change control process and document it. Once the baseline is established, all proposed changes must follow this process. The process is also reusable: it can serve as a reference for future development and for other projects.

(c) Establish a project Change Control Board (CCB), or a similar body, responsible for deciding which changes to accept. The CCB is made up jointly of people involved in the project and should include decision-makers from both the users and the developers.

(d) A requirements change must first be applied for, then evaluated, and finally confirmed through an assessment commensurate with the scale of the change.

(e) When requirements change, the affected software plans, products, and activities must be changed accordingly, so that they stay consistent with the updated requirements.

(f) Keep proper documentation of the changes.

4. How to Handle Requirements Change

Without the cooperation of the users, it is hard to imagine a project succeeding: the project is launched on the basis of the initial requirements, and it is continued communication that moves the system steadily closer to what the end users need. Developers, absorbed in development work, easily neglect communicating with users at all times. The following practices help in handling requirements change.

Choose an appropriate development model. A prototype development model is well suited to projects whose requirements are unclear at the outset. The developers first build a system prototype from the user's description of the requirements; the user evaluates it and gives further instructions, and the developers refine the prototype accordingly. This process is repeated several times, and the prototype gradually converges on the end users' needs, which fundamentally reduces the appearance of requirements changes. The iterative development methods now popular in the industry are also very effective for change control in projects with urgent schedules.

Have users take part in requirements reviews. Users take it for granted that they are the most authoritative spokespeople for the requirements; in a review, once they see something real, users can often put forward many valuable comments, and a review also gives the user a final opportunity to confirm the requirements. The more detailed the explanation of the requirements, the more effectively the incidence of later change can be reduced.

Use the contract as a constraint. The contract with the user can include clauses bearing on requirements change: for example, a provision that requirements changes must go through the change control process, and terms covering a substantial increase in new requirements (requirements that the user regards as mere refinement of existing demand, but that actually increase the workload).

Treat changes differently. New requirements put forward by the user can be graded by importance and urgency, as a basis for assessing the change; according to the circumstances, some changes are accepted and others rejected. Once the project has entered the design stage, the impact of requirements change on software development is there for all to see, so for a rejected change the developers should explain to the user what impact and negative consequences the change would bring to development and to the progress of the project. If the user insists on implementing the new requirement, put forward a viable alternative. Some users will keep proposing features that the project team has not achieved or whose workload appears relatively large; even when a user's demand seems "excessive," the developer should carefully analyze the reasons behind it and then communicate with the user.

Communicate fully. The process of managing requirements change is, to a large extent, a process of exchange between users and developers. Software developers must learn to listen carefully to the user's requirements, considerations, and ideas, and to analyze and organize them. A change request generally goes through the steps of change application, change assessment, decision, and response; if the change is accepted, two further steps are added, implementation and verification, and sometimes there is a step to cancel the change.

Assign full-time staff to requirements change management. Changes sometimes come thick and fast, and the workload is heavy at times, so a dedicated person is needed to manage requirements changes, exchange information with users in good time, and respond quickly to a crisis.

Cooperate with each other. Developers and users should try to understand each other and handle problems in a spirit of cooperation as far as possible. The user, in turn, should pay attention to controlling the frequency of newly proposed requirements: significant new requirements affect the progress of the project, and if they accumulate, the project will not be completed on time.
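The change-control steps just described (application, assessment, decision, and response, with implementation and verification added for accepted changes, and an occasional cancellation) can be sketched as a small state machine. The state names and transition table below are illustrative, not a prescribed standard.

```python
# A sketch of the change-request workflow: apply -> evaluate -> decide ->
# respond, with implement + verify when a change is accepted, and an
# optional cancel step. States and transitions are illustrative.

TRANSITIONS = {
    "applied":     {"evaluated"},
    "evaluated":   {"accepted", "rejected"},
    "accepted":    {"implemented", "cancelled"},
    "implemented": {"verified"},
    "rejected":    set(),   # respond to the user with reasons/alternatives
    "verified":    set(),
    "cancelled":   set(),
}

class ChangeRequest:
    def __init__(self, title):
        self.title = title
        self.state = "applied"

    def advance(self, new_state):
        """Move to a new state only along an allowed transition."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

cr = ChangeRequest("add SMS login")
for s in ("evaluated", "accepted", "implemented", "verified"):
    cr.advance(s)
print(cr.state)  # verified
```

Encoding the workflow as an explicit transition table makes it impossible to, say, implement a change that was never evaluated, which is exactly the discipline the formal process is meant to enforce.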

Introduction

A major computer software company has retained the services of an attorney to represent it in litigation alleging that an up-and-coming software firm has pirated its software. The copyright, patent, trade secret, and software piracy issues associated with this litigation are complex and difficult for the attorney, the parties, the judge, and the jury to grasp. In order to adequately represent the client, an attorney requires the assistance of a computer expert to properly assess and evaluate the complex technical evidence. This article examines the critical role of the computer expert.

What Is the Role of a Computer Expert?

A computer expert makes the technical aspects of a computer-related intellectual property dispute understandable to laypersons, including lawyers and their clients, judges, arbitrators, and juries. A technical expert is necessary to properly evaluate the case and to deftly reduce complex technical concepts to simple terms so that attorneys, judges, and juries fully understand the issues. The selection of a computer expert is therefore crucial, and the expert plays a critical role regardless of whether the matter is resolved through settlement, arbitration, or litigation.

Computer experts can be used by attorneys to help resolve computer-related intellectual property disputes without costly, time-consuming litigation. For example, a technical expert may evaluate whether a competitor's software program is "substantially similar" to another's in a potential copyright infringement suit. An expert may conclude that a threatened claim is weak or even baseless; as a result, a party may refrain from suit and, possibly, avoid serious Rule 11 sanctions. If litigation proves necessary, the technical expert's assistance will be important to pre-filing preparation. Once a lawsuit is instituted, the services of a computer expert are essential during pretrial proceedings and at the trial itself, including pretrial discovery and the presentation of evidence at trial. A technical expert may also be essential in a non-jury trial by presenting the case in terms understandable to the judge, so that the judge can adequately assess it.

Computer-Related Intellectual Property Disputes Which Require Technical Experts

Technical experts are typically required to prove or defend issues arising from patent infringement, copyright infringement, trade secret misappropriation, and software piracy.

Copyright Infringement

A technical expert must first investigate exactly what the plaintiff's copyright protects and what the defendant allegedly infringed upon. The expert must examine the original software and look for a copyright notice, as the software must clearly state that it is a copyrighted work, who owns the work, and the creation date. He or she must determine whether the software was published or was in the public domain prior to the copyright date. Moreover, even though registration is not necessary for copyright ownership, the owner must have a valid copyright registration in the computer software in order to claim copyright infringement. (1) The expert must then examine the owner's version control system.

With this groundwork done, the expert should launch a full-scale investigation. The objective is to determine whether probable cause exists for a copyright infringement lawsuit, and the most important element of the expert's investigation is an examination of the defendant's software. At times, the expert must determine whether the defendant had sufficient access to enable him or her to copy the plaintiff's software. If a lawsuit is filed, the defendant's source code can be obtained during discovery, perhaps subject to the terms of a confidentiality order. If the source code is available, the expert should examine the software for similarities in the overall design; otherwise, looking at screens, menus, and the software logic hierarchy serves the same purpose. Copyright protection extends beyond copying of program code to the program's structure, sequence, and organization. See Whelan Assocs., Inc. v. Jaslow Dental Lab., Inc., 797 F.2d 1222 (3d Cir. 1986), cert. denied, 479 U.S. 1031 (1987); compare Computer Assocs. Int'l, Inc. v. Altai, Inc., 982 F.2d 693 (2d Cir. 1992). The elements of infringement usually fall within three general categories:

• An exact copy of the plaintiff's software.
• A derivative work with many elements exactly the same or similar.
• Similarity in design, sequence, and organization.

Patent Infringement

As in the case of copyright infringement, the technical expert must first investigate exactly what is protected by the patent and what the defendant infringed. An examination of the patent claims and specifications is essential to this investigation: the technical expert must examine the patent to determine specifically what the claims and specifications protect. He or she must then look for public domain similarities to ascertain the validity of the claims. Finally, he or she must examine the defendant's software and determine the areas of infringement. In this case, examination of source code is not necessary, but it would not hurt.

A patent can protect a:

• Process
• Device
• Methodology (in some cases)
• Format type

In the case of computer software, a pure mathematical algorithm, without any specific end use, is not patentable. Diamond v. Diehr, 450 U.S. 175 (1981). A patentable claim could, however, include computer software that controls industrial processes or devices, even though such software utilizes mathematical algorithms. Sometimes a new format type (e.g., a new spreadsheet concept) could be patentable. While patent protection is more difficult to obtain than copyright protection, it can be broader: it extends not only to the software itself but also to any derivative work.

Misappropriation of a Trade Secret

Trade secret law may provide the broadest protection against copying or misappropriation. The expert must determine whether the software was sufficiently novel and whether it was treated by the plaintiff as a trade secret. The expert must also determine whether the defendant knew that the software was a trade secret, had access to the secret, and used the secret in an unauthorized manner. To be thorough, the expert must also search the public domain, because if the software exists there through no fault of the defendant, then the defendant did not violate the plaintiff's confidence. Finally, the expert must examine the defendant's software to uncover the areas of violation.

Software Piracy

In order to establish software piracy, the computer expert must launch a full-scale forensic investigation. There are at least seven different instances of software piracy which would usually be investigated:

• Defendant's software was created as a direct (exact) duplicate of plaintiff's object code.
• Defendant's software was created as an updated derivative of plaintiff's software from original source code, using the same programming language.
• Defendant's software was created as a direct (exact) translation from plaintiff's original source code into another programming language.
• Defendant's software was created as an updated derivative of plaintiff's software from translated source code, using a different programming language.
• Defendant's software was copied from plaintiff's software using a fourth generation language (4GL).
• Defendant's software was created as an updated derivative of plaintiff's software which was generated using a 4GL.
• Defendant created software by copying only the design of plaintiff's software.

Elements of Discovery Required by a Technical Expert

In order to complete a forensic investigation in an intellectual property dispute involving software piracy, a computer expert must have access to the following information:

• Copyright, patent, or trade secret information on both plaintiff's and defendant's software.
• Complete program source code for both plaintiff's and defendant's software.
• Complete working magnetic copies (object code, executables, and databases) of both plaintiff's and defendant's software.
• Copies of all agreements that were entered into between plaintiff and defendant.
• All information necessary to create a complete chronology of events pertaining to the matter, including any and all documentation created during development of plaintiff's and defendant's software.

As computer-related litigation can be very expensive, an attorney should carefully direct the expert's efforts to ensure that the expert produces the most useful work and does not waste the client's money. The forensic investigation is made by the expert using both object code and source code.

One tool that can be very useful in a forensic investigation is HIPO (Hierarchy plus Input - Process - Output), a documentation technique developed by IBM during the 1970's as a structured analysis tool. The hierarchy chart shows the relationship between the various programs and modules; it appears similar to a corporate organization chart. One IPO diagram is then generated for each program or module on the hierarchy chart (each box on the chart generates its own IPO diagram), showing the Input, Processing, and Output portions of each programming step within that program or module. It was intended that HIPO diagrams be created prior to actual software development, which would impose a structure upon the software created from them, thereby insuring maintainability. However, it is also possible to develop HIPO diagrams from already existing software using the source code. Using HIPO enables an expert to see the forest through the trees and makes the forensic investigation more manageable.

What follows is a methodology that a computer expert can use to establish software piracy.

Direct (Exact) Duplication of Object Code

Direct (exact) duplication of object code is the most common form of software piracy, and it is prevalent among personal computer users. It is so widespread because it does not require the defendant to use the plaintiff's source code, and this type of copying can be performed on any computer. The program is produced by making an exact magnetic copy of the original, which is very simple to accomplish using standard computer utilities. During discovery, source code should be demanded, as a defendant who copied in this way probably cannot produce source code.

To establish software piracy resulting from direct duplication of object code, the technical expert would compare the file sizes and creation dates of both the plaintiff's and the defendant's programs. The expert then performs a byte-by-byte comparison of the defendant's and the plaintiff's object code: if the defendant's software was copied from the plaintiff's object code, the object files would be identical. Another clue is a character dump of both object files. Most programmers put some character information into their programs, and while object code is not usually understandable, the character information contained in it can often be recognized; if the files share an origin, the identifying character information should be recognizable in both.

It is important to remember that an individual who develops software similar to existing software is not necessarily guilty of software piracy. Even where it can be shown that the individual had access to the original software, the new software may not have been copied: similar functionality may have been created merely from the marketing needs of a particular industry or profession, and copyright laws do not protect computer algorithms.
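The checks for direct duplication can be sketched as follows. This is an illustrative sketch only: the file paths are hypothetical, and the character-dump function imitates the Unix `strings` utility in a simplified way.

```python
# A sketch of two checks for direct duplication of object code:
# a byte-by-byte comparison and a character dump (extracting printable
# runs, like the Unix `strings` utility). File names are hypothetical.
import os
import string

def identical(path_a, path_b):
    """Byte-by-byte comparison of two object files."""
    with open(path_a, "rb") as a, open(path_b, "rb") as b:
        return a.read() == b.read()

def same_size_and_mtime(path_a, path_b):
    """Compare sizes and timestamps (mtime stands in for the creation
    date, which is platform-dependent)."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return sa.st_size == sb.st_size and sa.st_mtime == sb.st_mtime

def character_dump(path, min_len=4):
    """Return printable character runs embedded in an object file."""
    printable = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")
    runs, current = [], bytearray()
    with open(path, "rb") as f:
        for byte in f.read():
            if byte in printable:
                current.append(byte)
            else:
                if len(current) >= min_len:
                    runs.append(current.decode())
                current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode())
    return runs
```

An expert might first screen with `same_size_and_mtime`, confirm with `identical`, and then inspect `character_dump` output for shared identifying strings such as embedded copyright notices.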

he would probably perform modifications to the software. This would be done to further disguise the software and to improve the software following translation. Possibly. and it provides ease of subsequent modification. a software analyst could potentially create entire software systems at a terminal without having to write a single line of program code. Translation of the original source code into another programming language accomplishes three things. screens. They should be identical or. a computer expert must compare source code of the defendant's software with that of the plaintiff's software. The IPO charts should show that the same logic was used to create both systems. the constants should be identical. The hierarchical charts should be identical. He or she should find that large sections of the hierarchy are identical or similar. Duplication of file structure is one of the telltale indications of software piracy. There might also be some modification of the main logic. Probably. The defendant would produce the new software by modifying the plaintiff's original source code. With such 4GL systems. large segments of logic would be identical. In addition. Finally. Direct (Exact) Translation from Original Source Code Into Another Programming Language Software pirates are usually very clever. The expert should then examine the program logic. Next. To establish this type of software piracy. Thus. very similar. software piracy acquired a new dimension. new functions will have been added. To further establish this type of software piracy. Many would be the same. In addition. since severe modification would make a complete re-write more cost effective. he should examine the data file structures of both systems. Copying using a 4GL is much simpler than translation. Variables would have the same or similar names. he would duplicate the original data file structure as well as the data flow. 
Updated Derivative Software from Translated Source Code After a software pirate has translated software into a new programming language. he is able to disguise the software so as to make piracy less detectable. he or she must compare both plaintiff's and defendant's source code. If the defendant's software was produced by direct translation of plaintiff's source code. Often. reports and menus would be changed significantly. This is usually performed on a line-by-line basis. In those cases where the hierarchy is identical. by making such modifications. However. it can enable the software to be produced from original source code by translating from one programming language into another. New screens and reports will have been generated. and there should be a one-to-one correspondence between the variables across both systems. there are limits to the logic modification. Third. it can disguise the final product. They should also be identical or similar. If he has the source code available. Software piracy can be established by a computer expert both from examination of source code of both systems and from observing software operation of both systems. In this type of software piracy. Second. First. some of the main logic will have been modified. Once again. On the other hand. Extensive modification to translated software could make software piracy virtually undetectable. The expert should examine the file structures of both systems.a different operating system. New screens and reports as well as new functions would be added. since the programming languages are the same. the file structure should be identical. the expert should develop HIPO charts. the expert should examine the corresponding IPO diagrams. at least. reports and menus will also have been changed. Rather than copying object modules or translating original source code. then the screens. and identical constants would be used. 
the pirate could easily duplicate the exact external functionality of someone else's software. reports and menus should be identical. the expert should develop HIPO charts from both plaintiff's and defendant's source code. To establish software piracy in this instance. He or she should search the source code for constants. It can be done with or without the pirate having original source code available. the expert should search for copied segments of program code (exact duplication). the original formats of the screens. the expert must run both the plaintiff's and defendant's software to demonstrate identical operation. he could derive a new file structure without source . it can permit the software to run more efficiently on a different computer or operating system. They should be identical or substantially similar. Exact Duplication of Software Using a 4GL The late 1970's and early 1980's witnessed the development of fourth generation software development systems. First.
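The quick metadata check described above (comparing file sizes and dates) can be sketched in a few lines of Python. This is purely illustrative: the function name is mine, modification time stands in for the creation date, and real forensic work would go on to hash file contents and compare object and source code.

```python
import os

def metadata_match(path_a, path_b):
    """Quick screening check: do two program files have the same
    size and the same (whole-second) modification time?"""
    stat_a, stat_b = os.stat(path_a), os.stat(path_b)
    return (stat_a.st_size == stat_b.st_size
            and int(stat_a.st_mtime) == int(stat_b.st_mtime))
```

A match here proves nothing by itself; it only flags candidates for the deeper line-by-line comparison the article describes.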

A file structure derived without the original source code could function just as well, and a technical expert would have difficulty establishing software piracy in this instance; nothing would be gained by examination of source code.

Updated Derivative Software from 4GL Translation

Once a program has been duplicated using a 4GL, a software pirate would probably update and modify the software. New screens and reports, as well as new functions, would be added. Such modifications would be simple to generate. Where sufficient modification has been performed, the resulting software is virtually new and original, and piracy in the resulting software would be extremely difficult to detect.

Newly Developed Software Where Only the Design Was Copied

There have been many instances of software copyright infringement where the program code was completely new (not copied), but where there was a deliberate effort to duplicate the design of an existing system. This was usually done to enhance the marketability of a newly developed product, especially when the original software was very popular among consumers. The pirate can copy the software merely by reading the user manuals and by observing software operation; essentially, he duplicates the design of the original software. This issue demands that an expert be able to show striking similarities in structure, function and organization.

The technical expert can establish this type of software piracy both by observation of software operation and from examination of the source code. Copyright infringement is demonstrated by observation of software operation. The expert should examine the design of the system, investigating screens, reports, menus and program logic. Menus, screens, reports and logic should be similar, though they may appear in a different order. Specific methods of accomplishing certain tasks should be identical; an example of this would be the use of all the same function keys to accomplish specific tasks. This can be observed from software operation as well as source code. He or she should also develop HIPO diagrams and search for similarities in logic, and should examine the data file structures for similarities. If the plaintiff's source code was available to the defendant when he copied the software, the data file structures should be identical or extremely similar; the file structure should contain the same elements (fields), which have the same specifications. If the source code was not available to him, proving copyright infringement would be very difficult.

Use of Computer Experts in Pretrial Proceedings

Computer-related intellectual property litigation requires one or more independent technical experts. A computer expert should assist in preparing the complaint or, where required, counterclaim. At a minimum, an expert should review pleadings for factual accuracy and suggest changes. The expert should also help prepare interrogatories, document requests, and requests for admission addressing the technical aspects of the case.

An expert's report is usually submitted to the adversary during pretrial discovery. Normally, the report is required to set forth the opinions that the expert will offer at trial and the basis for these opinions. Many reports are more substantial: they often become treatises that attempt to prove whether or not infringement occurred. If the attorney hopes to settle the matter without a trial, this type of report is desirable. However, where the attorney knows that a settlement is unlikely, only a minimal report should be produced. Why should an expert over-prepare the opposition for trial?

Using an Expert to Resolve Computer-Related Intellectual Property Claims Without Litigation

Most computer-related intellectual property claims never go to court. Due to the high cost of litigation and the uncertainty of outcome, they are either settled or abandoned. A technical expert can be used to increase the chance of reaching a favorable settlement quickly. During pretrial preparation, an expert will work with an attorney to help him or her prepare an imposing argument of the merits of his case and the weaknesses of the opposition's case. With an enhanced technical perspective, the attorney and the expert, working as a team, often convince the opposing side that litigation would accomplish nothing. At times, because of the overwhelming nature of an expert's technical report, the opposition offers to settle or withdraw from the proceedings.

Use of Technical Experts in Trial Proceedings

A technical expert is essential at trial. In any matter of this type, both litigants present expert witnesses. This is very confusing to judges and juries, since their testimony will invariably conflict. One expert will always state that a sufficient number of similarities exist between two software products so as to establish copying or derivation; the other expert will always testify that the first expert's investigation was inconclusive. The jury does not know which one to believe. Consequently, an attorney must sort out the logic that separates these two witnesses and create a technical position that would be clear to the fact finder. This can only be done with the expert's assistance. An expert also helps to evaluate the technical reports generated by opposing technical experts.

Sometimes, just prior to a trial, an attorney will ask for a computer expert's help with jury selection. The expert assists the attorney in preparing a list of questions to ask prospective jurors during the voir dire. Such questions should reveal a potential juror with expert knowledge of computers so that he or she may be challenged and excluded. This is important because other jurors would look to this expert juror for guidance during deliberations.

Normally, the expert is deposed as a part of pretrial discovery. With intellectual property litigation, attorneys challenge every fact and every opinion of the opposing expert. In preparation for such depositions, the expert prepares questions for the attorney to ask; the expert establishes a series of questions or question categories designed to prove the point. A technical expert must be able to anticipate the answers of the opposing expert, which can usually be accomplished if he or she is familiar with the opposing expert's deposition. The expert should attend depositions of all technical witnesses. At depositions the expert can provide ad hoc information to an attorney that could make the depositions more meaningful or less damaging. Sometimes, an expert is asked to investigate an opposing expert in an effort to impeach that expert's credibility.

During direct examination, the expert must educate the fact finder. This individual is the most important witness in the case. He or she must effectively explain complex technical evidence to lay people. Often, an expert witness provides a hands-on demonstration of the software to the court during direct examination. The expert normally uses exhibits and materials prepared before trial. It is important that exhibits be presented because they are placed in the jury room at the end of the trial, and remain as a constant explanation and reminder to the jury during its deliberations.

Summary

Technical experts are essential in computer-related intellectual property litigation. They are needed because the complex technical issues are beyond the knowledge and understanding of the average layperson. Establishment of software piracy is difficult because judges and juries have insufficient knowledge of the technical elements required to prove the case; establishment of infringement is only possible with expert assistance. In order to establish software piracy, an expert must examine and analyze the software of both the plaintiff and the defendant. This software is normally very large, and similarities are very difficult to find. Initially, the expert performs a forensic investigation. He or she then helps with discovery, at depositions, and during cross-examination. Not only are experts essential in cases that go to trial, but they can be valuable in attaining satisfactory pretrial settlements; experts are as important during pretrial proceedings as they are during the trial itself. This article presents a methodology that simplifies the task of the expert.

Cash flow forecasting

Cash flow forecasting is (1) in a corporate finance sense, the modeling of a company or asset's future financial liquidity over a specific timeframe, and (2) in the context of the entrepreneur or manager, forecasting what cash will come into the business or business unit in order to ensure that outgoing cash can be managed so as to avoid it exceeding the cashflow coming in. Cash usually refers to the company's total bank balances, but often what is forecast is treasury position, which is cash plus short-term investments minus short-term debt. Cash flow is the change in cash or treasury position from one period to the next.

Methods (corporate finance)

The direct method of cash flow forecasting schedules the company's cash receipts and disbursements (R&D). Receipts are primarily the collection of accounts receivable from recent sales, but also include sales of other assets, proceeds of financing, etc. Disbursements include payment of accounts payable from recent purchases, payroll, debt service, dividends, etc. This direct R&D method is best suited to the short-term forecasting horizon of 30 days or so, because this is the period for which actual, as opposed to projected, data is available. (Bort, 1990)

The three indirect methods are based on the company's projected income statements and balance sheets. The adjusted net income (ANI) method starts with operating income (EBIT or EBITDA) and adds or subtracts changes in balance sheet accounts such as receivables, payables and inventories to project cash flow. The pro-forma balance sheet (PBS) method looks straight at the projected book cash account; if all the other balance sheet accounts have been correctly forecast, cash will be correct, too. Both the ANI and PBS methods are best suited to the medium-term (up to one year) and long-term (multiple years) forecasting horizons. Both are limited to the monthly or quarterly intervals of the financial plan, and need to be adjusted for the difference between accrual-accounting book cash and the often-significantly-different bank balances. (Association for Financial Professionals, 2005)

The third indirect approach is the accrual reversal method (ARM), which is similar to the ANI method. But instead of using projected balance sheet accounts, large accruals are reversed and cash effects are calculated based upon statistical distributions and algorithms. This allows the forecasting period to be weekly or even daily. It also eliminates the cumulative errors inherent in the direct R&D method when it is extended beyond the short-term horizon. But because the ARM allocates both accrual reversals and cash effects to weeks or days, it is more complicated than the ANI or PBS indirect methods. The ARM is best suited to the medium-term forecasting horizon. (de Caux, 2006)

Methods (entrepreneurial)

The simplest method is to have a spreadsheet that shows cash coming in from all sources out to at least 90 days, and all cash going out for the same period. If there is one thing entrepreneurs learn fast, it is to become very good at cashflow forecasting. This requires that the quantity and timings of receipts of cash from sales are reasonably accurate, which in turn requires judgement honed by experience.
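Both the direct R&D method and the entrepreneur's 90-day spreadsheet reduce to the same arithmetic: schedule cash in and cash out per period and track the running balance. A minimal sketch, with all figures invented, including the kind of "pay suppliers one period later" what-if such a spreadsheet enables:

```python
# Running cash balance from scheduled receipts and disbursements,
# plus a simple what-if: paying suppliers one period (~30 days)
# later. All figures are invented for illustration.

def running_balance(opening, receipts, disbursements):
    balance, projected = opening, []
    for cash_in, cash_out in zip(receipts, disbursements):
        balance += cash_in - cash_out
        projected.append(balance)
    return projected

receipts      = [12000, 9000, 15000, 7000]   # collections, asset sales, ...
disbursements = [10000, 11000, 9000, 8000]   # payables, payroll, debt service

print(running_balance(5000, receipts, disbursements))
# → [7000, 5000, 11000, 10000]

# What if we pay our suppliers one period later?
delayed = [0] + disbursements[:-1]
print(running_balance(5000, receipts, delayed))
# → [17000, 16000, 20000, 18000]
```

Any period in which the projected balance goes negative is an insolvency warning, which is exactly what the forecast exists to catch.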

Uses (corporate finance)

A cash flow projection is an important input into valuation of assets, budgeting and determining appropriate capital structures in LBOs and leveraged recapitalizations.

Uses (entrepreneurial)

The point of making the forecast of incoming cash is to manage the outflow of cash so that the business remains solvent. Judgement honed by experience of the industry concerned is essential, because it is rare for cash receipts to match sales forecasts exactly, and it is also rare for suppliers all to pay on time. A danger of using too much corporate-finance theory in cash flow forecasting for managing a business is that there can be non-cash items in the cashflow as reported under financial accounting standards; this goes to the heart of the difference between financial accounting and management accounting. These principles remain constant whether the cash flow forecasting is done on a spreadsheet, on paper, or on some other IT system. The section of the spreadsheet that shows cash out is thus the basis for what-if modeling, for instance: "what if we pay our suppliers 30 days later?"

References

• "Cash Forecasting", Richard Bort, Corporate Cash Management Handbook, Warren Gorham & Lamont, 1990
• "Cash Flow Forecasting", Tony de Caux, Treasurer's Companion, Association of Corporate Treasurers, 2006
• "Medium-Term Funds Flow Forecasting", Association for Financial Professionals, 2005

Cost estimation in software engineering

The ability to accurately estimate the time and/or cost taken for a project to come to its successful conclusion is a serious problem for software engineers. The use of a repeatable, clearly defined and well understood software development process has, in recent years, shown itself to be the most effective method of gaining useful historical data that can be used for statistical estimation. In particular, the act of sampling more frequently, coupled with the loosening of constraints between parts of a project, has allowed more accurate estimation and more rapid development times.

Methods

Popular methods for estimation in software engineering include:

• Parametric Estimating

• Wideband Delphi
• COCOMO
• SLIM
• SEER-SEM Parametric Estimation of Effort, Schedule, Cost, Risk
• Function Point Analysis
• Proxy-based estimating (PROBE) (from the Personal Software Process)
• The Planning Game (from Extreme Programming)
• Program Evaluation and Review Technique (PERT)
• Analysis Effort method
• PRICE Systems Founders of Commercial Parametric models that estimate the scope, cost, effort and schedule for software projects
• Evidence-based Scheduling Refinement of typical agile estimating techniques using minimal measurement and total time accounting; minimum time and staffing concepts based on Brooks's law

Cost-benefit analysis

Cost-benefit analysis is a term that refers both to:

• helping to appraise, or assess, the case for a project, programme or policy proposal, and
• an approach to making economic decisions of any kind.

Under both definitions the process involves, whether explicitly or implicitly, weighing the total expected costs against the total expected benefits of one or more actions in order to choose the best or most profitable option. The formal process is often referred to as either CBA (Cost-Benefit Analysis) or BCA (Benefit-Cost Analysis). Benefits and costs are often expressed in money terms, and are adjusted for the time value of money, so that all flows of benefits and flows of project costs over time (which tend to occur at different points in time) are expressed on a common basis in terms of their "present value." Closely related, but slightly different, formal techniques include cost-effectiveness analysis, economic impact analysis, fiscal impact analysis and Social Return on Investment (SROI) analysis. The latter builds upon the logic of cost-benefit analysis, but differs in that it is explicitly designed to inform the practical decision-making of enterprise managers and investors focused on optimizing their social and environmental impacts.

Theory

Cost-benefit analysis is often used by governments to evaluate the desirability of a given intervention. It is an analysis of the cost effectiveness of different alternatives in order to see whether the benefits outweigh the costs; the aim is to gauge the efficiency of the intervention relative to the status quo. It is heavily used in today's government, for instance to decide whether to introduce business regulation, build a new road, or offer a new drug through the state healthcare system. The guiding principle is to list all parties affected by an intervention and place a monetary value on the effect it has on their welfare as it would be valued by them. The costs and benefits of the impacts of an intervention are evaluated in terms of the public's willingness to pay for them (benefits) or willingness to pay to avoid them (costs). Inputs are typically measured in terms of opportunity costs: the value in their best alternative use. In practice, monetary values may also be assigned to less tangible effects such as the various risks that could contribute to partial or total project failure, such as loss of reputation, market penetration, or long-term enterprise strategy alignments. During cost-benefit analysis, a value must be put on, for example, human life or the environment, often causing great controversy. Because cost-benefit analysis aims to measure the public's true willingness to pay, analysts try to estimate costs and benefits either by using survey methods or by drawing inferences from market behavior. Constructing plausible measures of the costs and benefits of specific actions is often very difficult. For example, the cost-benefit principle says that we should install a guardrail on a dangerous stretch of mountain road if the dollar cost of doing so is less than the implicit dollar value of the injuries, deaths, and property damage thus prevented (R.H. Frank 2000). Similarly, a product manager may compare manufacturing and marketing expenses with projected sales for a proposed product and decide to produce it only if he expects the revenues to eventually recoup the costs; the process involves weighing the monetary value of initial and ongoing expenses against expected return.

Cost-benefit analysis attempts to put all relevant costs and benefits on a common temporal footing. This is usually done by converting the future expected streams of costs and benefits into a present value amount; cost-benefit calculations typically involve using time value of money formulas. A discount rate is chosen, which is then used to compute all relevant future costs and benefits in present-value terms. Most commonly, the discount rate used for present-value calculations is an interest rate taken from financial markets (R.H. Frank 2000). This can be very controversial, especially when governments use the technique: for example, a high discount rate implies a very low value on the welfare of future generations, which may have a huge impact on the desirability of interventions to help the environment. Empirical studies suggest that in reality people's discount rates do decline over time, and this feature is typically built into studies.

Application and history

The practice of cost-benefit analysis differs between countries and between sectors (e.g. transport, health) within countries. Some of the main differences include the types of impacts that are included as costs and benefits within appraisals, the extent to which impacts are expressed in monetary terms, and differences in the discount rate between countries.
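Since so much turns on the discount rate, it is worth seeing the machinery itself: the present value of a single future amount is one formula. The 5% rate and the benefit of 1,000 below are assumptions for illustration.

```python
# Present value of a single future amount: the discounting step that
# puts future costs and benefits on a common footing.
# Rate and amount are illustrative assumptions.

def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

# A benefit of 1,000 arriving ten years from now is worth far less today:
print(round(present_value(1000, 0.05, 10), 2))  # → 613.91
```

Note how sensitive the result is to the rate: at 10% the same benefit is worth only about 385 today, which is exactly why the choice of discount rate is controversial.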

The concept of CBA dates back to an 1848 article by Dupuit and was formalized in subsequent works by Alfred Marshall. The practical application of CBA was initiated in the US by the Corps of Engineers, after the Federal Navigation Act of 1936 effectively required cost-benefit analysis for proposed federal waterway infrastructure.[1] The Flood Control Act of 1939 was instrumental in establishing CBA as federal policy. It specified the standard that "the benefits to whomever they accrue [be] in excess of the estimated costs."[2]

Subsequently, cost-benefit techniques were applied to the development of highway and motorway investments in the US and UK in the 1950s and 1960s. An early and often-quoted, more developed application of the technique was made to London Underground's Victoria Line in the early 1960s. Over the last 40 years, cost-benefit techniques have gradually developed to the extent that substantial guidance now exists on how transport projects should be appraised in many countries around the world. In the UK, the New Approach to Appraisal (NATA) was introduced by the then Department for Transport, Environment and the Regions. This brought together cost-benefit results with those from detailed environmental impact assessments and presented them in a balanced way. NATA was first applied to national road schemes in the 1998 Roads Review but subsequently rolled out to all modes of transport. It is now a cornerstone of transport appraisal in the UK and is maintained and developed by the Department for Transport.[3]

The EU's 'Developing Harmonised European Approaches for Transport Costing and Project Assessment' (HEATCO) project, part of its Sixth Framework Programme, has reviewed transport appraisal guidance across EU member states and found that significant differences exist between countries. HEATCO's aim is to develop guidelines to harmonise transport appraisal practice across the EU.

Transport Canada has also promoted the use of CBA for major transport investments since the issuance of its Guidebook in 1994.[4] More recent guidance has been provided by the United States Department of Transportation and several state transportation departments, with discussion of available software tools for application of CBA in transportation, including HERS, BCA.Net, StatBenCost, CalBC, and TREDIS. Available guides are provided by the Federal Highway Administration[5][6], Federal Aviation Administration[7], Minnesota Department of Transportation[8], California Department of Transportation (Caltrans)[9], and the Transportation Research Board Transportation Economics Committee.[10] Later, CBA was also extended to assessment of the relative benefits and costs of healthcare and education in works by Burton Weisbrod.[11][12] The United States Department of Health and Human Services issued its CBA Guidebook.[13]

Agencies across the world rely on a basic set of key cost-benefit indicators, including the following:

• NPV (net present value)
• PVB (present value of benefits)
• PVC (present value of costs)
• BCR (benefit cost ratio = PVB / PVC)
• Net benefit (= PVB - PVC)
• NPV/k (where k is the level of funds available)
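The indicators listed above can be computed directly from discounted benefit and cost streams. A minimal sketch, with an assumed 6% discount rate and invented four-year streams:

```python
# Key cost-benefit indicators from discounted streams.
# The 6% rate and the four-year streams are invented.

def pv(stream, rate):
    """Present value of a stream; stream[0] occurs one period out."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream, start=1))

benefits = [0, 400, 400, 400]   # benefits start in year 2
costs    = [900, 50, 50, 50]    # large construction cost up front

pvb = pv(benefits, 0.06)
pvc = pv(costs, 0.06)
print(round(pvb - pvc, 2))  # NPV, i.e. net benefit → 33.54
print(round(pvb / pvc, 2))  # BCR → 1.03
```

With NPV > 0 and BCR > 1, this hypothetical project would pass the basic cost-benefit test.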

Return On Investment (ROI)

What does return on investment (ROI) mean? ROI is a performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio:

ROI = (gains from investment − cost of investment) / cost of investment

In the above formula, "gains from investment" refers to the proceeds obtained from selling the investment of interest. Return on investment is a very popular metric because of its versatility and simplicity. That is, if an investment does not have a positive ROI, or if there are other opportunities with a higher ROI, then the investment should not be undertaken.

Net present value

In finance, the net present value (NPV) or net present worth (NPW)[1] of a time series of cash flows, both incoming and outgoing, is defined as the sum of the present values (PVs) of the individual cash flows. In the case when all future cash flows are incoming (such as the coupons and principal of a bond) and the only outflow of cash is the purchase price, the NPV is simply the PV of future cash flows minus the purchase price (which is its own PV). NPV is a central tool in discounted cash flow (DCF) analysis, and is a standard method for using the time value of money to appraise long-term projects. Used for capital budgeting, and widely throughout economics, finance, and accounting, it measures the excess or shortfall of cash flows, in present value terms, once financing charges are met. The NPV of a sequence of cash flows takes as input the cash flows and a discount rate or discount curve and outputs a price. The converse process in DCF analysis, taking as input a sequence of cash flows and a price and inferring as output a discount rate (the discount rate which would yield the given price as NPV), is called the yield, and is more widely used in bond trading.
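The NPV definition above (the sum of the present values of all cash flows, with outflows negative) translates directly into code. The cash flows and the 10% rate below are assumptions for illustration.

```python
# NPV as defined above: sum of present values of all cash flows,
# with outflows negative. Flows and the 10% rate are invented.

def npv(rate, cash_flows):
    """cash_flows[0] occurs now; cash_flows[t] occurs t periods out."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Pay 250 today for 100 at the end of each of the next three years:
print(round(npv(0.10, [-250, 100, 100, 100]), 2))  # → -1.31
```

At a 10% discount rate this purchase narrowly fails the NPV test; at 9% the same flows have a positive NPV, illustrating how the chosen rate drives the decision.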

Keep in mind that the calculation for return on investment, and therefore the definition, can be modified to suit the situation; it all depends on what you include as returns and costs. The definition of the term in the broadest sense simply attempts to measure the profitability of an investment and, as such, there is no one "right" calculation. For example, a marketer may compare two different products by dividing the gross profit that each product has generated by its respective marketing expenses. A financial analyst, however, may compare the same two products using an entirely different ROI calculation, perhaps by dividing the net income of an investment by the total value of all resources that have been employed to make and sell the product. A high ROI means that investment gains compare favorably to investment costs. This flexibility has a downside, as ROI calculations can be easily manipulated to suit the user's purposes, and the result can be expressed in many different ways. When using this metric, make sure you understand what inputs are being used.

The ROI Concept

Return on Investment (ROI) analysis is one of several commonly used approaches for evaluating the financial consequences of business investments, decisions, or actions. In the last few decades, ROI has become a central financial metric for asset purchase decisions (computer systems, factory machines, or service vehicles, for example), for approval and funding decisions for projects and programs of all kinds (such as marketing programs, recruiting programs, and training programs), and for more traditional investment decisions (such as the management of stock portfolios or the use of venture capital). This section covers:

• The ROI Concept
• Simple ROI for Cash Flow and Investment Analysis
• Competing Investments: ROI From Cash Flow Streams
• ROI vs NPV, IRR, and Payback Period
• Other ROI Metrics

Most forms of ROI analysis compare investment returns and costs by constructing a ratio, or percentage. In most ROI methods, an ROI ratio greater than 0.00 (or a percentage greater than 0%) means the investment returns more than its cost. ROI analysis compares the magnitude and timing of investment gains directly with the magnitude and timing of investment costs.
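The marketer's product comparison described above can be made concrete. The gross-profit and marketing-spend figures below are hypothetical, and the function uses the standard (gain − cost) / cost form of the ROI formula.

```python
# The marketer's product comparison, in miniature.
# Gross-profit and marketing-spend figures are hypothetical.

def roi(gain, cost):
    """Standard form: (gain from investment - cost) / cost."""
    return (gain - cost) / cost

print(roi(gain=15000, cost=10000))  # product A → 0.5
print(roi(gain=9000, cost=5000))    # product B → 0.8
```

A financial analyst using net income over total resources employed would plug different inputs into the same ratio and could easily rank the two products differently, which is the flexibility (and the danger) described above.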

Simple ROI for Cash Flow and Investment Analysis

Return on investment is frequently derived as the "return" (incremental gain) from an action divided by the cost of that action. That is "simple ROI," as used in business case analysis and other forms of cash flow analysis. With simple ROI, incremental gains from the investment are divided by investment costs. A positive ROI (greater than 0%) means the investment returns more than its cost.

Simple ROI is the most frequently used form of ROI and the most easily understood. For example: what is the ROI for a new marketing program that is expected to cost $500,000 over the next five years and deliver an additional $700,000 in increased profits during the same time?

Simple ROI works well when both the gains and the costs of an investment are easily known and where they clearly result from the action. In complex business settings, however, it is not always easy to match specific returns (such as increased profits) with the specific costs that bring them (such as the costs of a marketing program), and this makes ROI less trustworthy as a guide for decision support. Simple ROI also becomes less trustworthy as a useful metric when the cost figures include allocated or indirect costs, which are probably not caused directly by the action or the investment.

One serious problem with using ROI as the sole basis for decision making is that ROI by itself says nothing about the likelihood that expected returns and costs will appear as predicted. ROI simply shows how returns compare to costs if the action or investment brings the results hoped for. ROI by itself, that is, says nothing about the risk of an investment. (The same is also true of other financial metrics, such as net present value NPV, internal rate of return IRR, and payback period.) For that reason, a good business case or a good investment analysis will also measure the probabilities of different ROI outcomes, and wise decision makers will consider both the ROI magnitude and the risks that go with it. Decision makers will also expect practical suggestions from the ROI analyst on ways to improve ROI by reducing costs, increasing gains, or accelerating gains.

Competing Investments: ROI From Cash Flow Streams

ROI and other financial metrics that take an investment view of an action or investment compare investment returns to investment costs. When potential investments compete for funds, and when other factors between the choices are truly equal, the investment (or action, or business case scenario) with the higher ROI is considered the better choice. However, each of the major investment metrics (ROI, NPV, IRR, and payback period) approaches the comparison differently, and each carries a different message; the next section (ROI vs. NPV, IRR, and Payback Period) compares the differing and sometimes conflicting messages from the different financial metrics.

This section illustrates ROI calculation from a cash flow stream for two competing investments. Consider two five-year investments competing for funding, Investment A and Investment B. Which is the better business decision? Analysts will look first at the net cash flow streams from each investment. The net cash flow data and comparison graph appear below.
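The simple-ROI arithmetic and the cash-flow comparison described above can be sketched in a few lines of Python. The marketing-program figures ($500,000 cost, $700,000 gain) come from the text; the Investment A and B streams are hypothetical examples.

```python
def simple_roi(gains, costs):
    """Simple ROI: incremental gains divided by investment costs."""
    return (gains - costs) / costs

def roi_from_cash_flow(stream):
    """ROI for a net cash flow stream: compare total inflows to total outflows."""
    inflows = sum(f for f in stream if f > 0)
    outflows = -sum(f for f in stream if f < 0)
    return (inflows - outflows) / outflows

# Marketing program from the text: $500,000 cost, $700,000 incremental profit.
print(f"Marketing program ROI: {simple_roi(700_000, 500_000):.0%}")  # 40%

# Two hypothetical five-year investments (year 0 outlay, then inflows).
investment_a = [-1000, 200, 300, 400, 500, 600]
investment_b = [-1000, 600, 500, 400, 300, 200]
# Both have the same ROI even though B returns cash sooner;
# simple ROI ignores timing, which is why analysts also look at NPV and IRR.
print(f"A: {roi_from_cash_flow(investment_a):.0%}, B: {roi_from_cash_flow(investment_b):.0%}")
```

The identical results for A and B illustrate the point of this section: ROI alone cannot distinguish investments that differ only in the timing of their cash flows.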

Payback period

(From Wikipedia, the free encyclopedia)

Payback period in capital budgeting refers to the period of time required for the return on an investment to "repay" the sum of the original investment. For example, a $1000 investment which returned $500 per year would have a two-year payback period. The time value of money is not taken into account. Payback period intuitively measures how long something takes to "pay for itself." All else being equal, shorter payback periods are preferable to longer payback periods. Payback period is widely used because of its ease of use despite recognized limitations, described below.

The term is also widely used in other types of investment areas, often with respect to energy efficiency technologies, maintenance, upgrades, or other changes. For example, a compact fluorescent light bulb may be described as having a payback period of a certain number of years or operating hours, assuming certain costs; here, the return to the investment consists of reduced operating costs. Although primarily a financial term, the concept of a payback period is occasionally extended to other uses, such as energy payback period (the period of time over which the energy savings of a project equal the amount of energy expended since project inception); these other terms may not be standardized or widely used.

Payback period as a tool of analysis is often used because it is easy to apply and easy to understand for most individuals, regardless of academic training or field of endeavour. When used carefully or to compare similar investments, it can be quite useful. As a stand-alone tool to compare an investment to "doing nothing," payback period has no explicit criteria for decision-making (except, perhaps, that the payback period should be less than infinity). Payback period does not specify any required comparison to other investments or even to not making an investment. An implicit assumption in the use of payback period is that returns to the investment continue after the payback period.

The payback period is considered a method of analysis with serious limitations and qualifications for its use, because it does not account for the time value of money, risk, financing or other important considerations such as the opportunity cost. Whilst the time value of money can be rectified by applying a weighted average cost of capital discount, it is generally agreed that this tool for investment decisions should not be used in isolation. Alternative measures of "return" preferred by economists are net present value and internal rate of return.

There is no formula to calculate the payback period, except in the simple and unrealistic case of an initial cash outlay followed by constant (or constantly growing) cash inflows. In general, an algorithm is needed to calculate the payback period; it is easily applied in spreadsheets. The typical algorithm reduces to the calculation of cumulative cash flow and the moment in which it turns from negative to positive. Additional complexity arises when the cash flow changes sign several times, i.e. it contains outflows in the midst or at the end of the project lifetime. The modified payback period algorithm may be applied then: first, the sum of all of the cash outflows is calculated; then the cumulative positive cash flows are determined for each period; the modified payback period is the moment in which the cumulative positive cash flow exceeds the total cash outflow.
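The cumulative cash flow algorithm just described can be sketched as follows; the linear interpolation inside the crossing period is a common spreadsheet refinement, not something the text above prescribes.

```python
def payback_period(cash_flows):
    """Return the period in which cumulative cash flow first turns
    non-negative, interpolating within that period; None if it never does."""
    cumulative = 0.0
    for period, flow in enumerate(cash_flows):
        previous = cumulative
        cumulative += flow
        if cumulative >= 0:
            # Fraction of the crossing period needed to cover the shortfall.
            return period - 1 + (-previous / flow) if period > 0 else 0.0
    return None  # cumulative cash flow never turns positive

# The $1000 investment returning $500 per year from the text:
print(payback_period([-1000, 500, 500, 500]))  # 2.0
```

Note that, as the text warns, this simple accumulation is only appropriate when the cash flow changes sign once; streams with later outflows call for the modified algorithm.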

COCOMO

(From Wikipedia, the free encyclopedia)

The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.

COCOMO was first published in 1981 in Barry W. Boehm's book Software Engineering Economics[1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Barry Boehm was Director of Software Research and Technology in 1981. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981.

References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed, and it was finally published in 2000 in the book Software Cost Estimation with COCOMO II[2]. COCOMO II is the successor of COCOMO 81 and is better suited for estimating modern software development projects; it provides more support for modern software development processes and an updated project database. The need for the new model came as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability and the use of off-the-shelf software components. This article refers to COCOMO 81.

COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, rough order of magnitude estimates of software costs, but its accuracy is limited due to its lack of factors to account for differences in project attributes (Cost Drivers). Intermediate COCOMO takes these Cost Drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.

Contents
• 1 Basic COCOMO
• 2 Intermediate COCOMO
• 3 Detailed COCOMO
• 4 Projects using COCOMO
• 5 See also
• 6 References
• 7 Further reading
• 8 External links

Basic COCOMO

Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC).

COCOMO applies to three classes of software projects:

• Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
• Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid requirements
• Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)

The basic COCOMO equations take the form:

Effort Applied = ab(KLOC)^bb [man-months]
Development Time = cb(Effort Applied)^db [months]
People Required = Effort Applied / Development Time [count]

The coefficients ab, bb, cb and db are given in the following table:

Software project    ab    bb    cb    db
Organic             2.4   1.05  2.5   0.38
Semi-detached       3.0   1.12  2.5   0.35
Embedded            3.6   1.20  2.5   0.32

Basic COCOMO is good for quick estimates of software costs. However, it does not account for differences in hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.

Intermediate COCOMO

Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessment of product, hardware, personnel and project attributes. This extension considers a set of four cost-driver categories, each with a number of subsidiary attributes:

• Product attributes
  o Required software reliability
  o Size of application database
  o Complexity of the product
• Hardware attributes
  o Run-time performance constraints
  o Memory constraints
  o Volatility of the virtual machine environment
  o Required turnaround time
• Personnel attributes
  o Analyst capability
  o Software engineering capability
  o Applications experience
  o Virtual machine experience
  o Programming language experience
• Project attributes
  o Use of software tools

00 1.00 1.30 1.10 1.24 1.90 0.00 1.23 1.2 Semi-detached 3.24 1.00 High 1.15 0.07 0.00 1. KLoC is the estimated number of thousands of delivered lines of code for the project. and EAF is the factor calculated above.95 0.87 1.00 1.82 0.85 Nominal 1.15 1. An effort multiplier from the table below applies to the rating. Software project ai Organic 3. The product of all effort multipliers results in an effort adjustment factor (EAF).82 0.91 0.11 1.30 1.00 1.15 1.29 1.17 1.21 1.10 1.EAF where E is the effort applied in person-months.19 1.88 0.66 1.07 1. [edit] Detailed COCOMO .04 Very High 1.42 1.15 1.70 Low 0.87 0.91 0.o o Application of software engineering methods Required development schedule Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" (in importance or value).00 1.56 0.08 1. Ratings Cost Drivers Product attributes Required software reliability Size of application database Complexity of the product Hardware attributes Run-time performance constraints Memory constraints Volatility of the virtual machine environment Required turnabout time Personnel attributes Analyst capability Applications experience Software engineer capability Virtual machine experience Programming language experience Project attributes Application of software engineering methods Use of software tools Required development schedule Very Low 0.00 1.94 0.06 1.16 1.00 1.14 1.0 Embedded 2.4.8 bi 1.20 The Development time D calculation uses E in the same way as in the Basic COCOMO.86 0.00 1.9 to 1. The coefficient ai and the exponent bi are given in the next table.10 The Intermediate Cocomo formula now takes the form: E=ai(KLoC)(bi).46 1.86 0.00 1.00 1.00 1.05 1.30 1.83 1.71 0.91 1.65 1.70 Extra High 1.00 1.12 1.21 1.08 0.13 1. Typical values for EAF range from 0.75 0.10 1.40 1.

Detailed COCOMO

Detailed COCOMO incorporates all characteristics of the intermediate version with an assessment of the cost driver's impact on each step (analysis, design, etc.) of the software engineering process. The detailed model uses different effort multipliers for each cost driver attribute; these Phase Sensitive effort multipliers are used to determine the amount of effort required to complete each phase.

Projects using COCOMO

Software Quality Factors

Till now we have been talking about software quality in general. What does it mean to be a quality product? We also looked at CMM in brief. We need to know the various quality factors upon which the quality of the software produced is evaluated. The various factors which influence the software are termed software factors. They can be broadly divided into two categories; the classification is done on the basis of measurability. The first category covers factors that can be measured directly, such as the number of logical errors, and the second category covers factors which can be measured only indirectly, for example maintainability. Each of the factors must nevertheless be measured for content and quality control.

A few factors of quality are mentioned below:

• Correctness - the extent to which a program satisfies its specification and fulfills the client's objective.
• Reliability - the extent to which a program is supposed to perform its function with the required precision.
• Efficiency - the amount of computing and code required by a program to perform its function.
• Integrity - the extent to which access to software and data is denied to unauthorized users.
• Usability - the labor required to understand, operate, prepare input and interpret output of a program.
• Maintainability - the effort required to locate and fix an error in a program.
• Flexibility - the effort needed to modify an operational program.
• Testability - the effort required to test a program to ensure its functionality.
• Portability - the effort required to move the program from one platform to another or to different hardware.

• Reusability - the extent to which the program or its parts can be used as building blocks or as prototypes for other programs.
• Interoperability - the effort required to couple one system to another.

Now as you consider the above-mentioned factors it becomes obvious that measuring all of them to some discrete value is quite an impossible task. Therefore, another method was evolved to measure the quality. A set of metrics is defined and used to develop expressions for each of the factors as per the following expression:

Fq = C1*M1 + C2*M2 + ... + Cn*Mn

where Fq is the software quality factor, the Cn are regression coefficients and the Mn are metrics that influence the quality factor. The metrics used in this arrangement are mentioned below:

• Auditability - the ease with which conformance to standards can be verified.
• Accuracy - the precision of computations and control.
• Communication commonality - the degree to which standard interfaces, protocols and bandwidth are used.
• Completeness - the degree to which full implementation of the required functionality has been achieved.
• Conciseness - the program's compactness in terms of lines of code.
• Consistency - the use of uniform design and documentation techniques throughout the software development.
• Data commonality - the use of standard data structures and types throughout the program.
• Error tolerance - the damage done when the program encounters an error.
• Execution efficiency - the run-time performance of a program.
• Expandability - the degree to which one can extend the architectural, data and procedural design.
• Hardware independence - the degree to which the software is de-coupled from its operating hardware.
• Instrumentation - the degree to which the program monitors its own operation and identifies errors that do occur.
• Modularity - the functional independence of program components.
• Operability - the ease of the program's operation.
• Security - the control and protection of programs and databases from unauthorized users.
• Self-documentation - the degree to which the source code provides meaningful documentation.
• Simplicity - the degree to which a program is understandable without much difficulty.
• Software system independence - the degree to which the program is independent of nonstandard programming language features, operating system characteristics and other environment constraints.
• Traceability - the ability to trace a design representation or actual program component back to the initial objectives.
• Training - the degree to which the software is user-friendly to new users.

There are various 'checklists' for software quality. One of them was given by Hewlett-Packard and has been given the acronym FURPS, for Functionality, Usability, Reliability, Performance and Supportability. Functionality is measured via the evaluation of the feature set and the program capabilities, the generality of the functions that are derived, and the overall security of the system.
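The weighted-sum expression for Fq above is straightforward to compute; in this sketch the coefficients and metric scores are made up for illustration, since in practice the Cn come from regression over historical project data.

```python
def quality_factor(coefficients, metrics):
    """Fq = C1*M1 + C2*M2 + ... + Cn*Mn"""
    if len(coefficients) != len(metrics):
        raise ValueError("need one coefficient per metric")
    return sum(c * m for c, m in zip(coefficients, metrics))

# Hypothetical maintainability factor built from three metric scores
# (self-documentation, modularity, simplicity), each normalized to 0..1.
fq = quality_factor([0.4, 0.35, 0.25], [0.8, 0.6, 0.9])
print(f"Fq = {fq:.3f}")  # 0.755
```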

Usability is assessed by considering human factors, overall aesthetics, consistency and documentation. Reliability is figured out by evaluating the frequency and severity of failure, the accuracy of output results, the mean time between failures (MTBF), the ability to recover from failure and the predictability of the program. Performance is measured by evaluating processing speed, response time, resource consumption, throughput and efficiency. Supportability combines the ability to extend the program, adaptability and serviceability (in other terms, maintainability), and also testability, compatibility, configurability and the ease with which a system can be installed.

Software Quality Attributes

Chapters 4 and 5 of Software Architecture in Practice are about "software quality attributes". This is what they call non-functional requirements like performance, reliability, security, testability and usability. These are, in fact, the main ones they talk about (though the book says "availability" when it should say "reliability"). The book claims that one of the main purposes of architecture is to ensure these attributes, and I go along with that, because most of these are global properties of systems. Chapter 4 talks about how to specify these attributes, and chapter 5 talks about how to achieve them. It does this by describing, for each attribute, tactics for achieving the attribute.

Of course, most of these are big topics; UIUC has courses on most of them. Moreover, these topics can be specialized by problem domain: performance means a different thing for programming scientific applications on supercomputers than it does for distributed business systems or real-time control systems. So the few pages that SAIP gives to each quality attribute are not nearly enough. Nevertheless, what the book says is important.

I think that tactics are patterns. The book says that patterns bundle tactics; in other words, patterns are concrete examples of how to use a few tactics together. Certainly some patterns are like this. But I've seen people write patterns that were the same thing as one of the tactics, too. Even though the book doesn't explain how to use any tactic, the lists that it gives should be useful for people who want to document patterns, because it gives an outline of possible patterns.

**************************************************************

Capability Maturity Model versus ISO 9000: An assessment
John R. Snyder
March 2003

Abstract

This paper serves as a general guideline for those who wish to implement a process improvement model but are unsure of which governing framework to select. The two dominant process improvement models in use today, the Capability Maturity Model and the ISO 9000 standard, will be illustrated, contrasted, and analyzed for applicability to software development environments. How the process model has become the dominant framework for software engineering activities is investigated, as is the distinction of a process model from a lifecycle model. Previous publications on this topic are analyzed for relevance to today's environment. The recent ISO 9000:2000 updates and revisions are discussed at a high level. The focus is on the ISO guidelines in the areas that are most relevant to software engineering, that is, ISO 9000-3.

Some general conclusions are developed as to the applicability of each of the process model standards for different types of software development organizations and business environments.

Introduction

The two most common process models in use today for software engineering are the Capability Maturity Model (CMM) and the International Organization for Standards (ISO) ISO 9000 standard. To embezzle the classic "To be or not to be" phrase of the ancient thespian Shakespeare: "To CMM, or ISO--that is the question". Choosing a process improvement framework is a daunting prospect for the uninitiated. The "alphabet soup" of acronyms and labyrinth of clauses can be confusing to interpret, and developing analytic comparisons between the two models can be problematic because of interpretation issues.

Mark C. Paulk of the Software Engineering Institute, in his 1994 paper "A Comparison of ISO 9001 and the Capability Maturity Model for Software" (Paulk, 1994), developed an analysis of the relationships between ISO 9001 and the CMM model. However, since the publication of that document, ISO 9001 has undergone a major revision: the 1994 standard, which consisted of a twenty-clause structure, now consists of only five clauses. In this document, the most recent model of the ISO 9000 standard will be compared and contrasted to the current CMM standard. This paper will attempt to analyze and assess how these two process models compare and contrast and, in addition, how applicable each respective model might be in your organization.

Process Model Defined

The Historical Perspective

Why is the topic of "process" synonymous with the creation of high-quality software by professionals? According to Dampier and Glickstein (2000, p. 4), "The quality of a software system is highly influenced by the quality of the process used to develop and maintain it".

Are you old enough to remember punch cards? The early computers and their software were certainly not "simple", but the smaller collection of configurations and permutations was perhaps more "straightforward" to manipulate, and the people who understood them made the creation and maintenance of software easier to manage. When computers began to get a foothold in academia and business in the 1970's, only the rare mission-critical software project was managed with any type of methodology framework. Indeed, the rudimentary nature of the tools in use through the early 1980's dictated that the pace of software construction was extremely slow and methodical--and therefore less prone to error. In the last ten to twenty years, the advent of high-level programming languages and the personal computer brought the ability to create software to a much larger group, but the responsibilities that accompanied this new ability were not always considered. Amateurish programming and get-rich-quick software schemes unleashed a Pandora's Box of software issues on a naive public. Horror stories of malfunctioning software are rampant today; software projects are defined as a "Death March" (Yourdon, 1997) more often than not. Software and the hardware systems that process the software have become increasingly complex. One could easily attribute many of the difficulties faced by software developers to the immaturity of the science--we do not have the luxury of hundreds of years of empirical experience.

From a historical perspective, the emerging field of Software Engineering may be doing better than the constant negativity in the news media would lead us to believe. Schulz, speaking to the failure rate of Information Technology projects, states that "IT is performing just as well as other disciplines". He goes on to make the assertion "Perhaps the problem is that IT is just newer, more active and being studied and reported more frequently" (2000, para. 2).

Complexity

Certainly, the complexity of software is one major factor that contributes to software project failures and to products that are laden with defects. "Software entities are more complex for their size than perhaps any other human construct" (Dorfman & Thayer, 1997, p. 14). Are quality control mechanisms that have been successful in other genres of manufacturing applicable to the creation of software? Many in the industry believe that the application of "engineering discipline to the development and maintenance of software" (Paulk, 2002) would bring Total Quality Management (TQM) concepts into the software development process. Thus, "process" and "life cycle" models for software engineering were created from the archetypes that interjected quality control into other lines of business and hard-goods manufacturing.

The Life Cycle Model

Often, there are misconceptions and confusion about what is a process model and what constitutes a life cycle model. Dorfman and Thayer (1997, p. 401) describe a life cycle model as "a model of the phases or activities that start when a software product is conceived and end when the product is no longer available for use". We know a life cycle model as the familiar events that make up software creation: requirements, design, coding, integration, testing, or some other variation (Dorfman & Thayer, 1997, p. 401). A life cycle model defines how the software product is assembled; it is equivalent to the assembly line in a software factory. Several different types of life cycle models exist, and the procedures may take place once in sequence, or in multiple iterations.

A process model, then, describes the "sub-activities or tasks within a phase or activity, the dependencies among them, and the conditions that must exist before the tasks can begin" (Dorfman & Thayer, 1997, p. 402). The process model is analogous to the operations manual for the software assembly line and the workers who run it. It operates not at the project level but at the enterprise level, and it provides a framework for the life cycle model to operate in. Typically, the organization will only implement one process model. Figure 1 shows the relationship between the organization-level process model and the project-level life cycle model. Note that each software project in the organization may use a different type of life cycle model.

Figure 1 - Process Model Role in the Organization

The Process Model

Organization process models are usually mentioned in the context of software quality assurance activities, or as part of an enterprise-wide Total Quality Management plan. The definition of a process model is analogous to the description of a Total Quality Management system: "a comprehensive set of management tools, management philosophies, and improvement methods" (Tingey, 2001, p. 5). The essential elements of a process model, according to Tingey (1997), are:

• Customer orientation
• Empowerment of employees
• Participative management
• Data-based decisions
• Continual improvement
• "Process" orientation
• Quantitative tools for process improvement

Pressman defines a quality assurance system as "the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management" (2001, p. 216). The quality system helps to objectify and quantify the activities that provide assurance to the organization that customer expectations are being met. Software quality assurance is the planning, measuring, controlling, and reporting--the improvement of quality measures throughout development activities (Pressman, 2001). This is appropriate as a mechanism of the "engineering discipline" mentioned previously (Paulk, 2002).

However, any practitioner of software engineering will quickly confirm that testing activities alone are not enough to introduce quality into software; testing alone is not software quality assurance. Quality must be viewed systemically through all phases of development, and the testing and quality control activities for software must be viewed as one component of a quality assurance system. Software is part of a system, and is only one element delivered to the customer. In order for the software to execute, the engineer must consider the hardware that it will be installed on, and the data set that it will use when the software runs. The process model used by an organization defines the umbrella quality assurance system that is used.

Many variants of process models exist. Some have been created with software development in mind, some are more slanted to hard manufacturing, and some to service-only organizations. What is the correct process model for your organization? Should you use the CMM structure, or the ISO 9000 paradigm?

The Capability Maturity Model (CMM)

The Software Engineering Institute developed the Capability Maturity Model (CMM) for software at Carnegie Mellon University. The Software Engineering Institute (SEI) is a federally funded research and development center sponsored by the U.S. Department of Defense through the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (2003, para. 1). The U.S. Department of Defense (DOD) recognized that in order to create and maintain the high-quality software systems that it needs, a scientifically developed process model was required. The DOD commissioned the SEI in 1986 with selfish motivations; however, the entire software community has since benefited from the

work of this Institute. In fact, the Software CMM is probably one of the most well known and most widely used models world-wide that is specific to software process improvement as of this date (Paulk, 1999).

The Capability Maturity Model for Software describes the principles and practices underlying software process maturity and is intended to help software organizations improve the maturity of their software processes in terms of an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The Software CMM is organized into five maturity levels, described in Table 1.

Maturity Level    Description
1 - Initial       The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.
2 - Repeatable    Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3 - Defined       The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.
4 - Managed       Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
5 - Optimized     Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Table 1 - CMM Maturity Levels

With the exception of level one, the maturity levels are further refined into 18 Key Process Areas (KPA's). Within the key process areas, there are 52 goals and 316 practices defined (Tingey, 1997). The 18 KPA's are organized by five logical groupings, called "common features", that serve to provide organization and categorization to the practices. The common features provide infrastructure to the process, and a high-level framework for the details of the key process areas. The key practices in each subject area provide a mechanism to achieve the respective goals. Here are the common features defined in the CMM version 1.1 (Tingey, 1997):

• Commitment to Perform (CO)

Commitment to Perform describes the actions the organization must take to ensure that the process is established and will endure. Commitment to Perform typically involves establishing organizational policies and senior management sponsorship.

• Ability to Perform (AP)

Ability to Perform describes the preconditions that must exist in the project or organization to implement the software process competently. Ability to Perform typically involves resources, organizational structures, and training.

• Activities Performed (AC)

Activities Performed describes the roles and procedures necessary to implement a key process area. Activities Performed typically involves establishing plans and procedures, performing the work, tracking it, and taking corrective action.

• Measurement and Analysis (ME)

Measurement and Analysis describes the need to measure the process and analyze the measurements. Measurement and Analysis typically includes examples of the measurements that could be taken to determine the status and effectiveness of the Activities Performed.

• Verifying Implementation (VE)

Verifying Implementation describes the steps to ensure that the activities are performed in compliance with the process that has been established. Verifying Implementation typically encompasses reviews and audits by management and software quality assurance.

Key Process Area Example

A specific example will better illustrate how the CMM would be used in a commercial environment from a pragmatic standpoint, and how the common features discussed previously provide umbrella organization to the KPA practices in a hierarchical fashion. For example, at the CMM level 3 "Defined" maturity level, the peer review is one of the seven key process areas to be satisfied. The "Peer Review" KPA is a good candidate to show how the CMM would be applied. Here is the text of that KPA (Carnegie Mellon University & TeraQuest Metrics, 2002):

Description: The purpose of Peer Reviews is to remove defects from the software work products early and efficiently. An important corollary effect is to develop a better understanding of the software work products and of the defects that might be prevented. Peer Reviews involve a methodical examination of software work products by the producers' peers to identify defects and areas where changes are needed. This key process area covers the practices for performing peer reviews. The practices identifying the specific software work products that undergo peer review are contained in the key process areas that describe the development and maintenance of each software work product. The specific products that will undergo a peer review are identified in the project's defined software process and scheduled as part of the software project planning activities, as described in Integrated Software Management.

Goals:
1. Peer review activities are planned.

2. Defects in the software work products are identified and removed.

This KPA maps into each of the common features, as do all of the process areas. The "Commitment to Perform" is a corollary to the specification because "The project follows a written organizational policy for performing peer reviews". The "Ability to Perform" is encompassed by the fact that the KPA must provide for "Adequate resources and funding are provided for performing peer reviews on each software work product to be reviewed". Each of the remaining practices is framed by one of the five common features similarly, and so on into each of the five common features (including examples). This example was truncated in the interest of brevity.

The CMM is a verbose guideline consisting of over 500 pages, so one might assume that it dictates every implementation detail, but that is not the case. We see that the CMM does not dictate exactly how the implementor is to execute the procedure; it does not state how the peer review is to be planned, only that it must be planned. This flexibility is intentional, so that each organization can tailor the specification to its culture and business goals.

The ISO 9000 Model

The International Organization for Standardization is a worldwide federation of national standards bodies. One would assume that the acronym for the International Organization for Standardization would be IOS. Apparently, the term ISO was chosen instead because "iso" in Greek means equal, and the association wanted to convey the idea of equality; that is, the idea that they develop standards to place organizations on an equal footing (Praxiom, 2002). The preparation of standards occurs through ISO technical committees of interested parties. The United States has input into the ISO standards through the American National Standards Institute (ANSI). ANSI had developed its own series of quality management and quality assurance standards (ANSI/ASQ Z1.15-1979); this work was later revised and renamed to align with the international guidelines developed by the ISO. The ANSI standards Q9000, Q9001, Q9002, and Q9004 are now closely aligned and consistent with the ISO standards of the same name (Frank, 1994).

Adoption of the ISO standards has been ubiquitous across the globe. Over 60 countries, including all members of the European Community, the United States, Canada, Mexico, Australia, New Zealand, and the Pacific Rim, have adopted the standards. Typically, a country permits only ISO registered companies to supply goods and services to government agencies and public utilities. Telecommunications equipment and medical devices are examples of product categories that must be supplied by ISO registered companies. In turn, manufacturers of these products often require their suppliers to become registered. Private companies such as automobile and computer manufacturers frequently require their suppliers to be ISO registered as well. For example, Chrysler, Ford, General Motors, and several truck companies have developed QS 9000, an automotive-specific variant of ISO 9000 (Quality System Requirements QS9000, 2002). Similarly, the telecommunications industry has developed a variant of the ISO standard that incorporates elements specific to that industry; the standard has been coined TL 9000. After adopting the standards, companies will put their suppliers on notice that they must register to become ISO certified or risk the loss of a business relationship. This type of motivating force has proven to drive the ISO standards into widespread acceptance across many types of industries, both manufacturing and service.

To become registered to one of the quality assurance system models contained in ISO 9000, a company's quality system and operations are scrutinized by third-party auditors for compliance to the standard and for effective operation. Upon successful registration, a company is issued a certificate from the registration body represented by the auditors. Like the CMM methodology, ISO 9000 describes the quality elements that must be present for a quality assurance system to be compliant with the standard, but it does not describe how an organization should implement these elements (Pressman, 2001).

ISO 9000:1994 Overview

The ISO 9000/Q9000-1994 family of standards consisted of the following main categories (note that the Q9000 standards are technically equivalent to the ISO 9000 standards):

• ISO 9000:1994, Quality management and quality assurance standards. Fundamentals and vocabulary horizontal to all categories.
• ISO 9001:1994, Quality systems: Model for quality assurance in design, development, production, installation and service.
• ISO 9002:1994, Quality systems: Model for quality assurance in production, installation and service.
• ISO 9003:1994, Quality systems: Model for quality assurance in final inspection and test.
• ISO 9004:1994, Quality management and quality system elements.

ISO 9000:2000 Revisions

The ISO standard underwent a major revision in the years leading up to the turn of the century, and the result was a set of updated standards now known as ISO 9000/Q9000-2000. The major changes included:

• The adoption of a process approach to quality management.
• Recognition of the needs of stakeholders (customer focus).
• Connection of quality management systems to business processes.
• Compatibility with other management system standards.
• Additional requirements for continual improvement.
• Guidelines for performance improvements.

• Simplified terminology: "subcontractor" is now "supplier"; "supplier" is now "organization"; "inspection and testing" is now "product verification and validation"; "quality system element" is now "process"; and "quality system" is now "interrelated processes".

One of the most obvious manifestations of the update is the fact that ISO 9002 and 9003 have been discontinued. Here are the consolidated standards of the ISO 9000:2000 series at a high level:

• ISO 9000:2000, Quality management systems: Fundamentals and vocabulary.
• ISO 9001:2000, Quality management systems: Requirements.
• ISO 9004:2000, Quality management systems: Guidance for performance improvement.

ISO is also working on a fourth new standard, ISO 19011, which will replace the old ISO 10011 quality auditing standards. The final version of this new standard is expected sometime this year. Another milestone related to the ISO 9000:2000 update is the looming deadline for companies to certify to the updated standard. The ISO 9001, 9002, and 9003 standards of 1994 will officially expire on December 15, 2003 (Praxiom, 2002); that is the cut-off date for companies to become ISO 9001:2000 certified.

ISO 9000 Relevance to Software Engineering

The ISO 9000:3 guideline provides an adaptation of the ISO 9001 standard to the field of software engineering. The 9000:3 standard was developed to satisfy the need for guidelines for processes and procedures that are specific to the creation and maintenance of software. ISO 9000:3 was approved as an American National Standard on August 18, 1998. The standard is listed as: ANSI/ISO/ASQ Q9000-3-1997, Guidelines for the Application of ANSI/ISO/ASQC Q9001-1994 to the Development, Supply, Installation and Maintenance of Computer Software. A revision to match ISO 9001:2000 has been assigned to the ISO/IEC JTC1/SC7 (i.e., software engineering standards) subcommittee to make it fully compatible with ISO 9001:2000 (Frank, et al., 2002).

In ISO 9000:3, ISO 9001 is included verbatim, and guidelines for adaptation of the "hard" manufacturing emphasis of ISO 9001 to software engineering are provided as required. For example, the 9000:3 requirement provides additional articles for quality planning (Frank, et al., 2002):

o Measurable quality requirements
o Use of a life cycle model
o Criteria for starting and ending each project phase
o Identification of reviews, tests, verification and validation activities
o Identification of configuration management techniques used
o Provision for detailed planning, specific responsibilities and authorities

ISO 9000:3 Example

The ISO 9000:3 specification consists of approximately twenty main "Quality Systems Requirements", each with several subheadings. It is out of the scope of this paper to list each of the requirements here, but an example will give a sense of how the specification is elaborated. As before, the review area of this specification will be illustrated. In the 9000:3 outline, section 4.4.6 lists the heading "Design review", and the review is mentioned in the context of section 4.4, "Design control". From the specification section 4.4.6 (Frank, et al., 2002):

Design Review: Representatives of all functions shall be present at appropriate reviews of design results. These reviews may be scheduled or unscheduled. If required, the customer can be a part of the review meeting. The documented procedure should include the following details:

o Topics to be reviewed
o Chair of the review and review participants
o Records and actions of the meetings are kept
o Review methods
o Agenda setting
o Review guidelines for participants
o Review metrics
o Corrective action procedures for the meeting

All known deficiencies from the design review meeting should be resolved before permitting project activities to proceed to the next step.

This requirement dictates the general approach and methodologies that are to be used in software design reviews. As one can see, the specification is broad enough to give individual organizations flexibility in how they implement the requirements, and it is up to each organization to interpret the requirement and make it applicable to their business goals. For example, the ISO requirement states that review metrics must be included in the review procedures, but it does not articulate what metrics or how they are to be collected.

Comparisons, Contrasts and Applicability

In general, the CMM and ISO 9000 address similar issues and have the common concern of quality and process management. However, the genesis of each framework is distinctly disparate. The ISO 9000 standard is intentionally written for a wide range of industries other than software; hardgoods manufacturing was the original focus for this specification. How many times have you seen the proclamation on the banner of a rusting factory, "ISO 9001 Certified"? The ISO focus is the customer-supplier relationship, attempting to reduce a customer's risk in choosing a supplier. In contrast, the CMM framework was created from the ground up to be specific to the software industry. Specifically, the CMM strength is the attention on the software supplier to improve its internal processes to achieve a higher quality product for the benefit of the customer.

The CMM has more depth than the ISO standards. The CMM is over 500 pages long, while the 9001:2000 and 9000:3 criteria combined make up only about 60 pages of text. Verbosity in itself does not make a standard better; however, one can get a sense of the depth of the CMM compared to the ISO by the peer review example presented previously. The ISO standard gives brief guidelines for conducting a review in the context of a "Design review", and that standard was presented in its entirety in this paper; it was not practical to do the same with the CMM parallel. The peer review example presented here gives insight into how the ISO standard is shallow compared to the CMM: the ISO states that these items should be present, and the CMM states this also, but the CMM identifies the purpose and focuses on how this activity will benefit the organization.

In essence, the ISO standard implies the waterfall lifecycle, where a review would be a one-time occurrence for the software design. Contrast this one-time review approach to the CMM peer review KPA, which is to be universally and liberally applied throughout an iterative lifecycle. Although it would be possible for an organization to tailor the ISO specification, the CMM emphasizes a process of continuous improvement, moving from one level of achievement to the next, whereas the ISO 9000 concept is to follow a set of standards to make success repeatable. Once an organization has met the criteria to be ISO certified by an independent audit, the next step is only to maintain that level of certification; once certified, your work is mostly complete. Not true with the CMM: by definition, the CMM is an on-going process of evaluation and improvement, and even at the highest level of maturity in the CMM, the focus is on continuous improvement. Conclusion: The Capability Maturity Model is better suited to organizations that are currently using, or plan to implement, an iterative lifecycle.

Another aspect of the business model is the scope of your market. Remember that ISO 9001 is intended to be a supplier certification vehicle, so it benefits the customer more so than the supplier. ISO 9000 is, by definition, an international standard. As of December 31, 2002, ISO 9000 certificates had been issued in 150 countries (Praxiom, 2002). So if your business model takes your products into countries where the ISO standard is more widely recognized than the CMM, the decision on which model to implement may be made for you by the marketplace: your customers may want you to become ISO 9001 certified, the market you sell in may expect the status, and your business competitors may already be ISO certified. Conclusion: Your organization may benefit in terms of customer relations and market status by becoming ISO 9001 certified.

Conclusions

The 20 elements of the ISO 9001:1994 standard that Mark Paulk used in his original CMM to ISO comparison paper are now gone (Paulk, 1995). Nevertheless, his statement "Although the CMM does not adequately address some specific issues, in general it encompasses the concerns of ISO 9001. The converse is less true" is still applicable (p. 9). His more recent work, which takes into account the revisions of the ISO 9001:2000 standard, confirms this: "A Level 2 or 3 organization should have little difficulty in obtaining an ISO 9001 certificate" (Paulk, 2002, p. 27).

It is the aspect of continual improvement that tilts the scales in favor of the CMM for software organizations. Software organizations are faced with the integral component of changing technologies in their business models and must be constantly re-inventing themselves to keep pace with that change. The CMM's concept of continuous improvement is most beneficial to such organizations: even at the highest level of maturity, the focus is "continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies" (Paulk, 1999, p. 30). The ISO, in contrast, functions at a more abstract level than the CMM, where even at the highest level of certification your work is mostly complete. Conclusion: A software organization will be better positioned to accommodate technology evolution by embracing the CMM.

When the ISO 9001:2000 revisions are placed under a microscope, it appears that the goals were to make the standard more like the CMM; for example, the recognition of the needs of stakeholders (customer focus) and the addition of requirements for continual improvement. In addition, it is important not to focus solely on a scorecard of certification status. As Mark Paulk states, "focusing on achieving a maturity level or certification without improving process performance is a real danger" (2002).

This paper has analyzed the evolution of software engineering into the complex and challenging discipline that it is today. It determined the difference between a lifecycle model and a process model, and discovered how a process model benefits a software development organization. Two of the most common process models in use were briefly compared and contrasted: the CMM and the ISO. Have we answered the question, which is the best for your organization? The answer is that it is impossible to authoritatively state that one model is superior given the vast variables in product, culture, and business environment. In the end, it is up to the individual organization to make the best choice, the CMM or the ISO. As demonstrated here, the content of the process model may not be as important as the customer's expectations, the ultimate benchmark.

______________________________________________________________________________

Project Risk Management: Identifying Risks and Assessment Process

Risk is one of the few certainties of life. Risk involves choice and the uncertainty that is tagged with choice; risk thus includes a combination of uncertainty, changes, and choice. It may concern future happenings. It may involve changes concerning the project, such as change in customer requirements, change in development environment and technologies, or change in the opinions and actions of leading team members. It also involves choices, which may be regarding development tools and methods, the resources, and the quality standards adopted. Risk can be defined as an event or circumstance that is a threat to the execution of the project: a risk is something a software project may undergo that causes its planning to be in jeopardy. Risk management could thus be viewed as a preparatory system.

Unmanaged risk is one of the main causes of project failure. Two main risk strategies are reactive and proactive. The majority of software teams rely on reactive risk strategies: the project is monitored for risks, and resources are assigned to deal with them when the risks turn into problems. The team moves to fire-fighting mode to correct the problem, and when this happens and the problem is unresolved, the project is in chaos. The more superior strategy is the proactive one, in which potential risks are identified, assessed, and ranked before they become problems. Identifying risks beforehand helps the team either to avoid them or to manage them effectively. In the words of Peter Drucker, "while it is futile to try to eliminate risk, and questionable to try to minimize it, it is essential that the risks taken be the right risks".

Risk identification is a systematic approach to specifying the threats in a project. In this process, all the resources of the software team, including managers, should be involved. A project team can list all risks, no matter how remote. Risks can be categorized as project risks, technical risks, and market (business) risks. Project risks are associated with budget, schedule, personnel, and customer requirements, and their impact on project delivery. Technical risks are associated with the design, implementation, interfaces, verification, ambiguity in the specification, obsolescence, and so on. Business risk is associated with the market, losing senior management support or focus, budgetary requirements not being met, and the like. All of these categories have generic and product-specific risks attached to them. Identifying risks and performing assessment and analysis help the software team to understand and prevent potential problems and to mitigate the degree of risk in software projects to a large extent.

One way of identifying risks is a risk item checklist [KAR96]. The checklist can attempt to focus on predictable risks in categories such as the size of the product, the business impact, the process that is adopted, the environment used for development and testing, the complexity of the system, the team's understanding of the product, and the knowledge and experience of the staff. Assessing overall project risk can be done with proposed checklists such as [SEI93] and [KEI98], where the questions have been derived from risk data obtained by surveys conducted on project managers in different countries. The checklist can have relevant questions and answers to help the planner determine the impact; if any of the questions has a negative answer, the manager should initiate mitigation steps. Another method is the guidelines given for software risk identification and abatement by the US Air Force [AFC88]. In this approach, the project manager identifies the risk drivers that affect the risk components: performance, cost, support, and schedule. The impact of each driver on each component is categorized as catastrophic, critical, marginal, or negligible. The category versus component matrix has a characterization of the potential consequences described, and the impact category is decided based on the characterization that best fits the scenario. Three factors that affect the consequences of the occurrence of a risk are its nature, its scope, and its timing.

Risk projection, or estimation, attempts to rate each risk based on the probability of its occurrence and the consequences of the problems associated with it. It is important to quantify the degree of uncertainty and loss associated with risks. Risk assessment can be computed in the form [CHA89]: [rj, lj, xj], where rj is the risk, lj is the likelihood of the risk, and xj is the impact of the risk. Once the risks, or what can go wrong, are identified, they can be ranked by probability of occurrence and impact. The overall risk exposure RE is determined by RE = P × C, where P is the probability of occurrence and C is the cost incurred if the risk occurs. Risk exposure needs to be computed for each risk and its cost estimated; the total risk exposure for all relevant risks can be used to adjust the cost estimate of the project. The list of risks is sorted by probability and impact, and a cutoff is determined to pinpoint the risks that require detailed attention. A mitigation, monitoring, or management plan should be developed for the risks falling within the cutoff. A Risk Mitigation, Monitoring and Management (RMMM) plan ensures that the high risks in the list are tackled and that contingency planning is done; the contingency plan enables the team to respond with controlled and effective measures. Risks should be re-evaluated periodically during the course of the project life cycle, as their probability and impact may keep changing.
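The projection arithmetic above can be sketched in a few lines. The risk names, probabilities, and costs below are hypothetical, and the exposure cutoff is an assumed management choice rather than a figure from the text; this is only an illustration of RE = P × C and of ranking risks for RMMM attention.

```python
# Illustrative sketch of risk projection: each risk is a triple
# (description, probability of occurrence, cost if it occurs),
# mirroring the [rj, lj, xj] form cited from [CHA89].
risks = [
    ("Key developer leaves mid-project",   0.30, 40000),
    ("Customer changes core requirements", 0.50, 60000),
    ("Third-party component is obsolete",  0.10, 25000),
]

# Risk exposure RE = P x C for each risk.
exposures = [(name, p, c, p * c) for (name, p, c) in risks]

# Rank by exposure so the highest risks get detailed attention first.
exposures.sort(key=lambda r: r[3], reverse=True)

# An assumed cutoff (RE >= 10000) pinpoints the risks that need a
# mitigation, monitoring, and management (RMMM) plan.
needs_rmmm = [r for r in exposures if r[3] >= 10000]

# The total exposure can be used to adjust the project cost estimate.
total_re = sum(r[3] for r in exposures)
print(needs_rmmm[0][0])  # highest-exposure risk
print(total_re)
```

Re-running the same computation as probabilities and impacts change over the project life cycle is exactly the periodic re-evaluation the text calls for.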

For the assessment to be useful, a risk referent level must be defined. This may be represented by risk components such as performance, cost, support, and schedule. In software risk analysis, a risk referent level has a single referent point, or break point, at which the decision has to be made to proceed with the project or to terminate it. During risk assessment we define the risk referent levels, attempt to develop a relationship between each [rj, lj, xj] triple and each of the referent levels, predict the referent points, and try to predict how compound combinations of risks affect the referent points. In continuing risk management, three basic processes are: monitoring identified risks, monitoring identified assumptions, and identifying new risks. Risk management, in concept, refers to making decisions based on an evaluation of the factors that pose a threat, and it is critical to the overall success of a project. Incorporating disciplined risk analysis and assessment techniques increases the quality of the software being produced by minimizing or eradicating risks.

Software prototyping

From Wikipedia, the free encyclopedia

Software prototyping refers to the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur during software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. A prototype typically simulates only a few aspects of the features of the eventual program, and may be completely different from the eventual implementation. The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to describe and prove requirements that developers have not considered, so "controlling the prototype" can be a key factor in the commercial relationship between developers and their clients.[1] Interaction design in particular makes heavy use of prototyping with that goal.

Prototyping has several benefits: the software designer and implementer can obtain feedback from the users early in the project, and the client and the contractor can compare whether the software made matches the software specification, according to which the software program is built.
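In that spirit, a referent-level check might look like the sketch below. The cost and schedule referent points and the projected overruns are invented for illustration; the text does not prescribe a concrete rule for combining compound risks, so the simple comparison here is only one possible reading.

```python
# Hypothetical referent points (break points) for two risk components.
# Crossing a referent level means the break point has been reached and
# the proceed/terminate decision must be made.
COST_REFERENT_OVERRUN = 0.20      # assumed: stop if cost overrun >= 20%
SCHEDULE_REFERENT_OVERRUN = 0.25  # assumed: stop if schedule slip >= 25%

def proceed(cost_overrun: float, schedule_slip: float) -> bool:
    """Return True to proceed, False to stop at the break point."""
    return (cost_overrun < COST_REFERENT_OVERRUN
            and schedule_slip < SCHEDULE_REFERENT_OVERRUN)

print(proceed(0.10, 0.15))  # within both referent levels -> True
print(proceed(0.30, 0.10))  # cost referent point crossed -> False
```

A real assessment would relate each [rj, lj, xj] triple to these levels and account for risks compounding one another, which this sketch deliberately leaves out.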

. which led to higher software costs and poor estimates of time and cost.[citation needed] The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique. The degree of completeness and the techniques used in the prototyping have been in development and debate since its proposal in the early 1970s. The practice of prototyping is one of the points Fred Brooks makes in his 1975 book The Mythical Man-Month and his 10-year anniversary article No Silver Bullet. according to which the software program is built. Prototyping can also avoid the great expense and difficulty of changing a finished software product.[6] This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation. It also allows the software engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone.software specification.

Contents

• Overview
• Dimensions of prototypes: Horizontal Prototype; Vertical Prototype
• Types of prototyping: Throwaway prototyping; Evolutionary prototyping; Incremental prototyping; Extreme prototyping
• Advantages of prototyping
• Disadvantages of prototyping
• Best projects to use prototyping
• Methods: Dynamic systems development method; Operational prototyping; Evolutionary systems development; Evolutionary rapid development; Scrum
• Tools: Screen generators, design tools & Software Factories; Application definition or simulation software; Requirements Engineering Environment; LYMB; Non-relational environments; PSDL
• Notes
• References

Overview

The process of prototyping involves the following steps:

1. Identify basic requirements. Determine basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.
2. Develop initial prototype.

The initial prototype is developed that includes only user interfaces (see Horizontal Prototype, below).

3. Review. The customers, including end-users, examine the prototype and provide feedback on additions or changes.
4. Revise and enhance the prototype. Using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, then a repeat of steps 3 and 4 may be needed.

Dimensions of prototypes

Nielsen summarizes the various dimensions of prototypes in his book Usability Engineering.

Horizontal Prototype

A common term for a user interface prototype is the horizontal prototype. It provides a broad view of an entire system or subsystem, focusing on user interaction more than low-level system functionality, such as database access. Horizontal prototypes are useful for:

• Confirmation of user interface requirements and system scope
• A demonstration version of the system to obtain buy-in from the business
• Developing preliminary estimates of development time, cost and effort

Vertical Prototype

A vertical prototype is a more complete elaboration of a single subsystem or function. It is useful for obtaining detailed requirements for a given function, with the following benefits:

• Refinement of database design
• Obtaining information on data volumes and system interface needs, for network sizing and performance engineering
• Clarifying complex requirements by drilling down to actual system functionality

If a project is changed after considerable work has been done, then small changes could require large efforts to implement, since software systems have many dependencies. Making changes early in the development lifecycle is extremely cost effective, since there is nothing at that point to redo.

Types of prototyping

Software prototyping has many variants. However, all of the methods are in some way based on two major types of prototyping: Throwaway Prototyping and Evolutionary Prototyping.

Throwaway prototyping

Also called close-ended prototyping, Throwaway or Rapid Prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system. When this has been achieved, the prototype model is 'thrown away', and the system is formally developed based on the identified requirements.[7]

The most obvious reason for using Throwaway Prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Rapid Prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. Speed is crucial in implementing a throwaway prototype, since with a limited budget of time and money little can be expended on a prototype that will be discarded.

Another strength of Throwaway Prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier to grasp how the system will work. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements and the subsequent construction of a valid and usable system from the user's perspective via conventional software development models. It has been asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater enhancement to software productivity overall.
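As a concrete, if toy, illustration of a throwaway prototype, the sketch below fakes a "search customers" feature with canned data so users can react to inputs and outputs before anything real is built. The function name and the canned records are invented for this example; the point is that the model is quick, informal, and destined to be discarded.

```python
# Throwaway prototype: mock a search feature with canned data.
# No database, no error handling; just enough behavior for users to
# confirm (or correct) the requirement before real development starts.
CANNED_CUSTOMERS = [
    {"name": "Ada Example", "city": "London"},
    {"name": "Bob Sample", "city": "Paris"},
]

def search_customers(term: str):
    """Pretend search over hard-coded records."""
    term = term.lower()
    return [c for c in CANNED_CUSTOMERS if term in c["name"].lower()]

print(search_customers("ada"))
# -> [{'name': 'Ada Example', 'city': 'London'}]
```

Once users agree that this is the interaction they want, the prototype is thrown away and the real system is specified and built with conventional methods.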

SUMMARY: In this approach the prototype is constructed with the idea that it will be discarded, and that the final system, a valid and usable system from the user's perspective, will then be built from scratch via conventional software development models. The steps in this approach are:

1. Write preliminary requirements
2. Design the prototype
3. User experiences/uses the prototype, specifies new requirements
4. Repeat if necessary
5. Write the final requirements
6. Develop the real product

One method of creating a low-fidelity throwaway prototype is paper prototyping: the prototype is implemented using paper and pencil, and thus mimics the function of the actual product, but does not look at all like it. Not exactly the same as throwaway prototyping, but certainly in the same family, is the use of storyboards, animatics or drawings. These are non-functional implementations that show how the system will look. Another method of easily building a high-fidelity throwaway prototype is to use a GUI builder to create a click dummy, a prototype that looks like the goal system but does not provide any functionality.[8] Prototypes can be classified according to the fidelity with which they resemble the actual product in terms of appearance, interaction and timing.

Evolutionary prototyping

Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway prototyping. The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the evolutionary prototype, when built, forms the heart of the new system, onto which the improvements and further requirements will be built.

"…evolutionary prototyping acknowledges that we do not understand all the requirements and builds only those that are well understood."[5]

This technique allows the development team to add features, or make changes, that couldn't be conceived during the requirements and design phase. When developing a system using evolutionary prototyping, the system is continually refined and rebuilt. For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done"; it is always maturing as the usage environment changes. "…we often try to define a system using our most familiar frame of reference—where we are now. We make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered."[7]
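As a rough illustration of the click-dummy idea, the interface can be reduced to nothing but screens and the navigation between them. The sketch below (a minimal assumption, with hypothetical screen and button names, not any particular GUI builder's output) shows how a user could "walk" the interface before any real functionality exists:

```python
# Minimal sketch of a "click dummy": screens and navigation only, no
# business logic. Screen and button names here are hypothetical examples.
SCREENS = {
    "login":        {"Sign in": "dashboard"},
    "dashboard":    {"New order": "order_form", "Log out": "login"},
    "order_form":   {"Submit": "confirmation", "Cancel": "dashboard"},
    "confirmation": {"Back": "dashboard"},
}

def click(current_screen: str, button: str) -> str:
    """Return the screen a button click navigates to (no real behavior)."""
    return SCREENS[current_screen][button]

# A user can exercise the flow and give feedback before any code exists:
screen = click("login", "Sign in")      # -> "dashboard"
screen = click(screen, "New order")     # -> "order_form"
```

Because the dummy carries no logic, the flow can be rearranged in minutes in response to user feedback, which is the whole point of a throwaway prototype.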

To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own, and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.[9]

In evolutionary prototyping, developers can focus on developing the parts of the system that they understand, instead of working on developing the whole system at once. Evolutionary prototypes have an advantage over throwaway prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.

"It is not unusual within a prototyping environment for the user to put an initial prototype to practical use while waiting for a more developed version… The user may decide that a 'flawed' system is better than no system at all."

Incremental prototyping

The final product is built as separate prototypes. At the end, the separate prototypes are merged in an overall design.[10]

Extreme prototyping

Extreme prototyping as a development process is used especially for developing web applications. Basically, it breaks down web development into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase, the services are implemented. The process is called extreme prototyping to draw attention to the second phase of the process, where a fully functional UI is developed with very little regard to the services other than their contract.
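The second phase's simulated services layer can be sketched as follows. This is a minimal illustration under assumed names (the `OrderService` contract and its methods are hypothetical, not part of any particular framework): the UI is written against a contract, and a canned implementation stands in until the real services arrive in phase three.

```python
from typing import Protocol

class OrderService(Protocol):
    """The service contract the UI is written against (hypothetical)."""
    def list_orders(self, customer_id: int) -> list[str]: ...

class SimulatedOrderService:
    """Phase-two stand-in: returns canned data so the fully functional UI
    can be exercised before any real service exists."""
    def list_orders(self, customer_id: int) -> list[str]:
        return [f"demo-order-{customer_id}-1", f"demo-order-{customer_id}-2"]

def render_orders(service: OrderService, customer_id: int) -> str:
    # The UI layer depends only on the contract; swapping in the real
    # implementation in phase three requires no UI changes.
    return "\n".join(service.list_orders(customer_id))
```

Because the UI touches only the contract, replacing `SimulatedOrderService` with a real implementation later is a drop-in change, which is what makes the second phase safe to build so early.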

Advantages of prototyping

There are many advantages to using prototyping in software development – some tangible, some abstract.[8]

Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software.[11]

Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, so they can provide better and more complete feedback and specifications.[7] The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product with greater tangible and intangible quality. The final product is more likely to satisfy the users' desire for look, feel and performance.

Disadvantages of prototyping

Using, or perhaps misusing, prototyping can also have disadvantages.

Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications, or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality, it may not scale well if it is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.[11]

User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for the final system. If users are able to require that all proposed features be included in the final system, this can lead to conflict.

Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the developer has committed to delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements.

Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)

Excessive development time of the prototype: A key property of prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they may well try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing it. Users can also become stuck in debates over details of the prototype, holding up the development team and delaying the final product.

Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just jump into prototyping without bothering to retrain their workers as much as they should. A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often overlooked need for developing corporate and project-specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result.

Best projects to use prototyping

It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users. It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.

Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes the coding needed to perform the system functions may be too intensive, and the potential gains that prototyping could provide too small.[7]

Prototyping is especially good for designing good human-computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human-computer interface design."[8]

Methods

There are few formal prototyping methodologies, even though most agile methods rely heavily upon prototyping techniques.

Dynamic systems development method

Dynamic Systems Development Method (DSDM)[18] is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands upon most understood definitions of a prototype: according to DSDM, the prototype may be a diagram, a business process, or even a system placed into production. DSDM prototypes are intended to be incremental, evolving from simple forms into more comprehensive ones, and may be throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth then depth) or vertically (each section built in detail, with additional iterations detailing subsequent sections), and can eventually evolve into final systems.

The four categories of prototypes recommended by DSDM are:

• Business prototypes – used to design and demonstrate the business processes being automated.
• Usability prototypes – used to define, refine, and demonstrate user interface design usability (accessibility, look and feel, etc.).
• Performance and capacity prototypes – used to define, demonstrate, and predict how systems will perform under peak loads, as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time, etc.).
• Capability/technique prototypes – used to develop, demonstrate, and evaluate a design approach or concept.

The DSDM lifecycle of a prototype is to:

1. Identify the prototype
2. Agree to a plan
3. Create the prototype
4. Review the prototype

Operational prototyping

Operational Prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary prototyping with conventional system development. "[It] offers the best of both the quick-and-dirty and conventional-development worlds in a sensible manner."[5]
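The combination Davis describes, a stable evolutionary baseline with throwaway experiments layered on top, can be sketched as a toy example. Everything here is illustrative: the feature names are hypothetical and this is not Davis's own notation.

```python
# Toy sketch of operational prototyping: a conventionally built baseline,
# plus a throwaway layer of experimental hacks evaluated by users at a site.
baseline = {"search", "report"}        # well-understood features, built conventionally
throwaway: set[str] = set()            # quick experimental hacks at one site
enhancement_requests: list[str] = []   # liked changes, sent back to the dev team

def log_request(feature: str) -> None:
    """The prototyper logs a user request and hacks it onto the site's copy."""
    throwaway.add(feature)

def evaluate(feature: str, liked: bool) -> None:
    """The throwaway hack is always discarded; only the *request* survives."""
    throwaway.discard(feature)
    if liked:
        enhancement_requests.append(feature)

log_request("export_csv")
evaluate("export_csv", liked=True)

# Later, the development team rebuilds the liked features into a new
# evolutionary baseline using conventional methods:
baseline |= set(enhancement_requests)
```

The key invariant is that nothing from the throwaway layer is ever merged directly; liked experiments become requirements and are re-implemented properly, which is how quality avoids being "retrofitted" onto a quick hack.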

Davis' belief is that trying to "retrofit quality onto a rapid prototype" is not the correct approach when trying to combine the two approaches. His idea is to engage in an evolutionary prototyping methodology and rapidly prototype the features of the system after each evolution. The specific methodology follows these steps:[5]

• An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood.
• Copies of the baseline are sent to multiple customer sites along with a trained prototyper.
• At each site, the prototyper watches the user at the system.
• Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it. This frees the user from having to record the problem, and allows them to continue working.
• After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system.
• The user now uses the new system and evaluates it. If the new changes aren't effective, the prototyper removes them.
• If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team.
• The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods.

Obviously, a key to this method is having well-trained prototypers available to go to the user sites. The Operational Prototyping methodology has many benefits in systems that are complex and have few known requirements in advance. Designers develop only well-understood features in building the evolutionary baseline, while using throwaway prototyping to experiment with the poorly understood features.

Evolutionary systems development

Evolutionary Systems Development is a class of methodologies that attempt to formally implement Evolutionary Prototyping. One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development. Systemscraft was designed as a 'prototype' methodology that should be modified and adapted to fit the specific environment in which it was implemented, not as a rigid 'cookbook' approach to the development process. "It is now generally recognised[sic] that a good methodology should be flexible enough to be adjustable to suit all kinds of environment and situation…"[7]

The basis of Systemscraft, not unlike evolutionary prototyping, is to create a working system from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on traditional analysis being used throughout the development of the system.

Evolutionary rapid development

Evolutionary Rapid Development (ERD)[12] was developed by the Software Productivity Consortium, a technology development and integration agent for the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA).

Fundamental to ERD is the concept of composing software systems based on the reuse of components, the use of software templates, and an architectural template. Continuous evolution of system capabilities in rapid response to changing user needs and technology is highlighted by the evolvable architecture, representing a class of solutions. The process focuses on the use of small artisan-based teams integrating software and systems engineering disciplines, working multiple, often parallel, short-duration timeboxes with frequent customer interaction. Key to the success of ERD-based projects is parallel exploratory analysis and development of features, infrastructures, and components, with adoption of leading-edge technologies enabling quick reaction to changes in technologies, the marketplace, or customer requirements.

To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders are held. Demonstrations of system capabilities are held to solicit feedback before design/implementation decisions are solidified. Frequent releases (e.g. betas) are made available for use to provide insight into how the system could better support user and customer needs. This assures that the system evolves to satisfy existing user needs. The ERD process is structured to use demonstrated functionality, rather than paper products, as a way for stakeholders to communicate their needs and expectations.[9]

The design framework for the system is based on using existing published or de facto standards. The system is organized to allow for evolving a set of capabilities that includes considerations for performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces that encapsulate the services and their implementation (e.g. COTS applications), and serves as a template to be used for guiding development of more than a single instance of the system. It allows for multiple application components to be used to implement the services. A core set of functionality not likely to change is also identified and established.

Central to this goal of rapid delivery is the use of the "timebox" method. Timeboxes are fixed periods of time in which specific tasks (e.g. developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set of goals is defined that realistically can be achieved within these constraints.

To keep development from degenerating into a "random walk," long-range plans are defined to guide the iterations. These plans provide a vision for the overall system and set boundaries (e.g. constraints) for the project. Each iteration within the process is conducted in the context of these long-range plans.

Once an architecture is established, software is integrated and tested on a daily basis. This allows the team to assess progress objectively and identify potential problems quickly. Since small amounts of the system are integrated at one time, diagnosing and removing defects is rapid. User demonstrations can be held at short notice, since the system is generally ready to exercise at all times.

Scrum

Scrum is an agile method for project management. The approach was first described by Takeuchi and Nonaka in "The New New Product Development Game" (Harvard Business Review, Jan-Feb 1986).

Tools

Efficiently using prototyping requires that an organization have the proper tools and a staff trained to use them. Tools used in prototyping can vary from individual tools, like 4th generation programming languages used for rapid prototyping, to complex integrated CASE tools. 4th generation visual programming languages like Visual Basic and ColdFusion are frequently used, since they are cheap, well known, and relatively easy and fast to use. CASE tools supporting requirements analysis, like the Requirements Engineering Environment (see below), are often developed or selected by the military or large organizations. Object-oriented tools are also being developed, like LYMB from the GE Research and Development Center. Users may prototype elements of an application themselves in a spreadsheet.

Screen generators, design tools & Software Factories

Also commonly used are screen generating programs that enable prototypers to show users systems that don't function, but show what the screens may look like. Developing Human Computer Interfaces can sometimes be the critical part of the development effort, since to the users the interface essentially is the system. Software Factories are code generators that allow you to model the domain and then drag and drop the UI. They also enable you to run the prototype and use basic database functionality. This approach lets you explore the domain model and make sure it is in sync with the GUI prototype, and you can use the UI controls that will later be used for real development.

Application definition or simulation software

A new class of software, called application definition or simulation software, enables users to rapidly build lightweight, animated simulations of another computer program without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate on, and validate the simulated program, and provides reports such as annotations, screenshots, and schematics. As a solution specification technique, application simulation falls between low-risk but limited text- or drawing-based mock-ups (or wireframes), sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins.
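The screen-generator idea can be illustrated with a few lines of code: describe a screen declaratively and render a non-functional text mock-up of it. This is only a sketch of the concept, not any real tool's API, and the screen and field names are hypothetical.

```python
# Hypothetical declarative screen description, rendered as a text mock-up.
# Nothing here is functional; the point is to show users the layout early.
def render_screen(title: str, fields: list[str]) -> str:
    width = max(len(title), *(len(f) for f in fields)) + 12
    lines = [title.center(width, "="), ""]
    for field in fields:
        # Blank "fill-in" area after each field label.
        lines.append(f"{field}: {'_' * (width - len(field) - 2)}")
    lines.append("")
    lines.append("[ OK ]   [ Cancel ]".center(width))
    return "\n".join(lines)

print(render_screen("Customer Entry", ["Name", "Address", "Phone"]))
```

A real screen generator produces far richer output, but the workflow is the same: change the declaration, regenerate, and show the user the new layout within minutes.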

In doing so, risks and costs associated with software implementations can be dramatically reduced.[3] Some of the leading tools in this category are iRise, Axure, LucidChart, ProtoShare, Justinmind Prototyper, and DefineIT from Borland.[4][5][6] To simulate applications, one can also use software which simulates real-world software programs for computer-based training, such as screencasting software, as those areas are closely related. There are also more specialised tools.[7][8][9]

Requirements Engineering Environment

"The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly representing, building, and executing models of critical aspects of complex systems."[15] The Requirements Engineering Environment is currently used by the Air Force to develop systems. It is:

an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. These modeling activities are performed to gain a greater understanding of complex systems and lessen the impact that inaccurate requirement specifications have on cost and scheduling during the system development process. Models can be constructed easily, and at varying levels of abstraction or granularity, depending on the specific behavioral aspects of the model being exercised.[15]

REE is composed of three parts. The first, called proto, is a CASE tool specifically designed to support rapid prototyping. The second part is called the Rapid Interface Prototyping System, or RIP, which is a collection of tools that facilitate the creation of user interfaces. The third part of REE is a user interface to RIP and proto that is graphical and intended to be easy to use.

Rome Laboratory, the developer of REE, intended it to support their internal requirements-gathering methodology. Their method has three main parts:

• Elicitation from various sources (users, interfaces to other systems), plus consistency checking
• Analysis that the needs of diverse users taken together do not conflict and are technically and economically feasible
• Validation that requirements so derived are an accurate reflection of user needs

In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create "a commercial quality REE that supports requirements specification, simulation, demonstration, user interface prototyping, mapping of requirements to hardware architectures, and code generation…"[16] This system is named the Advanced Requirements Engineering Workstation, or AREW.

LYMB

LYMB[17] is an object-oriented development environment aimed at developing applications that require combining graphics-based user interfaces, visualization, and rapid prototyping.

Non-relational environments

Non-relational definition of data (e.g. using Caché or associative models) can help make end-user prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a simulation. This may yield earlier/greater clarity of business requirements, though it does not specifically confirm that requirements are technically and economically feasible in the target production system.

PSDL

PSDL is a prototype description language used to describe real-time software.

Notes

1. ^ C. Melissa Mcclendon, Larry Regot, Gerri Akers: The Analysis and Prototyping of Effective Graphical User Interfaces. October 1996.
2. ^ D. A. Stacy, professor, University of Guelph, Guelph, Ontario: Lecture notes on Rapid Prototyping. August 1997.
3. ^ Stephen J. Andriole: Information System Design Principles for the 90s: Getting it Right. AFCEA International Press, Fairfax, Virginia, 1990. Page 13.
4. ^ R. Charette: Software Engineering Risk Analysis and Management. McGraw Hill, New York, 1989.
5. ^ Alan M. Davis: Operational Prototyping: A New Development Approach. IEEE Software, September 1992. Page 71.
6. ^ Todd Grimm: The Human Condition: A Justification for Rapid Prototyping. Time Compression Techn…

Prototyping is the process of building a model of a system. In terms of an information system, prototypes are employed to help system designers build an information system that is intuitive and easy to manipulate for end users. Prototyping is an iterative process that is part of the analysis phase of the systems development life cycle.

During the requirements determination portion of the systems analysis phase, system analysts gather information about the organization's current procedures and business processes related to the proposed information system. In addition, they study the current information system, if there is one, and conduct user interviews and collect documentation. This helps the analysts develop an initial set of system requirements.

Prototyping can augment this process, because it converts these basic, yet sometimes intangible, specifications into a tangible but limited working model of the desired information system. The user feedback gained from developing a physical system that the users can touch and see facilitates an evaluative response that the analyst can employ to modify existing requirements as well as to develop new ones.

Prototyping comes in many forms: from low-tech sketches or paper screens (Pictive), from which users and developers can paste controls and objects, to high-tech operational systems using CASE (computer-aided software engineering) tools or fourth generation languages, and everywhere in between. Many organizations use multiple prototyping tools. For example, some will use paper in the initial analysis to facilitate concrete user feedback, and then later develop an operational prototype using fourth generation languages, such as Visual Basic, during the design stage.

Some advantages of prototyping:

• Reduces development time.
• Reduces development costs.
• Requires user involvement.
• Developers receive quantifiable user feedback.
• Facilitates system implementation, since users know what to expect.
• Results in higher user satisfaction.
• Exposes developers to potential future system enhancements.

Some disadvantages of prototyping:

• Can lead to insufficient analysis.
• Users expect the performance of the ultimate system to be the same as the prototype.
• Developers can become too attached to their prototypes.
• Can cause systems to be left unfinished and/or implemented before they are ready.
• Sometimes leads to incomplete documentation.
• If sophisticated software prototypes (4th GL or CASE tools) are employed, the time-saving benefit of prototyping can be lost.

Because prototypes inherently increase the quality and amount of communication between the developer/analyst and the end user, the use of prototyping has become widespread. In the early 1980's, organizations used prototyping in approximately thirty percent (30%) of development projects. By the early 1990's, its use had doubled to sixty percent (60%). Although there are guidelines on when to use software prototyping, two experts believed some of the rules developed were nothing more than conjecture. In the article "An Investigation of Guidelines for Selecting a Prototyping Strategy", Bill C. Hardgrave and Rick L. Wilson compare prototyping guidelines that appear in information systems literature with their actual use by organizations that have developed prototypes. Hardgrave and Wilson wanted to find out how many of the popular prototyping guidelines outlined in the literature were actually used by organizations, and whether compliance affected system success (measured by the user's stated level of satisfaction).

Hardgrave and Wilson sent out 500 prototyping surveys to information systems managers throughout the United States. A copy of the survey was also presented to a primary user and a key developer of two systems that each company had implemented within the two years preceding the survey. Usable survey results were received from 88 organizations representing 118 different projects. The represented organizations came from a variety of industries: educational, insurance, government, transportation, retail, financial, health service, manufacturing, and service. It should be noted that, although not specifically stated, the study was based on the use of "high tech" software models, not "low tech" paper or sketch prototypes.

Based on the results of their research, Hardgrave and Wilson found that industry followed only six of the seventeen guidelines recommended in information systems literature. The guidelines practiced by industry whose adherence was found to have a statistical effect on system success were:

• Prototyping should be employed only when users are able to actively participate in the project.
• Users involved in the project should have prototyping experience or be educated on the use and purpose of prototyping.
• Developers should either have prototyping experience or be given training.
• Prototyping is not necessary if the developer is already familiar with the language ultimately used for system design.
• If experimentation and learning are needed before there can be full commitment to a project, prototyping can be successfully used.
• Prototypes should become part of the final system only if the developers are given access to prototyping support tools.

Instead of software prototyping, several information systems consultants and researchers recommend using "low tech" prototyping tools (also known as paper prototypes, or Pictive), especially for initial systems analysis and design. The paper approach allows both designers and users to literally cut and paste the system interface; object commands and controls can be easily and quickly moved to suit user needs. Among its many benefits, this approach lowers the cost and time involved in prototyping, allows for more iterations, and gives developers the chance to get immediate user feedback on refinements to the design. It also effectively eliminates many of the disadvantages of prototyping: paper prototypes are inexpensive to create, users do not develop performance expectations, and developers are less likely to become attached to their work. Best of all, your paper prototypes are usually "bug-free" (unlike most software prototypes)!
