Master of Business Administration (MBA), Semester III. MI0033 – Software Engineering

Q1. Discuss the objectives and principles behind software testing.

The importance of software testing and its implications for software quality cannot be overemphasized. The development of software systems involves a series of production activities where the opportunities for injection of human fallibilities are enormous. Errors may begin to occur at the very inception of the process, where the objectives may be erroneously or imperfectly specified, as well as in later design and development stages. Because of the human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity. Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and code generation. The increasing visibility of software as a system element and the attendant "costs" associated with a software failure are motivating forces for well-planned, thorough testing. It is not unusual for a software development organization to expend between 30 and 40 percent of total project effort on testing. In the extreme, testing of human-rated software (e.g., flight control, nuclear reactor monitoring) can cost three to five times as much as all other software engineering steps combined!

Testing Objectives. In an excellent book on software testing, Glen Myers states a number of rules that can serve well as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
These objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort. If testing is conducted successfully (according to the objectives stated previously), it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification and that behavioral and performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But testing cannot show the absence of errors and defects; it can show only that software errors and defects are present. It is important to keep this (rather gloomy) statement in mind as testing is being conducted.

Testing Principles. Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. Davis [DAV95] suggests a set of testing principles that have been adapted for use here:
- All tests should be traceable to customer requirements. As we have seen, the objective of software testing is to uncover errors. It follows that the most severe defects (from the customer's point of view) are those that cause the program to fail to meet its requirements.
- Tests should be planned long before testing begins. Test planning can begin as soon as the requirements model is complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore, all tests can be planned and designed before any code has been generated.
- The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80 percent of all errors uncovered during testing will most likely be traceable to 20 percent of all program components. The problem, of course, is to isolate these suspect components and to test them thoroughly.
- Testing should begin "in the small" and progress toward testing "in the large." The first tests planned and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
- Exhaustive testing is not possible. The number of path permutations for even a moderately sized program is exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is possible, however, to adequately cover program logic and to ensure that all conditions in the component-level design have been exercised.
- To be most effective, testing should be conducted by an independent third party. By "most effective," we mean testing that has the highest probability of finding errors (the primary objective of testing). For reasons introduced earlier in this unit, the software engineer who created the system is not the best person to conduct all tests of the software.
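
To make Myers' first rule concrete, here is a minimal sketch in Python of a test written with the intent of finding an error rather than confirming that the program works. The days_in_month function and the boundary values it probes are invented for illustration; they are not part of the original text.

    # A sketch of Myers' first objective: the test targets the spot where an
    # error is most probable (the century rule of leap years), instead of
    # re-checking easy cases that are almost certainly correct.
    import unittest

    def days_in_month(month, year):
        """Return the number of days in a month, accounting for leap years."""
        if month == 2:
            leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
            return 29 if leap else 28
        return 31 if month in (1, 3, 5, 7, 8, 10, 12) else 30

    class TestDaysInMonth(unittest.TestCase):
        def test_century_leap_boundary(self):
            # High-probability error site: developers often forget that years
            # divisible by 100 are not leap years unless divisible by 400.
            self.assertEqual(days_in_month(2, 1900), 28)  # /100 but not /400
            self.assertEqual(days_in_month(2, 2000), 29)  # divisible by 400

    if __name__ == "__main__":
        unittest.main()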

Q2. Discuss the CMM 5 levels for the software process.

The Software Process. In recent years, there has been a significant emphasis on process maturity. The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organizations reach different levels of process maturity. To determine an organization's current state of process maturity, the SEI uses an assessment that results in a five-point grading scheme. The grading scheme determines compliance with a capability maturity model (CMM) [PAU93] that defines key activities required at different levels of process maturity. The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels, defined in the following manner:

Level 1: Initial. The software process is characterized as ad hoc and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 3: Defined. The software process for both management and engineering activities is documented, standardized, and integrated into an organization-wide software process. All projects use a documented and approved version of the organization's process for developing and supporting software. This level includes all characteristics defined for level 2.

Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures. This level includes all characteristics defined for level 3.

Level 5: Optimizing. Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies. This level includes all characteristics defined for level 4.

The SEI has associated key process areas (KPAs) with each of the maturity levels. The KPAs describe those software engineering functions (e.g., software project planning, requirements management) that must be present to satisfy good practice at a particular level. Each KPA is described by identifying the following characteristics:
- Goals: the overall objectives that the KPA must achieve.
- Commitments: requirements (imposed on the organization) that must be met to achieve the goals, or that provide proof of intent to comply with the goals.
- Abilities: those things that must be in place (organizationally and technically) to enable the organization to meet the commitments.
- Activities: the specific tasks required to achieve the KPA function.
- Methods for monitoring implementation: the manner in which the activities are monitored as they are put into place.
- Methods for verifying implementation: the manner in which proper practice for the KPA can be verified.
The five levels defined by the SEI were derived as a consequence of evaluating responses to the SEI assessment questionnaire that is based on the CMM. The results of the questionnaire are distilled to a single numerical grade that provides an indication of an organization's process maturity.

Q3. Discuss the Waterfall model for software development.

The Waterfall software development model has been in use for a number of decades and is still commonly used in software development projects today. While it has been replaced to a large degree by iterative models of software development, Waterfall still has its place in today's IT world. It is a sequential model in which the development process goes through a number of phases in a certain order. Basically, it requires that any project go through the stages of requirements analysis, design, implementation (coding), verification, and maintenance, in that order.

Once implementation, or coding, is complete, the various components are integrated into a working piece of software, and the verification phase involves testing and debugging the software before it is released. Although there are variations, in the true Waterfall model the project only moves from one phase to the next when a phase is completed in its entirety; no work will begin on the design phase, for example, until requirements analysis is complete. There is also no room for backtracking, so when a phase is complete it has to be right. The Waterfall model is often used for very large software development projects and may involve development teams working in different locations, and it can become very complicated if the phases are not completed correctly.

Advantages. Fans of the Waterfall software development model will argue that the amount of pre-planning that goes into the requirements and design phases makes it the most economical and risk-free way to develop software, as it identifies and weeds out potential problems at the outset. If these problems arose later in a project, they could be very costly. The Waterfall model also puts an emphasis on documentation and structure. This is an advantage when someone leaves the development team, as the necessary documentation is there to help a new person take over.

Disadvantages. The Waterfall model certainly isn't to everyone's taste. Those who argue against it are usually opposed to its rigid structure and the inability to backtrack: each phase of development is supposed to be 100% complete and correct before the next begins. It also isn't very client-focused, as it makes any requests to change the software during the development process almost impossible to accommodate. In comparison to iterative models, the Waterfall model is seen as inflexible and linear, though it is preferred by many who feel iterative software development methodologies lack discipline. For these reasons, many modified Waterfall models have been developed over the years that allow for increased flexibility.

Q4. Explain the different types of software measurement techniques.

Software Measurement. Measurements in the physical world can be categorized in two ways: direct measures (e.g., the length of a bolt) and indirect measures (e.g., the "quality" of bolts produced, measured by counting rejects). Software metrics can be categorized similarly. Direct measures of the software engineering process include cost and effort applied. Direct measures of the product include lines of code (LOC) produced, execution speed, memory size, and defects reported over some set period of time. Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability, and many other "-abilities."

(1) Size-oriented metrics. Size-oriented software metrics are derived by normalizing quality and/or productivity measures by the size of the software that has been produced. If a software organization maintains simple records, a table of size-oriented measures, such as the one shown in Figure 4.4, can be created. The table lists each software development project completed over the past few years and the corresponding measures for that project. For example, three people worked on the development of software for project alpha: 12,100 lines of code were developed with 24 person-months of effort at a cost of $168,000; 365 pages of documentation were developed; 134 errors were recorded before the software was released; and 29 defects were encountered after release to the customer within the first year of operation. It should be noted that the effort and cost recorded in the table represent all software engineering activities (analysis, design, code, and test), not just coding.

(2) Function-oriented metrics. Function-oriented software metrics use a measure of the functionality delivered by the application as a normalization value. Since functionality cannot be measured directly, it must be derived indirectly using other direct measures. Function-oriented metrics were first proposed by Albrecht [ALB79], who suggested a measure called the function point. Function points are derived using an empirical relationship based on countable (direct) measures of the software's information domain and assessments of software complexity.

(3) Extended function point metrics. The function point measure was originally designed to be applied to business information systems applications. To accommodate these applications, the data dimension (the information domain values discussed previously) was emphasized to the exclusion of the functional and behavioral (control) dimensions. For this reason, the function point measure was inadequate for many engineering and embedded systems (which emphasize function and control). A number of extensions to the basic function point measure have been proposed to remedy this situation.
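
The size-oriented metrics described under (1) can be derived from the project alpha figures with simple arithmetic; the short sketch below merely reproduces that calculation using the numbers quoted above.

    # Size-oriented metrics for project alpha, from the figures in the text:
    # 12,100 LOC, 24 person-months, $168,000, 365 pages of documentation,
    # 134 pre-release errors, 29 first-year defects.
    loc = 12_100
    effort_pm = 24        # person-months
    cost = 168_000        # dollars
    doc_pages = 365
    errors = 134          # found before release
    defects = 29          # reported in the first year after release

    kloc = loc / 1000
    print(f"Errors per KLOC:       {errors / kloc:.2f}")     # ~11.07
    print(f"Defects per KLOC:      {defects / kloc:.2f}")    # ~2.40
    print(f"Cost per LOC:          ${cost / loc:.2f}")       # ~$13.88
    print(f"Doc pages per KLOC:    {doc_pages / kloc:.1f}")  # ~30.2
    print(f"Productivity (LOC/pm): {loc / effort_pm:.0f}")   # ~504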

Q5. Explain the COCOMO model and software estimation techniques.

Software Project Estimation. Software cost and effort estimation will never be an exact science. Too many variables (human, technical, environmental, political) can affect the ultimate cost of software and the effort applied to develop it. However, software project estimation can be transformed from a black art into a series of systematic steps that provide estimates with acceptable risk. To achieve reliable cost and effort estimates, a number of options arise:
1. Delay estimation until late in the project (obviously, we can achieve 100% accurate estimates after the project is complete!).
2. Base estimates on similar projects that have already been completed.
3. Use relatively simple decomposition techniques to generate project cost and effort estimates.
4. Use one or more empirical models for software cost and effort estimation.
Unfortunately, the first option, however attractive, is not practical: cost estimates must be provided "up front." Still, we should recognize that the longer we wait, the more we know, and the more we know, the less likely we are to make serious errors in our estimates. The second option can work reasonably well if the current project is quite similar to past efforts and other project influences (e.g., the customer, business conditions, the SEE, deadlines) are equivalent; unfortunately, past experience has not always been a good indicator of future results.

The COCOMO Model. In his classic book on software engineering economics, Barry Boehm [BOE81] introduced a hierarchy of software estimation models bearing the name COCOMO, for COnstructive COst MOdel. The original COCOMO model became one of the most widely used and discussed software cost estimation models in the industry. It has evolved into a more comprehensive estimation model called COCOMO II [BOE96, BOE00]. Like its predecessor, COCOMO II is actually a hierarchy of estimation models that address the following areas:
- Application composition model: used during the early stages of software engineering, when prototyping of user interfaces, consideration of software and system interaction, assessment of performance, and evaluation of technology maturity are paramount.
- Early design stage model: used once requirements have been stabilized and basic software architecture has been established.
- Post-architecture stage model: used during the construction of the software.
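
As an illustration of how a COCOMO-style model turns size into effort, the sketch below evaluates the effort equation of the original basic COCOMO, E = a x (KLOC)^b, using the classic published mode constants from [BOE81]. The constants are not quoted in the text above, and the 33 KLOC project size is an arbitrary assumption chosen only for illustration.

    # Basic COCOMO effort equation E = a * (KLOC)^b with the published
    # mode constants [BOE81]; result is effort in person-months.
    MODES = {
        "organic":       (2.4, 1.05),
        "semi-detached": (3.0, 1.12),
        "embedded":      (3.6, 1.20),
    }

    def basic_cocomo_effort(kloc, mode):
        """Return estimated effort in person-months for a project size."""
        a, b = MODES[mode]
        return a * kloc ** b

    for mode in MODES:
        print(f"{mode:13s}: {basic_cocomo_effort(33, mode):6.1f} person-months")

Note how the same 33 KLOC yields very different effort depending on the assumed development mode, which is one reason the later COCOMO II refines the model with project adjustment factors.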

Decomposition Techniques. Software project estimation is a form of problem solving, and in most cases the problem to be solved (i.e., developing a cost and effort estimate for a software project) is too complex to be considered in one piece. For this reason, we decompose the problem, re-characterizing it as a set of smaller (and hopefully more manageable) problems. The decomposition approach can be taken from two different points of view: decomposition of the problem and decomposition of the process. Estimation uses one or both forms of partitioning. But before an estimate can be made, the project planner must understand the scope of the software to be built and generate an estimate of its size.

(i) Software Sizing. The accuracy of a software project estimate is predicated on a number of things: (1) the degree to which the planner has properly estimated the size of the product to be built; (2) the ability to translate the size estimate into human effort, calendar time, and dollars (a function of the availability of reliable software metrics from past projects); (3) the degree to which the project plan reflects the abilities of the software team; and (4) the stability of product requirements and of the environment that supports the software engineering effort.

(ii) Problem-Based Estimation. Lines of code and function points were described earlier as measures from which productivity metrics can be computed. LOC and FP data are used in two ways during software project estimation: (1) as an estimation variable to "size" each element of the software, and (2) as baseline metrics collected from past projects and used in conjunction with estimation variables to develop cost and effort projections. The project planner begins with a bounded statement of software scope and from this statement attempts to decompose the software into problem functions that can each be estimated individually. LOC or FP (the estimation variable) is then estimated for each function, baseline productivity metrics (e.g., LOC/pm or FP/pm) are applied to the appropriate estimation variable, and cost or effort for the function is derived. Function estimates are combined to produce an overall estimate for the entire project. Alternatively, the planner may choose another component for sizing, such as classes or objects, changes, or business processes affected.

Empirical Estimation Models. An estimation model for computer software uses empirically derived formulas to predict effort as a function of LOC or FP. Values for LOC or FP are estimated using the approach described in Sections 5.2 and 5.3, but instead of using the tables described in those sections, the resultant values are plugged into the estimation model.

(i) The Structure of Estimation Models. A typical estimation model is derived using regression analysis on data collected from past software projects. The overall structure of such models takes the form [MAT94]

    E = A + B x (ev)^C     (5-2)

where A, B, and C are empirically derived constants, E is effort in person-months, and ev is the estimation variable (either LOC or FP). In addition to the relationship noted in Equation (5-2), the majority of estimation models have some form of project adjustment component that enables E to be adjusted by other project characteristics (e.g., problem complexity, staff experience, development environment). Among the many LOC-oriented estimation models proposed in the literature are:

    E = 5.2 x (KLOC)^0.91           Walston-Felix model
    E = 5.5 + 0.73 x (KLOC)^1.16    Bailey-Basili model
    E = 3.2 x (KLOC)^1.05           Boehm simple model
    E = 5.288 x (KLOC)^1.047        Doty model for KLOC > 9

FP-oriented models have also been proposed. These include:

    E = -13.39 + 0.0545 FP          Albrecht and Gaffney model
    E = 60.62 x 7.728 x 10^-8 FP^3  Kemerer model
    E = 585.7 + 15.12 FP            Matson, Barnett, and Mellichamp model

A quick examination of these models indicates that each will yield a different result for the same values of LOC or FP. The implication is clear: the empirical data that support most estimation models are derived from a limited sample of projects, and no estimation model is appropriate for all classes of software and all development environments. Therefore, the results obtained from such models must be used judiciously. Estimation models must be calibrated for local needs!
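
A quick way to see the divergence the text warns about is to evaluate the four LOC-oriented models for one assumed project size. The 33.2 KLOC figure below is an assumption chosen only for illustration; the formulas are exactly those listed above.

    # Evaluating the four LOC-oriented models above for one project size
    # shows how widely the estimates diverge, which is why the models must
    # be calibrated to local data before being trusted.
    models = {
        "Walston-Felix": lambda k: 5.2 * k ** 0.91,
        "Bailey-Basili": lambda k: 5.5 + 0.73 * k ** 1.16,
        "Boehm simple":  lambda k: 3.2 * k ** 1.05,
        "Doty (KLOC>9)": lambda k: 5.288 * k ** 1.047,
    }

    kloc = 33.2  # assumed project size, for illustration only
    for name, model in models.items():
        print(f"{name:14s}: E = {model(kloc):6.1f} person-months")
    # Prints roughly 126, 48, 127, and 207 person-months respectively:
    # a 4x spread for the very same program size.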

Q6. Write a note on the myths of software.

Software Myths. Today, most knowledgeable professionals recognize myths for what they are: misleading attitudes that have caused serious problems for managers and technical people alike. However, old attitudes and habits are difficult to modify, and remnants of software myths are still believed.

Management myths. Managers with software responsibility, like managers in most disciplines, are often under pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning person who grasps at a straw, a software manager often grasps at belief in a software myth, if that belief will lessen the pressure (even temporarily).

Myth: We already have a book that's full of standards and procedures for building software. Won't that provide my people with everything they need to know?
Reality: The book of standards may very well exist, but is it used? Are software practitioners aware of its existence? Does it reflect modern software engineering practice? Is it complete? Is it streamlined to improve time to delivery while still maintaining a focus on quality? In many cases, the answer to all of these questions is "no."

Myth: My people have state-of-the-art software development tools; after all, we buy them the newest computers.
Reality: It takes much more than the latest model mainframe, workstation, or PC to do high-quality software development. Computer-aided software engineering (CASE) tools are more important than hardware for achieving good quality and productivity, yet the majority of software developers still do not use them effectively.

Myth: If we get behind schedule, we can add more programmers and catch up (sometimes called the Mongolian horde concept).
Reality: Software development is not a mechanistic process like manufacturing. In the words of Brooks [BRO75]: "adding people to a late software project makes it later." At first, this statement may seem counterintuitive. However, as new people are added, people who were working must spend time educating the newcomers, thereby reducing the amount of time spent on productive development effort. People can be added, but only in a planned and well-coordinated manner.

Myth: If I decide to outsource the software project to a third party, I can just relax and let that firm build it.
Reality: If an organization does not understand how to manage and control software projects internally, it will invariably struggle when it outsources software projects.

Customer myths. A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct misinformation. Myths lead to false expectations (by the customer) and, ultimately, dissatisfaction with the developer.

Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the details later.
Reality: A poor up-front definition is the major cause of failed software efforts. A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential. These characteristics can be determined only after thorough communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced. Figure 1.3 illustrates the impact of change. If serious attention is given to up-front definition, early requests for change can be accommodated easily: the customer can review requirements and recommend modifications with relatively little impact on cost. When changes are requested during software design, the cost impact grows rapidly; resources have been committed, a design framework has been established, and change can cause upheaval that requires additional resources and major design modification. Changes in function, performance, interface, or other characteristics during implementation (code and test) have a severe impact on cost. Change, when requested after software is in production, can be over an order of magnitude more expensive than the same change requested earlier.

Practitioner's myths. Myths that are still believed by software practitioners have been fostered by 50 years of programming culture. During the early days of software, programming was viewed as an art form, and old ways and attitudes die hard.

Myth: Once we write the program and get it to work, our job is done.
Reality: Someone once said that "the sooner you begin 'writing code', the longer it'll take you to get done." Industry data ([LIE80], [JON91], [PUT97]) indicate that between 60 and 80 percent of all effort expended on software will be expended after it is delivered to the customer for the first time.

Myth: Until I get the program "running," I have no way of assessing its quality.
Reality: One of the most effective software quality assurance mechanisms can be applied from the inception of a project: the formal technical review. Software reviews (described in Chapter 8) are a "quality filter" that have been found to be more effective than testing for finding certain classes of software defects.

Myth: The only deliverable work product for a successful project is the working program.
Reality: A working program is only one part of a software configuration that includes many elements. Documentation provides a foundation for successful engineering and, more importantly, guidance for software support.

Myth: Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.
Reality: Software engineering is not about creating documents. It is about creating quality. Better quality leads to reduced rework, and reduced rework results in faster delivery times.

Many software professionals recognize the fallacy of the myths just described. Regrettably, habitual attitudes and methods foster poor management and technical practices, even when reality dictates a better approach. Recognition of software realities is the first step toward formulation of practical solutions for software engineering.

MI0033 – Software Engineering, Assignment Set 2 (60 Marks)

Q1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design: as higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases. Quality of conformance is the degree to which the design specifications are followed during manufacturing; the greater the degree of conformance, the higher the level of quality of conformance. In software development, quality of design encompasses requirements, specifications, and the design of the system, while quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high. Reliability is an underlying part of quality: quality can be assured only after reliable operation of the software over a period of time. The two are therefore interrelated, though fundamentally different.

Quality Control. Variation control may be equated to quality control. But how do we achieve quality control? Quality control involves the series of inspections, reviews, and tests used throughout the software process to ensure that each work product meets the requirements placed upon it.

Quality control includes a feedback loop to the process that created the work product; the combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views quality control as part of the manufacturing process. Quality control activities may be fully automated, entirely manual, or a combination of automated tools and human interaction. A key concept of quality control is that all work products have defined, measurable specifications to which we may compare the output of each process. The feedback loop is essential to minimize the defects produced.

Quality Assurance. Quality assurance consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through quality assurance identify problems, it is management's responsibility to address the problems and to apply the necessary resources to resolve quality issues.

Cost of Quality. The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities. Cost of quality studies are conducted to provide a baseline for the current cost of quality, to identify opportunities for reducing the cost of quality, and to provide a normalized basis of comparison. The basis of normalization is almost always dollars. Once we have normalized quality costs on a dollar basis, we have the necessary data to evaluate where the opportunities lie to improve our processes, and we can evaluate the effect of changes in dollar-based terms. Quality costs may be divided into costs associated with prevention, appraisal, and failure. Prevention costs include:
· Quality planning
· Formal technical reviews
· Test equipment

On the other hand, there is no doubt that the reliability of a computer program is an important element of its overall quality. If a program repeatedly and frequently fails to perform, it matters little whether other software quality factors are acceptable. Software reliability, unlike many other quality factors, can be measured directly and estimated using historical and developmental data. Software reliability is defined in statistical terms as "the probability of failure-free operation of a computer program in a specified environment for a specified time" [MUS87]. To illustrate: if program X were to be executed 100 times and require eight hours of elapsed processing time (execution time), it is likely to operate correctly (without failure) 96 times out of 100; program X is therefore estimated to have a reliability of 0.96 over eight elapsed processing hours. Whenever software reliability is discussed, a pivotal question arises: what is meant by the term failure? In the context of any discussion of software quality and reliability, failure is nonconformance to software requirements.

Yet even within this definition there are gradations: failures can be merely annoying or catastrophic, and one failure can be corrected within seconds while another requires weeks or even months to correct. Complicating the issue even further, the correction of one failure may in fact result in the introduction of other errors that ultimately result in other failures.

Measures of Reliability and Availability. Early work in software reliability attempted to extrapolate the mathematics of hardware reliability theory (e.g., [ALV64]) to the prediction of software reliability. Most hardware-related reliability models are predicated on failure due to wear rather than failure due to design defects. In hardware, failures due to physical wear (e.g., the effects of temperature, corrosion, shock) are more likely than a design-related failure. Unfortunately, the opposite is true for software: all software failures can be traced to design or implementation problems, and wear (see Chapter 1) does not enter into the picture. There has been debate over the relationship between key concepts in hardware reliability and their applicability to software (e.g., [LIT89], [ROO90]). Although an irrefutable link has yet to be established, it is worthwhile to consider a few simple concepts that apply to both system elements. If we consider a computer-based system, a simple measure of reliability is mean-time-between-failures (MTBF), where

    MTBF = MTTF + MTTR

The acronyms MTTF and MTTR stand for mean-time-to-failure and mean-time-to-repair, respectively.
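
The MTBF arithmetic can be made concrete with a short sketch. The availability formula used here, MTTF / (MTTF + MTTR) x 100%, is the standard companion measure from the same reliability literature (it is not spelled out in the text above), and the sample hour values are invented for illustration.

    # Reliability bookkeeping with the measures defined above. MTTF and
    # MTTR are averaged from (hypothetical) observed intervals.
    uptimes_h = [180, 240, 200]    # hours of operation before each failure
    repairs_h = [4, 6, 5]          # hours needed to repair each failure

    mttf = sum(uptimes_h) / len(uptimes_h)   # mean time to failure
    mttr = sum(repairs_h) / len(repairs_h)   # mean time to repair
    mtbf = mttf + mttr                       # mean time between failures

    availability = mttf / (mttf + mttr) * 100
    print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
    print(f"Availability = {availability:.2f}%")   # ~97.64%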

Software Safety. When software is used as part of a control system, complexity can increase by an order of magnitude or more. Subtle design faults induced by human error, which can often be uncovered and eliminated in hardware-based conventional control, become much more difficult to uncover when software is used. Software safety is a software quality assurance activity that focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail. If hazards can be identified early in the software engineering process, software design features can be specified that will either eliminate or control them. A modeling and analysis process is conducted as part of software safety: initially, hazards are identified and categorized by criticality and risk. For example, one of the hazards associated with a computer-based cruise control for an automobile might be uncontrolled acceleration that cannot be stopped.

Q2. Explain version control and change control.

Version Control. Also known as revision control, version control allows the management of multiple revisions of the same project. Any software development project that involves a team of people, as most do, requires some form of version control. Taking even a small software development project as an example, a number of graphic designers and coders could be working on the project simultaneously, and every time they complete an aspect of the project they may wish to update it. Version control ensures that each revision of the project is stored and that all changes are associated with the person who made them.

Version control uses a centralized model in which the different stages of a project are stored on a shared server. When a file, or an entire project, is checked in, it is updated on this server and given a version code or number. This is normally simple enough, but things can get tricky if two developers make changes to and check in the same file at once; if possible, it is best if only one developer works on a certain file at a time. Version control deals with this in one of two ways. It can use file locking, where only one developer can access certain files at any one time. Alternatively, it can use file merging, where a merged file with both developers' changes included is checked in; this may require further changes later to merge both sets of changes successfully.

The real beauty of version control is that developers can return to any earlier state of the project that was checked in. If, for example, some code is checked in that corrupts the whole project, it is simple to return to the previous state of the software before the damage was done. Version control is also invaluable when it comes to bug fixes and dealing with known issues: as software is developed, new bugs will emerge, and version control can be valuable in determining when and where bugs appeared and what caused them. Version control software also makes everyone accountable for their own work, as it tracks what was checked in by whom. There are a number of software tools available for version control that allow software development teams to manage their projects; they are at their most useful in software development environments, whether the project team is based in one location or spread all over the globe. Basic version control elements can also be found in applications like Microsoft Word and in web applications like wikis. Example tools for version control are VSS (Visual SourceSafe) and PVCS.
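
The check-in, version-numbering, accountability, and roll-back behaviour described above can be sketched as a toy central store. This is not the API of VSS or PVCS, just a minimal illustration of the centralized model.

    # Toy central store: every check-in gets a version number and an author,
    # and any earlier version can be recovered (the roll-back behaviour).
    class VersionStore:
        def __init__(self):
            self._versions = []   # list of (version, author, content)

        def check_in(self, author, content):
            version = len(self._versions) + 1
            self._versions.append((version, author, content))
            return version

        def check_out(self, version=None):
            # Latest version by default, or any named earlier version.
            record = self._versions[-1] if version is None else self._versions[version - 1]
            return record[2]

        def blame(self, version):
            return self._versions[version - 1][1]

    store = VersionStore()
    store.check_in("alice", "v1 of module")
    bad = store.check_in("bob", "v2 that corrupts the build")
    # Roll back: re-check-in the last known-good content as a new version.
    store.check_in("alice", store.check_out(bad - 1))
    print(store.check_out())   # "v1 of module" again
    print(store.blame(bad))    # "bob" is accountable for the bad check-in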

Change Control. The reality of change control in a modern software engineering context has been summed up beautifully by James Bach [BAC98]: change control is vital, but the forces that make it necessary also make it annoying. We worry about change because a tiny perturbation in the code can create a big failure in the product, but it can also fix a big failure or enable wonderful new capabilities. We worry about change because a single rogue developer could sink the project, yet brilliant ideas originate in the minds of those rogues, and a burdensome change control process could effectively discourage them from doing creative work. In a software organization, the change control document will contain the following items.

Roles and Responsibilities:
- CCB Chair: chairperson of the change control board; has final decision-making authority if the CCB does not reach agreement; asks someone to be the Evaluator for each change request and someone to be the Modifier for each approved change request.
- Change Control Board (CCB): the group that decides to approve or reject proposed changes for a specific project.
- Evaluator: the person whom the CCB Chair asks to analyze the impact of a proposed change.
- Modifier: the person assigned responsibility for making changes in a work product in response to an approved change request; updates the status of the request over time.
- Originator: the person who submits a new change request.
- Project Manager: the person responsible for overall planning and tracking of the development project activities.
- Verifier: the person who determines whether a change was made correctly.

Change Request Status. A requested change will pass through several possible statuses during its life. These statuses, and the criteria for moving from one status to another, are depicted in the state-transition diagram in Figure 1 and described below.

Notifications. Any time an issue's status is changed, the change control tool automatically sends an e-mail notification to the issue Originator, the issue Modifier, and/or the CCB Chair.

Possible Statuses:
- Submitted: the Originator has submitted a new issue to the change control system.
- Evaluated: the Evaluator has performed an impact analysis of the request.
- Approved: the CCB decided to implement the request and allocated it to a specific future build or product release; the CCB Chair has assigned a Modifier.
- Rejected: the CCB decided not to implement the requested change.
- Change Made: the Modifier has completed implementing the requested change.
- Canceled: the Originator or someone else decided to cancel an approved change.

- Verified: the Verifier has confirmed that the modifications in the affected work products were made correctly.
- Closed: the change made has been verified (if required), the modified work products have been installed, and the request is now completed.

Status Transitions (summarizing the state-transition diagram):
- Submitted to Evaluated: the Evaluator performed the impact analysis.
- Evaluated to Approved: the CCB decided to make the change.
- Evaluated to Rejected: the CCB decided not to make the change.
- Approved to Change Made: the Modifier has made the change and requested verification.
- Change Made to Verified: the Verifier has confirmed the change.
- Change Made to Closed: no verification was required, and the Modifier has installed the modified work products.
- Change Made to Approved: verification failed, so the change returns for rework.
- Verified to Closed: the Modifier has installed the modified work products.
- Approved, Change Made, or Verified to Canceled: the change was canceled; back out of any modifications.
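
The transition rules above amount to a small state machine, which is essentially how a change-control tool would enforce them. A minimal sketch follows, with the status names taken from the tables above:

    # Allowed status transitions for a change request, from the list above.
    # Anything not in the table is an illegal transition.
    TRANSITIONS = {
        "Submitted":   {"Evaluated"},
        "Evaluated":   {"Approved", "Rejected"},
        "Approved":    {"Change Made", "Canceled"},
        "Change Made": {"Verified", "Closed", "Approved", "Canceled"},
        "Verified":    {"Closed", "Canceled"},
        "Rejected":    set(),
        "Canceled":    set(),
        "Closed":      set(),
    }

    def advance(current, new):
        if new not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current!r} -> {new!r}")
        # A real tool would notify the Originator, Modifier, and CCB Chair
        # by e-mail here, as described under Notifications above.
        return new

    status = "Submitted"
    for step in ("Evaluated", "Approved", "Change Made", "Verified", "Closed"):
        status = advance(status, step)
    print(status)   # Closed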

Q3. Discuss the SCM process.

The SCM Process. Software configuration management is an important element of software quality assurance. Its primary responsibility is the control of change. SCM is also responsible for the identification of individual SCIs and the various versions of the software, for the auditing of the software configuration to ensure that it has been properly developed, and for the reporting of all changes applied to the configuration. Any discussion of SCM introduces a set of complex questions: How does an organization identify and manage the many existing versions of a program (and its documentation) in a manner that will enable change to be accommodated efficiently? How does an organization control changes before and after software is released to a customer? Who has responsibility for approving and ranking changes? How can we ensure that changes have been made properly? What mechanism is used to apprise others of changes that are made? These questions lead us to the definition of five SCM tasks: identification, version control, change control, configuration auditing, and reporting.

Identification of Objects in the Software Configuration. To control and manage software configuration items, each must be separately named and then organized using an object-oriented approach. Two types of objects can be identified [CHO89]: basic objects and aggregate objects. A basic object is a "unit of text" that has been created by a software engineer during analysis, design, code, or test. For example, a basic object might be a section of a requirements specification, a source listing for a component, or a suite of test cases that are used to exercise the code. An aggregate object is a collection of basic objects and other aggregate objects; conceptually, it can be viewed as a named (identified) list of pointers that specify basic objects such as a data model and a component N.
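
The basic/aggregate distinction is essentially a composite structure: an aggregate is a named list of pointers to other objects. A small sketch follows; the object names are invented for illustration.

    # Basic objects are named units of text; an aggregate object is a named
    # list of pointers to basic (or other aggregate) objects, per [CHO89].
    class BasicObject:
        def __init__(self, name, text):
            self.name, self.text = name, text

        def items(self):
            yield self

    class AggregateObject:
        def __init__(self, name, members):
            self.name, self.members = name, list(members)

        def items(self):
            # Walk the pointer list, flattening nested aggregates.
            for member in self.members:
                yield from member.items()

    spec = BasicObject("SRS section 3.1", "The system shall ...")
    code = BasicObject("component N source", "int main(void) { ... }")
    tests = BasicObject("component N test suite", "assert ...")
    design_model = AggregateObject("design model", [spec])
    release = AggregateObject("release 1.0", [design_model, code, tests])

    print([sci.name for sci in release.items()])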

Version Control. Version control combines procedures and tools to manage the different versions of configuration objects that are created during the software process. Clemm [CLE89] describes version control in the context of SCM: configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions. This is supported by associating attributes with each software version and then allowing a configuration to be specified (and constructed) by describing the set of desired attributes.

Change Control. The reality of change control in a modern software engineering context has been summed up beautifully by James Bach [BAC98]: change control is vital, but the forces that make it necessary also make it annoying. We worry about change because a tiny perturbation in the code can create a big failure in the product, but it can also fix a big failure or enable wonderful new capabilities. We worry about change because a single rogue developer could sink the project, yet brilliant ideas originate in the minds of those rogues, and a burdensome change control process could effectively discourage them from doing creative work.

Configuration Audit. Identification, version control, and change control help the software developer to maintain order in what would otherwise be a chaotic and fluid situation. However, even the most successful control mechanisms track a change only until an ECO is generated. How can we ensure that the change has been properly implemented? The answer is twofold: (1) formal technical reviews and (2) the software configuration audit. A formal technical review should be conducted for all but the most trivial changes. The reviewers assess the SCI to determine consistency with other SCIs, omissions, or potential side effects.

SCM Standards. Over the past two decades a number of software configuration management standards have been proposed. Many early SCM standards, such as MIL-STD-483, DOD-STD-480A, and MIL-STD-1521A, focused on software developed for military applications. However, more recent ANSI/IEEE standards, such as ANSI/IEEE Stds. No. 828-1983, No. 1042-1987, and Std. No. 1028-1988 [IEE94], are applicable for nonmilitary software and are recommended for both large and small software engineering organizations.
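
Clemm's attribute-based selection, quoted above under Version Control, can be sketched as a query over version attributes. The attribute names and version numbers below are invented for illustration.

    # Each stored version carries attributes; a configuration is specified
    # by a set of desired attributes, and construction picks, per object,
    # the (latest) version whose attributes match.
    versions = [
        {"object": "kernel", "version": 3, "os": "linux",   "debug": False},
        {"object": "kernel", "version": 4, "os": "windows", "debug": False},
        {"object": "ui",     "version": 7, "os": "linux",   "debug": False},
        {"object": "ui",     "version": 8, "os": "linux",   "debug": True},
    ]

    def build_configuration(desired):
        chosen = {}
        for v in versions:
            if all(v.get(key) == want for key, want in desired.items()):
                chosen[v["object"]] = v["version"]
        return chosen

    print(build_configuration({"os": "linux", "debug": False}))
    # {'kernel': 3, 'ui': 7}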

Software configuration management is an umbrella activity that is applied throughout the software process. SCM identifies, controls, audits, and reports modifications that invariably occur while software is being developed and after it has been released to a customer. All information produced as part of software engineering becomes part of a software configuration. The configuration is organized in a manner that enables orderly control of change. The software configuration is composed of a set of interrelated objects, also called software configuration items, which are produced as a result of some software engineering activity. In addition to documents, programs, and data, the development environment that is used to create software can also be placed under configuration control. Basic and composite objects form an object pool from which variants and versions are created. Version control is the set of procedures and tools for managing the use of these objects. Once a configuration object has been developed and reviewed, it becomes a baseline; changes to a baselined object result in the creation of a new version of that object. The evolution of a program can be tracked by examining the revision history of all configuration objects. Change control is a procedural activity that ensures quality and consistency as changes are made to a configuration object. The change control process begins with a change request, leads to a decision to make or reject the request for change, and culminates with a controlled update of the SCI that is to be changed. Develop a "need to know" list for every SCI and keep it up to date; when a change is made, be sure that everyone on the list is informed.

Q4. Explain: (i) software doesn't wear out; (ii) software is engineered, not manufactured.

"Software doesn't wear out" and "software is engineered, not manufactured" are two fundamental characteristics of software. Software is a logical rather than a physical system element, so software has characteristics that are considerably different from those of hardware. The failure rate for hardware is a function of time. The relationship, often called the "bathtub curve," indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are then corrected, and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and many other environmental maladies.

Stated simply, the hardware begins to wear out. Software is not susceptible to the environmental maladies that cause hardware to wear out, so in theory its failure-rate curve should flatten after early defects are corrected rather than rise again. But this may be an idealized view: software undergoes change throughout its life, and each change can introduce new defects that make the failure rate spike before it settles again.

Consider also the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines. After each component is selected, it can be ordered off the shelf. Although the industry is moving toward component-based assembly, most software continues to be custom built; software is engineered through analysis and design, not manufactured or assembled in the classical sense.

Q5. Explain the advantages of the prototype model and the spiral model, in contrast to the Waterfall model.

The Prototyping Model. Often, a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these and many other situations, a prototyping paradigm may offer the best approach. The prototyping paradigm begins with requirements gathering: developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A "quick design" then occurs, focusing on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done. Users get a feel for the actual system, and developers get to build something immediately. The prototype can serve as "the first system," the one that Brooks recommends we throw away. It is true that both customers and developers like the prototyping paradigm.

The other advantages are as follows. A prototype model has a great advantage over other SDLC models in that it doesn't rely on what is supposed to happen according to written documentation; instead of concentrating on documentation, more effort is placed in creating the actual software, going directly to the users and asking them what they really want from the software.

One key advantage of prototype-modeled software is the time frame of development: the actual software can be released in advance, reducing the man-hours spent creating it. Everyone works on the same thing at the same time, and the work is even faster and more efficient if developers collaborate on the status of each specific function and make the necessary adjustments in time for integration. The work on prototype models can also be spread among others, since there are practically no rigid stages of work in this model.

Another advantage of prototype-modeled software is that it is created using lots of user feedback. For every prototype created, users can give their honest opinion about the software; if something is unfavorable, it can be changed. Slowly the program is created with the customer in mind, catering to the needs of the users, and the product is gradually developed by professionals. "Over-design" can also be avoided using this model: over-design happens when software offers so many things that it sacrifices its original purpose, and prototyping keeps the focus on giving only what the customer wants.

The Spiral Model. The spiral model, originally proposed by Boehm [BOE88], is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. It provides the potential for rapid development of incremental versions of the software.

Using the spiral model, software is developed in a series of incremental releases. During early iterations, the incremental release might be a paper model or prototype; during later iterations, increasingly more complete versions of the engineered system are produced. A spiral model is divided into a number of framework activities, also called task regions; typically, there are between three and six. A spiral model contains six task regions:
- Customer communication
- Planning
- Risk analysis
- Engineering
- Construction and release
- Customer evaluation

The Waterfall model, also called the linear sequential model or the classic life cycle model, is purely pre-planned and strategic in nature. Requirements gathering is the important phase: once the requirements have been identified and agreed, the subsequent activities like analysis, design, coding, and testing are commenced. It is the best model if requirements gathering has been done perfectly and is clearly stated in the initial phase; if any flaw is uncovered in the specification, it leads to a serious or major defect in the product being built. No subsequent changes can be made once you have passed from one phase to another; the flow of water can't be reversed, just as a waterfall falls from a hill to the ground. Analysis, design, and implementation must exactly match the requirements specification.

The spiral model, unlike the waterfall model, is iterative in nature. The spiral model starts by developing a new or novel concept, and upon subsequent refinements the concept can be turned into a product or a system. At each iteration, the errors and defects uncovered can be fixed in the next version or iteration. The spiral model also stresses risk analysis, which is not much considered in the waterfall model. This model suits situations where you need to build a novel system or product for which you don't have adequate requirements up front.

The major drawback of the waterfall model is that the customer needs patience, and both the customer and the software engineer must have a near-perfect picture of what they are going to build; in real situations this happens rarely, as most customers and engineers are not experts in every domain involved. Large, expensive, and complicated projects, and of course continuously changing requirements, argue for selecting the spiral model rather than the waterfall model.

Q6. Write a note on the spiral model.

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive, and complicated projects.

History. The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement." This model was not the first to discuss iterative development. It should not be confused with the Helical model of modern systems architecture, which uses a dynamic programming (mathematical, not software-type programming!) approach in order to optimize the system's architecture before design decisions are made by coders that would cause problems.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project. For smaller projects, the concept of agile software development is becoming a viable alternative. The US military adopted the spiral model for its Future Combat Systems (FCS) program, which had a two-year iteration (spiral) and should have resulted in three consecutive prototypes (one prototype per spiral, every two years); the project was canceled in May 2009, after six years (2003–2009). The spiral model thus may suit small (up to $3 million) software applications, but not a complicated ($3 billion) distributed, interoperable system of systems.

The steps in a spiral model iteration can be generalized as follows:
1. The system requirements are defined in as much detail as possible. This usually involves interviewing a number of users representing all the external or internal users and other aspects of the existing system.
2. A preliminary design is created for the new system. This phase is the most important part of the spiral model: all possible (and available) alternatives that can help in developing a cost-effective project are analyzed, and strategies for using them are decided. This phase was added specifically to identify and resolve all the possible risks in the project development. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product. If the risks indicate any uncertainty in the requirements, prototyping may be used to proceed with the available data and find a possible solution to deal with the potential changes in the requirements.
3. A first prototype of the new system is constructed from the preliminary design. This is usually a scaled-down system that represents an approximation of the characteristics of the final product.
4. A second prototype is evolved by a fourfold procedure: (1) evaluating the first prototype in terms of its strengths, weaknesses, and risks; (2) defining the requirements of the second prototype; (3) planning and designing the second prototype; and (4) constructing and testing the second prototype.
5. At the customer's option, the entire project can be aborted if the risk is deemed too great.
6. The existing prototype is evaluated in the same manner as was the previous prototype, and, if necessary, another prototype is developed from it according to the fourfold procedure outlined above.
7. The preceding steps are iterated until the customer is satisfied that the refined prototype represents the final product desired.
8. The final system is constructed, based on the refined prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing basis to prevent large-scale failures and to minimize downtime.

The spiral model is mostly used in large projects. It is also reasonable to use the spiral model in projects where business goals are unstable but the architecture must be realized well enough to provide high loading and stress capability.

Spiral Architecture Driven Development (SADD) is a spiral-based software development life cycle (SDLC) that shows one possible way to reduce the risk of an ineffective architecture, using a spiral model in conjunction with best practices from other models.
