
Master of Business Administration – IS Semester 3 MI0033 – Software Engineering – 4 Credits

Assignment Set- 1 (60 Marks) Note: Each question carries 10 Marks. Answer all the questions.
1. Discuss the Objective & Principles Behind Software Testing.

What is Software Testing?

Testing is a formal activity: it involves a strategy and a systematic approach, and the different stages of tests supplement each other. Tests are always specified and recorded. Testing is also a planned activity: the workflow and the expected results are specified, so the duration of the activities can be estimated, and the point in time at which tests are executed is defined. Testing is the formal proof of software quality.

Overview of Test Methods
Static tests
The software is not executed but analyzed offline. In this category are code inspections (e.g. Fagan inspections), Lint checks, cross-reference checks, etc.

Dynamic tests
These require the execution of the software or parts of the software (using stubs). They can be executed in the target system, an emulator or a simulator. Within the dynamic tests, the state of the art distinguishes between structural and functional tests.

Structural tests
These are the so-called "white-box" tests, because they are performed with knowledge of the source code details. Input interfaces are stimulated with the aim of running through certain predefined branches or paths in the software. The software is stressed with critical values at the boundaries of the input values, or even with illegal input values. The behavior of the output interface is recorded and compared with the expected (predefined) values.

Functional tests
These are the so-called "black-box" tests. The software is regarded as a unit with unknown content. Inputs are stimulated and the values at the output are recorded and compared to the expected and specified values.
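The structural/functional distinction can be sketched with a small, hypothetical example. The function and its legal range are invented for illustration; the point is that the black-box test uses only the specification, while the white-box test deliberately stimulates the boundaries of each branch in the source.

```python
def classify_temperature(celsius):
    """Hypothetical unit under test: classify a sensor reading."""
    if celsius < -40 or celsius > 125:   # sensor's legal input range
        raise ValueError("illegal input value")
    if celsius > 85:
        return "overheat"
    return "ok"

# Functional (black-box) test: only specified inputs and expected outputs.
assert classify_temperature(20) == "ok"
assert classify_temperature(90) == "overheat"

# Structural (white-box) test: written with knowledge of the source,
# stressing the boundary values of each branch (85/86, -40, 125).
assert classify_temperature(85) == "ok"
assert classify_temperature(86) == "overheat"
assert classify_temperature(-40) == "ok"
assert classify_temperature(125) == "overheat"

# Illegal input value outside the boundaries must be rejected.
try:
    classify_temperature(126)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Note that the black-box test would survive a rewrite of the function body, while the white-box boundary cases would need to be revisited if the branch structure changed.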

Test by progressive Stages
The various tests are able to find different kinds of errors. Therefore it is not enough to rely on one kind of test and completely neglect the others. For example, white-box tests will be able to find coding errors; detecting the same coding error in the system test is very difficult, because the system malfunction that results from the coding error will not necessarily allow conclusions about the location of the error. Tests therefore should be progressive and supplement each other in stages, in order to find each kind of error with the appropriate method.

Module test
A module is the smallest compilable unit of source code. Often it is too small to allow functional (black-box) tests, but it is the ideal candidate for white-box tests. These should be first of all static tests (e.g. Lint and inspections), followed by dynamic tests to check boundaries, branches and paths. This will usually require the employment of stubs and special test tools.

Component test
This is the black-box test of modules or groups of modules which represent certain functionality. There are no rules about what can be called a component; it is simply what the tester defines to be a component. However, it should make sense and be a testable unit. Components can be integrated step by step into bigger components and tested as such.

Integration test
The software is completed step by step and tested by tests covering the collaboration of modules or classes. The integration depends on the kind of system. For example, the steps could be to run the operating system first, gradually add one component after the other, and check whether the black-box tests still pass (the test cases, of course, increase with every added component). The integration is still done in the laboratory; it may be done using simulators or emulators, and input signals may be stimulated.

System test
This is a black-box test of the complete software in the target system. The environmental conditions have to be realistic (complete original hardware in the target environment).
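As noted above, module tests usually require stubs because the module's real collaborators (often hardware drivers) are not available in the laboratory. A minimal sketch, with invented names and threshold, of how a hand-written stub lets a module test stimulate the input interface with chosen values:

```python
# Hypothetical module under test: it reads a sensor through a driver
# function that is injected, so a stub can replace the real hardware.
class OvertempMonitor:
    def __init__(self, read_sensor):
        self.read_sensor = read_sensor   # dependency injected for testability

    def check(self):
        # The 85-degree threshold is an invented example value.
        return "ALARM" if self.read_sensor() > 85 else "OK"

# In the laboratory the real driver is unavailable; stubs stand in for it
# and let the test force each branch of the module deterministically.
def stub_hot():
    return 90

def stub_cold():
    return 20

assert OvertempMonitor(stub_hot).check() == "ALARM"
assert OvertempMonitor(stub_cold).check() == "OK"
```

The same module can later run unchanged against the real driver in the integration and system tests; only the injected function differs.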



Which Test finds which Error?

Possible error - Can be found by:

• Syntax errors (missing semicolons, values defined but not initialized or used, order of evaluation disregarded) - found by: compiler, Lint.

• Data errors (overflow of variables at calculation, usage of inappropriate data types, values not initialized, values loaded with wrong data or at a wrong point in time, lifetime of pointers) - found by: software inspection, module tests.

• Algorithm and logical errors (wrong program flow, use of wrong formulas and calculations) - found by: software inspection, tests.

• Interface errors (overlapping ranges, range violation (min. and max. values not observed or limited), unexpected inputs, wrong sequence of input parameters) - found by: software inspection, module tests, component tests.

• Operating system errors, architecture and design errors (disturbances by OS interruptions or hardware interrupts, timing problems, lifetime and duration problems) - found by: design inspection, integration tests.

• Integration errors (resource problems: runtime, stack, registers, memory, etc.) - found by: integration tests, system tests.

• System errors (wrong system behaviour, specification errors) - found by: system tests.

2. Discuss the CMM 5 Levels for Software Process.

The Capability Maturity Model (CMM) is a theoretical process capability maturity model. The CMM was originally developed as a tool for objectively assessing the ability of government contractors' processes to perform a contracted software project. For this reason, it has been used extensively for avionics software and government projects around the world. The 5-Level structure of the CMM can be illustrated by the diagram below (Figure 1).

Figure 1: Diagram of the CMM

Although the CMM comes from the area of software development, it can be (and has been, and still is being) applied as a generally applicable model to assist in understanding the process capability maturity of organisations in areas as diverse as software engineering, system engineering, project management, software maintenance, risk management, system acquisition, information technology (IT) and personnel management.

The CMM was first described in the book Managing the Software Process (1989) by Watts Humphrey, and hence was also known as "Humphrey's CMM". Humphrey had started developing the model at the SEI (the US Department of Defense's Software Engineering Institute, at Carnegie Mellon University in Pittsburgh) in 1986, basing it on the earlier work of Phil Crosby, who had published the Quality Management Maturity Grid in his book Quality is Free (1979).

The CMM has been superseded by a variant, the Capability Maturity Model Integration (CMMI), the old CMM being renamed the Software Engineering CMM (SE-CMM). Accreditations based on the SE-CMM expired on 31 December 2007. Variants of maturity models derived from the CMM emerged over the years, including, for example, the Systems Security Engineering CMM (SSE-CMM) and the People Capability Maturity Model. Maturity models generally started to become part of international standards as part of ISO 15504.

Structure of the CMM
(See also Figure 1, above.) The CMM involves the following aspects:

• Maturity Levels: a 5-level process maturity continuum, where the uppermost (5th) level is a notional ideal state in which processes would be systematically managed by a combination of process optimization and continuous process improvement.

• Key Process Areas: within each maturity level are Key Process Areas (KPAs) which characterise that level, and for each KPA five definitions are identified:
o Goals
o Commitment
o Ability
o Measurement
o Verification
The KPAs are not necessarily unique to the CMM; they represent the stages that organisations' processes will need to pass through as they progress up the CMM continuum.

• Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organisation has established at that maturity level. The goals signify the scope, boundaries and intent of each key process area.

• Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: Commitment to Perform, Ability to Perform, Activities Performed, Measurement and Analysis, and Verifying Implementation.

• Key Practices: the key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the KPAs.

Levels of the CMM
(See also chapter 2, page 11, of the March 2002 edition of CMMI from the SEI.) There are five levels defined along the continuum of the CMM. According to the SEI: "Predictability, effectiveness, and control of an organisation's software processes are believed to improve as the organisation moves up these five levels. While not rigorous, the empirical evidence to date supports this belief." The levels are:

Level 1 - Ad hoc (Chaotic)
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.
Organisational implications: (a) Because institutional knowledge tends to be scattered in such environments (there being a limited structured approach to knowledge management), not all of the stakeholders or participants in the processes may know or understand all of the components that make up the processes. As a result, process performance in such organisations is likely to be variable (inconsistent) and to depend heavily on the institutional knowledge, the competence, or the heroic efforts of relatively few people or small groups. (b) Despite the chaos, such organisations manage to produce products and services; however, in doing so there is significant risk that they will tend to exceed any estimated budgets and schedules for their work, it being difficult to estimate what a process will do when you do not fully understand the process (what it is that you do) in the first place and cannot therefore control it or manage it effectively. (c) Due to the lack of structure and formality, organisations at this level may overcommit, or abandon processes during a crisis, and it is unlikely that they will be able to repeat past successes. There tends to be limited planning, limited executive commitment or buy-in to projects of work, and limited acceptance of processes.

Level 2 - Repeatable
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress, and a degree of process discipline is in place to repeat earlier successes on projects with similar applications.
Organisational implications: Processes and their outputs could be visible to management at defined points, but results may not always be consistent. Even though (say) some basic processes are established to track cost, schedule and functionality, e.g. for project/programme management processes, there could still be a significant risk of exceeding cost and time estimates.

Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes, established and subject to some degree of improvement over time. These standard processes are in place (i.e. they are the AS-IS processes) and are used to establish consistency of process performance across the organisation.
Organisational implications: Process management starts to occur using defined, documented processes.

Level 4 - Managed
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g. for software development). In particular, management can identify ways to adjust and adapt the process without measurable loss of quality or deviations from specifications. Process Capability is established from this level.
Organisational implications: (a) Quantitative quality goals tend to be set for process output, e.g. software or software maintenance. (b) Using quantitative/statistical techniques, process performance is measured and monitored, and is thus generally predictable and controllable.

Level 5 - Optimized
It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes and improvements.
Organisational implications: (a) Quantitative process-improvement objectives for the organisation are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. (b) Process improvements to address common causes of process variation and measurably improve the organisation's processes are identified, evaluated and deployed, and the effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. (c) Both the defined processes and the organisation's set of standard processes are targets for measurable improvement activities. (d) A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing statistical special causes of process variation and providing statistical predictability of the results; though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, shifting the mean of the process performance) to improve process performance, while maintaining the likelihood of achieving the established quantitative process-improvement objectives.

3. Discuss the Waterfall Model for Software Development.

Software products are oriented towards customers like any other engineering products. A product is either driven by the market or it drives the market. Customer satisfaction was the main aim in the 1980s; customer delight is today's logo, and customer ecstasy is the buzzword of the new millennium. Products which are not customer oriented have no place in the market, although they may be designed using the best technology. The front end of the product is as crucial as its internal technology.

A market study is necessary to identify a potential customer's need. This process is also called market research. The already existing need and the possible future needs are combined together for study. A lot of assumptions are made during market study, and assumptions are very important factors in the development or start of a product's development. Assumptions which are not realistic can cause a nosedive of the entire venture, so although assumptions are conceptual, there should be a move to develop tangible assumptions to move towards a successful product. Once the market study is done, the customer's need is given to the Research and Development department to develop a cost-effective system that could potentially solve the customer's needs better than the competitors.

Once the system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development models to develop the proposed system and gives it to the customers. The basic popular models used by many software development firms are as follows:
A) System Development Life Cycle (SDLC) Model
B) Prototyping Model
C) Rapid Application Development Model
D) Component Assembly Model

A) System Development Life Cycle (SDLC) Model: This is also called the Classic Life Cycle Model, Linear Sequential Model, or Waterfall Method. This model has the following activities:
1. System/Information Engineering and Modeling
2. Software Requirements Analysis
3. Systems Analysis and Design
4. Code Generation
5. Testing
6. Maintenance

1) System/Information Engineering and Modeling: A system is the very essential requirement for the existence of software in any entity. As software development is a large process, work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is necessary when software must interface with other elements such as hardware, people and other resources.

2) Software Requirement Analysis: This is also known as feasibility study. In this phase, the development team visits the customer and studies their system requirement. They examine the need for possible software automation in the given software system. In some cases, for maximum output, the system should be re-engineered and spruced up. Once the ideal system is designed according to requirement, the development team studies the software requirement for the system. The requirements analysis and information gathering process is intensified and focused specially on software. To understand the nature of the programs to be built, the system analyst must study the information domain for the software, as well as understand required function, behavior, performance and interfacing. The main purpose of the requirement analysis phase is to find the need and to define the problem that needs to be solved. After feasibility study, the development team provides a document that holds the different specific recommendations for the candidate system. It also consists of personnel assignments, costs of the system, project schedule and target dates.

3) System Analysis and Design: In this phase, the logical system of the product is developed. The overall software structure and its outlay are defined. In the case of client/server processing technology, the number of tiers required for the package architecture, the database design, the data structure design, etc. are all defined in this phase. Analysis and design are very important in the whole development cycle; any fault in the design phase could be very expensive to solve later in the software development process. After the designing part, a software development model is created.

4) Code Generation: In this phase, the design must be decoded into a machine-readable form. If the design of the software product is done in a detailed manner, code generation can be achieved without much complication. For generation of code, programming tools like compilers, interpreters and debuggers are used. For coding purposes, different high-level programming languages like C, C++, Pascal and Java are used. The right programming language is chosen according to the type of application.

5) Testing: After the code generation phase, software program testing begins. Different testing methods are available to detect the bugs that were committed during the previous phases. A number of testing tools and methods are already available for this purpose.

6) Maintenance: Software will definitely go through change once it is delivered to the customer. There are a large number of reasons for change: change could happen due to some unpredicted input values into the system, and changes in the system directly affect the software's operations. The software should be implemented to accommodate changes that could happen during the post-development period.

4. Explain the Different Types of Software Measurement Techniques.

Most estimating methodologies are predicated on analogous software programs. Deciding which of these methodologies, or which combination of methodologies, is the most appropriate for your program usually depends on the availability of data, which in turn depends on where you are in the life cycle or your scope definition.

Analogies: Cost and schedule are determined based on data from completed similar efforts. When applying this method, it is often difficult to find analogous efforts at the total system level. It may be possible, however, to find analogous efforts at the subsystem or lower level (computer software configuration item / computer software component / computer software unit, CSCI/CSC/CSU). Furthermore, you may be able to find completed efforts that are more or less similar in complexity; if this is the case, a scaling factor may be applied based on expert opinion. After an analogous effort has been found, the associated data need to be assessed. It is preferable to use effort rather than cost data; if only cost data are available, these costs must be normalized to the same base year as your effort using current and appropriate inflation indices. As with all methods, the quality of the estimate is directly proportional to the credibility of the data.

Expert (engineering) opinion: Cost and schedule are estimated by determining the required effort based on input from personnel with extensive experience on similar programs. Due to the inherent subjectivity of this method, it is especially important that input from several independent sources be used. It is also important to request only effort data rather than cost data.

Parametric models and cost estimating relationships: Parametric models stratify internal databases to simulate environments from many analogous programs, while cost estimating relationships regress algorithms from several analogous programs.

Engineering builds: Engineering builds reference similar experience at the unit level.
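The normalization and scaling steps described above can be sketched in a few lines. All numbers here are hypothetical: the inflation index values and the 25% complexity adjustment are invented for illustration, not taken from any real program.

```python
# Sketch of adjusting analogous-effort cost data: bring the cost to the
# estimate's base year with inflation indices, then apply an
# expert-opinion scaling factor for the complexity difference.

def normalize_cost(cost, index_data_year, index_base_year):
    """Convert a cost from its original year to the base year."""
    return cost * (index_base_year / index_data_year)

analogous_cost = 400_000                 # cost of the completed similar effort
base_year_cost = normalize_cost(analogous_cost, 100.0, 112.0)  # ~448,000
estimate = base_year_cost * 1.25         # analogous effort judged ~25% simpler
print(round(estimate))                   # -> 560000
```

This illustrates why effort data are preferable to cost data: effort (person-months) needs no inflation adjustment, so the first step disappears entirely.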

5. Explain the COCOMO Model & Software Estimation Technique.

The COCOMO cost estimation model is used by thousands of software project managers, and is based on a study of hundreds of software projects. Unlike other cost estimation models, COCOMO is an open model, so all of the details are published, including:
• The underlying cost estimation equations
• Every assumption made in the model (e.g. "the project will enjoy good management")
• Every definition (e.g. the precise definition of the Product Design phase of a project)
• The costs included in an estimate, which are explicitly stated (e.g. project managers are included, secretaries aren't)

Because COCOMO is well defined, and because it doesn't rely upon proprietary estimation algorithms, Costar offers these advantages to its users:
• COCOMO estimates are more objective and repeatable than estimates made by methods relying on proprietary models
• COCOMO can be calibrated to reflect your software development environment, and to produce more accurate estimates

Costar is a faithful implementation of the COCOMO model that is easy to use on small projects, and yet powerful enough to plan and control large projects. Typically, you'll start with only a rough description of the software system that you'll be developing, and you'll use Costar to give you early estimates about the proper schedule and staffing levels. As you refine your knowledge of the problem, and as you design more of the system, you can use Costar to produce more and more refined estimates. Your initial estimate might be made on the basis of a system containing 3,000 lines of code. Your second estimate might be more refined, so that you now understand that your system will consist of two subsystems (and you'll have a more accurate idea about how many lines of code will be in each of the subsystems). Your next estimate will continue the process: you can use Costar to define the components of each subsystem. Costar permits you to continue this process until you arrive at the level of detail that suits your needs.

One word of warning: it is so easy to use Costar to make software cost estimates that it's possible to misuse it. Every Costar user should spend the time to learn the underlying COCOMO assumptions and definitions from Software Engineering Economics and Software Cost Estimation with COCOMO II.

Introduction to the COCOMO Model
The most fundamental calculation in the COCOMO model is the use of the Effort Equation to estimate the number of Person-Months required to develop a project. Most of the other COCOMO results, including the estimates for Requirements and Maintenance, are derived from this quantity.

Source Lines of Code
The COCOMO calculations are based on your estimates of a project's size in Source Lines of Code (SLOC). SLOC is defined such that:
• Only source lines that are DELIVERED as part of the product are included - test drivers and other support software are excluded
• SOURCE lines are created by the project staff - code created by applications generators is excluded
• One SLOC is one logical line of code
• Declarations are counted as SLOC
• Comments are not counted as SLOC

The original COCOMO 81 model was defined in terms of Delivered Source Instructions (DSI), which are very similar to SLOC. The major difference between DSI and SLOC is that a single Source Line of Code may be several physical lines. For example, an "if-then-else" statement would be counted as one SLOC, but might be counted as several DSI.

The Scale Drivers
In the COCOMO II model, some of the most important factors contributing to a project's duration and cost are the Scale Drivers. You set each Scale Driver to describe your project; these Scale Drivers determine the exponent used in the Effort Equation. The 5 Scale Drivers are:
• Precedentedness
• Development Flexibility
• Architecture / Risk Resolution
• Team Cohesion
• Process Maturity
Note that the Scale Drivers have replaced the Development Mode of COCOMO 81. The first two Scale Drivers, Precedentedness and Development Flexibility, actually describe much the same influences that the original Development Mode did.

Cost Drivers
COCOMO II has 17 cost drivers: you assess your project, development environment, and team to set each cost driver. The cost drivers are multiplicative factors that determine the effort required to complete your software project. For example, if your project will develop software that controls an airplane's flight, you would set the Required Software Reliability (RELY) cost driver to Very High. That rating corresponds to an effort multiplier of 1.26, meaning that your project will require 26% more effort than a typical software project. COCOMO II defines each of the cost drivers, and the Effort Multiplier associated with each rating. Check the Costar help for details about the definitions and how to set the cost drivers.

COCOMO II Effort Equation
The COCOMO II model makes its estimates of required effort (measured in Person-Months, PM) based primarily on your estimate of the software project's size (as measured in thousands of SLOC, KSLOC):

Effort = 2.94 * EAF * (KSLOC)^E

where EAF is the Effort Adjustment Factor derived from the Cost Drivers, and E is an exponent derived from the five Scale Drivers. As an example, a project with all Nominal Cost Drivers and Scale Drivers would have an EAF of 1.00 and an exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000 source lines of code, COCOMO II estimates that 28.9 Person-Months of effort are required to complete it:

Effort = 2.94 * (1.0) * (8)^1.0997 = 28.9 Person-Months

Effort Adjustment Factor
The Effort Adjustment Factor in the effort equation is simply the product of the effort multipliers corresponding to each of the cost drivers for your project. For example, if your project is rated Very High for Complexity (effort multiplier of 1.34) and Low for Language & Tools Experience (effort multiplier of 1.09), and all of the other cost drivers are rated to be Nominal (effort multiplier of 1.00), the EAF is the product of 1.34 and 1.09:

Effort Adjustment Factor = EAF = 1.34 * 1.09 = 1.46
Effort = 2.94 * (1.46) * (8)^1.0997 = 42.3 Person-Months

COCOMO II Schedule Equation
The COCOMO II schedule equation predicts the number of months required to complete your software project. The duration of a project is based on the effort predicted by the effort equation:

Duration = 3.67 * (Effort)^SE

where Effort is the effort from the COCOMO II effort equation, and SE is the schedule equation exponent derived from the five Scale Drivers. Continuing the example, and substituting the exponent of 0.3179 that is calculated from the Scale Drivers, yields an estimate of just over a year, and an average staffing of between 3 and 4 people:

Duration = 3.67 * (42.3)^0.3179 = 12.1 months
Average staffing = (42.3 Person-Months) / (12.1 months) = 3.5 people

The SCED Cost Driver
The COCOMO cost driver for Required Development Schedule (SCED) is unique, and requires a special explanation. The SCED cost driver is used to account for the observation that a project developed on an accelerated schedule will require more effort than a project developed on its optimum schedule. A SCED rating of Very Low corresponds to an Effort Multiplier of 1.43 (in the COCOMO II.2000 model) and means that you intend to finish your project in 75% of the optimum schedule (as determined by a previous COCOMO estimate). Continuing the example used earlier, but assuming that SCED has a rating of Very Low, COCOMO produces these estimates:

Effort Adjustment Factor = EAF = 1.34 * 1.09 * 1.43 = 2.09
Effort = 2.94 * (2.09) * (8)^1.0997 = 60.4 Person-Months
Duration = 75% * 12.1 months = 9.1 months
Average staffing = (60.4 Person-Months) / (9.1 months) = 6.7 people

Notice that the calculation of duration isn't based directly on the effort (number of Person-Months); instead it's based on the schedule that would have been required for the project assuming it had been developed on the nominal schedule. Remember that the SCED cost driver means "accelerated from the nominal schedule".
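The effort and schedule equations above can be reproduced in a few lines. This is a minimal sketch of just the two equations, using the example exponents from the text (E = 1.0997, SE = 0.3179); a real COCOMO II tool would derive these exponents from the five Scale Drivers rather than take them as inputs.

```python
# Sketch of the COCOMO II Effort and Schedule Equations described above.
# The constants 2.94 and 3.67 come from the text; the exponents are the
# worked example's values, here passed in as plain parameters.

def effort_pm(ksloc, eaf, e=1.0997):
    """Effort Equation: PM = 2.94 * EAF * KSLOC**E."""
    return 2.94 * eaf * ksloc ** e

def duration_months(effort, se=0.3179):
    """Schedule Equation: Duration = 3.67 * PM**SE."""
    return 3.67 * effort ** se

# Worked example from the text: 8 KSLOC, Complexity = Very High (1.34),
# Language & Tools Experience = Low (1.09), all other drivers Nominal.
eaf = 1.34 * 1.09             # Effort Adjustment Factor, ~1.46
pm = effort_pm(8, eaf)        # ~42.3 Person-Months
tdev = duration_months(pm)    # ~12.1 months
staff = pm / tdev             # ~3.5 people on average
print(round(pm, 1), round(tdev, 1), round(staff, 1))  # -> 42.3 12.1 3.5
```

Rerunning with `eaf = 1.34 * 1.09 * 1.43` reproduces the SCED Very Low effort of about 60.4 Person-Months, though the 9.1-month duration must then be taken as 75% of the nominal 12.1 months rather than from the schedule equation, exactly as the text notes.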

Won't that provide my people with everything they need to know? Reality : The book of standards may very well exist.beliefs about software and the process used to build it . Reality : If an organization does not understand how to manage and control software project internally. I can just relax and let that firm build it. Remember that the SCED cost driver means "accelerated from the nominal schedule". and they are often promulgated by experienced practitioners who "know the score". but is it used? . this statement may seem counterintuitive. keep schedules from slipping. In the words of Brooks [BRO75]: "Adding people to a late software project makes it later. the marketing /sales department. people who were working must spend time educating the newcomers. However.Are software practitioners aware of its existence? . Write a note on myths of Software. Myth : We already have a book that's full of standards and procedures for building software. and improve quality. 6. Software Myths Software Myths. Customer Myths A customer who requests computer software may be a person at the next desk. Myth : If we get behind schedule.schedule. For instance. Management Myths Managers with software responsibility. or an outside company that has requested software under contract. a technical group down the hall. If the Belief will lessen the pressure. the answer to these entire question is no.Is it complete? Is it adaptable? .can be traced to the earliest days of computing.Does it reflect modern software engineering practice? . myths appear to be reasonable statements of fact. the customer believes myths about .Is it streamlined to improve time to delivery while still maintaining a focus on Quality? In many cases. In many cases. they have an intuitive feel. it will invariably struggle when it out sources software project." At first. 
we can add more programmers and catch up (sometimes called the Mongolian horde concept) Reality : Software development is not a mechanistic process like manufacturing. Like a drowning person who grasps at a straw. thereby reducing the amount of time spent on productive development effort Myth : If we decide to outsource the software project to a third party. like managers in most disciplines. Myths have a number of attributes that have made them insidious. a software manager often grasps at belief in a software myth. as new people are added. are often under pressure to maintain budgets.

Customer Myths

A customer who requests computer software may be a person at the next desk, a technical group down the hall, the marketing/sales department, or an outside company that has requested software under contract. In many cases, the customer believes myths about software because software managers and practitioners do little to correct misinformation. Myths lead to false expectations and, ultimately, dissatisfaction with the developers.

Myth: A general statement of objectives is sufficient to begin writing programs; we can fill in the details later.
Reality: Although a comprehensive and stable statement of requirements is not always possible, an ambiguous statement of objectives is a recipe for disaster. Unambiguous requirements are developed only through effective and continuous communication between customer and developer.

Myth: Project requirements continually change, but change can be easily accommodated because software is flexible.
Reality: It is true that software requirements change, but the impact of change varies with the time at which it is introduced. When requirement changes are requested early, cost impact is relatively small. However, as time passes, once a design framework has been established and resources have been committed, cost impact grows rapidly, and change can cause upheaval that requires additional resources and major design modification.

Master of Business Administration – IS Semester 3 MI0033 – Software Engineering – 4 Credits

Assignment Set- 2 (60 Marks) Note: Each question carries 10 Marks. Answer all the questions.

1. Quality and reliability are related concepts but are fundamentally different in a number of ways. Discuss them.

Software quality may be defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software. The three key points in this definition are:

1. Software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software is engineered. If the criteria are not followed, lack of quality will usually result.
3. A set of implicit requirements often goes unmentioned, for example ease of use, maintainability, etc. If software conforms to its explicit requirements but fails to meet implicit requirements, software quality is suspect.

In addition to the more software-specific definitions given below, there are several applicable definitions of quality which are used in business (see Quality_(business)#Definitions). One of the challenges of software quality is that "everyone feels they understand it".[3]

Another definition, coined by Gerald Weinberg in Quality Software Management: Systems Thinking, is "Quality is value to some person." This definition stresses that quality is inherently subjective: different people will experience the quality of the same software very differently.[4]

Another definition, by Dr. Tom DeMarco, says "a product's quality is a function of how much it changes the world for the better."[5] This can be interpreted as meaning that user satisfaction is more important than anything else in determining software quality.
A definition in Steve McConnell's Code Complete divides software quality into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not.[1]

One strength of the "value to some person" definition is the questions it invites teams to consider, such as "Who are the people we want to value our software?" and "What will be valuable to them?"

History

Software product quality

• Product quality: conformance to requirements or program specification; related to reliability
• Scalability
• Correctness
• Completeness
• Absence of bugs
• Fault-tolerance
• Extensibility
• Maintainability
• Documentation

The Consortium for IT Software Quality (CISQ) was launched in 2009 to standardize the measurement of software product quality. The Consortium's goal is to bring together industry executives from Global 2000 IT organizations, system integrators, outsourcers, and package vendors to jointly address the challenge of standardizing the measurement of IT software quality and to promote a market-based ecosystem to support its deployment.

Structural quality is the quality of the application's architecture and the degree to which its implementation accords with software engineering best practices. Industry data demonstrate that poor application structural quality results in cost and schedule overruns and creates waste in the form of rework (up to 45% of development time in some organizations). Moreover, poor structural quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security breaches, and performance problems. However, as in any other field of engineering, an application with good structural software quality costs less to maintain and is easier to understand and change in response to pressing business needs. It is therefore essential to supplement traditional testing (functional, non-functional, and run-time) with measures of application structural quality.

Source code quality

A computer has no concept of "well-written" source code. However, from a human point of view, source code can be written in a way that has an effect on the effort needed to comprehend its behavior. Many source code programming style guides, which often stress readability and usually language-specific conventions, are aimed at reducing the cost of source code maintenance. Some of the issues that affect code quality include:

• Readability
• Ease of maintenance, testing, debugging, fixing, modification and portability
• Low complexity
• Low resource consumption: memory, CPU
• Number of compilation or lint warnings
• Robust input validation and error handling, established by software fault injection

These measured criteria are typically called software metrics. Methods to improve the quality include:

• Refactoring
• Code inspection or software review
• Documenting the code

Software reliability

Software reliability is an important facet of software quality. It is defined as "the probability of failure-free operation of a computer program in a specified environment for a specified time".[6] One of reliability's distinguishing characteristics is that it is objective and measurable, and can be estimated, whereas much of software quality is subjective criteria.[7] This distinction is especially important in the discipline of Software Quality Assurance.

History

With software embedded into many devices today, software failure has caused more than inconvenience. Software errors have even caused human fatalities. The causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error that led to multiple deaths is discussed in Dr. Leveson's paper [1] (PDF). This has resulted in requirements for the development of some types of software. In the United States, both the Food and Drug Administration (FDA) and the Federal Aviation Administration (FAA) have requirements for software development.
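Robust input validation and error handling, listed among the code-quality issues above, can be illustrated with a minimal sketch. The function and its validation rules are hypothetical examples, not taken from the original text:

```python
def parse_age(raw):
    """Validate and convert a user-supplied age string.

    Bad input is rejected with a clear error instead of being
    allowed to propagate into the rest of the program.
    """
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"age must be a whole number, got {raw!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age
```

Deliberately feeding such a function malformed values, as a fault-injection exercise would, quickly shows whether the error handling is actually robust.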

Goal of reliability

The need for a means to objectively determine software reliability comes from the desire to apply the techniques of contemporary engineering fields to the development of software. That desire is a result of the common observation, by both lay-persons and specialists, that computer software does not work the way it ought to. In other words, software is seen to exhibit undesirable behaviour, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and by extension the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into almost every aspect of modern life through the technology we use. It is only expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability. In other words, the software should behave in the way it is intended, or, even better, in the way it should.

Challenge of reliability

The circular logic of the preceding sentence is not accidental; it is meant to illustrate a fundamental problem in the issue of measuring software reliability, which is the difficulty of determining, in advance, exactly how the software is intended to operate. The problem seems to stem from a common conceptual error in the consideration of software, which is that software in some sense takes on a role which would otherwise be filled by a human being. This is a problem on two levels. Firstly, most modern software performs work which a human could never perform, especially at the high level of reliability that is often expected from software in comparison to humans. Secondly, software is fundamentally incapable of most of the mental capabilities of humans which separate them from mere mechanisms: qualities such as adaptability, general-purpose knowledge, a sense of conceptual and functional context, and common sense.

Nevertheless, most software programs could safely be considered to have a particular, even singular, purpose. If the possibility can be allowed that said purpose can be well or even completely defined, it should present a means for at least considering objectively whether the software is, in fact, reliable, by comparing the expected outcome to the actual outcome of running the software in a given environment, with given data. The study of theoretical software reliability is predominantly concerned with this concept of correctness, a mathematical field of computer science which is an outgrowth of language and automata theory. However, in the case of real software, it is still not known whether it is possible to exhaustively determine either the expected outcome or the actual outcome of the entire set of possible environment and input data to a given program, and various attempts are in the works to rein in the vastness of the space of software's environmental and input variables, both for actual programs and theoretical descriptions of programs.

Reliability in program development

Such attempts to improve software reliability can be applied at different stages of a program's development. These stages principally include: requirements, design, programming, testing, and runtime evaluation.

Requirements

A program cannot be expected to work as desired if the developers of the program do not, in fact, know the program's desired behaviour in advance, or if they cannot at least determine its desired behaviour in parallel with development, in sufficient detail; without this knowledge it is probably impossible to determine the program's reliability with any certainty. What level of detail is considered sufficient is hotly debated. The idea of perfect detail is attractive, but may be impractical, if not actually impossible, to achieve. This is because the desired behaviour tends to change as the possible range of the behaviour is determined through actual attempts, or, more accurately, failed attempts, to achieve it.

Whether a program's desired behaviour can be successfully specified in advance is a moot point if the behaviour cannot be specified at all, and this is the focus of attempts to formalize the process of creating requirements for new software projects. In situ with the formalization effort is an attempt to help inform non-specialists, particularly non-programmers, who commission software projects without sufficient knowledge of what computer software is in fact capable of. Communicating this knowledge is made more difficult by the fact that, as hinted above, even programmers cannot always know in advance what is actually possible for software in advance of trying.
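Comparing the expected outcome to the actual outcome of running the software on given data, as described above, is exactly what an automated test does. A minimal sketch, with a hypothetical function and hand-chosen expected values:

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Each case pairs an input with its expected outcome, i.e. the
# "desired behaviour known in advance"; the check passes only if
# the actual outcome matches.
cases = [([1, 3, 2], 2), ([1, 2, 3, 4], 2.5), ([7], 7)]
for data, expected in cases:
    assert median(data) == expected
```

Note that the three cases cover only a tiny corner of the input space, which is precisely the exhaustiveness problem the text raises.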

Design

While requirements are meant to specify what a program should do, design is meant, at least at a high level, to specify how the program should do it. The usefulness of design is questioned by some, but those who look to formalize the process of ensuring reliability often offer good software design processes as the most significant means to accomplish it. Software design usually involves the use of more abstract and general means of specifying the parts of the software and what they do. As such, it can be seen as a way to break a large program down into many smaller programs, such that those smaller pieces together do the work of the whole program.

The purposes of high-level design are as follows. It separates what are considered to be problems of architecture, or overall program concept and structure, from problems of actual coding, which solve problems of actual data processing. It applies additional constraints to the development process by narrowing the scope of the smaller software components, and thereby, it is hoped, removing variables which could increase the likelihood of programming errors. It provides a program template, including the specification of interfaces, which can be shared by different teams of developers working on disparate parts, such that they can know in advance how each of their contributions will interface with those of the other teams. Finally, and perhaps most controversially, it specifies the program independently of the implementation language or languages, thereby removing language-specific biases and limitations which would otherwise creep into the design, perhaps unwittingly on the part of programmer-designers.

Programming

The history of computer programming language development can often be best understood in the light of attempts to master the complexity of computer programs, which otherwise becomes more difficult to understand in proportion (perhaps exponentially) to the size of the programs. Lack of understanding of a program's overall structure and functionality is a sure way to fail to detect errors in the program, and thus the use of better languages should, conversely, reduce the number of errors by enabling a better understanding. Improvements in languages tend to provide incrementally what software design has attempted to do in one fell swoop: consider the software at ever greater levels of abstraction. (Another way of looking at the evolution of programming languages is simply as a way of getting the computer to do more and more of the work, but this may be a different way of saying the same thing.)
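The idea of a program template with specified interfaces, which different teams can code against independently, can be sketched as follows. The `Storage` contract is a hypothetical example invented for illustration:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Design-level interface: teams agree on this contract
    before any implementation is written."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str: ...

class MemoryStorage(Storage):
    """One team's implementation; another team could supply a
    database-backed version without changing any callers."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]
```

Code written against `Storage` knows in advance how it will interface with whichever implementation is eventually plugged in, which is the point made above about teams working on disparate parts.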

Such inventions as the statement, sub-routine, file, class, template, library, component and more have allowed the arrangement of a program's parts to be specified using abstractions such as layers, hierarchies and modules, which provide structure at different granularities, so that from any point of view the program's code can be imagined to be orderly and comprehensible. In addition, improvements in languages have enabled more exact control over the shape and use of data elements, culminating in the abstract data type. These data types can be specified to a very fine degree, including how and when they are accessed, and even the state of the data before and after it is accessed.

Software Build and Deployment

Many programming languages such as C and Java require the program "source code" to be translated into a form that can be executed by a computer. This translation is done by a program called a compiler. Additional operations may be involved to associate, bind, link or package files together in order to create a usable runtime configuration of the software application. The totality of the compiling and assembly process is generically called "building" the software. The software build is critical to software quality because if any of the generated files are incorrect the software build is likely to fail. And, if the incorrect version of a program is inadvertently used, then testing can lead to false results.

Software builds are typically done in a work area unrelated to the runtime area, such as the application server. For this reason, a deployment step is needed to physically transfer the software build products to the runtime area. The deployment procedure may also involve technical parameters, which, if set incorrectly, can also prevent software testing from beginning. For example, a Java application server may have options for parent-first or parent-last class loading. Using the incorrect parameter can cause the application to fail to execute on the application server.

The technical activities supporting software quality, including build, deployment, change control and reporting, are collectively known as software configuration management. A number of software tools have arisen to help meet the challenges of configuration management, including file control tools and build control tools.

Testing
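The concern above, that testing against the wrong build version leads to false results, is commonly addressed by fingerprinting build products before deployment. A minimal sketch under stated assumptions: the artifact contents and the 12-character fingerprint length are arbitrary illustrative choices, not a standard:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a short SHA-256 fingerprint of a build product."""
    return hashlib.sha256(content).hexdigest()[:12]

def verify_deployment(built: bytes, deployed: bytes) -> bool:
    """Refuse to test a runtime copy that does not match the
    product recorded in the build area."""
    return fingerprint(built) == fingerprint(deployed)
```

Real build-control tools record such checksums (and version labels) automatically, but the principle is the same: prove that what is running is what was built.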

Software testing, when done correctly, can increase overall software quality of conformance by testing that the product conforms to its requirements. Testing includes, but is not limited to:

1. Unit Testing
2. Functional Testing
3. Regression Testing
4. Performance Testing
5. Failover Testing
6. Usability Testing

A number of agile methodologies use testing early in the development cycle to ensure quality in their products. For example, the test-driven development practice, where tests are written before the code they will test, is used in Extreme Programming to ensure quality.

Runtime

Runtime reliability determinations are similar to tests, but go beyond simple confirmation of behaviour to the evaluation of qualities such as performance and interoperability with other code or particular hardware configurations.

Software quality factors

A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the software program. Note that none of these factors are binary; that is, they are not "either you have it or you don't" traits. Rather, they are characteristics that one seeks to maximize in one's software to optimize its quality. So rather than asking whether a software product "has" factor x, ask instead the degree to which it does (or does not).

Some software quality factors are listed here:

Understandability
Clarity of purpose. This goes further than just a statement of purpose; all of the design and user documentation must be clearly written so that it is easily understandable. This is obviously subjective in that the user context must be taken into account.
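The test-first practice mentioned above, writing the test before the code it exercises, looks like this in miniature. The `slugify` function is a hypothetical example:

```python
# Step 1: write the test first; running it fails until the code exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify()
```

Because the test exists first, it pins down the desired behaviour before any implementation choice is made, which is the quality argument for the practice.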

For instance, if the software product is to be used by software engineers it is not required to be understandable to the layman.

Completeness
Presence of all constituent parts, with each part fully developed. This means that if the code calls a subroutine from an external library, the software package must provide reference to that library and all required parameters must be passed. All required input data must also be available.

Conciseness
Minimization of excessive or redundant information or processing. This is important where memory capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It can be improved by replacing repeated functionality by one subroutine or function which achieves that functionality. It also applies to documents.

Portability
Ability to be run well and easily on multiple computer configurations. Portability can mean both between different hardware (such as running on a PC as well as a smartphone) and between different operating systems (such as running on both Mac OS X and GNU/Linux).

Consistency
Uniformity in notation, symbology, appearance, and terminology within itself.

Maintainability
Propensity to facilitate updates to satisfy new requirements. Thus the software product that is maintainable should be well-documented, should not be complex, and should have spare capacity for memory, storage and processor utilization and other resources.

Testability
Disposition to support acceptance criteria and evaluation of performance. Such a characteristic must be built in during the design phase if the product is to be easily testable; a complex design leads to poor testability.

Usability
Convenience and practicality of use. This is affected by such things as the human-computer interface. The component of the software that has most impact on this is the user interface (UI), which for best usability is usually graphical (i.e. a GUI).

Reliability
Ability to be expected to perform its intended functions satisfactorily. This implies a time factor in that a reliable product is expected to perform correctly over a period of time. It also encompasses environmental considerations in that the product is required to perform correctly in whatever conditions it finds itself (sometimes termed robustness).

Efficiency
Fulfillment of purpose without waste of resources, such as memory, space and processor utilization, network bandwidth, time, etc.

Security
Ability to protect data against unauthorized access and to withstand malicious or inadvertent interference with its operations. Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies resilience in the face of malicious, intelligent and adaptive attackers.
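Conciseness, replacing repeated functionality with a single subroutine as described above, can be shown with a small before-and-after sketch. The receipt-formatting example is hypothetical:

```python
# Before: the same formatting logic is duplicated at every call site.
#   line1 = "Total: " + format(total, ".2f") + " EUR"
#   line2 = "Tax:   " + format(tax, ".2f") + " EUR"

# After: the repeated functionality lives in one function.
def money(amount: float) -> str:
    return f"{amount:.2f} EUR"

def receipt(total: float, tax: float) -> list[str]:
    return [f"Total: {money(total)}", f"Tax:   {money(tax)}"]
```

Besides shortening the code, the single `money` function gives any future change (say, a different currency) exactly one place to happen, which also serves maintainability.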

Measurement of software quality factors

There are varied perspectives within the field on measurement. Some believe that quantitative measures of software quality are essential. Others believe that contexts where quantitative measures are useful are quite rare, and so prefer qualitative measures. There are a great many measures that are valued by some professionals, or in some contexts, that are decried as harmful by others. Several leaders in the field of software testing have written about the difficulty of measuring what we truly want to measure well.[8][9]

One example of a popular metric is the number of faults encountered in the software. Software that contains few faults is considered by some to have higher quality than software that contains many faults. Questions that can help determine the usefulness of this metric in a particular context include:

1. What constitutes "many faults"? Does this differ depending upon the purpose of the software (e.g., blogging software vs. navigational software)? Does this take into account the size and complexity of the software?
2. Does this account for the importance of the bugs (and the importance to the stakeholders of the people those bugs bug)? Does one try to weight this metric by the severity of the fault, or the incidence of users it affects? If so, how? And if not, how does one know that 100 faults discovered is better than 1000?
3. If the count of faults being discovered is shrinking, how do I know what that means? For example, does that mean that the product is now higher quality than it was before? Or that this is a smaller or less ambitious change than before? Or that fewer tester-hours have gone into the project than before? Or that this project was tested by less skilled testers than before? Or that the team has discovered that fewer faults reported is in their interest?

This last question points to an especially difficult one to manage. All software quality metrics are in some sense measures of human behavior, since humans create software.[8] If a team discovers that they will benefit from a drop in the number of reported bugs, there is a strong tendency for the team to start reporting fewer defects. That may mean that email begins to circumvent the bug tracking system, or that four or five bugs get lumped into one bug report, or that testers learn not to report minor annoyances. The difficulty is measuring what we mean to measure, without creating incentives for software programmers and testers to consciously or unconsciously "game" the measurements.
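Weighting the fault count by severity, as question 2 above suggests, could be done as follows. The severity labels and weights are arbitrary illustrative values, not a standard:

```python
# Hypothetical weights: a critical fault counts far more than a
# cosmetic one.
WEIGHTS = {"critical": 10, "major": 3, "minor": 1}

def weighted_fault_score(faults):
    """Sum severity weights over a list of reported fault severities."""
    return sum(WEIGHTS[severity] for severity in faults)
```

Under these weights, nine minor annoyances score lower than one critical fault, illustrating why a raw comparison of "9 faults vs. 1 fault" would mislead. Of course, the weights themselves are a judgment call, so this only relocates the subjectivity rather than removing it.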

Software quality factors cannot be measured directly because of their vague definitions. It is necessary to find measurements, or metrics, which can be used to quantify them as non-functional requirements. For example, reliability is a software quality factor, but cannot be evaluated in its own right. However, there are related attributes to reliability which can indeed be measured. Some such attributes are mean time to failure, rate of failure occurrence, and availability of the system. Similarly, an attribute of portability is the number of target-dependent statements in a program.

A scheme that could be used for evaluating software quality factors is given below. For every characteristic, there is a set of questions which are relevant to that characteristic. Some type of scoring formula could be developed based on the answers to these questions, from which a measurement of the characteristic can be obtained.

Understandability
Are variable names descriptive of the physical or functional property represented? Do uniquely recognisable functions contain adequate comments so that their purpose is clear? Are deviations from forward logical flow adequately commented? Are all elements of an array functionally related?

Completeness
Are all necessary components available? Does any process fail for lack of resources or programming? Are all potential pathways through the code accounted for, including proper error handling?

Conciseness
Is all code reachable? Is any code redundant? How many statements within loops could be placed outside the loop, thus reducing computation time? Are branch decisions too complex?

Portability
Does the program depend upon system or library routines unique to a particular installation? Have machine-dependent statements been flagged and commented? Has dependency on internal bit representation of alphanumeric or special characters been avoided? How much effort would be required to transfer the program from one hardware/software system or environment to another? Software portability refers to the software's support for, and existence in, different environments such as Windows, Mac, Linux, etc.
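The conciseness question above about statements within loops refers to loop-invariant code. A small sketch, with a hypothetical scaling computation:

```python
import math

def norms_slow(vectors, scale_db):
    # The conversion of scale_db is loop-invariant: it is recomputed
    # on every iteration even though it never changes.
    result = []
    for v in vectors:
        factor = 10 ** (scale_db / 20)          # invariant, inside the loop
        result.append(factor * math.hypot(*v))
    return result

def norms_fast(vectors, scale_db):
    factor = 10 ** (scale_db / 20)              # hoisted outside the loop
    return [factor * math.hypot(*v) for v in vectors]
```

Both functions produce the same results; moving the invariant statement out of the loop simply removes the redundant recomputation that the checklist question is probing for.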

Consistency
Is one variable name used to represent different logical or physical entities in the program? Does the program contain only one representation for any given physical or mathematical constant? Are functionally similar arithmetic expressions similarly constructed? Is a consistent scheme used for indentation, nomenclature, the color palette, fonts and other visual elements?

Maintainability
Has some memory capacity been reserved for future expansion? Is the design cohesive, i.e., does each module have distinct, recognizable functionality? Does the software allow for a change in data structures (object-oriented designs are more likely to allow for this)? If the code is procedure-based (rather than object-oriented), is a change likely to require restructuring the main program, or just a module?

Testability
Are complex structures employed in the code? Does the detailed design contain clear pseudo-code? Is the pseudo-code at a higher level of abstraction than the code? If tasking is used in concurrent designs, are schemes available for providing adequate test cases?

Usability
Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are meaningful error messages provided?

Reliability
Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero avoided? Is exception handling provided? Reliability is the probability that the software performs its intended functions correctly in a specified period of time under stated operating conditions. When a failure occurs, the cause may lie in the code, but there could also be a problem with the requirement document.

Efficiency
Have functions been optimized for speed? Have repeatedly used blocks of code been formed into subroutines? Has the program been checked for memory leaks or overflow errors?
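The reliability questions above (range-checking input, avoiding divide-by-zero, providing exception handling) can be shown together in one short sketch. The function is a hypothetical example:

```python
def average_rate(distance_km, hours):
    """Compute km/h with the defensive checks the checklist asks for."""
    if distance_km < 0:
        raise ValueError("distance cannot be negative")   # range check
    if hours == 0:
        raise ZeroDivisionError("elapsed time is zero")   # explicit, not accidental
    return distance_km / hours

# Exception handling lets the caller recover instead of crashing.
try:
    rate = average_rate(100, 0)
except ZeroDivisionError:
    rate = None
```

Raising the divide-by-zero error explicitly, before the division, means the failure carries a meaningful message rather than surfacing as an unexplained crash deep in a calculation.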

Security
Does the software protect itself and its data against unauthorized access and use? Does it allow its operator to enforce security policies? Are security mechanisms appropriate, adequate and correctly implemented? Can the software withstand attacks that can be anticipated in its intended environment?

2. Explain Version Control & Change Control.

Change control within Quality management systems (QMS) and Information Technology (IT) systems is a formal process used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of resources involved in implementing change.

Change control is currently used in a wide variety of products and systems. For Information Technology (IT) systems it is a major aspect of the broader discipline of change management. Typical examples from the computer and network environments are patches to software products, installation of new operating systems, upgrades to network routing tables, or changes to the electrical power systems supporting such infrastructure. Certain portions of the Information Technology Infrastructure Library cover change control.

The process

There is considerable overlap and confusion between change management, configuration management and change control. The definition below is not yet integrated with definitions of the others. Certain experts describe change control as a set of six steps:

1. Record / Classify
2. Assess
3. Plan

. Following implementation. They will then seek approval and request a time and date to carry out the implementation phase. Close / Gain Acceptance [edit] Record/classify The client initiates change by making a formal request for something to be changed. both to the business and to the process. and complexity. the change can be closed. and follow this by making a judgment on who should carry out the change. [edit] Build/test If all stakeholders agree with the plan. Everyone with a stake in the change then must meet to determine whether there is a business or technical justification for the change. The change control team then records and categorizes that request. The team's first job is to plan the change in detail as well as construct a regression plan in case the change needs to be backed out. [edit] Close/gain acceptance When the client agrees that the change was implemented correctly. This categorization would include estimates of importance. it is usual to carry out a post-implementation review which would take place at another stakeholder meeting. the head of the change control team will consolidate these. [edit] Implement All stakeholders must agree to a time. [edit] Assess The impact assessor or assessors then make their risk analysis typically by answering a set of questions concerning risk. Implement 6. the delivery team will build the solution. date and cost of implementation. which will then be tested. Build / Test 5. usually one with the specific role of carrying out this particular type of change. [edit] Plan Management will assign the change to a specific delivery team. If the change requires more than one type of assessment. impact. The change is then sent to the delivery team for planning.4.

Revision control, also known as version control or source control (and an aspect of software configuration management or SCM), is the management of changes to documents, programs, and other information stored as computer files. It is most commonly used in software development, where a team of people may change the same files. Changes are usually identified by a number or letter code, termed the "revision number", "revision level", or simply "revision". For example, an initial set of files is "revision 1". When the first change is made, the resulting set is "revision 2", and so on. Each revision is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged.

Version control systems (VCSs – singular VCS) most commonly run as stand-alone applications, but revision control is also embedded in various types of software such as word processors (e.g. Microsoft Word, OpenOffice.org Writer, KWord, Pages, etc.), spreadsheets (e.g. Microsoft Excel, OpenOffice.org Calc, KSpread, Numbers, etc.), and various content management systems (e.g. Drupal, Joomla, WordPress). Integrated revision control is a key feature of wiki software packages such as MediaWiki, DokuWiki, TWiki, etc. In wikis, revision control allows for the ability to revert a page to a previous revision, which is critical for allowing editors to track each other's edits, correct mistakes, and defend public wikis against vandalism.

Software tools for revision control are essential for the organization of multi-developer projects.

Regulatory environment
In a Good Manufacturing Practice regulated industry, the topic is frequently encountered by its users. Various industrial guidances and commentaries are available for people to comprehend this concept. As a common practice, the activity is usually directed by one or more SOPs. From the information technology perspective for clinical trials, it has been guided by another USFDA document.
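The numbered-revision scheme described above can be sketched in a few lines of Python. This is a toy illustration of the concept, not how any real VCS is implemented; all names are assumptions:

```python
# Toy model of numbered revisions: each commit records content, author,
# and timestamp; any earlier revision can be restored ("reverted").

import time

class RevisionStore:
    def __init__(self, initial_content, author):
        self.history = [(initial_content, author, time.time())]  # revision 1

    def commit(self, content, author):
        self.history.append((content, author, time.time()))
        return len(self.history)             # the new revision number

    def revert(self, revision):
        """Restore an earlier state by committing it as a new revision."""
        content, author, _ = self.history[revision - 1]
        return self.commit(content, author)

store = RevisionStore("hello", author="alice")   # revision 1
store.commit("hello world", author="bob")        # revision 2
store.revert(1)                                  # revision 3 = revision 1's text
print(store.history[-1][0])  # hello
```

Note that a revert does not delete history: it adds a new revision whose content matches the old one, which is exactly how wiki page reverts behave.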

Overview
In computer software engineering, revision control is any practice that tracks and provides control over changes to source code. Software developers sometimes use revision control software to maintain documentation and configuration files as well as source code.

As teams design, develop and deploy software, it is common for multiple versions of the same software to be deployed in different sites, and for the software's developers to be working simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, one version where bugs are fixed but no new features are added (a branch), while the other version is where new features are worked on (the trunk).

At the simplest level, developers could simply retain multiple copies of the different versions of the program and label them appropriately. This simple approach has been used on many large software projects. While this method can work, it is inefficient, as many near-identical copies of the program have to be maintained. It requires a lot of self-discipline on the part of developers, and often leads to mistakes. Consequently, systems to automate some or all of the revision control process have been developed. Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team.
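The ability to "retrieve and run different versions to determine in which version(s) the problem occurs" is usually exploited by bisection: if a bug, once introduced, stays present in later revisions, a binary search finds the first bad revision in logarithmically many builds. This is the idea behind tools such as `git bisect`; the revision list and `is_buggy` predicate below are made-up stand-ins:

```python
# Binary search over an ordered revision history for the first revision
# in which a bug appears. Assumes the bug persists once introduced and
# that at least one revision in the list is buggy.

def first_bad_revision(revisions, is_buggy):
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_buggy(revisions[mid]):
            hi = mid            # bug already present: look earlier
        else:
            lo = mid + 1        # still good: look later
    return revisions[lo]

revisions = list(range(1, 11))                           # revisions 1..10
print(first_bad_revision(revisions, lambda r: r >= 7))   # 7
```

With 10 retained versions this needs about 4 test runs instead of 10, which is why automated revision storage (rather than ad-hoc labeled copies) pays off quickly.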

Specialized strategies
Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. This system of control implicitly allowed returning to any earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control.

Version control is also widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and they are still employed in business and law with varying degrees of sophistication. An entire industry has emerged to service the document revision control needs of business and other users, and some of the revision control technology employed in these circles is subtle, powerful, and innovative. The members of a team editing a single document or snippet of code may be geographically dispersed, and may pursue different and even contrary interests; sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even necessary in such situations.

Revision control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made, and a way to roll back to earlier versions should the need arise.

Source-management models
Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, then without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging.

Atomic operations
Computer scientists speak of atomic operations if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense: commits are operations which tell the revision control system that you want to make a group of changes you have been making final and available to all users. Not all revision control systems have atomic commits; notably, the widely used CVS lacks this feature.

File locking
The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change it until that developer "checks in" the updated version (or cancels the checkout). File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). However, if the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, leading to more serious problems. The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists.

Version merging
Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, preserving the changes from the first developer when other developers check in. The second developer checking in code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. Merging two files can be a very delicate operation, and is usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not be an image file at all. These problems limit automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types.

Baselines, labels and tags
Most revision control tools will use only one of these similar terms (baseline, label, tag) to refer to the action of identifying a snapshot ("label the project") or the record of the snapshot ("try it with baseline X"). Typically only one of the terms baseline, label, or tag is used in documentation or discussion; they can be considered synonyms. In most projects some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones. When both the term baseline and either label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool for identifying or making the record of the snapshot, while baseline indicates the increased significance of any given label or tag. Most formal discussion of configuration management uses the term baseline.

Distributed revision control
Distributed revision control (DRCS) takes a peer-to-peer approach, as opposed to the client-server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository. Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system:
• No canonical, reference copy of the codebase exists by default; only working copies.
• Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server. Communication is only necessary when pushing or pulling changes to or from other peers.
• Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing natural protection against data loss.
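The version-merging model described above can be illustrated with a deliberately simplified three-way merge: two developers edit copies of the same base file; non-overlapping line edits merge automatically, while edits to the same line are reported as a conflict. Real merge algorithms also handle insertions and deletions; this sketch assumes all three copies have the same number of lines:

```python
# Simplified three-way merge over lines. Non-overlapping changes are
# combined; a line changed on both sides (differently) is a conflict.
# Stand-in for real merge tools; assumes equal-length line lists.

def three_way_merge(base, ours, theirs):
    """Return (merged_lines, conflict_line_indices)."""
    merged, conflicts = [], []
    for i, base_line in enumerate(base):
        a, b = ours[i], theirs[i]
        if a == b:                    # identical on both sides
            merged.append(a)
        elif a == base_line:          # only 'theirs' changed this line
            merged.append(b)
        elif b == base_line:          # only 'ours' changed this line
            merged.append(a)
        else:                         # both changed the same line
            merged.append(a)
            conflicts.append(i)
    return merged, conflicts

base   = ["x = 1", "y = 2", "z = 3"]
ours   = ["x = 1", "y = 20", "z = 3"]     # developer A edits line 2
theirs = ["x = 10", "y = 2", "z = 3"]     # developer B edits line 1
merged, conflicts = three_way_merge(base, ours, theirs)
print(merged)      # ['x = 10', 'y = 20', 'z = 3']
print(conflicts)   # []
```

Under file locking this situation cannot arise at all (the second developer is blocked from writing); under version merging it succeeds automatically here precisely because the two edits touch different lines.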

Integration
Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes. Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse and Visual Studio. NetBeans IDE and Xcode come with integrated version control support.

Common vocabulary
Terminology can vary from system to system, but some terms in common usage include:

Baseline: An approved revision of a document or source file from which subsequent changes can be made. See baselines, labels and tags.
Branch: A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways, independently of each other.
Change: A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems.
Change list: On many version control systems with atomic multi-change commits, a changelist, change set, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source "as of" any particular changelist ID.
Checkout: A check-out (or co) is the act of creating a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term "checkout" can also be used as a noun to describe the working copy.
Commit: A commit (checkin, ci or, more rarely, submit or record) is the action of writing or merging the changes made in the working copy back to the repository. The terms "commit" and "checkin" can also be used in noun form to describe the new revision that is created as a result of committing.
Conflict: A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other.
Delta compression: Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files.
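The delta-compression idea — storing only the differences between successive versions, yet being able to reconstruct any version in full — can be sketched with Python's standard `difflib` module. This is an illustration of the concept, not how any particular VCS actually encodes its deltas:

```python
# Store only the line-by-line difference between two versions, and
# reconstruct the newer version from that stored delta.

import difflib

def make_delta(old, new):
    """The delta is the ndiff of the two line lists."""
    return list(difflib.ndiff(old, new))

def apply_delta(delta):
    """Reconstruct the newer version; restore(..., 2) picks the 'new' side."""
    return list(difflib.restore(delta, 2))

v1 = ["alpha\n", "beta\n", "gamma\n"]
v2 = ["alpha\n", "BETA\n", "gamma\n", "delta\n"]

delta = make_delta(v1, v2)
print(apply_delta(delta) == v2)  # True: the full file is recoverable
```

For large files that change a little between revisions, the stored delta is far smaller than a full copy, which is exactly the storage win the definition above describes; `difflib.restore(delta, 1)` would recover the older side instead.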

Dynamic stream: A stream in which some or all file versions are mirrors of the parent stream's versions.
Export: Exporting is the act of obtaining the files from the repository. It is similar to checking out, except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example.
Head: Also sometimes called tip; this refers to the most recent commit.
Import: Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time.
Label: See tag.
Mainline: Similar to trunk, but there can be a mainline for each branch.
Merge: A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows:
• A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users.
• A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved).
• A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk.
• A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch.
Promote: The act of copying file content from a less controlled location into a more controlled location, for example from a user's workspace into a repository.
Repository: The repository is where files' current and historical data are stored, often on a server. Sometimes also called a depot (for example, by SVK, AccuRev and Perforce).
Resolve: The act of user intervention to address a conflict between different changes to the same document.
Reverse integration: The process of merging different team branches into the main trunk of the versioning system.
Revision: Also version: a version is any change in form. In SVK, a Revision is the state at a point in time of the entire tree in the repository.
Ring: See tag.
Share: The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in the other branches.
Stream: A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream.
Tag: A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags.
Trunk: The unique line of development that is not a branch (sometimes also called Baseline or Mainline).
Update: An update (or sync) merges changes made in the repository (by other people, for example) into the local working copy.
Working copy: The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.

3. Discuss the SCM Process.

Traditional Software Configuration Management Process
The traditional SCM process is looked upon as the best-fit solution to handling changes in software projects. It identifies the functional and physical attributes of a software system at various points in time, and performs systematic control of changes to the identified attributes, for the purpose of maintaining software integrity and traceability throughout the software development life cycle.

The SCM process further defines the need to trace changes, and the ability to verify that the final delivered software has all of the planned enhancements that are supposed to be part of the release. The traditional SCM process identifies four procedures that must be defined for each software project to ensure that a good SCM process is implemented. They are:
• Configuration Identification
• Configuration Control
• Configuration Status Accounting
• Configuration Authentication
Most of this section will cover traditional SCM theory. Do not consider this a boring subject, since this section defines and explains the terms that will be used throughout this document.

3.1. Configuration Identification
Software is usually made up of several programs. Each program, its related documentation and data can be called a "configurable item" (CI). The end product is made up of a bunch of CIs. The number of CIs in any software project, and the grouping of artifacts that make up a CI, is a decision made by the project. The status of the CIs at a given point in time is called a baseline. The baseline serves as a reference point in the software development life cycle. Each new baseline is the sum total of an older baseline plus a series of approved changes made on the CIs.

A baseline is considered to have the following attributes:
1. Functionally complete: A baseline will have a defined functionality. The features and functions of this particular baseline will be documented and available for reference. Thus the capabilities of the software at a particular baseline are well known.
2. Known quality: The quality of a baseline will be well defined, i.e. all known bugs will be documented, and the software will have undergone a complete round of testing before being defined as the baseline.
3. Immutable and completely recreatable:
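The relationship "new baseline = old baseline + approved changes" can be sketched very directly. All names below (CI names, version numbers) are invented for illustration:

```python
# A baseline modeled as a snapshot mapping each CI to its version.
# A new baseline carries over every CI from the previous baseline and
# applies only the approved changes; the old baseline is never mutated.

def new_baseline(previous, approved_changes):
    """previous: {ci_name: version}; approved_changes: {ci_name: new_version}."""
    baseline = dict(previous)        # copy: baselines are immutable
    baseline.update(approved_changes)
    return baseline

baseline_1 = {"payroll.c": 3, "payroll.h": 1, "user_manual.doc": 2}
baseline_2 = new_baseline(baseline_1, {"payroll.c": 4})

print(baseline_2["payroll.c"])   # 4
print(baseline_1["payroll.c"])   # 3  (the old baseline is unchanged)
```

Because the old dictionary is copied rather than mutated, any earlier baseline can still be consulted (or recreated) exactly — which is attribute 3 of a baseline, discussed next.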

A baseline, once defined, cannot be changed. The list of the CIs and their versions is set in stone. Also, all the CIs will be under version control, so the baseline can be recreated at any point in time.

3.2. Configuration Control
The process of deciding and co-ordinating the approved changes for the proposed CIs, and implementing the changes on the appropriate baseline, is called configuration control. It should be kept in mind that configuration control only addresses the process after changes are approved. The act of evaluating and approving changes to software comes under the purview of an entirely different process, called change control.

3.3. Configuration Status Accounting
Configuration status accounting is the bookkeeping process of each release. This procedure involves tracking what is in each version of the software and the changes that led to that version. Configuration status accounting keeps a record of all the changes made to the previous baseline to reach the new baseline.

3.4. Configuration Authentication
Configuration authentication (CA) is the process of assuring that the new baseline has all the planned and approved changes incorporated. Configuration authentication is an audit performed on the delivery before it is opened to the entire world. The process involves verifying that all the functional aspects of the software are complete, and also the completeness of the delivery in terms of the right programs, documentation and data being delivered.

3.5. Tools that aid Software Configuration Management
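The configuration-authentication audit described above can be sketched as a comparison between the planned changes and what actually landed in the new baseline. This is a hypothetical illustration; field names are assumptions:

```python
# Audit a new baseline: every approved change must be present, and no
# CI may have changed without an approved change behind it.

def authenticate(old_baseline, new_baseline, approved_changes):
    """Return (missing_approved_changes, unapproved_changes)."""
    missing = [(ci, version) for ci, version in approved_changes.items()
               if new_baseline.get(ci) != version]
    unapproved = [ci for ci in new_baseline
                  if new_baseline[ci] != old_baseline.get(ci)
                  and ci not in approved_changes]
    return missing, unapproved

old = {"app.c": 3, "app.h": 1}
new = {"app.c": 4, "app.h": 2}
approved = {"app.c": 4}
missing, unapproved = authenticate(old, new, approved)
print(missing)     # []
print(unapproved)  # ['app.h']  - changed without an approved change request
```

A clean audit returns two empty lists; anything else means either a planned enhancement is missing from the release, or a change slipped in outside the change-control process.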

Free Software Tools
Free software tools that help in SCM are:
1. Concurrent Versions System (CVS)
2. Revision Control System (RCS)
3. Source Code Control System (SCCS)

Commercial Tools
1. Rational ClearCase
2. Microsoft Visual SourceSafe
3. PVCS

3.6. SCM and SEI Capability Maturity Model
The Capability Maturity Model (CMM) defined by the Software Engineering Institute (SEI) for Software describes the principles and practices needed to achieve a certain level of software process maturity. The model is intended to help software organizations improve the maturity of their software processes along an evolutionary path, from ad hoc, chaotic processes to mature, disciplined software processes. The CMM is aimed at helping organizations improve their software processes for building better software faster and at a lower cost. The SEI defines five levels of maturity of a software development process. They are denoted pictorially below.

Associated with each level, from level two onwards, are key areas on which an organization is required to focus in order to move on to the next level. Such focus areas are called Key Process Areas (KPAs) in CMM parlance. As part of level 2 maturity, one of the KPAs that has been identified is SCM.

4. Explain: i. Software doesn't Wear Out; ii. Software is Engineered & not Manufactured.

In 1970, less than 1% of the public could have intelligently described what "computer software" meant. Today, most professionals and many members of the public at large feel that they understand software. But do they? A textbook description of software might take the following form: Software is (1) instructions (computer programs) that when executed provide desired function and performance, (2) data structures that enable the programs to adequately manipulate information, and (3) documents that describe the operation and use of the programs. There is no question that other, more complete definitions could be offered. But we need more than a formal definition.

Software Characteristics

To gain an understanding of software, it is important to examine the characteristics of software that make it different from other things that human beings build. Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware:

1. Software is developed or engineered; it is not manufactured in the classical sense.
Although some similarities exist between software development and hardware manufacture, the two activities are fundamentally different. In both activities, high quality is achieved through good design, but the manufacturing phase for hardware can introduce quality problems that are nonexistent (or easily corrected) for software. Both activities are dependent on people, but the relationship between people applied and work accomplished is entirely different. Both activities require the construction of a "product", but the approaches are different. When hardware is built, the human creative process (analysis, design, construction, testing) is ultimately translated into a physical form. If we build a new computer, our initial sketches, formal design drawings, and breadboarded prototype evolve into a physical product (chips, circuit boards, power supplies, etc.). Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

2. Software doesn't "wear out."

Bath tub curve
The figure above depicts failure rate as a function of time for hardware. The relationship, often called the "bath tub curve", indicates that hardware exhibits relatively high failure rates early in its life (these failures are often attributable to design or manufacturing defects); defects are corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period of time. As time passes, however, the failure rate rises again as hardware components suffer from the cumulative effects of dust, vibration, abuse, temperature extremes, and other environmental maladies. Stated simply, the hardware begins to wear out.

Software is not susceptible to the environmental maladies that cause hardware to wear out. In theory, therefore, the failure rate curve for software should take the form of the "idealized curve": undiscovered defects cause high failure rates early in the life of a program; these are then corrected (ideally, without introducing other errors) and the curve flattens. The implication is clear: software doesn't wear out. But it does deteriorate!
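The two curves can be sketched numerically. The exact formulas below are invented purely to give the curves the shapes described in the text (an early spike, then either a rise with age or a flat steady state); they are not empirical models:

```python
# Toy failure-rate curves: hardware's "bath tub" rises again with age
# (wear-out term), software's "idealized curve" flattens and stays flat.

import math

def hardware_failure_rate(t):
    """Bath tub: infant mortality + steady state + wear-out with age."""
    return math.exp(-t) + 0.1 + 0.001 * t**2

def software_failure_rate(t):
    """Idealized: early defects corrected, then a flat steady state."""
    return math.exp(-t) + 0.1

early, late = 0.5, 30.0
print(hardware_failure_rate(late) > hardware_failure_rate(early))  # True: wear-out
print(software_failure_rate(late) < software_failure_rate(early))  # True: flattens
```

Plotting both functions over time reproduces the figures the text refers to: both start high, but only the hardware curve climbs back up at the right-hand end.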

Idealized curve

ii. Software is Engineered & not Manufactured

3. Although the industry is moving towards component-based assembly, most software continues to be custom built.
Consider the manner in which the control hardware for a computer-based product is designed and built. The design engineer draws a simple schematic of the digital circuitry, does some fundamental analysis to assure that proper function will be achieved, and then goes to the shelf where catalogs of digital components exist. Each integrated circuit (called an IC or a chip) has a part number, a defined and validated function, a well-defined interface, and a standard set of integration guidelines. After each component is selected, it can be ordered off the shelf. As an engineering discipline evolves, a collection of standard design components is created. Standard screws and off-the-shelf integrated circuits are standard components used by engineers as they design new systems. The reusable components have been created so that the engineer can concentrate on the truly innovative elements of a design, that is, the parts of the design that represent something new. In the hardware world, component reuse is a natural part of the engineering process. In the software world, it is something that has only begun to be achieved on a broad scale.

The roadmap to building high-quality software products is the software process. A software process provides a framework for managing activities that can very easily get out of control. Different projects require different software processes. Software processes are adapted to meet the needs of software engineers and managers as they undertake the development of a software product. The software engineer's work products (programs, documentation, data) are produced as consequences of the activities defined by the software process. The best indicators of how well a software process has worked are the quality, timeliness, and long-term viability of the resulting software product.

Software Engineering
Software engineering encompasses a process, management techniques, technical methods, and the use of tools.

Generic Software Engineering Phases
Definition phase - focuses on what (information engineering, software project planning, requirements analysis).
Development phase - focuses on how (software design, code generation, software testing).
Support phase - focuses on change (corrective maintenance, adaptive maintenance, perfective maintenance, preventive maintenance).

Software Engineering Activities
Software project tracking and control
Formal technical reviews
Software quality assurance
Software configuration management
Document preparation and production
Reusability management
Measurement
Risk management

5. Explain the Advantages of Prototype Model & Spiral Model in Contrast to Waterfall Model.

Prototype Model Advantages
Creating software using the prototype model has its benefits. One of the key advantages of prototype-modeled software is the time frame of development. Instead of concentrating on documentation, more effort is placed in creating the actual software. This way, the actual software can be released in advance, reducing the man-hours needed to create it. Another advantage of prototype-modeled software is that the software is created using lots of user feedback. For every prototype created, users can give their honest opinion about the software. If something is unfavorable, it can be changed. Slowly, the program is created with the customer in mind. The work on prototype models can also be spread among several people, since there are practically no rigid stages of work in this model; everyone can work on the same thing at the same time. The work will be even faster and more efficient if developers collaborate on the status of a specific function and develop the necessary adjustments in time for the integration.

The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation, and Maintenance. In the unmodified "waterfall model", progress flows from the top to the bottom, like a waterfall.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The first known presentation describing the use of similar phases in software engineering was held by Herbert D. Benington at the Symposium on advanced programming methods for digital computers on 29 June 1956. This presentation was about the development of software for SAGE. In 1983 the paper was republished with a foreword by Benington pointing out that the process was not in fact performed in strict top-down fashion, but depended on a prototype.

The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, though Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model (Royce 1970). This, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used software practice.

7. Write a Note on Spiral Model.

The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine the advantages of top-down and bottom-up concepts. Also known as the spiral lifecycle model (or spiral development), it is a systems development method (SDM) used in information technology (IT). This model of development combines the features of the prototyping model and the waterfall model. The spiral model is intended for large, expensive and complicated projects. The spiral model was defined by Barry Boehm in his 1986 article "A Spiral Model of Software Development and Enhancement". This model was not the first model to discuss iterative development. The spiral model should not be confused with the Helical model of modern systems architecture, which uses a dynamic programming (mathematical, not software, programming) approach in order to optimise the system's architecture before design decisions are made by coders that would cause problems.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts with a design goal and ends with the client (who may be internal) reviewing the progress thus far. Analysis and engineering efforts are applied at each phase of the project, with an eye toward the end goal of the project.
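The iteration structure just described can be sketched as a loop: every cycle sets a goal, applies analysis and engineering effort, and ends with a client review before the next cycle starts. All names and phase labels below are illustrative assumptions:

```python
# Toy walk-through of spiral-model cycles: goal -> analysis ->
# engineering -> client review, repeated once per iteration.

def spiral(iteration_goals):
    """Run one analyse/engineer/review cycle per goal; return the log."""
    log = []
    for n, goal in enumerate(iteration_goals, start=1):
        log.append(f"analyse risks for '{goal}'")
        log.append(f"build and test '{goal}'")
        log.append(f"cycle {n}: client reviews '{goal}'")
    return log

log = spiral(["prototype UI", "core features", "full release"])
print(len(log))   # 9: three cycles of analyse/engineer/review
print(log[-1])    # cycle 3: client reviews 'full release'
```

Contrast this with the waterfall model above, where each activity occurs exactly once: in the spiral, risk analysis, engineering, and client review recur in every cycle, which is what makes the model suitable for large, risky projects.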