
1. Software Development Process


1. Introduction

Computers are becoming a key element in our daily lives. They now control all forms of monetary transactions, manufacturing, transportation, communication, defence systems, process control systems, and so on. By themselves, they are harmless pieces of hardware; it is the software that gives life to them. There are well defined processes, based on theoretical foundations, to ensure the reliability of hardware. There is no such theory for software development. At the same time, it is mandatory that software always behaves in a predictable manner, even in unforeseen circumstances. Hence there is a need to control its development through a well defined and systematic process. The old fashioned 'code & test' approach will not do any more. It may be good enough for 'toy' problems, but in real life, software is expected to solve enormously complex problems. Some aspects of real life software projects are:

a. Team effort: Any large development effort requires the services of a team of specialists. For example, the team could consist of domain experts, software design experts, coding specialists, testing experts, hardware specialists, etc. Each group concentrates on a specific aspect of the problem and designs a suitable solution. However, no group can work in isolation; there will be constant interaction among team members.

b. Methodology: Broadly, there are two types of methodologies, namely, 'procedure oriented methodologies' and 'object oriented methodologies'. Though theoretically either could be used in any given problem situation, one of them should be chosen in advance.

c. Documentation: Clear and unambiguous documentation of the artifacts of the development process is critical for the success of a software project. Oral communication and 'back of the envelope' designs are not sufficient. For example, documentation is necessary if client signoff is required at various stages of the process. Once developed, software lives for a long time.
During its life, it has to undergo a lot of changes. Without clear design specifications and well documented code, it will be impossible to make changes.

d. Planning: Since development takes place against a client's requirements, it is imperative that the whole effort is well planned to meet the schedule and cost constraints.

e. Quality assurance: Clients expect value for money. In addition to meeting the client's requirements, the software should also meet additional quality constraints. These could be in terms of performance, security, etc.

f. Lay user: Most of the time, these software packages will be used by non-computer-savvy users. Hence the software has to be highly robust.

g. Software tools: Documentation is important for the success of a software project, but it is a cumbersome task and many software practitioners balk at the prospect of it. Computer Aided Software Engineering (CASE) tools simplify the process of documentation.

h. Conformance to standards: We need to follow certain standards to ensure clear and unambiguous documentation: for example, IEEE standards for requirements specifications, design, etc. Sometimes, clients may specify the standards to be used.

i. Reuse: The development effort can be optimised by reusing well-tested components: for example, mathematical libraries, graphical user interface toolkits, EJBs, etc.

j. Non-developer maintenance: Software lives for a long time. The development team may not be available to maintain the package; some other team will have to ensure that the software continues to provide services.

k. Change management: Whenever a change has to be made, it is necessary to analyse its impact on various parts of the software. Imagine modifying the value of a global variable: every function that accesses the variable will be affected. Unless care is taken to minimise the impact, the software may not behave as expected.

l. Version control: Once changes are made to the software, it is important that the user gets the right version of the software. In case of failures, it should be possible to roll back to previous versions.

m. Subject to risks: Any large effort is subject to risks. The risks could be in terms of non-availability of skills, technology, inadequate resources, etc. It is necessary to constantly evaluate the risks and put risk mitigation measures in place.

2. Software Quality

The goal of any software development process is to produce high quality software. What is software quality? It has been variously defined as:

- Fitness for purpose
- Zero defects
- Conformability & dependability
- The ability of the software to meet the customer's stated and implied needs

Some of the important attributes that can be used to measure software quality are:

- Correctness: The software should meet the customer's needs
- Robustness: The software must always behave in an expected manner, even when unexpected inputs are given
- Usability: Ease of use; software with a graphical user interface is considered more user-friendly than software without one
- Portability: The ease with which the software can be moved from one platform to another
- Efficiency: Optimal resource (memory & execution time) utilisation
- Maintainability: The ease with which the software can be modified
- Reliability: The probability of the software giving consistent results over a period of time
- Flexibility: The ease with which the software can be adapted for use in different contexts
- Security: Prevention of unauthorised access
- Interoperability: The ability of the software to integrate with existing systems
- Performance: The ability of the software to deliver outputs within given constraints like time, accuracy, and memory usage

Correctness is the most important attribute: every piece of software must be correct. The other attributes may be present in varying degrees. For example, it is an expensive proposition to make software 100% reliable, and this is not required in all contexts. If the software is going to be used in life critical situations, then 100% reliability is mandatory. But, say, in a weather monitoring system, a little less reliability may be acceptable.

However, the final decision lies with the client. One should keep in mind that some of the above attributes conflict with each other: for example, portability and efficiency.

3. What is a Process

3.1 Exercise - 100% Inspection

Code and test. Repeatable and measurable: the process is repeatable because the procedure used for the first line is repeated for all lines, and can also be used for similar exercises. It is also measurable because one can stop the count at any point and find out how much work has been completed and how much more needs to be completed.

3.2 Process: Definition and Phases

A process is a series of definable, repeatable, and measurable tasks leading to a useful result. The benefits of a well defined process are numerous:

- It provides visibility into a project; visibility in turn aids timely mid-course corrections
- It helps developers to weed out faults at the point of introduction, which avoids the cascading of faults into later phases
- It helps to organise workflow and outputs to maximise resource utilisation
- It defines everybody's roles and responsibilities clearly; individual productivity increases due to specialisation, and at the same time the team's productivity increases due to coordination of activities

A good software development process should:

- View software development as a value added business activity and not merely as a technical activity
- Ensure that every product is checked to see if value addition has indeed taken place
- Safeguard against loss of value once the product is complete
- Provide management information for in-situ control of the process

To define such a process, the following steps need to be followed:

- Identify the phases of development and the tasks to be carried out in each phase
- Model the intra- and inter-phase transitions
- Use techniques to carry out the tasks
- Verify and validate each task and the results
- Exercise process and project management skills

The words 'verify' and 'validate' need some clarification. Verify means to check if the task has been executed correctly, while validate means to check if the correct task has been executed. In the context of software, the process of checking if an algorithm has been implemented correctly, is verification, while the process of checking if the result of the algorithm execution is the solution to the desired problem, is validation. The generic phases that are normally used in a software development process are:

- Analysis: In this phase user needs are gathered and converted into software requirements. For example, if the user need is to generate the trajectory of a missile, the software requirement is to solve the governing equations. This phase should answer the question: what is to be done to meet user needs?
- Design: This phase answers the question: how are the user needs to be met? With respect to the above example, design consists of deciding the algorithm to be used to solve the governing equations. The choice of the algorithm depends on design objectives like execution time, accuracy, etc. In this phase we determine the organisation of the various modules in the software system.
- Construction: Coding is the main activity in this phase.
- Testing: There are three categories of testing: unit testing, integration testing, and system testing. There are also two types of testing: black box testing and white box testing. Black box testing focuses on generating test cases based on the requirements, while white box testing focuses on generating test cases based on the internal logic of the various modules.
- Maintenance: Maintenance is the last stage of the software life cycle. After the product has been released, the maintenance phase keeps the software up to date with environment changes and changing user requirements. The earlier phases should be carried out so that the product is easily maintainable: the design should plan a structure that can be easily altered, and the code should be easy to read, understand, and change. Maintenance can only happen efficiently if the earlier phases are done properly.
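The distinction between black box and white box testing can be illustrated with a small sketch. The requirement and all names below are hypothetical, chosen only for illustration:

```python
# Black-box tests are derived from the stated requirement alone;
# they do not look at how the function is implemented.
# Hypothetical requirement: "return the grade for a mark out of 100:
# 'pass' for 50 and above, 'fail' below 50".

def grade(mark):
    return "pass" if mark >= 50 else "fail"

def test_grade_black_box():
    # Test cases chosen from the requirement: boundary and typical values.
    assert grade(50) == "pass"   # boundary value
    assert grade(49) == "fail"   # just below the boundary
    assert grade(100) == "pass"  # extreme valid input
    assert grade(0) == "fail"

test_grade_black_box()
```

A white box test, by contrast, would be designed after inspecting the branches inside `grade`, to ensure that every path through the code is exercised.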

4. Software Life Cycle Models

In practice, two types of software life cycle models are used: the sequential model and the iterative model.

4.1 Waterfall Model

The sequential model, also known as the waterfall model, represents the development process as a sequence of steps (phases): analysis, design, construction, testing, and implementation. It requires that a phase is complete before the next phase is started. Because of the explicit recognition of phases and sequencing, it helps in contract finalisation with reference to delivery and payment schedules. In practice it is difficult to use this model as it is, because of the uncertainty in software requirements. It is often difficult to envisage all the requirements a priori, and if a mistake in understanding the requirements is detected during the coding phase, the whole process has to be started all over again. Moreover, a working version of the software will not be available until late in the project life cycle. So, iteration both within a phase and across phases is a necessity.

4.2 Prototyping

Prototyping is discussed in the literature as a separate approach to software development. Prototyping, as the name suggests, requires that a working version of the software is built early in the project life. There are two types of prototyping models, namely the throw away prototype and the evolutionary prototype. The objective of the throw away prototyping model is to understand the requirements and solution methodologies better. The essence is speed; hence an ad-hoc and quick development approach, with no thought to quality, is resorted to. It is akin to 'code and test'. However, once the objective is met, the code is discarded and fresh development is started, ensuring that quality standards are met. Since the requirements are

now well understood, one could use the sequential approach. This model suffers from wastage of effort, in the sense that the developed code is discarded because it does not meet the quality standards. Evolutionary prototyping takes a different approach: the requirements are prioritised and the code is developed for the most important requirements first, always with an eye on quality. The software is continuously refined and expanded with feedback from the client. The chief advantage of prototyping is that the client gets a feel of the product early in the project life cycle. As can be seen, evolutionary prototyping is an iterative model. Such a model can be characterised by doing a little analysis, design, coding, and testing, and repeating the process till the product is complete.

4.3 Spiral Model

Barry Boehm has suggested another iterative model called the spiral model. It is more in the nature of a framework, which needs to be adapted to specific projects. It allows the best mix of other approaches and focuses on eliminating errors and unattractive alternatives early. An important feature of this model is the stress on risk analysis. Once the objectives, alternatives, and constraints for a phase are identified, the risks involved in carrying out the phase are evaluated, which is expected to result in a 'go, no go' decision. For evaluation purposes, one could use prototyping, simulations, etc. This model is best suited for projects which involve new technology development; risk analysis expertise is most critical for such projects.

4.4 ETVX Model

IBM introduced the ETVX model during the 80's to document their processes. 'E' stands for the entry criteria which must be satisfied before a set of tasks can be performed, 'T' is the set of tasks to be performed, 'V' stands for the verification & validation process to ensure that the right tasks are performed, and 'X' stands for the exit criteria or the outputs of the tasks.
If an activity fails the validation check, either corrective action is taken or rework is ordered. The ETVX model can be used in any development process: each phase in the process can be considered an activity and structured using the ETVX model. If required, the tasks can be further subdivided, and each subtask can be structured using the ETVX model as well.

[Figure: the ETVX model - Entry Criteria lead into Tasks, which lead to Exit Criteria; Verification and Validation applies throughout.]
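The ETVX structure can also be sketched in code. This is only an illustrative sketch; the function and parameter names below are ours and not part of any standard:

```python
# Minimal sketch of an ETVX-structured activity:
# E - entry criteria checked before work starts,
# T - the tasks themselves,
# V - verification & validation, with rework on failure,
# X - exit criteria producing the outputs.

def run_etvx_activity(entry_ok, tasks, valid, exit_outputs, max_rework=3):
    if not entry_ok():                        # E: entry criteria
        raise RuntimeError("entry criteria not satisfied")
    for _ in range(max_rework):
        results = [task() for task in tasks]  # T: perform the tasks
        if valid(results):                    # V: verification & validation
            return exit_outputs(results)      # X: exit criteria / outputs
    raise RuntimeError("rework limit reached without passing V&V")

# Example: a toy 'coding' activity with two tasks.
outputs = run_etvx_activity(
    entry_ok=lambda: True,                        # e.g. design document approved
    tasks=[lambda: "module_a", lambda: "module_b"],
    valid=lambda rs: len(rs) == 2,                # both modules produced
    exit_outputs=lambda rs: sorted(rs),
)
```

Because each subtask can itself be an ETVX activity, such a function could be nested to any depth, mirroring the model's recursive structure.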

4.5 Rational Unified Process Model

Among the modern process models, the Rational Unified Process (RUP), developed by Rational Software Corporation (a division of IBM since 2003), is noteworthy. It is an iterative software development process framework that captures many of the best practices of modern software development. RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. RUP is a specific implementation of the Unified Process. It is explained more fully in the module OOAD with UML.

Four Project Life Cycle Phases

RUP defines a project life cycle consisting of four phases. These phases allow the process to be presented at a high level in a similar way to how a 'waterfall'-styled project might be presented, although in essence the key to the process lies in the iterations of development that lie within all of the phases. Each phase has one key objective and a milestone at the end that denotes the objective being accomplished. The visualization of RUP phases and disciplines over time is referred to as the RUP hump chart.

I. Inception Phase

The primary objective is to scope the system adequately as a basis for validating initial costing and budgets. In this phase the business case is established, which includes the business context, success factors (expected revenue, market recognition, etc.), and a financial forecast. To complement the business case, a basic use case model, project plan, initial risk assessment, and project description (the core project requirements, constraints, and key features) are generated. After these are completed, the project is checked against the following criteria:

1. Stakeholder concurrence on scope definition and cost/schedule estimates.
2. Requirements understanding as evidenced by the fidelity of the primary use cases.
3. Credibility of the cost/schedule estimates, priorities, risks, and development process.
4. Depth and breadth of any architectural prototype that was developed.
5. Establishment of a baseline by which to compare actual expenditures versus planned expenditures.

If the project does not pass this milestone, called the Lifecycle Objective Milestone, it can either be cancelled or repeated after being redesigned to better meet the criteria.

II. Elaboration Phase

The primary objective is to mitigate the key risk items identified by analysis up to the end of this phase. The elaboration phase is where the project starts to take shape. In this phase the problem domain analysis is made and the architecture of the project gets its basic form. The outcome of the elaboration phase is:

1. A use-case model in which the use cases and the actors have been identified and most of the use-case descriptions are developed. The use-case model should be 80% complete.
2. A description of the software architecture in a software system development process.
3. An executable architecture that realizes architecturally significant use cases.
4. A revised business case and risk list, and a development plan for the overall project.

5. Prototypes that demonstrably mitigate each identified technical risk.
6. A preliminary user manual (optional).

This phase must pass the Lifecycle Architecture Milestone criteria, answering the following questions:

* Is the vision of the product stable?
* Is the architecture stable?
* Does the executable demonstration indicate that major risk elements have been addressed and resolved?
* Is the construction phase plan sufficiently detailed and accurate?
* Do all stakeholders agree that the current vision can be achieved using the current plan in the context of the current architecture?
* Is the actual vs. planned resource expenditure acceptable?

If the project cannot pass this milestone, there is still time for it to be cancelled or redesigned. However, after leaving this phase, the project transitions into a high-risk operation where changes are much more difficult and detrimental when made. The key domain analysis for the elaboration phase is the system architecture.

III. Construction Phase

The primary objective is to build the software system. In this phase, the main focus is on the development of components and other features of the system; this is the phase when the bulk of the coding takes place. In larger projects, several construction iterations may be developed in an effort to divide the use cases into manageable segments that produce demonstrable prototypes. This phase produces the first external release of the software. Its conclusion is marked by the Initial Operational Capability Milestone.

IV. Transition Phase

The primary objective is to 'transit' the system from development into production, making it available to and understood by the end user. The activities of this phase include training the end users and maintainers and beta testing the system to validate it against the end users' expectations. The product is also checked against the quality level set in the Inception phase.
If all objectives are met, the Product Release Milestone is reached and the development cycle is finished.

Six Best Practices

The Six Best Practices described in the Rational Unified Process form a paradigm in software engineering that lists six ideas to follow when designing any software project, to minimize faults and increase productivity.

I. Develop iteratively

It is best to know all requirements in advance; however, often this is not the case. Several software development processes exist that deal with providing a solution for how to minimize cost in terms of development phases.

II. Manage requirements

Always keep in mind the requirements set by the users.

III. Use components

Breaking down an advanced project is not only suggested but in fact unavoidable. This promotes the ability to test individual components before they are integrated into a larger system. Also, code reuse is a big plus and can be accomplished more easily through the use of object oriented programming.

IV. Model visually

Use diagrams to represent all major components, users, and their interactions. UML, short for Unified Modeling Language, is one tool that can be used to make this task more feasible.

V. Verify quality

Always make testing a major part of the project at every point in time. Testing becomes heavier as the project progresses, but it should be a constant factor in any software product creation.

VI. Control changes

Many projects are created by many teams, sometimes in various locations; different platforms may be used, etc. As a result it is essential to make sure that changes made to a system are synchronized and verified constantly.

4.6 Agile Methodologies

All the methodologies described before are based on the premise that any software development process should be predictable and repeatable. One of the criticisms against these methodologies is that they place more emphasis on following procedures and preparing documentation; they are considered to be heavyweight or rigorous. They are also criticised for their excessive emphasis on structure. There is a movement called the Agile Software Movement questioning this premise. The proponents argue that, software development being essentially a human activity, there will always be variations in processes and inputs, and the model should be flexible enough to handle these variations. For example, the entire set of software requirements cannot be known at the beginning of the project, nor do they remain static. If the model cannot handle this dynamism, then there can be a lot of wasted effort, or the final product may not meet the customer's needs.
Hence the agile methodologies advocate the principle "build short, build often". That is, the given project is broken up into subprojects, and each subproject is developed and integrated into the already delivered system. This way the customer gets continuous delivery of useful and usable systems. The subprojects are chosen so that they have short delivery cycles, usually of the order of 3 to 4 weeks. The development team also gets continuous feedback. A number of agile methodologies have been proposed. The more popular among them are SCRUM, Dynamic Systems Development Method (DSDM), Crystal Methods, Feature Driven Development, Lean Development (LD), and Extreme Programming (XP). A short description of each of these methods follows:

SCRUM: It is a project management framework. It divides the development into short cycles called sprint cycles, in which a specified set of features is delivered. It advocates daily team meetings for coordination and integration.

Dynamic Systems Development Method (DSDM): It is characterised by nine principles:

- Active user involvement
- Team empowerment
- Frequent delivery of products
- Fitness for business purpose
- Iterative and incremental development
- All changes during development are reversible
- Baselining of requirements at a high level
- Integrated testing
- Collaboration and cooperation between stakeholders

Crystal Methodologies: They are a set of configurable methodologies that focus on the people aspects of development. The configuration is carried out based on project size, criticality, and objectives. Some of the names used for the methodologies are Clear, Yellow, Orange, Orange Web, Red, etc.

Feature Driven Development (FDD): It is a short iteration framework for software development. It focuses on building an object model, building a feature list, planning by feature, designing by feature, and building by feature.

Lean Development (LD): This methodology is derived from the principles of lean production, the restructuring of the Japanese automobile manufacturing industry that occurred in the 1980s. It is based on the following principles of lean thinking: eliminate waste, amplify learning, decide as late as possible, deliver as fast as possible, empower the team, build in integrity, see the whole.

Extreme Programming (XP): This methodology is probably the most popular among the agile methodologies. It is based on three important principles, viz., test first, continuous refactoring, and pair programming. One of the important concepts popularised by XP is pair programming: code is always developed in pairs, with one person keying in the code while the other reviews it. The paper by Laurie Williams et al. demonstrates the efficacy of pair programming. The site agilealliance.com is dedicated to promoting agile software development methodologies.

5. How to Choose a Process

Among the plethora of available processes, how can we choose one? There is no single answer to this question. Probably the best way to attack this problem is to look at the software requirements.

If they are stable and well understood, then the waterfall model may be sufficient. If they are stable but not clear, then throw away prototyping can be used. Where the requirements are changing, evolutionary prototyping is better. If the requirements are coupled with underlying business processes which are themselves going through a process of change, then a model based on Boehm's spiral model, like the Rational Unified Process, should be used. In these days of dynamic business environments, where 'time to market' is critical and project size is relatively small, an agile process should be chosen.

These are guidelines only. Many organisations choose a model and adapt it to their business requirements. For example, some organisations use the waterfall model modified to include iterations within phases.

6. Conclusions

The most important takeaway from this module is that software development should follow a disciplined process. The choice of the process should depend upon the stability of the requirements, the completeness of the requirements, the underlying business processes, the organisational structure, and the prevailing business environment.

2. Analysis
1. Introduction

The objectives of this module are:

- To establish the importance and relevance of requirement specifications in software development
- To bring out the problems involved in specifying requirements
- To illustrate the use of modelling techniques to minimise problems in specifying requirements

Requirements can be defined as follows:

- A condition or capability needed by a user to solve a problem or achieve an objective.
- A condition or capability that must be met or possessed by a system to satisfy a contract, standard, specification, or other formally imposed document.

At a high level, requirements can be classified as user/client requirements and software requirements. Client requirements are usually stated in terms of business needs; software requirements specify what the software must do to meet those business needs. For example, a stores manager might state his requirements in terms of efficiency in stores management, while a bank manager might state his requirements in terms of the time taken to service his customers. It is the analyst's job to understand these requirements and provide an appropriate solution. To be able to do this, the analyst must understand the client's business domain: who all the stakeholders are, how they affect the system, what the constraints are, what the alterables are, etc. The analyst should not blindly assume that only a software solution will solve the client's problem; he should have a broader vision. Sometimes, re-engineering of the business processes may be all that is required to improve efficiency. After all this, if it is found that a software solution will add value, then a detailed statement

of what the software must do to meet the client's needs should be prepared. This document is called the Software Requirements Specification (SRS) document. Stating and understanding requirements is not an easy task. Let us look at a few examples:

"The counter value is picked up from the last record"
In the above statement, the word 'last' is ambiguous. It could mean the last accessed record, which could be anywhere in a random access file, or it could be physically the last record in the file.

"Calculate the inverse of a square matrix 'M' of size 'n' such that LM = ML = In, where 'L' is the inverse matrix and 'In' is the identity matrix of size 'n'"
This statement, though it appears to be complete, is missing the type of the matrix elements. Are they integers, real numbers, or complex numbers? Depending on the answer to this question, the algorithm will be different.

"The software should be highly user friendly"
How does one determine whether this requirement is satisfied or not?

"The output of the program shall usually be given within 10 seconds"
What are the exceptions to the 'usual 10 seconds' requirement?

The statement of requirements or SRS should possess the following properties:

- All requirements must be correct; there should be no factual errors.
- All requirements should have one interpretation only. We have seen a few examples of ambiguous statements above.
- The SRS should be complete in all respects. It is difficult to achieve this objective: many times clients change the requirements as the development progresses, or new requirements are added. The agile development methodologies are specifically designed to take this factor into account; they partition the requirements into subsets called scenarios, and each scenario is implemented separately. However, each scenario should be complete.
- All requirements must be verifiable, that is, it should be possible to verify whether a requirement is met or not. Words like 'highly' and 'usually' should not be used.
- All requirements must be consistent and non-conflicting.
- As we have stated earlier, requirements do change, so the format of the SRS should be such that changes can be easily incorporated.

2. Understanding Requirements

2.1 Functional and Non-Functional Requirements

Requirements can be classified into two types, namely functional requirements and non-functional requirements. Functional requirements specify what the system should do. Examples are:

- Calculate the compound interest at the rate of 14% per annum on a fixed deposit for a period of three years
- Calculate tax at the rate of 30% on an annual income equal to or above Rs.2,00,000 but less than Rs.3,00,000
- Invert a square matrix of real numbers (maximum size 100 x 100)
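The first functional requirement above is precise enough to be written directly as code. A minimal sketch follows; note that it assumes annual compounding, a detail the requirement itself should ideally state explicitly:

```python
# Functional requirement: compound interest at 14% per annum on a
# fixed deposit for three years (annual compounding assumed).

def fixed_deposit_maturity(principal, rate=0.14, years=3):
    """Maturity amount of the deposit after `years` years."""
    return principal * (1 + rate) ** years

def compound_interest(principal, rate=0.14, years=3):
    """Interest earned, i.e. maturity amount minus the principal."""
    return fixed_deposit_maturity(principal, rate, years) - principal
```

For instance, a deposit of Rs.1,00,000 matures to about Rs.1,48,154 after three years. Writing the requirement this way also exposes gaps (compounding frequency, rounding rules) that a good SRS would have to resolve.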

Non-functional requirements specify the overall quality attributes the system must satisfy. The following is a sample list of quality attributes: portability, reliability, performance, testability, modifiability, security, presentation, reusability, understandability, acceptance criteria, interoperability. Some examples of non-functional requirements are:

- The number of significant digits to which accuracy should be maintained in all numerical calculations is 10
- The response time of the system should always be less than 5 seconds
- The software should be developed using the C language on a UNIX based system
- A book can be deleted from the Library Management System by the Database Administrator only
- The matrix diagonalisation routine should zero out all off-diagonal elements which are equal to or less than 10^-3
- Experienced officers should be able to use all the system functions after a total training of two hours. After this training, the average number of errors made by experienced officers should not exceed two per day.
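Verifiable non-functional requirements, such as the 5-second response time above, can be turned into automated checks. The sketch below is illustrative only; `operation` stands for whichever system function is under test:

```python
import time

# Non-functional requirement: "the response time of the system should
# always be less than 5 seconds", expressed as an automated check.

def meets_response_time(operation, limit_seconds=5.0):
    """Return True if a single run of `operation` completes within the limit."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed < limit_seconds

# Example: a trivially fast operation easily meets the 5-second limit.
assert meets_response_time(lambda: sum(range(10_000)))
```

Note that one timed run verifies only one execution; since the requirement says "always", in practice the check would be run over a representative workload and under realistic load conditions.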

2.2 Other Classifications

Requirements can also be classified into the following categories:
Satisfiability
Criticality
Stability
User categories

Satisfiability: There are three types of satisfiability, namely, normal, expected, and exciting. Normal requirements are specific statements of user needs. The user satisfaction level is directly proportional to the extent to which these requirements are satisfied by the system. Expected requirements may not be stated by the users, but the developer is expected to meet them. If these requirements are met, the user satisfaction level may not increase, but if they are not met, users may be thoroughly dissatisfied. They are very important from the developer's point of view. Exciting requirements are not only not stated by the users, the users do not even expect them. But if the developer provides for them in the system, the user satisfaction level will be very high. The trend over the years has been that exciting requirements often become normal requirements, and some normal requirements become expected requirements. For example, as the story goes, the on-line help feature was first introduced in the UNIX system in the form of man pages. At that time, it was an exciting feature. Later, other users started demanding it as part of their systems. Nowadays, users do not ask for it, but the developer is expected to provide it.

[Figure: a diagram plotting user satisfaction (satisfaction to dissatisfaction) against the degree to which normal, expected, and exciting requirements are fulfilled or not fulfilled.]

Criticality: This is a form of prioritising the requirements. They can be classified as mandatory, desirable, and non-essential. This classification should be done in consultation with the users, and it helps in determining the focus in an iterative development model.

Stability: Requirements can also be categorised as stable and non-stable. Stable requirements don't change often, or at least the time period of change will be very long. Some requirements may change often. For example, if business process re-engineering is going on alongside the development, then the corresponding requirements may change till the process stabilises.

User categories: As was stated in the introduction, there will be many stakeholders in a system. Broadly, they are of two kinds: those who dictate the policies of the system and those who utilise the services of the system. All of them use the system. There can be further subdivisions among these classes depending on the information needs and services required. It is important that all stakeholders are identified and their requirements are captured.

3. Design
1. Introduction to Design

1.1 Introduction

Design is an iterative process of transforming the requirements specification into a design specification. Consider an example where Mrs. & Mr. XYZ want a new house. Their requirements include:
A room for two children to play and sleep
A room for Mrs. & Mr. XYZ to sleep
A room for cooking
A room for dining
A room for general activities
and so on. An architect takes these requirements and designs a house. The architectural design specifies a particular solution. In fact, the architect may produce several designs to meet these requirements. For example, one design may maximise the children's room, while another minimises it to allow a larger living room. In addition, the style of the proposed houses may differ: traditional, modern, or two-storied. All of the proposed designs solve the problem, and there may not be a "best" design.

Software design can be viewed in the same way. We use the requirements specification to define the problem and transform this into a solution that satisfies all the requirements in the specification. Some definitions of design:
"Devising artifacts to attain goals" [H.A. Simon, 1981].
"The process of defining the architecture, components, interfaces and other characteristics of a system or component" [IEEE 610.12].
"The process of applying various techniques and principles for the purpose of defining a device, a process or a system in sufficient detail to permit its physical realization" [Webster Dictionary].

Without design, a system will be:
Unmanageable, since there is no concrete output until coding; it is therefore difficult to monitor and control.
Inflexible, since planning for long-term changes was not given due emphasis.
Unmaintainable, since standards and guidelines for design and construction are not used, and reusability is not considered. Poor design may result in tightly coupled modules with low cohesion. Loss of data integrity may also result.

Inefficient, due to possible data redundancy and untuned code.
Not portable to various hardware / software platforms.

Design is different from programming. Design brings out a representation for the program, not the program or any component of it. The difference is tabulated below.

Design:
Abstractions of operations and data ("what to do")
Establishes interfaces
Chooses between design alternatives; makes trade-offs w.r.t. constraints, etc.
Devises a representation of the program

Programming:
Devises algorithms and data representations
Considers run-time environments
Chooses functions and the syntax of the language
Constructs the program

1.2 Qualities of a Good Design

Functional: It is a very basic quality attribute. Any design solution should work, and should be constructable.
Efficiency: This can be measured through:
Run time (time taken to undertake the whole of a processing task or transaction)
Response time (time taken to respond to a request for information)
Throughput (no. of transactions / unit time)
Memory usage, size of executable, size of source, etc.

Flexibility: It is another basic and important attribute. The very purpose of doing design activities is to build systems that are modifiable in the event of any changes in the requirements.
Portability & Security: These are to be addressed during design, so that such needs are not "hard-coded" later.
Reliability: It tells the goodness of the design - how well it works successfully. (This is more important for real-time, mission-critical, and on-line systems.)
Economy: This can be achieved by identifying re-usable components.
Usability:

Usability is in terms of how the interfaces are designed (clarity, aesthetics, directness, forgiveness, user control, ergonomics, etc.) and how much time it takes to master the system.

1.3 Design Constraints

Typical design constraints are: budget, time, integration with other systems, skills, standards, and hardware and software platforms. Budget and time cannot be changed. The problems with respect to integrating with other systems (typically, a client may ask to use a proprietary database that he is already using) have to be studied, and solutions are to be found. 'Skills' is alterable (for example, by arranging appropriate training for the team). Mutually agreed upon standards have to be adhered to. Hardware and software platforms may remain a constraint. The designer tries to answer the "how" of the "what" raised during the requirements phase. As such, the solution proposed should be contemporary, and to that extent a designer should know what is happening in technology. Large, central computer systems with proprietary architectures are being replaced by distributed networks of low-cost computers in an open systems environment. We are moving away from conventional software development based on hand generation of code (COBOL, C) to integrated programming environments. Typical applications today are internet based.

1.4 Popular Design Methods

Popular design methods (Wasserman, 1995) include:
Modular decomposition: Based on assigning functions to components. It starts from the functions that are to be implemented and explains how each component will be organized and related to other components.
Event-oriented decomposition: Based on events that the system must handle. It starts with cataloging various states and then describes how transformations take place.
Object-oriented design: Based on objects and their interrelationships. It starts with object types and then explores object attributes and actions.
Structured Design uses modular decomposition.

2. High Level Design Activities

Broadly, High Level Design includes Architectural Design, Interface Design and Data Design.

2.1 Architectural Design

Shaw and Garlan (1996) suggest that software architecture is the first step in producing a software design. Architecture design associates the system capabilities with the system components (like modules) that will implement them. The architecture of a system is a comprehensive framework that describes its form and structure - its components and how they interact together. Generally, a complete architecture plan addresses the functions that the system provides, the hardware and network that are used to develop and operate it, and the software that is used to develop and operate it. An architecture style involves its components, connectors, and constraints on combining components. Shaw and Garlan (1996) describe seven architectural styles. Commonly used styles include:
Pipes and Filters
Call-and-return systems (main program / subprogram architecture)
Object-oriented systems
Layered systems
Data-centered systems
Distributed systems (Client/Server architecture)

In Pipes and Filters, each component (filter) reads streams of data on its inputs and produces streams of data on its outputs. Pipes are the connectors that transmit output from one filter to another, e.g. programs written in the Unix shell. In Call-and-return systems, the program structure decomposes function into a control hierarchy where a "main" program invokes (via procedure calls) a number of program components, which in turn may invoke still other components; e.g. a Structure Chart is a hierarchical representation of the main program and subprograms. In Object-oriented systems, a component is an encapsulation of data and the operations that must be applied to manipulate the data; communication and coordination between components is accomplished via message calls. In Layered systems, each layer provides service to the one outside it, and acts as a client to the layer inside it. They are arranged like an "onion ring", e.g. the ISO OSI model. Data-centered systems use repositories.
A repository includes a central data structure representing the current state, and a collection of independent components that operate on the central data store. In a traditional database, the transactions, in the form of an input stream, trigger process execution; a database system is the typical example. A popular form of distributed system architecture is the Client/Server, where a server system responds to requests for actions / services made by client systems. Clients access the server by remote procedure call. The following issues are also addressed during architecture design:
Security
Data Processing: Centralized / Distributed / Stand-alone
Audit Trails
Restart / Recovery
User Interface
Other software interfaces
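The Pipes and Filters style described above can be sketched in a few lines: each filter consumes the stream produced by the one before it, exactly as in a Unix shell pipeline such as `grep ERROR log | sort`. The filter functions and log data below are illustrative, not from the text:

```python
def pipeline(stream, *filters):
    # The "pipes": pass the output stream of each filter to the input of the next
    for f in filters:
        stream = f(stream)
    return stream

# Two example filters, analogous to 'grep ERROR' and 'sort'
def only_errors(lines):
    return (line for line in lines if "ERROR" in line)

def sort_lines(lines):
    return iter(sorted(lines))

log = ["ERROR disk full", "INFO started", "ERROR bad login"]
print(list(pipeline(log, only_errors, sort_lines)))
# ['ERROR bad login', 'ERROR disk full']
```

The appeal of the style is visible even in this toy: filters know nothing about each other, so they can be reordered and recombined freely.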

2.2 User Interface Design

The design of user interfaces draws heavily on the experience of the designer. Pressman (refer Chapter 15) presents a set of Human-Computer Interaction (HCI) design guidelines that will result in a "friendly," efficient interface. The three categories of HCI design guidelines are:

1. General interaction

Guidelines for general interaction often cross the boundary into information display, data entry and overall system control. They are, therefore, all-encompassing and are ignored at great risk. The following guidelines focus on general interaction.
Be consistent: Use a consistent format for menu selection, command input, data display and the myriad other functions that occur in a HCI.
Offer meaningful feedback: Provide the user with visual and auditory feedback to ensure that two-way communication (between user and interface) is established.
Ask for verification of any nontrivial destructive action: If a user requests the deletion of a file, indicates that substantial information is to be overwritten, or asks for the termination of a program, an "Are you sure ..." message should appear.
Permit easy reversal of most actions: UNDO or REVERSE functions have saved tens of thousands of end users from millions of hours of frustration. Reversal should be available in every interactive application.
Reduce the amount of information that must be memorized between actions: The user should not be expected to remember a list of numbers or names so that he or she can re-use them in a subsequent function. Memory load should be minimized.
Seek efficiency in dialog, motion, and thought: Keystrokes should be minimized, the distance a mouse must travel between picks should be considered in designing the screen layout, and the user should rarely encounter a situation where he or she asks, "Now what does this mean?"
Forgive mistakes: The system should protect itself from errors that might cause it to fail (defensive programming).
Categorize activities by function and organize screen geography accordingly: One of the key benefits of the pull-down menu is the ability to organize commands by type. In essence, the designer should strive for "cohesive" placement of commands and actions.
Provide help facilities that are context sensitive.
Use simple action verbs or short verb phrases to name commands: A lengthy command name is more difficult to recognize and recall. It may also take up unnecessary space in menu lists.
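The "permit easy reversal" guideline is commonly implemented with a history stack: a snapshot (or an inverse command) is recorded before each change so that UNDO can restore the previous state. A minimal sketch follows; the class name and its text-only editing model are illustrative assumptions:

```python
class UndoableEditor:
    def __init__(self):
        self.text = ""
        self._history = []                 # snapshots taken before each change

    def insert(self, s):
        self._history.append(self.text)    # record state so the action can be reversed
        self.text += s

    def undo(self):
        if self._history:                  # forgiving: undo with no history is a no-op
            self.text = self._history.pop()

editor = UndoableEditor()
editor.insert("Hello")
editor.insert(", world")
editor.undo()
print(editor.text)   # Hello
```

Real editors usually store inverse commands rather than full snapshots to bound memory use, but the principle - every action leaves behind what is needed to reverse it - is the same.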

2. Information Display

If information presented by the HCI is incomplete, ambiguous or unintelligible, the application will fail to satisfy the needs of the user. Information is "displayed" in many different ways: with text, pictures and sound; by placement, motion and size; using color, resolution, and even omission. The following guidelines focus on information display.
Display only information that is relevant to the current context: The user should not have to wade through extraneous data, menus and graphics to obtain information relevant to a specific system function.
Don't bury the user with data; use a presentation format that enables rapid assimilation of information: Graphs or charts should replace voluminous tables.
Use consistent labels, standard abbreviations, and predictable colors: The meaning of a display should be obvious without reference to some outside source of information.
Allow the user to maintain visual context: If computer graphics displays are scaled up and down, the original image should be displayed constantly (in reduced form at the corner of the display) so that the user understands the relative location of the portion of the image that is currently being viewed.
Produce meaningful error messages.
Use upper and lower case, indentation, and text grouping to aid in understanding: Much of the information imparted by a HCI is textual, yet the layout and form of the text have a significant impact on the ease with which information is assimilated by the user.
Use windows to compartmentalize different types of information: Windows enable the user to "keep" many different types of information within easy reach.
Use "analog" displays to represent information that is more easily assimilated with this form of representation: For example, a display of holding tank pressure in an oil refinery would have little impact if a numeric representation were used. However, if a thermometer-like display were used, vertical motion and color changes could be used to indicate dangerous pressure conditions. This would provide the user with both absolute and relative information.
Consider the available geography of the display screen and use it efficiently: When multiple windows are to be used, space should be available to show at least some portion of each. In addition, screen size (a system engineering issue) should be selected to accommodate the type of application that is to be implemented.

3. Data Input

Much of the user's time is spent picking commands, typing data and otherwise providing system input. In many applications, the keyboard remains the primary input medium, but the mouse, digitizer and even voice recognition systems are rapidly becoming effective alternatives. The following guidelines focus on data input:
Minimize the number of input actions required of the user: Reduce the amount of typing that is required. This can be accomplished by using the mouse to select from pre-defined sets of input, using a "sliding scale" to specify input data across a range of values, and using "macros" that enable a single keystroke to be transformed into a more complex collection of input data.
Maintain consistency between information display and data input: The visual characteristics of the display (e.g., text size, color, placement) should be carried over to the input domain.
Allow the user to customize the input: An expert user might decide to create custom commands or dispense with some types of warning messages and action verification. The HCI should allow this.
Interaction should be flexible but also tuned to the user's preferred mode of input: The user model will assist in determining which mode of input is preferred. A clerical worker might be very happy with keyboard input, while a manager might be more comfortable using a point-and-pick device such as a mouse.
Deactivate commands that are inappropriate in the context of current actions: This protects the user from attempting some action that could result in an error.
Let the user control the interactive flow: The user should be able to skip unnecessary actions, change the order of required actions (when possible in the context of an application), and recover from error conditions without exiting from the program.
Provide help to assist with all input actions.
Eliminate "Mickey Mouse" input: Do not require the user to specify units for engineering input (unless there may be ambiguity). Do not require the user to type ".00" for whole-number dollar amounts, provide default values whenever possible, and never require the user to enter information that can be acquired automatically or computed within the program.
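The last guideline (default values, not forcing the user to type ".00") can be sketched as a small input-normalising helper. The function name, the default, and the choice to return None on invalid input (so the caller re-prompts instead of crashing) are our assumptions:

```python
def read_amount(raw, default="0.00"):
    # Blank input falls back to the default instead of being treated as an error
    raw = raw.strip() or default
    try:
        # '25' is accepted as 25.00: the user need not type the '.00'
        return round(float(raw), 2)
    except ValueError:
        return None   # caller re-prompts; the mistake is forgiven, not fatal

print(read_amount("25"))    # 25.0
print(read_amount(""))      # 0.0
print(read_amount("oops"))  # None
```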

4. Reviews, Walkthroughs & Inspections


1. Formal Definitions

Quality Control (QC): A set of techniques designed to verify and validate the quality of work products and observe whether requirements are met.
Software Element: Every deliverable or in-process document produced or acquired during the Software Development Life Cycle (SDLC) is a software element.
Verification and validation (V&V) techniques:
Verification: Is the task done correctly?
Validation: Is the correct task done?
Static Testing: V&V performed on a software element without executing it.
Dynamic Testing: V&V performed by executing the software with pre-defined test cases.
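The dynamic-testing definition above can be made concrete: the software element is executed against pre-defined test cases, and the actual behaviour is compared with the expected behaviour. The leap-year function here is only an illustrative software element, not from the text:

```python
def is_leap_year(year):
    # The software element under test
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Dynamic testing: pre-defined test cases, each pairing an input with its expected output
test_cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
for year, expected in test_cases:
    assert is_leap_year(year) == expected, (year, expected)
print("all test cases passed")
```

Static testing of the same element would instead mean reading or mechanically analysing its source (in a review, or with a tool) without running it.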

2. Importance of Static Testing

The benefit is clear once you think about it: if you can find a problem in the requirements before it turns into a problem in the system, that will save time and money. The following statistics are mind-boggling (M.E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development", IBM Systems Journal, March 1976):
Systems Product: 67% of the total defects found during development were found by inspection.
Applications Product: 82% of all defects were found during inspection of design and code.
The following diagram of Fagan (Advances in Inspections, IEEE Transactions on Software Engineering) captures the importance of Static Testing. The lesson learned could be summarized in one sentence: spend a little extra earlier, or spend much more later.

The statistics and Fagan's diagram above emphasize the need for Static Testing. It is appropriate to state that not all static testing involves people sitting at a table looking at a document; sometimes automated tools can help. For C programmers, the lint program can help find potential bugs in programs. Java programmers can use tools like the JTest product to check their programs against a coding standard.

When should Static Testing start? To get value from static testing, we have to start at the right time. For example, reviewing the requirements after the programmers have finished coding the entire system may help testers design test cases. However, the significant return on the static testing investment is no longer available, as testers can't prevent bugs in code that's already written. For optimal returns, static testing should happen as soon as possible after the item to be tested has been created, while the assumptions and inspirations remain fresh in the creator's mind and none of the errors in the item have caused negative consequences in downstream processes.

Effective reviews involve the right people. Business domain experts must attend requirements reviews, system architects must attend design reviews, and expert programmers must attend code reviews. As testers, we can also be valuable participants, because we're good at spotting inconsistencies, vagueness, missing details, and the like. However, testers who attend review meetings do need to bring sufficient knowledge of the business domain, system architecture, and programming to each review. And everyone who attends a review, walkthrough or inspection should understand the basic ground rules of such events. The following diagram of Sommerville (Software Engineering, 6th Edition) communicates where Static Testing starts.
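In the same spirit as lint, a toy static check can be written with Python's ast module: the source is analysed without ever being executed. The check below - reporting names that are assigned but never read - is a deliberately simplified illustration, not a real lint replacement:

```python
import ast

def assigned_but_never_read(source):
    # Static testing: walk the parsed syntax tree; the analysed code itself never runs
    assigned, read = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                read.add(node.id)
    return assigned - read

print(assigned_but_never_read("x = 1\ny = x + 2\n"))   # {'y'}
```

A defect of this kind is caught before the program is ever run, which is exactly the economic argument of Fagan's data: the earlier a problem is found, the cheaper it is to fix.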

3. Reviews

IEEE classifies Static Testing under three broad categories: Reviews, Walkthroughs, and Inspections.

What is a Review? A meeting at which a software element is presented to project personnel, managers, users, customers or other interested parties for comment or approval. The software element can be project plans, the URS, the SRS, design documents, code, test plans, or the user manual.

What are the objectives of Reviews? To ensure that:
The software element conforms to its specifications.
The development of the software element is being done as per the plans, standards, and guidelines applicable to the project.
Changes to the software element are properly implemented and affect only those system areas identified by the change specification.

Reviews - Input:
A statement of objectives for the technical review
The software element being examined
The software project management plan
The current anomalies or issues list for the software product
Documented review procedures
The earlier review report, when applicable
A checklist of defects
Review team members should receive the review materials in advance and come prepared for the meeting.

Reviews - Meeting:
Examine the software element for adherence to specifications and standards.
Verify that changes to the software element are properly implemented and affect only the specified areas.
Record all deviations.
Assign responsibility for getting the issues resolved.
Review sessions are not expected to find solutions to the deviations. The areas of major concern, the status of previous feedback, and the review days utilized are also recorded. The review leader shall verify, later, that the action items assigned in the meeting are closed.

Reviews - Outputs:
A list of review findings
A list of resolved and unresolved issues found during the later re-verification
