
MC0071 - SOFTWARE ENGINEERING

SET-1

4. What about programming for reliability?

Modern static analyzers have been used to find hundreds of bugs in the Linux kernel and many large commercial applications without the false alarm rate seen in previous generation tools such as lint. Static analyzers find bugs in source code without the need for execution or test cases. They parse the source code and then perform dataflow analysis to find potential error cases on all possible paths through each function. No technique is a silver bullet, and static analysis is no exception. Static analysis is usually only practical for certain categories of errors and cannot replace functional testing. However, compared with traditional QA and testing techniques, static analysis provides a way to achieve much higher coverage of corner cases in the code. Compared with manual code review, static analysis is impartial, thorough, and cheap to perform regularly.

Inconsistent Assumptions

Many types of bugs are caused by making inconsistent assumptions. Consider Listing One from line 188 of linux-2.6.4/drivers/scsi/aic7xxx/aic7770_osm.c. This seems like a strange error report. Why would ahc_alloc() free the argument name? Looking at the implementation, you see Listing Two. Clearly, if this function returns NULL, then the argument name is supposed to be freed, but the caller either didn't know or forgot this facet of the interface. Why wasn't this found in testing? Probably because this code runs when a device driver is initialized, which is usually at boot time when malloc is unlikely to fail. Unfortunately, this is not always the case, and it is possible that a device would be initialized later (for example, with devices that can be hot-plugged).

Inconsistent assumptions are relatively easy to avoid when initially designing and writing code, but they are difficult to avoid when maintaining code. Most code changes are localized to single functions or files, but assumptions made about that code may be scattered throughout the code base. It is especially common to see inconsistent assumptions related to conditions at the exit of a loop, the join points at the end of if statements, the entry and exit points of functions, and error-handling code. Static analyzers have an advantage over people in that they are not fazed by complex control flow, they look for any path that has a contradiction of assumptions, and they have immediate access to information across multiple source files and functions. Human auditors can be easily confused by irrelevant details of the code.

For example, in Listing Three, an array-bounds error from a Linux MIDI device driver, outEndpoint is incremented in a loop that is looking for a particular condition to hold. After the loop, the code assumes that the condition held at some point in the loop. But this assumption might be false; otherwise, the first part of the loop condition, outEndpoint < 15, is a redundant test. That is, if that test can ever be false (why else would it be there?), then outEndpoint might be equal to 15 at the end of the loop, which means outEndpoint will be used to access data past the end of the array allocated on the stack.
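Since Listings One, Two and Three are not reproduced above, the following is a minimal illustrative sketch in C of the two bug patterns just described. The names (device_alloc, probe, select_endpoint) and details are hypothetical, not the actual aic7xxx or MIDI driver code; the bugs are deliberately left in and marked in comments.

#include <stdlib.h>
#include <string.h>

struct device {
    char *name;
};

/* The callee's (undocumented) assumption: on allocation failure it takes
 * ownership of `name` and frees it before returning NULL. */
struct device *device_alloc(char *name)
{
    struct device *dev = malloc(sizeof(*dev));
    if (dev == NULL) {
        free(name);          /* ownership of name taken on the failure path */
        return NULL;
    }
    dev->name = name;
    return dev;
}

/* A caller that does not know about that assumption. */
struct device *probe(const char *id)
{
    char *name = strdup(id);
    if (name == NULL)
        return NULL;

    struct device *dev = device_alloc(name);
    if (dev == NULL) {
        free(name);          /* BUG: double free; device_alloc already freed name */
        return NULL;
    }
    return dev;
}

/* Sketch of the loop-exit assumption behind the Listing Three report. */
void select_endpoint(const int usable[15])
{
    int data[15];
    int out = 0;

    while (out < 15 && !usable[out])
        out++;

    /* BUG: if no entry was usable, out == 15 here, and this write is
     * one element past the end of data[]. */
    data[out] = 1;
}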

Error-Handling Code

Developers typically program with the primary goal of implementing core functionality, while handling exceptional cases is often seen as a distraction. Furthermore, designing test cases that exercise error paths is difficult because it requires artificially inducing errors. Despite the difficulty of handling errors, it is critical to do so. Unfortunately, debugging code often passes as error-handling code, such as in Listing Four from a Linux SCSI device driver. The problem in this case is that the call to printk does not terminate execution or signal an error to the caller. Either the error check on line 918 is unnecessary, or the accesses on lines 920-922 will be out of bounds, leading to memory corruption. There are plenty of cases where half-hearted error checking potentially leads to serious errors. Code auditors should determine whether or not error checking is needed, and that it is done properly when required.

Even when error-handling code is in place, it is often riddled with bugs because it is difficult to create test cases that trigger error conditions. In Listing Five, memory is being leaked inside the Linux code that deals with IGMP, a protocol used for IP multicast. The function skb_alloc is a memory allocation function that returns a buffer. In line 281, you see that the memory leak occurs when the allocation succeeds (skb == NULL is false). In line 289, the function ip_route_output_key() is called. It doesn't matter what this function does; what matters is that it can't free or save a pointer to skb, because skb is not passed to it. In the case where the function returns false, skb is still allocated. In line 294, another condition is tested; if it is true, the function returns on line 296 without freeing the memory allocated in line 280. If this code can be triggered by an external network packet, it becomes a denial-of-service or remote-crash attack on machines with IGMP enabled.

The error just described probably occurred because the programmer simply forgot to free the memory. The widely used practice of having an error exit that performs cleanup would have prevented this error. Alternatively, arenas or pools could have prevented it by keeping memory associated with a particular task together, then freeing all of the memory allocated by the pool once the task is finished. In C++, the commonly used Resource Acquisition Is Initialization (RAII) idiom could prevent this sort of error by automating the cleanup task in the destructors of scoped objects. However, take care even when using RAII, because resources that must survive past the end of the scope must have their cleanup actions prevented. A common way of doing this is to provide a commit() method for the RAII object, but this presents the potential for an error to occur if a commit() call is forgotten. In short, be careful, because it is ultimately up to you to decide how long objects should survive.

3. Find out the reasons why reliability should always take precedence over efficiency.

The need for a means to objectively determine software reliability comes from the desire to apply the techniques of contemporary engineering fields to the development of software. That desire is a result of the common observation, by both lay persons and specialists, that computer software does not work the way it ought to.
In other words, software is seen to exhibit undesirable behaviour, up to and including outright failure, with consequences for the data which is processed, the machinery on which the software runs, and by extension the people and materials which those machines might negatively affect. The more critical the application of the software to economic and production processes, or to life-sustaining systems, the more important is the need to assess the software's reliability.

Regardless of the criticality of any single software application, it is also more and more frequently observed that software has penetrated deeply into almost every aspect of modern life through the technology we use. It is only expected that this infiltration will continue, along with an accompanying dependency on the software by the systems which maintain our society. As software becomes more and more crucial to the operation of the systems on which we depend, the argument goes, it only follows that the software should offer a concomitant level of dependability. In other words, the software should behave in the way it is intended, or even better, in the way it should.

A software quality factor is a non-functional requirement for a software program which is not called up by the customer's contract, but nevertheless is a desirable requirement which enhances the quality of the software program. Note that none of these factors are binary; that is, they are not "either you have it or you don't" traits. Rather, they are characteristics that one seeks to maximize in one's software to optimize its quality. So rather than asking whether a software product has factor x, ask instead the degree to which it does (or does not). Some software quality factors are listed here:

Understandability: Clarity of purpose. This goes further than just a statement of purpose; all of the design and user documentation must be clearly written so that it is easily understandable. This is obviously subjective in that the user context must be taken into account: for instance, if the software product is to be used by software engineers, it is not required to be understandable to the layman.

Completeness: Presence of all constituent parts, with each part fully developed. This means that if the code calls a subroutine from an external library, the software package must provide a reference to that library, and all required parameters must be passed. All required input data must also be available.

Conciseness: Minimization of excessive or redundant information or processing. This is important where memory capacity is limited, and it is generally considered good practice to keep lines of code to a minimum. It can be improved by replacing repeated functionality with one subroutine or function which achieves that functionality. It also applies to documents.

Portability: Ability to be run well and easily on multiple computer configurations. Portability can mean both between different hardware (such as running on a PC as well as a smartphone) and between different operating systems (such as running on both Mac OS X and GNU/Linux).

Consistency: Uniformity in notation, symbology, appearance, and terminology within itself.

Maintainability: Propensity to facilitate updates to satisfy new requirements. Thus the software product that is maintainable should be well documented, should not be complex, and should have spare capacity for memory, storage, processor utilization and other resources.

Testability: Disposition to support acceptance criteria and evaluation of performance. Such a characteristic must be built in during the design phase if the product is to be easily testable; a complex design leads to poor testability.

Usability: Convenience and practicality of use. This is affected by such things as the human-computer interface. The component of the software that has most impact on this is the user interface (UI), which for best usability is usually graphical (i.e. a GUI).

Reliability: Ability to be expected to perform its intended functions satisfactorily. This implies a time factor in that a reliable product is expected to perform correctly over a period of time. It also encompasses environmental considerations in that the product is required to perform correctly in whatever conditions it finds itself (sometimes termed robustness).

Efficiency: Fulfillment of purpose without waste of resources, such as memory, space, processor utilization, network bandwidth, time, etc.

Security: Ability to protect data against unauthorized access and to withstand malicious or inadvertent interference with its operations. Besides the presence of appropriate security mechanisms such as authentication, access control and encryption, security also implies resilience in the face of malicious, intelligent and adaptive attackers.

Time Example

There are two major differences between the hardware and software failure-rate curves. One difference is that in the last phase, software does not have an increasing failure rate as hardware does. In this phase, software is approaching obsolescence; there is no motivation for any upgrades or changes to the software. Therefore, the failure rate will not change. The second difference is that in the useful-life phase, software will experience a drastic increase in failure rate each time an upgrade is made. The failure rate levels off gradually, partly because of the defects found and fixed after the upgrades.
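To make the time factor in reliability concrete, the standard constant-failure-rate model from reliability engineering can be used. This is a textbook assumption, not something taken from the passage above, and it only applies within a phase where the failure rate is roughly flat:

    R(t) = e^{-\lambda t},    MTTF = 1/\lambda

where R(t) is the probability of failure-free operation up to time t and \lambda is the (assumed constant) failure rate. For example, with \lambda = 0.001 failures per hour, MTTF = 1000 hours and R(1000 hours) = e^{-1}, approximately 0.37.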

2. What are the limitations of the Linear Sequential Model?

The Linear Sequential Model is also known as the waterfall model, the classic life cycle, or the software life cycle. This is the first model ever formalized, and other process models are based on this approach to development. It suggests a systematic and sequential approach to the development of software that begins at the system level and progresses through analysis, design, coding, testing and support (maintenance). It insists that a phase cannot begin unless the previous phase is finished. Figure 1.3 shows this type of software process model. The waterfall model derives its name from the cascading effect from one phase to the next. In this model, each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase:

Analysis --> Design --> Coding --> Testing

The advantages of this model are:
- It is the first process model ever formulated.
- It provides a basis for other software process models.

The disadvantages of this model are:
- Real software projects rarely follow a strict sequential flow. In fact, it is very difficult to decide when one phase ends and the next begins.
- End-user involvement only occurs at the beginning (requirements engineering) and at the end (operations and maintenance).
- It does not address the fact that requirements may change during the software development project.
- End-users sometimes have difficulty stating all of their requirements, which delays the development of the software.

Figure 1.3: Linear Sequential Model

1. Find out the different types of software applications.

Programs that are designed to carry out certain tasks for computers are called software. The programs are written in special languages that use letters, numbers, or codes which the computer interprets (for example, many computer programs are written in Visual Basic, C++, and FORTRAN). A program can be system software, which controls the actual operations of the computer itself, such as DOS (Disk Operating System), Microsoft Windows, Linux, etc. System software tells the computer how to load, store and execute the application programs it uses.

Application software constitutes the actual programs which a company or individual may require. These application software programs tell the computer how to produce the information stored. Some samples of application software are: word processing software, electronic spreadsheet software, computer graphics software or database software.

Word Processing Software can be used to write letters, memos and documents. It provides the user with easy ways to add, delete, sort or change text on screen until it is suitable. It saves or prints the information. Word processing software prepares forms and printouts that typewriters formerly prepared. The more elaborate programs can correct spelling, change the text appearance, change margins and even relocate entire paragraphs in the editing stage. Word processing software is popular because of its quickness in printing and its disk storage capabilities. Some examples of such software are: WordPerfect, Microsoft Word, First Choice and WordStar.

Database Software had its origins in the record-keeping systems of bygone years. The need for worksheets used in classifying, calculating and summarizing has always been strong in the accounting and finance fields. Manual systems were replaced by punch-card equipment, which in turn has been superseded by computers. Database software allows the user to enter, retrieve and update data in an efficient manner. Information can be classified, sorted and produced as the reports needed for managing businesses. Examples of such software are: AccPac Plus, dBASE IV, Lotus Works.

Electronic Spreadsheet Software is used by people working with numbers, who can enter data and formulas so that the program can calculate or project results. With spreadsheet programs, the user can ask "what if" questions by changing data and recalculating. Spreadsheets are helpful for preparing production and sales reports. The data may be presented in rows, columns or tables. This kind of software is popular because of its time-saving advantage over manual calculations. Spreadsheets have aided in on-the-spot decision-making. Some examples are: Lotus 1-2-3, Excel.

Computer Graphics Software produces professional-looking documents containing both text and graphics. It can transform series of number values into charts and graphs for easier analysis or interpretation. Computer graphics software is used in the architectural, drafting and design industries. These programs present the data in graphic form (for example, as a pie chart) to aid in understanding statistics, trends, relationships and survey results. The use of clip art or graphs in line, bar, or circle format can provide useful charts which can even be color-enhanced. Some examples are: Corel Print House, Desktop Publishing, Ventura, and PageMaker.

In business, one may be faced with the question of what software to get. This depends on the requirements and on a study of the advantages and disadvantages of the available software. Canned software is pre-written, mass-market software, ready to use and available nationwide. Custom software is software programmed to the user's specifications by experienced programmers. This method is usually undertaken only when it is determined that the necessary software does not exist.

Computer Virus: A computer virus is a program that copies itself into other programs and spreads through multiple computer systems.

SET-2

2. What is object identification?

Object identification is used for uniquely identifying objects. There are two types of object identification:
1. Normal identification
2. Smart identification

There are four types of properties used in object identification:
1. Mandatory properties
2. Assistive properties
3. Base filter properties
4. Optional filter properties

Apart from the above properties, there is also an ordinal identifier.
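The following is a conceptual sketch in C of how a test tool might combine these property types to identify one object among many. It is not the implementation or API of any particular tool; the structure fields and function names are hypothetical and exist only to illustrate the roles of mandatory properties, an assistive property, and the ordinal identifier.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct ui_object {
    const char *class_name;   /* e.g. "WebButton"  (mandatory property)  */
    const char *name;         /* e.g. "Submit"     (mandatory property)  */
    const char *html_id;      /* e.g. "btn-submit" (assistive property)  */
};

static bool matches_mandatory(const struct ui_object *o,
                              const char *class_name, const char *name)
{
    return strcmp(o->class_name, class_name) == 0 &&
           strcmp(o->name, name) == 0;
}

/* Identify an object: mandatory properties are always checked, the assistive
 * property is consulted only when supplied, and the ordinal identifier
 * (a 0-based index among matching objects) breaks any remaining ties. */
const struct ui_object *identify(const struct ui_object *objs, size_t n,
                                 const char *class_name, const char *name,
                                 const char *html_id, size_t ordinal)
{
    const struct ui_object *found = NULL;
    size_t matches = 0;

    for (size_t i = 0; i < n; i++) {
        if (!matches_mandatory(&objs[i], class_name, name))
            continue;
        if (html_id != NULL && strcmp(objs[i].html_id, html_id) != 0)
            continue;
        if (matches == ordinal)
            found = &objs[i];
        matches++;
    }
    return found;   /* NULL when nothing (or no object at that ordinal) matched */
}

For example, identify(buttons, count, "WebButton", "Submit", NULL, 1) would pick the second "Submit" button on a page when the mandatory properties alone are ambiguous, mirroring the way an ordinal identifier supplements the property-based description.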
