Set - 1 (60 Marks)
Answer all questions. 10 x 6 = 60
Ques 1: Discuss the impact of the "information era".
The Information Age, also commonly known as the Computer Age or Information Era, is the idea that the current age is characterized by the ability of individuals to transfer information freely and to have instant access to knowledge that would previously have been difficult or impossible to find. The idea is linked to the concept of a Digital Age or Digital Revolution, and carries the ramifications of a shift from the traditional industry that the Industrial Revolution brought through industrialization to an economy based around the manipulation of information. The period is generally said to have begun in the latter half of the 20th century, though the particular date varies. Since the invention of social media in the early 21st century, some have claimed that the Information Age has evolved into the Attention Age. The term has been widely used since the late 1980s and into the 21st century.

The Internet
The Internet was originally conceived as a distributed, fail-proof network that could connect computers together and be resistant to any one point of failure; the Internet cannot be totally destroyed in one event, and if large areas are disabled, the information is easily re-routed. It was created mainly by ARPA; its initial software applications were email and computer file transfer. It was with the invention of the World Wide Web in 1989 that the Internet truly became a global network. Today the Internet has become the ultimate platform for accelerating the flow of information and is the fastest-growing form of media.

Progression
In 1956 in the United States, researchers noticed that the number of people holding "white collar" jobs had just exceeded the number of people holding "blue collar" jobs. These researchers realized that this was an important change, as it was clear that the Industrial Age was coming to an end. As the Industrial Age ended, the newer times adopted the title of "the Information Age".
At that time, relatively few jobs had much to do with computers and computer-related technology. There was a steady trend away from people holding Industrial Age manufacturing jobs, while an increasing number of people held jobs as clerks in stores, office workers, teachers, nurses, etc. The Western world was shifting into a service economy.
Eventually, Information and Communication Technology (computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools) became a significant part of the economy. Microcomputers were developed, and many businesses and industries were greatly changed by ICT. Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital. His book discusses similarities and differences between products made of atoms and products made of bits. In essence, one can very cheaply and quickly make a copy of a product made of bits and ship it across the country or around the world both quickly and at very low cost. Thus, the term "Information Age" is often applied in relation to the use of cell phones, digital music, high-definition television, digital cameras, the Internet, computer games, and other relatively new products and services that have come into widespread use.
Ques 2: Explain whether the linear sequential model of the software process is an accurate reflection of software development activities or not.
Linear Sequential Model
It is also called the "Classic Life Cycle", the "Waterfall" model, or the "Software Life Cycle". It suggests a systematic and sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. The waterfall model derives its name from the cascading effect from one phase to the next. In this model each phase has a well-defined starting and ending point, with identifiable deliverables to the next phase:

Analysis --> Design --> Coding --> Testing

Advantages
It is a simple and desirable approach when the requirements are clear and well understood at the beginning.
It provides a clear-cut template for analysis, design, coding, testing, and support.
It is an enforced, disciplined approach.

Disadvantages
It is difficult for the customers to state the requirements clearly at the beginning; there is always a certain degree of natural uncertainty at the beginning of each project.
The customer can see the working version only at the end. Thus any changes suggested at that point are not only difficult to incorporate but also expensive.
It is difficult and costlier to change when the changes occur at later stages. This may result in disaster if any undetected problems are precipitated to this stage.

Ques 3: Why is it inappropriate to use reliability metrics, which were developed for hardware systems, in estimating software system reliability? Illustrate your answer with an example.

Since software reliability is one of the most important aspects of software quality, reliability engineering approaches are practiced in the software field as well, but how to quantify software reliability still remains largely unsolved. Software Reliability Engineering (SRE) is the quantitative study of the operational behavior of software-based systems with respect to user requirements concerning reliability.

Software Reliability Models
A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. Over 200 models have been developed since the early 1970s, yet none of the models can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Most software models contain the following parts: assumptions, factors, and a mathematical function that relates reliability to the factors. The mathematical function is usually a higher-order exponential or logarithmic function. No model is complete or even representative. As many models as there are, and many more emerging, there is no single model that can be used in all situations. One model may work well for a certain set of software but may be completely off track for other kinds of problems. Software modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling. Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing it with statistical inference. The major difference between the two kinds of models is shown in Table 1.
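To make the hardware assumption concrete: classical hardware reliability metrics assume a roughly constant failure rate lambda (after burn-in), giving R(t) = e^(-lambda*t). The sketch below is illustrative only; the failure-rate and time values are hypothetical, and the point is that the constant-rate assumption behind the formula does not hold for software, which does not wear out and whose failure rate falls as defects are repaired.

```python
import math

def hardware_reliability(failure_rate: float, t: float) -> float:
    """Classic hardware model: a constant failure rate lambda gives
    R(t) = exp(-lambda * t). The assumption is physical wear-out and
    roughly independent fault arrivals over time."""
    return math.exp(-failure_rate * t)

# For software the same formula is misleading: software does not wear
# out, and its failure rate typically *drops* as defects are found and
# fixed, so a constant-lambda model mis-states software reliability.
r = hardware_reliability(0.001, 100)   # lambda = 0.001 failures/hr, t = 100 hr
print(round(r, 4))                     # exp(-0.1) ~ 0.9048
```

This is why hardware-style metrics transplant poorly: the exponential model's single parameter encodes a steady-state physical failure process that software simply does not have.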
Table 1. Difference between software reliability prediction models and software reliability estimation models

Representative prediction models include Musa's Execution Time Model, Putnam's Model, and Rome Laboratory models TR-92-51 and TR-92-15. Using prediction models, software reliability can be predicted early in the development phase and enhancements can be initiated to improve the reliability. Representative estimation models include exponential distribution models, the Weibull distribution model, Thompson and Chelson's model, etc. Exponential models and the Weibull distribution model are usually named as classical fault count/fault rate estimation models, while Thompson and Chelson's model belongs to Bayesian fault rate estimation models.

The field has matured to the point that software models can be applied in practical situations and give meaningful results and, second, that there is no one model that is best in all situations. [Lyu95] Because of the complexity of software, any model has to have extra assumptions, and only limited factors can be put into consideration. Most software reliability models ignore the software development process and focus on the results: the observed faults and/or failures. By doing so, complexity is reduced and abstraction is achieved, but the modeling results cannot be blindly believed and applied. Therefore, we have to carefully choose the right model that suits our specific case. Furthermore, the models tend to specialize, applying only to a portion of the situations and a certain class of the problems.

Software Reliability Metrics
Measurement is commonplace in other engineering fields, but not in software engineering. Though frustrating, the quest to quantify software reliability has never ceased. Until now, we still have no good way of measuring software reliability. Measuring software reliability remains a difficult problem because we don't have a good understanding of the nature of software: there is no clear definition of what aspects are related to software reliability, and we cannot find a suitable way to measure software reliability or most of the aspects related to it. Even the most obvious product metrics, such as software size, have no uniform definition. If we cannot measure reliability directly, it is tempting to measure something related to reliability that reflects its characteristics. The current practices of software reliability measurement can be divided into four categories:

Product metrics
Software size is thought to be reflective of complexity. Lines Of Code (LOC), or LOC in thousands (KLOC), is an intuitive initial approach to measuring software size, but there is no standard way of counting. Typically, source code is used (SLOC, KSLOC), and comments and other non-executable statements are not counted. This method cannot faithfully compare software not written in the same language, and the advent of new technologies of code reuse and code generation also casts doubt on this simple method.

Function point metric is a method of measuring the functionality of a proposed software development based upon a count of inputs, outputs, master files, inquiries, and interfaces. The method can be used to estimate the size of a software system, and from that the development effort and reliability, as soon as these functions can be identified. It is a measure of the functional complexity of the program. It measures the functionality delivered to the user and is independent of the programming language. It is used primarily for business systems; it is not proven in scientific or real-time applications.

Complexity-oriented metrics is a method of determining the complexity of a program's control structure by simplifying the code into a graphical representation. Complexity is directly related to software reliability, so representing complexity is important. A representative metric is McCabe's Complexity Metric.

Test coverage metrics are a way of estimating fault and reliability by performing tests on software products, based on the assumption that software reliability is a function of the portion of software that has been successfully verified or tested.

Process metrics
Based on the assumption that the quality of the product is a direct function of the process, process metrics can be used to estimate, monitor, and improve the reliability and quality of software. ISO-9000 certification, or "quality management standards", is the generic reference for a family of standards developed by the International Standards Organization (ISO).

Project management metrics
Researchers have realized that good management can result in better products. Research has demonstrated that a relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using a better development process, risk management process, configuration management process, etc.

Fault and failure metrics
The goal of collecting fault and failure metrics is to be able to determine when the software is approaching failure-free execution. Minimally, both the number of faults found during testing (i.e., before delivery) and the failures (or other problems) reported by users after delivery are collected, summarized, and analyzed to achieve this goal. Test strategy is highly related to the effectiveness of fault metrics, because if the testing scenario does not cover the full functionality of the software, the software may pass all tests and yet be prone to failure once delivered. Usually, failure metrics are based upon customer information regarding failures found after release of the software. The failure data collected is therefore used to calculate failure density, Mean Time Between Failures (MTBF), or other parameters to measure or predict software reliability.

Software Reliability Improvement Techniques
Good engineering methods can largely improve software reliability. Before the deployment of software products, testing, verification, and validation are necessary steps. Software testing is heavily used to trigger, locate, and remove software defects. Software testing is still in its infant stage; testing is crafted to suit specific needs in various software development projects in an ad-hoc manner. Detailed discussion about various software testing methods can be found in the topic Software Testing. Various analysis tools such as trend analysis, fault-tree analysis, Orthogonal Defect Classification, and formal methods can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability.
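The fault and failure metrics discussed above are simple to compute once failure data has been collected. A minimal sketch follows; the failure timestamps and the KSLOC figure are hypothetical, chosen only to illustrate the arithmetic.

```python
def mtbf(failure_times):
    """Mean Time Between Failures from a sorted list of failure
    timestamps (e.g., hours of operation since release)."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

def failure_density(num_failures, ksloc):
    """Failures observed per thousand source lines of code."""
    return num_failures / ksloc

times = [100, 250, 475, 700, 1000]        # hypothetical field failure log (hours)
print(mtbf(times))                        # 900 hours of gaps / 4 gaps = 225.0
print(failure_density(len(times), 12.5))  # 5 failures / 12.5 KSLOC = 0.4
```

Both numbers are descriptive statistics over collected data; turning them into a prediction still requires choosing one of the reliability models discussed earlier.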
After deployment of the software product, field data can be gathered and analyzed to study the behavior of software defects. Fault tolerance and fault/failure forecasting techniques will be helpful techniques and guide rules to minimize fault occurrence or the impact of faults on the system.

Ques 4: Explain why it is necessary to design the system architecture before the specifications are written.

Systems development phases
The Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. Several Systems Development Life Cycle models exist. The oldest model, which was originally regarded as "the Systems Development Life Cycle", is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages generally follow the same basic steps, but many different waterfall methodologies give the steps different names, and the number of steps seems to vary between 4 and 7. There is no definitively correct Systems Development Life Cycle model, but the steps can be characterized and divided into several phases.

The SDLC can be divided into ten phases during which defined IT work products are created or modified. The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters. Not every project will require that the phases be sequentially executed; however, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.

Initiation/planning
To generate a high-level view of the intended project and determine the goals of the project. The feasibility study is sometimes used to present the project to upper management in an attempt to gain funding. Projects are typically evaluated in three areas of feasibility: economical, operational, and technical. The MIS is also a complement of those phases; it is used as a reference to keep the project on track and to evaluate the progress of the MIS team.

Requirements gathering and analysis
The goal of systems analysis is to determine where the problem is in an attempt to fix the system. This step involves breaking the system down into different pieces and drawing diagrams to analyze the situation: analyze project goals, break down the functions that need to be created, and attempt to engage users so that definite requirements can be defined. Requirements gathering sometimes requires individuals or teams from the client side as well as the service provider side to get detailed and accurate requirements. This phase is also called the analysis phase.

Design
In systems design, functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems. The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudo code, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.

Build or coding
Modular and subsystem programming code will be accomplished during this stage. Unit testing and module testing are done in this stage by the developers. This stage is intermingled with the next in that individual modules will need testing before integration into the main project. Code will be tested in every section.

Testing
The code is tested at various levels in software testing. Unit, system, and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. Types of testing include: data set testing, unit testing, integration testing, black box testing, white box testing, module testing, back-to-back testing, automation testing, user acceptance testing, and performance testing.

Operations and maintenance
The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

Systems development life cycle topics

Management and control
SDLC phases are related to management controls. The Systems Development Life Cycle (SDLC) phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains) and relate to the SDLC phases as shown in the figure.

To manage and control any SDLC initiative, each project will be required to establish some degree of a Work Breakdown Structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the "Project Description" section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy; the following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager.

Work breakdown structure organization
The upper section of the Work Breakdown Structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven Systems Development Life Cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and "tasks", as opposed to "activities", and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g., a document, decision, or analysis). A WBS task may rely on one or more activities (e.g., software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a Statement of Work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.
Complementary to SDLC
Complementary software development methods to the Systems Development Life Cycle (SDLC) are: Software Prototyping, Joint Applications Design (JAD), Rapid Application Development (RAD), Extreme Programming (XP; an extension of earlier work in Prototyping and RAD), Open Source Development, End-user development, and Object Oriented Programming. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.

Table: A comparison of the strengths and weaknesses of SDLC, prototyping, Joint Application Development, and implementation of CASE tools.

Strengths and weaknesses
Few people in the modern computing world would use a strict waterfall model for their Systems Development Life Cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile computing, but it is still a term widely in use in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves more to a structured environment. The disadvantages of using the SDLC methodology appear when there is a need for iterative development (e.g., web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing SDLC from a strength or weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed. It should not be assumed that just because the waterfall model is the oldest original SDLC model it is the most efficient system. At one time the model was beneficial mostly to the world of automating activities that were assigned to clerks and accountants.

Baselines in the SDLC
Baselines are an important part of the Systems Development Life Cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model. Each baseline is considered a milestone in the SDLC:
Functional Baseline: established after the conceptual design phase.
Allocated Baseline: established after the preliminary design phase.
Product Baseline: established after the detail design and development phase.
Updated Product Baseline: established after the production construction phase.
However, the world of technological evolution is demanding that systems have greater functionality that would assist help desk technicians/administrators or information technology specialists/analysts.

Ques 5: Discuss the difference between object oriented and function oriented design strategies.

Object-oriented design is the process of planning a system of interacting objects for the purpose of solving a software problem. It is one approach to software design. Object-oriented design is the discipline of defining the objects and their interactions to solve a problem that was identified and documented during object-oriented analysis. An object contains encapsulated data and procedures grouped together to represent an entity. The 'object interface', how the object can be interacted with, is also defined. An object-oriented program is described by the interaction of these objects. What follows is a description of the class-based subset of object-oriented design, which does not include object prototype-based approaches, where objects are not typically obtained by instancing classes but by cloning other (prototype) objects.

From a business perspective, object-oriented design refers to the objects that make up that business. For example, in a certain company, a business object can consist of people, equipment, vehicles, data files, and database tables.

Input (sources) for object-oriented design
The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input of object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Some typical input artifacts for object-oriented design are:

Conceptual model: The conceptual model is the result of object-oriented analysis. It captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.

Use case: A use case is a description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams. Use case diagrams are used to identify the actors (users or other systems) and the processes they perform.

System Sequence Diagram: A System Sequence Diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.

User interface documentation (if applicable): A document that shows and describes the look and feel of the end product's user interface. It is not mandatory to have this, but it helps to visualize the end product and therefore helps the designer.

Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of an artifact can stimulate the refinement of other artifacts.

Object-oriented concepts
The five basic concepts of object-oriented design are the implementation-level features that are built into the programming language. These features are often referred to by these common names:

Object/Class: A tight coupling or association of data structures with the methods or functions that act on the data. This is called a class, or object (an object is created based on a class). An object can be part of a class, which is a set of objects that are similar. Each object serves a separate function; it is defined by its properties: what it is and what it can do.

Information hiding: The ability to protect some components of the object from external entities. This is realized by language keywords that enable a variable to be declared as private or protected to the owning class.

Inheritance: The ability for a class to extend or override functionality of another class. The so-called subclass has a whole section that is the superclass, and then it has its own set of functions and data.

Interface: The ability to defer the implementation of a method; the ability to define function or method signatures without implementing them.

Polymorphism: The ability to replace an object with its subobjects; the ability of an object variable to contain not only that object, but also all of its subobjects.

Designing concepts
Define objects and create a class diagram from the conceptual diagram: usually, map each entity to a class.

Identify attributes.

Use design patterns (if applicable): A design pattern is not a finished design; it is a description of a solution to a common problem, in a context. The main advantage of using a design pattern is that it can be reused in multiple applications. It can also be thought of as a template for how to solve a problem that can be used in many different situations and/or applications. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.
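The object-oriented concepts described above (class, information hiding, inheritance, interface, polymorphism) can all be seen together in one small sketch. The Shape/Rectangle/Square classes are illustrative examples, not taken from the text.

```python
from abc import ABC, abstractmethod

class Shape(ABC):                      # Interface: area() is deferred to subclasses
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):                # Inheritance: Rectangle extends Shape
    def __init__(self, w, h):
        self._w, self._h = w, h        # Information hiding: underscore marks
                                       # the data as private by convention
    def area(self):
        return self._w * self._h

class Square(Rectangle):               # Subclass reuses superclass data + methods
    def __init__(self, side):
        super().__init__(side, side)

shapes = [Rectangle(2, 3), Square(4)]  # Polymorphism: a Shape-typed slot may hold
total = sum(s.area() for s in shapes)  # any subobject; area() dispatches per class
print(total)                           # 6 + 16 = 22
```

Note that Python enforces information hiding only by convention; languages like Java or C++ realize the same concept with `private`/`protected` keywords, as the text describes.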
Define application framework (if applicable): Application framework is a term usually used to refer to a set of libraries or classes that are used to implement the standard structure of an application for a specific operating system. By bundling a large amount of reusable code into a framework, much time is saved for the developer, who is saved the task of rewriting large amounts of standard code for each new application that is developed.

Identify persistent objects/data (if applicable): Identify objects that have to last longer than a single runtime of the application. If a relational database is used, design the object-relation mapping.

Identify and define remote objects (if applicable).

Output (deliverables) of object-oriented design
Sequence diagrams: Extend the System Sequence Diagram to add specific objects that handle the system events. A sequence diagram shows, as parallel vertical lines, different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur.

Class diagram: A class diagram is a type of static structure UML diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes. The messages and classes identified through the development of the sequence diagrams can serve as input to the automatic generation of the global class diagram of the system.

Some design principles and strategies
Acyclic dependencies principle: The dependency graph of packages or components should have no cycles. This is also referred to as having a directed acyclic graph. For example, package C depends on package B, which depends on package A. If package A also depended on package C, then you would have a cycle.

Composite reuse principle: Favor polymorphic composition of objects over inheritance.

Ques 6: Explain why a software system which is used in a real-world environment must change or become progressively less useful.

Software, or just "software", is a general term used to describe the role that computer programs, procedures, and documentation play in a computer system. The term includes:
Application software, such as word processors, which perform productive tasks for users.
Firmware, which is software programmed resident to electrically programmable memory devices on board main boards or other types of integrated hardware carriers.
Middleware, which controls and co-ordinates distributed systems.
System software, such as operating systems, which interface with hardware to provide the necessary services for application software.
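The acyclic dependencies principle stated above can be checked mechanically by looking for cycles in the package dependency graph. The sketch below uses hypothetical packages A, B, and C matching the text's example; the graph is given as a plain dictionary.

```python
def has_cycle(graph):
    """DFS cycle detection over a dependency graph given as
    {package: [packages it depends on]}. A back edge to a package
    still on the current visit path means a cycle exists."""
    visiting, done = set(), set()

    def visit(node):
        if node in done:
            return False
        if node in visiting:          # back edge => cycle
            return True
        visiting.add(node)
        if any(visit(dep) for dep in graph.get(node, [])):
            return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(visit(n) for n in list(graph))

acyclic = {"A": ["B"], "B": ["C"], "C": []}      # C <- B <- A: a clean DAG
cyclic  = {"A": ["B"], "B": ["C"], "C": ["A"]}   # adding C -> A closes a cycle
print(has_cycle(acyclic), has_cycle(cyclic))     # False True
```

Build tools and linters perform essentially this check so that a cyclic dependency is caught before it forces two packages to be built, tested, and released as one unit.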
Software is an ordered sequence of instructions for changing the state of the computer hardware in a particular sequence. Computer software is so called to distinguish it from computer hardware, which encompasses the physical interconnections and devices required to store and execute (or run) the software; the "hard" parts are the tangible ones, while the "soft" parts are the intangible objects inside the computer. "Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.

Software includes things such as websites, programs and video games that are coded in programming languages like C or C++, as well as the logic systems of modern consumer devices such as automobiles, televisions and toasters. Software usually runs on an underlying operating system such as Linux or Microsoft Windows. Software encompasses an extremely wide array of products and technologies developed using different techniques such as programming languages and scripting languages: web pages developed with technologies like HTML, PHP, Perl, JSP, ASP.NET and XML, and desktop applications like OpenOffice or Microsoft Word developed with technologies like C, C++, Java, C#, or Smalltalk.

At the lowest level, software consists of a machine language specific to an individual processor: groups of binary values signifying processor instructions that change the state of the computer from its preceding state. Software may also be written in an assembly language, essentially a mnemonic representation of a machine language using a natural-language alphabet; assembly language must be assembled into object code via an assembler. Software is usually written in high-level programming languages that are easier and more efficient for humans to use (closer to natural language) than machine language; high-level languages are compiled or interpreted into machine-language object code.

Software testing is a domain dependent on development and programming. It consists of various methods to test and declare a software product fit before it can be launched for use by either an individual or a group. Testware is an umbrella term for all utilities and application software that serve in combination for testing a software package but do not necessarily contribute to operational purposes; as such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.

Software Characteristics
Software is developed and engineered. Software doesn't "wear out". Most software continues to be custom built.

Overview
Computer software is often regarded as anything but hardware.
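The layering just described (machine language underneath assembly and high-level languages) can be seen directly with Python's standard dis module, which disassembles a high-level function into the interpreter's own lower-level instruction set (bytecode, analogous to, though not the same as, processor machine language). This is a minimal illustration, not part of the original notes; the exact instruction names vary between Python versions:

```python
import dis
import io

def add(a, b):
    # One high-level statement, close to natural language.
    return a + b

# Capture the disassembly of add() as text.
buf = io.StringIO()
dis.dis(add, file=buf)
listing = buf.getvalue()

# Each line of the listing pairs an instruction offset with a low-level
# operation such as LOAD_FAST (push a local variable onto the stack).
print(listing)
```

Running this shows that one line of high-level code expands into several primitive instructions, which is exactly the translation work a compiler or interpreter performs.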
Types of software
A layer structure shows where the operating system is located in generally used software systems on desktops. Practical computer systems divide software systems into three major classes: system software, programming software and application software, although the distinction is arbitrary and often blurred.

System software
System software helps run the computer hardware and computer system. It includes a combination of the following:
- device drivers
- operating systems
- servers
- utilities
- windowing systems
The purpose of systems software is to unburden the applications programmer from the often complex details of the particular computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards, and also to partition the computer's resources, such as memory and processor time, in a safe and stable manner. Examples are Windows XP, Linux and Mac.

Programming software
Programming software usually provides tools to assist a programmer in writing computer programs and software using different programming languages in a more convenient way. The tools include:
- compilers
- debuggers
- interpreters
- linkers
- text editors
An integrated development environment (IDE) is a single application that attempts to manage all these functions.

Application software
Application software allows end users to accomplish one or more specific (not directly computer-development-related) tasks. Typical applications include:
- industrial automation
- business software
- computer games
- quantum chemistry and solid state physics software
- telecommunications (i.e. the internet and everything that flows on it)
- databases
- educational software
- medical software
- military software
- molecular modeling software
- image editing
- spreadsheets
- simulation software
- word processing
- decision-making software
Application software exists for, and has impacted, a wide variety of topics.
Ques 7 Explain why regression testing is necessary and how automated testing tools can assist with this type of testing.

Regression testing is any type of software testing that seeks to uncover software regressions. Such regressions occur whenever previously working software functionality stops working as intended. Typically, regressions occur as an unintended consequence of program changes. Common methods of regression testing include rerunning previously run tests and checking whether previously fixed faults have re-emerged.

Experience has shown that as software is fixed, emergence of new faults and/or re-emergence of old faults is quite common. Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control). Often, a fix for a problem will be "fragile": it fixes the problem in the narrow case where it was first observed, but not in more general cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one area inadvertently causes a software bug in another area. Finally, it has often been the case that when some feature is redesigned, the same mistakes that were made in the original implementation of the feature were made in the redesign.

Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly retested after subsequent changes to the program. Although this may be done through manual testing procedures using programming techniques, it is often done using automated testing tools. Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to re-run all regression tests at specified intervals and report any failures (which could imply a regression or an out-of-date test). Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week. Those strategies can be automated by an external tool, such as BuildBot.

Regression testing is an integral part of the extreme programming software development method. In this method, design documents are replaced by extensive, repeatable, and automated testing of the entire software package at every stage in the software development cycle.

Traditionally, in the corporate world, regression testing has been performed by a software quality assurance team after the development team has completed work. However, defects found at this stage are the most costly to fix. This problem is being addressed by the rise of developer testing.
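As a sketch of the practice described above (recording a test that exposes a fixed bug and re-running it on every change), the following uses a hypothetical normalize function whose imagined bug history is stated in the comments. The test functions are written so that a tool such as pytest could discover and re-run them automatically:

```python
# Hypothetical bug history: normalize() once raised ZeroDivisionError on an
# empty list. After the fix, a test exposing that exact bug is recorded and
# re-run after every subsequent change to catch any relapse (regression).

def normalize(values):
    """Scale values so they sum to 1.0; an empty or all-zero input returns []."""
    total = sum(values)
    if total == 0:  # the fix: this guard was missing in the buggy version
        return []
    return [v / total for v in values]

def test_normalize_regular_input():
    # Ordinary functional test of the intended behaviour.
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

def test_normalize_empty_input_regression():
    # Recorded when the original bug was fixed; rerunning it is regression testing.
    assert normalize([]) == []

if __name__ == "__main__":
    # pytest would discover these automatically; here we run them directly.
    test_normalize_regular_input()
    test_normalize_empty_input_regression()
    print("all regression tests passed")
```

An automated system of the kind mentioned above would simply run this suite after every compile or nightly build and report any assertion failure.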
Although developers have always written test cases as part of the development cycle, these test cases have generally been either functional tests or unit tests that verify only intended outcomes. Developer testing compels a developer to focus on unit testing and to include both positive and negative test cases.

"Also as a consequence of the introduction of new bugs, program maintenance requires far more system testing per statement written than any other programming. Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way; and this is very costly. In practice, such regression testing must indeed approximate this theoretical ideal."

Uses
Regression testing can be used not only for testing the correctness of a program, but often also for tracking the quality of its output. For instance, in the design of a compiler, regression testing should track the code size, simulation time, and run time of the test suite cases.

Ques 8 Explain how back-to-back testing may be used to test critical systems with replicated software.

Back-to-back testing applies when several versions of a system that should behave identically are available, as with the replicated software used in critical systems. The same test cases are presented to every version and their outputs are compared; because each version is meant to implement the same specification, any difference between the outputs signals a potential fault in at least one version, which is then investigated. Rerunning the shared test cases against all versions also reveals regressions: cases where previously working functionality stops working as intended, or where previously fixed faults re-emerge. Sometimes re-emergence occurs because a fix gets lost through poor revision control practices (or simple human error in revision control); often a fix is "fragile", fixing the problem only in the narrow case where it was first observed; and frequently a fix in one area inadvertently causes a bug in another area.

Because the volume of comparison is large, back-to-back testing is normally performed with automated testing tools. A test suite contains software tools that allow the testing environment to execute all the back-to-back test cases automatically, and some projects set up automated systems to re-run all back-to-back tests at specified intervals and report any failures (which could imply a fault or an out-of-date test). Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week; those strategies can be automated by an external tool, such as BuildBot. Extensive, repeatable, automated testing of the entire software package at every stage of the development cycle, as practised in extreme programming, makes this approach feasible.
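A minimal sketch of the comparison step, assuming two hypothetical, independently written implementations of the same sorting specification standing in for the replicated versions: the same generated inputs are run through both, and any disagreement is reported for investigation.

```python
import random

# Two independently written implementations of one specification
# (stand-ins for the replicated versions in a critical system).

def sort_version_a(xs):
    return sorted(xs)  # version A: relies on the built-in sort

def sort_version_b(xs):
    # Version B: a hand-written insertion sort of the same specification.
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def back_to_back(test_inputs):
    """Run every input through both versions; any disagreement signals a fault."""
    discrepancies = []
    for xs in test_inputs:
        a, b = sort_version_a(xs), sort_version_b(xs)
        if a != b:
            discrepancies.append((xs, a, b))
    return discrepancies

random.seed(0)  # reproducible test-case generation
inputs = [[random.randint(-100, 100) for _ in range(random.randint(0, 20))]
          for _ in range(200)]
print("discrepancies:", back_to_back(inputs))  # an empty list means the versions agree
```

In a real critical system the "versions" would be separately developed programs rather than two functions, but the principle is the same: identical stimuli, automated output comparison, and investigation of every difference.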
Ques 9 Write a note on Software Testing Strategy.

Software testing is an empirical investigation conducted to provide stakeholders with information about the quality of the product or service under test, with respect to the context in which it is intended to operate. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs. Software testing can also be stated as the process of validating and verifying that a software program/application/product:
- meets the business and technical requirements that guided its design and development;
- works as expected; and
- can be implemented with the same characteristics.
Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of implementation of the software.

Software testing can be implemented at any time in the development process; however, the methodology of the test is governed by the software development methodology adopted, and different software development models will focus the test effort at different points in the development process. In a more traditional model, most of the test effort occurs after the requirements have been defined and the coding process has been completed. Newer development models, such as Agile or XP, often employ test-driven development and place an increased portion of the testing in the hands of the developer, up front in the development process.
History
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug"), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:
- Until 1956: Debugging oriented
- 1957-1978: Demonstration oriented
- 1979-1982: Destruction oriented
- 1983-1987: Evaluation oriented
- 1988-2000: Prevention oriented

Software testing
A primary purpose for testing is to detect software failures so that defects may be uncovered and corrected. This is a non-trivial pursuit: testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, as well as examining the aspects of code: does it do what it is supposed to do, and do what it needs to do? Information derived from software testing may be used to correct the process by which software is developed. In the current culture of software development, a testing organization may be separate from the development team, and there are various roles for testing team members.

Defects and failures
Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g. unrecognized requirements that result in errors of omission by the program designer. A common source of requirements gaps is non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Functional vs non-functional testing
Functional testing refers to tests that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work". Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security. Non-functional testing tends to answer such questions as "how many people can log in at once", or "how easy is it to hack this software".
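To make the contrast concrete, here is a small sketch using a hypothetical login function: the first test verifies a specific behaviour ("can the user do this?", functional), while the second says nothing about correctness and only checks a quality attribute, an illustrative performance budget (non-functional). All names and numbers are invented for illustration.

```python
import time

# Hypothetical system under test: an in-memory credential check.
USERS = {"alice": "s3cret"}

def login(username, password):
    return USERS.get(username) == password

def test_login_functional():
    # Functional: verifies a specific action of the code.
    assert login("alice", "s3cret") is True
    assert login("alice", "wrong") is False

def test_login_performance():
    # Non-functional: only constrains a quality attribute (speed), with a
    # deliberately generous, illustrative budget.
    start = time.perf_counter()
    for _ in range(10_000):
        login("alice", "s3cret")
    elapsed = time.perf_counter() - start
    assert elapsed < 1.0, f"10,000 logins took {elapsed:.3f}s"

if __name__ == "__main__":
    test_login_functional()
    test_login_performance()
    print("functional and non-functional checks passed")
```

Note that the performance test could pass while the behaviour is wrong, and vice versa; the two kinds of test answer different questions.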
Non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do), such as usability, scalability, performance, compatibility and reliability, can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures; for example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new hardware platform, alterations in source data, or interacting with different software. A single defect may result in a wide range of failure symptoms.

Input combinations and preconditions
A very fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing.

Finding faults early
It is commonly believed that the earlier a defect is found, the cheaper it is to fix. For example, if a problem in the requirements is found only post-release, then it would cost 10-100 times more to fix than if it had already been found by the requirements review. The following table shows the cost of fixing a defect depending on the stage at which it was introduced and the stage at which it was detected:

Time introduced \ Time detected    Requirements   Architecture   Construction   System test   Post-release
Requirements                       1x             3x             5-10x          10x           10-100x
Architecture                       -              1x             10x            15x           25-100x
Construction                       -              -              1x             10x           10-25x

This could be considered a "prevention oriented strategy" that fits well with the latest testing phase suggested by Dave Gelperin and William C. Hetzel, as cited above.

Compatibility
A frequent cause of software failure is compatibility with another application, a new operating system, or, increasingly, a new version of a web browser. In the case of lack of backward compatibility, this can occur because the programmers have only considered coding their programs for, or testing the software upon, "the latest version of" this-or-that operating system. The unintended consequence is that their latest work might not be fully compatible with earlier mixtures of software and hardware, or might not be fully compatible with another important operating system. These differences, whatever they might be, may have resulted in (unintended, as witnessed by some significant population of computer users) software failures.
Software verification and validation
Software testing is used in association with verification and validation:
- Verification: Have we built the software right? (i.e. does it match the specification)
- Validation: Have we built the right software? (i.e. is this what the customer wants)
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:
- Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
- Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are considered static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be (and unfortunately in practice often is) omitted. Dynamic testing takes place when the program itself is used for the first time (which is generally considered the beginning of the testing stage). Dynamic testing may begin before the program is 100% complete in order to test particular sections of code (modules or discrete functions); typical techniques for this are either using stubs/drivers or execution from a debugger environment. For example, spreadsheet programs are, by their very nature, tested to a large extent interactively ("on the fly"), with results displayed immediately after each calculation or text manipulation.

Software Quality Assurance (SQA)
Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view on software and its development: they examine and change the software engineering process itself to reduce the amount of faults that end up in the delivered software, the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software: an arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than mission-critical software such as that used to control the functions of an airliner that really is flying. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies. By contrast, QA (Quality Assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

The software testing team
Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established: manager, tester, test designer, test lead, automation developer, and test administrator.
Regression Testing . and there may be no SQA function in some companies.control the functions of an airliner that really is flying! Although there are close links with SQA. and will result in apass or fail boolean outcome. a number of tasks must occur Basic user requirements must be communicated between the customer and the software engineer Classes must be identified A class hierarchy must be specified Object-to-Object relationships should be represented Object behavior must be modeled The above tasks are reapplied iteratively until the model is complete. tothe anticipated user's environment. testing departments often exist independently. Apart from the above factor we have cycle to completed Acceptance Testing Acceptance testing generally involves running a suite of tests on the completedsystem.Software Testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. known as a case. The test environment is usually designed to be identical. QA (Quality Assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place. By contrast. or as close as possible. Each individual test. exercises a particular operatingcondition of the user's environment or feature of the system. There isgenerally no degree of success or failure. wholes and parts In order to build an analysis model five basic principle were applied. These test casesmust each be accompanied by test case input data or a formal description of theoperational activities (or both) to be performed²intended to thoroughly exercisethe specific case²and a formal description of the expected results. Ques 10 Discuss whether it is possible for engineers to test their own programs in an objective way. 
The information domain is modeled Function is described Behavior is represented Data functional and behavioral models are partitioned top expose greater details Early models represent the essence of the problem while later models provide implementation details To accomplish this. classes and number. OOA-object oriented analysis is based upon the concepts like objects and attributes. including extremes of such.
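The analysis tasks above can be sketched in code. The lending-library domain, class names and behaviour below are invented for illustration; the point is only to show classes with attributes, a class hierarchy, an object-to-object relationship, and modelled object behaviour:

```python
class LibraryItem:                      # class identified from user requirements
    def __init__(self, title):
        self.title = title              # attribute
        self.on_loan = False            # state used by the behaviour below

class Book(LibraryItem):                # class hierarchy: a Book is-a LibraryItem
    def __init__(self, title, author):
        super().__init__(title)
        self.author = author            # attribute specific to the subclass

class Member:
    def __init__(self, name):
        self.name = name
        self.borrowed = []              # object-to-object relationship: Member -> items

    def borrow(self, item):             # object behaviour modelled as a method
        if item.on_loan:
            raise ValueError(f"{item.title} is already on loan")
        item.on_loan = True
        self.borrowed.append(item)

# Exercising the model once:
member = Member("Asha")
member.borrow(Book("Refactoring", "Martin Fowler"))
print([b.title for b in member.borrowed])
```

In a real analysis the model would be refined iteratively, exactly as the task list above says, with each pass adding classes, relationships and behaviour discovered from the requirements.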
Regression testing is any type of software testing that seeks to uncover software regressions. Such regressions occur whenever previously working software functionality stops working as intended; typically, regressions occur as an unintended consequence of program changes. Common methods of regression testing include rerunning previously run tests and checking whether previously fixed faults have re-emerged. So, logically speaking, it is possible for engineers to test their own programs in an objective way if all of the above practices are in place; but for all practical reasons, professional projects handle testing professionally, driven by defined test case functions.