
Software Engineering

UNIT – I
 Introduction to Software Engineering
Software Engineering:
The term is made of two words, software and engineering.
Software is more than just program code. A program is executable code that serves
some computational purpose. Software is considered to be a collection of executable
programming code, associated libraries and documentation. Software made for a
specific requirement is called a software product.
Engineering on the other hand, is all about developing products, using well-defined,
scientific principles and methods.
Software engineering is an engineering branch associated with development of software
product using well-defined scientific principles, methods and procedures. The outcome of
software engineering is an efficient and reliable software product.
Definitions
IEEE defines software engineering as:
(1) The application of a systematic, disciplined, quantifiable approach to the
development, operation and maintenance of software; that is, the application of engineering
to software.
(2) The study of approaches as in the above statement.
Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order
to obtain economically software that is reliable and works efficiently on real machines.
Software Evolution
The process of developing a software product using software engineering principles and
methods is referred to as software evolution. This includes the initial development of
software and its maintenance and updates, till desired software product is developed, which
satisfies the expected requirements.
Evolution starts with the requirement-gathering process, after which the developers create a
prototype of the intended software and show it to the users to get their feedback at an early
stage of software product development. The users suggest changes, and several consecutive
updates and maintenance cycles follow. This process of changing the original software
continues until the desired software is accomplished.
Even after the user has desired software in hand, the advancing technology and the changing
requirements force the software product to change accordingly. Re-creating software from
scratch and to go one-on-one with requirement is not feasible. The only feasible and
economical solution is to update the existing software so that it matches the latest
requirements.
The Evolving Role of Software:
 Software can be considered in a dual role. It is a product and also a vehicle for delivering
a product.
 As a product, it delivers the computing potential embodied by computer hardware,
whether that hardware resides within a cellular phone, operates inside a mainframe
computer, or is a network of computers accessible by local hardware.
 As a vehicle, software delivers the most important product of our time: information.
Software transforms personal data; it manages business information to enhance
competitiveness; it provides a gateway to worldwide information networks and provides
the means for acquiring information in all of its forms.
 Software also acts as the basis for operating systems, networks, software tools and
environments.
Changing Nature of Software:
The following categories of computer software present continuing challenges for software
engineers:
 System software:
System software is a collection of programs written to service other programs, e.g.
compilers, editors, file management utilities, operating system components, device
drivers, etc.
 Application software:
Application software consists of standalone programs that solve a specific business
need.
 Engineering and Scientific Software:
This software is used to facilitate engineering functions and tasks. However, modern
applications within the engineering and scientific area are moving away from
conventional numerical algorithms. Computer-aided design, system simulation, and
other interactive applications have begun to take on real-time and even system-software
characteristics.

 Embedded Software:
Embedded software resides within a system or product and is used to implement and
control features and functions for the end user and for the system itself. Embedded software
can perform limited, esoteric functions or provide significant function and control
capability.
 Product-line Software:
Designed to provide a specific capability for use by many different customers, product-line
software can focus on a limited, esoteric marketplace or address the mass consumer
market.
 Web Applications:
A web application is a client-server program in which the client runs in a web browser. In
their simplest form, web apps can be little more than a set of linked hypertext files that
present information using text and limited graphics. However, as e-commerce and B2B
applications grow in importance, web apps are evolving into sophisticated computing
environments that provide standalone features, computing functions, and content to the
end user.
 Artificial Intelligence Software:
Artificial intelligence software makes use of nonnumerical algorithms to solve complex
problems that are not amenable to computation or straightforward analysis. Applications
within this area include robotics, expert systems, pattern recognition, artificial neural
networks, theorem proving and game playing.
The Software Process:
A software process (also known as a software methodology) is a set of related activities that
leads to the production of software. These activities may involve developing the software
from scratch or modifying an existing system.
Any software process must include the following four activities:

1. Software specification (or requirements engineering): Define the main functionalities of
the software and the constraints around them.

2. Software design and implementation: The software is designed and programmed.

3. Software verification and validation: The software must conform to its specification and
meet the customer's needs.

4. Software evolution (software maintenance): The software is modified to meet changing
customer and market requirements.

In practice, these activities include sub-activities such as requirements validation,
architectural design, unit testing, etc.

There are also supporting activities such as configuration and change management, quality
assurance, project management, user experience.

Other activities aim to improve the above activities by introducing new techniques and
tools, following best practice, standardizing the process (so the diversity of software
processes is reduced), etc.

When we talk about a process, we usually talk about the activities in it. However, a process
also includes the process description, which includes:

1. Products: The outcomes of an activity. For example, the outcome of architectural design
may be a model of the software architecture.

2. Roles: The responsibilities of the people involved in the process. For example, the
project manager, programmer, etc.

3. Pre- and post-conditions: The conditions that must be true before and after an activity.
For example, a pre-condition of architectural design is that the requirements have been
approved by the customer, while a post-condition is that the diagrams describing the
architecture have been reviewed.
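As a minimal illustrative sketch (the `Activity` class and its field names are invented for this example, not part of any standard process notation), an activity with a pre-condition and a post-condition could be modelled like this:

```python
# Hypothetical sketch: a process activity guarded by pre- and post-conditions.
# All names here are invented for illustration.

class Activity:
    def __init__(self, name, precondition, action, postcondition):
        self.name = name
        self.precondition = precondition    # callable(state) -> bool
        self.action = action                # callable(state) -> product
        self.postcondition = postcondition  # callable(product) -> bool

    def run(self, state):
        # The pre-condition must hold before the activity starts.
        if not self.precondition(state):
            raise RuntimeError(f"{self.name}: pre-condition not met")
        product = self.action(state)
        # The post-condition must hold on the activity's product.
        if not self.postcondition(product):
            raise RuntimeError(f"{self.name}: post-condition not met")
        return product

# Example: architectural design may start only after requirements approval,
# and its product (the diagrams) must have been reviewed.
design = Activity(
    name="architectural design",
    precondition=lambda s: s.get("requirements_approved", False),
    action=lambda s: {"diagrams": ["component view"], "reviewed": True},
    postcondition=lambda p: p.get("reviewed", False),
)

product = design.run({"requirements_approved": True})
print(product["diagrams"])  # prints ['component view']
```

Running the same activity without an approved-requirements state would raise an error, which is exactly what the pre-condition is meant to enforce.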

A software process is complex and relies on people making decisions. There is no ideal
process, and most organizations have developed their own software processes.

For example, an organization that works on critical systems needs a very structured process,
while for business systems with rapidly changing requirements, a less formal, flexible
process is likely to be more effective.
Software Development Myths
Pressman (1997) describes a number of common beliefs or myths that software managers,
customers, and developers falsely believe. He describes these myths as ``misleading attitudes
that have caused serious problems.'' We look at these myths to see why they are false, and
why they lead to trouble.

Software Management Myths. Pressman describes managers' beliefs in the following
mythology as grasping at straws:

 Development problems can be solved by developing and documenting standards.
Standards have been developed by companies and standards organizations. They can
be very useful. However, they are frequently ignored by developers because they are
irrelevant and incomplete, and sometimes incomprehensible.
 Development problems can be solved by using state-of-the art tools. Tools may help,
but there is no magic. Problem solving requires more than tools, it requires great
understanding. As Fred Brooks (1987) says, there is no silver bullet to slay the
software development werewolf.
 When schedules slip, just add more people. This solution seems intuitive: if there is
too much work for the current team, just enlarge it. Unfortunately, increasing team
size increases communication overhead. New workers must learn project details
taking up the time of those who are already immersed in the project. Also, a larger
team has many more communication links, which slows progress. Fred Brooks (1975)
gives us one of the most famous software engineering maxims, which is not a myth,
``adding people to a late project makes it later.''

Software Customer Myths. Customers often vastly underestimate the difficulty of
developing software. Sometimes marketing people encourage customers in their misbeliefs.

 Change is easily accommodated, since software is malleable.

Software can certainly be changed, but often changes after release can require an
enormous amount of labour.

 A general statement of need is sufficient to start coding.

This myth reminds me of a cartoon that I used to post on my door. It showed the
software manager talking to a group of programmers, with the quote: ``You
programmers just start coding while I go down and find out what they want the
program to do.'' This scenario is an exaggeration. However, for developers to have a
chance to satisfy the customers’ requirements, they need detailed descriptions of these
requirements. Developers cannot read the minds of customers.

Developer Myths. Developers often want to be artists (or artisans), but the software
development craft is becoming an engineering discipline. However, myths remain:

 The job is done when the code is delivered.

Commercially successful software may be used for decades. Developers must
continually maintain such software: they add features and repair bugs. Maintenance
costs predominate over all other costs; maintenance may be 70% of the total development
costs. This myth is true only for shelfware --- software that is never used --- since there
are no customers for the next release of a shelfware product.

 Project success depends solely on the quality of the delivered program.

Documentation and software configuration information are very important to
quality. After functionality, maintainability (see the preceding myth) is of critical
importance. Developers must maintain the software, and they need good design
documents, test data, etc. to do their job.

 You can't assess software quality until the program is running.

There are static ways to evaluate quality without running a program. Software
reviews can effectively determine the quality of requirements documents, design
documents, test plans, and code. Formal (mathematical) analyses are often used to
verify safety critical software, software security factors, and very-high reliability
software.

Software Process Framework || A Generic Process Model


A framework is a standard way to build and deploy applications. The software process
framework is the foundation of the complete software engineering process. It includes the
set of umbrella activities, as well as a number of framework activities that are applicable
to all software projects.

A generic process framework encompasses five activities, which are given below:
 Communication: In this activity, there is heavy communication with customers and other
stakeholders, and requirement gathering is done.
 Planning: In this activity, we discuss the technical tasks, work schedule, risks, required
resources, etc.
 Modelling: Modelling is about building representations of things in the 'real world'. In the
modelling activity, a model of the product is created in order to better understand the
requirements.
 Construction: In software engineering, construction is the application of the set of
procedures needed to assemble the product. In this activity, we generate the code and test
the product in order to make a better product.
 Deployment: In this activity, a complete or partial product is presented to the customers,
who evaluate it and give feedback. On the basis of their feedback, we modify the product to
supply a better product.
Umbrella activities include:
 Risk management
 Software quality assurance (SQA)
 Software configuration management (SCM)
 Measurement
 Formal technical reviews (FTR)
Software Process Assessment:
A software process assessment is a disciplined examination of the software processes used
by an organization, based on a process model. The assessment includes the identification
and characterization of current practices, identifying areas of strength and weakness, and
evaluating the ability of current practices to control or avoid significant causes of poor
(software) quality, cost, and schedule.
A software assessment (or audit) can be of three types.
 A self-assessment (first-party assessment) is performed internally by an
organization's own personnel.
 A second-party assessment is performed by an external assessment team, or the
organization is assessed by a customer.
 A third-party assessment is performed by an external party (e.g., a supplier being
assessed by a third party to verify its ability to enter into contracts with a customer).
Software process assessments are performed in an open and collaborative environment.
They are for the use of the organization to improve its software processes, and the results are
confidential to the organization. The organization being assessed must have members on the
assessment team.
Capability Maturity Model Integration (CMMI):
The Capability Maturity Model Integration, or CMMI, is a process model that provides a
clear definition of what an organization should do to promote behaviours that lead to
improved performance. With five “Maturity Levels” or three “Capability Levels,” the CMMI
defines the most important elements that are required to build great products, or deliver great
services, and wraps them all up in a comprehensive model.

The CMMI helps us understand the answer to the question “how do we know?”

 How do we know what we are good at?
 How do we know if we're improving?
 How do we know if the process we use is working well?
 How do we know if our requirements change process is useful?
 How do we know if our products are as good as they can be?
The CMMI also helps us identify and achieve measurable business goals, build better
products, keep customers happier, and ensure that we are working as efficiently as possible.

CMMI comprises a set of “Process Areas.” Each Process Area is intended to be adapted to
the culture and behaviours of your own company. The CMMI is not a process; it is a book of
“whats,” not a book of “hows,” and does not define how your company should behave. More
accurately, it defines what behaviours need to be defined. In this way, CMMI is a
“behavioural model” as well as a “process model.”

Organizations can be “rated” at a Capability or Maturity Level based on over 300 discrete
“Specific” and “Generic” Practices. Intended to be broadly interpreted, the CMMI is not a
“Standard” (in the way ISO standards are), so achieving a “Level” of CMMI is not a
certification but a “rating.”

 Process Models
Prescriptive Process Models:

The following framework activities are carried out irrespective of the process model chosen
by the organization.

1. Communication
2. Planning
3. Modelling
4. Construction
5. Deployment

The name 'prescriptive' is given because the model prescribes a set of activities, actions,
tasks, quality assurance, and change mechanisms for every project.

There are three types of prescriptive process models. They are:

1. The Waterfall Model
2. The Incremental Process Model
3. The RAD Model

1. The Waterfall Model


 The waterfall model is also called the 'linear sequential model' or 'classic life cycle
model'.
 In this model, each phase is fully completed before the beginning of the next phase.
 This model is used for small projects.
 In this model, feedback is taken after each phase to ensure that the project is on the right
path.
 The testing part starts only after the development is complete.

NOTE: The description of the phases of the waterfall model is the same as that of the generic
process framework.

An alternative design for 'linear sequential model' is as follows:


Advantages of waterfall model

 The waterfall model is simple and easy to understand, implement, and use.
 All the requirements are known at the beginning of the project; hence it is easy to manage.
 It avoids overlapping of phases because each phase is completed at once.
 This model works for small projects because the requirements are understood very well.
 This model is preferred for those projects where the quality is more important as compared
to the cost of the project.
Disadvantages of the waterfall model

 This model is not good for complex and object-oriented projects.
 It is a poor model for long projects.
 The problems with this model are not uncovered until software testing begins.
 The amount of risk is high.
2. Incremental Process model

 The incremental model combines elements of the waterfall model, applied in an
iterative fashion.
 The first increment in this model is generally the core product.
 Each increment builds on the product and submits it to the customer for any suggested
modifications.
 The next increment implements the customer's suggestions and adds additional
requirements to the previous increment.
 This process is repeated until the product is finished.
For example, word-processing software is often developed using the incremental model.
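The growth of the product across increments can be sketched in a few lines (the feature names below are invented, loosely following the word-processor example above):

```python
# Hypothetical sketch: the incremental model as successive feature increments.
# Feature names are invented for illustration.

increments = [
    ["open file", "edit text", "save file"],  # increment 1: the core product
    ["spell check", "word count"],            # increment 2
    ["mail merge", "track changes"],          # increment 3
]

product = []  # the evolving product, grown one increment at a time
for i, features in enumerate(increments, start=1):
    product.extend(features)  # each increment builds on the previous one
    # ...here the increment would be delivered to the customer,
    # and suggested modifications collected for the next increment...
    print(f"increment {i}: {len(product)} features delivered")
```

Each pass through the loop corresponds to one delivery to the customer; the next increment starts from everything already built.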
Advantages of incremental model

 This model is flexible because the cost of development is low and initial product delivery
is faster.
 It is easier to test and debug during a smaller iteration.
 Working software is generated quickly and early in the software life cycle.
 The customers can respond to its functionalities after every increment.
Disadvantages of the incremental model

 The cost of the final product may exceed the cost estimated initially.
 This model requires very clear and complete planning.
 The planning of the design is required before the whole system is broken into small
increments.
 The customer's demands for additional functionality after every increment cause
problems for the system architecture.
3. RAD model
 RAD is a Rapid Application Development model.
 Using the RAD model, a software product is developed in a short period of time.
 The initial activity starts with communication between the customer and the developer.
 Planning depends upon the initial requirements, and then the requirements are divided into
groups.
 Planning is important for teams to work together on different modules.
The RAD model consists of following phases:
1. Business Modelling

 Business modelling consists of the flow of information between various functions in the
project.
 For example, what type of information is produced by each function, and which functions
handle that information.
 A complete business analysis should be performed to get the essential business information.
2. Data modelling

 The information from the business modelling phase is refined into the set of objects that
are essential to the business.
 The attributes of each object are identified, and the relationships between objects are
defined.
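To make "objects, attributes, relationships" concrete, here is a small illustrative sketch (the `Customer` and `Order` names are invented for this example, not part of the RAD model itself): each object carries its attributes, and the one-to-many relationship between them is represented explicitly.

```python
# Hypothetical sketch of a data model: objects, their attributes,
# and a one-to-many relationship. All names are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    order_id: int   # attribute
    amount: float   # attribute

@dataclass
class Customer:
    name: str                                          # attribute
    orders: List[Order] = field(default_factory=list)  # relationship: one customer, many orders

    def place(self, order: Order) -> None:
        self.orders.append(order)

c = Customer(name="Asha")
c.place(Order(order_id=1, amount=250.0))
print(len(c.orders))  # prints 1
```

The process-modelling phase that follows would then define the operations (add, modify, delete, retrieve) over exactly these objects.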
3. Process modelling

 The data objects defined in the data modelling phase are transformed to achieve the
information flow that implements the business model.
 Process descriptions are created for adding, modifying, deleting or retrieving a data object.
4. Application generation

 In the application generation phase, the actual system is built.
 Automated tools are used to construct the software.
5. Testing and turnover

 The prototypes are independently tested after each iteration so that the overall testing time
is reduced.
 The data flow and the interfaces between all the components are fully tested. Hence,
most of the programming components have already been tested.
Evolutionary Process Models:

 Evolutionary models are iterative models.
 They allow developers to create increasingly complete versions of the software.

Following are the evolutionary process models.

1. The prototyping model
2. The spiral model
3. The concurrent development model
1. The Prototyping model

 A prototype is defined as a first or preliminary form from which other forms are copied
or derived.
 The prototype model starts with a set of general objectives for the software.
 It does not identify detailed requirements such as input and output.
 A prototype is a working model of the software with limited functionality.
 In this model, working programs are produced quickly.
The different phases of Prototyping model are:

1. Communication
In this phase, developer and customer meet and discuss the overall objectives of the software.

2. Quick design

 A quick design is created once the requirements are known.
 It includes only the important aspects of the software, such as input and output formats.
 It focuses on those aspects which are visible to the user rather than on a detailed plan.
 It helps to construct a prototype.
3. Modelling quick design

 This phase gives a clearer idea about the development of the software because a working
model of it is now being built.
 It allows the developer to better understand the exact requirements.
4. Construction of prototype
The prototype is built and then evaluated by the customer.

5. Deployment, delivery, feedback

 If the user is not satisfied with the current prototype, it is refined according to the
user's requirements.
 The process of refining the prototype is repeated until all the requirements of the users
are met.
 When the users are satisfied with the developed prototype, the final system is developed
on the basis of the final prototype.
Advantages of Prototyping Model
 The prototype model does not require detailed knowledge of input, output, processes,
operating-system adaptability, or full machine interaction in advance.
 In the development process of this model users are actively involved.
 The development process is the best platform to understand the system by the user.
 Errors are detected much earlier.
 Gives quick user feedback for better solutions.
 It identifies the missing functionality easily. It also identifies the confusing or difficult
functions.
Disadvantages of Prototyping Model:

 Client involvement is greater, and it is not always considered by the developer.
 It is a slow process because development takes more time.
 Many changes can disturb the rhythm of the development team.
 The prototype is thrown away when the users are confused by it.
2. The Spiral model

 The spiral model is a risk-driven process model.
 It is used for generating software projects.
 In the spiral model, if a risk is found during risk analysis, alternate solutions are suggested
and implemented.
 It is a combination of the prototype model and the sequential (waterfall) model.
 In one iteration all activities are done; for large projects, the output of each iteration is
small.
The framework activities of the spiral model are as shown in the following figure.
NOTE: The description of the phases of the spiral model is the same as that of the generic
process framework.

Advantages of Spiral Model

 It reduces a high amount of risk.
 It is good for large and critical projects.
 It gives strong approval and documentation control.
 In the spiral model, software is produced early in the life cycle.

Disadvantages of Spiral Model

 It can be costly to develop software using this model.
 It is not used for small projects.
3. The concurrent development model

 The concurrent development model is also called the concurrent model.
 The communication activity is completed in the first iteration and exits into the 'awaiting
changes' state.
 The modelling activity completes its initial work and then goes into the 'under
development' state.
 If the customer specifies a change in the requirements, the modelling activity moves
from the 'under development' state into the 'awaiting changes' state.
 In the concurrent process model, activities move from one state to another.
Advantages of the concurrent development model

 This model is applicable to all types of software development processes.
 It is easy to understand and use.
 It gives immediate feedback from testing.
 It provides an accurate picture of the current state of a project.
Disadvantages of the concurrent development model

 It needs better communication between the team members, which may not be achieved
all the time.
 It requires remembering the status of the different activities.

Unified Process Model:

Definition
The unified process model (or UPM) is an iterative, incremental, architecture-centric, and
use-case driven approach to software development. Let's first take a look at the use-case
driven approach.
Use-Case Driven Approach
A use case defines the interaction between two or more entities. The requirements
specified by a customer are converted into functional requirements by a business analyst and
are generally referred to as use cases. A use case describes the operation of the software as
interactions between the customer and the system, resulting in a specific output or a
measurable return. For example, an online cake shop can be specified in terms of use cases
such as 'add cake to cart,' 'change the quantity of added cakes in cart,' 'cake order checkout,'
and so on. Each use case represents a significant piece of functionality and could be
considered for an iteration.
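As a minimal sketch (the `Cart` class and its method names are invented for this example), each of the cake-shop use cases above maps naturally onto one operation of the system:

```python
# Hypothetical sketch: each cake-shop use case becomes one operation.
# The Cart class and method names are invented for illustration.

class Cart:
    def __init__(self):
        self.items = {}  # cake name -> quantity

    def add_cake(self, name, qty=1):
        # Use case: 'add cake to cart'
        self.items[name] = self.items.get(name, 0) + qty

    def change_quantity(self, name, qty):
        # Use case: 'change the quantity of added cakes in cart'
        if qty <= 0:
            self.items.pop(name, None)
        else:
            self.items[name] = qty

    def checkout(self, prices):
        # Use case: 'cake order checkout' -> a measurable return (the total)
        return sum(prices[name] * qty for name, qty in self.items.items())

cart = Cart()
cart.add_cake("chocolate")
cart.change_quantity("chocolate", 3)
total = cart.checkout({"chocolate": 5.0})
print(total)  # prints 15.0
```

In an iterative development, each of these operations could be delivered in its own iteration, which is the point of treating use cases as units of work.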
Architecture-Centric Approach
Now, let's take a closer look at the architecture-centric approach. Using this approach,
you'd be creating a blueprint of the organization of the software system. It would include
taking into account the different technologies, programming languages, operating systems,
development and release environments, server capabilities, and other such areas for
developing the software.
Iterative and Incremental Approach
And finally, let's take a closer look at the iterative and incremental approach.
Using an iterative and incremental approach means treating each iteration as a mini-project.
Therefore, you'd develop the software as a number of small mini-projects, working in cycles.
You'd develop small working versions of the software at the end of each cycle. Each iteration
would add some functionality to the software according to the requirements specified by the
customer.
Now that we have seen the distinctive characteristics of the unified process model, let's take
a look at the process steps involved.
Unified Process Model Phases

The life of a software system can be represented as a series of cycles. A cycle ends with the
release of a version of the system to customers.

Within the Unified Process, each cycle contains four phases. A phase is simply the span of
time between two major milestones, points at which managers make important decisions
about whether to proceed with development and, if so, what's required concerning project
scope, budget, and schedule.

Figure 1-1: Phases and Major Milestones


Figure 1-1 shows the phases and major milestones of the Unified Process. In it, you can see
that each phase contains one or more iterations. We'll explore the concept of iterations in the
section "Iterations and Increments" later in this chapter.

The following subsections describe the key aspects of each of these phases.

Inception

The primary goal of the Inception phase is to establish the case for the viability of the
proposed system.

The tasks that a project team performs during Inception include the following:

 Defining the scope of the system (that is, what's in and what's out)

 Outlining a candidate architecture, which is made up of initial versions of six different
models

 Identifying critical risks and determining when and how the project will address them

 Starting to make the business case that the project is worth doing, based on initial
estimates of cost, effort, schedule, and product quality

The concept of candidate architecture is discussed in the section "Architecture-Centric" later
in this chapter. The six models are covered in the next major section of this chapter, "The
Five Workflows."

The major milestone associated with the Inception phase is called Life-Cycle Objectives.
The indications that the project has reached this milestone include the following:

 The major stakeholders agree on the scope of the proposed system.

 The candidate architecture clearly addresses a set of critical high-level requirements.

 The business case for the project is strong enough to justify a green light for continued
development.

Elaboration

The primary goal of the Elaboration phase is to establish the ability to build the new system
given the financial constraints, schedule constraints, and other kinds of constraints that the
development project faces.
The tasks that a project team performs during Elaboration include the following:

 Capturing a healthy majority of the remaining functional requirements

 Expanding the candidate architecture into a full architectural baseline, which is an
internal release of the system focused on describing the architecture

 Addressing significant risks on an ongoing basis

 Finalizing the business case for the project and preparing a project plan that contains
sufficient detail to guide the next phase of the project (Construction)

The architectural baseline contains expanded versions of the six models initialized during the
Inception phase.

The major milestone associated with the Elaboration phase is called Life-Cycle
Architecture. The indications that the project has reached this milestone include the
following:

 Most of the functional requirements for the new system have been captured in the use
case model.

 The architectural baseline is a small, skinny system that will serve as a solid foundation
for ongoing development.

 The business case has received a green light, and the project team has an initial project
plan that describes how the Construction phase will proceed.

The use case model is described in the upcoming section "The Five Workflows." Risks are
discussed in the section "Iterations and Increments" later in this chapter.

Construction

The primary goal of the Construction phase is to build a system capable of operating
successfully in beta customer environments.

During Construction, the project team performs tasks that involve building the system
iteratively and incrementally (see "Iterations and Increments" later in this chapter), making
sure that the viability of the system is always evident in executable form.
The major milestone associated with the Construction phase is called Initial Operational
Capability. The project has reached this milestone if a set of beta customers has a more or
less fully operational system in their hands.

Transition

The primary goal of the Transition phase is to roll out the fully functional system to
customers.

During Transition, the project team focuses on correcting defects and modifying the system
to correct previously unidentified problems.

The major milestone associated with the Transition phase is called Product Release.

Personal Software Process

The Personal Software Process (PSP) is designed to assist software developers in using sound
engineering practices. PSP shows software developers how to plan and track their projects,
use a measured and defined process, establish goals, and track their performance against
these goals. PSP assists engineers in managing software quality from the start of a project to
completion, analysing the results of each task and using the results to improve the software
process of the next project.

Additionally, PSP concentrates on the work of individual engineers, extending an
improvement process to practicing engineers. The fundamental principle behind PSP is
that producing quality software systems requires every engineer working on the
system to do high-quality work.

Objectives of Personal Software Process

The aim of PSP is to provide software developers with disciplined methods and
strategies for improving their personal software development processes. PSP assists
software engineers to:

 Improve their planning and estimating skills.


 Make commitments and schedules they can keep and meet.
 Reduce defects in their projects.
 Manage the quality of their projects.
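The planning and tracking activities listed above can be sketched in code. The following is a minimal, illustrative sketch (all class and field names are invented for this example, not part of any official PSP tool) of a personal log that compares estimated against actual effort:

```python
# Minimal sketch of a PSP-style personal log: record estimated vs. actual
# effort and defects per task, then compute data the engineer can use to
# improve the next plan. Names are illustrative, not an official PSP format.

class TaskRecord:
    def __init__(self, name, est_minutes, actual_minutes, defects_found):
        self.name = name
        self.est_minutes = est_minutes
        self.actual_minutes = actual_minutes
        self.defects_found = defects_found

    def estimation_error(self):
        """Relative error of the effort estimate (positive = underestimate)."""
        return (self.actual_minutes - self.est_minutes) / self.est_minutes


def summarize(records):
    """Aggregate the personal data used to improve the next project's plan."""
    total_est = sum(r.est_minutes for r in records)
    total_actual = sum(r.actual_minutes for r in records)
    total_defects = sum(r.defects_found for r in records)
    return {
        "total_estimated": total_est,
        "total_actual": total_actual,
        "overall_error": (total_actual - total_est) / total_est,
        "defects_per_hour": total_defects / (total_actual / 60),
    }
```

A log like this gives the engineer concrete numbers (estimation error, defect density) against which personal goals can be tracked, which is the core of the PSP feedback loop.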

Principles of Personal Software Process

The design of PSP is based on these planning and quality principles:


 Every engineer is different. To be most effective, software engineers
should plan their work and base these plans on their personal data.
 To improve their performance, software engineers should personally use regular and
well-defined processes.
 For software engineers to produce quality software products, they should feel
personally responsible for the quality of the products they are making. Superior
software products are never created by accident but by striving to do quality work.
 It’s cheaper to find and fix defects earlier in the process than later.
 It’s easier to prevent defects than to find and fix them.
 The fastest and cheapest way to do any job is to do it right the first time.

Team Software Process:

The success of an organization that produces software-intensive systems mainly depends on a


well-managed software development process. Implementing disciplined and quality software
methods is often challenging. Organizations always want to know what their software
development teams are doing, but they find themselves struggling with how to do it.

Team Software Process (TSP) comes in handy, offering operational procedures and
strategies that assist engineers and managers in organizing projects effectively and
producing quality software using disciplined software process methods. TSP is used in
combination with the personal software process (PSP) at the individual and team
levels. Organizations that have implemented TSP experience significant improvements
in the overall quality of their software products. They also experience reduced
schedule deviation.

Overview

The primary objective of TSP is creating a team environment that supports disciplined work
while still building and maintaining a self-directed team. TSP guides a team in addressing
essential business needs of schedule management, cycle-time reduction, effective quality
management, and better cost management. It defines a product framework of customizable
software processes and introduces strategies that include training for engineers and managers,
building management sponsorship, automated tool support, mentoring, and coaching.

Team software process can be applied to all aspects of software development, that is,
requirements analysis and definition, design, implementation, testing, and
maintenance.
Additionally, TSP can also be used to support multidisciplinary teams ranging from a team of
two engineers to a team of hundreds of engineers. It can also be used in developing different
software products ranging from embedded real-time control systems to commercial client-
server applications.

What Makes TSP Work?


Typical software projects tend to be late, difficult to track, over budget, and of poor quality.
Software development teams often have unrealistic schedules and deadlines dictated to them.
They’re required to use imposed standards, tools, and processes. They find themselves
taking shortcuts to meet tight schedule pressures. Only a few teams can work
successfully and consistently in such environments. As software systems become more
complex, these problems only get worse. Moreover, teams have to consider customer
desires, technical capability, and business needs.

To balance all these pressures and conflicting forces while handling a software
development project, a team has to be self-directed. A self-directed team should have
these qualities:

 Understands product and business goals


 Produces their own plans for addressing the goals
 Makes their personal commitments
 Directs their own projects
 Consistently uses processes and methods that they select
 Manages quality.

Team software process builds and maintains self-directed teams. A successful self-directed
team requires capable and skilled team members. Their commitment, discipline, and skills
come together to produce high-quality software. Therefore, high-quality software products
are a team effort. TSP creates an environment that supports disciplined and self-directed
teamwork.

UNIT-II

 Software Requirements
Functional Requirements:
These are the requirements that the end user specifically demands as basic facilities that the
system should offer. All these functionalities need to be necessarily incorporated into the
system as a part of the contract. These are represented or stated in the form of input to be
given to the system, the operation performed and the output expected. They are basically the
requirements stated by the user which one can see directly in the final product, unlike the
non-functional requirements.
For example, in a hospital management system, a doctor should be able to retrieve the
information of his patients. Each high-level functional requirement may involve several
interactions or dialogues between the system and the outside world. In order to accurately
describe the functional requirements, all scenarios must be enumerated.
There are many ways of expressing functional requirements e.g., natural language, a
structured or formatted language with no rigorous syntax and formal specification language
with proper syntax.
Non-functional requirements:
These are basically the quality constraints that the system must satisfy according to the
project contract. The priority or extent to which these factors are implemented varies from
one project to other. They are also called non-behavioural requirements.
They basically deal with issues like:
 Portability
 Security
 Maintainability
 Reliability
 Scalability
 Performance
 Reusability
 Flexibility
NFRs are classified into the following types:
 Interface constraints
 Performance constraints: response time, security, storage space, etc.
 Operating constraints
 Life cycle constraints: maintainability, portability, etc.
 Economic constraints
The process of specifying non-functional requirements requires the knowledge of the
functionality of the system, as well as the knowledge of the context within which the system
will operate.
Domain requirements: Domain requirements are the requirements which are characteristic
of a particular category or domain of projects. The basic functions that a system of a specific
domain must necessarily exhibit come under this category. For instance, in an academic
software that maintains records of a school or college, the functionality of being able to
access the list of faculty and list of students of each grade is a domain requirement. These
requirements are therefore identified from that domain model and are not user specific.

Requirements Specification

It’s the process of writing down the user and system requirements into a document. The
requirements should be clear, easy to understand, complete and consistent.

In practice, this is difficult to achieve as stakeholders interpret the requirements in different


ways and there are often inherent conflicts and inconsistencies in the requirements.

As we’ve mentioned before, the processes in requirements engineering are interleaved
and performed iteratively. In the first iteration you specify the user requirements;
then you specify the more detailed system requirements.
User requirements:

The user requirements for a system should describe the functional and non-functional
requirements so that they are understandable by users who don’t have technical knowledge.

You should write user requirements in natural language supplemented by simple tables, forms, and
intuitive diagrams.

The requirement document shouldn’t include details of the system design, and you shouldn’t
use any software jargon or formal notations.

System requirements:

The system requirements, on the other hand, are an expanded version of the user
requirements that are used by software engineers as the starting point for the system design.

They add detail and explain how the user requirements should be provided by the system.
They shouldn’t be concerned with how the system should be implemented or designed.

The system requirements may also be written in natural language but other ways based on
structured forms, or graphical notations are usually used.

Ways of Writing Requirements Specification


As we’ve mentioned, there are different ways to specify the requirements. The two
most common ways are natural language and structured language.

Ways of writing a requirements specification


Natural Language Specification

 It’s a way of writing the requirements in normal plain text; there is no defined format by
default.

 Requirements written in natural language tend to be vague and ambiguous.

Structured Language Specification


 It’s a way of writing the requirements in more formal and structured form.

 It uses standard templates to specify the requirements. The specification can be structured
around the functions or events performed by the system.
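As an illustration, a structured requirement template might be modelled as follows. The field names below are assumptions drawn from common textbook templates, not a fixed standard:

```python
# A hedged sketch of a structured requirement template. The fields
# (identifier, function, inputs, outputs, pre/postconditions) are
# illustrative; real projects define their own template.

from dataclasses import dataclass, field

@dataclass
class StructuredRequirement:
    identifier: str
    function: str
    description: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    precondition: str = ""
    postcondition: str = ""

    def render(self):
        """Render the requirement as a filled-in textual template."""
        return (
            f"{self.identifier}: {self.function}\n"
            f"  Description : {self.description}\n"
            f"  Inputs      : {', '.join(self.inputs) or '-'}\n"
            f"  Outputs     : {', '.join(self.outputs) or '-'}\n"
            f"  Pre         : {self.precondition or '-'}\n"
            f"  Post        : {self.postcondition or '-'}"
        )
```

Filling in a fixed set of fields like this is what makes structured specification less ambiguous than free natural language: every requirement must state its inputs, outputs, and conditions explicitly.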

Software Requirements Document


The software requirements document (also called software requirements specification or
SRS) is an official document of what should be implemented. It’s also used as a contract
between the system buyer and the software developers.

Users of requirements document and how they use it


It should include both user and system requirements. Usually, the user requirements
are defined in an introduction to the system requirements.
In other cases, especially if there is a large number of requirements, the detailed
system requirements may be presented in a separate document.
The requirements document has a diverse set of users, ranging from customers to
system engineers.
The diversity of possible users means that the requirements document has to be a
compromise: it must communicate the requirements to customers, define the
requirements in detail for developers and testers, and include information about
predicted changes that can help system designers avoid restrictive design decisions
and help the system maintenance engineers adapt the system to new requirements.
In agile methods, since the requirements change so rapidly, it’s a waste of time to
deliver a full document at once; instead, requirements are collected incrementally
and written on cards as user stories.
 Requirements Engineering Process
A feasibility study is carried out to select the best system that meets performance
requirements.
The main aim of the feasibility study activity is to determine whether it would be financially
and technically feasible to develop the product. The feasibility study activity involves the
analysis of the problem and collection of all relevant information relating to the product such
as the different data items which would be input to the system, the processing required to be
carried out on these data, the output data required to be produced by the system as well as
various constraints on the behaviour of the system.

Technical Feasibility:
This is concerned with specifying equipment and software that will successfully satisfy the
user requirement. The technical needs of the system may vary considerably, but might
include :
• The facility to produce outputs in a given time.
• Response time under certain conditions.
• Ability to process a certain volume of transaction at a particular speed.
• Facility to communicate data to distant locations.
In examining technical feasibility, the configuration of the system is given more
importance than the actual make of hardware. The configuration should give the
complete picture of the system’s requirements:
• How many workstations are required, and how these units are interconnected so that
they can operate and communicate smoothly.
• What speeds of input and output should be achieved at a particular quality of
printing.
Economic Feasibility:
Economic analysis is the most frequently used technique for evaluating the effectiveness of a
proposed system. More commonly known as Cost / Benefit analysis, the procedure is to
determine the benefits and savings that are expected from a proposed system and compare
them with costs. If benefits outweigh costs, a decision is taken to design and implement the
system. Otherwise, further justification or alternative in the proposed system will have to be
made if it is to have a chance of being approved. This is an ongoing effort that improves in
accuracy at each phase of the system life cycle.

Operational Feasibility:
This is mainly related to human organizational and political aspects. The points to be
considered are:
• What changes will be brought with the system?
• What organizational structures are disturbed?
• What new skills will be required? Do the existing staff members have these skills? If not,
can they be trained in due course of time?
This feasibility study is carried out by a small group of people who are familiar with
information system technique and are skilled in system analysis and design process.
Proposed projects are beneficial only if they can be turned into information systems that will
meet the operating requirements of the organization. This test of feasibility asks if the system
will work when it is developed and installed.

Requirements Elicitation & Analysis

It’s a process of interacting with customers and end-users to find out about the
domain requirements, what services the system should provide, and the other
constraints.

It may also involve different kinds of stakeholders: end-users, managers, system
engineers, test engineers, maintenance engineers, etc.

Requirements elicitation and analysis has four main processes.


We typically start by gathering the requirements; this can be done through a general
discussion or interviews with your stakeholders, and it may also involve some
graphical notation.

Then you organize the related requirements into subcomponents and prioritize them,
and finally, you refine them by removing any ambiguous requirements that may arise
from conflicts.
Here are the four main processes of requirements elicitation and analysis.

The process of requirements elicitation and analysis

It shows that it’s an iterative process with a feedback from each activity to another. The
process cycle starts with requirements discovery and ends with the requirements document.
The cycle ends when the requirements document is complete.
Requirements validation:
Requirements validation is the process of checking that the requirements defined for
development actually define the system that the customer wants. To check issues
related to requirements, we perform requirements validation. We usually use
requirements validation to catch errors at the initial phase of development, as an
error detected later in the development process may cause excessive rework.
In the requirements validation process, we perform different types of checks on the
requirements mentioned in the Software Requirements Specification (SRS); these checks
include:
 Completeness checks
 Consistency checks
 Validity checks
 Realism checks
 Ambiguity checks
 Verifiability
The output of requirements validation is a list of problems and agreed actions for
the detected problems. The list of problems indicates the problems detected during
the process of requirements validation, and the list of agreed actions states the
corrective actions that should be taken to fix the detected problems.
There are several techniques which are used either individually or in conjunction
with other techniques to check the entire system or parts of it:
1. Test case generation:
Requirements mentioned in the SRS document should be testable; the conducted tests
reveal errors present in the requirements. It is generally believed that if a test is
difficult or impossible to design, this usually means that the requirement will be
difficult to implement and should be reconsidered.
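For example, a testable requirement like the earlier hospital one ("a doctor should be able to retrieve the information of his patients") can be turned directly into test cases. The retrieve_patient function and its in-memory store below are hypothetical stand-ins for the real system:

```python
# Sketch: deriving executable test cases from a testable requirement.
# Requirement: a doctor should be able to retrieve a patient's information.
# The data store and function are illustrative stand-ins, not a real API.

import unittest

PATIENTS = {"P001": {"name": "A. Rao", "ward": "3B"}}

def retrieve_patient(patient_id):
    """Return the patient's record, or None if the ID is unknown."""
    return PATIENTS.get(patient_id)

class TestRetrievePatient(unittest.TestCase):
    def test_known_patient_is_returned(self):
        # Normal case derived directly from the requirement.
        self.assertEqual(retrieve_patient("P001")["ward"], "3B")

    def test_unknown_patient_yields_none(self):
        # The requirement is only testable once this edge case is decided.
        self.assertIsNone(retrieve_patient("P999"))
```

Notice that writing the second test forces a decision the prose requirement left open (what happens for an unknown ID); this is exactly how test-case generation exposes weak requirements.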
2. Prototyping:
In this validation technique, a prototype of the system is presented to the end-user
or customer; they experiment with the presented model and check whether it meets
their needs. This type of model is generally used to collect feedback about the
user’s requirements.
3. Requirements reviews:
In this approach, the SRS is carefully reviewed by a group of people, including
people from both the contractor organisation and the client side; the reviewers
systematically analyse the document to check for errors and ambiguity.
4. Automated consistency analysis:
This approach is used for automatic detection of errors, such as nondeterminism,
missing cases, type errors, and circular definitions, in requirements specifications.
First, the requirements are structured in a formal notation; then a CASE tool is used
to check the consistency of the system, a report of all inconsistencies is produced,
and corrective actions are taken.
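One such automated check, detecting circular definitions, can be sketched as a cycle search over a "refers-to" graph of requirements. The graph representation below is an assumption for illustration, not the format any particular CASE tool uses:

```python
# Sketch of one automated consistency check: detecting circular
# definitions. Each requirement maps to the requirements it refers to;
# a cycle in this graph signals a circular definition.

def has_circular_definition(refs):
    """Return True if the 'refers-to' graph contains a cycle (DFS colouring)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in refs}

    def visit(node):
        colour[node] = GRAY                      # node is on the current path
        for nxt in refs.get(node, []):
            if colour.get(nxt, WHITE) == GRAY:
                return True                      # back edge: circular definition
            if colour.get(nxt, WHITE) == WHITE and nxt in refs and visit(nxt):
                return True
        colour[node] = BLACK                     # fully explored, no cycle here
        return False

    return any(colour[n] == WHITE and visit(n) for n in refs)
```

A real tool would report which requirements form the cycle rather than just a boolean, but the underlying graph analysis is the same.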
5. Walk-through:
A walkthrough does not have a formally defined procedure and does not require a
differentiated role assignment. It is useful for:
 Checking early whether the idea is feasible or not.
 Obtaining the opinions and suggestions of other people.
 Checking the approval of others and reaching an agreement.

Introduction to Requirements Management

Requirements management is the process of collecting, analysing, refining, and prioritizing


product requirements and then planning for their delivery. The purpose of requirements
management is to ensure that the organization validates and meets the needs of its customers
and external and internal stakeholders.

Requirements management involves communication between the project team members and
stakeholders, and adjustment to requirements changes throughout the course of the project.
To prevent one class of requirements from overriding another, constant communication
among members of the development team is critical.

Requirements management does not end with product release. From that point on, the data
coming in about the application’s acceptability is gathered and fed into the Investigation
phase of the next generation or release. Thus, the process begins again.

What is requirements management?


A requirement is a defined capability that the results of certain work (in this case,
software development) should meet. Requirements management is a continuous process
throughout the lifecycle of a product, and requirements can be generated by many
stakeholders, including customers, partners, sales, support, management, engineering,
operations, and of course product management. When requirements are properly curated
and managed, there is clear and consistent communication between the product team and
engineering members, and any needed changes are broadly shared with all stakeholders.

UNIT-III
 DESIGN ENGINEERING
The Design Process

The design phase of software development deals with transforming the customer
requirements as described in the SRS documents into a form implementable using a
programming language.
The software design process can be divided into the following three levels or phases
of design:
1. Interface Design
2. Architectural Design
3. Detailed Design

Interface Design:
Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the
inner workings of the system; i.e., during interface design, the internals of the
system are completely ignored and the system is treated as a black box. Attention is
focused on the dialogue between the target system and the users, devices, and other
systems with which it interacts. The design problem statement produced during the
problem analysis step should identify the people, other systems, and devices, which
are collectively called agents.

Interface design should include the following details:


 Precise description of events in the environment, or messages from agents to which the
system must respond.
 Precise description of the events or messages that the system must produce.
 Specification of the data, and the formats of the data, coming into and going out of the
system.
 Specification of the ordering and timing relationships between incoming events or
messages, and outgoing events or outputs.
Architectural Design:
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between them. In
architectural design, the overall structure of the system is chosen, but the internal details of
major components are ignored.
Issues in architectural design include:
 Gross decomposition of the systems into major components.
 Allocation of functional responsibilities to components.
 Component Interfaces
 Component scaling and performance properties, resource consumption properties,
reliability properties, and so forth.
 Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of
the internals of the major components is ignored until the last phase of the design.
Detailed Design:
Detailed design is the specification of the internal elements of all major system
components: their properties, relationships, processing, and often their algorithms
and data structures.
The detailed design may include:
 Decomposition of major system components into program units.
 Allocation of functional responsibilities to units.
 User interfaces
 Unit states and state changes
 Data and control interaction between units
 Data packaging and implementation, including issues of scope and visibility of program
elements
 Algorithms and data structures

Software Design Concepts:

The fundamental software design concepts are:


1. Abstraction:
An abstraction is a powerful design tool which allows a designer to consider a
component at an abstract level without bothering about the internal details of the
implementation. The concept of abstraction can be used in two ways: as a process and
as an entity.
As a process, it defines a mechanism of hiding irrelevant details and representing only
the essential features of an item. As an entity, it defines a model or view of an item.
The two common abstraction mechanisms are Functional Abstraction and Data
Abstraction. A sequence of instructions that performs a specific and limited function
is a Functional Abstraction, while a Data Abstraction is a collection of data that
describes a data object.
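The two mechanisms can be shown in a short sketch: mean() below is a functional abstraction (a named operation whose internals callers never see), and Celsius is a data abstraction (a data object exposed only through its operations). Both names are invented for this example:

```python
# Functional vs. data abstraction in miniature. Callers use mean() and
# Celsius without knowing how either is implemented internally.

def mean(values):
    """Functional abstraction: one named, limited operation."""
    return sum(values) / len(values)

class Celsius:
    """Data abstraction: a temperature, used only through its operations."""
    def __init__(self, degrees):
        self._degrees = degrees          # internal representation, hidden

    def to_fahrenheit(self):
        return self._degrees * 9 / 5 + 32
```

Either implementation could be replaced (for instance, Celsius storing kelvins internally) without changing any code that uses them, which is the whole point of abstraction as a design tool.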
2. Architecture:
The complete structure of the software, which is composed of various components of a
system, the attributes of those components and the relationship amongst them is called
Software Architecture. This software architecture is the structure of program modules
where they interact with each other in a specialized way and enables software engineers
to analyse the software design efficiently.
3. Modularity:
A modular design achieves effective decomposition of the problem that means the
problem has been decomposed into a set of modules. Modularity is the single attribute
of software that allows a program to be easily manageable.
Advantages of modularization:
 Program can be divided based on functional aspects.
 Each module is a well-defined system that can be used with other applications.
 It allows large programs to be written by several or different people.
 It provides a framework for complete testing, more accessible to test.
 Concurrent execution can be made possible.
4. Information Hiding:
The fundamental concept of Information Hiding suggests that modules should be
characterized by the design decisions they hide from all others. The use of
information hiding provides the most significant benefits when modifications are
required during testing and, later, during software maintenance.
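A small sketch of the idea: the Stack class below hides the design decision to store its items in a Python list, so clients depend only on push/pop/size and the representation can change without touching any caller. The class is illustrative:

```python
# Information hiding in miniature: the choice of a Python list as the
# underlying representation is a hidden design decision. Swapping it for
# a linked list later would not affect any client of push/pop/size.

class Stack:
    def __init__(self):
        self._items = []             # hidden design decision

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def size(self):
        return len(self._items)
```

This is why information hiding pays off during testing and maintenance: a change to the hidden representation is localized to one module instead of rippling through the system.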
Different Levels of Software Design:
There are three different levels of software design. They are:
1. Architectural Design:
The architecture of a system can be viewed as the overall structure of the system and
the way in which that structure provides conceptual integrity to the system. The architectural
design identifies the software as a system with many components interacting with each
other. At this level, the designers get the idea of the proposed solution domain.
2. Preliminary or high-level design:
Here the problem is decomposed into a set of modules, the control relationships among
the modules are identified, and the interfaces among the modules are also identified.
The outcome of this stage is called the program architecture. Design representation
techniques used in this stage are the structure chart and UML.
3. Detailed design:
Once the high-level design is complete, detailed design is undertaken. In detailed
design, each module is examined carefully to design the data structure and algorithms.
The outcome of this stage is documented in the form of a module specification document.
The Design Model:
1. Data design elements
 The data design element produces a model of data that represents a high level of
abstraction.
 This model is then progressively refined into an implementation-specific
representation that can be processed by the computer-based system.
 The structure of data is the most important part of the software design.
2. Architectural design elements
 The architectural design elements provide us with an overall view of the system.
 The architectural design element is generally represented as a set of interconnected
subsystems that are derived from analysis packages in the requirement model.
The architecture model is derived from the following sources:
 The information about the application domain to build the software.
 Requirements model elements such as data-flow diagrams or analysis classes, and the
relationships and collaborations between them.
 The available architectural styles and patterns.
3. Interface design elements
 The interface design elements for software represent the information flow into and
out of the system.
 They describe communication between the components defined as part of the architecture.
Following are the important elements of the interface design:
1. The user interface
2. The external interface to the other systems, networks etc.
3. The internal interface between various components.

4. Component-level design elements


 The component-level design for software is similar to a set of detailed
specifications of each room in a house.
 The component-level design completely describes the internal details of each
software component.
 The processing of data structures occurs within a component, and an interface
allows access to all of the component’s operations.
 In the context of object-oriented software engineering, a component is represented
in a UML component diagram.
 The UML diagram is used to represent the processing logic.

5. Deployment-level design elements


 The deployment-level design elements show how the software functionality and
subsystems are allocated within the physical computing environment that supports the
software.
 The following figure shows three computing environments: the personal computer, the
CPI server, and the control panel.
ARCHITECTURAL DESIGN:
Software Architecture
Architecture serves as a blueprint for a system. It provides an abstraction to manage the
system complexity and establish a communication and coordination mechanism among
components.
 It defines a structured solution to meet all the technical and operational
requirements, while optimizing the common quality attributes like performance and
security.
 Further, it involves a set of significant decisions about the organization related to
software development and each of these decisions can have a considerable impact on
quality, maintainability, performance, and the overall success of the final product.
These decisions comprise −
o Selection of structural elements and their interfaces by which the system is
composed.
o Behaviour as specified in collaborations among those elements.
o Composition of these structural and behavioural elements into larger
subsystems.
o Architectural decisions align with business objectives.
o Architectural styles guide the organization.
Architectural Styles
1. Data-centred architecture

 A data store (a file or database) lies at the centre of this architecture.
 The stored data is accessed continuously by the other components, which update,
delete, add, and modify data in the data store.
 Data-centred architecture helps preserve data integrity.
 Data can be passed between clients using the blackboard mechanism.
 The client components execute their processes independently.
2. Data-flow architecture

 This architecture is applied when input data is to be transformed into output data
through a series of computational or manipulative components.
 A pipe-and-filter pattern has a set of components called filters.
 Filters are connected through pipes that transfer data from one component to the
next.
 When the flow of data degenerates into a single line of transforms, it is known as
batch sequential.
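The pipe-and-filter idea can be sketched in a few lines: each filter is a function, and the pipe applies them in order. The particular filters below are illustrative assumptions:

```python
# Minimal pipe-and-filter sketch. The pipeline (the "pipe") passes data
# through a sequence of independent transformations (the "filters").

def pipeline(data, filters):
    """Apply each filter in sequence to the data."""
    for f in filters:
        data = f(data)
    return data

# Two illustrative filters operating on a list of text lines.
def strip_blanks(lines):
    return [ln for ln in lines if ln.strip()]

def to_upper(lines):
    return [ln.upper() for ln in lines]
```

Because each filter knows nothing about its neighbours, filters can be reordered, removed, or reused in other pipelines, which is the main attraction of this style.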
3. Call and return architectures

This architectural style allows the designer to achieve a program structure that is easy to modify.

The following substyles exist in this category:

1. Main program or subprogram architecture

 The program is divided into smaller pieces hierarchically.


 The main program invokes a number of program components in the hierarchy, and those
components are in turn divided into subprograms.
2. Remote procedure call architecture

 The main program or subprogram components are distributed across a network of
multiple computers.
 The main aim is to increase the performance.
4. Object-oriented architectures

 This architecture is the latest version of call-and-return architecture.


 It consists of the bundling of data and methods.
5. Layered architectures

 A number of different layers are defined in the architecture, consisting of outer
and inner layers.
 The components of the outer layer manage the user interface operations.
 Components at the inner layer handle operating system interfacing.
 The inner layers are the application layer, the utility layer, and the core layer.
 In many cases, it is possible that more than one pattern is suitable and the alternate
architectural style can be designed and evaluated.
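The layering idea can be sketched as classes in which each layer calls only the layer directly beneath it. The layer names and operations below are invented for illustration:

```python
# Sketch of a layered architecture: the UI layer calls the utility layer,
# which calls the core layer; no layer reaches past its neighbour.

class CoreLayer:                       # innermost: raw data access
    def fetch(self, key):
        return {"user:1": "alice"}.get(key)

class UtilityLayer:                    # middle: adds formatting on top of core
    def __init__(self):
        self._core = CoreLayer()

    def display_name(self, key):
        value = self._core.fetch(key)
        return value.title() if value else "<unknown>"

class UILayer:                         # outer: user interface operations
    def __init__(self):
        self._util = UtilityLayer()

    def render(self, key):
        return f"User: {self._util.display_name(key)}"
```

The discipline that each layer talks only to the one below it is what makes a layer replaceable: the core layer here could be swapped for a real database without the UI layer changing at all.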
Architectural Design
Requirements of the software should be transformed into an architecture that describes the
software's top-level structure and identifies its components. This is accomplished through
architectural design (also called system design), which acts as a preliminary 'blueprint' from
which software can be developed. IEEE defines architectural design as 'the process of
defining a collection of hardware and software components and their interfaces to establish
the framework for the development of a computer system.' This framework is established by
examining the software requirements document and designing a model for providing
implementation details. These details are used to specify the components of the system along
with their inputs, outputs, functions, and the interaction between them. An architectural
design performs the following functions.
1. It defines an abstraction level at which the designers can specify the functional and
performance behaviour of the system.
2. It acts as a guideline for enhancing the system (whenever required) by describing those
features of the system that can be modified easily without affecting the system integrity.
3. It evaluates all top-level designs.
4. It develops and documents top-level design for the external and internal interfaces.
5. It develops preliminary versions of user documentation.
6. It defines and documents preliminary test requirements and the schedule for software
integration.
The sources of architectural design are listed below.
1. Information regarding the application domain for the software to be developed
2. Data-flow diagrams
3. Availability of architectural patterns and architectural styles
Architectural design is of crucial importance in software engineering during which the
essential requirements like reliability, cost, and performance are dealt with. This task is
cumbersome as the software engineering paradigm is shifting from monolithic, stand-alone,
built-from-scratch systems to componentized, evolvable, standards-based, and product line-
oriented systems. Also, a key challenge for designers is to know precisely how to proceed
from requirements to architectural design. To avoid these problems, designers adopt
strategies such as reusability, componentization, platform-based, standards-based, and so on.
Though the architectural design is the responsibility of developers, some other people like
user representatives, systems engineers, hardware engineers, and operations personnel are
also involved. All these stakeholders must also be consulted while reviewing the architectural
design in order to minimize the risks and errors.
Architectural Design Representation
Architectural design can be represented using the following models.
1. Structural model: Illustrates architecture as an ordered collection of program components
2. Dynamic model: Specifies the behavioural aspect of the software architecture and indicates
how the structure or system configuration changes as the function changes due to change in
the external environment
3. Process model: Focuses on the design of the business or technical process, which must be
implemented in the system
4. Functional model: Represents the functional hierarchy of a system
5. Framework model: Attempts to identify repeatable architectural design patterns encountered
in similar types of application. This leads to an increase in the level of abstraction.
Data-flow Architecture
Data-flow architecture is mainly used in systems that accept some input and transform it
into the desired output by applying a series of transformations. Each component, known as a
filter, transforms the data and sends the transformed data to other filters for further
processing through a connector, known as a pipe. Each filter works as an independent entity;
that is, it is not concerned with which filter is producing or consuming its data. A pipe is
a unidirectional channel which transports the data received on one end to the other end. It
does not change the data in any way; it merely supplies the data to the filter on the receiving
end.

Most of the time, the data-flow architecture degenerates to a batch sequential system. In this
system, a batch of data is accepted as input and then a series of sequential filters are applied to
transform this data. One common example of this architecture is UNIX shell programs. In
these programs, UNIX processes act as filters, and the file system through which UNIX
processes interact acts as pipes. Other well-known examples of this architecture are compilers,
signal processing systems, parallel programming, functional programming, and
distributed systems. Some advantages associated with the data-flow architecture are
listed below.
1. It supports reusability.
2. It is maintainable and modifiable.
3. It supports concurrent execution.
Some disadvantages associated with the data-flow architecture are listed below.
1. It often degenerates to a batch sequential system.
2. It does not provide enough support for applications requiring user interaction.
3. It is difficult to synchronize two different but related streams.
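The pipe-and-filter idea can be sketched in a few lines of Python (a minimal illustration; the filter names and the language choice are assumptions, not from the text):

```python
# A minimal pipe-and-filter sketch: each filter is an independent
# function that transforms a stream of lines and passes it on; the
# "pipe" is plain function composition over the stream.

def strip_blanks(lines):
    # Filter 1: drop empty lines.
    return (line for line in lines if line.strip())

def to_upper(lines):
    # Filter 2: transform each line to upper case.
    return (line.upper() for line in lines)

def pipeline(lines, *filters):
    # The "pipe": the output of one filter feeds the next.
    for f in filters:
        lines = f(lines)
    return list(lines)

result = pipeline(["hello", "", "world"], strip_blanks, to_upper)
print(result)  # ['HELLO', 'WORLD']
```

Chaining generators this way mirrors a UNIX shell pipeline such as `grep -v '^$' | tr a-z A-Z`, where each process is a filter and the shell provides the pipes.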

System Models
Context Model
Context models are used to illustrate the operational context of a system - they show what
lies outside the system boundaries. Social and organizational concerns may affect the
decision on where to position system boundaries. Architectural models show the system and
its relationship with other systems.
System boundaries are established to define what is inside and what is outside the system.
They show other systems that are used or depend on the system being developed. The
position of the system boundary has a profound effect on the system requirements. Defining a
system boundary is a political judgment since there may be pressures to develop system
boundaries that increase/decrease the influence or workload of different parts of an
organization.
Context models simply show the other systems in the environment, not how the system being
developed is used in that environment. Process models reveal how the system being
developed is used in broader business processes. UML activity diagrams may be used to
define business process models.
The example below shows a UML activity diagram describing the process of involuntary
detention and the role of MHC-PMS (mental healthcare patient management system) in it.
Behavioural Models
Behavioural models are models of the dynamic behaviour of the system as it is
executing. They show what happens or what is supposed to happen when a system
responds to a stimulus from its environment. You can think of these stimuli as being of
two types:
1. Data. Some data arrives that has to be processed by the system.
2. Events. Some event happens that triggers system processing. Events may have
associated data but this is not always the case.
Many business systems are data processing systems that are primarily driven by data.
They are controlled by the data input to the system with relatively little external event
processing. Their processing involves a sequence of actions on that data and the
generation of an output. For example, our bookstore system will accept information
about orders made by a customer, calculate the costs of these orders, and using another
system, it will generate an invoice to be sent to that customer.
Data-driven modelling
Data-driven models show the sequence of actions involved in processing input data and
generating the associated output. They are very useful during the analysis stage since they
show end-to-end processing in a system, that is, the entire action
sequence by which input data become output data. In other words, they show the response of
the system to a particular input.
In UML, activity and sequence diagrams can be used to describe such data flows. Note
that these are the same diagram types we used for interaction modelling, but now the
emphasis is on the processing itself, not on the objects that participate in the
processing (interactions). Activity diagrams are therefore better suited to this purpose:
the lifelines of sequence diagrams depict objects and actors, so some
attention must be paid to responsibility allocation when using sequence diagrams.
The basic processes of the online store (from the insertion of a new product,
through browsing and selecting products to buy, to tracking the order's delivery and
providing feedback) are shown in the following activity diagram.
The first action in the system is populating it with some products. This is the
administrator’s job. The next step in the flow is a fork indicating that the following
activities can be executed parallelly. The manager can set discounts for customers and
products in an arbitrary order. The next step joins the two branches of the flow. Based on
the information set up, the administrator can compose and send newsletters. The
customer receives the newsletter sent and visits the site. He/she performs a browsing or
searching action. Then, he/she selects one or more products from the list that can be
placed either onto the bookshelf or into the shopping cart. If items got onto the shelf and
the customer wants to place an order later, the first step is to move selected items into the
shopping cart. In the other case, when items got into the shopping cart and the customer
wants to save them for later, the first step is to move the selected items onto the shelf.
Then, the customer can continue with browsing/searching or can place an order. If the
items got earlier onto the bookshelf or into the shopping cart and the customer does not
want to either place an order or save items, he/she can continue with browsing/searching
products. Steps described previously can be repeated as many times as needed.
If the customer finishes with browsing/searching/selecting and would like to place an
order, items of the shopping cart are used to create the new order. The system calculates
the total price for the order. At this point, the customer can cancel the order process. If
he/she continues with ordering, some pieces of order data (e.g. shipping and billing
address) should be filled and the payment mode must be selected. Then if the customer
does not cancel the process, the system validates the entered information and as a result,
the order is actually created. Customers can check the status of their latest order
(pending order) later. The status can be updated by the administrator. Then, customers
can send a feedback that is processed also by the administrator. Thereafter, the manager
can generate reports about sales data in order to support decision making.
Event-driven modelling
Event-driven modelling shows how a system responds to external and internal events
(stimuli). It is based on the assumption that a system has a finite number of states and
that an event (stimulus) may cause a transition from one state to another.
The UML supports event-based modelling using state machine diagrams
UML 2 State machine diagrams
A state machine diagram models the behaviour of a single object, specifying the
sequence of events that an object goes through during its lifetime in response to events.
It contains the following elements:
 State. A state is denoted by a round-cornered rectangle with the name of the state
written inside it. There are two special states:
o Initial state. The initial state is denoted by a filled black circle and may be
labelled with a name.
o Final state. The final state is denoted by a circle with a dot inside and may
also be labelled with a name.
 Transitions. Transitions from one state to the next are denoted by lines with
arrowheads. A transition may have a trigger, a guard and an effect, as below.
Trigger is the cause of the transition, which could be a signal, an event, a change
in some condition, or the passage of time. Guard is a condition which must be
true in order for the trigger to cause the transition. Effect is an action which will
be invoked directly on the object that owns the state machine as a result of the
transition.
 State action. State actions describe effects associated with a state. A state action
is an activity label/behaviour expression pair. The activity label identifies the
circumstances under which the behaviour will be invoked. There are three
reserved activity labels:
o entry: the behaviour is performed upon entry to the state,
o do: ongoing behaviour, performed as long as the element is in the state,
o exit: a behaviour that is performed upon exit from the state.
 Self-transition. A state can have a transition that returns to itself. This is most
useful when an effect is associated with the transition.
 Entry point. If we do not enter the machine at the normal initial state, we can
have additional entry points.
 Exit point. In a similar manner to entry points, it is possible to have named
alternative exit points.
Besides these main elements, it is important to note that UML state machine diagrams
support the notion of superstates that encapsulate a number of separate states. A
superstate can be used as a single state in a higher-level model and is then expanded to
show more details.
The following state machine diagram shows the internal states and transitions of a
shopping cart.
Figure 3.39. State machine diagram of the class Shopping Cart

When the shopping cart is initiated, it will be in the Empty state. Whenever a product is
added, a transition to the Collecting state is performed. When the shopping cart is in the
Collecting state and a Product deleted stimulus occurs, then, depending on the guard (whether
the number of products in the cart is equal to 1 or greater), a transition to the Empty state may
happen.
One might argue that Empty and Collecting are so similar that it is not worth
separating them. This can be a valid point; however, the reason we separated them is that
converting a bookshelf's contents (such as a Wishlist) into shopping cart contents must
not result in an empty cart, so the inclusion of the entry point led to the
separation.
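The Empty/Collecting behaviour described above can be sketched as a small Python class (the state and event names follow the diagram; everything else is an illustrative assumption):

```python
class ShoppingCart:
    """Minimal state machine: Empty <-> Collecting, driven by
    'product added' / 'product deleted' events."""

    def __init__(self):
        self.state = "Empty"   # initial state
        self.count = 0         # number of products in the cart

    def product_added(self):
        self.count += 1
        self.state = "Collecting"  # transition Empty -> Collecting

    def product_deleted(self):
        if self.count == 0:
            return             # no effect in the Empty state
        self.count -= 1
        # Guard: return to Empty only when the last product is removed.
        if self.count == 0:
            self.state = "Empty"

cart = ShoppingCart()
cart.product_added()
cart.product_added()
cart.product_deleted()
print(cart.state)  # Collecting
cart.product_deleted()
print(cart.state)  # Empty
```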
Data Modelling
Data modelling (also spelled data modeling) is the process of creating a data model for the data to be
stored in a database. This data model is a conceptual representation of data objects, the
associations between different data objects, and the rules. Data modelling helps in the visual
representation of data and enforces business rules, regulatory compliance, and government
policies on the data. Data models ensure consistency in naming conventions, default values,
semantics, and security, while ensuring the quality of the data.

A data model emphasizes what data is needed and how it should be organized rather than
what operations are to be performed on the data. A data model is like an architect's building plan:
it helps to build a conceptual model and sets the relationships between data items.

The two data modelling techniques are

1. Entity Relationship (E-R) Model
2. UML (Unified Modelling Language)
Object Model
The object model visualizes the elements in a software application in terms of objects. In
this chapter, we will look into the basic concepts and terminologies of object–oriented
systems.

Objects and Classes

The concepts of objects and classes are intrinsically linked with each other and form the
foundation of object–oriented paradigm.
Object
An object is a real-world element in an object–oriented environment that may have a
physical or a conceptual existence. Each object has −
 Identity that distinguishes it from other objects in the system.
 State that determines the characteristic properties of an object as well as the values of
the properties that the object holds.
 Behaviour that represents externally visible activities performed by an object in terms
of changes in its state.
Objects can be modelled according to the needs of the application. An object may have a
physical existence, like a customer, a car, etc.; or an intangible conceptual existence, like a
project, a process, etc.
Class
A class represents a collection of objects having same characteristic properties that exhibit
common behaviour. It gives the blueprint or description of the objects that can be created
from it. Creation of an object as a member of a class is called instantiation. Thus, an object is
an instance of a class.
The constituents of a class are −
 A set of attributes for the objects that are to be instantiated from the class. Generally,
different objects of a class have some difference in the values of the attributes.
Attributes are often referred as class data.
 A set of operations that portray the behaviour of the objects of the class. Operations
are also referred as functions or methods.
Example
Let us consider a simple class, Circle, that represents the geometrical figure circle in a two–
dimensional space. The attributes of this class can be identified as follows −

 x–coord, to denote the x–coordinate of the centre
 y–coord, to denote the y–coordinate of the centre
 a, to denote the radius of the circle
Some of its operations can be defined as follows −

 findArea(), method to calculate area
 findCircumference(), method to calculate circumference
 scale(), method to increase or decrease the radius
During instantiation, values are assigned for at least some of the attributes. If we create an
object my_circle, we can assign values like x-coord : 2, y-coord : 3, and a : 4 to depict its
state. Now, if the operation scale() is performed on my_circle with a scaling factor of 2, the
value of the variable a will become 8. This operation brings about a change in the state of
my_circle, i.e., the object has exhibited certain behaviour.
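As a sketch (the text prescribes no language), the Circle class and the scaling scenario can be written in Python:

```python
import math

class Circle:
    """Geometrical circle in 2-D space, as described above."""

    def __init__(self, x_coord, y_coord, a):
        self.x_coord = x_coord  # x-coordinate of the centre
        self.y_coord = y_coord  # y-coordinate of the centre
        self.a = a              # radius

    def find_area(self):
        return math.pi * self.a ** 2

    def find_circumference(self):
        return 2 * math.pi * self.a

    def scale(self, factor):
        # Changes the object's state: the radius is multiplied by factor.
        self.a *= factor

my_circle = Circle(2, 3, 4)  # instantiation: state (2, 3, 4)
my_circle.scale(2)
print(my_circle.a)  # 8
```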

Encapsulation and Data Hiding

Encapsulation
Encapsulation is the process of binding both attributes and methods together within a class.
Through encapsulation, the internal details of a class can be hidden from outside. It permits
the elements of the class to be accessed from outside only through the interface provided by
the class.
Data Hiding
Typically, a class is designed such that its data (attributes) can be accessed only by its class
methods and insulated from direct outside access. This process of insulating an object’s data
is called data hiding or information hiding.
Example
In the class Circle, data hiding can be incorporated by making attributes invisible from
outside the class and adding two more methods to the class for accessing class data, namely

 setValues(), method to assign values to x-coord, y-coord, and a
 getValues(), method to retrieve the values of x-coord, y-coord, and a
Here the private data of the object my_circle cannot be accessed directly by any method that
is not encapsulated within the class Circle. It should instead be accessed through the
methods setValues() and getValues().
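A Python sketch of this data-hiding variant (note that Python enforces privacy only by convention, marked here with a leading underscore; the accessor names follow the text):

```python
class Circle:
    # Attributes are prefixed with an underscore to mark them private
    # by convention; outside code should use the accessor methods.
    def __init__(self):
        self._x_coord = 0
        self._y_coord = 0
        self._a = 0

    def set_values(self, x, y, a):
        # Assigns values to x-coord, y-coord, and a.
        self._x_coord, self._y_coord, self._a = x, y, a

    def get_values(self):
        # Retrieves the values of x-coord, y-coord, and a.
        return (self._x_coord, self._y_coord, self._a)

my_circle = Circle()
my_circle.set_values(2, 3, 4)
print(my_circle.get_values())  # (2, 3, 4)
```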

Message Passing

Any application requires a number of objects interacting in a harmonious manner. Objects in
a system may communicate with each other using message passing. Suppose a system has
two objects: obj1 and obj2. The object obj1 sends a message to object obj2 if obj1 wants
obj2 to execute one of its methods.
The features of message passing are −

 Message passing between two objects is generally unidirectional.
 Message passing enables all interactions between objects.
 Message passing essentially involves invoking class methods.
 Objects in different processes can be involved in message passing.
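In most object-oriented languages, message passing is realized as a method call. A minimal sketch (the Printer/Workstation objects are illustrative assumptions, not from the text):

```python
class Printer:
    def print_document(self, text):
        # The requested operation; returns a status string for simplicity.
        return f"printing {text}"

class Workstation:
    def __init__(self, printer):
        self.printer = printer  # obj1 holds a reference to obj2

    def send_job(self, text):
        # obj1 sends a message to obj2 by invoking one of its methods.
        return self.printer.print_document(text)

ws = Workstation(Printer())
print(ws.send_job("report.txt"))  # printing report.txt
```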

Inheritance

Inheritance is the mechanism that permits new classes to be created out of existing classes
by extending and refining their capabilities. The existing classes are called the base
classes/parent classes/super-classes, and the new classes are called the derived classes/child
classes/subclasses. The subclass can inherit or derive the attributes and methods of the
super-class(es) provided that the super-class allows it. Besides, the subclass may add its
own attributes and methods and may modify any of the super-class methods. Inheritance
defines an “is – a” relationship.
Example
From a class Mammal, a number of classes can be derived such as Human, Cat, Dog, Cow,
etc. Humans, cats, dogs, and cows all have the distinct characteristics of mammals. In
addition, each has its own particular characteristics. It can be said that a cow “is – a”
mammal.
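The Mammal example translates directly into Python (a sketch of single inheritance; the method names are illustrative assumptions):

```python
class Mammal:
    def breathe(self):
        return "breathing air"  # behaviour common to all mammals

class Cow(Mammal):              # Cow "is-a" Mammal
    def moo(self):
        return "moo"            # behaviour specific to cows

daisy = Cow()
print(daisy.breathe())            # breathing air (inherited from Mammal)
print(daisy.moo())                # moo (added by the subclass)
print(isinstance(daisy, Mammal))  # True
```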
Types of Inheritance
 Single Inheritance − A subclass derives from a single super-class.
 Multiple Inheritance − A subclass derives from more than one super-class.
 Multilevel Inheritance − A subclass derives from a super-class which in turn is
derived from another class and so on.
 Hierarchical Inheritance − A class has a number of subclasses each of which may
have subsequent subclasses, continuing for a number of levels, so as to form a tree
structure.
 Hybrid Inheritance − A combination of multiple and multilevel inheritance so as to
form a lattice structure.
The following figure depicts the examples of different types of inheritance.
Polymorphism

The term polymorphism is of Greek origin and means the ability to take multiple forms. In
the object-oriented paradigm, polymorphism implies using operations in different ways,
depending upon the instance they are operating upon. Polymorphism allows objects with
different internal structures to have a common external interface. Polymorphism is
particularly effective while implementing inheritance.
Example
Let us consider two classes, Circle and Square, each with a method findArea(). Though the
name and purpose of the methods in the classes are same, the internal implementation, i.e.,
the procedure of calculating area is different for each class. When an object of class Circle
invokes its findArea() method, the operation finds the area of the circle without any conflict
with the findArea() method of the Square class.
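The Circle/Square example can be sketched in Python; the same message, findArea(), is resolved to a different implementation depending on the receiving object's class (the attribute names are illustrative assumptions):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def find_area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side

    def find_area(self):
        return self.side ** 2

# The same call resolves to a different implementation per class.
for shape in (Circle(1), Square(3)):
    print(shape.find_area())  # area of the circle, then of the square
```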

Generalization and Specialization

Generalization and specialization represent a hierarchy of relationships between classes,
where subclasses inherit from super-classes.
Generalization
In the generalization process, the common characteristics of classes are combined to form a
class in a higher level of hierarchy, i.e., subclasses are combined to form a generalized
super-class. It represents an “is – a – kind – of” relationship. For example, “car is a kind of
land vehicle”, or “ship is a kind of water vehicle”.
Specialization
Specialization is the reverse process of generalization. Here, the distinguishing features of
groups of objects are used to form specialized classes from existing classes. It can be said
that the subclasses are the specialized versions of the super-class.
The following figure shows an example of generalization and specialization.

Links and Association

Link
A link represents a connection through which an object collaborates with other objects.
Rumbaugh has defined it as “a physical or conceptual connection between objects”. Through
a link, one object may invoke the methods or navigate through another object. A link depicts
the relationship between two or more objects.
Association
Association is a group of links having common structure and common behaviour.
Association depicts the relationship between objects of one or more classes. A link can be
defined as an instance of an association.
Degree of an Association
Degree of an association denotes the number of classes involved in a connection. Degree
may be unary, binary, or ternary.
 A unary relationship connects objects of the same class.
 A binary relationship connects objects of two classes.
 A ternary relationship connects objects of three or more classes.
Cardinality Ratios of Associations
Cardinality of a binary association denotes the number of instances participating in an
association. There are three types of cardinality ratios, namely −
 One–to–One − A single object of class A is associated with a single object of class
B.
 One–to–Many − A single object of class A is associated with many objects of class
B.
 Many–to–Many − An object of class A may be associated with many objects of
class B and conversely an object of class B may be associated with many objects of
class A.
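A one-to-many association can be sketched with illustrative Customer and Order classes (an assumption for demonstration; each appended link is an instance of the association):

```python
class Order:
    def __init__(self, order_id):
        self.order_id = order_id
        self.customer = None       # set when the order is linked

class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []           # one-to-many: one customer, many orders

    def place_order(self, order):
        self.orders.append(order)  # create a link (instance of the association)
        order.customer = self      # links are navigable in both directions here

alice = Customer("Alice")
alice.place_order(Order(1))
alice.place_order(Order(2))
print(len(alice.orders))              # 2
print(alice.orders[0].customer.name)  # Alice
```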

Aggregation or Composition

Aggregation or composition is a relationship among classes by which a class can be made up
of any combination of objects of other classes. It allows objects to be placed directly within
the body of other classes. Aggregation is referred to as a “part–of” or “has–a” relationship,
with the ability to navigate from the whole to its parts. An aggregate object is an object that
is composed of one or more other objects.
Example
In the relationship, “a car has–a motor”, car is the whole object or the aggregate, and the
motor is a “part–of” the car. Aggregation may denote −
 Physical containment − Example, a computer is composed of monitor, CPU, mouse,
keyboard, and so on.
 Conceptual containment − Example, shareholder has–a share.
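The “car has–a motor” aggregation can be sketched as follows (the horsepower attribute is an illustrative assumption):

```python
class Motor:
    def __init__(self, horsepower):
        self.horsepower = horsepower

class Car:
    # Aggregation: a Car "has-a" Motor; the part object is placed
    # directly within the body of the whole.
    def __init__(self, motor):
        self.motor = motor

    def power(self):
        # Navigation from the whole to its part.
        return self.motor.horsepower

car = Car(Motor(120))
print(car.power())  # 120
```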

Benefits of Object Model

Now that we have gone through the core concepts pertaining to object orientation, it would
be worthwhile to note the advantages that this model has to offer.
The benefits of using the object model are −
 It helps in faster development of software.
 It is easy to maintain. If a module develops an error, a programmer can
fix that particular module while the other parts of the software are still up and
running.
 It supports relatively hassle-free upgrades.
 It enables reuse of objects, designs, and functions.
 It reduces development risks, particularly in integration of complex systems.
Structural Models
Structural models of software display the organization of a system in terms of the
components that make up that system and their relationships. Structural models may
be static models, which show the structure of the system design, or dynamic models, which
show the organization of the system when it is executing. You create structural models of a
system when you are discussing and designing the system architecture.
UML class diagrams are used when developing an object-oriented system model to show the
classes in a system and the associations between these classes. An object class can be thought
of as a general definition of one kind of system object. An association is a link between
classes that indicates that there is some relationship between these classes. When you are
developing models during the early stages of the software engineering process, objects
represent something in the real world, such as a patient, a prescription, doctor, etc.

Generalization is an everyday technique that we use to manage complexity. In modelling
systems, it is often useful to examine the classes in a system to see if there is scope for
generalization. In object-oriented languages, such as Java, generalization is implemented
using the class inheritance mechanisms built into the language. In a generalization, the
attributes and operations associated with higher-level classes are also associated with the
lower-level classes. The lower-level classes, or subclasses, inherit the attributes and
operations from their super-classes. These lower-level classes then add more specific
attributes and operations.
An aggregation model shows how classes that are collections are composed of other classes.
Aggregation models are similar to the part-of relationship in semantic data models.

Object-Oriented Design

In the object-oriented design method, the system is viewed as a collection of objects (i.e.,
entities). The state is distributed among the objects, and each object handles its own state data.
For example, in a Library Automation Software, each library representative may be a separate
object with its own data and functions to operate on these data. The tasks defined for one object
cannot refer to or change the data of other objects. Objects have their internal data which
represent their state. Similar objects form a class; in other words, each object is a member of
some class. Classes may inherit features from the superclass.

The different terms related to object design are:
1. Objects: All entities involved in the solution design are known as objects. For
example, person, banks, company, and users are considered as objects. Every entity
has some attributes associated with it and has some methods to perform on the
attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the
identity of the target object, the name of the requested operation, and any other information
needed to perform the function. Messages are often implemented as procedure or
function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data
and operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods
from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where
the lower or sub-classes can import, implement, and re-use allowed variables and
functions from their immediate superclasses. This property of OOD is called
inheritance. This makes it easier to define a specific class and to create generalized
classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism where methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, which allows a single interface to perform functions for different
types. Depending upon how the service is invoked, the respective portion of the code
gets executed.

Software Evolution
Software Evolution is a term which refers to the process of developing software initially,
then timely updating it for various reasons, i.e., to add new features or to remove obsolete
functionalities etc. The evolution process includes fundamental activities of change analysis,
release planning, system implementation and releasing a system to customers.
The cost and impact of these changes are assessed to see how much of the system is affected by the
change and how much it might cost to implement the change. If the proposed changes are
accepted, a new release of the software system is planned. During release planning, all the
proposed changes (fault repair, adaptation, and new functionality) are considered.
A decision is then made on which changes to implement in the next version of the system. The
process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented and tested.

Laws used for Software Evolution:

1. Law of continuing change:
This law states that any software system that represents some real-world activity
undergoes continuous change or becomes progressively less useful in that environment.
2. Law of increasing complexity:
As an evolving program changes, its structure becomes more complex unless effective
efforts are made to avoid this phenomenon.
3. Law of conservation of organizational stability:
Over the lifetime of a program, the rate of development of that program is
approximately constant and independent of the resources devoted to system
development.
4. Law of conservation of familiarity:
This law states that during the active lifetime of the program, the changes made in
successive releases are almost constant.
