
UNIT 1

1 Software Engineering

The term software engineering is the product of two words: software and engineering. Software is a
collection of integrated programs, consisting of carefully organized instructions and code written by
developers in any of various programming languages, together with related documentation such as
requirements, design models, and user manuals. Engineering is the application of scientific and
practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.

Software engineering is an engineering branch concerned with the development of software products
using well-defined scientific principles, techniques, and procedures. The result of software engineering
is an effective and reliable software product.

1.1 Need of Software Engineering

The need for software engineering arises because of the high rate of change in user requirements and
in the environment in which software operates.
o Huge programming: It is easier to build a wall than a house or a building; likewise, as the size
of software becomes large, engineering has to step in to give it a scientific process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it
would be easier to re-create new software than to scale an existing one.
o Cost: The hardware industry has shown its skills, and mass manufacturing has brought down
the cost of computer and electronic hardware, but the cost of software remains high if the
proper process is not followed.
o Dynamic nature: The continually growing and adapting nature of software depends heavily
on the environment in which the user works. If the nature of the software keeps changing,
new upgrades need to be made to the existing one.
o Quality management: A better process of software development provides a better-quality
software product.

1.2 Importance of Software Engineering

 Reduces complexity: Big software is always complicated and challenging to develop. Software
engineering reduces the complexity of a project by dividing big problems into many small
issues and then solving each small issue one by one, independently of the others.
 Minimizes software cost: Software requires a lot of hard work, and software engineers are
highly paid experts. A lot of manpower is needed to develop software with a large amount of
code. In software engineering, however, programmers plan everything and cut out whatever is
not needed, so the cost of software production becomes lower than for software built without
software engineering methods.
 Decreases time: Anything not built according to a plan wastes time. When building large
software, you may need to run many pieces of code to arrive at the definitive running code;
this is a very time-consuming procedure, and if it is not well managed it can take a lot of time.
Building your software according to software engineering methods saves a great deal of time.
 Handles big projects: Big projects are not done in a couple of days; they need lots of patience,
planning, and management. Investing six or seven months of a company's effort requires a
great deal of planning, direction, testing, and maintenance. No one can accept a project
consuming months of a company's resources while still being in its first stage; because the
company has committed many resources to the plan, it must be completed. So to handle a big
project without problems, the company has to use a software engineering method.
 Reliable software: Software should be reliable, meaning that once delivered, it should work
for at least its stated lifetime or subscription period, and if any bugs appear, the company is
responsible for fixing them. Because software engineering includes testing and maintenance,
there is no worry about its reliability.
 Effectiveness: Effectiveness comes from building things according to standards. Meeting
software standards is a major target of companies, and software engineering helps software
become more effective in practice.

2 Software Crisis

Software crisis is a term used in computer science for the difficulty of writing useful and efficient
computer programs in the required time. The software crisis arose because the same workforce, the
same methods, and the same tools continued to be used even as software demand, software complexity,
and software challenges increased rapidly. With the increase in the complexity of software, many
problems arose because existing methods were insufficient: software budget problems, software
efficiency problems, software quality problems, software management and delivery problems, etc.
This condition is called a software crisis.

Causes of Software Crisis:

 The cost of owning and maintaining software was as expensive as developing the software.
 Projects were running over time.
 Software was very inefficient.
 The quality of the software was low.
 Software often did not meet user requirements.
 The average software project overshot its schedule by half.
 Software was often never delivered.
 Non-optimal resource utilization.
 Software was difficult to alter, debug, and enhance.
 Software complexity made change harder.
 Poor project management.
 Lack of adequate training in software engineering.
 Less-skilled project members.
 Low productivity improvements.

Solution of Software Crisis: There is no single solution to the crisis. One possible solution is software
engineering, because software engineering is a systematic, disciplined, and quantifiable approach. To
prevent a software crisis, there are some guidelines:
 Keep software projects within budget.
 The quality of the software must be high.
 Less time should be needed for a software project.
 Experienced and skilled people should work on the software project.
 Software must be delivered.
 Software must meet user requirements.

3 Software Processes

Software is the set of instructions, in the form of programs, that governs the computer system and
processes the hardware components. The set of activities used to produce a software product is called
a software process.

There are four basic key process activities:

 Software Specification – In this activity, a detailed description of the software system to be
developed, with its functional and non-functional requirements, is produced.
 Software Development – In this activity, designing, programming, documenting, testing, and
bug fixing are done.
 Software Validation – In this activity, the software product is evaluated to ensure that it meets
the business requirements as well as the end user's needs.
 Software Evolution – This is the process of developing software initially and then updating it
over time for various reasons.

3.1 Software Process Model

A software process model is an abstraction of the actual process being described. It can also be defined
as a simplified representation of a software process. Each model represents the process from a specific
perspective. The basic software process models on which different types of software process models
can be built are:

 A workflow Model – It is the sequential series of tasks and decisions that make up a business
process.
 The Waterfall Model – It is a sequential design process in which progress is seen as flowing
steadily downwards. Phases in waterfall model:
i. Requirements Specification
ii. Software Design
iii. Implementation
iv. Testing
 Dataflow Model – It is a diagrammatic representation of the flow and exchange of information
within a system.
 Evolutionary Development Model – Following activities are considered in this method:
i. Specification
ii. Development
iii. Validation
 Role / Action Model – It represents the roles of the people involved in the software process and
the activities they perform.

4 Software life cycle models


A software life cycle model (also termed a process model) is a pictorial and diagrammatic representation
of the software life cycle. A life cycle model represents all the activities required to make a software
product transit through its life cycle stages. It also captures the structure in which these activities are to
be undertaken.

In other words, a life cycle model maps the various activities performed on a software product from its
inception to its retirement. Different life cycle models may map the necessary development activities to
phases in different ways. Thus, no matter which life cycle model is followed, the essential activities are
contained in all life cycle models, though they may be carried out in distinct orders in different models.
During any life cycle stage, more than one activity may also be carried out.

The SDLC cycle represents the process of developing software. The stages of the SDLC are as follows:

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage of the SDLC. The senior members
of the team perform it with inputs from all the stakeholders and domain experts (SMEs) in the industry.
Planning for the quality assurance requirements and identification of the risks associated with the
project is also done at this stage. The business analyst and project organizer set up a meeting with the
client to gather all the data: what the customer wants to build, who the end user will be, and what the
objective of the product is. Before creating a product, a core understanding of the product is essential.

For example, a client wants an application that handles money transactions. Here the requirements have
to be precise: what kinds of operations will be performed, how they will be performed, in which
currency, and so on.

Once requirement gathering is done, an analysis is performed to audit the feasibility of developing the
product. In case of any ambiguity, a discussion is set up for further clarification. Once the requirements
are understood, the SRS (Software Requirement Specification) document is created. The developers
should follow this document thoroughly, and it should also be reviewed by the customer for future
reference.

Stage 2: Defining Requirements

Once the requirement analysis is done, the next stage is to clearly define and document the software
requirements and get them accepted by the project stakeholders. This is accomplished through the SRS
(Software Requirement Specification) document, which contains all the product requirements to be
designed and developed during the project life cycle.

Stage 3: Designing the Software

The next phase is to bring together all the knowledge of requirements and analysis and design the
software project. This phase builds on the outputs of the previous two, namely the inputs from the
customer and the requirement gathering.

Stage 4: Developing the Project

In this phase of the SDLC, the actual development begins and the product is built. The design is
implemented by writing code. Developers have to follow the coding guidelines defined by their
management, and programming tools such as compilers, interpreters, and debuggers are used to
develop and implement the code.

Stage 5: Testing

After the code is generated, it is tested against the requirements to make sure that the product solves
the needs addressed and gathered during the requirements stage. During this stage, unit testing,
integration testing, system testing, and acceptance testing are done.

Stage 6: Deployment

Once the software is certified and no bugs or errors are reported, it is deployed. Based on the
assessment, the software may be released as it is or with suggested enhancements in the targeted
segment. After the software is deployed, its maintenance begins.

Stage 7: Maintenance

Once the client starts using the developed system, real issues come up and need to be solved from
time to time. This procedure, in which care is taken of the developed product, is known as maintenance.

4.1 Waterfall Model

Winston Royce introduced the Waterfall Model in 1970. This model has five phases: requirements
analysis and specification; design; implementation and unit testing; integration and system testing; and
operation and maintenance. The phases always follow in this order and do not overlap. The developer
must complete every phase before the next phase begins. The model is named the "Waterfall Model"
because its diagrammatic representation resembles a cascade of waterfalls.

Step 1. Requirements analysis and specification phase: The aim of this phase is to understand the
exact requirements of the customer and to document them properly. The customer and the software
developer work together to document all the functional, performance, and interfacing requirements of
the software. It describes the "what" of the system to be produced, not the "how". In this phase, a large
document called the Software Requirement Specification (SRS) is created, containing a detailed
description of what the system will do, in plain language.

Step 2. Design Phase: This phase aims to transform the requirements gathered in the SRS into a suitable
form which permits further coding in a programming language. It defines the overall software
architecture together with high level and detailed design. All this work is documented as a Software
Design Document (SDD).

Step 3. Implementation and unit testing: During this phase, design is implemented. If the SDD is
complete, the implementation or coding phase proceeds smoothly, because all the information needed
by software developers is contained in the SDD.

During testing, the code is thoroughly examined and modified. Small modules are tested in isolation
initially. After that these modules are tested by writing some overhead code to check the interaction
between these modules and the flow of intermediate output.

Step 4. Integration and System Testing: This phase is highly crucial as the quality of the end product
is determined by the effectiveness of the testing carried out. The better output will lead to satisfied
customers, lower maintenance costs, and accurate results. Unit testing determines the efficiency of
individual modules. However, in this phase, the modules are tested for their interactions with each other
and with the system.

Step 5. Operation and maintenance phase: Maintenance is the task performed by every user once the
software has been delivered to the customer, installed, and operational.

Some circumstances where the use of the Waterfall model is most suitable are:
o When the requirements are constant and do not change regularly.
o The project is short.
o The environment is stable.
o The tools and technology used are consistent and not changing.
o Resources are well prepared and available for use.

Advantages of Waterfall model


o This model is simple to implement, and the number of resources required for it is minimal.
o The requirements are simple and explicitly declared; they remain unchanged during the entire
project development.
o The start and end points of each phase are fixed, which makes it easy to track progress.
o The release date of the complete product, as well as its final cost, can be determined before
development.
o It gives easy control and clarity to the customer due to a strict reporting system.

Disadvantages of Waterfall model


o In this model, the risk factor is higher, so it is not suitable for large and complex projects.
o This model cannot accommodate changes in requirements during development.
o It becomes tough to go back to a previous phase. For example, if the application has moved to
the coding phase and there is a change in requirements, it becomes tough to go back and change it.
o Since testing is done at a later stage, challenges and risks cannot be identified in earlier phases,
so a risk mitigation strategy is difficult to prepare.

4.2 Prototype Model

The prototype model requires that, before development of the actual software, a working prototype of
the system be built. A prototype is a toy implementation of the system. A prototype usually turns out to
be a very crude version of the actual system, possibly exhibiting limited functional capabilities, low
reliability, and inefficient performance compared to the actual software. In many instances, the client
has only a general view of what is expected from the software product. In such a scenario, where detailed
information regarding the input to the system, the processing needs, and the output requirements is
absent, the prototyping model may be employed.
Step 1: Requirements gathering and analysis- The prototyping model starts with requirement analysis.
In this phase, the requirements of the system are defined in detail, and the users of the system are
interviewed to find out what they expect from it.

Step 2: Quick design- The second phase is a preliminary design or a quick design. In this stage, a simple
design of the system is created. However, it is not a complete design. It gives a brief idea of the system
to the user. The quick design helps in developing the prototype.

Step 3: Build a Prototype- In this phase, an actual prototype is designed based on the information
gathered from quick design. It is a small working model of the required system.

Step 4: Initial user evaluation- In this stage, the proposed system is presented to the client for an initial
evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and
suggestions are collected from the customer and provided to the developer.

Step 5: Refining prototype- If the user is not happy with the current prototype, the prototype is refined
according to the user's feedback and suggestions.

Step 6: Acceptance by customer- Once the user is satisfied with the refined prototype, development
of the final system starts, based on the approved final prototype.

The remaining steps are the same as in the waterfall model.

Advantage of Prototype Model


 Reduces the risk of incorrect user requirements.
 Good where requirements are changing or uncommitted.
 Regular visible progress aids management.
 Supports early product marketing.
 Reduces maintenance cost.
 Errors can be detected much earlier, as the system is built side by side.

Disadvantage of Prototype Model


 An unstable or badly implemented prototype often becomes the final product.
 Requires extensive customer collaboration:
o Costs the customer money.
o Needs a committed customer.
o Difficult to finish if the customer withdraws.
o May be too customer-specific, with no broad market.
 Difficult to know how long the project will last.
 Easy to fall back into code-and-fix without proper requirement analysis, design, customer
evaluation, and feedback.
 Prototyping tools are expensive.
 Special tools and techniques are required to build a prototype.
 It is a time-consuming process.

4.3 Evolutionary model

The evolutionary model is a combination of the iterative and incremental models of the software
development life cycle. Instead of delivering the system in a single big-bang release, it is delivered
incrementally over time. Some initial requirements gathering and architecture envisioning need to be
done. The model is well suited to software products whose feature sets are redefined during development
because of user feedback and other factors. The evolutionary development model divides the
development cycle into smaller, incremental waterfall models, in which users get access to the product
at the end of each cycle. The users provide feedback on the product for the planning stage of the next
cycle, and the development team responds, often by changing the product, plan, or process. Therefore,
the software product evolves with time. Models that deliver in one piece have the disadvantage that the
duration from the start of the project to the delivery of a solution is very long. The evolutionary model
solves this problem with a different approach.

The evolutionary model suggests breaking the work down into smaller chunks, prioritizing them, and
then delivering those chunks to the customer one by one. The number of chunks is large and equals the
number of deliveries made to the customer. The main advantage is that the customer's confidence
increases, as he constantly gets quantifiable goods or services from the beginning of the project with
which to verify and validate his requirements. The model also allows for changing requirements, as all
work is broken down into maintainable chunks.

Application of Evolutionary Model:


 It is used in large projects where you can easily find modules for incremental implementation.
The evolutionary model is commonly used when the customer wants to start using the core
features instead of waiting for the full software.
 The evolutionary model is also used in object-oriented software development, because the system
can be easily partitioned into units in terms of objects.

Necessary conditions for implementing this model:


 Customer needs are clear and have been explained in depth to the developer team.
 There might be small changes required in separate parts, but no major changes.
 As it requires time, there must be some slack in the market deadline.
 Risk is high, with continuous targets to achieve and report to the customer repeatedly.
 It is used when the technology being worked on is new and requires time to learn.

Advantages:
 In the evolutionary model, a user gets a chance to experiment with a partially developed system.
 It reduces errors because the core modules get tested thoroughly.

Disadvantages:
 Sometimes it is hard to divide the problem into several versions that would be acceptable to the
customer and that can be incrementally implemented and delivered.

4.4 Spiral models

The spiral model, initially proposed by Boehm, is an evolutionary software process model that couples
the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential
model. It provides the potential for rapid development of new versions of the software. Using the
spiral model, the software is developed in a series of incremental releases. During the early iterations,
a release may be a paper model or prototype. During later iterations, increasingly complete versions
of the engineered system are produced.
Each cycle in the spiral is divided into four parts:

Objective setting: Each cycle in the spiral starts with the identification of objectives for that cycle, the
various alternatives possible for achieving those objectives, and the constraints that exist.

Risk assessment and reduction: The next phase in the cycle is to evaluate these various alternatives
against the objectives and constraints. The focus of evaluation in this phase is on the project risks.

Development and validation: The next phase is to develop strategies that resolve the uncertainties and
risks. This may involve activities such as benchmarking, simulation, and prototyping.

Planning: Finally, the next step is planned. The project is reviewed, and a decision is made whether to
continue with a further cycle of the spiral. If it is decided to continue, plans are drawn up for the next
phase of the project.

The development phase depends on the remaining risks. For example, if performance or user-interface
risks are considered more critical than program development risks, the next phase may be an
evolutionary development that involves building a more detailed prototype to address those risks.

The risk-driven nature of the spiral model allows it to accommodate any mixture of specification-
oriented, prototype-oriented, simulation-oriented, or other approaches. An essential element of the
model is that each cycle of the spiral is completed by a review covering all the products developed
during that cycle, including plans for the next cycle. The spiral model works for development as well
as enhancement projects.

When to use Spiral Model?


o When frequent releases are required.
o When the project is large.
o When requirements are unclear and complex.
o When changes may be required at any time.
o For large, high-budget projects.

Advantages
o High amount of risk analysis
o Useful for large and mission-critical projects.

Disadvantages
o Can be a costly model to use.
o Risk analysis requires highly specific expertise.
o Doesn't work well for smaller projects.

5 Overview of Quality Standards

Quality standards are defined as documents that provide requirements, specifications, guidelines, or
characteristics that can be used consistently to ensure that materials, products, processes, and services
are fit for their purpose.

Standards provide organizations with the shared vision, understanding, procedures, and vocabulary
needed to meet the expectations of their stakeholders. Because standards present precise descriptions
and terminology, they offer an objective and authoritative basis for organizations and consumers around
the world to communicate and conduct business.

WHO USES QUALITY STANDARDS

Organizations turn to standards for guidelines, definitions, and procedures that help them achieve
objectives such as:

 Satisfying their customers’ quality requirements


 Ensuring their products and services are safe
 Complying with regulations
 Meeting environmental objectives
 Protecting products against climatic or other adverse conditions
 Ensuring that internal processes are defined and controlled

5.1 ISO 9001

ISO 9001 is the international standard for creating a Quality Management System (QMS), published
by ISO (the International Organization for Standardization). The standard was most recently updated in
2015, and it is referred to as ISO 9001:2015. In order to be released and updated, ISO 9001 had to be
agreed upon by a majority of member countries, making it an internationally recognized standard
accepted in most countries worldwide.
As stated above, ISO 9001:2015 is an internationally recognized standard for creating, implementing,
and maintaining a Quality Management System for a company. It is intended to be used by organizations
of any size or industry, and it can be used by any company. As an international standard, it is recognized
as the basis for any company to create a system to ensure customer satisfaction and improvement and,
as such, many corporations require this certification from their suppliers.
ISO 9001 certification gives your customers reassurance that you have established a Quality
Management System based on the seven quality management principles of ISO 9001.

5.2 SEI-CMM

CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
 It is not a software process model. It is a framework used to analyse the approach and
techniques followed by an organization to develop software products.
 It also provides guidelines to further enhance the maturity of the process used to develop those
software products.
 It is based on in-depth feedback and the development practices adopted by the most successful
organizations worldwide.
 This model describes a strategy for software process improvement that should be followed by
moving through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).
Each of these KPA’s defines the basic requirements that should be met by a software process in order
to satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents,
data, reports, etc. are produced, milestones are established, quality is ensured and change is properly
managed.
The 5 levels of CMM are as follows:

Level-1: Initial –
 No KPA’s defined.
 Processes followed are ad hoc and immature and are not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.

Level-2: Repeatable –
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for the project.
It presents a detailed plan to be followed systematically for the successful completion of good
quality software.
 Configuration Management- The focus is on maintaining the performance of the software
product, including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and feedback
which result in some changes in the requirement set. It also consists of accommodation of those
modified requirements.
 Subcontract Management- It focuses on the effective management of qualified software
contractors i.e. it manages the parts of the software which are developed by third parties.
 Software Quality Assurance- It guarantees a good quality software product by following certain
rules and quality standard guidelines while developing.

Level-3: Defined –
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and management
processes.
 Peer Reviews- In this method, defects are removed by using a number of review methods like
walkthroughs, inspections, buddy checks, etc.
 Intergroup Coordination- It consists of planned interactions between different development
teams to ensure efficient and proper fulfillment of customer needs.
 Organization Process Definition- Its key focus is on the development and maintenance of the
standard development processes.
 Organization Process Focus- It includes activities and practices that should be followed to
improve the process capabilities of an organization.
 Training Programs- It focuses on the enhancement of knowledge and skills of the team members
including the developers and ensuring an increase in work efficiency.

Level-4: Managed –
 At this stage, quantitative quality goals are set for the organization for software products as well
as software processes.
 The measurements made help the organization to predict the product and process quality within
some limits defined quantitatively.
 Software Quality Management- It includes the establishment of plans and strategies to develop
quantitative analysis and understanding of the product’s quality.
 Quantitative Management- It focuses on controlling the project performance in a quantitative
manner.

Level-5: Optimizing –
 This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
 Use of new tools, techniques, and evaluation of software processes is done to prevent recurrence
of known defects.
 Process Change Management- Its focus is on the continuous improvement of the organization’s
software processes to improve productivity, quality, and cycle time for the software product.
 Technology Change Management- It consists of the identification and use of new technologies
to improve product quality and decrease product development time.
 Defect Prevention- It focuses on the identification of causes of defects and prevents them from
recurring in future projects by improving project-defined processes.

6 Software Metrics

A software metric is a measure of software characteristics which are measurable or countable. Software
metrics are valuable for many reasons, including measuring software performance, planning work items,
measuring productivity, and many other uses.

Within the software development process, there are many metrics that are all connected. Software
metrics relate to the four functions of management: planning, organization, control, and improvement.

Advantage of Software Metrics

 Comparative study of various design methodologies for software systems.
 Analysis, comparison, and critical study of different programming languages with respect to
their characteristics.
 Comparing and evaluating the capabilities and productivity of the people involved in software
development.
 Preparation of software quality specifications.
 Verification of the compliance of software systems with requirements and specifications.
 Making inferences about the effort to be put into the design and development of software
systems.
 Getting an idea of the complexity of the code.
 Deciding whether further division of a complex module is needed.
 Guiding resource managers toward proper resource utilization.
 Comparing and making design trade-offs between software development and maintenance
costs.
 Providing feedback to software managers about progress and quality during the various phases
of the software development life cycle.
 Allocating testing resources for testing the code.

Disadvantage of Software Metrics

 The application of software metrics is not always easy, and in some cases it is difficult and
costly.
 The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.
 They are useful for managing software products but not for evaluating the performance of the
technical staff.
 The definition and derivation of software metrics are usually based on assumptions that are not
standardized and may depend on the tools available and the working environment.
 Most predictive models rely on estimates of certain variables which are often not known
precisely.
Software metrics can be classified as follows:

 Size Metrics: LOC, Token Count, Function Count
 Design Metrics
 Data Structure Metrics
 Information Flow Metrics

6.1 Size Metric

Size metrics are derived by normalizing quality and productivity measures against the size of the
software that has been produced. The organization builds a simple record of size measures for its
software projects, based on past experience. Size is a direct measure of software.

This is one of the simplest and earliest metrics used to measure the size of a computer program.
Size-oriented metrics are also used for measuring and comparing the productivity of programmers.
The size measurement is based on counting lines of code, where a line of code is defined as one line
of text in a source file.

When counting lines of code, the simplest standard is:
 Don't count blank lines
 Don't count comments
 Count everything else

Note that the size-oriented measure is not a universally accepted method.

A simple set of size measures that can be developed is given below:

1. Size = Kilo Lines of Code (KLOC)
2. Effort = Person-months
3. Productivity = KLOC / person-month
4. Quality = Number of faults / KLOC
5. Cost = $ / KLOC
6. Documentation = Pages of documentation / KLOC
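As a minimal sketch of how these size-oriented measures are computed in practice, the following C++
fragment derives them from raw project counts. All the values are hypothetical, chosen only for
illustration:

#include <iostream>
using namespace std;

int main() {
    // Hypothetical raw project data for illustration only
    double kloc = 12.1;          // size in thousands of lines of code
    double personMonths = 24.0;  // total effort
    double faults = 134.0;       // number of faults found
    double cost = 168000.0;      // total project cost in dollars
    double docPages = 365.0;     // pages of documentation

    // The size-oriented measures listed above
    cout << "Productivity  = " << kloc / personMonths << " KLOC/person-month\n";
    cout << "Quality       = " << faults / kloc << " faults/KLOC\n";
    cout << "Cost          = $" << cost / kloc << " per KLOC\n";
    cout << "Documentation = " << docPages / kloc << " pages/KLOC\n";
    return 0;
}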
Advantages:
 Using these metrics, it is very simple to measure size.
 It is an artifact of software development that is easily counted.
 LOC is used as a key input by many existing estimation methods.
 A large body of literature and data based on LOC already exists.

Disadvantages:
 This measure is dependent upon programming language.
 This method is well designed upon programming language.
 It does not accommodate non-procedural languages.
 Sometimes, it is very difficult to estimate LOC in early stage of development.
 Though it is simple to measure but it is very hard to understand it for users.
 It cannot measure size of specification as it is defined on code.

6.1.1 LOC

A Line of Code (LOC) is any line of program text that is not a comment or a blank line (including
header lines), regardless of the number of statements or fragments of statements on the line. LOC
includes all lines containing variable declarations and executable and non-executable statements.
Because LOC counts only the volume of code, it can only be used to compare or estimate projects that
use the same language and are coded using the same coding standards.

Features:
 Variations such as "source lines of code" (SLOC) are used to describe the size of a codebase.
 LOC figures are frequently cited in arguments about productivity and project size.
 They are used in assessing a project's performance or efficiency.

Advantages:
 It is the most used metric in cost estimation.
 Its alternatives have many problems compared to this metric.
 It makes estimating effort very easy.

Disadvantages:
 It is very difficult to estimate the LOC of the final program from the problem specification.
 It correlates poorly with the quality and efficiency of code.
 It does not consider complexity.

Research has shown a rough correlation between LOC and the overall cost and duration of developing
a project/product, and between LOC and the number of defects. This means that the lower your LOC
measurement is, the better off you probably are in the development of your product.

Let's take an example and check how lines of code are counted in the simple sorting program given
below:
void selSort(int x[], int n) {
    // Sorts the array x[0..n-1] in ascending order (selection sort)
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;
        for (j = i + 1; j < n; j++)
            if (x[j] < x[min])
                min = j;
        temp = x[i];
        x[i] = x[min];
        x[min] = temp;
    }
}

So, if LOC is simply a count of the number of lines, then the function shown above contains 13 lines.
But when comments and blank lines are ignored, the function contains 12 lines of code.
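To make the counting rule concrete, here is a minimal LOC-counter sketch in C++. It applies only the
simple standard given above (skip blank lines and whole-line // comments); block comments, strings,
and other details are deliberately ignored, so treat it as an illustration rather than a complete tool:

#include <fstream>
#include <iostream>
#include <string>
using namespace std;

// Counts lines that are neither blank nor whole-line "//" comments,
// following the simple counting standard described above.
int countLOC(istream& src) {
    int loc = 0;
    string line;
    while (getline(src, line)) {
        size_t pos = line.find_first_not_of(" \t\r");
        if (pos == string::npos) continue;              // skip blank lines
        if (line.compare(pos, 2, "//") == 0) continue;  // skip comment lines
        ++loc;                                          // count everything else
    }
    return loc;
}

int main(int argc, char* argv[]) {
    if (argc < 2) { cerr << "usage: loc <source-file>\n"; return 1; }
    ifstream f(argv[1]);
    if (!f) { cerr << "cannot open " << argv[1] << "\n"; return 1; }
    cout << countLOC(f) << " lines of code\n";
    return 0;
}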

Let's take another example and count the lines of code in the program given below:

#include <iostream>
using namespace std;

int main()
{
    int fN, sN, sum;
    cout << "Enter the 2 integers: ";
    cin >> fN >> sN;
    // sum of the two numbers is stored in the variable sum
    sum = fN + sN;
    // Prints sum
    cout << fN << " + " << sN << " = " << sum;
    return 0;
}

Here also, if LOC is simply a count of the number of lines, then the program shown above contains
14 lines. But when comments and blank lines are ignored, it contains 11 lines of code.

6.1.2 Token Count

Halstead's Software Metrics / Token Count

According to Halstead, "A computer program is an implementation of an algorithm considered to be a
collection of tokens which can be classified as either operators or operands." In these metrics, a
computer program is considered to be a collection of tokens, which may be classified as either operators
or operands. All software science metrics can be defined in terms of these basic symbols, which are
called tokens.
The basic measures are

n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrences of operands.

In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.

Halstead metrics are:

Program Volume (V)

The unit of measurement of volume is the standard unit for size, "bits". It is the actual size of a program
if a uniform binary encoding of the vocabulary is used.

V = N * log2(n)

Program Level (L)

The value of L ranges between zero and one, with L=1 representing a program written at the highest
possible level (i.e., with minimum size).

L=V*/V

Program Difficulty

The difficulty level or error-proneness (D) of the program is proportional to the number of the unique
operator in the program.

D= (n1/2) * (N2/n2)

Programming Effort (E)

The unit of measurement of E is elementary mental discriminations.

E=V/L=D*V

Estimated Program Length

According to Halstead, the first hypothesis of software science is that the length of a well-structured
program is a function only of the number of unique operators and operands.

N = N1 + N2

The estimated program length is denoted by N^ and is given by:

N^ = n1 * log2(n1) + n2 * log2(n2)

The following alternate expressions have been published to estimate program length:

o NJ = log2(n1!) + log2(n2!)
o NB = n1 * log2(n2) + n2 * log2(n1)
o NC = n1 * sqrt(n1) + n2 * sqrt(n2)
o NS = (n * log2(n)) / 2

Potential Minimum Volume

The potential minimum volume V* is defined as the volume of the shortest program in which a
problem can be coded.

V* = (2 + n2*) * log2(2 + n2*)

Here, n2* is the count of unique input and output parameters.

Size of Vocabulary (n)

The size of the vocabulary of a program, which consists of the number of unique tokens used to build a
program, is defined as:

n=n1+n2

Where

n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands

Language Level (λ) - Shows the level of the programming language used to implement the algorithm.
The same algorithm demands additional effort if it is written in a low-level programming language. For
example, it is easier to program in Pascal than in Assembler.

λ = L * V* = L^2 * V = V / (D * D)

Counting rules for C language

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, all control statements
e.g., if ( ) {...}, if ( ) {...} else {...}, etc. are considered as operators.
8. In control construct switch ( ) {case:...}, switch as well as all the case statements are considered
as operators.
9. The reserve words like return, default, continue, break, sizeof, etc., are considered as operators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator, and the label is counted as an operand.
12. The unary and binary occurrences of "+" and "-" are counted separately; similarly, the different
uses of "*" (such as multiplication) are counted separately.
13. In array variables such as "array-name[index]", "array-name" and "index" are considered
operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name -> member-name",
struct-name and member-name are considered operands, and '.' and '->' are taken as operators.
The same member names in different structure variables are counted as unique operands.
15. All hash (preprocessor) directives are ignored.

Example: Consider a sorting program. List out the operators and operands and calculate the values of
the software science measures such as n, N, V, E, λ, etc.

Solution: The list of operators and operands is given in the table

Operators   Occurrences     Operands    Occurrences
int         4               SORT        1
()          5               x           7
,           4               n           3
[]          7               i           8
if          2               j           7
<           2               save        3
;           11              im1         3
for         2               2           2
=           6               1           3
-           1               0           1
<=          2               -           -
++          2               -           -
return      2               -           -
{}          3               -           -
n1 = 14     N1 = 53         n2 = 10     N2 = 38

Here N1 = 53 and N2 = 38. The program length N = N1 + N2 = 53 + 38 = 91.

Vocabulary of the program n = n1 + n2 = 14 + 10 = 24.

Volume V = N * log2(n) = 91 * log2(24) = 417 bits.

The estimated program length N^ of the program:

N^ = 14 * log2(14) + 10 * log2(10)
   = 14 * 3.81 + 10 * 3.32
   = 53.34 + 33.2 = 86.54

Conceptually unique input and output parameters are represented by n2*:

n2* = 3 {x: the array holding the integers to be sorted, counted twice since it is used as both input
and output; n: the size of the array to be sorted}

The potential volume V* = (2 + 3) * log2(2 + 3) = 5 * log2(5) = 11.6

Since L = V*/V, the program level is L = 11.6 / 417 = 0.028. Using the difficulty formula given
earlier, Halstead's estimated program level L^ = 1/D = (2/n1) * (n2/N2) = (2/14) * (10/38) = 0.038.

The estimated potential volume is then V^* = V * L^ = 417 * 0.038 ≈ 15.67, and the effort is

E = V / L^ = D^ * V = 417 / 0.038 ≈ 10974

Therefore, about 10974 elementary mental discriminations are required to construct the program,
which is a reasonable effort for a program this simple.
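The arithmetic above can be checked with a short program. This sketch plugs the counts from the table
(n1 = 14, n2 = 10, N1 = 53, N2 = 38) into the formulas given earlier; because it does not round
intermediate values, E comes out near 11,100 rather than the hand-computed 10974:

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    // Token counts from the table above
    double n1 = 14, n2 = 10;  // unique operators, unique operands
    double N1 = 53, N2 = 38;  // total operators, total operands

    double N    = N1 + N2;                        // program length (91)
    double n    = n1 + n2;                        // vocabulary (24)
    double V    = N * log2(n);                    // volume, ~417 bits
    double Nhat = n1 * log2(n1) + n2 * log2(n2);  // estimated length, ~86.5
    double Lhat = (2.0 / n1) * (n2 / N2);         // estimated level, ~0.038
    double E    = V / Lhat;                       // effort, ~11,100 EMD

    cout << "N  = " << N    << "\nn  = " << n
         << "\nV  = " << V  << "\nN^ = " << Nhat
         << "\nL^ = " << Lhat << "\nE  = " << E << "\n";
    return 0;
}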

6.1.3 Function Count/Functional Point (FP) Analysis

Allan J. Albrecht initially developed Function Point Analysis (FPA) in 1979 at IBM, and it has been
further modified by the International Function Point Users Group (IFPUG). FPA is used to estimate a
software project, including its testing, in terms of the functionality or functional size of the software
product. Functional point analysis may also be used for test estimation of the product. The functional
size of the product is measured in function points, a standard unit of measurement for software
applications.

Objectives of FPA

The basic and primary purpose of the functional point analysis is to measure and provide the software
application functional size to the client, customer, and the stakeholder on their request. Further, it is used
to measure the software project development along with its maintenance, consistently throughout the
project irrespective of the tools and the technologies.

Following are the points regarding FPs

1. The FPs of an application are found by counting the number and types of functions used in the
application. The various functions used in an application fall under five types, as shown in the table:

Types of FP Attributes

Measurement Parameters                      Examples
1. Number of External Inputs (EI)           Input screens and tables
2. Number of External Outputs (EO)          Output screens and reports
3. Number of External Inquiries (EQ)        Prompts and interrupts
4. Number of Internal Files (ILF)           Databases and directories
5. Number of External Interfaces (EIF)      Shared databases and shared routines

All these parameters are then individually assessed for complexity.



2. FP characterizes the complexity of the software system and hence can be used to depict the project
time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.

5. FP method is used for data processing systems, business systems like information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

Example: Compute the function point, productivity, documentation, cost per function for the following
data:

1. Number of user inputs = 24


2. Number of user outputs = 46
3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month

Complexity adjustment factors: ∑(fi) = 43; weighting factors (average) = 4, 4, 6, 10, 5

Solution:

Measurement Parameter                       Count   Weighting factor
1. Number of external inputs (EI)           24      * 4  = 96
2. Number of external outputs (EO)          46      * 4  = 184
3. Number of external inquiries (EQ)        8       * 6  = 48
4. Number of internal files (ILF)           4       * 10 = 40
5. Number of external interfaces (EIF)      2       * 5  = 10
                                                    Total = 378

FP = Count-total * [0.65 + 0.01 * ∑(fi)]
   = 378 * [0.65 + 0.01 * 43]
   = 378 * [0.65 + 0.43]
   = 378 * 1.08 = 408

Total pages of documentation = technical documents + user documents
   = 265 + 122 = 387 pages

Documentation = Pages of documentation / FP
   = 387 / 408 = 0.95

Productivity = FP / Effort = 408 / 36.9 = 11.1 FP per person-month

Cost per function = Cost / Productivity = $7744 / 11.1 ≈ $700 per function point
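As a minimal sketch, the same computation can be expressed in a few lines of C++, using the counts,
weights, and ∑(fi) = 43 from the example:

#include <iostream>
using namespace std;

int main() {
    // Counts and average weighting factors from the example above
    int count[5]  = {24, 46, 8, 4, 2};   // EI, EO, EQ, ILF, EIF
    int weight[5] = {4, 4, 6, 10, 5};

    int countTotal = 0;
    for (int i = 0; i < 5; ++i)
        countTotal += count[i] * weight[i];          // 378

    double sumFi = 43;                               // sum of complexity adjustment factors
    double fp = countTotal * (0.65 + 0.01 * sumFi);  // 378 * 1.08 = 408.24

    double effortPM = 36.9;                          // effort in person-months
    double costPM   = 7744;                          // cost in $/month
    double docPages = 265 + 122;                     // total documentation pages

    cout << "FP                = " << fp << "\n";                        // ~408
    cout << "Productivity      = " << fp / effortPM << " FP/p-m\n";      // ~11.1
    cout << "Documentation     = " << docPages / fp << " pages/FP\n";    // ~0.95
    cout << "Cost per function = $" << costPM / (fp / effortPM) << "\n"; // ~$700
    return 0;
}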

6.2 Design Metrics

A design metric is a measurable feature of the system's performance, cost, implementation time,
safety, etc. Most of these are conflicting requirements: optimizing one does not optimize the others.
For example, a cheaper processor may have poor performance as far as speed and throughput are
concerned.
 NRE cost (nonrecurring engineering cost)- It is one-time cost of designing the system.
Once the system is designed, any number of units can be manufactured without incurring
any additional design cost; hence the term nonrecurring.
 Unit cost- The monetary cost of manufacturing each copy of the system, excluding NRE
cost.
 Size- The physical space required by the system, often measured in bytes for software,
and gates or transistors for hardware.
 Performance- The execution time of the system.
 Power Consumption- It is the amount of power consumed by the system, which may
determine the lifetime of a battery, or the cooling requirements of the IC, since more power
means more heat.
 Flexibility- The ability to change the functionality of the system without incurring heavy
NRE cost. Software is typically considered very flexible.
 Time-to-prototype-The time needed to build a working version of the system, which may
be bigger or more expensive than the final system implementation, but it can be used to
verify the system’s usefulness and correctness and to refine the system’s functionality.
 Time-to-market- The time required to develop a system to the point that it can be released
and sold to customers. The main contributors are design time, manufacturing time, and
testing time. This metric has become especially demanding in recent years. Introducing an
embedded system to the marketplace early can make a big difference in the system’s
profitability.
 Maintainability- It is the ability to modify the system after its initial release, especially
by designers who did not originally design the system.
 Correctness- This is the measure of the confidence that we have implemented the
system’s functionality correctly. We can check the functionality throughout the process
of designing the system, and we can insert test circuitry to check that manufacturing was
correct.

6.3 Data Structure Metrics

Essentially, software development and other activities exist to process data. Some data is input to a
system, program, or module; some data may be used internally; and some data is the output from a
system, program, or module.

Example:

Program            Data Input                           Internal Data               Data Output
Payroll            Name / Social Security No. /         Withholding rates,          Gross pay, withholding,
                   Pay rate / Number of hours worked    Overtime factors,           Net pay, Pay ledgers
                                                        Insurance premium rates
Spreadsheet        Item names / Item amounts /          Cell computations           Spreadsheet of items
                   Relationships among items                                        and totals
Software Planner   Program size / No. of                Model parameters,           Est. project effort,
                   developers on team                   Constants, Coefficients     Est. project duration

Hence, an important set of metrics captures the amount of data that is input to, processed within, and
output from software. A count of these data structures gives the data structure metrics. These
concentrate on variables (and constants) within each module and ignore input-output dependencies.

There are several data structure metrics used to compute the effort and time required to complete a
project. These metrics are:

 The Amount of Data.


 The Usage of data within a Module.
 Program weakness.
 The sharing of Data among Modules.

6.3.1. The Amount of Data: To measure the amount of data, there are several different metrics:

o Number of variables (VARS): In this metric, the number of variables used in the program is
counted.
o Number of operands (η2): In this metric, the number of operands used in the program is
counted:
η2 = VARS + Constants + Labels
o Total number of occurrences of variables (N2): In this metric, the total number of
occurrences of the variables is computed.

6.3.2. The Usage of Data within a Module: To measure this metric, the average number of live
variables is computed. A variable is live from its first reference to its last reference within the procedure.

For example, to characterize the average number of live variables for a program having n modules, we
can use this equation:

LV = (Σ LVi) / n

where LVi is the average live variable metric computed for the ith module. An analogous equation can
be used to compute the average span size (SP) for a program of n spans:

SP = (Σ SPi) / n

6.3.3. Program Weakness: Program weakness depends on the weakness of its modules. If modules
are weak (less cohesive), the effort and time required to complete the project increase.

Module Weakness (WM) = LV * γ

A program is normally a combination of various modules; hence, program weakness can be a useful
measure and is defined as:

WP = (Σ WMi) / m

where

WMi: weakness of the ith module

WP: weakness of the program

m: number of modules in the program
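A minimal sketch of the weakness computation follows. The per-module LV and γ values are invented
for illustration, and γ is treated simply as a given per-module factor, since the text does not define it
further:

#include <iostream>
using namespace std;

int main() {
    // Illustrative per-module data: average live variables (LV) and gamma
    double lv[]    = {3.2, 5.1, 2.4};
    double gamma[] = {4.0, 6.5, 3.0};
    const int m = 3;                    // number of modules in the program

    double wp = 0.0;
    for (int i = 0; i < m; ++i) {
        double wm = lv[i] * gamma[i];   // module weakness: WM = LV * gamma
        cout << "WM" << i + 1 << " = " << wm << "\n";
        wp += wm;
    }
    wp /= m;                            // program weakness: WP = (sum of WMi) / m
    cout << "WP = " << wp << "\n";
    return 0;
}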

6.3.4. The Sharing of Data among Modules: As data sharing between modules increases (higher
coupling), the parameter passing between modules also increases; as a result, more effort and time are
required to complete the project. So the sharing of data among modules is an important metric for
calculating effort and time.

6.4 Information Flow Metrics

The other set of metrics we would like to consider is known as information flow metrics. The basis of
information flow metrics is the following concept: the simplest system consists of components, and it
is the work that these components do and how they are fitted together that determines the complexity
of the system. The following working definitions are used in information flow:

 Component: Any element identified by decomposing a (software) system into its constituent
parts.
 Cohesion: The degree to which a component performs a single function.
 Coupling: The term used to describe the degree of linkage between one component and others
in the same system.

Information flow metrics deal with this type of complexity by observing the flow of information among
system components or modules. This metric was proposed by Henry and Kafura, so it is also known as
Henry and Kafura's metric.

It is based on the measurement of information flow among system modules and is sensitive to the
complexity due to interconnections among system components. In this measure, the complexity of a
software module is defined as the sum of the complexities of the procedures included in the module. A
procedure contributes complexity due to the following two factors:

1. The complexity of the procedure code itself.
2. The complexity due to the procedure's connections to its environment. The effect of the first
factor is captured through the LOC (lines of code) measure. For the quantification of the second
factor, Henry and Kafura defined two terms, namely FAN-IN and FAN-OUT.

FAN-IN: The FAN-IN of a procedure is the number of local flows into that procedure plus the number
of data structures from which the procedure retrieves information.

FAN-OUT: The FAN-OUT of a procedure is the number of local flows out of that procedure plus the
number of data structures that the procedure updates.

Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
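A sketch of how this formula might be applied across the procedures of a module is shown below. The
procedure names, lengths, and flow counts are invented for illustration:

#include <iostream>
using namespace std;

// Illustrative procedure record: length in LOC plus flow counts
struct Proc {
    const char* name;
    double length;   // LOC of the procedure
    double fanIn;    // local flows in + data structures read
    double fanOut;   // local flows out + data structures updated
};

int main() {
    Proc procs[] = {
        {"parse",  60, 3, 2},
        {"update", 35, 2, 4},
        {"report", 80, 5, 1},
    };

    double moduleComplexity = 0.0;
    for (const Proc& p : procs) {
        // Complexity = Length * (FAN-IN * FAN-OUT)^2
        double flow = p.fanIn * p.fanOut;
        double c = p.length * flow * flow;
        cout << p.name << ": " << c << "\n";
        moduleComplexity += c;   // module complexity = sum over its procedures
    }
    cout << "Module complexity = " << moduleComplexity << "\n";
    return 0;
}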


Last year question papers

https://www.ipjugaad.com/b-tech-5th-sem-software-engineering-paper-2018_5cab1be993.html
https://www.ipjugaad.com/b-tech-5th-sem-software-engineering-paper-2016_5c090a76cc.html

“EDUCATION AND HARD WORK IS THE PASSPORT TO THE FUTURE, BETTER


TOMORROW BELONGS TO THOSE WHO PREPARE THEMSELVES FOR IT.”

GOOD LUCK….
