https://www.ques10.com/p/8317/write-suitable-applications-of-different-softwar-1/
1. Waterfall model
● This model is used only when the requirements are very well known, clear and
fixed.
● Product definition is stable.
● Technology is understood.
● There are no ambiguous requirements.
● Ample resources with the required expertise are available freely.
● The project is short.
2. V-model
● The V-shaped model should be used for small to medium sized projects where
requirements are clearly defined and fixed.
● The V-Shaped model should be chosen when ample technical resources are
available with needed technical expertise.
3. Incremental model
● This model can be used when the requirements of the complete system are
clearly defined and understood.
● Major requirements must be defined; however, some details can evolve with
time.
● There is a need to get a product to the market early.
● A new technology is being used
● Resources with needed skill set are not available
● There are some high risk features and goals.
4. RAD model
● RAD should be used when there is a need to create a system that can be
modularized in 2-3 months of time.
● It should be used if there’s high availability of designers for modelling and the
budget is high enough to afford their cost along with the cost of automated code
generating tools.
● RAD SDLC model should be chosen only if resources with high business
knowledge are available and there is a need to produce the system in a short
span of time (2-3 months).
5. Agile model
● When new changes need to be implemented, the freedom Agile gives to change
is very important. New changes can be implemented at very little cost because of
the frequency of new increments that are produced.
● To implement a new feature the developers need to lose only the work of a few
days, or even only hours, to roll back and implement it.
6. Iterative model
● Major requirements must be defined; however, some details can evolve with
time.
7. Spiral model
● The Spiral model should be used for large, expensive and high-risk projects
where frequent risk analysis and evaluation are required.
8. Prototype model
● Prototype model should be used when the desired system needs to have a lot of
interaction with the end users.
● Typically, online systems and web interfaces have a very high amount of interaction
with end users and are best suited for the Prototype model, since it can take a while
to build a system that offers ease of use and needs minimal training for
the end user.
What is Agile Methodology? Explain it with the principles used and
give example of 1 such software model
https://www.ques10.com/p/8324/what-is-agile-methodology-explain-it-with-the-pr-1/
● Agile Principles
● Example
○ Extreme programming (XP) is one of the best known agile processes. The
extreme programming approach was suggested by Kent Beck in 2000.
Extreme programming is explained in detail in a later section.
CMMI (Capability Maturity Model Integration) provides organisations with:
1. A place to start
2. The benefit of a community’s prior experiences
3. A common language and a shared vision
4. A framework for prioritising actions
5. A way to define what improvement means for your organisation
In CMMI models with a staged representation, there are five maturity levels
designated by the numbers 1 through 5 as shown below:
1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimising
Maturity levels consist of a predefined set of process areas. The maturity levels are
measured by the achievement of the specific and generic goals that apply to each
predefined set of process areas. The following sections describe the characteristics of
each maturity level in detail.
Maturity Level 1 - Initial:
● The organisation usually does not provide a stable environment. Success in these
organisations depends on the competence and heroics of the people in the
organisation and not on the use of proven processes.
● Maturity level 1 organisations often produce products and services that work but
the company has no standard process for software development. Nor does it
have a project-tracking system that enables developers to predict costs or finish
dates with any accuracy.
Maturity Level 2 - Managed:
Company has installed basic software management processes and controls, but there is
no consistency or coordination among different groups.
● At maturity level 2, an organisation has achieved all the specific and generic goals
of the maturity level 2 process areas. In other words, the projects of the
organisation have ensured that requirements are managed and that processes
are planned, performed, measured, and controlled.
● The process discipline reflected by maturity level 2 helps to ensure that existing
practices are retained during times of stress. When these practices are in place,
projects are performed and managed according to their documented plans.
Maturity Level 3 - Defined:
Company has pulled together a standard set of processes and controls for the entire
organisation so that developers can move between projects more easily and customers
can begin to get consistency from different groups.
● At maturity level 3, processes are well characterised and understood, and are
described in standards, procedures, tools, and methods.
● A critical distinction between maturity level 2 and maturity level 3 is the scope of
standards, process descriptions, and procedures. At maturity level 2, the
standards, process descriptions, and procedures may be quite different in each
specific instance of the process (for example, on a particular project). At maturity
level 3, the standards, process descriptions, and procedures for a project are
tailored from the organisation’s set of standard processes to suit a particular
project or organisational unit.
Maturity Level 4 - Quantitatively Managed:
● At maturity level 4, an organisation has achieved all the specific goals of the
process areas assigned to maturity levels 2, 3, and 4 and the generic goals
assigned to maturity levels 2 and 3.
● Quantitative objectives for quality and process performance are established and
used as criteria in managing processes. Quantitative objectives are based on the
needs of the customer, end users, organisation, and process implementers.
Quality and process performance are understood in statistical terms and are
managed throughout the life of the processes.
Maturity Level 5 - Optimising:
Company has accomplished all of the above and can now begin to see patterns in
performance over time, so it can tweak its processes in order to improve productivity
and reduce defects in software development across the entire organisation.
● Optimising processes that are agile and innovative depends on the participation
of an empowered workforce aligned with the business values and objectives of
the organisation.
Agile process and its advantages? Explain any one Agile process
https://www.ques10.com/p/17108/agile-process-and-its-advantagesexplain-any-one--1/
● Agile Methods break the product into small incremental builds. These builds are
provided in iterations. Each iteration typically lasts from about one to three
weeks. Every iteration involves cross functional teams working simultaneously on
various areas like planning, requirements analysis, design, coding, unit testing,
and acceptance testing.
● At the end of the iteration a working product is displayed to the customer and
important stakeholders.
● The Agile model believes that every project needs to be handled differently and
the existing methods need to be tailored to best suit the project requirements. In
agile the tasks are divided into time boxes (small time frames) to deliver specific
features for a release.
● Iterative approach is taken and working software build is delivered after each
iteration. Each build is incremental in terms of features; the final build holds all
the features required by the customer.
● The Agile thought process started early in software development and became
popular over time due to its flexibility and adaptability.
Advantages
● Provides a framework for building and maintaining systems which meet tight
time constraints using incremental prototyping in a controlled environment
● Uses the Pareto principle (80% of the application can be delivered in 20% of the
time it would take to deliver the entire application)
● Each increment only delivers enough functionality to move to the next increment
● Uses time boxes to fix time and resources to determine how much functionality
will be delivered in each increment
● Guiding principles
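The timeboxing idea above can be sketched as a small selection routine: time and resources are fixed, and the highest-value functionality that fits in the box is chosen for the increment. The feature names and numbers below are hypothetical, purely for illustration.

```python
# Hypothetical sketch of timebox planning: time and resources are fixed,
# so we select the most valuable features that fit within the timebox.
def plan_timebox(features, capacity_days):
    """Greedily pick features by value-per-day until the timebox is full."""
    chosen, used = [], 0
    for name, value, cost in sorted(features, key=lambda f: f[1] / f[2], reverse=True):
        if used + cost <= capacity_days:
            chosen.append(name)
            used += cost
    return chosen

features = [("login", 8, 2), ("reports", 5, 5), ("search", 9, 3), ("export", 3, 1)]
print(plan_timebox(features, 6))  # ['login', 'search', 'export']
```

Note that "reports" is dropped: the timebox fixes the schedule, so functionality, not time, is what gets negotiated.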
Software Engineering
● Software engineering is a discipline in which theories, methods and tools are
applied to develop professional software products.
● The definition of software engineering is based on two terms:
○ Discipline
■ To find a solution to a problem, an engineer applies
appropriate theories, methods and tools. While finding the
solution, engineers must keep in mind the organisational and financial
constraints, and find the solution within those constraints.
○ Product
The software product gets developed after following systematic theories,
methods and tools along with the appropriate management activities.
■ Software Engineering is a layered technology. Any software can be
developed using these layered approaches.
■ Various layers on which the technology is based are Quality focus
layer, Process layer, methods layer, tools layer.
■ Disciplined quality management is the backbone of software
engineering technology.
■ The process layer is the foundation of software engineering. Basically,
the process defines the framework for timely delivery of software.
● In the method layer the actual method of implementation is carried out with the
help of requirement analysis, designing, coding using desired programming
constructs and testing.
● Software tools are used to bring automation in the software development
process.
● Thus, software engineering is a combination of process, methods and tools for
development of quality software
● The following generic process (used as a basis for the description of process
models in subsequent chapters) is applicable to the vast majority of software
projects:
1. Communication
a. This framework activity involves heavy communication and
collaboration with the customer (and other stakeholders) and
encompasses requirements gathering and other related activities.
2. Planning
a. This activity establishes a plan for the software engineering work
that follows. It describes the technical tasks to be conducted, the
risks that are likely, the resources that will be required, the work
products to be produced, and a work schedule.
3. Modelling
a. This activity encompasses the creation of models for the developer
and the customer to better understand software requirements
and the design that will achieve those requirements.
4. Construction
a. This activity combines code generation (either manual or
automated) and the testing that is required to uncover errors in
the code.
5. Deployment
a. The software (as a complete entity or as a partially completed
increment) is delivered to the customers who evaluate the
delivered product and provide feedback based on the evaluation.
● These five generic framework activities can be used during the development of
small programs, the creation of large web applications, and for the engineering
of large, complex computer based systems. The details of the software process
will be quite different in each case, but the framework activities remain the
same.
Extreme Programming
1. Planning:
● The planning activity begins with the creation of a set of stories that describe
required features and functionality for software to be built.
● Each story is written by the customer and is placed on an index card. The
customer assigns a value to the story based on the overall business value of the
feature or function.
● Members of the XP (Extreme Programming) team then assess each story and
assign a cost – measured in development weeks – to it.
● If the story will require more than three development weeks, the customer is
asked to split the story into smaller stories, and the assignment of value and cost
occurs again.
● Customers and the XP team work together to decide how to group stories into
the next release to be developed by the XP team.
● Once a basic commitment is made for a release, the XP team orders the stories
that will be developed in one of three ways:
1. All stories will be implemented immediately.
2. The stories with highest value will be moved up in the schedule and
implemented first.
3. The riskiest stories will be moved up in the schedule and implemented first.
● As development work proceeds, the customer can add stories, change the value
of an existing story, split stories or eliminate them.
● The XP team then reconsiders all remaining releases and modifies its plan
accordingly.
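The planning rules above can be sketched in code: stories carry a customer-assigned value and a team-assigned cost in weeks, stories costing more than three weeks must be split, and a release can be ordered by value or by risk. The story names and numbers are hypothetical, not part of XP itself.

```python
# Illustrative sketch of XP planning rules (story data is hypothetical).
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value: int       # customer-assigned business value
    cost_weeks: int  # team-assigned development cost
    risk: int = 0

def needs_split(story):
    # XP asks the customer to split any story costing more than three weeks
    return story.cost_weeks > 3

def order_release(stories, by="value"):
    # order highest-value (or riskiest) stories first, per the three options above
    key = (lambda s: s.value) if by == "value" else (lambda s: s.risk)
    return sorted(stories, key=key, reverse=True)

stories = [Story("checkout", 9, 2, risk=1), Story("search", 6, 5, risk=3)]
print([s.name for s in order_release(stories)])     # ['checkout', 'search']
print([s.name for s in stories if needs_split(s)])  # ['search']
```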
2. Design :
● XP design follows the KIS (Keep It Simple) principle. A simple design is always
preferred over a more complex representation.
● The design provides implementation guidance for a story as it is written –
nothing less, nothing more.
● XP encourages the use of CRC (Class Responsibility Collaborator) cards as an
effective mechanism for thinking about the software in an object oriented
context.
● CRC cards identify and organise the object oriented classes that are relevant to
current software increment.
● The CRC cards are the only design work product produced as a part of XP
process.
● If a difficult design problem is encountered as part of the design of a story, XP
recommends the immediate creation of an operational prototype of that portion of
the design, called a ‘spike solution’.
● XP encourages refactoring – a construction technique.
3. Coding
● XP recommends that after stories are developed and preliminary design work is
done, the team should not move directly to code, but rather develop a series of unit
tests that will exercise each story.
● Once the unit test has been created, the developer is better able to focus on
what must be implemented to pass the unit test.
● Once the code is complete, it can be unit tested immediately, thereby providing
instantaneous feedback to the developer.
● A key concept during the coding activity is pair programming. XP recommends
that two people work together at one computer workstation to create code for a
story. This provides a mechanism for real time problem solving and real time
quality assurance.
● As pair programmers complete their work, the code they developed is integrated
with the work of others.
● This continuous integration strategy helps to avoid compatibility and interfacing
problems and provides a smoke testing environment that helps to uncover
errors early.
4. Testing :
● The creation of unit tests before coding is the key element of the XP approach.
● The unit tests that are created should be implemented using a framework that
enables them to be automated. This encourages regression testing strategy
whenever code is modified.
● Individual unit tests are organised into a “universal testing suite”, so integration and
validation testing of the system can occur on a daily basis. This provides the XP
team with a continual indication of progress and can also raise warning flags
early if things are going awry.
● XP acceptance tests, also called customer tests, are specified by the customer
and focus on the overall system feature and functionality that are visible and
reviewable by the customer.
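The test-first flow above can be sketched with Python's unittest framework, which supports the automation and regression testing the text calls for. The `apply_discount` story is hypothetical, invented only for this example.

```python
# A minimal test-first sketch: the test for a (hypothetical) story is written
# before the code that satisfies it, then both are run automatically.
import unittest

def apply_discount(price, percent):
    """Code written after the test, just enough to make it pass."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):  # conceptually written first
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically, as an automated framework would on each change.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Because the suite is automated, it can be re-run after every modification, which is exactly the regression strategy described above.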
Module 2 - Software Requirements Analysis and Modeling
Q.1 a) What is an SRS document? Build an SRS Document for an online Student
feedback system.
Ans.
a)
SRS stands for Software Requirement Specification. An SRS document records an
organisation’s understanding of a customer’s or potential client’s system requirements
and dependencies. It gives detailed information about the system, covering
several aspects of the software: functionality, external interfaces,
performance, attributes, and design constraints.
The outcome of requirement gathering and analysis phase is the SRS. It is the base
for all software engineering actions and is generated only when all the requirements
are correctly gathered.
1. Introduction
a. Document purpose
b. Product Scope
c. Intended audience, & Document overview
d. Definitions, acronyms, abbreviations
e. Document conventions
f. References and acknowledgements
2. Overall description
a. Product perspective
b. Product functionality
c. Users and characteristics
d. Operating environments
e. Design and implementation constraints
f. User documentation
g. Assumptions and dependencies
3. Specific Requirements
a. External interface requirements
b. Functional requirements
c. Behavioural requirements
4. Other requirements
Appendix – A Data Dictionary
1. Introduction
a. Document purpose = The document aims to provide details to build an Online
Feedback System for students.
b. Product Scope = This system will allow students to give their feedback
regarding subjects taught in college. This feedback system will be provided by their
professors, and students’ feedback will be accessible to the HoDs and to the college
Principal.
e. Document conventions = The typography followed here was Font Style: Arial,
Font size: 13, Titles are represented in bold and subtitles have font-size: 13
f. References and acknowledgements =
2. Overall description
a. Product perspective = This system aims to create an application that will
allow students to add individual feedback, either anonymously or by their roll
numbers, thus protecting their identity. It will also allow the college authorities to
view students’ responses corresponding to a course and its instructor.
b. Product functionality =
i) Schools
ii) Educational institutions
iii) Universities
iv) Coaching institutes
c. Users and characteristics = Common users of this product are as follows
i) School/College students
ii) Hostel Students
iii) College authorities
iv) Coaching institutes
d. Operating environments =
The software can operate on any system with Java installed, which could be
1. Windows
2. Mac OS
3. Linux
4. A code editor such as VS Code, IntelliJ Idea
3. Functional Requirements:
3.1 Description:
To identify staff, the system should display staff photographs along with their
names, corresponding subjects, and other information.
3.2 Operations Scenario:
There should be a student database and a teacher database. The student database should
contain each student's name and corresponding feedback.
4. Preliminary Schedule:
The system is to be designed in 6 months.
Product perspective:
Scope and objectives = The system will allow the department to handle its
enrolment, subject provisions and allotments; teachers should be able to select
the subjects of their interest to teach, and students should be capable of
enrolling for a subject.
Functional requirements =
Non-functional requirements=
a. Performance requirement
b. Safety and security
c. Software quality assurance
b. Safety and security: The system should make sure that no more than one
department is teaching the same subject. It should also ensure that a
student does not register as a teacher in the system.
LINES OF CODE
Lines of code is basically a method in which we count the number of lines present in the code
in order to estimate the size of the software.
Things we count in lines of code:
● Declarations
● Actual code
● Logic and computation
We don't count:
1) blank lines
2) commented lines
They are not included in the lines of code, yet they are still written in the code, because:
1) blank lines increase readability
2) commented lines help to understand the functionality of the code and serve maintenance purposes.
The size of the software basically reflects the amount of effort that goes into writing the
code, but counting blank and commented lines would give a false impression of the
productivity of the software.
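The counting rules above can be sketched as a small function. For simplicity it assumes Python-style `#` comments; other languages would need their own comment syntax handled.

```python
# A minimal LOC counter following the rules above: declarations, actual code
# and logic are counted; blank lines and comment-only lines are not.
def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:              # blank line: improves readability, not counted
            continue
        if stripped.startswith("#"):  # comment-only line: documentation, not counted
            continue
        loc += 1
    return loc

sample = """
# compute a total
total = 0
for x in [1, 2, 3]:
    total += x
"""
print(count_loc(sample))  # 3
```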
KLOC- Thousand lines of code
The size is estimated by comparing it with the existing systems of the same kind. The experts use it
to predict the required size of various components of software and then add them to get the total
size.
It’s tough to estimate LOC by analysing the problem definition. Only after the whole code has been
developed can accurate LOC be estimated. This statistic is of little utility to project managers
because project planning must be completed before development activity can begin.
Two separate source files having a similar number of lines may not require the same effort. A file
with complicated logic would take longer to create than one with simple logic. Proper estimation
may not be attainable based on LOC. In addition, the LOC written for the same problem
will differ greatly from one programmer to the next: a seasoned programmer can write
the same logic in fewer lines than a newbie coder.
● Advantage: Simple to use.
● Disadvantage: It is difficult to estimate the size using this technique in the early stages of the project.
COCOMO II Model
The COCOMO II model supports estimation in three stages:
• Stage-I:
It supports estimation of prototyping. For this it uses the Application Composition
Estimation Model. This model is used for the prototyping stage of application
generators and system integration.
• Stage-II:
It supports estimation in the early design stage of the project, when we know less
about it. For this it uses the Early Design Estimation Model. This model is used in the
early design stage of application generators, infrastructure and system integration.
• Stage-III:
It supports estimation in the post architecture stage of a project. For this it uses Post
Architecture Estimation Model. This model is used after the completion of the detailed
architecture of application generator, infrastructure, system integration.
• The Application Composition Estimation Model allows one to estimate the cost and
effort at stage 1 of the COCOMO II Model.
• In this model size is first estimated using Object Points.
• Object Points are easy to identify and count. Object Points define screens, reports,
and third generation language (3GL) modules as objects.
• Object Point estimation is a newer size estimation technique, but it is well suited
to estimation at this early stage.
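The arithmetic of the Application Composition Estimation Model can be sketched using the standard COCOMO II formulas: new object points NOP = OP × (100 − %reuse) / 100, and effort in person-months = NOP / PROD, where PROD is an assumed productivity rate (object points per person-month). The numbers below are purely illustrative.

```python
# Sketch of the Application Composition Estimation Model's arithmetic
# (standard COCOMO II formulas; the input numbers are illustrative).
def estimate_effort(object_points, percent_reuse, prod_rate):
    """Effort in person-months from an object-point count.

    NOP = OP * (100 - %reuse) / 100   (new object points)
    PM  = NOP / PROD                  (PROD = object points per person-month)
    """
    nop = object_points * (100 - percent_reuse) / 100
    return nop / prod_rate

# e.g. 120 object points, 25% reused, nominal productivity of 13 OP/PM
print(round(estimate_effort(120, 25, 13), 1))  # 6.9
```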
2. Number of entities in ER diagram: The ER model provides a static view of the project. It describes
the entities and their relationships. The number of entities in the ER model can be used to
estimate the size of the project, because more entities need more classes/structures and thus
lead to more coding.
Advantages:
● Size estimation can be done during the initial stages of planning.
● The number of entities is independent of the programming technologies used.
Disadvantages:
● No fixed standards exist; some entities contribute more to project size than others.
● Just like FPA, it is less used in cost estimation models; hence, it must be converted to LOC.
3. Total number of processes in detailed data flow diagram: A Data Flow Diagram (DFD) represents
the functional view of software. The model depicts the main processes/functions involved in the
software and the flow of data between them. The number of functions in the DFD can be used to
predict software size: already existing processes of a similar type are studied and used to estimate
the size of each process, and the sum of the estimated sizes gives the final estimated size.
Advantages:
● It is independent of the programming language.
● Each major process can be decomposed into smaller processes, which increases the accuracy of
the estimation.
Disadvantages:
● Studying similar kinds of processes to estimate size takes additional time and effort.
● Construction of a DFD is not required for all software projects.
4. Function Point Analysis: In this method, the number and type of functions supported by the
software are utilised to find the Function Point Count (FPC).
The steps in function point analysis are:
● Count the number of functions of each proposed type.
● Compute the Unadjusted Function Points (UFP).
● Find the Total Degree of Influence (TDI).
● Compute the Value Adjustment Factor (VAF).
● Find the Function Point Count (FPC).
Modularity
● Software architecture and design pattern embody modularity.
● That is, software is divided into separately named and addressable components,
sometimes called modules that are integrated to satisfy problem requirements.
Creating such modules bring modularity in software.
● Modularity is a “single attribute of software that allows a program to be intellectually
manageable”
● Meyer defines five criteria that enable us to evaluate a design method with respect
to its ability to define an effective modular system.
1. Modular decomposability
A design method provides a systematic mechanism for decomposing the problem
into sub-problems. This reduces the complexity of the problem and modularity can
be achieved.
2. Modular composability
A design method enables existing design components to be assembled into a new
system.
3. Modular understandability
A module can be understood as a standalone unit. Then it will be easier to build and
easier to change.
4. Modular continuity
Small changes to the system requirements results in changes to individual modules
rather than system-wide changes.
5. Modular protection
An aberrant condition occurs within a module and its effects are constrained within
the module.
Functional Independence
● The concept of functional independence is a direct outgrowth of modularity and the
concepts of abstraction and information hiding.
● Functional independence is achieved by developing modules with “single-minded”
function and an “aversion” to excessive interaction with other modules.
● Stated in other way, we want to design software so that each module addresses a
specific sub function of requirements and has a simple interface when viewed from
other part of program structure.
● Independence is important because software with effective modularity, i.e.
independent modules, is easier to develop: functions may be
compartmentalised and interfaces are simplified.
● Independent modules are easier to maintain because secondary effects caused by
design or code modification are limited, error propagation is reduced, and
reusable modules are possible.
● Functional independence is a key to good design, and design is the key to software
quality.
● Independence is assessed using two qualitative criteria
1. Cohesion
Cohesion is an indication of the relative functional strength of a module. A cohesive
module performs a single task, requiring little interaction with components in
other parts of the program.
2. Coupling
Coupling is an indication of the relative interdependence among modules. Coupling
depends on the interface complexity between modules, the point at which entry or
reference is made to a module, and what data passes across the interface.
Architectural Styles:
4) Layered Architectural Style
The layered architectural style focuses on grouping related functionality within an
application into distinct layers, with the interaction between layers made
explicit and loosely coupled. Layering your application appropriately helps to support a
strong separation of concerns that, in turn, supports flexibility and maintainability.
Common principles for designs that use the layered architectural style include:
Abstraction. Layered architecture abstracts the view of the system as a whole while
providing enough detail to understand the roles and responsibilities of individual layers and
the relationships between them.
Encapsulation. No assumptions need to be made about data types, methods and
properties, or implementation during design, as these features are not exposed at layer
boundaries.
Reusable. Lower layers have no dependencies on higher layers, potentially allowing them
to be reusable in other scenarios.
Loose coupling. Communication between layers is based on abstraction and events to
provide loose coupling between layers.
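A toy sketch can make these principles concrete: each layer depends only on the layer below it, so the lower layers stay reusable and nothing is assumed about the layers above. The class and method names here are illustrative, not from the text.

```python
# Toy layered sketch: each layer depends only on the layer directly below it.

class DataLayer:                      # lowest layer: storage, no upward dependencies
    def __init__(self):
        self._rows = {1: "Alice"}

    def fetch_user(self, user_id):
        return self._rows.get(user_id)

class BusinessLayer:                  # middle layer: depends only on DataLayer
    def __init__(self, data: DataLayer):
        self._data = data

    def greeting_for(self, user_id):
        name = self._data.fetch_user(user_id)
        return f"Hello, {name}!" if name else "Unknown user"

class PresentationLayer:              # top layer: depends only on BusinessLayer
    def __init__(self, logic: BusinessLayer):
        self._logic = logic

    def render(self, user_id):
        return f"<p>{self._logic.greeting_for(user_id)}</p>"

ui = PresentationLayer(BusinessLayer(DataLayer()))
print(ui.render(1))  # <p>Hello, Alice!</p>
```

Because `DataLayer` knows nothing about the layers above it, it could be reused in another scenario unchanged, which is the reusability point made above.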
5) Message Bus Architectural Style
Message bus architecture describes the principle of using a software system that can
receive and send messages using one or more communication channels, so that
applications can interact without needing to know specific details about each other. It is a
style for designing applications where interaction between applications is accomplished by
passing messages (usually asynchronously) over a common bus.
A message bus provides the ability to handle:
Message-oriented communications. All communication between applications is based
on messages that use known schemas.
Complex processing logic. Complex operations can be executed by combining a set of
smaller operations, each of which supports specific tasks, as part of a multistep itinerary.
Modifications to processing logic. Because interaction with the bus is based on
common schemas and commands, you can insert or remove applications on the bus to
change the logic that is used to process messages.
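The message bus idea can be sketched in a few lines: applications exchange messages over named channels without knowing anything about each other. This is a minimal in-process sketch; a real bus would typically deliver messages asynchronously across process boundaries and validate them against known schemas.

```python
# Minimal message bus sketch: senders and receivers know only the channel name,
# never each other, as described above.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)   # channel -> list of handlers

    def subscribe(self, channel, handler):
        self._subscribers[channel].append(handler)

    def publish(self, channel, message):
        for handler in self._subscribers[channel]:  # deliver to every subscriber
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("orders", received.append)          # one application subscribes
bus.publish("orders", {"id": 7, "item": "book"})  # another application publishes
print(received)  # [{'id': 7, 'item': 'book'}]
```

Inserting or removing a subscriber changes the processing logic without touching the publisher, which is the modifiability benefit listed above.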
4. What is the User Interface design process? Explain with one
example.
User Interface (UI) Design:
User Interface (UI) Design focuses on anticipating what users might need to do and
ensuring that the interface has elements that are easy to access, understand, and use to
facilitate those actions.
UI brings together concepts from interaction design, visual design, and information
architecture.
User interface design creates an effective communication medium between a human and
a computer. Software Engineer designs the user interface by applying an interactive
process.
Features of Good User Interface:
● Increased efficiency: If the system fits the way its users work and if it has a good
ergonomic design, users can perform their tasks efficiently. They do not lose time
struggling with the functionality and its appearance on the screen.
● Improved productivity: A good interface does not distract the user, but rather
allows him to concentrate on the task to be done.
● Reduced Errors: Many so-called 'human errors' can be attributed to poor user
interface quality. Avoiding inconsistencies, ambiguities, and so on, reduces user
errors.
● Reduced Training: A poor user interface hampers learning. A well-designed user
interface encourages its users to create proper models and reinforces learning, thus
reducing training time.
● Improved Acceptance: Users prefer systems whose interface is well-designed.
Such systems make information easy to find and provide the information in a form
which is easy to use.
User Interface Design for Online Air Ticket Reservation System:
There are two types of users for the Air Ticket Reservation System. One is the Customer
and the other is the administrator. Both the customer and administrator user interface
would be a Graphical User Interface.
The graphical user interface for the customer home page would be as follows:
Whenever a customer wants to book a flight, he/she needs to register as a user in
this system. The above figure shows the Customer Registration interface, which takes all
necessary information from the user.
The above figure depicts searching for a flight to book. The customer needs to fill
up all the required fields for searching a flight, including the number of passengers;
on clicking, the user will get the available flights corresponding to the data given by the
customer.
The Graphical User Interface would mainly consist of hyperlinks, data entry fields like the
E-mail Id field, push buttons like the Login button, etc.
The Administrator of the website would also have a similar Graphical User Interface. After
an administrator logs onto the system, the home page for the administrator would be as
follows:
What is Coupling and Cohesion? Explain different forms of
it.
Cohesion: Cohesion is a measure of the degree to which the elements of the
module are functionally related. It is the degree to which all elements directed
towards performing a single task are contained in the component. Basically,
cohesion is the internal glue that keeps the module together. A good software
design will have high cohesion.
Types of Cohesion:
● Functional Cohesion: Every essential element for a single
computation is contained in the component. The component
performs exactly one task. It is the ideal situation.
● Sequential Cohesion: An element outputs some data that becomes
the input for other element, i.e., data flow between the parts. It
occurs naturally in functional programming languages.
● Communicational Cohesion: Two elements operate on the same
input data or contribute towards the same output data. Example-
update record in the database and send it to the printer.
● Procedural Cohesion: Elements of procedural cohesion ensure the
order of execution. Actions are still weakly connected and unlikely to
be reusable. Ex- calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
● Temporal Cohesion: The elements are related by their timing:
in a module with temporal cohesion, all the tasks must be
executed in the same time span. This cohesion is typical of the
code that initializes all the parts of the system, where lots of
different activities occur at one time.
● Logical Cohesion: The elements are logically related, not
functionally. Ex- A component reads inputs from tape, disk, and
network. All the code for these functions is in the same component.
The operations are related, but the functions are significantly different.
● Coincidental Cohesion: The elements are unrelated. They have
no conceptual relationship other than their location in the
source code. It is accidental and the worst form of cohesion. Ex- print the
next line and reverse the characters of a string in a single
component.
Figure 2: Cohesion
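As an illustrative sketch (the functions below are invented for this example, not taken from the text), the contrast between the strongest and weakest forms of cohesion can be shown in a few lines of Python:

```python
# Functional cohesion: every element contributes to one single computation.
def gpa(grades):
    """Compute a GPA; nothing here is unrelated to that one task."""
    return sum(grades) / len(grades) if grades else 0.0

# Coincidental cohesion (the worst form): unrelated operations lumped
# together only because they happen to share a source location.
def misc_utils(line, text):
    """Prints the next line AND reverses a string -- no conceptual link."""
    print(line)
    return text[::-1]
```

A reviewer would split `misc_utils` into two components, while `gpa` needs no change.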
List principles of software design.
Design means to draw or plan something to show its look, functions and working.
Software Design is also a process to plan or convert the software requirements
into the steps that are needed to be carried out to develop a software system. There
are several principles that are used to organize and arrange the structural
components of software design. Software designs in which these principles are
applied affect the content and the working process of the software from the
beginning.
These principles are stated below :
Principles Of Software Design :
1. Should not suffer from “Tunnel Vision” –
While designing the process, it should not suffer from “tunnel vision”, which
means that it should not focus only on completing or achieving the aim
but also on other effects.
2. Traceable to analysis model –
The design process should be traceable to the analysis model which
means it should satisfy all the requirements that software requires to
develop a high-quality product.
3. Should not “Reinvent The Wheel” –
The design process should not reinvent the wheel, which means it should not
waste time or effort in creating things that already exist. Otherwise, the
overall development time will increase.
4. Minimize Intellectual distance –
The design process should reduce the gap between real-world problems
and software solutions for that problem meaning it should simply minimize
intellectual distance.
5. Exhibit uniformity and integration –
The design should display uniformity which means it should be uniform
throughout the process without any change. Integration means it should mix
or combine all parts of software i.e. subsystems into one system.
6. Accommodate change
The software should be designed in such a way that it accommodates the
change implying that the software should adjust to the change that is required
to be done as per the user’s need.
7. Degrade gently
The software should be designed in such a way that it degrades gracefully,
which means it should keep working properly (possibly with reduced
functionality) even if an error occurs during the execution.
8. Assessed for quality
The design should be assessed or evaluated for quality, meaning that
during the evaluation the quality of the design needs to be checked and
focused on.
9. Review to discover errors
The design should be reviewed which means that the overall evaluation
should be done to check if there is any error present or if it can be minimized.
Module 5 – Software Testing
• To help the developers to understand the code base and enable them to make changes quickly.
Workflow of Unit Testing:
Unit Testing Techniques:
• Black Box Testing: This testing technique is used in covering the unit tests for input, user
interface, and output parts.
• White Box Testing: This technique is used in testing the functional behavior of the system by
giving the input and checking the functionality output including the internal design structure and
code of the modules.
• Gray Box Testing: This technique is used in executing the relevant test cases, test methods, test
functions, and analyzing the code performance for the modules.
Advantages of Unit Testing:
2. Unit Testing allows developers to learn what functionality is provided by a unit and how to use it,
to gain a basic understanding of the unit's API.
3. Unit testing allows the programmer to refine code and make sure the module works properly.
4. Unit testing enables testing parts of the project without waiting for others to be completed.
Disadvantages of Unit Testing:
2. Unit Testing will not cover all the errors in a module, because some errors surface only
during integration testing.
3. Unit Testing is not efficient for checking errors in the UI (User Interface) part of a module.
4. It requires more time for maintenance when the source code is changed frequently.
5. It cannot cover non-functional testing parameters such as scalability, performance of the
system, etc.
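A minimal unit test can be sketched as follows; the `add_item` function and its behaviour are hypothetical, used only to show the unit-testing idea of exercising one module in isolation (run with `python -m pytest`):

```python
def add_item(cart, item, qty):
    """Unit under test: add qty units of item to a shopping-cart dict."""
    if qty <= 0:
        raise ValueError("qty must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart

# One test per behaviour: a normal case and an error case.
def test_adds_new_item():
    assert add_item({}, "pen", 2) == {"pen": 2}

def test_rejects_non_positive_qty():
    try:
        add_item({}, "pen", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```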
An Integration Test Case differs from other test cases in the sense that it focuses mainly on the interfaces and the flow
of data/information between the modules. Here, priority is given to the integrating links rather
than the unit functions, which are already tested.
Sample Integration Test Cases for the following scenario: Application has 3 modules say ‘Login Page’,
‘Mailbox’ and ‘Delete emails’ and each of them is integrated logically.
Here do not concentrate much on the Login Page testing as it’s already been done in Unit Testing. But
check how it’s linked to the Mail Box Page.
Similarly Mail Box: Check its integration to the Delete Mails Module.
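The Login → Mailbox → Delete scenario above can be sketched in Python; the three functions are invented stand-ins for the real modules, and the tests check only the links between them, not each module's internals:

```python
# --- stand-ins for the three already-unit-tested modules ---
def login(user, password):
    return {"user": user} if password == "secret" else None

def open_mailbox(session):
    return {"owner": session["user"], "mails": ["m1", "m2"]}

def delete_mail(mailbox, mail_id):
    mailbox["mails"].remove(mail_id)
    return mailbox

# --- integration test cases: verify data flow across the interfaces ---
def test_login_links_to_mailbox():
    session = login("alice", "secret")
    box = open_mailbox(session)
    assert box["owner"] == "alice"      # session data reached the Mailbox

def test_mailbox_links_to_delete():
    box = open_mailbox({"user": "alice"})
    assert "m1" not in delete_mail(box, "m1")["mails"]
```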
Integration testing approaches include Big Bang, Incremental (Bottom Up and Top Down), and Sandwich testing.
Big Bang Testing is an Integration testing approach in which all the components or modules are
integrated together at once and then tested as a unit. This combined set of components is considered as
an entity while testing. If all of the components in the unit are not completed, the integration process
will not execute.
Advantages:
Convenient for small systems.
Disadvantages:
Given the sheer number of interfaces that need to be tested in this approach, some interface links to be
tested could easily be missed.
Since the Integration testing can commence only after “all” the modules are designed, the testing team
will have less time for execution in the testing phase.
Since all modules are tested at once, high-risk critical modules are not isolated and tested on priority.
Peripheral modules which deal with user interfaces are also not isolated and tested on priority.
Incremental Testing
In the Incremental Testing approach, testing is done by integrating two or more modules that are
logically related to each other and then tested for proper functioning of the application. Then the other
related modules are integrated incrementally and the process continues until all the logically related
modules are integrated and tested successfully.
Bottom Up
Top Down
Stubs and Drivers are dummy programs used in Integration Testing to facilitate the software testing
activity. These programs act as substitutes for the missing modules in the testing. They do not
implement the entire programming logic of the software module, but they simulate data communication
with the calling module while testing.
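A stub and a driver can be sketched as follows; all the names and the 10% tax rate are hypothetical, chosen only to show the two roles:

```python
# Stub: stands in for a lower-level module that is not ready yet,
# returning a canned answer instead of doing a real lookup.
def fetch_tax_rate_stub(region):
    return 0.10

# Module under test: calls the (stubbed) lower-level dependency.
def total_price(net, region, rate_lookup=fetch_tax_rate_stub):
    return net * (1 + rate_lookup(region))

# Driver: a throwaway caller used in bottom-up testing to exercise a
# lower-level module before its real callers exist.
def driver():
    return total_price(100.0, "EU")   # approx. 110.0
```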
Bottom-up Integration Testing is a strategy in which the lower level modules are tested first. These
tested modules are then further used to facilitate the testing of higher level modules. The process
continues until all modules at top level are tested. Once the lower level modules are tested and
integrated, then the next level of modules are formed.
Diagrammatic Representation:
Advantages:
No time is wasted waiting for all modules to be developed unlike Big-bang approach
Disadvantages:
Critical modules (at the top level of software architecture) which control the flow of application are
tested last and may be prone to defects.
Top Down Integration Testing is a method in which integration testing takes place from top to bottom
following the control flow of software system. The higher level modules are tested first and then lower
level modules are tested and integrated in order to check the software functionality. Stubs are used for
testing if some modules are not ready.
Diagrammatic Representation:
Advantages:
Critical Modules are tested on priority; major design flaws could be found and fixed first.
Disadvantages:
Sandwich Testing
Sandwich Testing is a strategy in which top level modules are tested with lower level modules at the
same time lower modules are integrated with top modules and tested as a system. It is a combination of
Top-down and Bottom-up approaches therefore it is called Hybrid Integration Testing. It makes use of
both stubs as well as drivers.
The Integration test procedure irrespective of the Software testing strategies (discussed above):
System test falls under the black box testing category of software testing.
White box testing is the testing of the internal workings or code of a software application. In contrast,
black box or System Testing is the opposite. System test involves the external workings of the software
from the user’s perspective.
Testing the fully integrated applications including external peripherals in order to check how
components interact with one another and with the system as a whole. This is also called End to End
testing scenario.
Verify thorough testing of every input in the application to check for desired outputs.
That is a very basic description of what is involved in system testing. You need to build detailed test
cases and test suites that test each aspect of the application as seen from the outside without looking
at the actual source code.
As with almost any software engineering process, software testing has a prescribed order in which
things should be done. The following is a list of software testing categories arranged in chronological
order. These are the steps taken to fully test new software in preparation for marketing it:
Unit testing performed on each module or block of code during development. Unit Testing is normally
done by the programmer who writes the code.
Integration testing done before, during and after integration of a new module into the main software
package. This involves testing of each individual code module. One piece of software can contain several
modules which are often created by several different programmers. It is crucial to test each module’s
effect on the entire program model.
System testing done by a professional testing agent on the completed software product before it is
introduced to the market.
Acceptance testing – beta testing of the product done by the actual end users.
Types of system testing a large software development company would typically use
• Usability Testing – mainly focuses on the user’s ease to use the application, flexibility in handling
controls and ability of the system to meet its objectives
• Load Testing – is necessary to know that a software solution will perform under real-life loads.
• Regression Testing – involves testing done to make sure none of the changes made over the
course of the development process have caused new bugs. It also makes sure no old bugs
appear from the addition of new software modules over time.
• Recovery Testing – is done to demonstrate a software solution is reliable, trustworthy and can
successfully recoup from possible crashes.
• Migration Testing – is done to ensure that the software can be moved from older system
infrastructures to current system infrastructures without any issues.
• Functional Testing – Also known as functional completeness testing, Functional Testing involves
trying to think of any possible missing functions. Testers might make a list of additional
functionalities that a product could have to improve it during functional testing.
• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Multiple Condition Coverage
• Finite State Machine Coverage
• Path Coverage
• Control flow testing
• Data flow testing
Basis path testing is a technique of selecting paths in the control flow graph that provide a basis set
of execution paths through the program or module.
Since this testing is based on the control structure of the program, it requires complete knowledge of
the program’s structure. To design test cases using this technique, four steps are followed:
Basis path testing, a structured testing or white box testing technique used for
designing test cases intended to examine all possible paths of execution at least once.
Creating and executing tests for all possible paths results in 100% statement coverage
and 100% branch coverage.
Example:
Function fn_delete_element (int value, int array_size, int array[])
{
    int i;                              // node 1
    location = array_size + 1;

    for i = 1 to array_size             // node 2
        if ( array[i] == value )        // node 3
            location = i;               // node 4
        end if;
    end for;

    for i = location to array_size      // node 5
        array[i] = array[i+1];          // node 6
    end for;
    array_size --;                      // node 7
}
Steps to Calculate the independent paths
Step 1 : Draw the Flow Graph of the Function/Program under consideration as shown
below:
Path 1: 1-2-5-7
Path 2: 1-2-5-6-7
Path 3: 1-2-3-2-5-6-7
Path 4: 1-2-3-4-2-5-6-7
M = E – N + 2P
where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
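Applying the formula to the flow graph of fn_delete_element above (the edge list is inferred from the four independent paths listed, which is an assumption about how the graph is drawn):

```python
# Cyclomatic complexity: M = E - N + 2P
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Edges of the flow graph, read off the four listed paths (nodes 1..7).
edges = [(1, 2), (2, 3), (3, 4), (4, 2), (3, 2),
         (2, 5), (5, 6), (6, 7), (5, 7)]

M = cyclomatic_complexity(len(edges), 7)   # 9 - 7 + 2 = 4
```

The result, M = 4, matches the four independent paths enumerated above.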
A = 10
IF B > C THEN
A = B
ELSE
A = C
ENDIF
Print A
Print B
Print C
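The pseudocode above can be rendered in Python to show decision coverage: two test cases, one where B > C is true and one where it is false, execute the decision both ways:

```python
def max_then_report(b, c):
    a = 10
    if b > c:
        a = b
    else:
        a = c
    return a, b, c   # stands in for the three Print statements

case_true  = max_then_report(5, 3)   # exercises the THEN branch
case_false = max_then_report(3, 5)   # exercises the ELSE branch
```

Note that a single test case would already give 100% statement coverage of the THEN or ELSE path it takes, but both cases are needed for full decision coverage.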
The above Black-Box can be any software system you want to test. For Example, an
operating system like Windows, a website like Google, a database like Oracle or
even your own custom application.
Here are the generic steps followed to carry out any type of Black Box Testing.
• Initially, the requirements and specifications of the system are examined.
• Tester chooses valid inputs (positive test scenario) to check whether SUT
processes them correctly. Also, some invalid inputs (negative test scenario)
are chosen to verify that the SUT is able to detect them.
• Tester determines expected outputs for all those inputs.
• Software tester constructs test cases with the selected inputs.
• The test cases are executed.
• Software tester compares the actual outputs with the expected outputs.
• Defects if any are fixed and re-tested.
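The generic steps can be sketched as a tiny example; `sut_divide` is a hypothetical system under test, and the tester only compares actual outputs against expected outputs without reading its code:

```python
def sut_divide(a, b):
    """System under test (its internals are treated as unknown)."""
    if b == 0:
        return "error"   # the SUT must detect the invalid input
    return a / b

# Test cases: (input, expected output) pairs chosen by the tester.
test_cases = [
    ((10, 2), 5.0),       # valid input (positive scenario)
    ((7, 0), "error"),    # invalid input (negative scenario)
]

# Execute and compare actual vs. expected outputs.
results = [sut_divide(*inp) == expected for inp, expected in test_cases]
```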
Types of Black Box Testing
There are many types of Black Box Testing but the following are the prominent
ones –
• Functional testing – This black box testing type is related to the functional
requirements of a system; it is done by software testers.
• Non-functional testing – This type of black box testing is not related to
testing of specific functionality, but non-functional requirements such as
performance, scalability, usability.
• Regression testing – Regression Testing is done after code fixes, upgrades
or any other system maintenance to check the new code has not affected the
existing code.
1. Corrective maintenance:
Corrective maintenance of a software product may be essential either
to rectify some bugs observed while the system is in use, or to
enhance the performance of the system.
2. Adaptive maintenance:
This includes modifications and updates when the customers need
the product to run on new platforms, on new operating systems, or
when they need the product to interface with new hardware and
software.
3. Perfective maintenance:
A software product needs maintenance to support the new features
that the users want or to change different types of functionalities of the
system according to the customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates to
prevent future problems in the software. It aims to address problems
that are not significant at this moment but may cause serious issues
in the future.
Corrective software maintenance is what one would typically associate with the
maintenance of any kind. Correct software maintenance addresses the errors and faults
within software applications that could impact various parts of your software, including
the design, logic, and code. These corrections usually come from bug reports that were
created by users or customers – but corrective software maintenance can help to spot
them before your customers do, which can help your brand’s reputation.
1. Collecting Information:
This step focuses on collecting all possible information (e.g., source
code, design documents, etc.) about the software.
8. Generate documentation:
Finally, in this step, the complete documentation including SRS,
design document, history, overview, etc. are recorded for future use.
Module 6 - Software Configuration Management, Quality
Assurance and Maintenance
2. Version Control
Version control is a set of procedures and tools for managing the creation and use of multiple
occurrences of objects in the SCM repository.
Required version control capabilities:
• An SCM repository that stores all relevant configuration objects.
• A version management capability that stores all versions of a configuration object (or
enables any version to be constructed using differences from past versions).
• A make facility that enables the software engineer to collect all relevant configuration
objects and construct a specific version of the software.
• Issues tracking (bug tracking) capability that enables the team to record and track the status
of all outstanding issues associated with each configuration object.
The SCM repository maintains a change set:
• Serves as a collection of all changes made to a baseline configuration:
• Used to create a specific version of the software.
• Captures all changes to all files in the configuration along with the reason for changes and
details of who made the changes and when.
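As a toy sketch (not a real SCM tool), a repository offering the version-storage capability listed above might look like this in Python:

```python
class Repository:
    """Stores every version of a configuration object by name."""

    def __init__(self):
        self._versions = {}   # object name -> list of stored versions

    def commit(self, name, content):
        """Store a new version; return its version number (1-based)."""
        self._versions.setdefault(name, []).append(content)
        return len(self._versions[name])

    def checkout(self, name, version=0):
        """Return a specific version, or the latest if none is given."""
        history = self._versions[name]
        return history[version - 1] if version > 0 else history[-1]

repo = Repository()
v1 = repo.commit("config.txt", "debug=off")
v2 = repo.commit("config.txt", "debug=on")
```

A real SCM tool would additionally store diffs, reasons for changes, authors, and timestamps, as the text describes.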
• Product metrics − Describes the characteristics of the product such as size, complexity,
design features, performance, and quality level.
• Process metrics − These characteristics can be used to improve the development and
maintenance activities of the software.
• Project metrics − These metrics describe the project characteristics and execution. Examples
include the number of software developers, the staffing pattern over the life cycle of the
software, cost, schedule, and productivity.
This metric can be calculated for the entire development process, for the front-end before code
integration and for each phase. It is called early defect removal when used for the front-end and
phase effectiveness for specific phases. The higher the value of the metric, the more effective the
development process and the fewer the defects passed to the next phase or to the field. This metric is
a key concept of the defect removal model for software development.
If BMI is larger than 100, it means the backlog is reduced. If BMI is less than 100, then the backlog increased.
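Assuming the usual definition of the Backlog Management Index as the ratio of problems closed to problems arrived in a period, times 100, the rule above reads directly in code:

```python
# BMI = (problems closed in the period / problems arrived in the period) * 100
def bmi(closed, arrived):
    return closed / arrived * 100

# BMI > 100: the backlog shrank; BMI < 100: the backlog grew.
shrinking = bmi(120, 100)   # 120.0 -> backlog reduced
growing   = bmi(80, 100)    # 80.0  -> backlog increased
```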
Q.4] What are Software Risks? Write a note on RMMM for delayed projects.
Ans: A risk is a potential problem – it might happen and it might not.
Conceptual definition of risk:
• Risk concerns future happenings
• Risk involves change in mind, opinion, actions, places, etc.
• Risk involves choice and the uncertainty that choice entails
1. Project Risk
Project risks arise in the software development process; they basically affect budget, schedule,
staffing, resources, and requirements. When project risks become severe, the total cost of the project
increases.
2. Technical Risk
These risks affect the quality and timeliness of the project. If a technical risk becomes reality, potential
design, implementation, interface, verification, and maintenance problems get created. Technical risks
occur when a problem turns out to be harder to solve than expected.
3. Business Risk
When feasibility of software product is in suspect then business risks occur. Business risks can be
classified as follows
i. Market Risk
When a good quality software product is built but there is no customer for it, this is called
market risk (i.e. no market for the product).
ii. Strategic Risk
When the product is built but does not follow the company’s business policies, it brings
strategic risk.
iii. Sales Risk
When the product is built but it is not clear how to sell it, the situation brings sales risk.
iv. Management Risk
When senior management or the responsible staff leaves the organization, management risk
occurs.
v. Budget Risk
Losing the overall budget of the project is called budget risk.
Known risks are those that are identified by evaluating the project plan. There are two types of known
risks:
a. Predictable Risk
Predictable risk are those that can be identified in advance based on past project experience
b. Unpredictable Risk
Unpredictable risks are those that cannot be guessed earlier.
RMMM
RMMM stands for risk mitigation, monitoring and management. The strategy for handling risk has
three components:
• Risk Mitigation (risk avoidance)
• Risk Monitoring
• Risk Management
Risk mitigation means preventing the risk from occurring (risk avoidance). The following steps should
be taken to mitigate risks:
1. Communicate with the concerned staff to identify probable risks.
2. Find out and eliminate all those causes that can create risk before the project starts.
3. Develop a policy in the organization which will help to continue the project even if some
staff leave the organization.
4. Everybody in the project team should be acquainted with the current development activity.
5. Maintain the corresponding documents in a timely manner.
6. Conduct timely reviews in order to speed up the work.
7. For every critical activity during software development, provide additional staff if
required.
• Risk Monitoring
In the risk monitoring process, the following things must be monitored by the project manager:
1. The approach and behaviour of the team members as the pressure of the project varies.
2. The degree to which the team performs with the spirit of “team-work”.
3. The type of cooperation between the team members.
4. The types of problems that occur among team members.
5. Availability of jobs within and outside the organization.
The objectives of risk monitoring are:
1. To check whether the predicted risks really occur or not.
2. To ensure that the steps defined to avoid the risks are applied properly.
3. To gather information that can be useful for analyzing the risks.
• Risk Management
The project manager performs this task when a risk becomes a reality. If the project manager has
applied risk mitigation effectively, then it becomes much easier to manage the risks.
For example, consider a scenario where many people are leaving the organization. If sufficient
additional staff is available, the current development activity is known to everybody in the team, and
up-to-date, systematic documentation is available, then any newcomer can easily understand the
current development activity. This ultimately helps in continuing the work without interruption.
Q.5] Discuss the different categories of risk that help to define impact values in a
risk table.
Ans: Predictable risk categories to be considered in a risk table:
1. Product size – risks associated with overall size of the software to be built
2. Business impact – risks associated with constraints imposed by management or the marketplace.
3. Customer characteristics – risks associated with sophistication of the customer and the developer's
ability to communicate with the customer in a timely manner.
4. Process definition – risks associated with the degree to which the software process has been defined
and is followed.
5. Development environment – risks associated with availability and quality of the tools to be used to
build the project.
6. Technology to be built – risks associated with complexity of the system to be built and the "newness"
of the technology in the system.
7. Staff size and experience – risks associated with overall technical and project experience of the
software engineers who will do the work.
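A risk table row can be sketched as follows; the example risks, probabilities, and the impact scale (1 = catastrophic … 4 = negligible) are illustrative assumptions, not values from the text:

```python
# Each row: risk description, one of the categories above,
# estimated probability, and an impact value.
risk_table = [
    {"risk": "size estimate too low", "category": "Product size",
     "probability": 0.6, "impact": 2},
    {"risk": "staff turnover", "category": "Staff size and experience",
     "probability": 0.3, "impact": 1},
]

# Sort so that high-impact (low impact value), high-probability
# risks surface at the top of the table.
risk_table.sort(key=lambda r: (r["impact"], -r["probability"]))
```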
The review meeting: Each review meeting should be held considering the following constraints.
Involvement of people:
• Between three and five people should be involved in the review.
• Advance preparation should occur, but it should require no more than two hours of work
per person.
• The duration of the review meeting should be less than two hours. Given these constraints, it
should be clear that an FTR focuses on a specific (and small) part of the overall software.
At the end of the review, all attendees of the FTR must decide whether to:
• Accept the product without any modification;
• Reject the product due to serious errors (once corrected, another review must be performed); or
• Accept the product provisionally (minor errors were encountered and must be corrected, but no
additional review is required).
Once the decision is made, all FTR attendees complete a sign-off indicating their participation in
the review and their agreement with the findings of the review team.
Review guidelines :- Guidelines for conducting formal technical reviews should be established
in advance. These guidelines must be distributed to all reviewers, agreed upon, and then followed. A
review that is uncontrolled can often be worse than no review at all. The following represents a
minimum set of guidelines for FTR:
1. Review the product, not the producer.
2. Take written notes (record purpose)
3. Limit the number of participants and insists upon advance preparation.
4. Develop a checklist for each product that is likely to be reviewed.
5. Allocate resources and time schedule for FTRs in order to maintain time schedule.
6. Conduct meaningful training for all reviewers in order to make reviews effective.
7. Review earlier reviews, which serve as the basis for the current review being conducted.
8. Set an agenda and maintain it.
9. Separate the problem areas, but do not attempt to solve every problem noted.
10. Limit debate and rebuttal.
Q.7] Short Note on Risk management.
Ans: Risk management is the process of identifying, assessing, and prioritizing the risks to minimize,
monitor, and control the probability of unfortunate events.
Risk Management Process:
Risk Management process can be easily understood with use of the following workflow:
A software project can be concerned with a large variety of risks. In order to be able to systematically
identify the significant risks that might affect a software project, it is essential to classify risks into
different classes. The project manager can then check which risks from each class are relevant to the
project.
There are three main classifications of risks which can affect a software project:
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and
customer-related problems. A vital project risk is schedule slippage. Since software is
intangible, it is very tough to monitor and control a software project; it is very difficult to control
something that cannot be seen. For any manufacturing project, such as the manufacturing
of cars, the manager can see the product taking shape.
2. Technical risks: Technical risks concern potential method, implementation, interfacing, testing,
and maintenance issues. They also include an ambiguous specification, incomplete specification,
changing specification, technical uncertainty, and technical obsolescence. Most technical risks
appear due to the development team's insufficient knowledge about the project.
3. Business risks: This type of risk includes the risk of building an excellent product that no one needs,
losing budgetary or personnel commitments, etc.
Disadvantages of SQA:
There are a number of disadvantages of quality assurance, such as the need to add more
resources and to employ more workers to help maintain quality.