Professional Documents
Culture Documents
Reliability
Maintainability
Usability
Portability
Correctness
Efficiency
Security
Testability
Flexibility
Scalability
Compatibility
Supportability
In other words, testing is a collection of techniques for determining the accuracy of an
application against its predefined specification; however, it cannot identify all the
defects in the software.
3. For larger projects, implementing an RMMM plan may itself turn into another
tedious project.
4. RMMM does not guarantee a risk-free project; in fact, risks may also come up after
the project is delivered.
The Personal Software Process (PSP) is a framework that assists engineers in
measuring and greatly improving the way they work. It helps them develop their skills at a
personal level and improve the way they plan and make estimations against those plans.
1. Behavioral Diagrams
Activity Diagram
Interaction Diagram
2. Structure Diagrams
Class Diagram
Object Diagram
Component Diagram
Deployment Diagram
3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It
is used to estimate a project’s resources and deliverables, as well as to
determine costs, productivity, and flaws.
PART B
(1) Explain briefly (i) the Incremental Model and (ii) the RAD Model. 10M
(i) Incremental Model
• The incremental process model is also known as the Successive version model.
• First, a simple working system implementing only a few basic features is built and then
that is delivered to the customer.
• Thereafter, many successive iterations/versions are implemented and delivered to the
customer until the desired system is released.
A, B, and C are modules of Software Products that are incrementally developed and delivered.
1. Staged Delivery Model: Construction of only one part of the project at a time.
2. Parallel Development Model
• Different subsystems are developed at the same time. It can decrease the calendar time
needed for the development, i.e. TTM (Time to Market) if enough resources are
available.
• Various phases in RAD are Requirements Gathering, Analysis and Planning, Design, Build
or Construction, and finally Deployment.
• Multiple teams work on developing the software system using the RAD model, as
shown in the figure.
• This model consists of 4 basic phases:
Requirements Planning – This phase involves discussions between users and analysts to
identify the business functions the system must support.
User Description – This phase consists of taking user feedback and building the prototype
using developer tools.
Construction – In this phase, refinement of the prototype and delivery takes place.
Cutover – All the interfaces between the independent modules developed by separate teams
have to be tested properly. The use of powerful automated tools and subparts makes testing
easier. This is followed by acceptance testing by the user.
Objectives of CMMI
• Staged Representation
• Continuous Representation
Various methods that are used to validate the system include black-box testing,
white-box testing, integration testing, and unit testing.
There are various techniques that can be used to validate the requirements. They include:
During these checks, we also check the traceability level between all the requirements.
For this, the creation of a traceability matrix is required. This matrix ensures that all the
requirements are being considered seriously and everything that is specified is justified.
We also check the format of the requirements during these checks. We see whether the
requirements are clear and well-written.
We can also simply reach out to the users and stakeholders and get their feedback.
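The traceability check described above can be sketched in code. This is a minimal illustration, not a real tool; the requirement and test-case IDs are invented for the example:

```python
# Hypothetical sketch: a requirements-to-tests traceability matrix as a dict.
# All requirement and test-case IDs are invented for illustration.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],  # requirement covered by two test cases
    "REQ-2": ["TC-3"],
    "REQ-3": [],                # no test yet -- flagged by the check below
}

def uncovered(matrix):
    """Return requirement IDs that have no associated test case."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(traceability))  # ['REQ-3']
```

A real traceability matrix would typically live in a spreadsheet or requirements-management tool, but the underlying coverage check is the same.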
• Test Design – During test designing, we follow a small procedure where we first finalize
the testing team, then build a few testing scenarios. Functional tests can be derived
from the requirements specification itself where each requirement has an associated
test.
• In contrast, the non-functional requirements are harder to test, as each test has to be
traced back to its requirement. The aim of this is to find errors in the
specification or details that have been missed.
• A checklist is prepared consisting of various standards and the reviewers check the
boxes to provide a formal review. After that, a final approval sign-off is done.
• Data modeling is the process of creating a simplified diagram of a software system and
the data elements it contains, using text and symbols to represent the data and how it
flows.
• A data model can be thought of as a flowchart that illustrates data entities, their
attributes and the relationships between entities.
• New requirements emerge during the process as business needs change and a better
understanding of the system develops.
The business and technical environment of the system changes during development.
• Notations − In these diagrams, the objects that participate in the interaction are shown
using vertices. The links that connect the objects are used to send and receive
messages.
• This component diagram shows the structure of the ATM system, which consists of the
software components and their interfaces, and how they work together.
• The component diagram of the ATM system has 8 components: the account
database, transaction database, balance inquiry, withdraw, deposit, loan, card, and the
user.
• The components of the ATM system expose both required and provided interfaces:
they act as providers for the transaction database and require the accounts
database.
The ATM System UML component diagram explains the sketch of the required software
and hardware components and the dependencies between them.
(c) Explain software design? Explain data flow oriented design? 10M
Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirements, as described in the SRS (Software Requirement Specification) document, in a
form that is easily implementable using a programming language.
The software design phase is the step in the SDLC (Software Development Life Cycle) that moves
the concentration from the problem domain to the solution domain. In software design, we
consider the system to be a set of components or modules with clearly defined behaviors and
boundaries.
DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is
represented by DFD. It also gives insight into the inputs and outputs of each entity and the
process itself. DFD does not have control flow and no loops or decision rules are present.
Specific operations, depending on the type of data, can be explained by a flowchart. It is a
graphical tool, useful for communicating with users, managers and other personnel. It is useful
for analyzing existing as well as proposed systems.
Data Flow Diagram can be represented in several ways. The DFD belongs to structured-analysis
modeling tools. Data Flow diagrams are very popular because they help us to visualize the
major steps and data involved in software-system processes.
Characteristics of DFD
DFDs are quite general and are not limited to problem analysis for software
requirements specification.
DFDs are very useful in understanding a system and can be effectively used during
analysis.
It views a system as a function that transforms the inputs into desired outputs.
There are various types of conventional testing techniques that can be used during the testing
process. Some of the commonly used techniques include:
1. Unit testing: This involves testing individual modules or components of the software to
ensure that they perform as expected.
2. Integration testing: This involves testing the software modules in combination to ensure
that they work together correctly.
3. System testing: System testing is a type of testing that verifies a software product's
integration and completeness. A system test's objective is to gauge how well the system
requirements are met from beginning to end. In most cases, the software is merely a
small portion of a bigger computer-based system.
4. Validation Testing: The process of evaluating software during the development process
or at the end of the development process to determine whether it satisfies specified
business requirements. Validation Testing ensures that the product actually meets the
client's needs.
5. Acceptance testing: This involves testing the software from the end-user's perspective
to ensure that it meets their needs and requirements.
6. Regression testing: This involves rerunning previously executed test cases to ensure
that the changes made to the software have not introduced new defects or errors.
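As a minimal illustration of the first technique, unit testing in Python is commonly done with the standard unittest module. The add() function here is a hypothetical stand-in for a real module under test:

```python
# Minimal unit-testing sketch using Python's built-in unittest module.
# add() is an illustrative stand-in for any module/function under test.
import unittest

def add(a, b):
    """The unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    """Unit tests exercise one component in isolation from the rest of the system."""

    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The same test cases, rerun after a code change, double as a simple regression suite.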
(b) Discuss a framework for product metrics 5M
A fundamental framework and a set of basic principles for the measurement of product
metrics for software should be established. In terms of software engineering:
product metrics assist in the evaluation of analysis and design models, give an indication
of complexity, and facilitate the design of more effective tests. Steps for an effective
measurement process are:
System Testing is a black-box testing technique. It is performed after integration testing
and before acceptance testing.
Validation testing
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.
Activities:
Unit Testing
Integration Testing
System Testing
Portability: A software product is said to be portable if it can easily be made to work in various
operating system environments, on multiple machines, with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.
Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS
document have been correctly implemented.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the
product can be easily modified, etc.
Software quality assurance is a planned and systematic pattern of all actions necessary to provide
adequate confidence that an item or product conforms to established technical requirements. It is a
set of activities designed to evaluate the process by which the products are developed or
manufactured.
o Formal technical reviews that are conducted throughout the software process
Reactive risk management catalogues all previous accidents and documents them to find the
errors which lead to the accident. Preventive measures are recommended and implemented
via the reactive risk management method. This is the earlier model of risk management.
Reactive risk management can cause serious delays in a workplace due to unpreparedness
for new accidents. This unpreparedness makes the resolution process complex, as the cause of the
accident needs investigation, and the solutions involve high costs plus extensive modification.
Contrary to reactive risk management, proactive risk management seeks to identify all relevant
risks before an incident occurs. Organizations today have to deal with an era of
rapid environmental change caused by technological advancements, deregulation, fierce
competition, and increasing public concern. So, a risk management approach that relies on past
incidents is not a good choice for any organization. Therefore, new thinking in risk management
was necessary, which paved the way for proactive risk management.
SET -2
PART A
1 Describe a layered technology of software engineering?
2M
Software Engineering is a fully layered technology; to develop software we need to go
from one layer to another. All the layers are connected, and each layer demands the
fulfillment of the previous layer. The layers are:
1. A Quality Focus
2. Process
3. Method
4. Tools
Software not only makes your computer hardware perform important tasks, but can
also help your business work more efficiently. The right software can even lead to new
ways of working.
Maintaining software quality helps reduce problems and errors in the final product. It
also allows a company to meet the expectations and requirements of customers.
(1)Dependency
(2)Association
(3)Generalization
(4)Realization
1. Recovery testing: Systems must recover from faults and resume processing within a
prespecified time.
2. Security Testing: This verifies that protection mechanisms built into a system will
protect it from improper penetrations.
2. Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics.
3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used
to estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.
PART B
1 (a) Define Software Myth? Explain briefly about the types of
myths? 10M
• Myths are false beliefs or misleading attitudes, often set in people's minds, that
cause trouble for managers, developers, and users.
1. Management Myths
2. Customer Myths
3. Practitioner’s Myths
1.Management Myths
Managers are often under pressure for software development under a tight budget, improved
quality, and a packed schedule, often believing in some software myths. Following are some
management myths.
Myth 1
Manuals containing simple procedures, principles, and standards are enough for developers to
acquire all the information they need for software development.
Myth 2
Myth 3
If a project is outsourced to a third party, we could just relax and wait for them to build it.
2. Customer Myths
Customer Myths are generally due to false expectations by customers, and these myths end up
leaving customers with dissatisfaction with the software developers.
Myth 1
A vague collection of software objectives, rather than detailed requirements, is enough to begin
programming with.
Myth 2
Software is flexible, and developers can accommodate any change later. Developers can
quickly take care of these changes in requirements.
3. Practitioners Myths
Developers often work under management pressure to complete software within a timeframe,
with fewer resources often believing in these software myths. Following are some practitioners’
myths.
Myth 1
Once the software is developed or the code is delivered to the customer, the developer's work
ends.
Myth 2
Software testing could only be possible when the software program starts running.
Myth 3
• The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a
systematic and iterative approach to software development.
• In its diagrammatic representation, it looks like a spiral with many loops. The exact
number of loops of the spiral is unknown and can vary from project to project.
• Each loop of the spiral is called a Phase of the software development process.
Planning: The first phase of the Spiral Model is the planning phase, where the scope of the
project is determined and a plan is created for the next iteration of the spiral.
Risk Analysis: In the risk analysis phase, the risks associated with the project are identified and
evaluated.
Engineering: In the engineering phase, the software is developed based on the requirements
gathered in the previous iteration.
Evaluation: In the evaluation phase, the software is evaluated to determine if it meets the
customer’s requirements and if it is of high quality.
Planning: The next iteration of the spiral begins with a new planning phase, based on the results
of the evaluation.
The Spiral Model is often used for complex and large software development projects, as it
allows for a more flexible and adaptable approach to software development. It is also well-
suited to projects with significant uncertainty or high levels of risk.
QFD helps to achieve structured planning of a product by enabling the development team to
clearly specify the customer's needs and expectations of the product and then evaluate each
part of the product systematically.
DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is
represented by DFD. It also gives insight into the inputs and outputs of each entity and the
process itself. DFD does not have control flow and no loops or decision rules are present.
Specific operations, depending on the type of data, can be explained by a flowchart. It is a
graphical tool, useful for communicating with users, managers and other personnel; it is useful
for analyzing existing as well as proposed systems.
• Data modeling is the process of creating a simplified diagram of a software system and
the data elements it contains, using text and symbols to represent the data and how it
flows.
• A data model can be thought of as a flowchart that illustrates data entities, their
attributes and the relationships between entities.
(c) Explain the class and sequence diagrams with an example. 10M
• Sequence diagrams are interaction diagrams that illustrate the ordering of
messages according to time.
• Notations − These diagrams are in the form of two-dimensional charts. The objects that
initiate the interaction are placed on the x–axis.
• The messages that these objects send and receive are placed along the y–axis, in the
order of increasing time from top to bottom.
• First of all, Order and Customer are identified as the two elements of the system. They
have a one-to-many relationship because a customer can have multiple orders.
• The Order class is an abstract class, and it has two concrete classes (an inheritance
relationship), SpecialOrder and NormalOrder.
• The two inherited classes have all the properties of the Order class. In addition, they
have additional functions such as dispatch() and receive().
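The class diagram described above can be approximated in code. This is an illustrative sketch only; the method bodies are assumptions, since the diagram specifies names and relationships but not behavior:

```python
# Sketch of the Order/Customer class diagram in code.
# Class and method names follow the text; the bodies are illustrative assumptions.
from abc import ABC, abstractmethod

class Order(ABC):
    """Abstract class, as in the diagram: cannot be instantiated directly."""

    @abstractmethod
    def dispatch(self):
        ...

    @abstractmethod
    def receive(self):
        ...

class SpecialOrder(Order):  # inheritance relationship
    def dispatch(self):
        return "SpecialOrder dispatched"

    def receive(self):
        return "SpecialOrder received"

class NormalOrder(Order):   # inheritance relationship
    def dispatch(self):
        return "NormalOrder dispatched"

    def receive(self):
        return "NormalOrder received"

class Customer:
    """One-to-many association: a customer can have multiple orders."""

    def __init__(self, name):
        self.name = name
        self.orders = []

    def place(self, order):
        self.orders.append(order)

c = Customer("Alice")
c.place(SpecialOrder())
c.place(NormalOrder())
print([o.dispatch() for o in c.orders])
# ['SpecialOrder dispatched', 'NormalOrder dispatched']
```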
3 (a) Discuss architectural styles and patterns?
5M
Architectural Style
The architectural style shows how we organize our code, or what the system looks
like from a 10,000-foot view, representing the highest level of abstraction of our
system design. Furthermore, when building the architectural style of our system, we
focus on layers and modules and how they communicate with each other. There
are different types of architectural styles:
Structure architectural styles: such as layered, pipes and filters and component-based
styles.
Distributed systems: such as service-oriented, peer to peer style, object request broker,
and cloud computing styles.
Architectural Patterns
Software Architecture:
Software architecture is the blueprint of building software. It shows the overall structure
of the software, the collection of components in it, and how they interact with one
another while hiding the implementation. There are various ways to organize the
components in software architecture. And the different predefined organization of
components in software architectures is known as software architecture patterns.
Layered Pattern
Client-Server Pattern
Event-Driven Pattern
Microkernel Pattern
Microservices Pattern
Design Patterns
Design patterns are accumulated best practices and experiences that software
professionals have used over the years, arrived at by trial and error, to solve general
problems faced during software development.
The design patterns set contains 23 patterns, categorized into three main sets:
1. Creational design patterns: Provide a way to create objects while hiding the creation
logic.
The creational design patterns are: Factory Method pattern, Abstract Factory pattern,
Singleton pattern, Builder pattern, and Prototype pattern.
2. Structural design patterns: Concern how classes and objects are composed to form
larger structures.
The structural design patterns are: Adapter pattern, Bridge pattern, Filter pattern,
Composite pattern, and Decorator pattern.
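As a short illustration of one creational pattern from the list, here is a minimal Singleton sketch in Python. Overriding __new__ is one common idiom for this pattern, not the only one:

```python
# Minimal Singleton sketch: at most one instance of the class ever exists.
class Singleton:
    _instance = None  # the single shared instance, created lazily

    def __new__(cls):
        # Create the instance only on the first call; reuse it afterwards.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same object
```

This hides the creation logic from callers, which is the defining trait of the creational patterns described above.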
Abstraction: It means displaying only essential information and hiding the details.
Encapsulation: The wrapping up of data and functions into a single unit is known as
Encapsulation.
Inheritance: It is the process of deriving new class from the existing ones.
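The three concepts can be shown together in a short sketch; the Account and SavingsAccount classes are invented purely for illustration:

```python
# Illustrative sketch of abstraction, encapsulation, and inheritance.
class Account:                       # Encapsulation: data + functions in one unit
    def __init__(self, balance):
        self._balance = balance      # kept "private" by Python convention

    def deposit(self, amount):       # Abstraction: callers see only the interface,
        self._balance += amount      # not how the balance is stored or updated

    def balance(self):
        return self._balance

class SavingsAccount(Account):       # Inheritance: a new class derived from Account
    def add_interest(self, rate):
        self.deposit(self._balance * rate)

s = SavingsAccount(100)
s.add_interest(0.10)
print(s.balance())  # 110.0
```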
Product metrics assist in evaluation of analysis and design models, gives an indication
of the complexity and facilitate design of more effective testing. Steps for an effective
measurement process are:
Top Down Testing
In top down integration, testing takes place from top to bottom, i.e. system integration
begins with the main (top-level) modules.
If the invoked sub-module hasn't been developed yet, stubs are used for temporarily
simulating that sub-module.
Higher-level modules are tested & integrated first, and then lower-level modules are
tested & integrated.
Main modules are created first, and then sub-modules are called from them.
This approach is advantageous if the significant bugs occur in the top modules.
Bottom Up Testing
In bottom up integration, testing takes place from bottom to top, i.e. system
integration begins with the lower-level modules or sub-modules.
If the main module hasn't been developed yet, drivers are used for temporarily
simulating that main module.
Lower level modules are tested & integrated first and then higher level modules are
tested & integrated.
Different smaller modules are developed and then integrated with the main module.
This approach is advantageous if the crucial defects occur in the lower modules.
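The roles of stubs (top-down) and drivers (bottom-up) described above can be sketched as follows; all module names and values are illustrative assumptions:

```python
# Sketch of integration-testing scaffolding: stubs vs. drivers.
# All module names and values are invented for illustration.

def tax_stub(amount):
    """Stub: a canned stand-in for a not-yet-built lower-level tax module."""
    return 0.0

def compute_total(amount, tax_fn):
    """Higher-level (main) module under top-down test."""
    return amount + tax_fn(amount)

# Top-down: test the main module first, with the stub simulating the sub-module.
assert compute_total(100, tax_stub) == 100

def real_tax(amount):
    """A finished lower-level module."""
    return amount * 0.05

def driver():
    """Driver: a temporary caller that exercises the lower-level module."""
    return real_tax(200)

# Bottom-up: the driver tests the sub-module before the main module exists.
assert driver() == 10.0
```

Once both levels exist, the stub and driver are replaced by the real modules and the integration is retested.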
(c) Discuss and compare black box testing with white box testing. 10M
Software Testing can be majorly classified into two categories:
Black box testing is a testing technique in which the internal workings of the software
are not known to the tester. The tester only focuses on the input and output of the
software. Whereas, White box testing is a testing technique in which the tester has
knowledge of the internal workings of the software, and can test individual code
snippets, algorithms and methods.
Black box testing is easy to use, requires no programming knowledge and is effective in
detecting functional issues. However, it may miss some important internal defects that
are not related to functionality. White box testing is effective in detecting internal
defects, and ensures that the code is efficient and maintainable. However, it requires
programming knowledge and can be time-consuming.
Black box testing techniques include:
Equivalence partitioning
Error guessing
Non-functional testing
Regression testing
White box testing techniques include:
Branch testing
Path testing
Loop testing
Condition testing
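As a small illustration of equivalence partitioning, a black-box technique from the list above, suppose (hypothetically) a validator accepts ages 18 to 60. We partition the input domain into classes and test one representative value per class:

```python
# Equivalence-partitioning sketch: the validator and its range are
# hypothetical, chosen only to demonstrate the technique.
def valid_age(age):
    """Accepts ages in the (assumed) valid range 18..60."""
    return 18 <= age <= 60

# One representative value from each equivalence class of the input domain:
partitions = {
    "below range": 10,  # invalid class
    "in range": 30,     # valid class
    "above range": 70,  # invalid class
}
results = {name: valid_age(value) for name, value in partitions.items()}
print(results)
# {'below range': False, 'in range': True, 'above range': False}
```

Testing one value per class gives coverage of the whole input domain without enumerating every possible input, which is the point of the technique.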
Risk Mitigation:
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks as follows.
Risk Monitoring:
It is an activity used for project tracking.
It has the following primary objectives as follows.
4. To assess which problems are caused by which risks throughout the project.
Risk Management and Planning:
It assumes that the mitigation activity has failed and the risk is a reality. This task is done by
the project manager when a risk becomes a reality and causes severe problems. If the project
manager has effectively used mitigation to remove risks, the remaining risks are
easier to manage. The plan shows the response that will be taken for each risk
by a manager. The main output of the risk management plan is the risk register, which
describes and focuses on the predicted threats to a software project.