
PART A

1. Define Software Engineering. 2M


Software Engineering is the process of designing, developing, testing, and maintaining
software. It is a systematic and disciplined approach to software development that
aims to create high-quality, reliable, and maintainable software. Software engineering
includes a variety of techniques, tools, and methodologies, including requirements
analysis, design, testing, and maintenance.

2 What is requirement engineering? 2M


Requirements engineering is the process of identifying, eliciting, analyzing, specifying,
validating, and managing the needs and expectations of stakeholders for a software
system.

3 What are the quality attributes in data design? 2M


Software Quality Attributes are the benchmarks that describe a system’s intended
behavior

Reliability

Maintainability

Usability

Portability

Correctness

Efficiency

Security

Testability

Flexibility
Scalability

Compatibility

Supportability

4 Define testing & Debugging? 2M


Testing: Software testing is a process of identifying defects in the software product. It
is performed to validate the behavior of the software or the application compared to
requirements.

In other words, testing is a collection of techniques to determine the accuracy of the
application against the predefined specification; however, it cannot identify all the
defects of the software.

Debugging: In the software development process, debugging includes detecting and
modifying code errors in a software program.

5 What is direct and indirect measurement? 2M

There are 2 types of software measurement:

Direct Measurement: In direct measurement, the product, process, or thing is measured
directly using a standard scale.

Indirect Measurement: In indirect measurement, the quantity or quality to be measured
is measured using related parameters, i.e., by use of a reference.
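As a minimal sketch, lines of code and defect counts are direct measures (counted on a standard scale), while defect density is an indirect measure derived from them. The numbers below are hypothetical:

```python
# Direct measures: counted directly (LOC, number of defects).
# Indirect measure: defect density, derived from the two direct measures.

def defect_density(defects_found, lines_of_code):
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

loc = 12_000      # direct measurement: counted lines of code
defects = 30      # direct measurement: counted defects
print(defect_density(defects, loc))  # 2.5 defects per KLOC
```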

6 Identify the issues in RMMM? 2M


1. It incurs additional project costs.

2. It takes additional time.

3. For larger projects, implementing an RMMM may itself turn out to be another
tedious project.
4. RMMM does not guarantee a risk-free project; in fact, risks may also come up after
the project is delivered.

7 What is personal software process in software engineering? 2M

The Personal Software Process recognizes that the process an individual engineer uses is
quite different from the process a team requires.

Personal Software Process (PSP) is the framework, or structure, that assists engineers in
measuring and improving the way they work to a great extent. It helps them develop their
skills at a personal level and improve how they plan and make estimates against those
plans.

8 What is the major distinction between user requirements and system requirements? 2M

User requirements: These requirements describe what the end-user wants from the
software system. User requirements are usually expressed in natural language and are
typically gathered through interviews, surveys, or user feedback.

System requirements: These requirements specify the technical characteristics of the
software system, such as its architecture, hardware requirements, software
components, and interfaces. System requirements are typically expressed in technical
terms and are often used as a basis for system design.

9 What are the types of diagrams in UML. 2M


UML Diagrams are classified into two types:

1. Behavioral Diagrams

 Activity Diagram

 State Machine Diagram


 Use Case Diagram

 Interaction Diagram

2. Structure Diagrams

 Class Diagram

 Object Diagram

 Component Diagram

 Deployment Diagram

 Composite Structure Diagram

10 What are the metrics for testing? 2M


Software testing metrics are quantifiable indicators of the software testing process
progress, quality, productivity, and overall health.

Software testing metrics are divided into three categories:

1. Process Metrics: A project’s characteristics and execution are defined by
process metrics.

2. Product Metrics: A product’s size, design, performance, quality, and complexity
are defined by product metrics.

3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It
is used to estimate a project’s resources and deliverables, as well as to
determine costs, productivity, and flaws.

PART B
(1) Explain briefly (i) the Incremental model (ii) the RAD model? 10M
(i) Incremental Model
• The incremental process model is also known as the Successive version model.

• First, a simple working system implementing only a few basic features is built and then
that is delivered to the customer.

• Thereafter, successive iterations/versions are implemented and delivered to the
customer until the desired system is released.

A, B, and C are modules of Software Products that are incrementally developed and delivered.

Types of Incremental model

There are two types of incremental model.

1. Staged Delivery Model

2. Parallel Development Model

1. Staged Delivery Model: Only one part of the project is constructed at a time.

2. Parallel Development Model:

• Different subsystems are developed at the same time. It can decrease the calendar time
needed for the development, i.e. TTM (Time to Market) if enough resources are
available.

(ii) Rapid application development model (RAD)


• The RAD model is a type of incremental process model in which there is an extremely
short development cycle. The RAD model is used when the requirements are fully
understood and the component-based construction approach is adopted.

• Various phases in RAD are Requirements Gathering, Analysis and Planning, Design, Build
or Construction, and finally Deployment.

• Multiple teams work on developing the software system using RAD model and it is
shown in fig.
• This model consists of 4 basic phases:

Requirements Planning – It involves the use of various techniques used in requirements
elicitation like brainstorming, task analysis, form analysis, user scenarios, FAST (Facilitated
Application Specification Technique), etc.

User Description – This phase consists of taking user feedback and building the prototype
using developer tools.

Construction – In this phase, refinement of the prototype and delivery takes place.

Cutover – All the interfaces between the independent modules developed by separate teams
have to be tested properly. The use of powerful automated tools and subparts makes testing
easier. This is followed by acceptance testing by the user.

(b) Explain in detail the Capability Maturity Model Integration (CMMI)? 10M

Capability Maturity Model Integration (CMMI) is a successor of CMM and is a more evolved
model that incorporates best components of individual disciplines of CMM like Software CMM,
Systems Engineering CMM, People CMM, etc.

Since CMM is a reference model of matured practices in a specific discipline, it becomes
difficult to integrate these disciplines as per requirements. This is why CMMI is used: it
allows the integration of multiple disciplines as and when needed.

Objectives of CMMI

• Fulfilling customer needs and expectations.

• Value creation for investors/stockholders.


• Market growth is increased.

• Improved quality of products and services.

• Enhanced reputation in Industry.

Representations for CMMI

• A representation allows an organization to pursue a different set of improvement
objectives. There are two representations for CMMI:

• Staged Representation

• Continuous Representation

• Staged Representation:

– uses a pre-defined set of process areas to define the improvement path.

– provides a sequence of improvements, where each part in the sequence serves
as a foundation for the next.

– an improvement path is defined by maturity level.

– a maturity level describes the maturity of processes in the organization.

– Staged CMMI representation allows comparison between different organizations
for multiple maturity levels.

Continuous Representation:

 allows selection of specific process areas.

 uses capability levels that measure improvement of an individual process area.

 Continuous CMMI representation allows comparison between different organizations on
a process-area-by-process-area basis.

 allows organizations to select processes which require more improvement.

 In this representation, the order of improvement of various processes can be selected,
which allows organizations to meet their objectives and eliminate risks.

2 (a)Discuss briefly about requirement validation. 5M


• Validation is a process used for checking if the system is up to the mark or not.
It is about testing and validating the system and seeing if the system we built is right or
not and if it meets the customer’s expectations or not.

Various methods that are used to validate the system include black-box testing, white-
box testing, integration testing, and unit testing.

Validation always comes after verification.

There are various techniques that can be used to validate the requirements. They include:

Checks – While checking the requirements, we proofread the requirements documents
to ensure that no elicitation notes are missed out.

During these checks, we also check the traceability level between all the requirements.
For this, the creation of a traceability matrix is required. This matrix ensures that all the
requirements are being considered seriously and everything that is specified is justified.

We also check the format of requirements during these checks. We see if the
requirements are clear and well-written or not.
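The traceability matrix mentioned above can be sketched as a simple mapping from requirements to the test cases that cover them; the requirement and test-case IDs below are hypothetical:

```python
# Hypothetical requirement and test-case IDs; a real matrix would be
# populated from the requirements specification and the test plan.
traceability = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-3"],
    "REQ-3": [],          # no covering test case yet
}

def untraced(matrix):
    """Requirements that have no associated test case."""
    return [req for req, tests in matrix.items() if not tests]

print(untraced(traceability))  # ['REQ-3']
```

Checking that this list is empty is one way to confirm that every specified requirement is justified by at least one test.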

Prototyping – This is a way of building a model or simulation of the system that is to be
built by the developers. This is a very popular technique for requirements validation among
stakeholders and users, as it helps them easily identify problems.

We can just reach out to the users and stakeholders and get their feedback

• Test Design – During test designing, we follow a small procedure where we first finalize
the testing team, then build a few testing scenarios. Functional tests can be derived
from the requirements specification itself where each requirement has an associated
test.

• On the contrary, the non-functional requirements are hard to test as each test has to be
traced back to its requirement. The aim of this is to figure out the errors in the
specification or the details that are missed out.

• Requirements Review – During requirements review, a group of knowledgeable people
analyze the requirements in a structured and detailed manner and identify potential
problems. After that, they gather to discuss the issues and figure out a way to
address them.

• A checklist is prepared consisting of various standards and the reviewers check the
boxes to provide a formal review. After that, a final approval sign-off is done.

(b)Write short notes on data modeling? 5M


• Data modeling is the process of creating a visual representation of either a whole
information system or parts of it to communicate connections between data points and
structures.

• Data modeling is the process of creating a simplified diagram of a software system and
the data elements it contains, using text and symbols to represent the data and how it
flows.

• A data model can be thought of as a flowchart that illustrates data entities, their
attributes and the relationships between entities.
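The entities, attributes, and a one-to-many relationship in a data model can be sketched in code; the Customer and Order entities below are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical entities with their attributes; a Customer has a
# one-to-many relationship with Order.
@dataclass
class Order:
    order_id: int
    amount: float

@dataclass
class Customer:
    customer_id: int
    name: str
    orders: list = field(default_factory=list)   # one-to-many relationship

c = Customer(1, "Alice")
c.orders.append(Order(101, 250.0))
print(len(c.orders))  # 1
```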

(c) Explain how requirements are managed in software project management? 10M

• Requirements management is the process of gathering, analyzing, verifying, and
validating the needs and requirements for the given product or system being developed.

• Requirement management is the process of managing changing requirements during
the requirements engineering process and system development.

• New requirements emerge during the process as business needs change and a better
understanding of the system is developed.

• The priority of requirements from different viewpoints changes during the development
process.

 The business and technical environment of the system changes during the development.

3 (a) Explain about the collaboration diagram. 5M


• Collaboration diagrams are interaction diagrams that illustrate the structure of the
objects that send and receive messages.

• Notations − In these diagrams, the objects that participate in the interaction are shown
using vertices. The links that connect the objects are used to send and receive
messages.

Example − Collaboration diagram for the Automated Trading House System


(b)Explain about the component diagram. 5M
A component diagram depicts how components are wired together to form larger components
or software systems.

• This component diagram shows the structure of the ATM system, which consists of the
software components and their interfaces, and how they work together.

• The component diagram of ATM system has 8 components which are the account
database, transaction database, balance inquiry, withdraw, deposit, loan, card, and the
user.

• Some components of the ATM system expose a provided interface and a required
interface at the same time; for example, a component may act as the provider for the
transaction database while requiring an interface from the accounts database.

The ATM System UML component diagram explains the sketch of the required software
and hardware components and the dependencies between them.
(c) Explain software design? Explain data flow oriented design? 10M
Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirement, as described in SRS (Software Requirement Specification) document, into a
form, i.e., easily implementable using programming language.

The software design phase is the first step in the SDLC (Software Development Life Cycle) that moves
the concentration from the problem domain to the solution domain. In software design, we
consider the system to be a set of components or modules with clearly defined behaviors &
boundaries.

Data flow oriented design

DFD is the abbreviation for Data Flow Diagram. The flow of data of a system or a process is
represented by DFD. It also gives insight into the inputs and outputs of each entity and the
process itself. DFD does not have control flow and no loops or decision rules are present.
Specific operations depending on the type of data can be explained by a flowchart. It is a
graphical tool, useful for communicating with users, managers and other personnel. It is useful
for analyzing existing as well as proposed system.

Data Flow Diagram can be represented in several ways. The DFD belongs to structured-analysis
modeling tools. Data Flow diagrams are very popular because they help us to visualize the
major steps and data involved in software-system processes.

Characteristics of DFD

 DFDs are commonly used during problem analysis.

 DFDs are quite general and are not limited to problem analysis for software
requirements specification.

 DFDs are very useful in understanding a system and can be effectively used during
analysis.

 It views a system as a function that transforms the inputs into desired outputs.

4. (a) Explain about the importance of test strategies for conventional software? 5M

Conventional testing refers to the traditional approach of software testing that has been widely
used for several decades. This approach involves a series of activities that aim to identify
defects or errors in a software product and ensure that the software meets the specified
requirements and performs as expected.

Types of Conventional Testing

There are various types of conventional testing techniques that can be used during the testing
process. Some of the commonly used techniques include:

1. Unit testing: This involves testing individual modules or components of the software to
ensure that they perform as expected.

2. Integration testing: This involves testing the software modules in combination to ensure
that they work together correctly.

3. System testing: System testing is a type of testing that verifies a software product's
integration and completion. A system test's objective is to gauge how well the system
requirements are met from beginning to end. In most cases, the software is only one
element of a larger computer-based system.

4. Validation Testing: The process of evaluating software during the development process
or at the end of the development process to determine whether it satisfies specified
business requirements. Validation Testing ensures that the product actually meets the
client's needs.

5. Acceptance testing: This involves testing the software from the end-user's perspective
to ensure that it meets their needs and requirements.

6. Regression testing: This involves rerunning previously executed test cases to ensure
that the changes made to the software have not introduced new defects or errors.
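Unit testing and regression testing as described above can be illustrated with a minimal sketch using Python's built-in unittest module; the add function stands in for a hypothetical module under test:

```python
import unittest

def add(a, b):
    """Hypothetical unit (module) under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # Unit test: verifies a single module behaves as specified.
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    # Rerunning the whole suite after every change is, in effect,
    # regression testing: it catches defects introduced by the change.
    def test_add_negative(self):
        self.assertEqual(add(-2, -3), -5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration, system, and acceptance testing operate on progressively larger assemblies of such units, but the same automate-and-rerun discipline applies.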
(b) Discuss a framework for product metrics 5M
A fundamental framework and a set of basic principles for the measurement of product
metrics for software should be established. Talking in terms of software engineering:

- Measure provides a quantitative indication of the extent, amount, dimension, or size of an
attribute of a product or process. A measure is established when a single data point has been
collected.

- Measurement is the act of determining a measure. Measurement occurs when one or
more data points are collected.

- Metric is a quantitative measure of the degree to which a system, component, or
process possesses a given attribute. It relates individual measures in some way.

- Indicator is a metric or combination of metrics providing insight into the software process,
the project, or the product itself.

There is a need to measure and control software complexity. It should be possible to
develop measures of different attributes. These measures and metrics can be used as
independent indicators of the quality of analysis and design models.

Product metrics assist in the evaluation of analysis and design models, give an indication
of complexity, and facilitate the design of more effective testing. Steps for an effective
measurement process are:

- Formulation which means the derivation of software measures and metrics.

- Collection is the way to accumulate the data required to derive the metrics.

- Analysis is the computation of metrics.

- Interpretation is the evaluation of metrics.

- Feedback is the recommendation derived after interpretation.

(c) Compare validation testing and system testing? 10M


System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements. In system
testing, integration testing passed components are taken as input. The goal of integration
testing is to detect any irregularity between the units that are integrated together. System
testing detects defects within both the integrated units and the whole system. The result of
system testing is the observed behavior of a component or a system when it is tested. System
Testing is carried out on the whole system in the context of either system requirement
specifications or functional requirement specifications or in the context of both.

System Testing is black-box testing. It is performed after integration testing and before
acceptance testing.

Validation testing

The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.

Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as to demonstrate that the product fulfills its intended use when deployed on
appropriate environment.

Validation Testing - Workflow:

Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.
Activities:

 Unit Testing

 Integration Testing

 System Testing

 User Acceptance Testing

5. (a) Explain seven principles of risk management by developing a risk table? 10M

7 Principles of Project Risk Management:

1. Maintain a global perspective – view software risks within the context of the system and
the business problem it is intended to solve.

2. Take a forward-looking view – think about risks that may arise in the future and establish
contingency plans.

3. Encourage open communication – any team member should be able to raise a potential
risk at any time.

4. Integrate – risk management should be integrated into the software process.

5. Emphasize a continuous process – risks should be identified and monitored continuously
throughout the project, modifying them as more information becomes known.

6. Develop a shared product vision – a common vision of the product makes it easier to
identify and assess risks.

7. Encourage teamwork – pool the skills and experience of all stakeholders when performing
risk management activities.

A risk table lists each identified risk along with its category, probability, and impact. The
table is sorted by probability and impact so that the team can focus mitigation effort on the
highest-ranked risks.
(b) Explain about Quality concepts and Quality assurance? 5M
Software Quality Concepts

Portability: A software device is said to be portable, if it can be freely made to work in various
operating system environments, in multiple machines, with other software products, etc.

Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.

Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.
Correctness: A software product is correct if various requirements as specified in the SRS
document have been correctly implemented.

Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the
product can be easily modified, etc.

Software Quality Assurance (SQA)

Software quality assurance is a planned and systematic pattern of all actions necessary to
provide adequate confidence that an item or product conforms to established technical
requirements. It is a set of activities designed to evaluate the process by which the products
are developed or manufactured.

SQA consists of:

o A quality management approach

o Effective Software engineering technology (methods and tools)

o Formal technical reviews that are tested throughout the software process

o A multitier testing strategy

o Control of software documentation and the changes made to it.

o A procedure to ensure compliances with software development standards

o Measuring and reporting mechanisms.

(c) Elaborate the concepts of Risk management: Reactive vs Proactive Risk strategies 5M

Definition of Proactive and Reactive Risk Management

Reactive: “A response-based risk management approach, which is dependent on accident
evaluation and audit-based findings.”

Proactive: “An adaptive, closed-loop feedback control strategy based on measurement and
observation of the present safety level and a planned explicit target safety level, with
creative intellectuality.”

Reactive Risk Management

Reactive risk management catalogues all previous accidents and documents them to find the
errors which lead to the accident. Preventive measures are recommended and implemented
via the reactive risk management method. This is the earlier model of risk management.
Reactive risk management can cause serious delays in a workplace due to unpreparedness
for new accidents. This unpreparedness makes the resolution process complex, as the cause
of the accident needs investigation, and solutions involve high cost plus extensive modification.

Proactive Risk Management

Contrary to reactive risk management, proactive risk management seeks to identify all relevant
risks earlier, before an incident occurs. The present organization has to deal with an era of
rapid environmental change that is caused by technological advancements, deregulation, fierce
competition, and increasing public concern. So, a risk management which relies on past
incidents is not a good choice for any organization. Therefore, new thinking in risk management
was necessary, which paved the way for proactive risk management.

SET -2
PART A
1 Describe a layered technology of software engineering? 2M
Software Engineering is a fully layered technology, to develop software we need to go
from one layer to another. All the layers are connected and each layer demands the
fulfillment of the previous layer.

Layered technology is divided into four parts:

1. A quality focus

2. Process

3. Method

4. Tools

2 What is meant by Process Assessment? 2M

Software Process Assessment is a disciplined and organized examination of the
software process which is being used by an organization, based on a process model.
The Software Process Assessment includes many fields and parts like identification
and characterization of current practices, the ability of current practices to control or
avoid significant causes of poor (software) quality, cost, schedule and identifying areas
of strengths and weaknesses of the software.

3 What is meant by feasibility study? 2M


A feasibility study analyzes the viability of a proposed project or venture. It is used to
evaluate a project's potential, including the technical, financial, and economic aspects,
and to determine whether it should proceed. A feasibility study aims to identify and
assess a proposed project's strengths, weaknesses, opportunities, and threats to
determine its potential for success.

4 Define software architecture and why is it important? 2M

The software architecture of a system represents the design decisions related to overall
system structure and behavior. Architecture helps stakeholders understand and analyze
how the system will achieve essential qualities such as modifiability, availability, and
security.

Architecture is also important because early design decisions have a profound impact on all
subsequent development and maintenance, and because it provides a common basis for
communication among stakeholders.

5 Discuss about Design Quality 2M


Software quality is defined as a field of study and practice that describes the desirable
attributes of software products. There are two main approaches to software quality:
defect management and quality attributes.

Maintaining software quality helps reduce problems and errors in the final product. It
also allows a company to meet the expectations and requirements of customers.

6 Draw a neat use case diagram for ATM withdrawal transaction? 2M

7 What are the types of relationships in UML. 2M
Relationships are among the most important building blocks of UML.

There are four kinds of relationships available.

(1)Dependency

(2)Association

(3)Generalization

(4)Realization

8 Define types of system testing? 2M


The types of tests are:

1. Recovery testing: Systems must recover from faults and resume processing within a
prespecified time.

2. Security Testing: This verifies that protection mechanisms built into a system will
protect it from improper penetrations.

3. Stress testing: It executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume and tests the robustness of the system.

4. Performance Testing: This is designed to test the run-time performance of s/w
within the context of an integrated system. It requires both h/w and s/w
instrumentation.
9 What are the metrics for testing? 2M
Software testing metrics are quantifiable indicators of the software testing process progress,
quality, productivity, and overall health.

Software testing metrics are divided into three categories:

1. Process Metrics: A project’s characteristics and execution are defined by process
metrics.

2. Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics.

3. Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used
to estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.

10 Define Software Quality Assurance 2M


Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is
the set of activities which ensure processes, procedures as well as standards are
suitable for the project and implemented correctly. Software Quality Assurance is a
process which works parallel to development of software. It focuses on improving the
process of development of software so that problems can be prevented before they
become a major issue.

PART B
1 (a) Define Software Myth? Explain briefly about the types of
myths? 10M
• Myths are false beliefs or misleading attitudes, often set up in a user’s brain, that
frequently cause trouble for managers, developers, and users.

Types of Software Myths


There are three kinds of software myths:-

1. Management Myths

2. Customer Myths

3. Practitioner’s Myths

1. Management Myths

Managers are often under pressure for software development under a tight budget, improved
quality, and a packed schedule, often believing in some software myths. Following are some
management myths.

Myth 1

Manuals containing simple procedures, principles, and standards are enough for developers to
acquire all the information they need for software development.

Myth 2

Falling behind on schedule could be taken care of by adding more programmers.

Myth 3

If a project is outsourced to a third party, we could just relax and wait for them to build it.

2. Customer Myths

Customer Myths are generally due to false expectations by customers, and these myths end up
leaving customers dissatisfied with the software developers.

Following are some customer myths.

Myth 1

A vague collection of software objectives, rather than detailed requirements, is enough to
begin programming with; details can be filled in later.

Myth 2

Software is flexible, and developers can accommodate any change later. Developers can
quickly take care of these changes in requirements.

3. Practitioners Myths

Developers often work under management pressure to complete software within a timeframe,
with fewer resources often believing in these software myths. Following are some practitioners’
myths.
Myth 1

Once the software is developed or the code is delivered to the customer, the developer's work
ends.

Myth 2

Software testing could only be possible when the software program starts running.

Myth 3

Unnecessary Documentation slows down the process of software development.

(b)Explain the SPIRAL model in detail. 10M


• The Spiral Model is one of the most important Software Development Life Cycle models,
which provides support for Risk Handling.

• In its diagrammatic representation, it looks like a spiral with many loops. The exact
number of loops of the spiral is unknown and can vary from project to project.

• Each loop of the spiral is called a Phase of the software development process.

• The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a
systematic and iterative approach to software development.

Phases of Spiral Model

Planning: The first phase of the Spiral Model is the planning phase, where the scope of the
project is determined and a plan is created for the next iteration of the spiral.

Risk Analysis: In the risk analysis phase, the risks associated with the project are identified and
evaluated.

Engineering: In the engineering phase, the software is developed based on the requirements
gathered in the previous iteration.

Evaluation: In the evaluation phase, the software is evaluated to determine if it meets the
customer’s requirements and if it is of high quality.

Planning: The next iteration of the spiral begins with a new planning phase, based on the results
of the evaluation.

The Spiral Model is often used for complex and large software development projects, as it
allows for a more flexible and adaptable approach to software development. It is also well-
suited to projects with significant uncertainty or high levels of risk.

2.(a) Write short notes on QFD & DFD? 5M


Quality Function Deployment (QFD) is a process or set of tools used to define the customer
requirements for a product and convert those requirements into engineering specifications
and plans such that the customer requirements for that product are satisfied.

QFD helps to achieve structured planning of a product by enabling the development team to
clearly specify the customer's needs and expectations of the product and then evaluate each
part of the product systematically.

DFD is the abbreviation for Data Flow Diagram. A DFD represents the flow of data through a
system or a process. It also gives insight into the inputs and outputs of each entity and of the
process itself. A DFD does not show control flow; no loops or decision rules are present
(specific operations that depend on the type of data can be shown with a flowchart instead). It is
a graphical tool, useful for communicating with users, managers, and other personnel, and for
analyzing existing as well as proposed systems.

It provides an overview of:


 What data the system processes.

 What transformations are performed.

 What data are stored.

 What results are produced, etc.

(b)Write short notes on data modeling? 5M


• Data modeling is the process of creating a visual representation of either a whole
information system or parts of it to communicate connections between data points and
structures.

• Data modeling is the process of creating a simplified diagram of a software system and
the data elements it contains, using text and symbols to represent the data and how it
flows.

• A data model can be thought of as a flowchart that illustrates data entities, their
attributes and the relationships between entities.
(c) Explain the class and sequence diagrams with an example. 10M
• Sequence diagrams are interaction diagrams that illustrate the ordering of
messages according to time.

• Notations − These diagrams are in the form of two-dimensional charts. The objects that
initiate the interaction are placed on the x–axis.

• The messages that these objects send and receive are placed along the y–axis, in the
order of increasing time from top to bottom.

Example − A sequence diagram for the Automated Trading House System.


Class Diagram: Class diagrams are one of the most widely used diagrams. A class diagram
displays the system's classes, their attributes, and their methods.

Example: Order System of an application

• First of all, Order and Customer are identified as the two elements of the system. They
have a one-to-many relationship because a customer can have multiple orders.

• Order class is an abstract class and it has two concrete classes (inheritance relationship)
Special Order and Normal Order.

• The two inherited classes have all the properties of the Order class. In addition, they
have additional functions such as dispatch() and receive().
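The Order example above can be sketched directly in code. The class names and methods follow the description in the text; the Python rendering itself (constructor arguments, return strings) is illustrative:

```python
from abc import ABC, abstractmethod

class Customer:
    """A customer can place many orders (one-to-many relationship)."""
    def __init__(self, name):
        self.name = name
        self.orders = []  # one customer -> many Order objects

    def place(self, order):
        self.orders.append(order)

class Order(ABC):
    """Abstract class: cannot be instantiated directly."""
    @abstractmethod
    def dispatch(self):
        ...

class SpecialOrder(Order):
    """Concrete class inheriting from Order."""
    def dispatch(self):
        return "special order dispatched"

class NormalOrder(Order):
    """Concrete class inheriting from Order."""
    def dispatch(self):
        return "normal order dispatched"

c = Customer("Alice")
c.place(SpecialOrder())
c.place(NormalOrder())
print([o.dispatch() for o in c.orders])  # ['special order dispatched', 'normal order dispatched']
```

A receive() method would be added analogously to dispatch(); it is omitted here to keep the sketch short.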
3 (a) Discuss architectural styles and patterns?
5M

Architectural Style
The architectural style shows how we organize our code, i.e., how the system looks from a
10,000-foot view, at the highest level of abstraction of the system design. When defining the
architectural style of a system, we focus on layers and modules and how they communicate
with each other. There are different types of architectural styles:

Structural styles: such as layered, pipes-and-filters, and component-based styles.

Messaging styles: such as implicit invocation, asynchronous messaging, and publish-subscribe
styles.

Distributed-system styles: such as service-oriented, peer-to-peer, object request broker, and
cloud computing styles.

Shared-memory styles: such as role-based, blackboard, and database-centric styles.

Adaptive-system styles: such as microkernel, reflection, and domain-specific language styles.
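One of the structural styles above, pipes-and-filters, is easy to show in a few lines: each filter transforms a stream of data and passes it along a pipe to the next filter. The filter names and sample data here are made up for illustration:

```python
# Pipes-and-filters style: each filter is independent and only sees the data stream.
def strip_filter(lines):
    """Filter 1: remove surrounding whitespace from each item."""
    for line in lines:
        yield line.strip()

def upper_filter(lines):
    """Filter 2: convert each item to upper case."""
    for line in lines:
        yield line.upper()

def pipeline(source, *filters):
    """The 'pipe': chains filters so the output of one feeds the next."""
    data = source
    for f in filters:
        data = f(data)
    return list(data)

print(pipeline(["  hello ", " world"], strip_filter, upper_filter))
# ['HELLO', 'WORLD']
```

Because filters share no state, they can be reordered or reused in other pipelines, which is the main appeal of this style.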

Architectural Patterns
Software Architecture:

Software architecture is the blueprint of building software. It shows the overall structure
of the software, the collection of components in it, and how they interact with one
another while hiding the implementation. There are various ways to organize the
components in software architecture. And the different predefined organization of
components in software architectures is known as software architecture patterns.

Different Software Architecture Patterns:

 Layered Pattern

 Client-Server Pattern

 Event-Driven Pattern

 Microkernel Pattern

 Microservices Pattern

(b) Explain pattern-based software design in a detailed manner? 5M

Design Patterns
Design patterns are accumulated best practices and experience that software professionals
have used over the years, through trial and error, to solve general problems they faced during
software development.

The design pattern set contains 23 patterns, categorized into three main groups:

1. Creational design patterns: Provide a way to create objects while hiding the creation
logic.

The creational design patterns are: Abstract factory pattern, Singleton pattern, Builder
Pattern, Prototype pattern.
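As a concrete illustration of a creational pattern, here is a minimal Singleton sketch in Python (one of several common ways to implement it):

```python
class Singleton:
    """Creational pattern: ensures only one instance is ever created,
    hiding the creation logic behind the ordinary constructor call."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            # First call: actually create the object.
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)  # True: both names refer to the same instance
```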

2. Structural patterns: Concerned with class and object composition.

The structural design patterns are: Adapter pattern, Bridge pattern, Filter pattern,
Composite pattern, Decorator pattern.

3. Behavioral patterns: Behavioral patterns are concerned with communication between
objects.

The behavioral design patterns are: Chain of Responsibility pattern, Command pattern,
Interpreter pattern, Iterator pattern, Mediator pattern, Memento pattern, Observer pattern,
State pattern, Null Object pattern.

(c) Describe the conceptual building blocks of UML. 10M
Conceptual Modeling
A Conceptual Model consists of several interrelated concepts.

Objects: All entities involved in a solution design are known as objects.

Example: persons, banks, companies.

Class: A Class is a generalized description of an object.


Messages: Objects communicate with each other by sending messages to each other.

Abstraction: It means displaying only essential information and hiding the details.

Encapsulation: The wrapping up of data and functions into a single unit is known as
Encapsulation.

Inheritance: It is the process of deriving new classes from existing ones.

Polymorphism: It is the mechanism of representing objects that have multiple forms used for
different purposes.
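The concepts above can be seen together in a short sketch; the class names and values are invented for illustration:

```python
class Account:
    """Encapsulation: the balance is wrapped together with the
    functions that operate on it, in a single unit (the class)."""
    def __init__(self, balance):
        self._balance = balance  # internal detail, hidden from callers

    def deposit(self, amount):
        """Abstraction: the caller sees only this essential interface,
        not how the balance is stored."""
        self._balance += amount

    def describe(self):
        return "account"

class SavingsAccount(Account):
    """Inheritance: a new class derived from the existing Account class."""
    def describe(self):
        # Polymorphism: the same message produces a different form/behavior.
        return "savings account"

accounts = [Account(100), SavingsAccount(200)]
print([a.describe() for a in accounts])  # ['account', 'savings account']
```

Sending the same describe() message to each object and getting class-specific behavior is the polymorphism described above; messages in UML terms correspond to these method calls.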

4 (a) Discuss a framework for product metrics 5M
A fundamental framework and a set of basic principles for the measurement of product
metrics for software should be established. Talking in terms of software engineering:

- Measure provides a quantitative indication of the extent, amount, dimension, or size of an
attribute of a product or process. A measure is established when a single data point has
been collected.

- Measurement is the act of determining a measure. Measurement occurs when one or
more data points are collected.

- Metric is a quantitative measure of the degree to which a system, component, or
process possesses a given attribute. It relates individual measures in some way.

- Indicator is a metric or combination of metrics that provides insight into the software
process, project, or product itself.

There is a need to measure and control software complexity. It should be possible to develop
measures of different attributes. These measures and metrics can be used as independent
indicators of the quality of analysis and design models.

Product metrics assist in the evaluation of analysis and design models, give an indication of
complexity, and facilitate the design of more effective testing. The steps for an effective
measurement process are:

- Formulation which means the derivation of software measures and metrics.

- Collection is the way to accumulate data required to derive the metrics.

- Analysis is the computation of metrics.

- Interpretation is the evaluation of metrics.

- Feedback is the recommendation derived after interpretation.
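The measure → metric → indicator chain above can be made concrete with a small example. The numbers here are hypothetical, and the threshold of 3 defects per KLOC is an assumed project policy, not a standard value:

```python
# Measures: single data points collected directly from a project (hypothetical values).
defects_found = 30   # measure: count of defects recorded
size_kloc = 12.0     # measure: size in thousands of lines of code (KLOC)

# Metric: relates individual measures -- here, defect density (defects per KLOC).
defect_density = defects_found / size_kloc

# Indicator: a metric interpreted against a threshold to give insight
# into the product, supporting the feedback step of the process.
threshold = 3.0  # assumed project-specific quality bar
indicator = "needs review" if defect_density > threshold else "acceptable"
print(round(defect_density, 2), indicator)  # 2.5 acceptable
```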

(b) Discuss top-down and bottom-up testing? 5M


Top Down Testing
In top down Integration, testing takes place from top to bottom means system
integration begins with main modules.

If the invoked sub module hasn't been developed yet, stubs are used for temporarily
simulating that sub module.

Higher level modules are tested & integrated first and then lower level modules are
tested & integrated.

Main modules are created first and then sub modules are called from it.

Control flows from top to bottom.

This approach is advantageous if the significant bugs occur in the top modules.

Testing complexity is low.

Bottom Up Testing
In bottom up integration, testing takes place from bottom to top means system
integration begins with lower level or sub modules.

If the main module hasn't been developed yet, drivers are used for temporarily
simulating that main module.

Lower level modules are tested & integrated first and then higher level modules are
tested & integrated.
Different smaller modules are developed and then integrated with the main module.

Control flows from bottom to top.

This approach is advantageous if the crucial defects occur in the lower modules.

Testing complexity is high and data intensive.
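The roles of stubs (top-down) and drivers (bottom-up) described above can be sketched with a hypothetical checkout example; the module names and the 10% tax rate are invented for illustration:

```python
# --- Top-down: the main module is real; an undeveloped sub-module
# is temporarily simulated by a stub.
def tax_service_stub(amount):
    """Stub: stands in for the tax sub-module until it is developed."""
    return 0.0  # canned answer

def checkout(amount, tax_service):
    """Main module under test; top-down integration begins here."""
    return amount + tax_service(amount)

# --- Bottom-up: the sub-module is real; a driver temporarily
# simulates the not-yet-developed main module that would call it.
def real_tax_service(amount):
    """Lower-level sub-module under test."""
    return round(amount * 0.1, 2)

def driver():
    """Driver: exercises the sub-module in place of the main module."""
    return real_tax_service(50.0)

print(checkout(100.0, tax_service_stub))  # 100.0 (stub returns 0 tax)
print(driver())                           # 5.0
```

Once the real tax module exists, the stub is replaced by it and integration proceeds downward; conversely, once the real main module exists, the driver is discarded.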

(c) Discuss and compare black box testing with white box testing? 10M
Software Testing can be majorly classified into two categories:

Black Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is not known to the tester. Only the
external design and structure are tested.

White Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is known to the tester. The
implementation and impact of the code are tested.

Black box testing is a testing technique in which the internal workings of the software
are not known to the tester. The tester only focuses on the input and output of the
software. Whereas, White box testing is a testing technique in which the tester has
knowledge of the internal workings of the software, and can test individual code
snippets, algorithms and methods.

Black box testing is easy to use, requires no programming knowledge and is effective in
detecting functional issues. However, it may miss some important internal defects that
are not related to functionality. White box testing is effective in detecting internal
defects, and ensures that the code is efficient and maintainable. However, it requires
programming knowledge and can be time-consuming.

Black-box test design techniques-

 Decision table testing


 All-pairs testing

 Equivalence partitioning

 Error guessing
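Equivalence partitioning, listed above, can be shown with a small example: the input domain is split into partitions that the program should treat identically, and one representative value is tested per partition instead of every possible input. The grading function and boundaries are invented for illustration:

```python
def grade(score):
    """Hypothetical function under test: classify an exam score (0-100)."""
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 40 else "fail"

# Equivalence partitioning: one representative input per partition,
# instead of exhaustively testing all 101 valid scores.
partitions = {
    -5: "invalid",   # partition: below the valid range
    20: "fail",      # partition: valid but failing (0-39)
    70: "pass",      # partition: valid and passing (40-100)
    120: "invalid",  # partition: above the valid range
}
results = {inp: grade(inp) for inp in partitions}
print(all(results[i] == expected for i, expected in partitions.items()))  # True
```

This is a black-box technique: the partitions are derived from the specification alone, with no reference to the code inside grade().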

White-box test design techniques-

 Control flow testing

 Data flow testing

 Branch testing

Types of Black Box Testing:


 Functional Testing

 Non-functional testing

 Regression Testing

Types of White Box Testing:

 Path Testing
 Loop Testing
 Condition testing

5 (a) Explain the RMMM Plan? 10M
A risk management strategy is usually included in the software project plan. It can be divided
into the Risk Mitigation, Monitoring, and Management (RMMM) plan. In this plan, all work is
done as part of risk analysis. The project manager generally uses this RMMM plan as part of
the overall project plan.

Risk Mitigation:
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks as follows.

1. Finding out the risk.

2. Removing causes that are the reason for risk creation.

3. Controlling the corresponding documents from time to time.

4. Conducting timely reviews to speed up the work.

Risk Monitoring:
It is an activity used for project tracking.
It has the following primary objectives as follows.

1. To check if predicted risks occur or not.

2. To ensure proper application of risk aversion steps defined for risk.

3. To collect data for future risk analysis.

4. To attribute which problems are caused by which risks throughout the project.

Risk Management and Planning:

It assumes that the mitigation activity has failed and the risk has become a reality. This task is
done by the project manager when a risk becomes a reality and causes severe problems. If the
project manager has used mitigation effectively to remove risks, the remaining risks are easier
to manage. The plan defines the response that will be taken for each risk by the manager. The
main output of the risk management plan is the risk register, which describes and prioritizes
the predicted threats to a software project.

(b) Explain risk projection in detail? 5M
Risk Projection: Risk projection is a process of identifying, analyzing, and predicting
potential risks that can affect a project or an organization.
The primary objective of risk projection is to assess the potential impact of risks on the
project and develop strategies to mitigate or avoid those risks. Risk projection involves
identifying potential risks, evaluating the likelihood and impact of those risks, and
developing a plan to address them.

It is an ongoing process that should be conducted at regular intervals throughout the project's
lifecycle to ensure that new risks are identified and addressed promptly.
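A common way to evaluate the likelihood and impact of risks, as described above, is to compute risk exposure as probability times cost and rank risks by it. The risk names, probabilities, and costs below are hypothetical:

```python
# Risk projection sketch: exposure = probability of the risk x cost if it occurs.
risks = [
    {"name": "key staff leave",      "probability": 0.3, "cost": 20000},
    {"name": "requirements change",  "probability": 0.6, "cost": 15000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["cost"]

# Rank so the highest-exposure risk is mitigated first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
print([(r["name"], r["exposure"]) for r in ranked])
# [('requirements change', 9000.0), ('key staff leave', 6000.0)]
```

Re-running this calculation at regular intervals, with updated probabilities and costs, is what makes risk projection an ongoing activity rather than a one-time estimate.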

(c) What is risk refinement? 5M


Risk Refinement: Process of restating the risks as a set of more detailed risks that will
be easier to mitigate, monitor, and manage.

How to identify and refine the risk in software engineering?

 Identify risk factors.

 Assess risk probabilities and effects on the project.

 Develop strategies to mitigate identified risks.

 Monitor risk factors.

 Invoke a contingency plan.

 Manage the crisis.
