
Unit 2

1) Describe Project Scheduling.


- Project scheduling is a process that allows you to create a schedule that you can use to keep
track of tasks and deadlines within a project. Typically, project schedules come in the form of a
calendar or a timeline. While scheduling a project, it’s important that you estimate the start and
end dates for individual tasks and project phases to make sure the project advances at the
desired speed. You can do this by carefully considering different project milestones, activities
and deliverables, which may impact task duration, budget and resource allocation.
Benefits:
Reduced costs: While scheduling a project, you take into consideration the resources that
each task requires. Having a clear outline of all tasks helps team members use those resources
more effectively, which often results in reducing costs.
Reduced lead time: A project schedule clearly outlines all 'to do', 'work in progress' and 'waiting for next steps' tasks. Notifying team members about tasks they are about to start, or tasks they can complete simultaneously, often reduces the time they spend on them.
Improved productivity: When creating a project schedule, you can establish and define long-
and short-term project goals for the team. By providing a clear direction to all team members
and ensuring they always know what to do, you can improve their productivity and set standards
for the project.
Better risk management: While scheduling a project, you may find yourself analysing each step
and determining potential project risks. A precise project schedule allows you to predict and
prevent different issues from happening, resulting in better risk management.
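The start/end-date estimation described above can be sketched in code; the task names, durations, and start date below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical sequential plan: (task name, estimated duration in days).
tasks = [("Requirements", 5), ("Design", 7), ("Implementation", 15)]

start = date(2024, 1, 1)  # assumed project start date
for name, days in tasks:
    end = start + timedelta(days=days)
    print(f"{name}: {start} -> {end}")
    start = end  # the next task begins when this one ends
```

A real schedule would also account for working days, resource availability, and tasks that run in parallel.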

2) Explain Halstead's metrics with an example.


ANS:
1. Halstead's Software Metrics:
According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
2.Token Count:
In these metrics, a computer program is considered to be a collection of tokens, which may be
classified as either operators or operands. All software science metrics can be defined in terms
of these basic symbols. These symbols are called tokens.
3. The basic measures are
a. n1 = count of unique operators.
b. n2 = count of unique operands.
c. N1 = count of total occurrences of operators.
d. N2 = count of total occurrences of operands.
4. Halstead metrics are:
A. Program Volume (V)
The unit of measurement of volume is the standard unit for size "bits." It is the actual size of a
program if a uniform binary encoding for the vocabulary is used.
V = N * log2(n), where N = N1 + N2 is the program length and n = n1 + n2 is the vocabulary
B. Program Level (L)
The value of L ranges between zero and one, with L=1 representing a program written at the
highest possible level (i.e., with minimum size).
L = V*/V, where V* is the potential (minimum possible) volume
C. Program Difficulty
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1/2) * (N2/n2)
D. Programming Effort (E)
The unit of measurement of E is elementary mental discriminations.
E = V/L = D * V
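As a worked example, consider the single statement `a = b + c * d`. Its operators are `=`, `+`, `*` and its operands are `a`, `b`, `c`, `d`, so n1 = 3, n2 = 4, N1 = 3, N2 = 4. The sketch below computes the metrics above (using the common approximation L ≈ 1/D when V* is unknown):

```python
import math

# Token counts for the statement: a = b + c * d
n1, n2 = 3, 4   # unique operators (=, +, *) and unique operands (a, b, c, d)
N1, N2 = 3, 4   # total occurrences of operators and operands

n = n1 + n2     # program vocabulary
N = N1 + N2     # program length

V = N * math.log2(n)       # Program Volume: 7 * log2(7) ≈ 19.65
D = (n1 / 2) * (N2 / n2)   # Program Difficulty: 1.5
L = 1 / D                  # Program Level (approximation): ≈ 0.67
E = D * V                  # Programming Effort: ≈ 29.48

print(f"V={V:.2f} D={D:.2f} L={L:.2f} E={E:.2f}")
```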
3) Explain software design specification
ANS: Software design specification (SDS) is a document that outlines the design and
architecture of a software system. It is typically created during the software development
process and serves as a blueprint for the implementation of the software.
The SDS typically includes the following information:
• Introduction: This section provides an overview of the software system and its purpose, scope,
and objectives.
• Functional requirements: This section defines the functional requirements of the software,
including its features, capabilities, and performance criteria.
• Non-functional requirements: This section defines the non-functional requirements of the
software, such as its reliability, scalability, maintainability, and security.
• System architecture: This section describes the overall architecture of the software system,
including its components, modules, and interfaces.
• Data design: This section outlines the data structures and data flow of the software system.
• User interface design: This section describes the user interface of the software, including the
layout, navigation, and interaction design.
• Software testing: This section outlines the testing plan for the software, including the test
cases and procedures.
• Implementation plan: This section describes the implementation plan for the software,
including the development environment, tools, and resources needed.
4) State all, and elaborate on any 4, characteristics of the metrics designed for the Object-Oriented Approach.

ANS:
Abstraction level: OOA metrics are designed to evaluate the abstraction level of
a software system. This means that they focus on how well the software system
is designed to represent real-world objects and concepts, and how well it
abstracts away the details that are not relevant to the system's purpose. For
example, metrics such as coupling and cohesion can be used to evaluate how
well a system is designed to encapsulate its data and functionality, and how well
it separates concerns between different objects.

Object-oriented principles: OOA metrics are based on the principles of object-oriented design, such as encapsulation, inheritance, and polymorphism.
These principles are used to ensure that the system is designed in a way that is
modular, reusable, and extensible. For example, metrics such as inheritance
depth and method overriding can be used to evaluate how well a system uses
inheritance and polymorphism to achieve these goals.

Structural and behavioral analysis: OOA metrics can be used to analyze both
the structure and behavior of a software system. Structural analysis focuses on
the relationships between objects, such as coupling, cohesion, and inheritance.
Behavioral analysis focuses on how objects interact with each other, such as
through message passing and method calls. Metrics such as cyclomatic
complexity and code coverage can be used to evaluate the behavior of a system.

Tool support: OOA metrics are often supported by software tools that can
automatically analyze a software system and generate reports on its quality.
These tools can be used to identify potential design flaws and code smells, and
to suggest improvements that can make the system more modular, maintainable,
and reusable. For example, tools such as SonarQube and CodeClimate can be
used to analyze code quality using various OOA metrics.
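One structural metric mentioned above, inheritance depth (often called Depth of Inheritance Tree, DIT), can be computed directly in Python; the class hierarchy below is hypothetical:

```python
# Hypothetical class hierarchy.
class Shape: ...
class Polygon(Shape): ...
class Rectangle(Polygon): ...

def depth_of_inheritance(cls) -> int:
    # __mro__ lists the class itself, its ancestors, and `object`;
    # DIT here counts the ancestors above the class.
    return len(cls.__mro__) - 1

print(depth_of_inheritance(Rectangle))  # Rectangle -> Polygon -> Shape -> object: 3
```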
5) Write a short note on coupling and cohesion.
ANS: Coupling and cohesion are two important concepts in software engineering
that describe the degree to which the components of a software system are
interconnected and how well they work together.

Coupling refers to the degree of interdependence between two or more modules or components in a software system. High coupling means that the modules are
closely connected and rely heavily on one another, while low coupling means that
the modules are largely independent of each other. High coupling can lead to
issues such as difficulty in making changes to the system and increased risk of
failure if one module fails.

Cohesion, on the other hand, refers to the degree to which the components
within a module or class work together to achieve a single, well-defined purpose.
High cohesion means that the components are closely related and work together
to achieve a common goal, while low cohesion means that the components are
loosely related and may not work well together. High cohesion is generally
desirable as it leads to better maintainability, reusability, and ease of
understanding the system.
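A minimal sketch of both ideas (the `Order` class and `format_receipt` function are hypothetical): the class is cohesive because everything in it serves one purpose, and the function is loosely coupled because it depends only on a plain number, not on the class's internals.

```python
class Order:
    """High cohesion: every member serves one purpose - order totals."""
    def __init__(self, prices):
        self.prices = prices

    def total(self):
        return sum(self.prices)

def format_receipt(total: float) -> str:
    # Low coupling: receives a plain number; knows nothing about Order.
    return f"Total due: {total:.2f}"

order = Order([10.0, 2.5])
print(format_receipt(order.total()))  # prints "Total due: 12.50"
```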

6) Give the difference between Function-Oriented Design & Object-Oriented Design.
ANS:
Function-oriented design (FOD) and Object-oriented design (OOD) are two
approaches to software design. Here are the key differences between them:
Focus:
FOD focuses on functions and procedures that are designed to perform specific
tasks. On the other hand, OOD focuses on objects and their interactions with
each other.
Data handling:
In FOD, data is passed between functions and modules as parameters, whereas
in OOD, data is encapsulated within objects, and objects communicate with each
other through methods.
Reusability:
FOD modules are typically designed to perform specific tasks and are often not
reusable in other parts of the application. In contrast, OOD encourages reuse of
objects and their functionality across different parts of the application.
Maintenance:
In FOD, changes to a function or module can potentially affect other parts of the
application, making maintenance challenging. OOD makes maintenance easier
as changes to an object typically only affect the object itself and its interactions
with other objects.
Modularity:
FOD divides the application into smaller, more manageable functions and
modules. In OOD, objects encapsulate data and behavior, making the application
more modular and easier to manage.
Inheritance:
OOD supports inheritance, where objects can inherit properties and behavior
from other objects, resulting in code reuse and reducing duplication. FOD does
not support inheritance.
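The data-handling difference can be illustrated with a hypothetical bank-balance example written in both styles:

```python
# Function-oriented: the data (balance) is passed between functions.
def deposit(balance, amount):
    return balance + amount

# Object-oriented: the data is encapsulated and reached through methods.
class Account:
    def __init__(self, balance=0):
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

print(deposit(100, 50))  # FOD: the caller owns and threads the data

acct = Account(100)      # OOD: the object owns its data
acct.deposit(50)
print(acct.balance())
```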

7) Explain Cost Estimation Techniques.


ANS: Analogous Estimating: This technique is based on historical data and
uses similar past projects as a reference for estimating the cost of the current
project. The cost of the previous project is adjusted to reflect differences in
scope, complexity, and other factors that may affect the cost of the current
project.
Bottom-Up Estimating: This technique involves breaking down the project into
smaller tasks and estimating the cost of each task individually. The estimated
cost of each task is then aggregated to arrive at the total cost of the project. This
technique is often used in projects where the scope is well-defined and the tasks
can be accurately estimated.
Three-Point Estimating: This technique involves estimating the cost of a project
based on three estimates – the most likely estimate, the best-case estimate, and
the worst-case estimate. The most likely estimate is based on the project team's
best estimate of the actual cost of the project. The best-case estimate is the
lowest possible cost estimate, and the worst-case estimate is the highest
possible cost estimate.
Parametric Estimating: This technique involves using statistical models to
estimate the cost of a project based on parameters such as the size of the
project, the complexity of the project, and the historical cost data. The model
uses mathematical formulas to calculate the cost estimate based on these
parameters.
Expert Judgment: This technique involves soliciting input from experts in the
field who have experience with similar projects. The experts provide their
opinions on the cost of the project based on their knowledge and expertise. This
technique is often used when historical data is not available, or when the project
is unique and difficult to estimate.
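The three-point technique is often combined with the PERT weighted average, which weights the most likely estimate four times as heavily; the cost figures below are hypothetical:

```python
def three_point_estimate(best, most_likely, worst):
    # PERT weighted average of the three estimates.
    return (best + 4 * most_likely + worst) / 6

# Hypothetical estimates in thousands: best 40, most likely 60, worst 100.
print(three_point_estimate(40, 60, 100))  # (40 + 240 + 100) / 6 ≈ 63.33
```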

8) Write a short note on COCOMO.


ANS: COCOMO (Constructive Cost Model) is a model for estimating the cost and
effort required to develop software. It was developed by Barry Boehm in the
1980s and has since become a widely used and well-known cost estimation
model in the software industry.
i) COCOMO is defined at three levels of increasing detail: basic, intermediate, and detailed. Each level has a set of
parameters that are used to estimate the amount of effort required to complete
the project. These parameters include the size of the software, the complexity of
the software, the experience and capability of the development team, and the
software development environment.
ii) The basic COCOMO model is the simplest and most commonly used version of the model. It uses a single formula to estimate the effort required to complete the project based on the size of the software. The intermediate and detailed COCOMO models are more complex and take into account additional factors that can affect the cost and effort required to develop the software.
iii) COCOMO has been widely used in industry for over three decades, and many
organizations use it as a standard tool for cost estimation. While it is not without
its limitations and criticisms, COCOMO is a useful tool for estimating software
development costs and can help project managers and development teams plan
and budget their projects more effectively.
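The basic model's single formula is E = a * (KLOC)^b, with published coefficients per project mode; the sketch below uses the organic-mode coefficients (a = 2.4, b = 1.05) and a hypothetical 10 KLOC project:

```python
def basic_cocomo_effort(kloc, a=2.4, b=1.05):
    # Effort in person-months; a and b default to the organic-mode values.
    return a * kloc ** b

print(round(basic_cocomo_effort(10), 2))  # ≈ 26.93 person-months for 10 KLOC
```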

9) Describe Empirical Estimation model


ANS:
Historical Data: Empirical Estimation Models rely on historical data, such as
data on similar past projects, to estimate the effort, cost, and time required to
complete a software project. This data is used to develop a statistical model that
predicts the effort, cost, and time required for the current project.

Statistical Analysis: Empirical Estimation Models use statistical analysis to estimate the effort, cost, and time required for a software project. This analysis
includes identifying patterns in historical data and using regression analysis to
develop a mathematical model that predicts the effort, cost, and time required for
the current project.

Types of Empirical Estimation Models: There are several types of Empirical Estimation Models, including COCOMO (Constructive Cost Model), which uses a
set of equations to estimate the effort, cost, and time required for a software
project; PERT (Program Evaluation and Review Technique), which uses a
three-point estimation technique to estimate the effort, cost, and time required for
a software project; and Monte Carlo Simulation, which uses statistical sampling
techniques to estimate the effort, cost, and time required for a software project.
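The Monte Carlo approach mentioned above can be sketched with hypothetical task ranges: each task cost is sampled from a triangular distribution (optimistic, most likely, pessimistic) and the sampled totals are averaged.

```python
import random

random.seed(0)  # reproducible runs

# Hypothetical (optimistic, most likely, pessimistic) cost per task.
tasks = [(8, 10, 15), (4, 6, 12), (10, 14, 25)]

def simulate_total():
    # random.triangular takes (low, high, mode).
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

samples = [simulate_total() for _ in range(10_000)]
mean_cost = sum(samples) / len(samples)
print(f"Estimated mean total cost: {mean_cost:.1f}")
```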

10) Explain the Concept of Make/Buy decision.

ANS:

1) A make-or-buy decision is an act of choosing between manufacturing a product in-house or purchasing it from an external supplier.
2) A decision table is a good way to record different combinations of inputs with their corresponding outputs, and is also called a cause-effect table. In a make-or-buy context, it can record which combinations of factors favour making versus buying.
Importance of Decision Tables:
i) Decision tables are very helpful in test design techniques.
ii) They help testers explore the effects of combinations of different inputs and other software states that must correctly implement business rules.
iii) They provide a regular way of stating complex business rules, which is helpful for developers as well as testers.
iv) They assist the development process, helping the developer do a better job.
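A decision table can be represented directly as a mapping from input combinations to outcomes; the make-or-buy rules below are hypothetical:

```python
# Hypothetical rules: (has in-house expertise, cheaper to build) -> decision.
decision_table = {
    (True,  True):  "make",
    (True,  False): "buy",
    (False, True):  "buy",   # no expertise: build risk outweighs the saving
    (False, False): "buy",
}

def decide(expertise: bool, cheaper: bool) -> str:
    return decision_table[(expertise, cheaper)]

print(decide(True, True))   # prints "make"
print(decide(False, True))  # prints "buy"
```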

11) Explain in brief software scope and feasibility.


ANS:
Software scope refers to the specific features and functionalities that a software
system must have to meet the needs of its users. The scope of a software project
outlines the boundaries of what the software will and will not do. It includes both
the functional requirements (i.e., what the software must do) and the
non-functional requirements (i.e., how well the software must do it).

Feasibility, on the other hand, refers to the extent to which a software project is
viable given its technical, economic, and organizational constraints. A feasibility
study is typically conducted early in the software development process to
determine whether the project is feasible or not. Factors that are considered
during a feasibility study may include the availability of resources, the technology
needed to implement the software, the financial resources required, and the
impact of the software on existing systems and processes.

12) Explain design modelling principles.

Ans: Design modeling principles are a set of guidelines that can help designers
create effective models of products, systems, or processes. Here are some key
design modeling principles:
Abstraction: Design models should represent the essential features of a
product, system, or process, without getting bogged down in details that are not
relevant to the design goals.
Decomposition: Design models should be decomposed into smaller, more
manageable components that can be designed and analyzed separately.
Modularity: Design models should be organized into modules or components
that can be easily replaced or modified without affecting other parts of the design.
Hierarchy: Design models should be organized into a hierarchy of components,
with each component being responsible for a specific function or set of functions.
Coupling and Cohesion: Design models should aim for high cohesion, meaning
that components should have a clear and specific responsibility, and low
coupling, meaning that the components should have minimal interdependence.
Iteration: Design modeling is an iterative process, and models should be refined
and updated as the design progresses.
Consistency: Design models should be consistent with each other and with the
design goals and requirements.
Traceability: Design models should be traceable, meaning that it should be
possible to track the evolution of the design from the initial requirements through
to the final product.
13) Explain the difference between a flowchart and a DFD.
