
Experiment No. 7
Aim: Estimation of Project metrics using COCOMO Model

Requirements:

● Hardware
○ Pentium(R) 4 CPU 2.26 GHz, 2 GB RAM
● Software
○ STAR UML TOOL/Rational Rose

Theory:

Project Estimation Techniques

● A software project is not just about writing a few hundred lines of source code to
achieve a particular objective.
● The scope of a software project is comparatively quite large, and such a project could
take several years to complete. However, the phrase "quite large" could only give some
(possibly vague) qualitative information.
● As in any other science and engineering discipline, one would be interested to measure
how complex a project is. One of the major activities of the project planning phase,
therefore, is to estimate various project parameters in order to take proper decisions.
● Some important project parameters that are estimated include:
1. Project size: What would be the size of the code written say, in number of lines, files,
modules?
2. Cost: How much would it cost to develop the software? Software may be just pieces of
code, but one has to pay the managers, developers, and other project personnel.

3. Duration: How long would it be before the software is delivered to the clients?

4. Effort: How much effort from the team members would be required to create the
software?

In this experiment we will focus on one widely used method for estimating project metrics: COCOMO.

COCOMO
● COCOMO (Constructive Cost Model) was proposed by Boehm. According to him, there
could be three categories of software projects: organic, semidetached, and embedded.
● The classification is done considering the characteristics of the software, the
development team and environment. These product classes typically correspond to
application, utility and system programs, respectively.
● Data processing programs could be considered as application programs. Compilers and
linkers are examples of utility programs.
● Operating systems, real-time system programs are examples of system programs. One
could easily apprehend that it would take much more time and effort to develop an OS
than an attendance management system.

The concepts of organic, semidetached, and embedded systems are described below.

● Organic: A development project is said to be of organic type if the project deals with
developing a well-understood application, the development team is small, and the team
members have prior experience in working with similar types of projects.
● Semidetached: A development project can be categorized as semidetached type if the
team consists of some experienced as well as inexperienced staff, and team members
may have some experience with the type of system to be developed.
● Embedded: Embedded development projects are those which aim to develop software
strongly coupled to machine hardware, and in which the team size is usually large.

Boehm suggested that estimation of project parameters should be done through three stages:
Basic COCOMO, Intermediate COCOMO, and Complete COCOMO.

Basic COCOMO Model

The basic COCOMO model helps to obtain a rough estimate of the project parameters. It
estimates effort and time required for development in the following way:

Effort = a * (KDSI)^b PM

Tdev = 2.5 * (Effort)^c Months

where

KDSI is the estimated size of the software expressed in Kilo Delivered Source Instructions

a, b, c are constants determined by the category of software project


Effort denotes the total effort required for the software development, expressed in person
months (PMs)

Tdev denotes the estimated time required to develop the software (expressed in months)

The values of the constants a, b, c for each project category are given below (Boehm's
published Basic COCOMO values):

Category      |  a  |  b   |  c
Organic       | 2.4 | 1.05 | 0.38
Semidetached  | 3.0 | 1.12 | 0.35
Embedded      | 3.6 | 1.20 | 0.32
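The formulas above can be sketched in a few lines of Python. The constants are Boehm's published Basic COCOMO values; the 32 KDSI project size is an assumed example, not from the source.

```python
# Basic COCOMO estimate (sketch). Constants are Boehm's Basic COCOMO values;
# the example size (32 KDSI) is assumed for illustration.
COEFFICIENTS = {
    # category: (a, b, c)
    "organic":      (2.4, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),
    "embedded":     (3.6, 1.20, 0.32),
}

def basic_cocomo(kdsi, category):
    """Return (effort in person-months, development time in months)."""
    a, b, c = COEFFICIENTS[category]
    effort = a * kdsi ** b      # Effort = a * (KDSI)^b  PM
    tdev = 2.5 * effort ** c    # Tdev = 2.5 * (Effort)^c  months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")
```

For a 32 KDSI organic project this yields an effort of roughly 91 person-months and a development time of about 14 months, which illustrates how effort grows slightly faster than linearly with size (b > 1) while schedule grows much more slowly (c < 1).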

Intermediate COCOMO Model

● The basic COCOMO model considers that effort and development time depend only on
the size of the software.
● However, in real life there are many other project parameters that influence the
development process.
● The intermediate COCOMO model takes these other factors into consideration by defining
a set of 15 cost drivers (multipliers), grouped into product, hardware, personnel, and
project attributes.
● Thus, any project that makes use of modern programming practices would have lower
estimates in terms of effort and cost. Each of the 15 such attributes can be rated on a
six-point scale ranging from "very low" to "extra high" in their relative order of
importance.
● Each attribute has an effort multiplier fixed as per the rating. The product of effort
multipliers of all the 15 attributes gives the Effort Adjustment Factor (EAF).
EAF is used to refine the estimates obtained by basic COCOMO as follows:

Effort|corrected = Effort * EAF

Tdev|corrected = 2.5 * (Effort|corrected)^c
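The EAF computation can be sketched as below. The four multiplier values are illustrative ratings for a handful of the 15 cost drivers, not Boehm's full multiplier table, and the basic effort figure is an assumed input from a Basic COCOMO run.

```python
# Intermediate COCOMO refinement (sketch). The driver multipliers below are
# illustrative values, not Boehm's published table.
def eaf(multipliers):
    """Effort Adjustment Factor: the product of all cost-driver multipliers."""
    product = 1.0
    for m in multipliers:
        product *= m
    return product

basic_effort = 91.3                  # assumed output of a Basic COCOMO run (PM)
c = 0.38                             # organic-mode schedule exponent
drivers = [1.15, 0.94, 1.08, 0.91]   # illustrative ratings for 4 of the 15 drivers

corrected_effort = basic_effort * eaf(drivers)   # Effort|corrected = Effort * EAF
corrected_tdev = 2.5 * corrected_effort ** c     # Tdev|corrected
print(f"EAF = {eaf(drivers):.3f}, corrected effort = {corrected_effort:.1f} PM")
```

Multipliers above 1 (e.g. high required reliability) inflate the estimate, while those below 1 (e.g. a highly capable team) deflate it, so the EAF can push the corrected effort either above or below the basic figure.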

Complete COCOMO Model

● Both the basic and intermediate COCOMO models consider a software to be a single
homogeneous entity -- an assumption, which is rarely true. In fact, many real life
applications are made up of several smaller sub-systems. (One might not even develop
all the sub-systems -- just use the available services). The complete COCOMO model
takes these factors into account to provide a far more accurate estimate of project
metrics.
● To illustrate this, consider a very popular distributed application: the ticket booking
system of the Indian Railways. There are computerized ticket counters in most of the
railway stations of our country. Tickets can be booked / cancelled from any such
counter. Reservations for future tickets, cancellation of reserved tickets could also be
performed. On a high level, the ticket booking system has three main components:
1. Database
2. Graphical User Interface (GUI)
3. Networking facilities

Among these, development of the GUI can be considered an organic project; the database
module could be considered semidetached software; and the networking module can be
considered embedded software. To obtain a realistic estimate, one should estimate the cost of
each component separately using the appropriate mode, and then sum them.
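The per-component summation just described can be sketched as follows. The component sizes in KDSI are assumed example values, not figures from the actual railway system.

```python
# Complete COCOMO sketch: estimate each sub-system in the mode that fits it,
# then sum the efforts. KDSI sizes are assumed example values.
COEFFICIENTS = {
    # category: (a, b)
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

def component_effort(kdsi, category):
    a, b = COEFFICIENTS[category]
    return a * kdsi ** b  # person-months

components = [
    ("GUI",        5, "organic"),       # assumed 5 KDSI
    ("Database",   8, "semidetached"),  # assumed 8 KDSI
    ("Networking", 6, "embedded"),      # assumed 6 KDSI
]

total = sum(component_effort(size, mode) for _, size, mode in components)
print(f"Total estimated effort = {total:.1f} PM")
```

Note that the embedded networking module, despite being smaller than the database module, contributes a comparable share of the effort because of its larger coefficients.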

Advantages of COCOMO Model

COCOMO is a simple model, and should help one to understand the concept of project metrics
estimation.

Features of Data Flow Diagram:

● Information and/or data flow is represented by a labelled arrow

● Processes (transformations) are represented by labelled circles (bubbles)

● Information sources and sinks are represented by boxes

● Files and repositories (data stores) are represented by a rounded rectangle or a double line.

Purpose of Data Flow Diagram

● Definition of the data items, their sources, and destinations


● Determining boundaries of the system, i.e., which of the data items are internal
elements and which are external
● Developing Level 0 basic overview diagram
● Going further into the specific processes and functions (Level 1, 2 and so on)
● Going back and confirming the accuracy and consistency of causality

Data Flow Diagram Notation

All data flow diagrams include four main elements: entity, process, data store and data flow.

1. External Entity 
● Also known as actors, sources or sinks, and terminators,
external entities produce and consume data that flows
between the entity and the system being diagrammed.
● These data flows are the inputs and outputs of the DFD.
● They are external to the system being analyzed, these
entities are typically placed at the boundaries of the
diagram.
● They can represent another system or indicate a
subsystem.
2. Process 
● An activity that changes or transforms data flows. They
transform incoming data to outgoing data; all
processes must have inputs and outputs on a DFD.
● This symbol is given a simple name based on its
function, such as “Ship Order,” rather than being
labeled “process” on a diagram.
● Processes are typically oriented from top to bottom
and left to right on a data flow diagram.
3. Data Store 
● A data store does not generate any operations but simply holds data for later
access.
● Data stores could consist of files held long term or a batch
of documents stored briefly while they wait to be
processed.
● Input flows to a data store include information or
operations that change the stored data.
● Output flows would be data retrieved from the store.
4. Data Flow 
● Movement of data between external entities,
processes and data stores is represented with an
arrow symbol, which indicates the direction of
flow.
● This data could be electronic, written or verbal.
● Input and output data flows are labeled based on the type of data or its
associated process or data store, and this name is written alongside the arrow
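The four element types above can also be modeled programmatically, which makes rules such as "every process must have inputs and outputs" checkable. This is an illustrative sketch; the node names reuse the "Ship Order" example from the notation discussion, and the other names are hypothetical.

```python
# Modeling DFD elements as a small labeled graph (illustrative sketch).
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # "entity", "process", or "store"

customer = Node("Customer", "entity")
ship_order = Node("Ship Order", "process")
orders_db = Node("Orders", "store")

# Each data flow is a labeled, directed edge between two nodes.
flows = [
    (customer, ship_order, "order details"),
    (ship_order, orders_db, "shipment record"),
]

def process_is_valid(process, flows):
    """A process on a DFD must have at least one input and one output flow."""
    has_input = any(dst == process for _, dst, _ in flows)
    has_output = any(src == process for src, _, _ in flows)
    return has_input and has_output

print(process_is_valid(ship_order, flows))
```

Tools like StarUML enforce such consistency rules internally; representing the diagram as data is one simple way to see why they are mechanically checkable.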

Steps to draw Data Flow Diagram in StarUml

Step 1 - Open StarUML


Step 2 - Right-click on Model to open a context menu containing the 'Add Diagram' option.
Select 'Data Flow Diagram' from the 'Add Diagram' submenu.

Step 3 - Draw a Data Flow Diagram for your problem statement


DataFlow Diagram - Level 0.0

DataFlow Diagram - Level 1.0


DataFlow Diagram - Level 1.1

DataFlow Diagram - Level 2.0


A) Data Dictionary
A data dictionary is a file or a set of files that includes a database's metadata. The data
dictionary holds records about other objects in the database, such as data ownership, data
relationships to other objects, and other data. The data dictionary is an essential component of
any relational database. Despite its importance, it is invisible to most database users;
typically, only database administrators interact with the data dictionary directly.

Features of Data Dictionary

● It gives well-structured and clear information about the database. One can
analyse the requirements and spot redundancy such as duplicate columns or tables.
Since it provides good documentation of each object, it helps in understanding the
requirements and design to a great extent.

● It is very helpful for an administrator or any new DBA in understanding the
database. Since it holds all the information about the database, a DBA can easily
track down inconsistencies in it.

● Since a database can be very large, with many tables, views, constraints,
indexes, etc., it is difficult for anyone to remember them all. The data dictionary
helps users by recording all these details.

Purpose of Data Dictionary

● Data models alone provide limited information and detail, so a data dictionary is
essential for proper knowledge and usage of the system's contents.

● The data dictionary provides information about every name used in the system
models.

● It also provides information about the entities, relationships, and attributes
present in a system model.

● Implementation of a data dictionary is done as part of structured analysis and
design.

Data Dictionary Notations


The following types of information are stored in a data dictionary:

A data dictionary includes the following types of notations:

Data Dictionary outlining a Bug in Bug Tracking System


Data Item  | Data Type | Field Size | Description                                           | Example
bug_id     | integer   | 5          | Unique identifier for a bug                           | 10001
bug_type   | string    | 10         | Type of bug                                           | Functional Bugs
bug_status | string    | 8          | Status of bug indicating whether it is solved or not  | Not solved
bug_desc   | string    | 50         | Description regarding the bug                         | The search doesn't react to the user input
project_id | integer   | 5          | Identifier for the project where the current bug occurred | 50025

Data Dictionary outlining a Project in Bug Tracking System

Data Item      | Data Type         | Field Size | Description                                                                 | Example
project_id     | integer           | 5          | Unique identifier for a project                                             | 50025
project_status | string            | 20         | Status of project indicating whether it is completed, ongoing or cancelled  | Ongoing
project_desc   | string            | 50         | Description regarding the project                                           | Project is to develop an automated machine learning system for sentiment analysis in social media texts such as Twitter.
manager_id     | integer           | 5          | Identifier of the project manager assigned for the current project          | 30258
developers     | list              | 80         | List of developers assigned to complete the current project                 | [20054, 25684, 26541, 28542, 29415]
testers        | list              | 80         | List of testers assigned for the current project                            | [44044, 49884, 41541, 48682, 41265]
start_date     | date (dd/mm/yyyy) | 10         | Starting date of the current project                                        | 02/01/2021
end_date       | date (dd/mm/yyyy) | 10         | Last date of the current project                                            | 15/05/2021
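One practical use of such a data dictionary is as a validation schema for records entering the system. The sketch below mirrors the field names and sizes of the bug table above; the checker function itself is hypothetical, not part of any described tool.

```python
# Sketch: treating the bug data-dictionary entries as a validation schema.
# Field names/sizes mirror the table above; the checker is hypothetical.
BUG_DICTIONARY = {
    # field: (python type, maximum field size when rendered as text)
    "bug_id":     (int, 5),
    "bug_type":   (str, 10),
    "bug_status": (str, 8),
    "bug_desc":   (str, 50),
    "project_id": (int, 5),
}

def validate(record, dictionary):
    """Return a list of data-dictionary violations; empty means valid."""
    errors = []
    for field, (ftype, size) in dictionary.items():
        if field not in record:
            errors.append(f"missing field: {field}")
            continue
        value = record[field]
        if not isinstance(value, ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
        elif len(str(value)) > size:
            errors.append(f"{field}: exceeds field size {size}")
    return errors

bug = {"bug_id": 10001, "bug_type": "Functional", "bug_status": "Open",
       "bug_desc": "Search ignores the user input", "project_id": 50025}
print(validate(bug, BUG_DICTIONARY))
```

This also shows why consistent field sizes matter: a value longer than the declared size (for example, a 15-character bug type in a 10-character field) would be flagged before it reaches the database.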

Conclusion

Thus we have designed a Data Flow Diagram and a Data Dictionary for the Bug Tracking
System using the StarUML tool.
