
UNIT-IV

DESIGNING USING UML


I. OVERVIEW OF OOAD METHODOLOGY :
Object-oriented analysis and design (OOAD) is a technical approach for analyzing and
designing an application, system, or business by applying object-oriented programming,
as well as using visual modeling throughout the software development process to guide
stakeholder communication and product quality.

What is Object Oriented Methodology?


• It is a new system development approach, encouraging and facilitating re-use of software
components.
• It employs international standard Unified Modeling Language (UML) from the Object
Management Group (OMG).
• Using this methodology, a system can be developed on a component basis, which enables the effective re-use of existing components and facilitates the sharing of components with other systems.

There are three types of Object Oriented Methodologies:

1. Object Modeling Techniques (OMT)


2. Object Process Methodology (OPM)
3. Rational Unified Process (RUP)

1. Object Modeling Techniques (OMT) :


• It was one of the first object oriented methodologies and was introduced by Rumbaugh in
1991.
• OMT uses three different models that are combined in a way that is analogous to the older
structured methodologies.

a. Analysis
b. OMT Models
I. Object Model
II. Dynamic Model
III. Functional Model
c. Design

a. Analysis
• The main goal of the analysis is to build models of the real world.
• The requirements of the users, developers and managers provide the information needed
to develop the initial problem statement.

b. OMT Models :

I. Object Model :

• It depicts the object classes and their relationships as a class diagram, which represents
the static structure of the system.
• It observes all the objects as static and does not pay any attention to their dynamic nature.

II. Dynamic Model :

• It captures the behavior of the system over time and the flow control and events in the
Event-Trace Diagrams and State Transition Diagrams.
• It portrays the changes occurring in the states of various objects with the events that might
occur in the system.

III. Functional Model :

• It describes the data transformations of the system.


• It describes the flow of data and the changes that occur to the data throughout the system.

c. Design :
• It specifies all of the details needed to describe how the system will be implemented.
• In this phase, the details of the system analysis and system design are implemented.
• The objects identified in the system design phase are designed.

2. Object Process Methodology (OPM) :
• It is also called a second-generation methodology.
• It was first introduced in 1995.
• It has only one diagram that is the Object Process Diagram (OPD) which is used for
modeling the structure, function and behavior of the system.
• It has a strong emphasis on modeling but has a weaker emphasis on process.
• It consists of three main processes:
I. Initiating: It determines high level requirements, the scope of the system and the
resources that will be required.
II. Developing: It involves the detailed analysis, design and implementation of the system.
III. Deploying: It introduces the system to the user and subsequent maintenance of the
system.

3. Rational Unified Process (RUP) :


• It was developed by Rational Corporation in 1998.
• It consists of four phases which can be broken down into iterations.
I. Inception
II. Elaboration
III. Construction
IV. Transition
• Each iteration consists of nine work areas called disciplines.
• A discipline depends on the phase in which the iteration is taking place.

Benefits of Object Oriented Methodologies :

1. It closely represents the problem domain, so designs are easier to produce and understand.
2. It accommodates changes more easily.
3. It provides a good structure for thinking and abstracting, and leads to modular design.

4. Simplicity:
The software object's model complexity is reduced and the program structure is very clear.

5. Reusability:
• It contains both data and functions which act on data.
• It makes easy to reuse the code in a new system.
• Messages provide a predefined interface to an object's data and functionality.

6. Increased Quality:

The increase in quality is largely a by-product of program reuse.

7. Maintainable:
• The objects can be maintained separately, making locating and fixing problems easier.

8. Scalable:
• Object-oriented applications are more scalable than those built with the structured approach.

9. Modularity:
The OOD systems are easier to modify.

10. Modifiability:
It is easy to make minor changes in the data representation or the procedures in an object
oriented program.

11. Client/Server Architecture:


Object-oriented systems map naturally onto client/server architecture, which involves the transmission of messages back and forth over a network.

II.USE CASE MODEL DEVELOPMENT :

In UML, use-case diagrams model the behavior of a system and help to capture the
requirements of the system. Use-case diagrams describe the high-level functions and
scope of a system. These diagrams also identify the interactions between the system and its
actors.

Use case diagrams can be used for −

• Requirement analysis and high level design.

• Model the context of a system.

• Reverse engineering.

• Forward engineering.

Purpose of Use Case Diagrams :


The purposes of use case diagrams are as follows −
• Used to gather the requirements of a system.
• Used to get an outside view of a system.
• Identify the external and internal factors influencing the system.
• Show the interactions between the requirements and the actors.

A use-case model is a model of how different types of users interact with the system to solve
a problem. As such, it describes the goals of the users, the interactions between the users
and the system, and the required behavior of the system in satisfying these goals.

A use-case model consists of a number of model elements. The most important model
elements are: use cases, actors and the relationships between them.

The use-case model may contain packages that are used to structure the model to simplify
analysis, communications, navigation, development, maintenance and planning.

The use-case model serves as a unifying thread throughout system development. It is used
as the primary specification of the functional requirements for the system, as the basis for
analysis and design, as an input to iteration planning, as the basis of defining test cases and
as the basis for user documentation.

Basic Use Case model elements :

1. Actor

2. Use Case

3. Associations

Actor :

A model element representing each actor. Properties include the actor's name and a brief description.

Use Case :

A model element representing each use case. Properties include the use case name and use
case specification.

Associations :

Associations are used to describe the relationships between actors and the use cases they
participate in. This relationship is commonly known as a “communicates-association”.

Advanced Use Case model elements :

1. Subject

2. Use-Case Package

3. Generalizations

4. Dependencies

1. Subject

A model element that represents the boundary of the system of interest.

2. Use-Case Package

A model element used to structure the use case model to simplify analysis, communications,
navigation, and planning. If there are many use cases or actors, you can use use-case
packages to further structure the use-case model in much the same manner you use folders
or directories to structure the information on your hard-disk.

You can partition a use-case model into use-case packages for several reasons, including:

• To reflect the order, configuration, or delivery units in the finished system, thus supporting iteration planning.
• To support parallel development by dividing the problem into bite-sized pieces.
• To simplify communication with different stakeholders by creating packages for
containing use cases and actors relevant to a particular stakeholder.

3. Generalizations

A relationship between actors to support re-use of common properties.

4. Dependencies

A number of dependency types between use cases are defined in UML. In particular,
<<extend>> and <<include>>.

<<extend>> is used to include optional behavior from an extending use case in an extended
use case.

<<include>> is used to include common behavior from an included use case into a base use
case in order to support re-use of common behavior.

Example Use-Case Diagram

Figure 1 shows a use-case diagram from an Automated Teller Machine (ATM) use-case
model.

Figure 1: ATM Use-Case Diagram

III. DOMAIN MODELLING :
A domain model is a visual representation of conceptual classes or real-situation objects in a domain.
Domain models have also been called conceptual models (the term used in the first
edition of this book), domain object models, and analysis object models.
Applying UML notation, a domain model is illustrated with a set of class diagrams in
which no operations (method signatures) are defined. It provides a conceptual perspective.
It may show:

• domain objects or conceptual classes


• associations between conceptual classes
• attributes of conceptual classes.

What are Conceptual Classes?


The domain model illustrates conceptual classes or vocabulary in the domain. Informally,
a conceptual class is an idea, thing, or object. More formally, a conceptual class may be
considered in terms of its symbol, intension, and extension.
• Symbol - words or images representing a conceptual class.
• Intension - the definition of a conceptual class.
• Extension - the set of examples to which the conceptual class applies.

The objects identified during domain analysis are classified into 3 types:

1. Boundary objects
2. Controller objects
3. Entity objects

• Boundary objects:
The boundary objects are those with which the actors interact.

• Entity objects:
These normally hold information, such as data tables and files, that needs to outlive use case execution, e.g. Book, BookRegister, LibraryMember, etc.

• Controller objects:
The controller objects coordinate the activities of a set of entity objects and interface with the boundary objects to provide the overall behavior of the system. The controller objects effectively decouple the boundary and entity objects from each other, making the system tolerant to changes of the user interface and processing logic.
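The three object types can be sketched as plain classes. This is a minimal illustration for the library example; the class names (Book, IssueBookController, IssueBookUI) and their methods are hypothetical, chosen only to show how the controller decouples the boundary object from the entity object:

```python
class Book:                       # entity object: outlives a single use case
    def __init__(self, title):
        self.title = title
        self.issued = False

class IssueBookController:        # controller object: coordinates entities
    def __init__(self, books):
        self.books = {b.title: b for b in books}
    def issue(self, title):
        book = self.books.get(title)
        if book is not None and not book.issued:
            book.issued = True
            return True
        return False

class IssueBookUI:                # boundary object: what the actor interacts with
    def __init__(self, controller):
        self.controller = controller
    def on_issue_clicked(self, title):
        # the UI never touches Book directly, only the controller
        return "Issued" if self.controller.issue(title) else "Not available"
```

Because the boundary class talks only to the controller, the user interface and the processing logic can change independently, as described above.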

IV. IDENTIFICATION OF ENTITY OBJECTS :

Entity Relationship Modeling :

The Unified Modeling Language (UML) is a widely accepted language among analysts and software developers, and it is an excellent fit for the graphic representation of ER diagrams. By using UML, development teams gain significant benefits, including easier communication between team members, easy integration with repositories (because the language is based on meta-models), use of a standardized input/output format (XMI), universal use for application and data modeling, unified representation from analysis through implementation to deployment, and completeness of specification.

Core Elements of ER Modeling :


ER modeling is based on artifacts, which can be either a representation of physical
artifacts, such as Product or Employee, or a representation of a transaction between
artifacts, such as Order or Delivery. Each artifact contains information about itself.
ER modeling also focuses on relationships between artifacts. These relationships
can be either binary, connecting two artifacts, or ternary, among several artifacts.

The four essential elements of ER modeling are:

• Entity types

• Attributes

• Relationship types

• Attributes on relationships

Entity Types :

An entity type is a set of artifacts with the same structure and independent existence
within the enterprise. Examples of an entity type would be Employees or Products.

Attributes :
The structure of an entity type is defined with attributes. An attribute can be seen
as a property of an entity type. Attributes of an Employee might be Name,
Address, Social Security Number, Birth Date, Date Joined, and Position.

Entities differ from each other by the values of their attributes.

Relationship Types :
While entity types describe independent artifacts, relationship types describe
meaningful associations between entity types. To be precise, the relationship type
describes that entities of entity types participating in the relationship can build a
meaningful association. The actual occurrence of the association between entities
is called a relationship.

It is important to understand that although we defined a relationship type, this does


not mean that every pair of entities builds a relationship. A relationship type defines
only that relationships can occur.

Attributes of Relationship Types :


Relationship types can also contain attributes. For example, the relationship type Services between Employee and Product could contain the attributes Date and Status, which identify the date of the service and the status of the product after the service is done.
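The four ER elements above can be sketched as plain classes. The Employee, Product, and Service names follow the running example; the specific attribute names and values are illustrative assumptions:

```python
from datetime import date

class Employee:                  # entity type: independent existence
    def __init__(self, name, ssn):
        self.name = name         # attributes define the entity's structure
        self.ssn = ssn

class Product:                   # entity type
    def __init__(self, serial):
        self.serial = serial

class Service:                   # relationship type carrying its own attributes
    def __init__(self, employee, product, service_date, status):
        self.employee = employee       # participating entity
        self.product = product         # participating entity
        self.date = service_date       # attribute on the relationship
        self.status = status           # attribute on the relationship

# one occurrence of the association, i.e. one relationship
s = Service(Employee("Ann", "123-45-6789"), Product("P-42"),
            date(2023, 5, 1), "repaired")
```

Note that Date and Status belong to the Service relationship itself, not to either participating entity, exactly as described above.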

V. BOOCH'S OBJECT IDENTIFICATION METHOD :

Booch's object identification method is used to identify the different objects of the software during domain analysis (domain modeling).

It covers the analysis and design phases of an object-oriented system. The Booch method includes six types of diagrams: class diagrams, object diagrams, state transition diagrams, module diagrams, process diagrams and interaction diagrams.

According to Booch's method, a potential object found after the lexical analysis is considered legitimate only if it satisfies the following criteria: retained information, multiple attributes, and common operations.

The Booch notation is characterized by cloud shapes to represent classes and distinguishes
the following diagrams :

Model      Type      Diagram                     UML correspondence
Logical    Static    Class diagram               Class diagram
                     Object diagram              Object diagram
           Dynamic   State transition diagram    State chart diagram
                     Interaction diagram         Sequence diagram
Physical   Static    Module diagram              Component diagram
                     Process diagram             Deployment diagram

The process is organized around:

1. Macro Process

2. Micro Process

1. Macro Process :

The macro process identifies the following cycle of activities:

• Conceptualization : establish core requirements


• Analysis : develop a model of the desired behavior
• Design : create an architecture
• Evolution: for the implementation
• Maintenance : for evolution after the delivery

2. Micro Process :

The micro process is applied to new classes, structures or behaviors that emerge during the
macro process. It is made of the following cycle:

• Identification of classes and objects


• Identification of their semantics
• Identification of their relationships
• Specification of their interfaces and implementation

Retained Information:

The object must contain some specific and retained information regarding itself. If the object has no specific information or any sort of private data, then it is not considered to play an important role in the software. Hence, every valid object must have some information that defines the object and its usefulness.

Multiple attributes:

The multiple attributes criterion means that the object supports multiple methods. The more methods (or attributes) an object has, the more functionality it can offer and the more it can relate to other objects.

Objects with a single attribute or very few attributes are usually considered to be parts of other objects, or are directly driven by other objects. Hence, such objects are irrelevant and not worth constructing on their own.

Common operations:

Several common operations are applicable to the potential objects. If these operations are applicable to all occurrences of the object, they can easily be implemented once for the whole class. If the operations are not applicable to every instance, the object is not considered valid, because every function of the class must be applicable to every instance of the class. Such cases are treated as special objects for which various sub-classes are needed.

VI. INTERACTION MODELING :

In UML models, an interaction is a behavior that represents communication
between one or more participants. A sequence diagram is a UML interaction diagram that
models the messages that pass between participants, such as objects and roles, as well as
the control and conditional structures, such as combined fragments.

This interactive behavior is represented in UML by two diagrams known as Sequence


diagram and Collaboration diagram. The basic purpose of both diagrams is similar.

The sequence diagram emphasizes the time sequence of messages, while the collaboration diagram emphasizes the structural organization of the objects that send and receive messages.

Purpose of Interaction Diagrams :

The purpose of interaction diagrams is to visualize the interactive behavior of the system.
Visualizing the interaction is a difficult task. Hence, the solution is to use different types of
models to capture the different aspects of the interaction.

Sequence and collaboration diagrams are used to capture the dynamic nature but from a
different angle.

The purposes of interaction diagrams are −

• To capture the dynamic behaviour of a system.

• To describe the message flow in the system.

• To describe the structural organization of the objects.

• To describe the interaction among objects.

VII. CRC CARDS :

Class-responsibility-collaboration (CRC) cards are a brainstorming tool used in the
design of object-oriented software. They were originally proposed by Ward Cunningham and
Kent Beck as a teaching tool, but are also popular among expert designers and
recommended by extreme programming supporters.

A class represents a collection of similar objects. An object is a person, place, thing,


event, or concept that is relevant to the system at hand.

CRC Card Layout :

Collaboration takes one of two forms: A request for information or a request to do something.
For example, the card Student requests an indication from the card Seminar whether a space
is available, a request for information.

So how do you create CRC models? Iteratively perform the following steps:

• Find classes.
• Find responsibilities
• Define collaborators.
• Move the cards around.
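A CRC card itself can be modelled as a tiny data structure. This sketch (a hypothetical CRCCard class) mirrors the Student/Seminar collaboration described above, where Student requests an indication of space availability from Seminar:

```python
class CRCCard:
    """One card: a class name, its responsibilities, and its collaborators."""
    def __init__(self, name):
        self.name = name
        self.responsibilities = []
        self.collaborators = []

    def add_responsibility(self, text, collaborator=None):
        # a responsibility may require a collaborator to fulfil it
        self.responsibilities.append(text)
        if collaborator and collaborator not in self.collaborators:
            self.collaborators.append(collaborator)

# Student asks Seminar whether a space is available,
# so Seminar is recorded as a collaborator of Student.
student = CRCCard("Student")
student.add_responsibility("Enroll in seminar", collaborator="Seminar")

seminar = CRCCard("Seminar")
seminar.add_responsibility("Indicate whether a space is available")
```

"Moving the cards around" then corresponds to rearranging these objects while discussing which responsibilities and collaborators belong where.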

VIII. APPLICATIONS OF THE ANALYSIS AND DESIGN
PROCESS :
UML is a graphical language for visualizing, specifying, constructing, and documenting
information about software-intensive systems. UML gives a standard way to write a system
model, covering conceptual ideas. With an understanding of modeling, the use and
application of UML can make the software development process more efficient.

Fields applying UML :

UML has been used in the following areas:

• Enterprise information systems


• Banking and financial services
• Telecommunications
• Defense

Modeling applications of UML using various diagrams :

The following lists of UML diagrams and functionality summaries enable understanding of
UML applications in real-world examples.

Structure diagrams and their applications :

Structure diagrams show a view of a system that depicts the structure of the objects, including their classifiers, relationships, attributes and operations:

• Class diagram
• Component diagram
• Composite structure diagram
• Deployment diagram
• Object diagram
• Package diagram
• Profile diagram

Behaviour diagrams and their applications :

Behaviour diagrams are used to illustrate the behavior of a system; they are used extensively to describe the functionality of software systems.

Some Behaviour diagrams are:

• Activity diagram
• State machine diagram
• Use case diagram

Interaction diagrams and their applications :

Interaction diagrams are a subset of behaviour diagrams and emphasize the flow of control and data among the things in the system being modelled:

• Communication diagram
• Interaction overview diagram
• Sequence diagram
• Timing diagram

IX. OBJECT-ORIENTED DESIGN PRINCIPLES :
The Principles are :
1. Single Responsibility Principle (SRP)
2. Open-Closed Principle (OCP)
3. Liskov Substitution Principle (LSP)
4. Interface Segregation Principle (ISP)
5. Dependency Inversion Principle (DIP)

1. Single Responsibility Principle (SRP) :


The SRP requires that a class should have only a single responsibility.

2. Open-Closed Principle (OCP) :


The OCP requires that each software entity should be open for extension, but closed for modification. This can be achieved by using subclasses of a base class AbstractValidationRule that has an overridable function validate(Order order). Subclasses can implement the method differently without changing the base class functionality.

3. Liskov Substitution Principle (LSP) :


The LSP requires that objects in a program should be replaceable with instances of their subclasses without altering the correctness of that program. The users must be able to use objects of subclasses via references to base classes without noticing any difference. When using an object through its base class interface, the object of a subclass must not expect the user to obey preconditions that are stronger than those required by the base class.
4. Interface Segregation Principle (ISP) :

The ISP requires that clients should not be forced to depend on interfaces that they do not
use.

5. Dependency Inversion Principle (DIP) :

The DIP requires that high-level modules should not depend on low-level modules; both should depend on abstractions. Also, abstractions should not depend on details; details should depend on abstractions.
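As a minimal sketch of OCP and LSP together, the following uses the AbstractValidationRule idea mentioned above; the concrete rule names (NonEmptyRule, MaxItemsRule) and the Order structure are illustrative assumptions:

```python
from abc import ABC, abstractmethod

class Order:
    def __init__(self, items):
        self.items = items

class AbstractValidationRule(ABC):        # closed for modification (OCP)
    @abstractmethod
    def validate(self, order):
        ...

class NonEmptyRule(AbstractValidationRule):      # open for extension
    def validate(self, order):
        return len(order.items) > 0

class MaxItemsRule(AbstractValidationRule):      # another extension
    def __init__(self, limit):
        self.limit = limit
    def validate(self, order):
        return len(order.items) <= self.limit

def check(order, rules):
    # LSP: this caller uses only the base-class interface, so any
    # subclass instance can be substituted without changing it.
    return all(rule.validate(order) for rule in rules)
```

New validation rules are added as new subclasses; neither the base class nor `check` ever needs to change.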

X. OOD(OBJECT-ORIENTED DESIGN) GOODNESS CRITERIA :

Characteristics of Good Object Oriented Design
1. Coupling guidelines
2. Cohesion guideline
3. Hierarchy and factoring guidelines
4. Keeping message protocols simple
5. Number of Methods
6. Depth of the inheritance tree
7. Number of messages per use case
8. Response for a class

1. Coupling guidelines:
The number of messages between two objects or among a group of objects should be kept to a minimum. Excessive coupling between objects is detrimental to modular design and prevents reuse.

2. Cohesion guideline:
In OOD, cohesion is considered at 3 levels:
A. Cohesiveness of the individual methods
B. Cohesiveness of the data and methods within a class
C. Cohesiveness of an entire class hierarchy

A. Cohesiveness of the individual methods:
The cohesiveness of each individual method is desirable, since it ensures that each method performs only one well-defined function.

B. Cohesiveness of the data and methods within a class:
This is desirable since it ensures that the methods of an object perform only those actions for which the object is naturally responsible, i.e. it ensures that no action has been improperly mapped to an object.

C. Cohesiveness of an entire class hierarchy:
Cohesiveness of the methods within a class is desirable, since it promotes encapsulation of the objects.

3. Hierarchy and factoring guidelines:
A base class should not have too many subclasses. If too many subclasses are derived from a single base class, it becomes difficult to understand the design. In fact, there should be approximately no more than 7±2 classes derived from a base class at any level.

4. Keeping message protocols simple:
Complex message protocols are an indication of excessive coupling among objects. If a message requires more than three parameters, it is an indication of bad design.

5. Number of Methods:
Objects with a large number of methods are likely to be more application-specific and also more difficult to understand, limiting the possibility of their reuse. Therefore, objects should not have too many methods. This is a measure of the complexity of a class. Classes having more than about seven methods are likely to have problems.

6. Depth of the inheritance tree:
The deeper a class is in the class inheritance hierarchy, the greater the number of methods it is likely to inherit, making it more complex. Therefore, the height of the inheritance tree should not be very large.

7. Number of messages per use case:
If methods of a large number of objects are invoked in a chain in response to a single message, testing and debugging of the objects becomes difficult. Therefore, a single message should not result in excessive message generation and transmission in the system.

8. Response for a class:
This is a measure of the maximum number of methods that an instance of this class would call. If the same method is called more than once, it is counted only once. A class that calls more than about seven different methods is susceptible to errors.
XI. CK Metrics / Chidamber & Kemerer object-oriented metrics
suite :
The CK metrics suite consists of six metrics: Weighted Methods Per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), Coupling Between Object Classes (CBO), Response For a Class (RFC), and Lack of Cohesion in Methods (LCOM).
The CK metrics can be used to measure characteristics of OO systems such as classes, message passing, inheritance, and encapsulation. They help software maintainers to better understand the complexity of classes as well as the potential effects of changing classes.
The Chidamber & Kemerer metrics suite originally consists of 6 metrics calculated for
each class:
1. Weighted Methods Per Class (WMC)
2. Depth of Inheritance Tree (DIT)
3. Number of Children (NOC)
4. Coupling between Object Classes (CBO)
5. Response for a Class (RFC)
6. Lack of Cohesion of Methods (LCOM1)

1. WMC Weighted Methods Per Class :

Despite its long name, WMC is simply the method count for a class.

WMC = number of methods defined in class

WMC is a predictor of how much time and effort is required to develop and maintain the class.

2. DIT Depth of Inheritance Tree :

DIT = maximum inheritance path from the class to the root class

3. NOC Number of Children :

NOC = number of immediate sub-classes of a class

NOC equals the number of immediate child classes derived from a base class.

NOC measures the breadth of a class hierarchy, where maximum DIT measures the
depth. Depth is generally better than breadth, since it promotes reuse of methods through
inheritance. NOC and DIT are closely related. Inheritance levels can be added to increase
the depth and reduce the breadth.
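The first three metrics can be computed mechanically for a small, hypothetical Python hierarchy. This sketch uses the unweighted WMC variant described above, counts DIT as the number of ancestors excluding Python's built-in `object`, and counts NOC via immediate subclasses:

```python
import inspect

class Account:                       # root of a hypothetical hierarchy
    def deposit(self, amount): pass
    def withdraw(self, amount): pass

class SavingsAccount(Account):
    def add_interest(self): pass

class CheckingAccount(Account):
    pass

def wmc(cls):
    # WMC (unweighted): methods defined in the class itself
    return sum(1 for v in vars(cls).values() if inspect.isfunction(v))

def dit(cls):
    # DIT: length of the inheritance path, excluding `object`
    return len(cls.__mro__) - 2

def noc(cls):
    # NOC: number of immediate subclasses
    return len(cls.__subclasses__())
```

Here `wmc(Account)` is 2, `dit(SavingsAccount)` is 1 (one level below the root), and `noc(Account)` is 2, matching the breadth-versus-depth discussion above.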

4. CBO Coupling between Object Classes :

CBO = number of classes to which a class is coupled

Two classes are coupled when methods declared in one class use methods or instance
variables defined by the other class. The uses relationship can go either way: both uses and
used-by relationships are taken into account, but only once.

Multiple accesses to the same class are counted as one access. Only method calls and
variable references are counted. Other types of reference, such as use of constants, calls to
API declares, handling of events, use of user-defined types, and object instantiations are
ignored. If a method call is polymorphic (either because of Overrides or Overloads), all the
classes to which the call can go are included in the coupled count.

5. RFC and RFC' Response for a Class :

The response set of a class is a set of methods that can potentially be executed in response
to a message received by an object of that class. RFC is simply the number of methods in
the set.

RFC = M + R (first-step measure)
RFC' = M + R' (full measure)

M = number of methods in the class
R = number of remote methods directly called by methods of the class
R' = number of remote methods called, recursively through the entire call tree

A given method is counted only once in R (and R’) even if it is executed by several
methods M.

Since RFC specifically includes methods called from outside the class, it is also a
measure of the potential communication between the class and other classes.

A large RFC has been found to indicate more faults. Classes with a high RFC are more
complex and harder to understand. Testing and debugging is complicated. A worst case value
for possible responses will assist in appropriate allocation of testing time.
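A quick first-step RFC computation for a hypothetical OrderManager-style class; the method and remote-call names are illustrative assumptions:

```python
# M = the class's own methods
own_methods = {"place_order", "cancel_order"}

# remote methods directly called by each of the class's methods
remote_calls = {
    "place_order":  {"Inventory.reserve", "Payment.charge"},
    "cancel_order": {"Inventory.release", "Payment.charge"},
}

# R: Payment.charge is called by both methods but counted only once
R = set().union(*remote_calls.values())

rfc = len(own_methods) + len(R)      # RFC = M + R
```

With M = 2 and three distinct remote methods, RFC is 5; the duplicate call to Payment.charge is counted once, as the definition above requires.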

6. LCOM1 Lack of Cohesion of Methods :

The 6th metric in the Chidamber & Kemerer metrics suite is LCOM (or LOCOM), the lack
of cohesion of methods. This metric has received a great deal of critique and several
alternatives have been developed.

XII. LK METRICS :
Lorenz and Kidd proposed the following OO metrics. There are 4 metrics:

Metric 1: Class Size metric (CS)
Metric 2: Number of Operations (methods) Overridden by a subclass (NOO)

Metric 3: Number of Operations (methods) Added by a subclass (NOA)


Metric 4: Specialization Index (SI)

Metric 1: Class Size metric (CS) :


The overall size of a class can be determined by using the following measurements:
1. Total number of methods that are encapsulated within the class
2. Total number of attributes that are encapsulated within the class

Metric 2: Number of Operations (methods) Overridden by a subclass (NOO) :

There are instances when a subclass replaces a method, inherited from its super class with
a specialized version, for its own use. This type of replacement is called overriding. A large
value of NOO generally indicates a design complexity problem, which makes a class difficult
to test and modify.

Metric 3: Number of Operations (methods) Added by a subclass (NOA) :

Subclasses are specialized by adding methods and attributes. When the value of NOA increases, the subclass drifts away from the abstraction implied by the super class. This indicates a low quality of design.

Metric 4: Specialization Index (SI) :

SI provides a rough indication of the degree of specialization for each of the subclasses in an OO software system. Specialization can be achieved by either adding or overriding methods. It is computed as:

SI = (NOO x Level) / M total

Where: NOO: number of operations overridden by a subclass
Level: level in the class hierarchy at which the class resides
M total: total number of methods of a class
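Assuming the usual formulation SI = (NOO x Level) / M total, a quick computation with illustrative numbers:

```python
def specialization_index(noo, level, m_total):
    # SI = (NOO x Level) / M_total
    return (noo * level) / m_total

# a subclass at level 2 that overrides 3 of its 12 methods
si = specialization_index(noo=3, level=2, m_total=12)
```

Here SI = (3 x 2) / 12 = 0.5; larger values indicate a more heavily specialized subclass.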

XIII.MOOD METRICS :

The MOOD metrics consist of the following software quality indicators: Attribute
Hiding Factor (AHF), Method Hiding Factor (MHF), Method Inheritance Factor (MIF),
Attribute Inheritance Factor (AIF), Coupling Factor (COF), and Polymorphism Factor (POF).

MOOD Metrics Suite

The MOOD metric set is used to measure the properties of a system whose design follows the concepts of object-oriented design: encapsulation, coupling, inheritance, information hiding, and polymorphism.

The set contains six main metrics to measure the design of the system.

1. Method Hiding Factor (MHF) and Attribute Hiding Factor (AHF) :

To measure how well the attributes and methods of a class are encapsulated, the method and attribute hiding factors are used. MHF and AHF show, on average, how well the members of the classes are hidden in the system.

MHF = 1 - Methods Visible

AHF = 1 - Attributes Visible

Methods Visible = SUM(MV) / (C - 1) / Number of Methods

MV = number of other classes where the method is visible

Attributes Visible = SUM(AV) / (C - 1) / Number of Attributes

AV = number of other classes where the attribute is visible

C = number of classes
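Plugging illustrative numbers into the MHF formula above, for a hypothetical system of four classes and five methods:

```python
# MV is, for each method, the number of OTHER classes from which it is
# visible: C - 1 = 3 for a fully public method, 0 for a private one.
C = 4
mv = [3, 3, 0, 0, 0]          # two public methods, three private ones

methods_visible = sum(mv) / (C - 1) / len(mv)   # 6 / 3 / 5 = 0.4
mhf = 1 - methods_visible                       # MHF = 0.6
```

An MHF of 0.6 means that, on average, 60% of the method surface is hidden; making more methods private pushes MHF toward 1.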

2. Method Inheritance Factor (MIF) and Attribute Inheritance


Factor(AIF) :

In Inheritance, the child or subclass inherits the properties (Attribute and Methods) of
the parent or superclass. The extent to which these methods and attributes are inherited is
defined by Method Inheritance Factor(MIF) and Attribute Inheritance Factor(AIF).

MIF = Inherited Methods / Total Methods Available in Classes

AIF = Inherited Attributes / Total attributes in Classes

A child class that inherits a large number of methods and attributes from its parent
class will have high MIF and AIF values.
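The MIF ratio can be computed from per-class counts. This is a sketch with an invented input format: the dictionary maps each class to a pair (inherited methods, newly defined methods).

```python
def method_inheritance_factor(classes):
    """MIF = inherited methods / total methods available in all classes.
    `classes` maps class name -> (inherited_count, newly_defined_count)."""
    inherited = sum(i for i, _ in classes.values())
    total = sum(i + d for i, d in classes.values())
    return inherited / total if total else 0.0

# Parent defines 4 methods; Child inherits those 4 and adds 2 new ones.
mif = method_inheritance_factor({"Parent": (0, 4), "Child": (4, 2)})  # 4/10 = 0.4
```

AIF follows the same shape with attribute counts instead of method counts.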

3. Polymorphism Factor (PF / POF) :

Polymorphism Factor (PF) measures the degree or extent to which methods of the parent
or superclass are overridden by its child classes.

In polymorphism, the child class can implement the method in a different way. The same
method can be implemented in different ways in the child and parent class.

It is defined as the ratio of the actual number of method overrides to the maximum possible
number of method overrides. A high PF can help keep the code clean, clear, and of high
quality, but it also increases the complexity of the system.

The polymorphism factor is associated with method overriding; it is not associated with
method overloading.
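The ratio can be sketched directly. One common way to bound the maximum, assumed here, is the number of a parent's methods times the number of its descendants; the helper name is ours.

```python
def polymorphism_factor(total_overrides: int, max_overrides: int) -> float:
    """PF = actual method overrides / maximum possible method overrides."""
    if max_overrides == 0:
        return 0.0
    return total_overrides / max_overrides

# A parent class with 5 methods and 2 subclasses could be overridden at
# most 5 * 2 = 10 times; its subclasses actually override 4 methods:
pf = polymorphism_factor(4, 5 * 2)  # 0.4
```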

4. Coupling Factor (CF / COF) :

If two or more classes are related to each other by means of inheritance, aggregation,
or association, then those classes are said to be coupled.

Many functionalities of the system are achieved through coupled classes; it is not
practical for all classes to be fully independent. A high value of CF shows that the
classes of the system are highly inter-connected and inter-dependent.

Coupling Factor (CF) measures the actual coupling between different classes. It is the
ratio of the actual coupling between classes to the maximum possible coupling in the
system. If a class can access the methods and attributes of a second class, then the
first class is said to be coupled with the second class.
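CF can be sketched by counting directed client/supplier pairs. This example assumes the maximum possible coupling is C × (C − 1) ordered pairs; the function and class names are illustrative.

```python
def coupling_factor(couplings, num_classes):
    """CF = actual couplings / maximum possible couplings, taking the
    maximum as C * (C - 1) ordered class pairs."""
    max_possible = num_classes * (num_classes - 1)
    return len(couplings) / max_possible if max_possible else 0.0

# 4 classes where A uses B, A uses C, and B uses D:
cf = coupling_factor({("A", "B"), ("A", "C"), ("B", "D")}, num_classes=4)  # 3/12 = 0.25
```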

MOOD2 METRICS :
The MOOD2 metrics set was later introduced to enhance the original MOOD metrics. The
new suite retains the original six metrics and adds new ones: OHEF, AHEF, IIF, and PPF.

1. Operation Hiding Effectiveness Factor (OHEF) and Attribute Hiding Effectiveness
Factor (AHEF) :

OHEF and AHEF are improved versions of MHF and AHF. MHF measures the level of method
hiding in the system, whereas OHEF measures how good that hiding is and how successfully
it is implemented, i.e. how well the declared scope of a class's operations matches their
actual use.

OHEF = Classes that do access operations / Classes that can access Operations

AHEF = Classes that do access attributes / Classes that can access attributes

Similarly, AHF measures the level of attribute hiding in the system, whereas AHEF
measures how effective and successful that attribute hiding is. These two factors are
widely used for systems built on the object-oriented paradigm.

2. Internal Inheritance Factor (IIF) :

Internal Inheritance Factor (IIF) is an improved version of the Method/Attribute
Inheritance Factors (MIF/AIF). IIF measures the internal inheritance of a software
system, that is, inheritance in which the parent classes are themselves part of the
given system.

3. Parametric Polymorphism Factor (PPF) :

It is an improved version of PF. It is defined as the ratio of parameterized classes in
the system to the total number of classes in the system. A parameterized class can be
thought of as a generic class.
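A parameterized (generic) class and the PPF ratio can be sketched together. The `Stack` class and helper below are illustrative examples, not from the text.

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """A parameterized (generic) class: one definition serves any item type."""
    def __init__(self) -> None:
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

def parametric_polymorphism_factor(parameterized: int, total: int) -> float:
    """PPF = parameterized classes / total classes in the system."""
    return parameterized / total if total else 0.0

# If 3 of a system's 12 classes are generic like Stack:
ppf = parametric_polymorphism_factor(3, 12)  # 0.25
```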

XIV. CODE REFACTORING :


Definition : Refactoring consists of improving the internal structure of an existing program's
source code, while preserving its external behavior. The noun “refactoring” refers to one
particular behavior-preserving transformation, such as “Extract Method” or “Introduce
Parameter.”

Refactoring is the process of restructuring code, while not changing its original
functionality. The goal of refactoring is to improve internal code by making many small
changes without altering the code's external behavior.

Computer programmers and software developers refactor code to improve the design,
structure and implementation of software. Refactoring improves code readability and reduces
complexities. Refactoring can also help software developers find bugs or
vulnerabilities hidden in their software.

The refactoring process features many small changes to a program's source code.
One approach to refactoring, for example, is to improve the structure of source code at one
point and then extend the same changes systematically to all applicable references
throughout the program. The thought process is that all the small, behavior-preserving
changes to a body of code have a cumulative effect, improving its structure while
leaving the software's original behavior unchanged.
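One small, behavior-preserving transformation named above, “Introduce Parameter,” can be illustrated with a hypothetical before/after pair (the function and rate are invented for the example): the refactored version returns exactly the same results for existing callers.

```python
# Before: the tax rate is hard-coded inside the function.
def total_before(amount):
    return amount + amount * 0.08

# After "Introduce Parameter": the rate becomes a parameter whose default
# value preserves the original external behavior for all existing callers.
def total_after(amount, tax_rate=0.08):
    return amount + amount * tax_rate
```

New callers can now pass a different rate, but nothing observable changed for old ones.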

What is the purpose of refactoring?


Refactoring improves code by making it:

• More efficient by addressing dependencies and complexities.

• More maintainable or reusable by increasing efficiency and readability.

• Cleaner so it is easier to read and understand.

• Easier for software developers to find and fix bugs or vulnerabilities in the code.

When should code be refactored ?


Refactoring can be performed after a product has been deployed, before adding updates and
new features to existing code, or as a part of day-to-day programming.

Before adding updates or new features, developers should check whether the existing code
is clean enough to build on. If it is not, they can refactor the existing code first.
Once the new code is added, the developer can refactor the same code again to make it
clearer.

What are the benefits of refactoring?


Refactoring can provide the following benefits:

• Makes the code easier to understand and read because the goal is to simplify code
and reduce complexities.

• Improves maintainability and makes it easier to spot bugs or make further changes.

• Encourages a more in-depth understanding of code. Developers have to think further
about how their code will mix with code already in the code base.

• Focus remains only on functionality. Not changing the code's original functionality
ensures the original project does not lose scope.

Techniques to perform code refactoring :


Organizations can use different refactoring techniques in different instances. Some examples
include:

• Red, green. This widely used refactoring method in Agile development involves three
steps. First, the developers determine what needs to be developed; second, they get
their project to pass testing; and third, they refactor that code to make improvements.

• Inline. This technique focuses on simplifying code by eliminating unnecessary
elements.

• Moving features between objects. This technique creates new classes, while
moving functionality between new and old data classes.

• Extract. This technique breaks down code into smaller pieces and then moves those
pieces to a different method. Fragmented code is replaced with a call to the new
method.

• Refactoring by abstraction. This technique reduces the amount of duplicate code.
This is done when there is a large amount of code to be refactored.

• Compose. This technique streamlines code to reduce duplication, using multiple
refactoring methods, including extraction and inline.
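The Extract technique above can be sketched with a small hypothetical example: the discount fragment is moved into its own named function and the original code is replaced with a call to it, leaving behavior unchanged.

```python
# Before extraction: the discount calculation is buried inside the function.
def invoice_total_before(quantity, unit_price):
    subtotal = quantity * unit_price
    if quantity > 10:
        subtotal = subtotal - subtotal * 0.05
    return subtotal

# After "Extract Method": the fragment becomes its own named function and
# the original code calls it instead.
def apply_bulk_discount(subtotal, quantity):
    if quantity > 10:
        return subtotal - subtotal * 0.05
    return subtotal

def invoice_total_after(quantity, unit_price):
    subtotal = quantity * unit_price
    return apply_bulk_discount(subtotal, quantity)
```

The extracted function now has a descriptive name and can be reused and tested on its own.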

Code refactoring best practices :


Best practices to follow for refactoring include:

• Plan for refactoring. It may be difficult to make time for the time-consuming practice
otherwise.

• Refactor first. Developers should do this before adding updates or new features to
existing code to reduce technical debt.

• Refactor in small steps. This gives developers feedback early in the process so they
can find possible bugs, as well as include business requests.

• Set clear objectives. Developers should determine the project scope and goals early
in the code refactoring process. This helps to avoid delays and extra work, as
refactoring is meant to be a form of housekeeping, not an opportunity to changes
functions or features.

• Test often. This helps to ensure refactored changes do not introduce new bugs.

• Automate wherever possible. Automation tools make refactoring easier and faster,
thus, improving efficiency.

• Fix software defects separately. Refactoring is not meant to address software flaws.
Troubleshooting and debugging should be done separately.

• Understand the code. Review the code to understand its processes, methods,
objects, variables and other elements.

• Refactor, patch and update regularly. Refactoring generates the most return on
investment when it can address a significant issue without taking too much time and
effort.
• Focus on code deduplication. Duplication adds complexities to code, expanding the
software's footprint and wasting system resources.

