
Object-oriented software engineering (OOSE) is an object modeling language and methodology. OOSE was developed by Ivar Jacobson in 1992 while at Objectory AB. It is the first object-oriented design methodology to employ use cases to drive software design. It also uses other design products similar to those used by OMT. It was documented in the 1992 book Object-Oriented Software Engineering: A Use Case Driven Approach (ISBN 0-201-54435-0). The tool Objectory was created by the team at Objectory AB to implement the OOSE methodology. After success in the marketplace, other tool vendors also supported OOSE. After Rational Software bought Objectory AB, the OOSE notation, methodology, and tools were superseded.

As one of the primary sources of the Unified Modeling Language (UML), concepts and notation from OOSE have been incorporated into UML. The methodology part of OOSE has since evolved into the Rational Unified Process (RUP). The OOSE tools have been replaced by tools supporting UML and RUP.

OOSE has been largely replaced by the UML notation and by the RUP methodology.

Software people seem to have a love-hate relationship with metrics. On one hand, they despise and distrust anything that sounds or looks like a measurement, and they are quick to point out the "flaws" in the arguments of anyone who talks about measuring software products, software processes, and (especially) software people. On the other hand, these same people seem to have no problem identifying which programming language is best, the stupid things that managers do to "ruin" projects, and whose methodology works in what situations.

What Are Software Engineering Metrics?


Metrics are units of measurement. The term "metrics" is also frequently used to mean a set of specific measurements taken on a particular item or process. Software engineering metrics are units of measurement that are used to characterize:

- software engineering products, e.g., designs, source code, and test cases,
- software engineering processes, e.g., the activities of analysis, design, and coding, and
- software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.
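As a loose illustration of the three targets of measurement listed above (all names here are invented for this sketch, not taken from the original text), a recorded metric observation might pair a measured value and unit with the product, process, or person being characterized:

```cpp
#include <string>

// Illustrative sketch only: one recorded metric observation.
enum class Target { Product, Process, Person };

struct MetricSample {
    Target      target;  // what is being characterized
    std::string name;    // e.g., "lines of code per method"
    double      value;   // the measurement itself
    std::string unit;    // e.g., "lines"
};

// A product measurement, expressed as one sample.
MetricSample sample{Target::Product, "lines of code per method", 3.0, "lines"};
```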

If used properly, software engineering metrics can allow us to:

- quantitatively define success and failure, and/or the degree of success or failure, for a product, a process, or a person,
- identify and quantify improvement, lack of improvement, or degradation in our products, processes, and people,
- make meaningful and useful managerial and technical decisions,
- identify trends, and
- make quantified and meaningful estimates.

Over the years, I have noticed some common trends among software engineering metrics. Here are some observations:

- A single software engineering metric in isolation is seldom useful. However, for a particular process, product, or person, 3 to 5 well-chosen metrics seems to be a practical upper limit, i.e., additional metrics (above 5) do not usually provide a significant return on investment.
- Although multiple metrics must be gathered, the most useful set of metrics for a given person, process, or product may not be known ahead of time. This implies that, when we first begin to study some aspect of software engineering, or a specific software project, we will probably have to use a large number (e.g., 20 to 30, or more) of different metrics. Later, analysis should point out the most useful metrics.
- Metrics are almost always interrelated. Specifically, attempts to influence one metric usually have an impact on other metrics for the same person, process, or product.
- To be useful, metrics must be gathered systematically and regularly, preferably in an automated manner.
- Metrics must be correlated with reality. This correlation must take place before meaningful decisions, based on the metrics, can be made.
- Faulty analysis (statistical or otherwise) of metrics can render metrics useless, or even harmful.
- To make meaningful metrics-based comparisons, both the similarities and dissimilarities of the people, processes, or products being compared must be known.
- Those gathering metrics must be aware of the items that may influence the metrics they are gathering. For example, there are the "terrible H's," i.e., the Heisenberg effect and the Hawthorne effect.
- Metrics can be harmful. More properly, metrics can be misused.

Object-oriented software engineering metrics are units of measurement that are used to characterize:

- object-oriented software engineering products, e.g., designs, source code, and test cases,
- object-oriented software engineering processes, e.g., the activities of analysis, design, and coding, and
- object-oriented software engineering people, e.g., the efficiency of an individual tester, or the productivity of an individual designer.

Why Are Object-Oriented Software Engineering Metrics Different?


OOSE metrics are different because of:

localization, encapsulation, information hiding, inheritance, and object abstraction techniques.

Localization is the process of placing items in close physical proximity to each other:

- Functional decomposition processes localize information around functions.
- Data-driven approaches localize information around data.
- Object-oriented approaches localize information around objects.

In most conventional software (e.g., software created using functional decomposition), localization is based on functionality. Therefore:

- A great deal of metrics gathering has traditionally focused largely on functions and functionality.
- Units of software were functional in nature; thus, metrics focusing on component interrelationships emphasized functional interrelationships, e.g., module coupling.

In object-oriented software, however, localization is based on objects. This means:

- Although we may speak of the functionality provided by an object, at least some of our metrics identification and gathering effort (and possibly a great deal of the effort) must recognize the "object" as the basic unit of software.
- Within systems of objects, the mapping between functionality and objects is not one-to-one. For example, one function may involve several objects, and one object may provide many functions (see the sketch below).
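To make the many-to-one and one-to-many relationships above concrete, here is a minimal C++ sketch (the Account and transfer names are invented for illustration): one free function spans several objects, while one object offers several capabilities.

```cpp
#include <iostream>

// One object providing many functions: Account offers several capabilities.
class Account {
public:
    explicit Account(double balance) : balance_(balance) {}
    void   deposit(double amount)  { balance_ += amount; }
    void   withdraw(double amount) { balance_ -= amount; }
    double balance() const         { return balance_; }
private:
    double balance_;  // encapsulated state
};

// One function involving several objects: transfer touches two Accounts.
void transfer(Account& from, Account& to, double amount) {
    from.withdraw(amount);
    to.deposit(amount);
}

int main() {
    Account a(100.0), b(50.0);
    transfer(a, b, 25.0);
    std::cout << a.balance() << " " << b.balance() << '\n';  // prints: 75 75
}
```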

Encapsulation is the packaging (or binding together) of a collection of items:


- Low-level examples of encapsulation include records and arrays.
- Subprograms (e.g., procedures, functions, subroutines, and paragraphs) are mid-level mechanisms for encapsulation.
- In object-oriented (and object-based) programming languages, there are still larger encapsulating mechanisms, e.g., C++'s classes, Ada's packages, and Modula-3's modules.

Objects encapsulate:

- knowledge of state, whether statically maintained, calculated upon demand, or otherwise,
- advertised capabilities (sometimes called operations, method interfaces, or method selectors), and the corresponding algorithms used to accomplish these capabilities (often referred to simply as methods),
- [in the case of composite objects] other objects,
- [optionally] exceptions,
- [optionally] constants, and
- [most importantly] concepts.
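As a hedged illustration of the list above (the class and member names are invented for this sketch), a single C++ class can encapsulate state, an advertised capability with its algorithm, a constant, and an exception in one unit:

```cpp
#include <stdexcept>

// One encapsulated unit: state, capabilities, a constant, and an exception.
class Thermostat {
public:
    // [optionally] an exception encapsulated with the class
    class RangeError : public std::runtime_error {
    public:
        RangeError() : std::runtime_error("setting out of range") {}
    };

    // [optionally] a constant
    static constexpr double kMaxSetting = 35.0;

    // an advertised capability (interface) plus its algorithm (method)
    void set(double degrees) {
        if (degrees < 0.0 || degrees > kMaxSetting) throw RangeError();
        setting_ = degrees;
    }
    double setting() const { return setting_; }

private:
    double setting_ = 20.0;  // knowledge of state, statically maintained
};
```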

In many object-oriented programming languages, encapsulation of objects (e.g., classes and their instances) is syntactically and semantically supported by the language. In others, the concept of encapsulation is supported conceptually, but not physically. Encapsulation has two major impacts on metrics:

- the basic unit will no longer be the subprogram, but rather the object, and
- we will have to modify our thinking on characterizing and estimating systems.

Information hiding is the suppression (or hiding) of details.


- The general idea is that we show only that information which is necessary to accomplish our immediate goals.
- There are degrees of information hiding, ranging from partially restricted visibility to total invisibility.
- Encapsulation and information hiding are not the same thing; e.g., an item can be encapsulated but may still be totally visible.
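A small C++ sketch (invented names) can show these degrees of visibility: one class hides members to different degrees, while a plain struct is encapsulated yet totally visible.

```cpp
// Degrees of information hiding within one encapsulated unit.
class Sensor {
public:
    double read() const { return calibrate(raw_); }        // fully visible interface
protected:
    double calibrate(double v) const { return v * gain_; } // partially restricted:
                                                            // visible to subclasses only
private:
    double raw_  = 0.0;   // totally invisible outside the class
    double gain_ = 1.0;
};

// Encapsulated but totally visible: packaging without information hiding.
struct Point2D { double x, y; };
```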

Information hiding plays a direct role in such metrics as object coupling and the degree of information hiding itself.

Inheritance is a mechanism whereby one object acquires characteristics from one, or more, other objects.

- Some object-oriented languages support only single inheritance, i.e., an object may acquire characteristics directly from only one other object.
- Some object-oriented languages support multiple inheritance, i.e., an object may acquire characteristics directly from two, or more, different objects.
- The types of characteristics which may be inherited, and the specific semantics of inheritance, vary from language to language.
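C++, for instance, supports both forms; the class names below are invented for illustration:

```cpp
// Single inheritance: Dog acquires characteristics from one other class.
class Animal { public: void breathe() {} };
class Dog : public Animal {};

// Multiple inheritance: AmphibiousCar acquires characteristics from two classes.
class Car  { public: void drive() {} };
class Boat { public: void sail()  {} };
class AmphibiousCar : public Car, public Boat {};
```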

Many object-oriented software engineering metrics are based on inheritance, e.g.:


- number of children (number of immediate specializations),
- number of parents (number of immediate generalizations), and
- class hierarchy nesting level (depth of a class in an inheritance hierarchy).
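As a sketch, a small hypothetical C++ hierarchy can be annotated with these three metric values:

```cpp
// A hypothetical hierarchy annotated with inheritance-based metrics.
class Vehicle {};                // children: 2, parents: 0, nesting level: 0
class Car  : public Vehicle {};  // children: 1, parents: 1, nesting level: 1
class Boat : public Vehicle {};  // children: 0, parents: 1, nesting level: 1
class Taxi : public Car {};      // children: 0, parents: 1, nesting level: 2
```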

Abstraction is a mechanism for focusing on the important (or essential) details of a concept or item, while ignoring the inessential details.

Abstraction is a relative concept. As we move to higher levels of abstraction we ignore more and more details, i.e., we provide a more general view of a concept or item. As we move to lower levels of abstraction, we introduce more details, i.e., we provide a more specific view of a concept or item.

There are different types of abstraction, e.g., functional, data, process, and object abstraction. In object abstraction, we treat objects as high-level entities (i.e., as black boxes).
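One way to express object abstraction in C++ is an abstract interface whose clients see only the essential capability; the Shape and Circle names here are illustrative, not from the original text.

```cpp
#include <iostream>

// Object abstraction: clients treat a Shape as a black box with an area.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;  // the essential detail: what, not how
};

class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265358979 * r_ * r_; }
private:
    double r_;  // inessential to clients; hidden at this level of abstraction
};

void report(const Shape& s) {  // works with any Shape; details are ignored
    std::cout << s.area() << '\n';
}
```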

There are three commonly used (and different) views on the definition of "class":

1. A class is a pattern, template, or blueprint for a category of structurally identical items. The items created using the class are called instances. This is often referred to as the "class as a 'cookie cutter'" view.
2. A class is a thing that consists of both a pattern and a mechanism for creating items based on that pattern. This is the "class as an 'instance factory'" view. Instances are the individual items that are "manufactured" (created) by using the class's creation mechanism.
3. A class is the set of all items created using a specific pattern, i.e., the class is the set of all instances of that pattern.
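All three views can be read off a minimal C++ fragment (invented names): Point is the pattern, its constructor is the creation mechanism, and p and q are two of its instances.

```cpp
class Point {                              // the pattern ("cookie cutter" view)
public:
    Point(int x, int y) : x_(x), y_(y) {}  // creation mechanism ("instance factory" view)
private:
    int x_, y_;
};

Point p(0, 0);  // instances: structurally identical items
Point q(3, 4);  // the set {p, q, ...} is the "class as a set" view
```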

A metaclass is a class whose instances are themselves classes. Some object-oriented programming languages directly support user-defined metaclasses. In effect, metaclasses may be viewed as classes for classes, i.e., to create an instance, we supply some specific parameters to the metaclass, and these are used to create a class. A metaclass is an abstraction of its instances.

A parameterized class is a class some or all of whose elements may be parameterized. New (directly usable) classes may be generated by instantiating a parameterized class with its required parameters. Templates in C++ and generic classes in Eiffel are examples of parameterized classes. Some people differentiate metaclasses and parameterized classes by noting that metaclasses (usually) have run-time behavior, whereas parameterized classes (usually) do not.

Several object-oriented software engineering metrics are related to the class-instance relationship, e.g.:

- number of instances per class per application,
- number of parameterized classes per application, and
- ratio of parameterized classes to non-parameterized classes.
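A C++ template illustrates the parameterized-class idea; Stack is an invented example, and each distinct instantiation would count toward the parameterized-class metrics listed above.

```cpp
#include <vector>

// A parameterized class: Stack is a pattern parameterized by element type T.
template <typename T>
class Stack {
public:
    void push(const T& value) { items_.push_back(value); }
    void pop()                { items_.pop_back(); }
    const T& top() const      { return items_.back(); }
    bool empty() const        { return items_.empty(); }
private:
    std::vector<T> items_;
};

// Instantiation yields directly usable (non-parameterized) classes.
Stack<int>    ints;     // one generated class
Stack<double> doubles;  // another generated class
```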

Case Studies of Object-Oriented Software Engineering Metrics


We will break our look at case studies into the following areas:

- anecdotal metrics information,
- the General Electric Report,
- Chidamber and Kemerer's research,
- Lorenz's research, and
- my own experience.

Anecdotal object-oriented software engineering metrics information includes:

- It takes the average software engineer about 6 months to become comfortable with object-oriented technology.
- The average number of lines of code per method is small, e.g., typically 1 to 3 lines of code, and seldom more than 10 lines of code.
- The learning time for Smalltalk seems to be on the order of two months for an experienced programmer.
- Once a programmer understands a given object-oriented programming language, he or she should plan on taking one day per class to (eventually) understand all the classes in the class library.
- Object-oriented technology yields higher productivity, e.g., fewer software engineers accomplishing more work when compared to traditional teams.

Deborah Boehm-Davis and Lyle Ross conducted a study for General Electric (in 1984) comparing several development approaches for Ada software (i.e., Structured Analysis/Structured Design, Object-Oriented Design (Booch), and Jackson System Development). They found that the object-oriented solutions, when compared to the other solutions:

- were simpler (using McCabe's and Halstead's metrics),
- were smaller (using lines of code as a metric),
- appeared to be better suited to real-time applications, and
- took less time to develop.

Shyam Chidamber and Chris Kemerer have developed a small metrics suite for object-oriented designs. The six metrics they have identified are:

- weighted methods per class: This focuses on the complexity and number of methods within a class.
- depth of inheritance tree: This is a measure of how many layers of inheritance make up a given class hierarchy.
- number of children: This is the number of immediate specializations for a given class.
- coupling between object classes: This is a count of the number of other classes to which a given class is coupled.
- response for a class: This is the size of the set of methods that can potentially be executed in response to a message received by an object.
- lack of cohesion in methods: This is a measure of the number of different methods within a class that reference a given instance variable.
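As a hedged sketch of how two of these metrics might be computed (this is not Chidamber and Kemerer's tooling; the class-to-parent table is an assumed representation for illustration), depth of inheritance tree and number of children can be derived as follows:

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy representation: each class maps to its single parent ("" marks a root).
// Illustrative only; not Chidamber and Kemerer's actual tooling.
std::map<std::string, std::string> parent = {
    {"Vehicle", ""}, {"Car", "Vehicle"}, {"Boat", "Vehicle"}, {"Taxi", "Car"}};

// Depth of inheritance tree: layers between a class and the root.
int depthOfInheritanceTree(const std::string& cls) {
    int depth = 0;
    for (std::string c = cls; !parent.at(c).empty(); c = parent.at(c)) ++depth;
    return depth;
}

// Number of children: count of immediate specializations.
int numberOfChildren(const std::string& cls) {
    int n = 0;
    for (const auto& [child, p] : parent)
        if (p == cls) ++n;
    return n;
}

int main() {
    std::cout << depthOfInheritanceTree("Taxi") << '\n';  // prints: 2
    std::cout << numberOfChildren("Vehicle") << '\n';     // prints: 2
}
```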

Mark Lorenz and Jeff Kidd have published the results of their object-oriented software engineering metrics work. Some of the more interesting items in their empirical data include:

- The ratio of key (important) classes to support classes seems to be typically 1 to 2.5, and user-interface-intensive applications tend to have many more support classes.
- The average number of person-days to develop a class is much higher with C++ than it is with Smalltalk, e.g., 10 days per Smalltalk class and 20 to 30 days per C++ class.
- The higher the number of lines of code per method, the less object-oriented the code is.
- Smalltalk applications appear to have a much lower average number of instance variables per class when compared to C++ applications.

Edward V. Berard has worked on object-oriented projects since 1983. Some observations from these projects include:

- On a very large (over 1,000,000 lines of code) object-oriented project, all of the source code was run through a program that reported on the metrics for that software. Some observations:
  - Over 90% of all the methods in all of the classes had fewer than 40 lines of code (carriage returns).
  - Over 95% of the methods had a cyclomatic complexity of 4 or less.
- On a small project (about 25,000 lines of code), staffed by 3 software engineers each working half-time on the project:
  - The project was completed in six calendar months, i.e., a total of 9 software-engineering months were expended.
  - When the code was first compiled, 7 compilation errors were found, and no more errors were found before the code was delivered to the customer.
